CN106662750B - See-through computer display system - Google Patents


Info

Publication number
CN106662750B
Authority
CN
China
Prior art keywords
light
eye
image
hwc
view
Legal status: Active
Application number
CN201680002425.3A
Other languages
Chinese (zh)
Other versions
CN106662750A
Inventor
J.N.博尔德
N.L.尚斯
J.比特里
J.D.哈迪
R.F.奥斯特豪特
K.埃尔卡查
R.M.罗斯
Current Assignee
Mentor Acquisition One, LLC
Original Assignee
Osterhout Group Inc
Priority claimed from US 14/623,932 (US20160239985A1)
Priority claimed from US 14/635,390 (US20150205135A1)
Priority claimed from US 14/670,677 (US20160286203A1)
Priority claimed from US 14/741,943 (US20160018645A1)
Priority claimed from US 14/813,969 (US9494800B2)
Priority claimed from US 14/851,755 (US9651784B2)
Priority claimed from US 14/861,496 (US9753288B2)
Priority claimed from US 14/884,567 (US9836122B2)
Priority to CN202110186961.6A (CN113671703A)
Application filed by Osterhout Group Inc filed Critical Osterhout Group Inc
Publication of CN106662750A
Publication of CN106662750B
Application granted

Classifications

    • G02B27/017: Head-up displays; head mounted (G: Physics; G02: Optics; G02B: Optical elements, systems or apparatus)
    • G02B27/0172: Head mounted, characterised by optical features
    • G02B27/1006: Beam splitting or combining systems for splitting or combining different wavelengths
    • G02B27/142: Beam splitting or combining systems operating by reflection only; coating structures, e.g. thin films multilayers
    • G02B2027/0118: Head-up displays characterised by optical features; devices for improving the contrast of the display / brightness control visibility
    • G02B2027/014: Head-up displays characterised by optical features; information/image processing systems


Abstract

A head mounted display provides an improved, highly transmissive see-through view of the surrounding environment together with an overlaid high contrast display image. It comprises an upper optic having a first optical axis and a non-polarizing lower optic having a second optical axis. The upper optic comprises an emissive image source providing image light comprised of one or more narrow spectral bands of light, one or more lenses, and stray light traps. The non-polarizing lower optic comprises a planar beam splitter and a curved partial mirror angled with respect to the first and second optical axes, wherein one or more of the reflective surfaces are treated to reflect a majority of incident light within the one or more narrow spectral bands and to transmit a majority of incident visible light from the ambient environment.

Description

See-through computer display system
Priority declaration
This application claims priority to U.S. non-provisional application No. 14/884,567 (ODGP-3017-U01), filed on October 15, 2015.
The present application claims priority from the following U.S. patent application, which is incorporated herein by reference in its entirety: U.S. patent application No. 14/635,390 (ODGP-2014-U01), filed on March 2, 2015.
This application claims priority to U.S. non-provisional application No. 14/670,677 (ODGP-2015-U01), entitled "See-Through Computer Display Systems," filed on March 27, 2015.
This application claims priority to U.S. application No. 14/741,943 (ODGP-2016-U01), filed on June 17, 2015.
This application claims priority benefits to U.S. non-provisional application No. 14/813,969 (ODGP-2017-U01), entitled "SEE-THROUGH COMPUTER DISPLAY SYSTEMS," filed on July 30, 2015.
This application claims priority benefits to U.S. non-provisional application No. 14/851,755 (ODGP-2018-U01), entitled "SEE-THROUGH COMPUTER DISPLAY SYSTEMS," filed on September 11, 2015.
This application claims priority rights to U.S. non-provisional application No. 14/861,496 (ODGP-2019-U01), entitled "SEE-THROUGH COMPUTER DISPLAY SYSTEMS," filed on September 22, 2015.
This application claims priority to U.S. non-provisional application No. 14/623,932 (ODGP-3016-U01), filed on February 17, 2015.
All of the above applications are incorporated herein by reference in their entirety.
Technical Field
The invention relates to a see-through computer display system.
Background
Head Mounted Displays (HMDs) and in particular HMDs that provide a see-through view of the environment are valuable instruments. Presenting content in a see-through display can be a complex operation when attempting to ensure that the user experience is optimized. There is a need for improved systems and methods for presenting content in a see-through display to improve the user experience.
Disclosure of Invention
Aspects of the present invention relate to methods and systems for see-through computer display systems with the ability to convert from augmented reality (i.e., high see-through transmission of the display) to virtual reality (i.e., low see-through or no see-through transmission of the display).
In an aspect, a head-mounted display may include a display panel sized and positioned to create a field of view to present digital content to the eyes of a user, and a processor adapted to present the digital content to the display panel such that the digital content is presented only in a portion of the field of view that is in the middle of the field of view such that horizontally opposing edges of the field of view are blank areas. The processor may be further adapted to shift the digital content into one of the blank regions to adjust a convergence distance of the digital content and thereby change a perceived distance from the user to the digital content. The digital content may include an augmented reality object. The perceived distance may be within reach of the user. The convergence distance may be adjusted corresponding to the type of digital content being displayed or the use case with which the augmented reality object is associated. The convergence may be measured by an eye imaging system of the head-mounted display, wherein the eye imaging system images a frontal view of the user's eye.
In an aspect, a head-mounted display may include a display panel sized and positioned to create a field of view to present digital content to the eyes of a user, and a processor adapted to present the digital content to the display panel such that the digital content is presented only in a portion of the field of view that is in the middle of the field of view such that horizontally opposing edges of the field of view are blank areas. The processor may be further adapted to shift the digital content into one of the blank regions to adjust a position of the digital content based on a focus distance of the digital content.
In an aspect, a head-mounted display may include a display panel sized and positioned to create a field of view to present digital content to the eyes of a user, and a processor adapted to present the digital content to the display panel such that the digital content is presented only in a portion of the field of view that is in the middle of the field of view such that horizontally opposing edges of the field of view are blank areas. The processor may be further adapted to shift the digital content into one of the blank regions to adjust a position of the digital content based on an indication that the user is looking at an edge of the digital content. The indication that the user is looking at the edge of the digital content may be based on an eye image captured by a camera in the head mounted display. The indication that the user is looking at the edge of the digital content may be based on an indication that the user turned the user's head soon after the user turned the user's eyes.
In an aspect, a head-mounted display may include a display panel sized and positioned to create a field of view to present digital content to an eye of a user, and a processor adapted to present the digital content to the display panel such that the digital content is presented in only a portion of the field of view, the portion being in the middle of the field of view such that horizontally opposing edges of the field of view are blank regions, wherein each blank region comprises approximately 10% or more of the lateral extent of the field of view. The processor may be further adapted to shift the digital content into one of the blank regions to adjust a position of the digital content. The total amount of blank area in the field of view, comprising the combined left and right portions of the field of view, may remain constant while the left and right portions are varied to position the digital content within the field of view. The digital content may be positioned to adjust a convergence distance associated with the digital content. The digital content may be positioned to adjust for an inter-pupillary distance associated with the digital content.
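The relationship between a lateral content shift and the resulting convergence distance follows from simple binocular geometry. The sketch below is a minimal illustration of that relationship and is not part of the disclosure; the interpupillary distance, horizontal field of view, and panel resolution are assumed values chosen only for illustration.

```python
import math

def per_eye_shift_px(convergence_m: float, ipd_m: float = 0.064,
                     hfov_deg: float = 30.0, width_px: int = 1280) -> float:
    """Inward shift of each eye's image (in pixels) that places content at the
    requested convergence distance, using a pinhole model of the display."""
    # Each eye rotates inward by half of the total vergence angle.
    half_vergence = math.atan((ipd_m / 2.0) / convergence_m)
    pixels_per_radian = width_px / math.radians(hfov_deg)
    return half_vergence * pixels_per_radian

# Content within reach (0.5 m) needs a much larger shift than distant content.
print(round(per_eye_shift_px(0.5)))  # -> ~156 px per eye
print(round(per_eye_shift_px(5.0)))  # -> ~16 px per eye
```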
These and other systems, methods, objects, features and advantages of the present invention will be apparent to those skilled in the art from the following detailed description of the preferred embodiments and the accompanying drawings. All documents mentioned herein are hereby incorporated by reference in their entirety.
Drawings
Embodiments are described with reference to the following figures. The same numbers are used throughout to reference like features and components that are shown in the figures.
FIG. 1 illustrates a head-mounted computing system according to the principles of the present invention.
FIG. 2 illustrates a head-mounted computing system having an optical system in accordance with the principles of the present invention.
Figure 3a illustrates a large prior art optical arrangement.
Figure 3b illustrates an upper optical module according to the principles of the present invention.
FIG. 4 illustrates an upper optical module according to the principles of the present invention.
Figure 4a illustrates an upper optical module according to the principles of the present invention.
Figure 4b illustrates an upper optical module according to the principles of the present invention.
FIG. 5 illustrates an upper optical module according to the principles of the present invention.
Figure 5a illustrates an upper optical module according to the principles of the present invention.
Figure 5b illustrates an upper optical module and a dark trap in accordance with the principles of the present invention.
Figure 5c illustrates an upper optical module and a dark trap in accordance with the principles of the present invention.
Figure 5d illustrates the upper optical module and the dark trap according to the principles of the present invention.
Figure 5e illustrates the upper optical module and the dark trap according to the principles of the present invention.
Fig. 6 illustrates upper and lower optical modules according to the principles of the present invention.
Fig. 7 illustrates the angles of the combiner element according to the principles of the present invention.
Fig. 8 illustrates upper and lower optical modules according to the principles of the present invention.
Fig. 8a illustrates upper and lower optical modules according to the principles of the present invention.
Fig. 8b illustrates upper and lower optical modules according to the principles of the present invention.
Fig. 8c illustrates upper and lower optical modules according to the principles of the present invention.
Figure 9 illustrates an eye imaging system according to the principles of the present invention.
Fig. 10 illustrates a light source according to the principles of the present invention.
Fig. 10a illustrates a back lighting system according to the principles of the present invention.
Fig. 10b illustrates a back lighting system according to the principles of the present invention.
Fig. 11a to 11d illustrate a light source and a filter according to the principles of the present invention.
Fig. 12a to 12c illustrate a light source and quantum dot system according to the principles of the present invention.
Fig. 13a to 13c illustrate a peripheral illumination system according to the principles of the present invention.
Fig. 14a to 14h illustrate a light suppression system according to the principles of the present invention.
Fig. 15 illustrates an external user interface in accordance with the principles of the present invention.
Fig. 16a to 16c illustrate a distance control system according to the principles of the present invention.
Fig. 17a to 17c illustrate a force interpretation system according to the principles of the present invention.
Fig. 18a to 18c illustrate a user interface mode selection system according to the principles of the present invention.
Fig. 19 illustrates an interactive system according to the principles of the present invention.
Fig. 20 illustrates an external user interface in accordance with the principles of the present invention.
FIG. 21 illustrates an mD trajectory representation presented in accordance with the principles of the present invention.
FIG. 22 illustrates an mD trace representation presented in accordance with the principles of the present invention.
Fig. 23 illustrates the environment of an mD scan in accordance with the principles of the present invention.
FIG. 23a illustrates an mD trajectory representation presented in accordance with the principles of the present invention.
Fig. 24 illustrates a stray light suppression technique according to the principles of the present invention.
Fig. 25 illustrates a stray light suppression technique according to the principles of the present invention.
Fig. 26 illustrates a stray light suppression technique according to the principles of the present invention.
Fig. 27 illustrates a stray light suppression technique according to the principles of the present invention.
Fig. 28a to 28c illustrate DLP mirror angles.
Fig. 29 to 33 illustrate an eye imaging system according to the principles of the present invention.
Fig. 34 and 34a illustrate a structured eye illumination system according to the principles of the present invention.
Fig. 35 illustrates eye glints in the prediction of eye direction analysis in accordance with the principles of the present invention.
Fig. 36a illustrates eye characteristics that may be used for personal identification by analysis of the system in accordance with the principles of the present invention.
Fig. 36b illustrates a digital content representative reflection off the eye of a wearer that may be analyzed in accordance with the principles of the present invention.
Fig. 37 illustrates eye imaging along various virtual target lines and various focal planes in accordance with the principles of the present invention.
Fig. 38 illustrates content control with respect to eye movement based on eye imaging in accordance with the principles of the present invention.
Fig. 39 illustrates eye imaging and eye convergence according to the principles of the present invention.
FIG. 40 illustrates content location dependent sensor feedback in accordance with the principles of the present invention.
FIG. 41 illustrates content location dependent sensor feedback in accordance with the principles of the present invention.
FIG. 42 illustrates content location dependent sensor feedback in accordance with the principles of the present invention.
FIG. 43 illustrates content location dependent sensor feedback in accordance with the principles of the present invention.
FIG. 44 illustrates content location dependent sensor feedback in accordance with the principles of the present invention.
Fig. 45 illustrates various orientations (headings) over time in an example.
FIG. 46 illustrates content location dependent sensor feedback in accordance with the principles of the present invention.
FIG. 47 illustrates content location dependent sensor feedback in accordance with the principles of the present invention.
FIG. 48 illustrates content location dependent sensor feedback in accordance with the principles of the present invention.
FIG. 49 illustrates content location dependent sensor feedback in accordance with the principles of the present invention.
Fig. 50 illustrates light striking an eye in accordance with the principles of the present invention.
Fig. 51 illustrates a view of an eye according to the principles of the present invention.
Fig. 52a and 52b illustrate views of an eye having a structured light pattern in accordance with the principles of the present invention.
Fig. 53 illustrates an optics module according to the principles of the present invention.
Fig. 54 illustrates an optics module according to the principles of the present invention.
FIG. 55 shows a series of example spectra for various controlled substances measured using a form of infrared spectroscopy.
Fig. 56 shows an infrared absorption spectrum for glucose.
Fig. 56a, 56b, 56c, and 56d depict examples of eye blinking.
Figure 56e depicts a graph of the measured anterior and posterior sphere radii of a human eye.
Fig. 57 illustrates a scenario where a person walks with a HWC mounted on his head.
Fig. 58 illustrates a system for receiving, developing, and using movement orientation, field of view orientation, eye orientation, and/or duration information from the HWC(s).
FIG. 59 illustrates a rendering technique according to the principles of the present invention.
Fig. 60 illustrates a rendering technique according to the principles of the present invention.
FIG. 61 illustrates a rendering technique according to the principles of the present invention.
FIG. 62 illustrates a rendering technique according to the principles of the present invention.
FIG. 63 illustrates a rendering technique according to the principles of the present invention.
Fig. 64 illustrates a rendering technique according to the principles of the present invention.
Fig. 65 illustrates a rendering technique according to the principles of the present invention.
FIG. 66 illustrates a rendering technique according to the principles of the present invention.
Fig. 67 illustrates an optical configuration according to the principles of the present invention.
Fig. 68 illustrates an optical configuration according to the principles of the present invention.
Fig. 69 illustrates an optical configuration according to the principles of the present invention.
Fig. 70 illustrates an optical configuration according to the principles of the present invention.
Fig. 71 illustrates an optical configuration according to the principles of the present invention.
Fig. 72 illustrates an optical element according to the principles of the present invention.
Fig. 73 illustrates an optical element according to the principles of the present invention.
Fig. 74 illustrates an optical element according to the principles of the present invention.
Fig. 75 illustrates an optical element according to the principles of the present invention.
FIG. 76 illustrates optical elements in a see-through computer display according to the principles of the present invention.
Fig. 77 illustrates an optical element according to the principles of the present invention.
Fig. 78 illustrates an optical element according to the principles of the present invention.
Fig. 79a illustrates a schematic view of an upper optic according to the principles of the present invention.
Fig. 79 illustrates a schematic view of an upper optic according to the principles of the present invention.
Fig. 80 illustrates a stray light control technique according to the principles of the present invention.
Fig. 81a and 81b illustrate a display with gap and masking techniques according to the principles of the present invention.
FIG. 82 illustrates an upper module with a trimmed polarizer according to the principles of the invention.
FIG. 83 illustrates an optical system having stacked multiple polarizer films according to principles of the present invention.
Fig. 84a and 84b illustrate partially reflective layers according to the principles of the present invention.
FIG. 84c illustrates a stacked, multiple polarizer with complex bends according to the principles of the present invention.
FIG. 84d illustrates a multiple polarizer with a curved stack according to the principles of the present invention.
Fig. 85 illustrates an optical system adapted for a head mounted display according to the principles of the present invention.
Fig. 86 illustrates an optical system adapted for a head mounted display according to the principles of the present invention.
Fig. 87 illustrates an optical system adapted for a head mounted display according to the principles of the present invention.
Fig. 88 illustrates an optical system adapted for a head mounted display according to the principles of the present invention.
Fig. 89 illustrates an optical system adapted for a head mounted display according to the principles of the present invention.
Fig. 90 illustrates an optical system adapted for a head mounted display according to the principles of the present invention.
Fig. 91 illustrates an optical system according to the principles of the present invention.
Fig. 92 illustrates an optical system according to the principles of the present invention.
Fig. 93 illustrates an optical system according to the principles of the present invention.
Fig. 94 illustrates an optical system according to the principles of the present invention.
Fig. 95 illustrates an optical system according to the principles of the present invention.
Fig. 96 illustrates an optical system according to the principles of the present invention.
Fig. 97 illustrates an optical system according to the principles of the present invention.
Fig. 98 illustrates an optical system according to the principles of the present invention.
Fig. 99 illustrates an optical system according to the principles of the present invention.
Fig. 100 illustrates an optical system according to the principles of the present invention.
Fig. 101 illustrates an optical system according to the principles of the present invention.
Fig. 102 illustrates an optical system according to the principles of the present invention.
Fig. 103, 103a and 103b illustrate an optical system according to the principles of the present invention.
Fig. 104 illustrates an optical system according to the principles of the present invention.
Figure 105 illustrates a blocking optic in accordance with the principles of the present invention.
Fig. 106a, 106b and 106c illustrate a blocking optics system according to the principles of the present invention.
Fig. 107 illustrates a full color image according to the principles of the present invention.
Figs. 108A and 108B illustrate color splitting management in accordance with the principles of the present invention.
Fig. 109 illustrates a time stamp sequence in accordance with the principles of the present invention.
Fig. 110 illustrates a time stamp sequence in accordance with the principles of the present invention.
Fig. 111a and 111b illustrate sequentially displayed images according to the principles of the present invention.
Fig. 112 illustrates a see-through display with a rotating component in accordance with the principles of the present invention.
Fig. 113 illustrates an optics module with a twisted reflective surface in accordance with the principles of the present invention.
Fig. 114 illustrates PCB and see-through optics module locations within an eyewear form factor in accordance with the principles of the present invention.
Fig. 115 illustrates PCB and see-through optics module locations within an eyewear form factor in accordance with the principles of the present invention.
Fig. 116 illustrates PCB and see-through optics module locations within an eyewear form factor in accordance with the principles of the present invention.
Fig. 117 illustrates a user interface according to the principles of the present invention.
Fig. 118 illustrates a user interface in accordance with the principles of the present invention.
Fig. 119 illustrates a lens arrangement according to the principles of the present invention.
Fig. 120 and 121 illustrate an eye imaging system according to the principles of the present invention.
FIG. 122 illustrates an identification process in accordance with the principles of the invention.
Fig. 123 and 124 illustrate combiner components according to the principles of the present invention.
Fig. 125 shows a graph of the sensitivity of the human eye to luminance.
Fig. 126 is a graph showing the lightness (L) perceived by the human eye relative to measured luminance (illuminance).
FIG. 127 is an illustration of a see-through view of the surrounding environment with an outline showing a display field of view that is smaller than the typical see-through field of view.
FIG. 128 is an illustration of a captured image of a surrounding environment, which may be a much larger field of view than the displayed image, such that a cropped version of the captured image of the environment may be used for the alignment process.
Fig. 129a and 129b illustrate first and second target images having invisible marks.
Figs. 130 and 131 illustrate targets overlaid onto a see-through view, wherein the targets are moved by using eye tracking control, in accordance with the principles of the present invention.
Fig. 132 shows a diagram of a multi-fold optic for a head-mounted display including solid prisms, according to principles of the invention.
Fig. 133a, 133b, and 133c show illustrations of the steps associated with bonding a reflective plate to a solid prism, in accordance with the principles of the present invention.
Fig. 134 shows a diagram of multiple folding optics for a reflective image source with a backlight assembly positioned behind a reflective plate, in accordance with the principles of the present invention.
FIG. 135 shows a diagram of a prismatic film bonded to a reflective plate according to the principles of the present invention.
FIG. 135a shows a diagram of a multiple fold optic according to the principles of the present invention showing two cones of illumination light provided by a prismatic film.
Figs. 136, 137 and 138 show illustrations of different embodiments of additional optical elements included in a solid prism for imaging a user's eye, in accordance with the principles of the present invention.
Fig. 139 shows a diagram of an eye imaging system for a multi-fold optic in which the image source is a self-emitting display, in accordance with the principles of the present invention.
Fig. 140a and 140b are diagrams of an eye imaging system according to the principles of the present invention.
Fig. 141a and 141b are illustrations of a folded optic according to the principles of the present invention that includes a waveguide having an angled partially reflective surface and a reflective surface with optical power.
Fig. 142a and 142b are illustrations of folding optics for a head-mounted display including a waveguide having at least one holographic optical element and an image source, according to principles of the present invention.
FIG. 143 is an illustration of folding optics for a head mounted display in which illumination light is injected into a waveguide and redirected by a holographic optical element such that a user's eye is illuminated, in accordance with the principles of the present invention.
FIG. 144 shows a diagram of folding optics for a head mounted display in which a series of angled partial mirrors are included in a waveguide, according to principles of the invention.
FIG. 145 shows a diagram of a beam splitter-based optical module for a head mounted display according to the principles of the present invention.
Fig. 146 shows a diagram of an optical module for a head mounted display according to the principles of the present invention.
Fig. 146a shows a diagram of a side view of an optics module including a corrective lens element.
Fig. 147 shows a diagram of a left optics module and a right optics module connected together in a rack in accordance with the principles of the present invention.
Fig. 148 illustrates a left image and a right image provided at a nominal vergence distance within a left display field of view and a right display field of view in accordance with the principles of the present invention.
Fig. 149 illustrates how left and right images are shifted laterally toward each other within left and right display fields of view in accordance with the principles of the present invention.
Fig. 150a and 150b illustrate a mechanism for moving an image source according to the principles of the present invention.
Fig. 151a and 151b show illustrations of upper and lower wedges from the location of an image source, in accordance with the principles of the present invention.
Fig. 152 shows an illustration of a spring clip applying force to an image source in accordance with the principles of the present invention.
Fig. 153a, 153b and 154 show illustrations of example display optics including eye imaging in accordance with the principles of the present invention.
Fig. 155a, 155b, 156a, 156b, 157a, 157b, 158a, 158b, 159a and 159b illustrate a focus adjustment module in accordance with the principles of the present invention.
Fig. 160 shows a diagram of an example of a multi-fold optic as viewed from an eye position, in accordance with the principles of the present invention.
Fig. 161 and 162 illustrate an optical system according to the principles of the present invention.
Fig. 163A illustrates an abrupt change in the appearance of content in the field of view of the see-through display.
FIG. 163B illustrates a managed appearance system in which content is reduced in appearance as it enters a transition region near the edge of the field of view.
Fig. 164 illustrates a mixed field of view including a centered field of view and an extended field of view positioned at or near or overlapping an edge of the centered field of view.
Fig. 165 illustrates a hybrid display system in which the main centered field of view is generated with optics in the upper module and the extended field of view is generated with the display system mounted above the combiner.
Figs. 166A-166D illustrate examples of extended-display or extended-image-content optical configurations.
Fig. 167 illustrates another optical system using a hybrid optical system including a main display optical system and an extended field-of-view optical system.
Figs. 168A-168E illustrate various embodiments in which a see-through display panel is positioned directly in front of a user's eyes in a head-mounted computer to provide extended and/or overlapping fields of view in a hybrid display system.
FIG. 169 shows a cross-sectional illustration of an example optics assembly for a head mounted display, according to principles of the present invention.
Fig. 170 illustrates a graphical representation of light traps operating to reduce stray light in accordance with the principles of the present invention.
FIG. 171 shows a diagram of a simple optical system providing a 60 degree display field of view in accordance with the principles of the present invention.
Fig. 172 shows a graph of the acuity of a typical human eye with respect to angular position in the field of view.
Fig. 173 shows, in simplified form, a graph of acuity versus eccentricity for a typical human eye, highlighting the decrease in acuity as eccentricity increases and the difference between achromatic acuity and color acuity.
Fig. 174A and 174B show exemplary graphs of eye movement and head movement given in radians versus time.
Fig. 175 is a graph illustrating the effective relative lack of acuity, compared to the acuity of the fovea (fovea centralis), that a typical human eye provides within the eye's visual field when movements of the eye are involved.
Fig. 176 is a graph showing the minimum design MTF versus angle field position required to provide a uniformly sharp viewed image in a wide field of view display image.
Fig. 177 is a graph showing the relative MTF required to be provided by the display optics for a wide-field display to provide sharpness in the peripheral region of the display field that matches the acuity of the human eye.
Fig. 178 shows modeled MTF curves associated with the optical system of fig. 171, where MTF curves are shown for various different angular positions within the display field of view.
FIG. 179 is an illustration of a resolution chart in which the sharpness of the image has been reduced by blurring the peripheral portions of the image to simulate an image from optics that provide a less sharp peripheral region surrounding a sharp central region of +/-15 degrees.
Fig. 180 and 181 are diagrams illustrating how images are shifted within a display field of view as a user moves their head in accordance with the principles of the present invention.
FIG. 182 illustrates a blank portion of a display field of view in which the place from which the image has been shifted is displayed as a dark area to enable a user to see through to the surrounding environment in the blank portion, in accordance with the principles of the present invention.
Fig. 183 shows an illustration of a wide display field of view in which a user may choose to display a smaller field of view for a given image or application (e.g., game) to improve a personal viewing experience, in accordance with the principles of the present invention.
Fig. 184 and 185 illustrate the physical arrangement of an optical system according to the principles of the present invention.
Fig. 186 illustrates a 30:9 format field of view and a 22:9 format field of view with the two fields of view having the same vertical field of view and different horizontal fields of view in accordance with the principles of the present invention.
Fig. 187 depicts a user's eyes looking through a display field of view.
Fig. 188 depicts a lateral image shift within a display field of view.
Fig. 189 depicts a left display image and a right display image as they would be presented within a display field of view.
Fig. 190 depicts a graphical representation of the left display image and the right display image as they would be presented within the display field of view.
Fig. 191 depicts a diagram showing a user's eyes looking through a display field of view.
Fig. 192 depicts the left display image and the right display image as they would be presented within the display field of view.
While the invention has been described in connection with certain preferred embodiments, other embodiments will be understood by those of ordinary skill in the art and are encompassed herein.
Detailed Description
One aspect of the present invention relates to head-worn computing ("HWC") systems. HWC, in some instances, involves a system that mimics the appearance of head-worn glasses or sunglasses. The glasses may be a fully developed computing platform, such as including computer displays presented to each of the user's eyes in the lenses of the glasses. In embodiments, the lenses and displays may be configured to allow a person wearing the glasses to see the environment through the lenses while also seeing a digital image that overlays the view of the environment, forming a digitally augmented image of the environment, or augmented reality ("AR"), as perceived by the person.
HWC involves more than just placing a computing system on a person's head. The system may require a computer display that is designed to be lightweight, compact, and fully functional, such as a high resolution digital display that provides a high level of immersion comprised of the displayed digital content and the see-through view of the environmental surroundings. User interfaces and control systems suited to the HWC device may be required that are different from those used with more conventional computers, such as laptops. For the HWC and associated systems to be most effective, the glasses may be equipped with sensors to determine environmental conditions, geographic location, relative positioning to other points of interest, objects identified by imaging, and movement by the user or other users in a connected group, and the like. The HWC may then change the mode of operation to match the conditions, location, positioning, movements, and the like, in what is generally referred to as a contextually aware HWC. The glasses may also need to be connected, wirelessly or otherwise, to other systems, either locally or through a network. Controlling the glasses may be achieved through the use of an external device, automatically through contextually gathered information, through user gestures captured by the glasses' sensors, and the like. Each technique may be further refined depending on the software application being used in the glasses. The glasses may further be used to control or coordinate with external devices that are associated with the glasses.
Referring to fig. 1, an overview of the HWC system 100 is presented. As shown, the HWC system 100 includes the HWC 102, which in this example is configured as glasses to be worn on the head, with sensors such that the HWC 102 is aware of objects and conditions in the environment 114. In this example, the HWC 102 also receives and interprets control inputs, such as gestures and movements 116. The HWC 102 may communicate with an external user interface 104. The external user interface 104 may provide a physical user interface to take control instructions from a user of the HWC 102, and the external user interface 104 and the HWC 102 may communicate bi-directionally to implement user commands and provide feedback to the external device 108. The HWC 102 may also communicate bi-directionally with externally controlled or coordinated local devices 108. For example, the external user interface 104 may be used in conjunction with the HWC 102 to control an externally controlled or coordinated local device 108. The externally controlled or coordinated local device 108 may provide feedback to the HWC 102, and a customized GUI may be presented in the HWC 102 based on the type of device 108 or the specifically identified device 108. The HWC 102 may also interact with remote devices and information sources 112 through the network connection 110. Moreover, the external user interface 104 may be used in conjunction with the HWC 102 to control or otherwise interact with any of the remote devices and information sources 112, in a similar manner as when the external user interface 104 is used to control or otherwise interact with an externally controlled or coordinated local device 108. Similarly, the HWC 102 may interpret gestures 116 (e.g., captured from front-facing, downward-facing, upward-facing, or rearward-facing sensors, such as camera(s), range finders, IR sensors, etc.) or environmental conditions sensed in the environment 114 to control local or remote devices 108 or 112.
We will now describe each of the main elements depicted in fig. 1 in more detail; however, these descriptions are intended to provide general guidance and should not be construed as limiting. Additional details of each element may be described further herein.
The HWC 102 is a computing platform intended to be worn on a person's head. The HWC 102 may take many different forms to fit many different functional requirements. In some cases, the HWC 102 will be designed in the form of conventional eyeglasses. The glasses may or may not have an active computer graphics display. In instances where the HWC 102 has an integrated computer display, the display may be configured as a see-through display such that digital imagery can be overlaid with respect to the user's view of the environment 114. There are a number of see-through optical designs that may be used, including ones with reflective displays (e.g., LCoS, DLP), emissive displays (e.g., OLED, LED), holograms, TIR waveguides, and the like. In embodiments, the lighting system used in conjunction with the display optics may be a solid state lighting system, such as an LED, OLED, quantum dot LED, or the like. In addition, the optical configuration may be monocular or binocular. It may also include vision corrective optics. In embodiments, the optics may be packaged as contact lenses. In other embodiments, the HWC 102 may be in the form of a helmet with a see-through shield, sunglasses, safety glasses, goggles, a mask, a fire helmet with a see-through shield, a police helmet with a see-through shield, a military helmet with a see-through shield, a utility form customized to a certain work task (e.g., inventory control, logistics, repair, maintenance, etc.), and the like.
The HWC 102 may also have a number of integrated computing facilities, such as an integrated processor, integrated power management, communication structures (e.g., cellular network, WiFi, Bluetooth, local connections, mesh connections, remote connections (e.g., client server, etc.)), and the like. The HWC 102 may also have a number of positional awareness sensors, such as GPS, electronic compass, altimeter, tilt sensor, IMU, and the like. It may also have other sensors, such as a camera, rangefinder, hyperspectral camera, Geiger counter, microphone, spectral illumination detector, temperature sensor, chemical sensor, biologic sensor, humidity sensor, ultrasonic sensor, and the like.
The HWC 102 may also have integrated control technologies. The integrated control technologies may be contextual based control, passive control, active control, user control, and the like. For example, the HWC 102 may have an integrated sensor (e.g., a camera) that captures user hand or body gestures 116, enabling an integrated processing system to interpret the gestures and generate control commands for the HWC 102. In another example, the HWC 102 may have sensors that detect movement (e.g., a nod, a head shake, etc.), including accelerometers, gyros, and other inertial measurements, where the integrated processor may interpret the movement and generate a control command in response. The HWC 102 may also automatically control itself based on measured or sensed environmental conditions. For example, if the environment is bright, the HWC 102 may increase the brightness or contrast of the displayed image. In embodiments, the integrated control technologies may be mounted on the HWC 102 such that a user can interact with them directly. For example, the HWC 102 may have a button(s), touch capacitive interface, and the like.
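As an illustration of the kind of contextual control described above, the short sketch below maps a sensed ambient light level to a relative display brightness. It is a minimal conceptual example only; the function name, lux thresholds, and brightness scale are assumptions for illustration, not values from this disclosure.

```python
def display_brightness(ambient_lux: float) -> float:
    """Map sensed ambient illuminance to a relative display brightness (0..1).

    Illustrative only: the lux thresholds and output levels are assumed,
    not taken from the patent.
    """
    if ambient_lux < 50:        # dim indoor lighting
        return 0.2
    if ambient_lux < 1_000:     # typical indoor lighting
        return 0.5
    if ambient_lux < 10_000:    # overcast outdoors
        return 0.8
    return 1.0                  # bright sunlight: maximum brightness/contrast

print(display_brightness(30_000))  # -> 1.0 for a bright outdoor scene
```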
As described herein, the HWC 102 may communicate with the external user interface 104. The external user interface may come in many different forms. For example, a cell phone screen may be adapted to take user input for control of aspects of the HWC 102. The external user interface may be a dedicated UI, such as a keyboard, touch surface, button(s), joystick, and the like. In embodiments, the external controller may be integrated into another device, such as a ring, watch, bicycle, car, and the like. In each case, the external user interface 104 may include sensors (e.g., IMU, accelerometers, compass, altimeter, and the like) to provide additional input for controlling the HWC 102.
The HWC 102 may control or coordinate with other local devices 108, as described herein. The external devices 108 may be audio devices, visual devices, vehicles, cell phones, computers, and the like. For instance, the local external device 108 may be another HWC 102, in which case information may then be exchanged between the separate HWCs.
In a similar manner in which the HWC 102 may control or coordinate with the local devices 108, the HWC 102 may control or coordinate with remote devices 112, such as the HWC 102 communicating with the remote devices 112 through the network 110. Again, the remote device 112 may take many forms. One such form is another HWC 102. For example, each HWC 102 may communicate its GPS position such that all the HWCs 102 know where all of the HWCs 102 are located.
Fig. 2 illustrates a HWC 102 with an optical system that includes an upper optical module 202 and a lower optical module 204. While the upper and lower optical modules 202 and 204 will generally be described as separate modules, it should be understood that this is illustrative only and that the present invention contemplates other physical configurations, such as where the two modules are combined into a single module or where the elements making up the two modules are configured into more than two modules. In embodiments, the upper module 202 includes a computer controlled display (e.g., LCoS, DLP, OLED, etc.) and image light delivery optics. In embodiments, the lower module includes eye delivery optics that are configured to receive the upper module's image light and deliver the image light to the eye of a wearer of the HWC. In fig. 2, it should be noted that while the upper and lower optical modules 202 and 204 are illustrated on one side of the HWC such that image light can be delivered to one eye of the wearer, the present invention contemplates that embodiments will contain two image light delivery systems, one for each eye.
Fig. 3b illustrates an upper optical module 202 in accordance with the principles of the present invention. In this embodiment, the upper optical module 202 includes a DLP (also known as a DMD or digital micromirror device) computer-operated display 304 (which includes pixels comprised of rotatable mirrors, such as, for example, the DLP3000 available from Texas Instruments), a polarized light source 302, a ¼ wave retarder film 308, a reflective polarizer 310, and a field lens 312. The polarized light source 302 provides substantially uniform polarized light that is generally directed toward the reflective polarizer 310. The reflective polarizer reflects light of one polarization state (e.g., S polarized light) and transmits light of the other polarization state (e.g., P polarized light). The polarized light source 302 and the reflective polarizer 310 are oriented such that the polarized light from the polarized light source 302 is generally reflected toward the DLP 304. The light then passes through the ¼ wave film 308 once before illuminating the pixels of the DLP 304, and then passes through the ¼ wave film 308 again after being reflected by the pixels of the DLP 304. In passing through the ¼ wave film 308 twice, the light is converted from one polarization state to the other (e.g., the light is converted from S to P polarized light). The light then passes through the reflective polarizer 310. In the event that the DLP pixel(s) are in the "on" state (i.e., the mirrors are positioned to reflect light toward the field lens 312), the "on" pixels generally reflect the light along the optical axis and into the field lens 312. This light that is reflected by "on" pixels and directed generally along the optical axis of the field lens 312 will be referred to as image light 316. The image light 316 then passes through the field lens to be used by the lower optical module 204.
The light that is provided by the polarized light source 302, which is subsequently reflected by the reflective polarizer 310 before it reflects from the DLP 304, will generally be referred to as illumination light. The light that is reflected by the "off" pixels of the DLP 304 is reflected at a different angle than the light reflected by the "on" pixels, so that the light from the "off" pixels is generally directed away from the optical axis of the field lens 312 and toward the side of the upper optical module 202, as shown in fig. 3b. The light reflected by the "off" pixels of the DLP 304 will be referred to as dark state light 314.
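The polarization bookkeeping above (S polarized light in, a double pass through the ¼ wave film, P polarized light out through the reflective polarizer) can be verified with a few lines of Jones calculus. The sketch below is illustrative only and is not part of the disclosure: it assumes an ideal quarter-wave retarder with its fast axis at 45 degrees and idealizes the DLP mirror reflection as leaving the Jones vector unchanged.

```python
import numpy as np

def retarder(phase: float, axis_deg: float) -> np.ndarray:
    """Jones matrix of an ideal linear retarder with the given phase delay
    and fast-axis orientation (degrees)."""
    t = np.radians(axis_deg)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return rot @ np.diag([1.0, np.exp(1j * phase)]) @ rot.T

quarter_wave_45 = retarder(np.pi / 2, 45.0)  # the 1/4 wave film, axis at 45 deg

s_polarized = np.array([1.0, 0.0])  # illumination reflected by the polarizer

# Pass through the film, reflect off the DLP mirror (idealized), pass back.
after = quarter_wave_45 @ quarter_wave_45 @ s_polarized

print(np.round(np.abs(after) ** 2, 6))  # -> [0. 1.]: the light is now P polarized
```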
The DLP 304 operates as a computer-controlled display and is generally referred to as a MEMS device. The DLP pixels are comprised of small mirrors that can be directed. The mirrors generally flip from one angle to another. The two angles are generally referred to as states. When light is used to illuminate the DLP, the mirrors will reflect the light in a direction depending on the state. In embodiments herein, we generally refer to the two states as "on" and "off," which is intended to depict the condition of a display pixel. "On" pixels will be seen by a viewer of the display as emitting light, because the light is directed along the optical axis and into the field lens and the associated remainder of the display system. "Off" pixels will be seen by a viewer of the display as not emitting light, because the light from these pixels is directed to the side of the optical housing and into a light trap or light dump where the light is absorbed. The pattern of "on" and "off" pixels produces image light that is perceived by a viewer of the display as a computer-generated image. A full color image is presented to the user by sequentially providing the illumination light with complementary colors, such as red, green, and blue, where the sequence is presented in a repeating cycle that is faster than the user can perceive as separate images; as a result, the user perceives a full color image composed of the sum of the sequential images. Bright pixels in the image are provided by pixels that remain in the "on" state for the full duration of the cycle or frame time, while darker pixels in the image are provided by pixels that switch between the "on" state and the "off" state within the duration of the cycle or frame time, as in a video sequence of images.
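The brightness-by-duty-cycle behavior described above can be illustrated with a small calculation. This is a simplified sketch under assumed numbers (a 60 Hz frame rate, three sequential color subframes, and linear pulse-width modulation); real DLP controllers use binary bit-plane timing rather than a single linear pulse, so the figures are conceptual only.

```python
FRAME_HZ = 60   # assumed frame rate
SUBFRAMES = 3   # red, green, and blue presented sequentially each frame

def mirror_on_fraction(gray_level: int) -> float:
    """Fraction of a color subframe the pixel mirror spends 'on' to render
    an 8-bit gray level, under an idealized linear PWM model."""
    return gray_level / 255.0

subframe_us = 1e6 / (FRAME_HZ * SUBFRAMES)  # ~5556 us per color subframe
for gray in (255, 128, 16):
    on_time = mirror_on_fraction(gray) * subframe_us
    print(f"gray {gray:3d}: mirror 'on' for {on_time:6.0f} us of each subframe")
```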
FIG. 3a shows a schematic representation of a system in which an unpolarized light source 350 is pointed at the DLP 304. In this case, the angle required for the illumination light is such that the field lens 352 must be positioned substantially distant from the DLP 304 to avoid the illumination light being clipped by the field lens 352. The large distance between the field lens 352 and the DLP 304, along with the straight-line path of the dark state light 354, means that the light trap for the dark state light 354 is also located at a substantial distance from the DLP. For these reasons, this configuration is substantially larger in size than the upper optical module 202 of the preferred embodiment.
The configuration illustrated in fig. 3b can be lightweight and compact so that it fits into a small portion of the HWC. For example, the upper module 202 illustrated herein can be physically adapted to be mounted in an upper frame of the HWC such that image light can be directed into the lower optical module 204 for presenting digital content to the eyes of the wearer. The package of components that combine to generate the image light (i.e., the polarized light source 302, the DLP 304, the reflective polarizer 310, and the ¼ wave film 308) is very light and compact. The height of the system, excluding the field lens, may be less than 8 mm. The width (i.e., front to back) may be less than 8 mm. The weight may be less than 2 grams. The compactness of this upper optical module 202 allows for a compact mechanical design of the HWC, and the lightweight nature of these embodiments helps keep the HWC light enough to be comfortable for the wearer.
The configuration illustrated in fig. 3b can produce sharp contrast, high brightness, and deep black, especially when compared to LCD or LCoS displays used for HWC. The "on" and "off" states of the DLP provide strong differentiators in the light reflection paths representing the "on" and "off" pixels. As will be discussed in more detail below, dark state light reflected from "off" pixels can be managed to reduce stray light in the display system in order to produce images with high contrast.
Fig. 4 illustrates another embodiment of an upper optical module 202 in accordance with the principles of the present invention. This embodiment includes a light source 404, but in this case the light source provides unpolarized illumination light. The illumination light from the light source 404 is directed into the TIR wedge 418 such that the illumination light is incident on an internal surface of the TIR wedge 418 (shown as the angled lower surface of the TIR wedge 418 in fig. 4) at an angle beyond the critical angle as defined by equation 1:
critical angle = arcsin(1/n)        (Equation 1)
Here, the critical angle is the angle beyond which the illumination light is reflected from the internal surface when the internal surface comprises an interface from a solid with a higher refractive index (n) to air with a refractive index of 1 (e.g., the critical angle is 41.8 degrees for an acrylic-to-air interface where n = 1.5, and 38.9 degrees for a polycarbonate-to-air interface where n = 1.59). Consequently, the TIR wedge 418 is associated with a thin air gap 408 along the internal surface to create an interface between a solid with a higher refractive index and air. By selecting the angle of the light source 404 relative to the DLP 402 in correspondence to the angle of the internal surface of the TIR wedge 418, the illumination light is turned toward the DLP 402 at an angle suitable for providing image light 414 as reflected from "on" pixels. The illumination light is provided to the DLP 402 at approximately twice the angle of the pixel mirrors in the DLP 402 that are in the "on" state, such that after reflecting from the pixel mirrors, the image light 414 is directed generally along the optical axis of the field lens. Depending on the state of the DLP pixels, the illumination light from "on" pixels is reflected as image light 414, which is directed toward the field lens and the lower optical module 204, while the illumination light reflected from "off" pixels (generally referred to herein as "dark" state light, "off" pixel light, or "off" state light) 410 is directed in a separate direction, where it may be trapped and not used for the image ultimately presented to the wearer's eye.
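As a quick check on Equation 1, the two example interfaces quoted above can be computed directly. This is a minimal sketch of the stated formula; only the refractive indices come from the text.

```python
import math

def critical_angle_deg(n: float) -> float:
    """Critical angle for total internal reflection at a solid-to-air
    interface, per Equation 1: arcsin(1/n), returned in degrees."""
    return math.degrees(math.asin(1.0 / n))

print(critical_angle_deg(1.50))  # acrylic to air:        ~41.8 degrees
print(critical_angle_deg(1.59))  # polycarbonate to air:  ~39.0 degrees
```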
A light trap for the dark state light 410, which functions to absorb the dark state light, may be located along the optical axis defined by the direction of the dark state light 410 and on the side of the housing. For this purpose, the light trap may be comprised of a region outside of the cone of image light 414 from the "on" pixels. The light trap is typically constructed of light absorbing materials, including coatings of black paint or other light absorbing materials, to prevent light scattering from the dark state light degrading the image perceived by the user. In addition, the light trap may be recessed into a wall of the housing or include masks or guards to block stray light and prevent the light trap from being viewable adjacent to the displayed image.
The embodiment of fig. 4 also includes a corrective wedge 420 to correct the effect of refraction of the image light 414 as it exits the TIR wedge 418. By including the corrective wedge 420 and providing a thin air gap 408 (e.g., 25 microns), the image light from the "on" pixels is maintained generally in a direction along the optical axis of the field lens (i.e., the same direction as defined by the image light 414), so that it passes into the field lens and the lower optical module 204. As shown in fig. 4, the image light 414 from the "on" pixels exits the corrective wedge 420 generally perpendicular to the surface of the corrective wedge 420, while the dark state light exits at an oblique angle. As a result, the direction of the image light 414 from the "on" pixels is largely unaffected by refraction as the image light 414 exits the surface of the corrective wedge 420. In contrast, the dark state light 410 is substantially changed in direction by refraction when the dark state light 410 exits the corrective wedge 420.
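Snell's law makes the point above concrete: at the exit face, near-normal image light is essentially undeviated, while obliquely incident dark state light is bent toward a much steeper angle. The sketch below assumes a polycarbonate wedge (n = 1.59) and an illustrative 20 degree internal incidence angle for the dark state light; neither value is specified in the text.

```python
import math

N_POLYCARBONATE = 1.59  # assumed wedge material

def exit_angle_deg(n: float, internal_deg: float) -> float:
    """Angle in air of a ray leaving a medium of index n, from Snell's law:
    n * sin(theta_internal) = 1 * sin(theta_exit)."""
    s = n * math.sin(math.radians(internal_deg))
    if s >= 1.0:
        raise ValueError("beyond the critical angle: totally internally reflected")
    return math.degrees(math.asin(s))

# Image light meets the exit face at normal incidence: no change of direction.
print(exit_angle_deg(N_POLYCARBONATE, 0.0))   # -> 0.0 degrees
# Dark state light arrives obliquely and is refracted to a much steeper angle.
print(exit_angle_deg(N_POLYCARBONATE, 20.0))  # -> ~32.9 degrees
```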
The embodiment illustrated in fig. 4 has advantages similar to those discussed in connection with the embodiment of fig. 3b. The upper module 202 depicted in fig. 4 may measure approximately 8 x 8 mm and weigh less than 3 grams. A difference in overall performance between the configuration illustrated in fig. 3b and the configuration illustrated in fig. 4 is that the embodiment of fig. 4 does not require the light supplied by the light source 404 to be polarized. This can be advantageous in some cases, as will be discussed in more detail below (e.g., increased see-through transparency of the HWC optics from the user's perspective). Polarized light may nonetheless be used in conjunction with the embodiment depicted in fig. 4. An additional advantage of the embodiment of fig. 4 compared to the embodiment shown in fig. 3b is that the dark state light (shown as DLP off light 410) is directed away from the optical axis of the image light 414 at a steeper angle, due to the increased refraction encountered when the dark state light 410 exits the corrective wedge 420. This steeper angle allows the light trap to be positioned closer to the DLP 402, so that the overall size of the upper module 202 can be reduced. Because the light trap does not interfere with the field lens, it can also be made larger, so that the efficiency of the light trap can be improved; as a result, stray light can be reduced and the contrast of the image perceived by the user can be improved. Fig. 4a illustrates the embodiment described in connection with fig. 4 with an example set of corresponding angles at the various surfaces as a beam of light is reflected and passes through the upper optical module 202. In this example, the DLP mirrors are provided at 17 degrees to the surface of the DLP device. The angles of the TIR wedge are selected to provide TIR reflected illumination light at the correct angle for the DLP mirrors, while allowing the image light and dark state light to pass through the thin air gap; various combinations of angles are possible to achieve this.
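The angle bookkeeping in fig. 4a can be illustrated with a short sketch (an illustration under the stated 17 degree mirror tilt, not a reproduction of the figure's full ray trace): tilting a mirror by an angle rotates its reflection by twice that angle, which is why the illumination is supplied at roughly twice the mirror angle.

```python
# Illustrative sketch: law-of-reflection bookkeeping for a DLP pixel mirror
# tilted +/-17 degrees. Angles are measured from the DLP panel normal, which
# here coincides with the optical axis of the field lens.
mirror_tilt_deg = 17.0
illum_angle_deg = 2 * mirror_tilt_deg            # 34 degrees off the DLP normal

on_exit_deg = illum_angle_deg - 2 * mirror_tilt_deg    # 0: along the optical axis
off_exit_deg = illum_angle_deg + 2 * mirror_tilt_deg   # 68: toward the light trap
print(on_exit_deg, off_exit_deg)
```

The "off" state reflection leaving at four mirror tilts from the axis is what allows the light trap to sit well away from the cone of image light.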
Fig. 5 illustrates yet another embodiment of an upper optical module 202 in accordance with the principles of the present invention. As with the embodiment shown in fig. 4, the embodiment shown in fig. 5 does not require the use of polarized light. Polarized light may be used in conjunction with this embodiment, but it is not required. The optical module 202 depicted in fig. 5 is similar to the optical module presented in connection with fig. 4; however, the embodiment of fig. 5 includes an off light redirecting wedge 502. As can be seen from the illustration, the off light redirecting wedge 502 allows the image light 414 to continue generally along the optical axis toward the field lens and into the lower optical module 204 (as illustrated), while the off light 504 is redirected substantially toward the side of the corrective wedge 420, where it passes into a light trap. This configuration may allow for a more compact HWC, because the light trap (not shown) intended to absorb the off light 504 can be positioned laterally adjacent to the upper optical module 202, as opposed to below it. In the embodiment depicted in fig. 5, there is a thin air gap between the TIR wedge 418 and the corrective wedge 420 (similar to the embodiment of fig. 4). There is also a thin air gap between the corrective wedge 420 and the off light redirecting wedge 502. There may be HWC mechanical configurations that warrant locating the light trap for the dark state light elsewhere, and the illustration depicted in fig. 5 should be seen as illustrating the concept that the off light can be redirected to improve the compactness of the overall HWC. Fig. 5a illustrates an example of the embodiment described in connection with fig. 5, with more detail added on the relative angles at the various surfaces, and ray traces shown for the image light and for the dark light as they pass through the upper optical module 202. Again, various combinations of angles are possible.
Fig. 4b shows an illustration of a further embodiment in which a set of solid transparent matched wedges 456 is provided with a reflective polarizer 450 at the interface between the wedges. The interfaces between the wedges in the wedge set 456 are provided at an angle such that the illumination light 452 from the polarized light source 458 is reflected at the angle appropriate for the DLP mirror "on" state (e.g., 34 degrees for a 17 degree DLP mirror), so that the reflected image light 414 is provided along the optical axis of the field lens. The general geometry of the wedges in the wedge set 456 is similar to that shown in figs. 4 and 4a. A quarter wave film 454 is provided on the DLP 402 surface so that the illumination light 452 is in one polarization state (e.g., the S polarization state), while the image light 414 is converted to the other polarization state (e.g., the P polarization state) upon passing through the quarter wave film 454, reflecting off the DLP mirrors, and returning through the quarter wave film 454. The reflective polarizer is oriented such that the illumination light 452, with its polarization state, is reflected, and the image light 414, with the other polarization state, is transmitted. Since the dark state light from the "off" pixels 410 also passes through the quarter wave film 454 twice, it is also in the other polarization state (e.g., the P polarization state), such that it is transmitted by the reflective polarizer 450.
The angles of the faces of the wedge set 456 are selected to provide the illumination light 452 at the angle required by the DLP mirrors in the "on" state, such that the reflected image light 414 travels from the DLP along the optical axis of the field lens. The wedge set 456 provides an internal interface in which a reflective polarizer film can be located to redirect the illumination light 452 toward the mirrors of the DLP 402. The wedge set also provides matched wedges on opposite sides of the reflective polarizer 450, such that the image light 414 from "on" pixels exits the wedge set 456 substantially perpendicular to the exit surface, while the dark state light from "off" pixels 410 exits at an oblique angle to the exit surface. As a result, the image light 414 is substantially not refracted when exiting the wedge set 456, while the dark state light from the "off" pixels 410 is substantially refracted when exiting the wedge set 456, as shown in fig. 4b.
By providing a solid transparent matched wedge set, the planarity requirement on the interface is relaxed, because variations in planarity have a negligible effect as long as they are within the cone angle of the illumination light 452, which can be, for example, f/2.2 with a cone angle of 26 degrees. In a preferred embodiment, the reflective polarizer is bonded between the mating inner surfaces of the wedge set 456 using an optical adhesive, in order to reduce Fresnel reflections at the interfaces on either side of the reflective polarizer 450. The optical adhesive can be index matched to the material of the wedge set 456, and the segments of the wedge set 456 can all be made of the same material (such as BK7 glass or cast acrylic). The wedge material can also be selected to have low birefringence, to reduce non-uniformity in brightness. The wedge set 456 and quarter wave film 454 can also be bonded to the DLP 402 to further reduce Fresnel reflection losses at the DLP interface. Furthermore, since the image light 414 is substantially perpendicular to the exit surface of the wedge set 456, the planarity of that surface is not critical to maintaining the wavefront of the image light 414, so that high image quality can be achieved in the displayed image without requiring a very tight planarity tolerance on the exit surface.
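The f/2.2 and 26 degree figures quoted above are mutually consistent; a one-line check (illustrative, using the common thin-beam relation between f-number and cone half-angle):

```python
import math

def full_cone_angle_deg(f_number: float) -> float:
    """Full cone angle of an f/# beam, using tan(theta_half) = 1 / (2 * f#)."""
    return 2.0 * math.degrees(math.atan(1.0 / (2.0 * f_number)))

print(f"f/2.2 -> {full_cone_angle_deg(2.2):.1f} degree full cone")  # ~25.6, i.e. ~26
```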
A further embodiment of the invention, not illustrated, combines the embodiments illustrated in figs. 4b and 5. In this embodiment, the wedge set 456 consists of three wedges, where the general geometry of the wedges in the wedge set corresponds to the geometry shown in figs. 5 and 5a. A reflective polarizer is bonded between the first and second wedges, similar to the arrangement shown in fig. 4b; however, a third wedge is provided, similar to the embodiment of fig. 5. There is an angled thin air gap between the second and third wedges, such that the dark state light is reflected by TIR toward the side of the second wedge, where it is absorbed in a light trap. Like the embodiment shown in fig. 4b, this embodiment uses a polarized light source, as described previously. The difference in this embodiment is that the image light is transmitted through the reflective polarizer and through the angled thin air gap, so that it exits perpendicular to the exit surface of the third wedge.
Fig. 5b illustrates an upper optical module 202 with a dark light trap 514a. As described in connection with figs. 4 and 4a, image light can be generated from a DLP when using a TIR and corrective lens configuration. The upper module may be mounted in an HWC housing 510, and the housing 510 may include a dark light trap 514a. The dark light trap 514a is generally positioned/configured/formed in a position that is optically aligned with the dark light optical axis 512. As illustrated, the dark light trap may have a depth such that the trap internally reflects the dark light in an effort to further absorb the light and prevent the dark light from combining with the image light passing through the field lens; that is, the dark light trap may have a shape and depth adapted to absorb the dark light. Further, in embodiments, the dark light trap 514b may be made of or coated with a light absorbing material. In an embodiment, the recessed light trap 514a may include baffles to block a view of the dark state light. This may be combined with black surfaces and textured or fibrous surfaces to help absorb the light. The baffles can be part of the light trap, associated with the housing or the field lens, etc.
Fig. 5c illustrates another embodiment with a light trap 514b. As can be seen in the illustration, the shape of the trap is configured to enhance internal reflection within the light trap 514b to increase the absorption of the dark light 512. Fig. 5d illustrates another embodiment with a light trap 514c. As can be seen in the illustration, the shape of the trap 514c is configured to enhance internal reflection to increase the absorption of the dark light 512.
Fig. 5e illustrates another embodiment of an upper optical module 202 with a dark light trap 514d. This embodiment of the upper module 202 includes an off light redirecting wedge 502, as illustrated and described in connection with the embodiment of figs. 5 and 5a. As can be seen in fig. 5e, the light trap 514d is positioned along the optical path of the dark light 512. The dark light trap 514d may be configured as described in other embodiments herein. The embodiment of the light trap 514d illustrated in fig. 5e includes black regions on the sidewalls of the wedge, where the sidewalls are positioned substantially away from the optical axis of the image light 414. In addition, baffles 5252 can be added to one or more edges of the field lens 312 to block a view of the light trap 514d adjacent to the display image seen by the user.
Fig. 6 illustrates a combination of an upper optical module 202 and a lower optical module 204. In this embodiment, the image light projected from the upper optical module 202 may or may not be polarized. The image light reflects off a flat combiner element 602 such that it is directed toward the user's eye. The combiner element 602 is a partial mirror that reflects the image light while transmitting a substantial portion of the light from the environment, so that the user can see through the combiner element and view the environment surrounding the HWC.
The combiner 602 may include a holographic pattern to form a holographic mirror. If a monochromatic image is desired, the holographic pattern on the surface of the combiner 602 may have a single wavelength reflection design. If multiple colors are intended to be reflected from the surface of the combiner 602, a multiple wavelength holographic mirror may be included on the combiner surface. For example, in a three color embodiment, where red, green, and blue pixels are generated in the image light, the holographic mirror is reflective at wavelengths that substantially match the wavelengths of the red, green, and blue light provided by the light source. This configuration acts as a wavelength specific mirror, where light of the predetermined wavelengths from the image light is reflected to the eye of the user. This configuration can also be arranged such that substantially all other wavelengths in the visible spectrum pass through the combiner element 602, so that the user has a substantially clear view of the surroundings when looking through the combiner element 602. When using a combiner that is a holographic mirror, the transparency between the user's eye and the surroundings may be approximately 80%. Laser light can be used to create an interference pattern in the holographic material of the combiner to construct the holographic mirror, where the wavelengths of the laser light correspond to the wavelengths of light subsequently reflected by the holographic mirror.
In another embodiment, the combiner element 602 may comprise a notch mirror (notch mirror) consisting of a multilayer coated substrate, wherein the coating is designed to substantially reflect the wavelengths of light provided by the light source and to substantially transmit the remaining wavelengths in the visible spectrum. For example, in the case where red, green, and blue light is provided by the light source such that a full color image can be provided to the user, the notch mirror is a three color notch mirror, where the multilayer coating is designed to reflect narrow bands of red, green, and blue light that match the bands provided by the light source and the remaining visible wavelengths are transmitted through the coating to enable a view of the environment through the combiner. In another example, where a monochromatic image is provided to the user, the notch mirror is designed to reflect a single narrow band of light matching the wavelength range of the light provided by the light source, while transmitting the remaining visible wavelengths to enable a see-through view of the environment. The combiner 602 with the notch mirror will operate in a similar manner from the user's perspective as a combiner comprising a holographic pattern on the combiner element 602. Due to the match between the color of the image light and the reflected wavelength of the notch mirror, a combiner with a three-color notch mirror will reflect the "on" pixel to the eye and the wearer will be able to see the surroundings with a high degree of clarity. When using a three-color notch mirror, the transparency between the user's eye and the surroundings may be approximately 80%. Furthermore, the image provided by the upper optical module 202 with the notch mirror combiner can provide a higher contrast image than a holographic mirror combiner due to less scattering of the imaging light by the combiner.
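The approximately 80% see-through figure can be rationalized with a rough estimate (the notch positions and ~20 nm widths below are illustrative assumptions, not values from the disclosure):

```python
# Rough estimate of see-through transmission for a three-color notch mirror:
# the fraction of the 400-700 nm visible band NOT reflected by the notches.
# Band edges are assumed for illustration; coating losses are ignored.
visible_lo_nm, visible_hi_nm = 400.0, 700.0
notches_nm = [(445, 465), (525, 545), (615, 635)]  # assumed ~20 nm blue/green/red notches

reflected_nm = sum(hi - lo for lo, hi in notches_nm)
transmission = 1.0 - reflected_nm / (visible_hi_nm - visible_lo_nm)
print(f"approximate see-through transmission: {transmission:.0%}")  # ~80%
```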
Light can escape through the combiner 602 and generate a face glow, because the escaping light is directed generally downward onto the cheek of the user. When using a holographic mirror combiner or a three-color notch mirror combiner, the escaping light can be captured to avoid face glow. In an embodiment, if the image light is polarized before the combiner, a linear polarizer can be laminated to, or otherwise associated with, the combiner, with the transmission axis of the polarizer oriented relative to the polarized image light such that any escaping image light is absorbed by the polarizer. In an embodiment, the image light is polarized to provide S polarized light to the combiner for better reflection. As a result, the linear polarizer on the combiner is oriented to absorb S polarized light and pass P polarized light, which also matches the preferred orientation used in polarized sunglasses.
If the image light is not polarized, a micro-louvered film (such as a privacy filter) can be used to absorb the escaping image light while providing the user with a see-through view of the environment. In this case, the absorption or transmission of the micro-louvered film depends on the angle of the light: steep angle light is absorbed and light at shallower angles is transmitted. For this reason, in embodiments, the combiner with the micro-louvered film is at an angle greater than 45 degrees to the optical axis of the image light (e.g., the combiner can be oriented at 50 degrees so that the image light from the field lens is incident on the combiner at an oblique angle).
Fig. 7 illustrates the combiner element 602 at various angles when the combiner element 602 includes a holographic mirror. A mirrored surface reflects light at an angle equal to the angle at which the light is incident on the mirrored surface. Typically, this requires the combiner element to be at 45 degrees (i.e., 602a) so that, if the light is presented to the combiner vertically, the light is reflected horizontally toward the wearer's eye. In embodiments, the incident light can be presented at an angle other than vertical to enable the mirror surface to be oriented at an angle other than 45 degrees, but in all cases where a mirrored surface is employed (including the three-color notch mirror described previously), the angle of incidence equals the angle of reflection. As a result, increasing the angle of the combiner 602a requires that the incident image light be presented to the combiner 602a at a different angle, which positions the upper optical module 202 to the left of the combiner, as shown in fig. 7. In contrast, the holographic mirror combiner included in embodiments can be configured such that light is reflected at an angle different from the angle at which the light is incident on the holographic mirrored surface. This allows the angle of the combiner element 602b to be selected independently of the angle of the incident image light and the angle of the light reflected into the wearer's eye. In an embodiment, the angle of the combiner element 602b is greater than 45 degrees (as shown in fig. 7), as this allows a more laterally compact HWC design. The increased angle of the combiner element 602b decreases the front-to-back width of the lower optical module 204 and may allow for a thinner HWC display (i.e., the element furthest from the wearer's eye can be closer to the wearer's face).
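The 45 degree constraint for a conventional mirror can be verified with elementary vector reflection (a minimal 2D sketch, not taken from the disclosure): only at 45 degrees does vertically arriving image light leave horizontally, so any other combiner tilt forces the incident light, and hence the upper module, to move.

```python
import math

def reflect(d, n):
    """Reflect direction vector d about unit surface normal n: r = d - 2(d.n)n."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2.0 * dot * n[0], d[1] - 2.0 * dot * n[1])

def combiner_normal(tilt_deg: float):
    """Unit normal of a flat combiner tilted tilt_deg from horizontal (2D)."""
    t = math.radians(tilt_deg)
    return (math.sin(t), math.cos(t))

down = (0.0, -1.0)  # image light arriving vertically from the upper module
for tilt in (45.0, 50.0):
    rx, ry = reflect(down, combiner_normal(tilt))
    print(f"combiner at {tilt:.0f} deg -> reflected ray ({rx:+.2f}, {ry:+.2f})")
# 45 deg -> (+1.00, -0.00): horizontal, toward the eye
# 50 deg -> (+0.98, -0.17): 10 degrees off horizontal
```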
Fig. 8 illustrates another embodiment of a lower optical module 204. In this embodiment, polarized image light provided by the upper optical module 202 is directed into the lower optical module 204. The image light reflects off a polarizer 804 and is directed to a focusing partially reflective mirror 802, which is adapted to reflect the polarized light. An optical element such as a ¼ wave film located between the polarizer 804 and the partially reflective mirror 802 is used to change the polarization state of the image light, such that the light reflected by the partially reflective mirror 802 is transmitted by the polarizer 804 to present the image light to the eye of the wearer. The user can also see through the polarizer 804 and the partially reflective mirror 802 to view the surrounding environment. As a result, the user perceives a combined image consisting of the displayed image light overlaid onto the see-through view of the environment.
Although many embodiments of the present invention have been referred to as upper and lower modules containing certain optical components, it should be understood that the image light and dark light generation and management functions described in connection with the upper module may be arranged to direct light in other directions (e.g., upward, sideways, etc.). In an embodiment, it may be preferable to mount the upper module 202 over the wearer's eye, in which case the image light will be directed downward. In other embodiments, it may be preferable to generate light from the side of the wearer's eye or from below the wearer's eye. Further, the lower optical module is generally configured to deliver image light to the eye of the wearer and allow the wearer to see through the lower optical module, which may be accomplished by various optical components.
Fig. 8a illustrates an embodiment of the invention in which the upper optical module 202 is arranged to direct image light into a TIR waveguide 810. In this embodiment, the upper optical module 202 is positioned above the wearer's eye 812, and the light is directed horizontally into the TIR waveguide 810. The TIR waveguide is designed to internally reflect the image light in a series of downward TIR reflections until it reaches the portion in front of the wearer's eye, where the light exits the TIR waveguide 810 and passes into the wearer's eye. In this embodiment, an outer shield 814 is positioned in front of the TIR waveguide 810.
Fig. 8b illustrates an embodiment of the invention in which the upper optical module 202 is arranged to direct image light into a TIR waveguide 818. In this embodiment, the upper optical module 202 is arranged on the side of the TIR waveguide 818. For example, when the HWC is configured as a pair of head-worn glasses, the upper optical module may be located in or near the arm of the HWC. The TIR waveguide 818 is designed to internally reflect the image light in a series of TIR reflections until it reaches the portion in front of the wearer's eye, where the light exits the TIR waveguide 818 and passes into the wearer's eye.
Fig. 8c illustrates yet a further embodiment of the invention in which the upper optical module 202 directs polarized image light into a light guide 828, where the image light passes through a polarizing reflector 824, changes polarization state upon reflection from an optical element 822 (which includes, for example, a ¼ wave film), and is then reflected by the polarizing reflector 824 toward the wearer's eye, due to the changed polarization of the image light. The upper optical module 202 may be positioned to direct light toward a mirror 820, to position the upper optical module 202 laterally; in other embodiments, the upper optical module 202 may direct the image light directly toward the polarizing reflector 824. It should be understood that the invention encompasses other optical arrangements intended to direct image light into the eye of the wearer.
Another aspect of the invention relates to eye imaging. In an embodiment, a camera is used in conjunction with the upper optical module 202, enabling the wearer's eye to be imaged using the pixels of the DLP that are in the "off" state. Fig. 9 illustrates a system in which an eye imaging camera 802 is mounted and angled such that the field of view of the eye imaging camera 802 is redirected toward the wearer's eye by the mirrors of the pixels of the DLP 402 that are in the "off" state. In this manner, the eye imaging camera 802 can be used to image the wearer's eye along the same optical axis as the display image presented to the wearer. The image light presented to the wearer's eye illuminates the wearer's eye so that the eye can be imaged by the eye imaging camera 802. In this process, the light reflected by the eye passes back through the optical train of the lower optical module 204 and a portion of the upper optical module, to where it is reflected by the "off" pixels of the DLP 402 toward the eye imaging camera 802.
In an embodiment, the eye imaging camera images the wearer's eye at a time when there are enough "off" pixels to achieve the required eye image resolution. In another embodiment, the eye imaging camera collects eye image information from the "off" pixels over time and builds up an image over time. In another embodiment, a modified image is presented to the user that includes enough "off" state pixels, of the resolution and brightness needed by the camera, for imaging the wearer's eye, and the eye image capture is synchronized with the presentation of the modified image.
The eye imaging system may be used in a security system. If the eye is not recognized (e.g., by eye characteristics including retinal or iris characteristics, etc.), the HWC may not allow access to the HWC or other systems. In embodiments, the HWC may be used to provide persistent security access. For example, the eye security confirmation may be a continuous, near-continuous, real-time, quasi-real-time, periodic, or similar process, so that the wearer is effectively continuously verified as known. In embodiments, the HWC may be worn while tracking eye security to provide access to other computer systems.
The eye imaging system may be used to control the HWC. For example, a blink or a particular eye movement may be used as a control mechanism for a software application operating on the HWC or an associated device.
The eye imaging system may be used in determining how or when the HWC 102 delivers the digitally displayed content to the wearer. For example, the eye imaging system may determine that the user is looking in one direction and then the HWC may change the resolution in the area of the display or provide some content associated with something in the environment that the user may be looking at. Alternatively, the eye imaging system may identify different users and change the enabled features or displayed content provided to the users. The user may be identified from a user eye characteristics database located on HWC 102 or remotely located on network 110 or server 112. Further, the HWC may identify a primary user or a group of primary users from the eye characteristics, where the primary user(s) are provided with an enhanced set of features and all other users are provided with a different set of features. Thus, in this use case, HWC 102 uses the identified eye characteristics to enable or disable features, and the eye characteristics need only be analyzed in comparison to a relatively small database of personal eye characteristics.
Fig. 10 illustrates a light source that may be used in association with the upper optics module 202 (e.g., polarized light sources 302 and 458 where the light from the solid state light source is polarized, or light source 404). In an embodiment, to provide uniform light 1008 that is directed directly or indirectly into the upper optical module 202 and toward its DLP, light from a solid state light source 1002 may be projected into a backlight optical system 1004. The solid state light source 1002 may be one or more LEDs, laser diodes, or OLEDs. In an embodiment, the backlight optical system 1004 includes an extended section with a length/distance ratio greater than 3, in which the light undergoes multiple reflections from the sidewalls to homogenize the light supplied by the solid state light source 1002. The backlight optical system 1004 can also include structures on the surface opposite to where the uniform light 1008 exits the backlight 1004 (on the left as shown in fig. 10), to redirect the light toward the DLP 302 and the reflective polarizer 310, or toward the DLP 402 and the TIR wedge 418. The backlight optical system 1004 may also include structures to collimate the uniform light 1008, to provide light to the DLP with a smaller angular distribution or narrower cone angle. Diffusers or polarizers can be used on the entrance or exit surfaces of the backlight optical system. A diffuser can be used to spread or homogenize the exiting light from the backlight to improve the uniformity of the uniform light 1008, or to increase its angular spread. An elliptical diffuser, which diffuses the light more in some directions and less in others, can be used to improve the uniformity or angular spread of the uniform light 1008 in directions perpendicular to its optical axis. A linear polarizer can be used to convert unpolarized light supplied by the solid state light source 1002 into polarized light, so that the uniform light 1008 is polarized with a desired polarization state. A reflective polarizer can be used on the exit surface of the backlight 1004 to polarize the uniform light 1008 to the desired polarization state while reflecting the other polarization state back into the backlight, where it is recycled by multiple reflections within the backlight 1004 and at the solid state light source 1002. Thus, by including a reflective polarizer at the exit surface of the backlight 1004, the efficiency of the polarized light source is improved.
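The efficiency benefit of the recycling described above can be approximated with a geometric series (a hedged sketch; the 0.7 per-cycle return efficiency is an assumed figure, not from the disclosure):

```python
# Polarization recycling with a reflective polarizer on the backlight exit:
# half of the (unpolarized) light passes each try; the rejected half is
# reflected back, partially depolarized and re-attenuated, and tried again.
def recycled_output(cycle_efficiency: float, cycles: int = 50) -> float:
    out, remaining = 0.0, 1.0
    for _ in range(cycles):
        out += 0.5 * remaining                          # desired state passes
        remaining = 0.5 * remaining * cycle_efficiency  # rest returns, attenuated
    return out

print(f"no recycling:   {0.5:.0%}")                   # absorbing polarizer: 50%
print(f"with recycling: {recycled_output(0.7):.0%}")  # ~77% of the source light
```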
Fig. 10a and 10b show illustrations of structures in the backlight optical system 1004 that can be used to change the direction of light provided by the light source to the entrance face 1045 and then collimate the light in a direction transverse to the optical axis of the exiting uniform light 1008. The structure 1060 includes an angled sawtooth pattern in the transparent waveguide where the left edge of each sawtooth clips the steep angle rays, thereby limiting the angle of the redirected light. The steep surface on the right side (as shown) of each serration then redirects the light so that it reflects off the angled surface on the left portion of each serration and is redirected toward the exit surface 1040. The serrated surface shown on the lower surface in fig. 10a and 10b can be smooth and coated (e.g., with an aluminum coating or a dielectric mirror coating) to provide a high level of reflectivity without scattering. Structure 1050 includes curved surfaces (as shown) on the left side to focus the rays after they pass through exit surface 1040, thereby providing a mechanism for collimating uniform light 1008. In further embodiments, a diffuser can be provided between the solid state light source 1002 and the incident face 1045 to homogenize the light provided by the solid state light source 1002. In yet another embodiment, a polarizer can be used between the diffuser and the entrance face 1045 of the backlight 1004 to provide a polarized light source. Because the sawtooth pattern provides a smooth reflective surface, the polarization state of the light can be preserved from the entrance face 1045 to the exit face 1040. In this embodiment, light incident to the backlight from the solid state light source 1002 passes through a polarizer such that the light is polarized in a desired polarization state. If the polarizer is an absorbing linear polarizer, light of the desired polarization state is transmitted and light of the other polarization state is absorbed. If the polarizer is a reflective polarizer, light of a desired polarization state is transmitted into the backlight 1004 and light of another polarization state is reflected back into the solid state light source 1002 where it can be recycled as previously described to increase the efficiency of the polarized light source.
Fig. 11a illustrates a light source 1100 that may be used in association with the upper optics module 202. In an embodiment, the light source 1100 may provide light to a backlight optical system 1004, as described above in connection with fig. 10. In an embodiment, the light source 1100 includes a tri-color notch filter 1102. The tri-color notch filter 1102 has narrow band pass filters for three wavelengths, as indicated in the transmission graph 1108 in fig. 11c. The graph shown as 1104 in fig. 11b illustrates the emission of three different colored LEDs. It can be seen that the bandwidths of the emissions are narrow, but they have long tails. The tri-color notch filter 1102 can be used in conjunction with such LEDs to provide a light source 1100 that emits light in narrow filtered wavelength bands, as shown in the transmission graph 1110 in fig. 11d, where it can be seen that the clipping effect of the tri-color notch filter 1102 has cut the tails off the LED emission graph 1104, to provide light of narrower wavelength bands to the upper optical module 202. The light source 1100 can be used in conjunction with a combiner 602 that includes a holographic mirror or a tri-color notch mirror, to provide narrow band light that is reflected toward the wearer's eye with less wasted light that is not reflected by the combiner, thereby improving efficiency and reducing the escaping light that can cause face glow.
Fig. 12a illustrates another light source 1200 that may be used in association with the upper optics module 202. In an embodiment, the light source 1200 may provide light to a backlight optical system 1004, as described above in connection with fig. 10. In an embodiment, the light source 1200 includes a quantum dot cover glass 1202. The quantum dots absorb light of shorter wavelengths and emit light of longer wavelengths, depending on the size and material composition of the quantum dots (fig. 12b shows an example in which UV illumination, shown as spectrum 1202, applied to the quantum dots results in the quantum dots emitting a narrow band shown as PL spectrum 1204). As a result, the quantum dots in the quantum dot cover glass 1202 can be tailored to provide narrow bandwidth emissions (e.g., red, green, and blue), depending on the band or bands of the different quantum dots included, as illustrated in the graph shown in fig. 12c, where three different quantum dots are used. In embodiments, the LED that drives the quantum dots emits UV light, deep blue light, or blue light. For sequential illumination of different colors, multiple light sources 1200 would be used, where each light source 1200 would include a quantum dot cover glass 1202 with quantum dots selected to emit one of the desired colors. The light source 1200 can be used in conjunction with a combiner 602 that includes a holographic mirror or a tri-color notch mirror, to provide narrow band light that is reflected toward the wearer's eye with less wasted light that is not reflected.
Another aspect of the invention relates to generating peripheral image lighting effects for a person wearing an HWC. In embodiments, a solid state lighting system (e.g., LED, OLED, etc.) or other lighting system may be included inside the optical elements of a lower optical module 204. The solid state lighting system may be arranged such that lighting effects outside of the field of view (FOV) of the presented digital content are presented to create an immersive effect for the person wearing the HWC. To this end, the lighting effects may be presented on any portion of the HWC that is visible to the wearer. The solid state lighting system may be digitally controlled by an integrated processor in the HWC. In an embodiment, the integrated processor controls the lighting effects in coordination with the digital content presented within the FOV of the HWC. For example, a movie, picture, game, or other content may be displayed or played within the FOV of the HWC. The content may show a bomb blast on the right side of the FOV, and at substantially the same time, the solid state lighting system inside the lower module optics may flash quickly in concert with the image effect in the FOV. The effect need not be fast; it may be more persistent, indicating, for example, a general glow or color on one side of the user. The solid state lighting system may be color controlled, for example with red, green, and blue LEDs, so that the color control can be coordinated with the digitally presented content within the field of view.
Fig. 13a illustrates optical components of a lower optical module 204 together with an outer lens 1302. Fig. 13a also shows an embodiment including effects LEDs 1308a and 1308b. Fig. 13a illustrates image light 1312, as described elsewhere herein, directed into the lower optical module, where the light reflects off the combiner element 1304, as also described elsewhere herein. The combiner element 1304 in this embodiment is angled toward the wearer's eye at the top of the module and away from the wearer's eye at the bottom of the module, as also described and illustrated in connection with fig. 8 (e.g., at a 45 degree angle). The image light 1312 provided by the upper optical module 202 (not shown in fig. 13a) reflects off the combiner element 1304 away from the wearer's eye and toward the collimating mirror 1310, as described elsewhere herein. The image light 1312 then reflects off the collimating mirror 1310 and is focused, passes back through the combiner element 1304, and is directed into the wearer's eye. The wearer can also view the surrounding environment through the transparency of the combiner element 1304, the collimating mirror 1310, and the outer lens 1302 (if included). As described elsewhere herein, various surfaces are polarized to create the optical path for the image light and to provide transparency of the elements so that the wearer can view the surrounding environment. The wearer generally perceives the image light as an image formed in the FOV 1305. In an embodiment, an outer lens 1302 may be included. The outer lens 1302 may or may not be corrective, and may be designed to conceal the lower optical module components in an effort to make the HWC appear in a form similar to standard glasses or sunglasses.
In the embodiment illustrated in fig. 13a, the effects LEDs 1308a and 1308b are positioned at the sides of the combiner element 1304 and of the outer lens 1302 and/or the collimating mirror 1310. In an embodiment, the effects LED 1308a is positioned within the confines defined by the combiner element 1304 and the outer lens 1302 and/or the collimating mirror. The effects LEDs 1308a and 1308b are also positioned outside of the FOV 1305. In this arrangement, the effects LEDs 1308a and 1308b can provide lighting effects within the lower optical module outside of the FOV 1305. In an embodiment, the light emitted from the effects LEDs 1308a and 1308b may be polarized such that the light passes through the combiner element 1304 toward the wearer's eye and does not pass through the outer lens 1302 and/or the collimating mirror 1310. This arrangement provides the peripheral lighting effects to the wearer in a more private setting, by not transmitting the lighting effects through the front of the HWC into the surrounding environment. However, in other embodiments, the effects LEDs 1308a and 1308b may be unpolarized, so that the lighting effects are purposefully viewable by others in the environment for entertainment, such as giving the effect that the wearer's eye is glowing in correspondence with the image content viewed by the wearer.
Fig. 13b illustrates a cross section of the embodiment described in connection with fig. 13 a. As illustrated, the effect LED 1308a is located in a top-front region inside the optic of the lower optic module. It should be understood that the effect LED 1308a location in the described embodiment is merely illustrative and alternative layouts are contemplated by the present invention. Additionally, in embodiments, one or more effect LEDs 1308a may be present in each of the two sides of the HWC to provide a peripheral lighting effect near one or both eyes of the wearer.
Fig. 13c illustrates an embodiment in which the combiner element 1304 is angled away from the eye at the top and towards the eye at the bottom (e.g., according to the holographic or notch filter embodiments described herein). In this embodiment, the effect LED 1308a is located on the outer lens 1302 side of the combiner element 1304 to provide a covert appearance of the lighting effect. As with other embodiments, the effect LED 1308a of fig. 13c may include a polarizer to enable emitted light to pass through the polarizing element associated with the combiner element 1304 and be blocked by the polarizing element associated with the outer lens 1302.
Another aspect of the invention relates to mitigating light escaping from the space between the wearer's face and the HWC itself. Another aspect of the invention relates to maintaining a controlled lighting environment in proximity to the wearer's eyes. In embodiments, both the maintenance of the lighting environment and the mitigation of light escape are accomplished by including a removable and replaceable flexible shield on the HWC. A removable and replaceable shield can be provided for one eye or both eyes, in correspondence with the use of the displays for each eye. For example, in a night vision application, the display for only one eye can be used for night vision while the display for the other eye is turned off, to provide a good see-through view when moving between areas where visible light is available and dark areas where night vision enhancement is needed.
Fig. 14a illustrates a removable and replaceable flexible eye shield 1402 with an opening 1408, which can be quickly attached to and removed from the HWC 102 through the use of magnets 1404. Other attachment methods may be used, but for the purpose of illustrating the invention we will focus on the magnet embodiment. In an embodiment, magnets may be included in the eye shield 1402 and magnets of an opposite polarity may be included (e.g., embedded) in the frame of the HWC 102. With the opposite polarity configuration, the magnets of the two elements attract quite strongly. In another embodiment, one of the elements may have a magnet and the other side may have a metal for the attraction. In an embodiment, the eye shield 1402 is a flexible, elastomeric shield. In an embodiment, the eye shield 1402 may have a bellows design to accommodate flexibility and fit more closely to the wearer's face. Fig. 14b illustrates a removable and replaceable flexible eye shield 1402 adapted as a single-eye shield. In embodiments, a single-eye shield may be used on each side of the HWC to cover both eyes of the wearer. In embodiments, the single-eye shield may be used in connection with an HWC that includes only one computer display for one eye. These configurations prevent light that is generated and directed generally toward the wearer's face from escaping, by covering the space between the wearer's face and the HWC. The opening 1408 allows the wearer to look through the opening 1408 to view the displayed content and the surrounding environment through the front of the HWC. The image light in the lower optical module 204 can be prevented from emitting from the front of the HWC through, for example, internal optics polarization schemes, as described herein.
Fig. 14c illustrates another embodiment of a light suppression system. In this embodiment, the eye shield 1410 may be similar to the eye shield 1402, but the eye shield 1410 includes a front light shield 1412. The front light shield 1412 may be opaque to prevent light from escaping from the front lens of the HWC. In other embodiments, the front light shield 1412 is polarized to prevent light from escaping from the front lens. In a polarized arrangement, in embodiments, the internal optical elements of the HWC (e.g., of the lower optical module 204) may polarize the light transmitted toward the front of the HWC, and the front light shield 1412 may be polarized to prevent that light from transmitting through the front light shield 1412.
In an embodiment, an opaque front light shield 1412 may be included, and the digital content may include images of the surrounding environment so that the wearer can visualize the surrounding environment. A night vision view of the environment may be presented for one eye, and an opaque front light shield 1412 may be used to cover the see-through optical path for that eye. In other embodiments, this arrangement may be associated with both eyes.
Another aspect of the invention relates to automatically configuring the lighting system(s) used in the HWC 102. In embodiments, the display lighting and/or effects lighting, as described herein, may be controlled in a manner suitable for when an eye shield 1402 is attached to or removed from the HWC 102. For example, at night, when the light in the environment is low, the lighting system(s) in the HWC may go into a low light mode to further control any amount of stray light escaping from the HWC and the areas around the HWC. Covert operations at night, while using night vision or standard vision, may require a solution that suppresses as much escaping light as possible, so the user may clip on the eye shield(s) 1402, and then the HWC may enter a low light mode. In some embodiments, the low light mode may only be entered when the eye shield 1402 is attached, if the HWC identifies that the environment is in low light conditions (e.g., as detected by an ambient light level sensor). In an embodiment, the low light level may be determined to be at an intermediate point between full light and low light, depending on the environmental conditions.
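A hedged sketch of the decision logic just described (the function name, thresholds, and mode labels are hypothetical, not from the disclosure):

```python
LOW_LIGHT_LUX = 10.0    # assumed ambient threshold for the low light mode
MID_LIGHT_LUX = 100.0   # assumed threshold for an intermediate mode

def select_illumination_mode(eye_shield_attached: bool, ambient_lux: float) -> str:
    if ambient_lux >= MID_LIGHT_LUX:
        return "full_light"
    if eye_shield_attached and ambient_lux < LOW_LIGHT_LUX:
        return "low_light"       # only entered with the eye shield clipped on
    return "intermediate"        # between full light and low light

print(select_illumination_mode(True, 2.0))    # -> low_light
print(select_illumination_mode(False, 2.0))   # -> intermediate
```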
Another aspect of the invention relates to automatically controlling the type of content displayed in the HWC when an eye shield 1402 is attached to or removed from the HWC. In embodiments, when the eye shield(s) 1402 are attached to the HWC, the displayed content may be restricted in amount or in color content. For example, the display(s) may go into a simple content delivery mode to restrict the amount of information displayed. This may be done to reduce the amount of light produced by the display(s). In an embodiment, the display(s) may change from a color display to a monochrome display to reduce the amount of light produced. In an embodiment, the monochrome lighting may be red to limit the impact on the wearer's eyes, to maintain an ability to see better in the dark.
Another aspect of the invention relates to a system adapted to quickly convert from a see-through system to a non-see-through or very low see-through system for a more immersive user experience. The conversion system may include replaceable lenses, an eye shield, and optics adapted to provide user experiences in both modes. The lenses, for example, may be "blacked out" to provide an experience where all of the user's attention is dedicated to the digital content, and then the lenses may be switched out for high see-through lenses so the digital content augments the user's view of the surrounding environment. Another aspect of the invention relates to a low transmission lens that permits the user to see through the lens, but remains dark enough to maintain most of the user's attention on the digital content. The slight see-through view may provide the user with a visual connection to the surrounding environment, and this may reduce or eliminate nausea and other problems associated with fully removing the surrounding view when viewing digital content.
Fig. 14d illustrates a head-worn computer system 102 with a see-through digital content display 204 adapted to include a removable outer lens 1414 and a removable eye shield 1402. The eye shield 1402 may be attached to the head-worn computer 102 with magnets 1404 or other attachment systems (e.g., mechanical attachments, snug friction fit between the arms of the head-worn computer 102, etc.). The eye shield 1402 may be attached when the user wants to further limit stray light from escaping the head-worn computer, create a more immersive experience by removing the otherwise viewable peripheral see-through view of the surrounding environment, and so forth. The removable outer lens 1414 may come in several varieties for various experiences. It may have no transmission or a very low transmission to create a dark background for the digital content, for an immersive digital content experience. It may have a high transmission so the user can see through the see-through display and the lens to view the surrounding environment, creating a system for a heads-up display, augmented reality display, assisted reality display, etc. The lens 1414 may be dark in a middle portion to provide a dark background for the digital content (i.e., a dark backdrop behind the see-through field of view from the user's perspective) and a higher transmission area elsewhere. For example, the lens 1414 may have a transmission in the range of 2 to 5%, 5 to 10%, or 10 to 20% for the immersion effect, and a transmission above 10% or 20% for the augmented reality effect. The lens 1414 may also have an adjustable transmission to facilitate a change in the system effect. For example, the lens 1414 may be electronically adjustable (e.g., a liquid crystal lens, or crossed polarizers with an adjustable level of crossing).
In embodiments, the eye shield may have a transparent or partially transparent region to provide some visual connection to the user's surroundings. This may also reduce or eliminate nausea or other sensations associated with completely removing a view of the surrounding environment.
Fig. 14e illustrates the head-worn computer 102 assembled with the eye shield 1402 without the lens in place. The lens may be held in place with magnets 1418 in embodiments for easy removal and replacement. In embodiments, the lens may be held in place with other systems, such as mechanical systems.
Another aspect of the invention relates to an effects system that generates effects outside of the field of view in the see-through display of the head-worn computer. The effects may be, for example, lighting effects, sound effects, tactile effects (e.g., through vibration), air movement effects, and so forth. In an embodiment, the effects generation system is mounted on the eye shield 1402. For example, a lighting system (e.g., LED(s), OLED(s), etc.) may be mounted on an inside surface 1420, or exposed through the inside surface 1420, as illustrated in fig. 14f, such that it can create lighting effects (e.g., a bright light, a colored light, a subtle color effect) in coordination with the content being displayed in the field of view of the see-through display. The content may be, for example, a movie or a game, and an explosion may be scripted to occur on the right side of the content; matched with the content, a bright flash may be generated by the effects lighting system to create a stronger effect. As another example, the effects system may include a vibration system mounted near the side or temple, and when the same explosion occurs, the vibration system may generate vibration on the right side to increase the user experience, as if the explosion had produced a real pressure wave causing the vibration. As yet another example, the effects system may have an air system where the effect is a puff of air blown onto the user's face. This may create the feeling that a fast moving object in the content has passed nearby. The effects system may also have a speaker directed toward the user's ear, or an attachment for an ear bud or the like.
In embodiments, the effects generated by the effects system may be scripted by the author to coordinate with the content. In embodiments, a sensor may be placed within the eye shield to monitor content effects that would then cause the generation of effect(s) (e.g., a light sensor to measure a strong lighting effect or a peripheral lighting effect).
The effects system in the eye shield may be powered by an internal battery, and in embodiments the battery may also provide additional power to the head-worn computer 102 as a backup system. In embodiments, the effects system is powered by the battery in the head-worn computer. Power may be delivered through the attachment system (e.g., magnets, a mechanical system) or through a dedicated power system.
The effects system may receive data and/or commands from the head-worn computer through a wired or wireless data connection. The data may come, for example, through the attachment system, through a separate line, or through Bluetooth or another short range communication protocol.
In an embodiment, the eye shield is made of reticulated foam, which is very light and can conform to the contours of the user's face. The reticulated foam also allows air to circulate because of the open-cell nature of the material, which can reduce user fatigue and increase user comfort. The eye shield may be made of other materials, soft, stiff, or flexible, and may have another material on the periphery that contacts the face for comfort. In embodiments, the eye shield may include a fan to exchange air between the external environment and an internal space, where the internal space is partially defined by the user's face. The fan may operate very slowly and at low power to exchange the air and keep the user's face cool. In an embodiment, the fan may have a variable speed controller, and/or a temperature sensor may be positioned to measure the temperature in the internal space so that the temperature in the internal space can be regulated to a specified range, temperature, etc. The internal space is generally the area in front of the user's eyes and upper cheeks that is enclosed by the eye shield.
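A minimal sketch of the variable-speed fan control mentioned above (the setpoint, band, and step size are assumptions; the actual controller and sensor interfaces are not specified in the text):

```python
TARGET_C, BAND_C = 27.0, 1.5   # assumed comfort setpoint and hysteresis band

def fan_speed(interior_temp_c: float, current_speed: float) -> float:
    """Return a fan speed in [0, 1] that nudges the interior toward TARGET_C."""
    if interior_temp_c > TARGET_C + BAND_C:
        return min(1.0, current_speed + 0.1)   # too warm: speed up gradually
    if interior_temp_c < TARGET_C - BAND_C:
        return max(0.0, current_speed - 0.1)   # cool enough: wind down
    return current_speed                       # inside the band: hold

speed = 0.2
for temp_c in (29.5, 29.0, 27.0, 25.0):
    speed = fan_speed(temp_c, speed)
    print(f"{temp_c:.1f} C -> fan {speed:.1f}")
```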
Another aspect of the invention relates to flexibly mounting audio headphones on the head-worn computer 102 and/or the eye shield 1402. In embodiments, the audio headphones are mounted with a relatively rigid system with flexible joint(s) (e.g., a rotation joint at the connection with the eye shield, a rotation joint in the middle of a rigid arm, etc.) and extension(s) (e.g., a telescopic arm) to provide the user with adjustability to allow for a comfortable fit over, in, or around the user's ear. In an embodiment, the audio headphones are mounted with a system that is flexible throughout, such as a wire-based connection.
Fig. 14g illustrates the head-worn computer 102 with the removable lens 1414 and the mounted eye shield 1402. In an embodiment, the head-worn computer includes a see-through display (as disclosed herein). The eye shield 1402 also includes mounted audio headphones 1422. In this embodiment, the mounted audio headphones 1422 are mounted to the eye shield 1402 and have an audio wire connection (not shown). In an embodiment, the audio wire connection connects to an internal wireless communication system (e.g., Bluetooth, NFC, WiFi) to make the connection to the processor in the head-worn computer. In embodiments, the audio wire may connect to a magnetic connector, a mechanical connector, or the like to make the connection.
Fig. 14h illustrates the eye shield 1402 with the mounted audio headphones 1422, shown unmounted from the head-worn computer. As illustrated, the mechanical design of the eye shield is adapted to fit onto the head-worn computer to provide visual isolation, or partial isolation, as well as the audio headphones.
In an embodiment, the eye shield 1402 may be adapted to be removably mounted on a head-worn computer 102 with a see-through computer display. Audio headphones 1422 with an adjustable mount may be connected to the eye shield, where the adjustable mount may provide extension and rotation to provide the user of the head-worn computer with a mechanism to align the audio headphones with the user's ears. In an embodiment, the audio headphones include an audio wire connected to a connector on the eye shield, and the eye shield connector may be adapted to removably mate with a connector on the head-worn computer. In an embodiment, the audio headphones may be adapted to receive audio signals from the head-worn computer through a wireless connection (e.g., Bluetooth, WiFi). As described elsewhere herein, the head-worn computer may have a removable and replaceable front lens. The eye shield may include a battery to power systems internal to the eye shield. The eye shield may have a battery to power systems internal to the head-worn computer.
In embodiments, the eye shield may include a fan adapted to exchange air between an interior space partially defined by the user's face and an external environment, thereby cooling air in the interior space and on the user's face. In embodiments, the audio headset may include a vibration system (e.g., a vibration motor in the armature and/or in the on-ear section, a piezoelectric motor, etc.) adapted to provide a user with tactile feedback coordinated with digital content presented in the see-through computer display. In an embodiment, the head-worn computer includes a vibration system adapted to provide the user with tactile feedback coordinated with digital content presented in the see-through computer display.
In an embodiment, the eye shield 1402 is adapted to be removably mounted on a head-worn computer with a see-through computer display. The eye shield may also include a flexible audio headphone mounted to the eye shield, where the flexibility provides the user of the head-worn computer with a mechanism to align the audio headphone with the user's ear. In an embodiment, the flexible audio headphone is mounted to the eye shield with a magnetic connection. In an embodiment, the flexible audio headphone may be mounted to the eye shield with a mechanical connection.
In embodiments, the audio headset may be spring loaded or otherwise loaded such that the headset presses inward toward the user's ear for a more secure fit.
Referring to FIG. 15, we turn now to a description of a particular external user interface 104, generally referred to as a pen 1500. The pen 1500 is a specially designed external user interface 104 that can operate as a user interface for many different styles of HWC 102. The pen 1500 generally follows the form of a conventional pen, which is a familiar handheld device and so creates an intuitive physical interface for many of the operations to be performed in the HWC system 100. The pen 1500 may be one of several user interfaces 104 used in conjunction to control operations within the HWC system 100. For example, the HWC 102 may watch for a gesture 116 and interpret it as a control signal, while the pen 1500 is also being used as a user interface with the same HWC 102. Similarly, a remote keyboard may be used as an external user interface 104 in concert with the pen 1500. The combination of user interfaces, or the use of only one control system, generally depends on the operation(s) being performed in the HWC system 100.
While the pen 1500 may follow the general form of a conventional pen, it contains numerous technologies that enable it to function as an external user interface 104. FIG. 15 illustrates the technologies included in the pen 1500. As can be seen, the pen 1500 may include a camera 1508 arranged to view through the lens 1502. The camera may be focused, such as by the lens 1502, to image a surface upon which the user writes or otherwise moves the pen to interact with the HWC 102. In some situations, the pen 1500 will also have an ink, graphite, or other system such that what is being written is visible on the writing surface; in other situations, the pen 1500 has no such physical writing system, leaves no deposit on the writing surface, and only transmits data or commands to the HWC 102. The lens configuration is described in greater detail herein. The function of the camera is to capture information from an unstructured writing surface so that pen strokes can be interpreted as intended by the user. To assist in the prediction of the intended stroke path, the pen 1500 may include a sensor such as an IMU 1512. The components of the IMU (e.g., gyroscope, accelerometer, etc.) can be included in separate parts of the pen 1500, or the IMU can be included as a single unit. In this instance, the IMU 1512 is used to measure and predict the motion of the pen 1500. In turn, the integrated microprocessor 1510 takes the IMU information and camera information as inputs and processes the information to form a prediction of the pen tip's movement.
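The patent does not specify the fusion algorithm that the microprocessor 1510 applies; as a minimal illustrative sketch only, the following Python fragment blends a camera-derived tip velocity with an IMU dead-reckoning estimate using a simple complementary filter. The function name, the inputs, and the weighting constant are all assumptions made for illustration.

import numpy as np

def predict_tip_velocity(cam_velocity, imu_accel, prev_velocity, dt, alpha=0.8):
    # cam_velocity: 2-vector (mm/s) estimated from successive surface images (hypothetical input)
    # imu_accel: 2-vector (mm/s^2) in the writing plane, gravity removed (hypothetical input)
    # alpha: weight given to the camera measurement, an illustrative tuning value
    imu_velocity = np.asarray(prev_velocity) + np.asarray(imu_accel) * dt  # dead-reckoned estimate
    return alpha * np.asarray(cam_velocity) + (1 - alpha) * imu_velocity

# Example: the camera reports a 12 mm/s rightward stroke while the IMU suggests acceleration.
fused = predict_tip_velocity([12.0, 0.0], [30.0, 5.0], [10.0, 0.0], dt=0.01)
print(fused)  # fused velocity used to extrapolate the stroke path between camera frames

In a scheme of this kind, the low-drift camera track corrects the IMU's integration drift, while the IMU fills in motion estimates between camera frames.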
The pen 1500 may also include a pressure monitoring system 1504, such as to measure the pressure exerted on the lens 1502. As will be described in greater detail herein, the pressure measurements can be used to predict user intent to change line thickness, line type, stroke type, click, double click, and the like. In embodiments, the pressure sensor may be constructed using any force or pressure measurement sensor (including, for example, a resistive sensor, a current sensor, a capacitive sensor, a voltage sensor (such as a piezoelectric sensor), etc.) located behind the lens 1502.
The pen 1500 may also include a communication module 1518, such as for bi-directional communication with the HWC 102. In an embodiment, the communication module 1518 may be a short-range communication module (e.g., Bluetooth). The communication module 1518 may be securely paired with the HWC 102. The communication module 1518 is arranged to communicate data and commands to and from the microprocessor 1510 of the pen 1500. The microprocessor 1510 may be programmed to interpret data generated from the camera 1508, IMU 1512, pressure sensor 1504, etc., and then communicate commands to the HWC 102, e.g., via the communication module 1518. In another embodiment, the data collected by the microprocessor from any of the input sources (e.g., camera 1508, IMU 1512, pressure sensor 1504) may be communicated by the communication module 1518 to the HWC 102, and the HWC 102 may perform the data processing and prediction of the user's intent while using the pen 1500. In yet another embodiment, the data may be further communicated over the network 110 to a remote device 112 (such as a server) for data processing and prediction. The commands may then be communicated back to the HWC 102 for execution (e.g., displaying writing in the eyewear display, making a selection within the UI of the eyewear display, controlling a remote external device 112, controlling a local external device 108, etc.). The pen may also include memory 1514 for long-term or short-term data storage.
The pen 1500 may also include a number of physical user interfaces, such as quick launch buttons 1522, a touch sensor 1520, and the like. A quick launch button 1522 may be adapted to provide a quick way for the user to jump to a software application in the HWC system 100. For example, the user may be a regular user of a communication software package (e.g., email, text, Twitter, Instagram, Facebook, Google+, etc.), and the user may program a quick launch button 1522 to command the HWC 102 to launch that application. The pen 1500 may be provided with several quick launch buttons 1522, which may be user programmable or manufacturer programmable. A quick launch button 1522 may also be programmed to perform an operation. For example, one of the buttons may be programmed to clear the digital display of the HWC 102. This creates a quick way for the user to clear the screen on the HWC 102 for any reason, such as, for example, a better view of the environment. The quick launch button functionality will be discussed in further detail below. The touch sensor 1520 may be used to take gesture-style input from the user. For example, the user may be able to take a single finger and run it across the touch sensor 1520 to effect a page-scrolling operation.
The pen 1500 may also include a laser pointer 1524. The laser pointer 1524 may coordinate with the IMU 1512 to coordinate the pose and laser pointing. For example, the user may use the laser 1524 while rendering to help guide the viewer with graphical interpretation, and the IMU 1512 may interpret user gestures as commands or data inputs, either simultaneously or when the laser 1524 is off.
Fig. 16A-C illustrate several embodiments of a lens and camera arrangement 1600 for a pen 1500. One aspect relates to maintaining a constant distance between the camera and the writing surface to enable the writing surface to be held in focus to better track the movement of the pen 1500 on the writing surface. Another aspect relates to maintaining an angled surface following the circumference of the writing tip of the pen 1500 such that the pen 1500 is able to roll or partially roll in the user's hand to create the feel and freedom of a conventional writing instrument.
FIG. 16A illustrates an embodiment of the writing lens end of the pen 1500. The configuration includes a spherical lens 1604, a camera or image capture surface 1602, and a dome cover lens 1608. In this arrangement, the camera views the writing surface through the spherical lens 1604 and the dome cover lens 1608. The spherical lens 1604 focuses the camera such that the camera views the writing surface when the pen 1500 is held in the hand in a natural writing position (such as with the pen 1500 in contact with the writing surface). In an embodiment, the spherical lens 1604 should be separated from the writing surface to obtain the highest resolution of the writing surface at the camera 1602. In an embodiment, the spherical lens 1604 is separated from the writing surface by approximately 1 to 3 mm. In this configuration, the dome cover lens 1608 provides a surface that keeps the spherical lens 1604 separated from the writing surface at a substantially constant distance, substantially independent of the angle at which the pen is held against the writing surface. For example, in an embodiment, the field of view of the camera in this arrangement would be approximately 60 degrees.
The dome cover lens or other lens 1608 used to physically interact with the writing surface will be transparent or transmissive within the effective bandwidth of the camera 1602. In an embodiment, the dome cover lens 1608 may be spherical or another shape and composed of glass, plastic, sapphire, diamond, or the like. In other embodiments, where low-resolution imaging of the surface is acceptable, the pen 1500 can omit the dome cover lens 1608 and the spherical lens 1604 can be in direct contact with the surface.
FIG. 16B illustrates another configuration, somewhat similar to the one described in connection with FIG. 16A; this embodiment, however, does not use a dome cover lens 1608, but instead uses a spacer 1610 to maintain a predictable distance between the spherical lens 1604 and the writing surface, the spacer providing the spacing while allowing an image to be obtained by the camera 1602 through the lens 1604. In a preferred embodiment, the spacer 1610 is transparent. Further, while the spacer 1610 is shown as spherical, other shapes may be used, such as elliptical, annular, hemispherical, pyramidal, cylindrical, tubular, or other forms.
Fig. 16C illustrates yet another embodiment, wherein the structure includes a post 1614, such as running through the center of the lensed end of the pen 1500. The post 1614 may be an ink deposition system (e.g., an ink cartridge), a graphite deposition system (e.g., a graphite holder), or a dummy post whose purpose is primarily alignment. The choice of post type depends on the intended use of the pen. For example, where the user wants to use the pen 1500 as both a conventional ink-depositing pen and a fully functional external user interface 104, an ink-system post would be the best choice. If there is no need for the "writing" to be visible on the writing surface, a dummy post would be the choice. The embodiment of fig. 16C includes camera(s) 1602 and an associated lens 1612, where the camera(s) 1602 and lens 1612 are positioned to capture the writing surface without substantial interference from the post 1614. In an embodiment, the pen 1500 may include multiple cameras 1602 and lenses 1612 so that more or all of the circumference of the tip 1614 can be used as an input system. In an embodiment, the pen 1500 includes a contoured grip that keeps the pen aligned in the user's hand so that the camera 1602 and lens 1612 remain pointed at the surface.
Another aspect of the pen 1500 relates to sensing the force applied by the user to the writing surface with the pen 1500. Force measurement can be used in many ways. For example, the force measurements may be treated as discrete values, or tracked as discrete events, and compared against a threshold in a process for determining the user's intent. The user may, for example, want a force applied while selecting an object to be interpreted as a "click", and multiple force applications to be interpreted as multiple clicks. There may be times when the user holds the pen 1500 in a certain position, or holds a certain portion of the pen 1500 (e.g., a button or a touchpad), while clicking to effect a certain operation (e.g., a "right click"). In an embodiment, the force measurements may be used to track force and force trends. For example, force trends may be tracked and compared to threshold limits. There may be one such threshold limit, multiple limits, groups of related limits, and the like. For example, when the force measurements indicate a fairly constant force that generally falls within a range of related thresholds, the microprocessor 1510 can interpret the force trend as an indication that the user desires to maintain the current writing style, writing tip type, line thickness, stroke type, etc. In the event that the force trend appears to have intentionally gone outside of a set of thresholds, the microprocessor may interpret the action as an indication that the user wants to change the current writing style, writing tip type, line thickness, stroke type, etc. Once the microprocessor has made a determination of the user's intent, the change in writing style, writing tip type, line thickness, stroke type, etc. may be executed. In an embodiment, the change may be indicated to the user (e.g., in a display of the HWC 102), and the user may be presented with an opportunity to accept the change.
Fig. 17A illustrates an embodiment of a force-sensing surface tip 1700 of the pen 1500. The force-sensing surface tip 1700 includes a surface attachment tip 1702 (e.g., a lens as described elsewhere herein) in connection with a force or pressure monitoring system 1504. As the user uses the pen 1500 to write on a surface or to simulate writing on a surface, the force monitoring system 1504 measures the force or pressure the user applies to the writing surface and transmits that data to the microprocessor 1510 for processing. In this configuration, the microprocessor 1510 receives force data from the force monitoring system 1504 and processes the data to make predictions of the user's intent in applying the particular force currently being applied. In embodiments, the processing may be provided at a location other than on the pen (e.g., at a server in the HWC system 100, on the HWC 102). For clarity, when reference is made herein to processing information on the microprocessor 1510, such processing also contemplates processing the information at locations other than on the pen. The microprocessor 1510 may be programmed with force threshold(s), force signature(s), a library of force signatures, and/or other characteristics intended to guide an inference program in determining the user's intent based on the measured force or pressure. The microprocessor 1510 may be further programmed to make inferences from the force measurements as to whether the user has attempted to initiate a discrete action (e.g., a user interface selection "click") or is performing a constant action (e.g., writing within a particular writing style). The inferencing process is important in that it enables the pen 1500 to act as an intuitive external user interface 104.
Fig. 17B illustrates a trend graph of force 1708 versus time 1710 with a single threshold 1718. The threshold 1718 may be set at a level that indicates a discrete force application, indicating that the user desires to cause an action (e.g., select an object in a GUI). Event 1712, for example, may be interpreted as a click or selection command because the force quickly increases from below the threshold 1718 to above the threshold 1718. Event 1714 may be interpreted as a double click because the force quickly increases above the threshold 1718, decreases below the threshold 1718, and then the pattern is quickly repeated. The user may also cause the force to go above the threshold 1718 and hold it there for a period of time, indicating that the user intends to select an object in the GUI (e.g., a GUI presented in the display of the HWC 102) and "hold" for further operation (e.g., moving the object).
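As an illustrative sketch of the single-threshold interpretation just described (the values, names, and timing windows are assumptions, not taken from the patent), a sampled force trace can be scanned for threshold crossings, with crossing durations and spacings mapped to click, double-click, and hold events:

def classify_force_events(samples, dt, threshold, double_click_gap=0.3, hold_time=0.5):
    # samples: sampled force readings; dt: sample period in seconds
    # double_click_gap and hold_time are illustrative tuning values
    events, above, t_down, last_click_t = [], False, None, None
    for i, f in enumerate(samples):
        t = i * dt
        if f >= threshold and not above:      # rising crossing: press begins
            above, t_down = True, t
        elif f < threshold and above:         # falling crossing: press ends
            above = False
            if t - t_down >= hold_time:
                events.append(("hold", t_down))
            elif last_click_t is not None and t_down - last_click_t <= double_click_gap:
                events[-1] = ("double_click", last_click_t)  # merge with the prior click
                last_click_t = None
            else:
                events.append(("click", t_down))
                last_click_t = t_down
    return events

print(classify_force_events([0, 0, 5, 5, 0, 0, 5, 5, 0, 0], dt=0.05, threshold=3))
# -> [('double_click', 0.1)]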
While thresholds may be used to assist in interpreting a user's intent, characteristic force event trends may also be used. The threshold and the feature may be used in combination or each method may be used alone. For example, the click characteristics may be represented by a certain set of force trend characteristics or features. For example, click feature(s) may require that the trend meet the following criteria: a rise time between x and y values, a hold time between a and b values, and a fall time between c and d values. Features may be stored for various functions (such as click, double click, right click, hold, move, etc.). The microprocessor 1510 can compare the real-time force or pressure traces to features from a library of features to make decisions and issue commands to software applications executing in the GUI.
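A signature library of the kind described might, purely as a sketch, be organized as named rise/hold/fall time windows against which a measured event is tested; the window values below are invented for illustration:

# Hypothetical signature library: allowed (min, max) windows, in seconds,
# for the rise, hold, and fall portions of a force event.
SIGNATURES = {
    "click":       {"rise": (0.00, 0.05), "hold": (0.00, 0.20), "fall": (0.00, 0.05)},
    "right_click": {"rise": (0.00, 0.05), "hold": (0.30, 1.00), "fall": (0.00, 0.05)},
}

def match_signature(rise, hold, fall):
    # Return the first signature whose windows contain the measured timings.
    for name, sig in SIGNATURES.items():
        if (sig["rise"][0] <= rise <= sig["rise"][1]
                and sig["hold"][0] <= hold <= sig["hold"][1]
                and sig["fall"][0] <= fall <= sig["fall"][1]):
            return name
    return None  # no decision; fall back to threshold-based interpretation

print(match_signature(rise=0.02, hold=0.10, fall=0.03))  # -> click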
Fig. 17C illustrates a trend graph of force 1708 versus time 1710 with multiple thresholds 1718. As an example, a force trend with a number of pen force or pressure events is plotted on the graph. As noted, there are both presumably intentional events 1720 and presumably unintentional events 1722. The two thresholds 1718 of FIG. 17C create three force ranges: lower, middle, and higher. The beginning of the trend indicates that the user is applying a lower amount of force. This may mean that the user is writing with a given line thickness and does not intend to change the thickness at which the user is writing. The trend then shows a significant increase in force, into the middle force range, at the first apparently intentional event 1720. This force change appears abrupt in the trend and is sustained thereafter. The microprocessor 1510 can interpret this as an intentional change and, as a result, change the operation (e.g., change the line width, increase the line thickness, etc.) according to preset rules. The trend then continues with a second apparently intentional event 1720 into the higher force range. During the time in the higher force range, the force drops below the upper threshold 1718. This may indicate an inadvertent force change; the microprocessor may detect the change in range but not effect a change in the operation being coordinated by the pen 1500. As indicated above, trend analysis may be accomplished with thresholds and/or signatures.
In general, in this disclosure, an instrument stroke parameter change may refer to a change in line type, line thickness, tip type, stroke width, stroke pressure, color, and other forms of writing, coloring, painting, and the like.
Another aspect of the pen 1500 relates to selecting an operating mode for the pen 1500 based on contextual information and/or selection interface(s). The pen 1500 may have several modes of operation. For example, the pen 1500 may have a writing mode in which the user interface(s) of the pen 1500 (e.g., writing surface end, quick launch buttons 1522, touch sensor 1520, motion-based gestures, etc.) are optimized or selected for tasks associated with writing. As another example, the pen 1500 may have a wand mode in which the user interface(s) of the pen are optimized or selected for tasks associated with software or device control (e.g., the HWC 102, external local devices, remote devices 112, etc.). As another example, the pen 1500 may have a presentation mode in which the user interface(s) are optimized or selected to assist the user in giving a presentation (e.g., pointing with the laser pointer 1524 while using the button(s) 1522 and/or gestures to control the presentation or an application related to the presentation). The pen may, for example, have a mode that is optimized or selected for a particular device that the user is attempting to control. The pen 1500 may have a variety of other modes, and one aspect of the present invention relates to selecting such modes.
FIG. 18A illustrates automatic user interface mode selection based on contextual information. The microprocessor 1510 may be programmed with IMU thresholds 1814 and 1812. The thresholds 1814 and 1812 may serve as indications of the upper and lower bounds of the angles 1804 and 1802 of the pen 1500 that correspond to certain expected positions during certain predicted modes of use. For example, when the microprocessor 1510 determines that the pen 1500 is being held or otherwise positioned within an angle 1802 corresponding to the writing thresholds 1814, the microprocessor 1510 may institute a writing mode for the pen's user interface. Similarly, if the microprocessor 1510 determines (e.g., via the IMU 1512) that the pen is being held at an angle 1804 that falls between the predetermined wand thresholds 1812, the microprocessor may institute a wand mode for the pen's user interface. Both of these examples may be referred to as context-based user interface mode selection, because the mode selection is based on contextual information (e.g., position) that is automatically collected and then used by an automatic evaluation process to automatically select the user interface mode(s) of the pen.
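The angle thresholds 1812 and 1814 are shown in the figure rather than given numerically; assuming illustrative angle bands, the context-based selection logic might reduce to a sketch like the following, where the angle is measured by the IMU 1512 in degrees from vertical:

WRITING_BAND = (30.0, 70.0)  # assumed band for a natural writing grip
WAND_BAND = (0.0, 20.0)      # assumed band for the pen held up, pointing outward

def select_mode(pen_angle_deg, current_mode):
    # Context-based selection: the measured angle alone picks the mode.
    if WRITING_BAND[0] <= pen_angle_deg <= WRITING_BAND[1]:
        return "writing"
    if WAND_BAND[0] <= pen_angle_deg <= WAND_BAND[1]:
        return "wand"
    return current_mode  # between bands: keep the current mode (avoids flapping)

print(select_mode(55.0, "wand"))  # -> writing

In practice, the measured angle would be smoothed or trend-checked before a mode change is committed, as described next.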
As with other examples presented herein, the microprocessor 1510 may monitor contextual trends (e.g., pen angle over time) in an attempt to decide whether to stay in one mode or change modes. For example, through features, thresholds, trend analysis, etc., the microprocessor may determine that the change is an unintentional change and thus a user interface mode change is not desired.
FIG. 18B illustrates automatic user interface mode selection(s) based on context information. In this example, the pen 1500 (e.g., by its microprocessor) is monitoring whether the camera at the writing surface end 1508 is imaging a writing surface in close proximity to the writing surface end of the pen 1500. If the pen 1500 determines that the writing surface is within a predetermined relatively short distance, the pen 1500 may determine that the writing surface is present 1820 and the pen may enter into the writing mode user interface mode(s). In the event that the pen 1500 does not detect a relatively close writing surface 1822, the pen may predict that the pen is not currently being used as a writing instrument and the pen may enter a non-writing user interface mode(s).
Fig. 18C illustrates manual user mode selection(s). The user interface mode(s) may be selected based on twisting a section 1824 of the pen 1500 housing, clicking on an end button 1828, pressing a quick launch button 1522, interacting with the touch sensor 1520, detecting a predetermined action (e.g., a click) at the pressure monitoring system, detecting a gesture (e.g., detected by the IMU), and so forth. The manual mode selection may involve selecting an item in a GUI associated with the pen 1500 (e.g., an image presented in a display of the HWC 102).
In an embodiment, a confirmation selection may be presented to the user in the event that the mode is to be changed. The presentation may be physical (e.g., vibration in the pen 1500), through a GUI, through a light pointer, and so forth.
FIG. 19 illustrates a pair of pen usage scenarios 1900 and 1901. There are many usage scenarios; to further the reader's understanding, a pair of them is presented in connection with FIG. 19 by way of illustration. As such, these scenarios should be considered illustrative and non-limiting.
The usage scenario 1900 is a writing scenario in which the pen 1500 is used as a writing instrument. In this example, the quick launch button 122A is pressed to launch the note application 1910 in the GUI 1908 of the display 1904 of the HWC 102. Once the quick launch button 122A is pressed, the HWC 102 launches the note program 1910 and puts the pen into writing mode. The user uses the pen 1500 to write a symbol 1902 on a writing surface; the pen records the writing and transmits it to the HWC 102, where symbols representing the writing are displayed 1912 within the note application 1910.
The usage scenario 1901 is a gesture scenario in which the pen 1500 is used as a gesture capture and command device. In this example, the quick launch button 122B is activated and the pen 1500 is activated in wand mode such that an application launched on the HWC 102 can be controlled. Here, the user sees an application selector 1918 in the display(s) of the HWC 102, where different software applications can be selected by the user. The user gestures (e.g., slides, rotates, turns, etc.) with the pen to cause the application selector 1918 to move from application to application. Once the correct application is identified (e.g., highlighted) in the selector 1918, the user may gesture or click or otherwise interact with the pen 1500 such that the identified application is selected and launched. Once an application is launched, the wand mode may be used, for example, to scroll, rotate, change applications, select items, initiate processes, and so on.
In an embodiment, the quick launch button 122A may be activated and the HWC 102 may launch an application selector presenting a set of applications to the user. For example, a quick launch button may activate a selector to show all communication programs available for selection (e.g., SMS, Twitter, Instagram, Facebook, email, etc.) such that the user can select the desired program and then enter a writing mode. As a further example, the launcher may offer selections of various other groups that are related or otherwise categorized as generally being selected at a given time (e.g., Microsoft Office products, communication products, productivity products, note products, organization products, etc.).
Fig. 20 illustrates yet another embodiment of the present invention. Fig. 20 illustrates a watchband clip-on controller 2000. The watchband clip-on controller may be a controller used to control the HWC 102 or devices in the HWC system 100. The watchband clip-on controller 2000 has a fastener 2018 (e.g., a rotatable clip) that is mechanically adapted to attach to a watchband, as illustrated at 2004.
The watchband controller 2000 may have a quick launch interface 2008 (e.g., to launch applications and selectors as described herein), a touchpad 2014 (e.g., to be used as a touch-style mouse for GUI control in the HWC 102 display), and a display 2012. The clip 2018 may be adapted to fit a wide range of watchbands so that it can be used in connection with a watch that is independently selected for its function as a watch. In an embodiment, the clip is rotatable so that a user can position it in a desirable manner. In an embodiment, the clip may be a flexible strap. In embodiments, the flexible strap may be adapted to be stretched for attachment to a hand, wrist, finger, device, weapon, or the like.
In an embodiment, the watch band controller may be configured as a removable and replaceable watch band. For example, the controller may be incorporated into the band at a certain width, segment spacing, etc., to enable the watch band, along with its incorporated controller, to be attached to the watch body. In an embodiment, the attachment may be mechanically adapted to attach with a pin on which the watch band rotates. In embodiments, the watch band controller may be electrically connected to the watch and/or watch body to enable the watch, watch body, and/or watch band controller to communicate data therebetween.
The watch band controller may have 3-axis motion monitoring (e.g., by IMU, accelerometer, magnetometer, gyroscope, etc.) to capture user motion. The user motion may then be interpreted for gesture control.
In an embodiment, the watchband controller may include a fitness sensor and a fitness computer. The sensors may track heart rate, calories burned, strides, distance covered, etc. The data may then be compared against performance goals and/or standards for user feedback.
Another aspect of the invention relates to visual display techniques involving micro-Doppler ("mD") target tracking features ("mD features"). mD is a radar technique that uses a series of angle-dependent electromagnetic pulses that are broadcast into the environment, with the return pulses then captured. Changes between the broadcast pulses and the return pulses indicate changes in the shape, distance, and angular position of objects or targets in the environment. These changes provide signals that can be used to track a target and identify it through its mD features; each object or object type has a unique mD feature. Shifts in the radar pattern can be analyzed in the time and frequency domains, based on mD techniques, to derive information about the type of object present (e.g., whether a person is present), the motion of the object, the relative angular position of the object, and the distance to the object. By selecting the mD pulse frequency relative to known objects in the environment, the pulses can penetrate those known objects so that information about a target can be collected even when the target is visually obscured by a known object. For example, a pulse frequency can be used that will penetrate a concrete building so that a person can be identified inside the building. Multiple pulse frequencies can also be used in an mD radar so that different types of information about objects in the environment can be collected. Furthermore, the mD radar information can be combined with other information (such as captured images or distance measurements of the environment) that are analyzed jointly to provide improved object identification and improved target identification and tracking. In embodiments, the analysis can be performed on the HWC, or the information can be transmitted to a remote network for analysis with the results transmitted back to the HWC. Distance measurements can be provided by laser rangefinding, structured lighting, stereo depth maps, or sonar measurements. Images of the environment can be captured using one or more cameras capable of capturing images from visible, ultraviolet, or infrared light. The mD radar can be attached to the HWC, located proximally (e.g., in a vehicle) and wirelessly associated with the HWC, or located remotely. Maps or other previously determined information about the environment can also be used in the analysis of the mD radar information. Embodiments of the present invention relate to visualizing the mD features in a useful manner.
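The time- and frequency-domain analysis mentioned above is commonly performed with a short-time Fourier transform of the radar returns, in which micro-motions appear as sidebands around a target's main Doppler line. The following is a generic sketch of that computation, not the patent's specific processing chain; the inputs and parameters are assumptions.

import numpy as np

def md_spectrogram(returns, fs, win=256, hop=64):
    # returns: complex baseband samples from one range gate (assumed input)
    # fs: pulse repetition frequency in Hz
    window = np.hanning(win)
    frames = [returns[i:i + win] * window for i in range(0, len(returns) - win, hop)]
    spec = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)) ** 2
    doppler_hz = np.fft.fftshift(np.fft.fftfreq(win, d=1.0 / fs))
    return spec.T, doppler_hz  # Doppler bins x time frames, and the bin frequencies

# Simulated target: a steady 100 Hz body line plus a 2 Hz micro-motion (e.g., limb swing).
fs = 2000.0
t = np.arange(8192) / fs
sig = np.exp(1j * 2 * np.pi * (100 * t + 10 * np.sin(2 * np.pi * 2 * t)))
spec, bins = md_spectrogram(sig, fs)  # the oscillating sidebands form the mD feature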
Fig. 21 illustrates the FOV 2102 of a HWC 102 from the perspective of the wearer. As described elsewhere herein, the wearer has a see-through FOV 2102 through which the wearer views the adjacent surroundings, such as the buildings illustrated in fig. 21. The wearer is also able to see displayed digital content presented within a portion of the FOV 2102. In the embodiment illustrated in fig. 21, the wearer is able to see the buildings and other surrounding elements in the environment along with digital content representing the trajectories, or paths of travel, of bullets being fired by different people in the area. The surroundings are viewed through the transparency of the FOV 2102. The trajectories are presented via the digital computer display, as described elsewhere herein. In an embodiment, the trajectories presented are based on mD features that are collected in real time and communicated to the HWC. The mD radar itself may be on or near the wearer of the HWC 102, or it may be located remote from the wearer. In an embodiment, the mD radar scans the area, tracks and identifies targets (such as bullets), and communicates the traces, based on locations, to the HWC 102.
In the embodiment illustrated in fig. 21, there are several trajectories 2108 and 2104 presented to the wearer. The trajectory transmitted from the mD radar may be associated with a GPS location, and the GPS location may be associated with an object in the environment (such as a person, building, vehicle, etc.), both in terms of latitude and longitude perspectives, and in terms of altitude perspectives. The location may be used as a marker for the HWC so that the trajectory as presented in the FOV can be correlated or fixed in space relative to the marker. For example, if the friend shot trajectory 2108 is determined by mD radar to have originated from the upper right window of a building on the left, as illustrated in fig. 21, a virtual marker may be provided on or near the window. When the HWC views the window of the building, e.g., through its camera or other sensor, the trace may then be virtually anchored (anchor) with a virtual marker on the window. Similarly, the marker may be disposed near the end of the friend fire trajectory 2108 or other flight location (such as the upper left window of the center building on the right), as illustrated in fig. 21. This technique fixes the trajectory in space so that the trajectory appears to be fixed to an environmental location independent of where the wearer is looking. Thus, for example, as the wearer's head rotates, the trajectory appears to be fixed to the marked location.
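One plausible way to realize this environment-fixed presentation, sketched here purely for illustration, is to convert the marker's position relative to the wearer into azimuth and elevation and re-project it into display coordinates every frame using the head pose; every name, value, and parameter below is an assumption:

import numpy as np

def marker_screen_position(marker_enu, yaw, pitch, fov_deg=(30.0, 17.0), res=(1280, 720)):
    # marker_enu: marker position relative to the wearer (east, north, up), meters
    # yaw, pitch: head orientation in radians (e.g., from the HWC's IMU/compass)
    # fov_deg, res: illustrative display parameters
    e, n, u = marker_enu
    az = np.arctan2(e, n) - yaw                      # bearing relative to the gaze direction
    el = np.arctan2(u, np.hypot(e, n)) - pitch
    half_h = np.radians(fov_deg[0]) / 2
    half_v = np.radians(fov_deg[1]) / 2
    if abs(az) > half_h or abs(el) > half_v:
        return None                                  # marker currently outside the FOV
    x = (az / half_h + 1) / 2 * res[0]
    y = (1 - (el / half_v + 1) / 2) * res[1]
    return x, y

# As the head turns (yaw changes), the trajectory endpoints re-project every frame,
# so the drawn trace stays pinned to the building window it was anchored to.
print(marker_screen_position((5.0, 40.0, 4.0), yaw=0.0, pitch=0.0))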
In an embodiment, certain user positions may be known and thus identified in the FOV. For example, the shooter of the friendly fire trajectory 2108 may be a known friend warrior, and as such his location may be known. His location may be known from his GPS location, derived from a mobile communication system on his body, such as another HWC 102. In other embodiments, friend warriors may be marked by another friend. For example, if a friend's location in the environment is known through visual contact or communicated information, the wearer of the HWC 102 may mark the location using a gesture or an external user interface 104. If the friend warrior's location is known, the origin location of the friendly fire trajectory 2108 may be color coded or otherwise distinguished from unidentified trajectories in the displayed digital content. Similarly, enemy fire trajectories 2104 may be color coded or otherwise distinguished in the displayed digital content. In an embodiment, there may be an additional distinguished appearance in the displayed digital content for unknown trajectories.
In addition to the contextually relevant trajectory appearance, the trajectory's color or appearance may change from the origination location to the termination location. This path appearance change may be based on the mD features. The mD features may indicate, for example, that a bullet slows as it travels, and this slowing pattern may be reflected in the FOV 2102 as a color or pattern change. This can create an intuitive understanding of where the shooter is located. For example, the origin color may be red, indicating high speed, and the color may change over the course of the trajectory to yellow, indicating slower speed. This style change may also differ for friend, enemy, and unknown warriors. For example, an enemy's trajectory, as opposed to a friend's, may change from blue to green.
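As a small illustration of such a speed-to-color mapping (the speed range and colors here are invented, not from the patent), a renderer might linearly blend each trajectory segment's color by its measured speed:

def trajectory_color(speed, v_min=300.0, v_max=900.0, fast=(255, 0, 0), slow=(255, 255, 0)):
    # Blend from 'fast' (red) to 'slow' (yellow) as the projectile decelerates; speeds in m/s.
    t = max(0.0, min(1.0, (v_max - speed) / (v_max - v_min)))
    return tuple(round(f + t * (s - f)) for f, s in zip(fast, slow))

print(trajectory_color(850))  # near the origin of the track: close to red
print(trajectory_color(350))  # near the end of flight: close to yellow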
Fig. 21 illustrates an embodiment in which the user sees the environment through the FOV and can also see a color-coded trajectory that depends on bullet velocity and warrior type, with the trajectory fixed in an environmental position independent of the wearer's perspective. Other information such as distance, range ring, time of day, date, type of engagement (e.g., hold, stop shooting, back, etc.) may also be displayed in the FOV.
Another aspect of the invention relates to mD radar techniques that track and identify targets through other objects, such as walls (generally referred to as through-wall mD), and visualization techniques related thereto. FIG. 22 illustrates a through-wall mD visualization technique according to the principles of the present invention. As described elsewhere herein, the mD radar scanning the environment may be local to or remote from the wearer of the HWC 102. The mD radar can identify a visible target (e.g., a person) 2204 and then track the target as he walks behind a wall 2208. The tracking may then be presented to the wearer of the HWC 102 such that digital content reflecting the target and the target's movement, even while behind the wall, is presented in the FOV 2202 of the HWC 102. In an embodiment, when the target is outside the visible field of view, the target may be represented in the FOV by an avatar to provide the wearer with an image representing the target.
The mD target discrimination methods can identify the identity of a target based on vibrations or other small movements of the target. This can provide a personal signature for the target. In the case of humans, this may result in personal identification of a target that has been previously characterized. Cardio movements, heartbeats, lung expansion, and other small movements within the body may be unique to a person, and if those attributes are pre-identified, they may be matched in real time to provide a personal identification of a person in the FOV 2202. The person's mD characteristics may also depend on the position of the person. For example, a database of personal mD feature attributes may include mD features for a person standing, sitting, lying down, running, walking, jumping, and the like. This can improve the accuracy of personal data matching when a target is being tracked through mD feature techniques in the field. Where a person is personally identified, a specific indication of the person's identity may be presented in the FOV 2202. The indication may be a color, shape, shading, name, indication of the type of person (e.g., enemy, friend, etc.), and the like, to provide the wearer with intuitive real-time information about the person being tracked. This may be very useful in scenarios where there is more than one person in the area of the tracked person. If just one person in an area is personally identified, that person or that person's avatar can be presented differently than others in the area.
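A posture-conditioned identification of this kind could be organized, as a sketch only, as a library of pre-characterized feature vectors keyed by person and posture, matched by similarity; the vectors and the threshold here are placeholders, not characterized data:

import numpy as np

# Hypothetical pre-characterized library keyed by (person, posture). The four
# numbers might encode heartbeat, respiration, and gait-band energies.
MD_LIBRARY = {
    ("person_a", "walking"):  np.array([0.9, 0.2, 0.7, 0.1]),
    ("person_a", "standing"): np.array([0.9, 0.2, 0.1, 0.0]),
    ("person_b", "walking"):  np.array([0.4, 0.8, 0.6, 0.3]),
}

def identify(feature, posture, min_similarity=0.95):
    # Cosine-match a live mD feature against library entries for the same posture.
    best, best_sim = None, min_similarity
    for (person, pose), ref in MD_LIBRARY.items():
        if pose != posture:
            continue
        sim = float(feature @ ref) / (np.linalg.norm(feature) * np.linalg.norm(ref))
        if sim > best_sim:
            best, best_sim = person, sim
    return best  # None means no confident personal identification

print(identify(np.array([0.88, 0.22, 0.68, 0.12]), "walking"))  # -> person_a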
Fig. 23 illustrates an environment 2300 of mD scanning. The mD radar may scan the environment in an attempt to identify objects in the environment. In this embodiment, the mD scanned environment exhibits two vehicles 2302a and 2302b, an enemy warrior 2309, two friend warriors 2308a and 2308b, and a shooting track 2318. Each of these objects may be personally identified or type identified. For example, the vehicles 2302a and 2302b may be identified as tanks and heavy trucks through the mD features. Enemy warrior 2309 may be identified as one type (e.g., enemy warrior) or more personally identified (e.g., by name). A friend's warrior may be identified as one type (e.g., friend's warrior) or more personally identified (e.g., by name). The shot trajectory 2318 may be characterized by, for example, the weapon type or projectile type for the projectile.
Fig. 23a illustrates two separate HWC 102 FOV display techniques in accordance with the principles of the present invention. The first FOV 2312 illustrates a map view 2310 in which the mD-scanned environment is presented. Here, the wearer has a perspective on the mapped area, so he can understand all of the tracked targets in the area. This allows the wearer to traverse the area with knowledge of the targets. The second FOV illustrates a heads-up view, providing the wearer with an augmented reality style view of the environment in the wearer's proximity.
One aspect of the present invention relates to suppression of external light or stray light. As discussed elsewhere herein, eye glow and facial glow are two such artifacts that develop from such light. The eye glow and the face glow can be caused by image light escaping from the optics module. When the user is viewing a bright display image with the HWC, the escaping light is visible, especially in dark environments. The light escaping through the front of the HWC is visible as an eye glow because it is visible in the area of the user's eyes. The eye glow can appear in the form of a small version of the display image that the user is viewing. Light escaping from the bottom of the HWC shines onto the user's face, cheeks, or chest, making these portions of the user appear to glow. Both the eye glow and the face glow can increase the visibility of the user and emphasize the use of HWC, which may be negatively perceived by the user. As such, it is advantageous to reduce eye glow and facial glow. In combat scenarios (e.g., the mD trajectory presentation scenarios described herein) and in certain gaming scenarios, the suppression of extraneous or stray light is very important.
The disclosure relating to fig. 6 shows an example in which a portion of the image light passes through the combiner 602 so that the light impinges on the user's face, thereby illuminating a portion of the user's face, which is generally referred to herein as a facial glow. Facial glow is caused by any portion of the light from the HWC illuminating the user's face.
An example source of facial glow light can be the wide cone angle of light associated with the image light incident on the combiner 602. The combiner may include a holographic mirror or a notch mirror in which a narrow band of high reflectivity is matched to the wavelengths of light provided by the light source. The wide cone angle associated with the image light corresponds to the field of view provided by the HWC. Typically, the reflectivity of holographic mirrors and notch mirrors is reduced as the cone angle of the incident light increases beyond 8 degrees. As a result, for a field of view of 30 degrees, much of the image light can pass through the combiner and cause facial glow.
Fig. 24 shows an illustration of a light trap 2410 for the facial glow light. In this embodiment, an extension of the outer shield lens of the HWC is coated with a light absorbing material in the area where the converging light that would otherwise cause facial glow is absorbed into the light trap 2410. The light absorbing material may be black, or it may be a filter designed to absorb only the specific wavelengths of light provided by the light source(s) of the HWC. In addition, the surface of the light trap 2410 may be textured or fibrous to further improve the absorption.
Fig. 25 illustrates an optical system for a HWC that includes an outer absorptive polarizer 2520 to block the facial glow light. In this embodiment, the image light is polarized and, as a result, the light causing the facial glow is similarly polarized. The transmission axis of the absorptive polarizer is oriented such that the facial glow light is absorbed and not transmitted. In this case, the rest of the imaging system in the HWC may not require polarized image light, and the image light may be polarized at any point before the combiner. In an embodiment, the transmission axis of the absorptive polarizer 2520 is oriented vertically so that external glare from water (S-polarized light) is absorbed, and correspondingly the polarization of the image light is selected to be horizontal (S-polarization). As a result, image light that passes through the combiner 602 and is then incident on the absorptive polarizer 2520 is absorbed. In fig. 25, the absorptive polarizer 2520 is shown outside the shield lens; alternatively, the absorptive polarizer 2520 may be positioned inside the shield lens.
Fig. 26 illustrates an optical system for a HWC that includes a film with an absorptive notch filter 2620. In this case, the absorptive notch filter absorbs narrow bands of light that are selected to match the light provided by the light sources of the optical system. As a result, the absorptive notch filter is opaque to the facial glow light and transparent to the remainder of the wavelengths included in the visible spectrum, so that the user has a clear view of the surrounding environment. A triple notch filter suitable for this approach is available from Iridian Spectral Technologies, Ottawa, ON: http://www.ilphotonics.com/cdv2/Iridian-Interference%20Filters/New%20Filters/Triple%20Notch%20Filter.
In an embodiment, the combiner 602 may include a notch mirror coating to reflect the wavelengths of light in the image light, and the notch filter 2620 can be selected to correspond to the wavelengths of light provided by the light source and to the narrow bands of high reflectivity provided by the notch mirror. In this way, image light that is not reflected by the notch mirror is absorbed by the notch filter 2620. In embodiments of the invention, the light source can provide one narrow band of light for monochrome imaging or three narrow bands of light for full-color imaging. The notch mirror and the associated notch filter will then each provide, respectively, one narrow band or three narrow bands of high reflectivity and absorption.
Fig. 27 shows the use of a micro-louvered film 2750 to block the facial glow light. Micro-louvered films are sold by 3M as ALCF-P, for example, and are typically used as privacy filters for computers (see http://multimedia.3m.com/mws/mediawebserver?mwsId=SSSSSuH8gc7nZxtUoYxlYeevUqel7zHvTSevTSeSSSSSS--&fn=ALCF-P ABR2 Control Film ds). The micro-louvered film transmits light within a certain narrow angle (e.g., within 30 degrees of normal) and absorbs light beyond 30 degrees of normal. In fig. 27, the micro-louvered film 2750 is positioned such that the facial glow light 2758 is incident on the micro-louvered film 2750 at more than 30 degrees from normal, while the see-through light 2755 is incident on the micro-louvered film 2750 within 30 degrees of normal. As such, the facial glow light 2758 is absorbed by the micro-louvered film and the see-through light 2755 is transmitted, so that the user has a bright see-through view of the surrounding environment.
We now turn back to a description of eye imaging techniques. Aspects of the invention relate to various methods of imaging the eye of a person wearing the HWC 102. In embodiments, technologies for imaging the eye using an optical path involving the "off" state and the "no power" state of the DLP mirrors are described in detail below. In embodiments, technologies for imaging the eye with optical configurations that do not involve reflecting the eye image off of DLP mirrors are described. In embodiments, unstructured light, structured light, or controlled lighting conditions are used to predict the eye's position based on light reflected off the front of the wearer's eye. In embodiments, a reflection of a presented digital content image is captured as it reflects off of the wearer's eye, and the reflected image may be processed to determine the quality (e.g., sharpness) of the presented image. In embodiments, the presented image may then be adjusted (e.g., focused differently) based on the image reflections to improve its quality.
Figs. 28a, 28b, and 28c show illustrations of the various positions of the DLP mirrors. Fig. 28a shows a DLP mirror in the "on" state 2815. With the mirror in the "on" state 2815, illumination light 2810 is reflected along an optical axis 2820 that extends into the lower optical module 204. Fig. 28b shows a DLP mirror in the "off" state 2825. With the mirror in the "off" state 2825, illumination light 2810 is reflected along an optical axis 2830 that is substantially to the side of optical axis 2820, such that the "off" state light is directed toward a dark light trap as has been described elsewhere herein. Fig. 28c shows the DLP mirrors in a third position, which occurs when no power is applied to the DLP. This "no power" state differs from the "on" and "off" states in that the mirror edges are not in contact with the substrate and, as such, the mirrors are less accurately positioned. Fig. 28c shows all of the DLP mirrors in the "no power" state 2835. The "no power" state is produced by simultaneously setting the voltage to zero for both the "on" contact and the "off" contact of a DLP mirror, as a result of which the mirror returns to an unstressed position in which it lies in the plane of the DLP platform, as shown in fig. 28c. Although not typically done, it is also possible to apply the "no power" state to individual DLP mirrors. When DLP mirrors are in the "no power" state they do not contribute image content. Rather, as shown in fig. 28c, when a DLP mirror is in the "no power" state, the illumination light 2810 is reflected along an optical axis 2840 that lies between the optical axes 2820 and 2830 associated with the "on" and "off" states, respectively; as such, this light does not contribute to the displayed image as a light or dark pixel. However, this light can contribute scattered light into the lower optical module 204, with the result that the contrast of the displayed image can be reduced or artifacts that detract from the image content can be created. Consequently, in embodiments, it is generally desirable to limit the time associated with the "no power" state to times when images are not being displayed, or to reduce the time DLP mirrors spend in the "no power" state, so that the effect of the scattered light is reduced.
Fig. 29 illustrates an embodiment of the invention that can be used to display a digital content image to a wearer of the HWC 102 and capture an image of the wearer's eye. In this embodiment, light from the eye 2971 passes back through the optics in the lower module 204 and the solid corrective wedge 2966; at least a portion of the light passes through the partially reflective layer 2960 and the solid illuminating wedge 2964, and it is then reflected by a plurality of DLP mirrors on the DLP 2955 that are in the "no power" state. The reflected light then passes back through the illuminating wedge 2964, at least a portion of the light is reflected by the partially reflective layer 2960, and the light is captured by the camera 2980.
For comparison, illumination light rays 2973 from the light source 2958 are also shown being reflected by the partially reflective layer 2960. The angle of the illumination light 2973 is such that the DLP mirrors, when in the "on" state, reflect the illumination light 2973 to form image light 2969 that shares substantially the same optical axis as the light from the wearer's eye 2971. In this way, an image of the wearer's eye is captured in a field of view that overlaps the field of view of the displayed image content. In contrast, light reflected by DLP mirrors in the "off" state forms dark light 2975, which is directed substantially to the side of the image light 2969 and the light from the eye 2971. The dark light 2975 is directed toward the light trap 2962, which absorbs the dark light to improve the contrast of the displayed image, as has been described above in this specification.
In an embodiment, the partially reflective layer 2960 is a reflective polarizer. In this case, the light reflected from the eye 2971 can be polarized before entering the corrective wedge 2966 (e.g., with an absorptive polarizer between the upper module 202 and the lower module 204), with the polarization orientation aligned to the polarization state that the reflective polarizer transmits, so that the light reflected from the eye 2971 is substantially transmitted by the reflective polarizer. A quarter-wave retarder layer 2957 is then included adjacent to the DLP 2955 (as previously disclosed in fig. 3b), so that the light reflected from the eye 2971 passes through the quarter-wave retarder layer 2957 once before being reflected by the plurality of DLP mirrors in the "no power" state, and a second time after being reflected. By passing through the quarter-wave retarder layer 2957 twice, the polarization state of the light from the eye 2971 is reversed, so that when it is incident on the reflective polarizer, the light from the eye 2971 is substantially reflected toward the camera 2980. By using a partially reflective layer 2960 that is a reflective polarizer, and polarizing the light from the eye 2971 before it enters the corrective wedge 2966, the losses attributed to the partially reflective layer 2960 are reduced.
Fig. 28c illustrates the case wherein the DLP mirrors are simultaneously in the "no power" state. This mode of operation can be particularly useful when the HWC 102 is first placed onto the head of the wearer, when it is not yet necessary to display an image. As a result, the DLP can have all of its mirrors in the "no power" state while an image of the wearer's eye is captured. The captured image of the wearer's eye can then be compared to a database, using iris identification techniques or other eye pattern identification techniques, to determine, for example, the identity of the wearer.
In a further embodiment illustrated by FIG. 29, all of the DLP mirrors are placed into the "no power" state for a portion of the frame time (e.g., 50% of the frame time of the displayed digital content images), and the capture of the eye image is synchronized to occur at the same time and for the same duration. By reducing the time that the DLP mirrors are in the "no power" state, the time that light is scattered by DLP mirrors in the "no power" state is reduced, so that the wearer does not perceive a change in the quality of the displayed image. This is possible because the response time of the DLP mirrors is on the order of microseconds, while a typical frame time for a displayed image is approximately 0.016 seconds. This method of capturing images of the wearer's eye can be used periodically to capture repeated images of the wearer's eye. For example, an eye image can be captured for 50% of the frame time of every 10th frame displayed to the wearer. In another example, an eye image can be captured for 10% of the frame time of every frame displayed to the wearer.
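The duty cycling just described can be reduced to a small scheduling rule. The sketch below uses the frame time and fractions from the example above; everything else is assumed for illustration.

FRAME_TIME = 0.016      # ~60 Hz frame, per the example above
CAPTURE_FRACTION = 0.5  # portion of a capture frame with mirrors in "no power"
CAPTURE_EVERY_N = 10    # one eye image per 10 displayed frames

def frame_schedule(frame_index):
    # Returns (display_window_s, capture_window_s) for the given frame. On a
    # capture frame, the camera exposure is synchronized to the capture window,
    # during which the DLP mirrors are driven to the "no power" state.
    if frame_index % CAPTURE_EVERY_N == 0:
        capture = FRAME_TIME * CAPTURE_FRACTION
        return FRAME_TIME - capture, capture
    return FRAME_TIME, 0.0

for i in range(3):
    print(i, frame_schedule(i))  # frame 0 is a capture frame in this sketch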
Alternatively, a "no power" state can be applied to a subset of DLP mirrors (e.g., 10% of DLP mirrors) while another subset is busy generating image light for content to be displayed. This enables the capture of eye image(s) during the display of the digital content to the wearer. DLP mirrors used for eye imaging can be randomly distributed across an area of the DLP, for example, to minimize the impact on the quality of the digital content being displayed to the wearer. For example, to improve the displayed image perceived by the wearer, the individual DLP mirrors that enter the "no power" state for capturing each eye image can vary over time, such as in a random pattern. In yet another embodiment, a DLP mirror that enters a "no power" state for eye imaging may be coordinated with digital content in such a way that the "no power" mirror is taken away from portions of the image that require less resolution.
In the embodiments of the invention illustrated in figs. 9 and 29, the reflective surfaces provided by the DLP mirrors do not preserve the wavefront of the light from the wearer's eye, so the image quality of the captured eye image is somewhat limited. This is because the DLP mirrors are not constrained to lie in the same plane; the captured eye images may nevertheless still be useful in some embodiments. In the embodiment illustrated in fig. 9, the DLP mirrors are tilted so that they form rows of DLP mirrors that share common planes. In the embodiment illustrated in fig. 29, the individual DLP mirrors are not accurately positioned in a common plane because they are not in contact with the substrate. Examples of advantages of the embodiment associated with fig. 29 are: first, the camera 2980 can be positioned between the DLP 2955 and the illumination source 2958 to provide a more compact upper module 202; second, the polarization state of the light reflected from the eye 2971 can be the same as that of the image light 2969, so that the optical path of the image light and the optical path of the light reflected from the eye can be the same in the lower module 204.
FIG. 30 shows an illustration of an embodiment for displaying an image to the wearer and simultaneously capturing an image of the wearer's eye, wherein light from the eye 2971 is reflected by the partially reflective layer 2960 toward the camera 3080. The partially reflective layer 2960 can be an optically flat layer, so that the wavefront of the light from the eye 2971 is preserved and, as a result, a higher quality image of the wearer's eye can be captured. Further, since the DLP 2955 is not included in the optical path of the light from the eye 2971, the eye imaging process shown in fig. 30 does not interfere with the displayed image, and an image of the wearer's eye can be captured independently of the displayed image (e.g., independent of the timing of the image light and with no effect on resolution or pixel count).
In the embodiment illustrated in FIG. 30, the partially reflective layer 2960 is a reflective polarizer, the illumination light 2973 is polarized, the light from the eye 2971 is polarized, and the camera 3080 is located behind a polarizer 3085. The polarization axis of the illumination light 2973 and the polarization axis of the light from the eye are oriented perpendicular to the transmission axis of the reflective polarizer, so that both are substantially reflected by the reflective polarizer. The illumination light 2973 passes through the quarter-wave layer 2957 before being reflected by the DLP mirrors in the DLP 2955. The reflected light passes back through the quarter-wave layer 2957, so that the polarization states of the image light 2969 and the dark light 2975 are reversed in comparison to the illumination light 2973. As a result, the image light 2969 and the dark light 2975 are substantially transmitted by the reflective polarizer. The DLP mirrors in the "on" state provide the image light 2969 along an optical axis that extends into the lower optical module 204 to display an image to the wearer, while the DLP mirrors in the "off" state provide the dark light 2975 along an optical axis that extends to the side of the upper optics module 202. In the area where the dark light 2975 is incident on the corrective wedge 2966 at the side of the upper optics module 202, the absorptive polarizer 3085 is positioned with its transmission axis perpendicular to the polarization axis of the dark light and parallel to the polarization axis of the light from the eye, so that the dark light 2975 is absorbed and the light from the eye 2971 is transmitted to the camera 3080.
Fig. 31 shows a diagram of another embodiment of a system for displaying an image and simultaneously capturing an image of the wearer's eye, similar to that shown in fig. 30. The difference in the system shown in fig. 31 is that the light from the eye 2971 undergoes multiple reflections before being captured by the camera 3180. To enable the multiple reflections, a mirror 3187 is provided behind the absorptive polarizer 3185. Light from the eye 2971 is polarized before entering the corrective wedge 2966, about a polarization axis that is perpendicular to the transmission axis of the reflective polarizer that comprises the partially reflective layer 2960. In this manner, the light from the eye 2971 is reflected a first time by the reflective polarizer, a second time by the mirror 3187, and a third time by the reflective polarizer before being captured by the camera 3180. Although the light from the eye 2971 passes through the absorptive polarizer 3185 twice, it is substantially transmitted, because the polarization axis of the light from the eye 2971 is oriented parallel to the transmission axis of the absorptive polarizer 3185. As with the system described in connection with fig. 30, the system shown in fig. 31 includes an optically flat partially reflective layer 2960 that preserves the wavefront of the light from the eye 2971 to enable capture of a higher quality image of the wearer's eye. Also, since the DLP 2955 is not included in the optical path for the light reflected from the eye 2971, the eye imaging process shown in fig. 31 does not interfere with the displayed image, and an image of the wearer's eye can be captured independently of the displayed image.
Fig. 32 shows an illustration of a system for displaying an image and simultaneously capturing an image of the wearer's eye that includes a beam splitting plate 3212, the beam splitting plate 3212 comprising a reflective polarizer suspended in air between the light source 2958, the DLP 2955, and the camera 3280. Both the illumination light 2973 and the light from the eye 2971 are polarized about a polarization axis that is perpendicular to the transmission axis of the reflective polarizer. As a result, both the illumination light 2973 and the light from the eye 2971 are substantially reflected by the reflective polarizer. The illumination light 2973 is reflected by the reflective polarizer toward the DLP 2955, which splits the illumination light 2973 into image light 2969 and dark light 3275 depending on whether the respective DLP mirrors are in the "on" state or the "off" state. By passing through the quarter wave layer 2957 twice, the polarization state of the illumination light 2973 is reversed compared to the polarization states of the image light 2969 and the dark light 3275. As a result, the image light 2969 and the dark light 3275 are then substantially transmitted by the reflective polarizer. The absorptive polarizer 3285 at the side of the beam splitting plate 3212 has its transmission axis perpendicular to the polarization axis of the dark light 3275 and parallel to the polarization axis of the light from the eye 2971, such that the dark light 3275 is absorbed and the light from the eye 2971 is transmitted to the camera 3280. As in the system shown in fig. 30, the system shown in fig. 32 includes an optically flat beam splitting plate 3212 that preserves the wavefront of the light from the eye 2971 to enable capture of a higher quality image of the wearer's eye. Also, since the DLP 2955 is not included in the optical path for the light from the eye 2971, the eye imaging process shown in fig. 32 does not interfere with the displayed image, and an image of the wearer's eye can be captured independently of the displayed image.
An eye imaging system in which the polarization state of the light from the eye 2971 must be opposite that of the image light 2969 (as shown in figs. 30, 31, and 32) must be used with a lower module that includes a combiner that reflects both polarization states. As such, these upper modules 202 are best suited for use with lower modules 204 that include combiners that are reflective regardless of polarization state, examples of which are shown in figs. 6, 8a, 8b, 8c, and 24-27.
In yet another embodiment, shown in fig. 33, a beam splitting plate 3222 comprises a reflective polarizer on the side facing the illumination light 2973 and a short pass dichroic mirror on the side facing the light from the eye 3271 and the camera 3280. An absorptive surface 3295 is provided to capture the dark light 3275, and the camera 3280 is positioned in an opening in the absorptive surface 3295. In this manner, the system of fig. 32 can be made to function with unpolarized light from the eye 3271.
In embodiments involving capturing an image of the wearer's eye, the light used to illuminate the wearer's eye can be provided by several different sources, including: light from the displayed image (i.e., image light); light from the environment that passes through the combiner or other optics; light provided by a dedicated eye light; etc. Figs. 34 and 34a show illustrations of a dedicated eye light 3420. Fig. 34 shows an illustration from a side view, where the dedicated eye light 3420 is positioned at a corner of the combiner 3410 so that it does not interfere with the image light 3415. While the wearer is viewing the displayed image provided by the image light 3415, the dedicated eye light 3420 is directed such that the eye illumination light 3425 illuminates the eye box 3427 where the eye 3430 is located. Fig. 34a shows an illustration from the perspective of the wearer's eye to show how the dedicated eye light 3420 is positioned at a corner of the combiner 3410. Although the dedicated eye light 3420 is shown at the upper left corner of the combiner 3410, other locations along one of the edges of the combiner 3410, or on other optical or mechanical components, are possible as well. In other embodiments, more than one dedicated eye light 3420, at different positions, can be used. In an embodiment, the dedicated eye light 3420 provides infrared light that is invisible to the wearer (e.g., 800 nm) so that the eye illumination light 3425 does not interfere with the displayed image as perceived by the wearer.
Fig. 35 shows a series of illustrations of captured eye images showing eye glints (i.e., light reflected from the front of the eye) produced by the dedicated eye light. In this embodiment of the invention, captured images of the wearer's eye are analyzed to determine the relative positions of the iris 3550, the pupil, or other portions of the eye, and the eye glint 3560. When the dedicated eye light is used, the eye glint is a reflected image of the dedicated eye light 3420. Fig. 35 illustrates the relative positions of the eye glint 3560 and the iris 3550 for a variety of eye positions. By providing the dedicated eye light 3420 in a fixed position, in combination with the fact that the human eye is substantially spherical, or at least reliably repeatable in shape, the eye glint provides a fixed reference point within the displayed image or within the see-through view of the surrounding environment, against which the determined position of the iris can be compared to determine where the wearer is looking. By positioning the dedicated eye light 3420 at a corner of the combiner 3410, the eye glint 3560 is formed in the captured image away from the iris 3550. As a result, the positions of the iris and the eye glint can be determined more easily and more accurately during analysis of the captured image, since they do not interfere with one another. In a further embodiment, the combiner includes an associated cut filter that prevents infrared light from the environment from entering the HWC, and the camera is an infrared camera, so that the eye glint is produced only by light from the dedicated eye light. For example, the combiner can include a low pass filter that passes visible light while absorbing infrared light, and the camera can include a high pass filter that absorbs visible light while passing infrared light.
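As an illustration of the glint-based gaze determination described above, the following Python sketch computes a gaze offset from the relative positions of the iris center and the eye glint. It assumes an upstream image-analysis step has already located both features in the captured eye image, and the calibration gains are hypothetical values that a calibration step (looking at known targets) would produce; this is a minimal sketch, not a definitive implementation.

# Sketch of gaze estimation from the glint/iris geometry described above.
# Assumes image analysis has already located the iris center and the glint
# of the dedicated eye light in the captured eye image; the calibration
# gains are hypothetical values a calibration step would produce.

def estimate_gaze(iris_center, glint_center, gain_x=0.9, gain_y=0.9):
    """Return a (horizontal, vertical) gaze offset in normalized display
    units, using the glint as the fixed reference point."""
    dx = iris_center[0] - glint_center[0]   # pixels, + = looking right
    dy = iris_center[1] - glint_center[1]   # pixels, + = looking down
    return gain_x * dx, gain_y * dy

# Example: the iris has moved right of the glint -> wearer looking right.
print(estimate_gaze((322.0, 240.5), (310.0, 241.0)))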
In an embodiment of the eye imaging system, the lens for the camera is designed to take into account the optics associated with the upper module 202 and the lower module 204. This is accomplished by designing the camera lens in combination with the optics in the upper module 202 and the optics in the lower module 204, so as to produce a high MTF image of the wearer's eye at the image sensor in the camera. In yet another embodiment, the camera lens is provided with a large depth of field to eliminate the need to focus the camera in order to capture a sharp image of the eye. A large depth of field is typically provided by a high f/# lens (e.g., f/# > 5). In this case, the reduced light collection associated with the high f/# lens is compensated for by including the dedicated eye light, to enable a bright image of the eye to be captured. Further, the brightness of the dedicated eye light can be modulated and synchronized with the capture of the eye images, so that the dedicated eye light has a reduced duty cycle and the brightness of the infrared light on the wearer's eye is reduced.
In a further embodiment, fig. 36a shows an illustration of an eye image that is used to identify the wearer of the HWC. In this case, an image of the wearer's eye 3611 is captured and analyzed for a pattern of identifiable features 3612. The pattern is then compared to a database of eye images to determine the identity of the wearer. After the identity of the wearer has been verified, the mode of operation of the HWC and the types of images, applications, and information to be displayed can be adjusted and controlled in accordance with the determined identity of the wearer. Examples of adjustments to the operating mode depending on the determined identity of the wearer include: making different modes or feature sets available, powering off or sending messages to an external network, allowing guest features and applications to run, and so forth.
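The comparison of a captured feature pattern against a database of enrolled eye images can be sketched as follows. This is a minimal illustration assuming the features have already been encoded as fixed-length bit patterns; the Hamming-distance matching and the 0.32 threshold are common iris-recognition conventions assumed here for illustration, not values specified in this disclosure.

# Illustrative identity check against a database of enrolled eye-feature
# patterns. The bit-string "iris codes" and the 0.32 Hamming-distance
# threshold are assumptions for the sketch, not values from the patent.
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    return float(np.count_nonzero(code_a != code_b)) / code_a.size

def identify_wearer(captured_code, enrolled, threshold=0.32):
    """Return the enrolled identity whose pattern best matches, or None."""
    best_id, best_dist = None, 1.0
    for identity, code in enrolled.items():
        d = hamming_distance(captured_code, code)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None

rng = np.random.default_rng(0)
alice = rng.integers(0, 2, 2048, dtype=np.uint8)
noisy = alice.copy(); noisy[:100] ^= 1          # same eye, slight noise
print(identify_wearer(noisy, {"alice": alice})) # -> "alice"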
In another embodiment using eye imaging, the sharpness of the displayed image is determined based on the eye glint produced by the reflection of the displayed image from the surface of the wearer's eye. By capturing an image of the wearer's eye 3611, an eye glint 3622, which is a small version of the displayed image, can be captured and analyzed for sharpness. If the displayed image is determined not to be sharp, an automated adjustment of the focus of the HWC optics can be performed to improve sharpness. This ability to measure the sharpness of the displayed image at the surface of the wearer's eye can provide a very accurate measurement of image quality. The ability to measure and automatically adjust the focus of the displayed image, where the focus distance of the displayed image can vary in response to changes in the environment or in the way the HWC is being used, is very useful in augmented reality imaging.
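A minimal sketch of this sharpness-based focus adjustment follows, assuming the glint patch has been cropped from the captured eye image and that the HWC optics expose a focus actuator (the set_focus callable is hypothetical). A Laplacian-variance metric stands in here for whatever sharpness measure an implementation would actually use.

# Score the glint patch (a small reflected copy of the displayed image)
# with a Laplacian-variance focus metric and pick the focus position that
# maximizes it. `capture_glint_patch` and `set_focus` are hypothetical
# hooks into the HWC's eye camera and focus mechanism.
import numpy as np

def laplacian_variance(patch: np.ndarray) -> float:
    """Higher value = sharper image content in the glint patch."""
    lap = (-4 * patch[1:-1, 1:-1] + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def autofocus(capture_glint_patch, set_focus, positions):
    """Try each focus position and keep the sharpest; a hill-climbing
    search would replace this exhaustive sweep in practice."""
    scored = []
    for pos in positions:
        set_focus(pos)
        scored.append((laplacian_variance(capture_glint_patch()), pos))
    return max(scored)[1]

if __name__ == "__main__":
    sharp = np.zeros((8, 8)); sharp[4:, :] = 1.0     # hard edge
    blurry = np.linspace(0, 1, 64).reshape(8, 8)     # smooth ramp
    print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True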
One aspect of the invention relates to controlling the HWC 102 through interpretation of eye imagery. In embodiments, eye imaging techniques (such as those described herein) are used to capture an eye image or a series of eye images for processing. The image(s) may be processed to determine an action intended by the user, a reaction intended by the HWC, or another action. For example, the imagery may be interpreted as an affirmative user control action for an application on HWC 102. Alternatively, the imagery may, for example, cause HWC 102 to react in a predetermined manner such that HWC 102 always operates safely, intuitively, and so forth.
Fig. 37 illustrates an eye imagery process that involves imaging the eye(s) of the HWC 102 wearer and processing the images (e.g., via eye imaging techniques described herein) to determine the position of the eye relative to a neutral or forward-looking position and/or what position 3702 within the FOV 3708 the wearer is looking toward. The process may involve a calibration step in which the user is instructed, through directions provided in the FOV of HWC 102, to look in certain directions, to enable a more accurate prediction of the eye position relative to areas of the FOV. If the wearer's eye is determined to be looking toward the right side of the FOV 3708 (as illustrated in fig. 37, where the eye is depicted looking out of the page), a virtual target line may be established to project what the wearer may be looking toward or at in the environment. The virtual target line may be used in conjunction with images captured by a camera on HWC 102 that images the environment in front of the wearer. In embodiments, the field of view of the camera capturing the surrounding environment is, or can be, matched (e.g., digitally) to the FOV 3708 to make the comparison more straightforward. For example, where the camera captures an image of the surroundings at an angle that matches the FOV 3708, the virtual target line can be processed (e.g., in 2D or 3D, depending on the camera's image capabilities and/or the processing of the images) to project which objects in the environment are aligned with the virtual target line. If multiple objects exist along the virtual target line, a focal plane may be established corresponding to each of the objects, so that digital content may be placed in an area within the FOV 3708 that is aligned with the virtual target line and falls at the focal plane of the object of interest. The user can then see the digital content when focusing on the object in the environment that is at that focal plane. In embodiments, the objects in line with the virtual target line may also be established by comparison with mapping information for the surroundings.
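The virtual-target-line projection can be illustrated as follows, assuming the wearer's gaze direction has been converted to a unit vector and that mapped object positions are available; the object coordinates and the angular tolerance are illustrative assumptions.

# Sketch of the virtual-target-line idea: cast a ray from the eye through
# the FOV position given by eye imaging, intersect it with known mapped
# objects, and return the objects (and focal planes) along the line.
import math

def objects_on_target_line(eye_pos, gaze_dir, objects, tol_deg=2.0):
    """gaze_dir is a unit 3-vector; objects maps name -> (x, y, z).
    Returns (name, distance) pairs whose bearing lies within tol_deg of
    the gaze direction, nearest first (each distance is a candidate focal
    plane for placing digital content)."""
    hits = []
    for name, (x, y, z) in objects.items():
        v = (x - eye_pos[0], y - eye_pos[1], z - eye_pos[2])
        dist = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
        cosang = sum(a * b for a, b in zip(v, gaze_dir)) / dist
        if math.degrees(math.acos(max(-1.0, min(1.0, cosang)))) <= tol_deg:
            hits.append((name, dist))
    return sorted(hits, key=lambda h: h[1])

objs = {"sign": (0.5, 0.0, 10.0), "building": (1.2, 0.0, 40.0)}
print(objects_on_target_line((0, 0, 0), (0.02, 0.0, 0.9998), objs))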
In embodiments, digital content that is in line with the virtual target line may not be displayed in the FOV until the eye position is in the proper position. This may be a predetermined process. For example, the system may be set up such that a particular piece of digital content (e.g., an advertisement, instructional information, object information, etc.) will appear in the event that the wearer looks at a certain object(s) in the environment. A virtual target line(s) may be developed that virtually connects the wearer's eye with the object(s) in the environment (e.g., a building, a portion of a building, a marker on a building, a GPS location, etc.), and the virtual target line may be continually updated depending on the position and viewing direction of the wearer (e.g., as determined through GPS, electronic compass, IMU, etc.) and the position of the object. The digital content may be displayed in the FOV 3704 when the eye position indicates that the wearer's pupil is substantially aligned with, or about to be aligned with, the virtual target line.
In embodiments, the time spent looking along the virtual target line and/or at a particular portion of the FOV 3708 may indicate that the wearer is interested in an object in the environment and/or in the digital content being displayed. If the wearer spends a predetermined period of time looking in a direction where digital content is not displayed, digital content may be presented in that area of the FOV 3708. The time spent looking at an object may, for example, be interpreted as a command to display information about the object. In other situations, the content may not relate to an object at all and may be presented because of an indication that the person is relatively inactive. In embodiments, the digital content may be positioned close to, but not in line with, the virtual target line, so that the wearer's view of the surroundings is not obstructed but the information can augment the wearer's view of the surroundings. In embodiments, the time spent looking along a target line in the direction of displayed digital content may be an indication of interest in that digital content. This may be used as a conversion event in advertising. For example, if the wearer of HWC 102 looks at a displayed advertisement for a certain period of time, the advertiser may pay more for the placement. As such, in embodiments, the time spent viewing the advertisement, as assessed by comparing the eye position with the content placement, target line, or other suitable position, may be used to determine a conversion rate or other compensation amount due for the presentation.
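A dwell-time check of this kind might be sketched as follows; the 5-degree window and 1.5-second threshold are assumptions for illustration, not values from this disclosure.

# Hedged sketch of dwell-time detection: accumulate the time the gaze
# stays within an angular window around displayed content and flag an
# interest/conversion event once a threshold is passed.
class DwellDetector:
    def __init__(self, window_deg=5.0, threshold_s=1.5):
        self.window_deg = window_deg
        self.threshold_s = threshold_s
        self.accum = 0.0

    def update(self, gaze_offset_deg, dt_s):
        """gaze_offset_deg: angular distance between gaze and content."""
        if gaze_offset_deg <= self.window_deg:
            self.accum += dt_s
        else:
            self.accum = 0.0          # gaze left the content; reset
        return self.accum >= self.threshold_s   # True -> conversion event

det = DwellDetector()
for _ in range(60):                    # 60 frames at ~30 fps on content
    fired = det.update(2.0, 1 / 30)
print(fired)                           # -> True after ~1.5 s of dwell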
One aspect of the present invention relates to removing content from the FOV of HWC 102 when the wearer of HWC 102 apparently wants a clear view of the surrounding environment. Fig. 38 illustrates a situation where eye imagery indicates that the eye has moved or is moving quickly, so the digital content 3804 in the FOV 3808 is removed from the FOV 3808. In this example, the wearer may glance quickly to the side, indicating that something in the environment on that side has attracted the wearer's attention. The eye movement 3802 may be captured through eye imaging techniques (e.g., as described herein), and if the movement matches a predetermined movement (e.g., speed, rate, pattern, etc.), the content may be removed from view. In embodiments, eye movement is used as one input, and HWC movement indicated by other sensors (e.g., an IMU in the HWC) may be used as another indication. These various sensed movements may be used together to predict events that should cause a change in the content being displayed in the FOV.
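A minimal sketch of such a combined test follows; the thresholds are illustrative assumptions, not values from this disclosure.

# Combine eye-movement speed from eye imaging with head-movement rate
# from the IMU and hide content when either exceeds a threshold.
def should_hide_content(eye_speed_deg_s, head_rate_deg_s,
                        eye_limit=120.0, head_limit=90.0):
    """Returns True when a rapid glance or head turn suggests the wearer
    needs an unobstructed view of the surroundings."""
    return eye_speed_deg_s > eye_limit or head_rate_deg_s > head_limit

print(should_hide_content(eye_speed_deg_s=200.0, head_rate_deg_s=20.0))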
Another aspect of the invention relates to determining a focal plane based on the convergence of the wearer's eyes. The eyes generally converge slightly, and converge more when a person focuses on something very close; this is generally referred to as convergence. In embodiments, the convergence is calibrated for the wearer. That is, the wearer may be guided through a series of focal plane exercises to determine how much the wearer's eyes converge at various focal planes and at various viewing angles. The convergence information may then be stored in a database for later reference. In embodiments, a generic table may be used if there is no calibration step or if the person skips the calibration step. The two eyes may then be imaged periodically to determine the convergence, in an attempt to understand what focal plane the wearer is focused on. In embodiments, the eyes may be imaged to determine a virtual target line, and the convergence of the eyes may then be determined to establish the wearer's focal plane, and digital content may be displayed or altered based thereon.
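The geometry underlying convergence-based focal plane estimation can be sketched as follows: with interpupillary distance IPD and each eye rotated inward by an angle theta from parallel, the gaze lines cross at approximately d = (IPD/2)/tan(theta). A per-wearer calibration table, as described above, would refine this idealized model.

# Idealized convergence-to-distance conversion; a calibration table per
# wearer would replace this closed-form model in practice.
import math

def focal_distance_m(ipd_m: float, inward_angle_deg: float) -> float:
    if inward_angle_deg <= 0:
        return float("inf")            # eyes parallel -> focused far away
    return (ipd_m / 2) / math.tan(math.radians(inward_angle_deg))

for angle in (0.0, 0.5, 1.8, 9.0):
    print(angle, round(focal_distance_m(0.063, angle), 2))
# 0.0 -> inf, 0.5 -> ~3.61 m, 1.8 -> ~1.0 m, 9.0 -> ~0.2 m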
Fig. 39 illustrates a situation where digital content is moved 3902 within one or both of the FOVs 3908 and 3910 to align with the convergence of the eyes as determined by the pupil movement 3904. In embodiments, by moving the digital content to maintain this alignment, the overlapping nature of the content is maintained, so the content appears properly to the wearer. This can be important in situations where 3D content is displayed.
One aspect of the invention relates to controlling HWC 102 based on events detected through eye imaging. The wearer may, for example, control an application on HWC 102 by moving his or her eyes in a certain manner, by blinking, and so forth. Eye imaging (e.g., as described herein) may be used to monitor the wearer's eye(s), and application control commands may be initiated when a predetermined pattern is detected.
One aspect of the present invention relates to monitoring the health of a person wearing HWC 102 by monitoring the eye(s) of the wearer. Calibration may be performed so that normal performance of the wearer's eyes under various conditions (e.g., lighting conditions, image light conditions, etc.) may be recorded. The wearer's eyes may then be monitored for changes in their performance by eye imaging (e.g., as described herein). Changes in performance may be indicative of health concerns (e.g., concussion, brain injury, stroke, blood loss, etc.). If data indicative of a change or event is detected, it may be communicated from HWC 102.
One aspect of the present invention relates to security and access to computer assets (e.g., the HWC itself and related computer systems) as determined through eye image verification. As discussed elsewhere herein, eye images may be compared to known eye images of a person to confirm the person's identity. Eye imagery may also be used to confirm the identity of the person wearing HWC 102 before allowing the person to connect with others or to share files, streams, messages, etc.
Various use cases for eye imaging are possible based on the technologies described herein. One aspect of the invention relates to the timing of eye image capture. The timing of capturing eye images, and the frequency with which multiple images of the eye are captured, can vary depending on the use to which the information gathered from the eye images is put. For example, capturing an eye image to identify the user of the HWC, in order to control the security of the HWC and the information displayed to the user, may only be required when the HWC has been turned on or when the HWC determines that the HWC has been put onto a wearer's head. The position of the earhorns of the HWC (or another portion of the HWC), stress, movement patterns, or orientation can be used to determine that a person has put the HWC onto their head with the intent of using it. Those same parameters may be monitored in an effort to know when the HWC is removed from the user's head. This enables a situation where the capture of an eye image for identifying the wearer may only be done when a change in the wearing conditions is identified. By comparison, capturing eye images to monitor the health of the wearer may require that images be captured periodically (e.g., every few seconds, minutes, hours, days, etc.). For example, the eye images may be captured at minute intervals when the images are being used to monitor the health of the wearer while detected movements indicate that the wearer is exercising. In a further contrasting example, capturing eye images to monitor the health of the wearer for long-term effects may only require monthly captures. Embodiments of the invention relate to selecting a rate and timing of eye image capture that corresponds to the selected use scenario associated with the eye images. These selections may be done automatically, as with the exercise example above where movement indicates exercising, or they may be set manually. In a further embodiment, the selection of the rate and timing of eye image capture is adjusted automatically depending on the mode of operation of the HWC. The selection of the rate and timing of eye image capture can further be made in accordance with input characteristics associated with the wearer, including age and health condition, or sensed physical conditions of the wearer, including heart rate, chemical composition of the blood, and blink rate.
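The use-dependent capture cadence might be expressed as a simple policy table, as in the sketch below; the intervals are assumptions chosen to mirror the examples in the text, not specified values.

# Illustrative mapping from usage scenario to eye-image capture cadence,
# following the examples above (identification on don/doff events,
# frequent capture during exercise, infrequent capture for long-term
# health tracking). Intervals are assumptions, not specified values.
CAPTURE_POLICY_S = {
    "identify_wearer": None,       # event-driven: capture on don/doff only
    "health_exercise": 60.0,       # about once per minute while exercising
    "health_routine": 3600.0,      # hourly background check
    "health_longterm": 30 * 24 * 3600.0,   # roughly monthly
}

def next_capture_delay(mode: str, don_doff_event: bool) -> float | None:
    """Return seconds until the next capture, 0.0 to capture now,
    or None when no capture is needed."""
    if mode == "identify_wearer":
        return 0.0 if don_doff_event else None
    return CAPTURE_POLICY_S[mode]

print(next_capture_delay("identify_wearer", don_doff_event=True))  # 0.0
print(next_capture_delay("health_exercise", don_doff_event=False)) # 60.0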
Fig. 40 illustrates an embodiment in which digital content presented in a see-through FOV is positioned based on the speed at which the wearer is moving. When the person is not moving, as measured by sensor(s) in the HWC 102 (e.g., IMU, GPS-based tracking, etc.), the digital content may be presented at the stationary person content position 4004. The content position 4004 is indicated as being in the middle of the see-through FOV; this is intended to be illustrative, however, of the fact that when the wearer is known to not be moving, the digital content is generally positioned wherever is most desirable within the see-through FOV, even though the wearer's see-through view of the surroundings may be somewhat obstructed as a result. The stationary person content position, or neutral position, therefore need not be centered in the see-through FOV; it may be positioned anywhere in the see-through FOV deemed desirable, and sensor feedback may shift the digital content from that neutral position. The movement of digital content for a fast moving person is also illustrated in fig. 40, where the digital content moves out of the see-through FOV to content position 4008 as the person turns their head to the side, and moves back in as the person turns their head back. For a slow moving person, the head movements may be more complex, and the movement of the digital content out of the see-through FOV may follow a path such as that shown by content position 4010.
In embodiments, the sensor that assesses the wearer's movements may be a GPS sensor, an IMU, an accelerometer, or the like. The content position may shift from the neutral position toward a side edge of the field of view as forward motion increases. The content position may shift from the neutral position toward the top or bottom edge of the field of view as forward motion increases. The content position may shift once the assessed speed of motion exceeds a threshold. The content position may shift linearly with the speed of the forward motion, or non-linearly with the speed of the forward motion. The content position may shift out of the field of view entirely. In embodiments, if the speed of movement exceeds a predetermined threshold, the content is no longer displayed, and it is displayed again once the forward motion slows.
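These shifting rules can be sketched as a single mapping from assessed speed to a content offset; all speeds and offsets below are illustrative assumptions.

# Below a start threshold the content stays at its neutral position;
# above it the offset grows (linearly here, but any monotonic curve
# works) until the content is parked at the edge; past a hide threshold
# the content is removed and redisplayed when the speed drops.
def content_offset(speed_m_s, start=0.5, edge=3.0, hide=6.0,
                   max_offset=1.0):
    """Returns a normalized horizontal offset toward the FOV edge
    (0 = neutral, 1 = at the edge) or None when content is hidden."""
    if speed_m_s >= hide:
        return None                       # redisplay when speed drops
    if speed_m_s <= start:
        return 0.0
    return max_offset * min(1.0, (speed_m_s - start) / (edge - start))

for v in (0.0, 1.0, 3.0, 7.0):
    print(v, content_offset(v))   # 0.0, 0.2, 1.0, None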
In embodiments, the change in content position is generally referred to as shifting; it should be understood that the term shifting encompasses processes where the movement from one position to another, within the see-through FOV or outside of it, is visible to the wearer (e.g., the content appears to move gradually or quickly and the wearer perceives the movement itself), as well as processes where the movement is not visible to the wearer (e.g., the content appears to jump discontinuously, or the content disappears and then reappears at the new position).
Another aspect of the invention relates to removing content from the field of view, or shifting it to a position within the field of view that increases the wearer's view of the surroundings, when a sensor causes an alert command to be issued. In embodiments, the alert may result from a sensor, or a combination of sensors, sensing a condition above a threshold. For example, if an audio sensor detects a loud sound of a certain pitch, the content in the field of view may be removed or shifted to provide the wearer with a clear view of the surrounding environment. In addition to shifting the content, in embodiments, an indication of why the content was shifted may be presented in the field of view or provided to the wearer through audio feedback. For example, if a carbon monoxide sensor detects a high concentration in the area, the content in the field of view may be shifted to the side of the field of view, or removed from it, and an indication may be provided to the wearer that a high concentration of carbon monoxide is present in the area. This new information, when presented in the field of view, may itself be shifted within or outside of the field of view depending on the speed of movement of the wearer.
Fig. 41 illustrates how content may be shifted from the neutral position 4104 to the alert position 4108. In this embodiment, the content is shifted outside the see-through FOV 4102. In other embodiments, the content may be shifted as described herein.
Another aspect of the invention relates to identifying various vectors or orientations (headings) related to HWC 102, along with sensor inputs, to determine how to position content in the field of view. In embodiments, the speed of the wearer's movement is detected and used as an input for the positioning of the content, and, depending on the speed, the content may be positioned with respect to a movement vector or orientation (i.e., the direction of movement) or with respect to a field-of-view vector or orientation (i.e., the direction in which the wearer is looking). For example, if the wearer is moving very quickly, the content may be positioned within the field of view with respect to the movement vector, because the wearer will only glance to the side periodically and briefly. As another example, if the wearer is moving slowly, the content may be positioned with respect to the field-of-view orientation, because the user may shift his or her view from side to side more freely.
FIG. 42 illustrates two examples of how a movement vector may affect content positioning. Movement vector A 4202 is shorter than movement vector B 4210, indicating that the forward speed and/or acceleration of the person associated with movement vector A 4202 is lower than that of the person associated with movement vector B 4210. Each person is also indicated as having a field-of-view vector or orientation, 4208 and 4212 respectively. The field-of-view vectors A 4208 and B 4212 are the same from a relative perspective. The white area within the black triangle in front of each person indicates how much time each person may spend looking in a direction that is not in line with the movement vector. The time spent looking off angle A 4204 is indicated as being greater than the time spent looking off angle B 4214. This may be because the speed associated with movement vector A is lower than that associated with movement vector B: generally, the faster a person moves forward, the more the person tends to look in the forward direction. FOVs A 4218 and B 4222 illustrate how content may be aligned in accordance with the movement vectors 4202 and 4210 and the field-of-view vectors 4208 and 4212. FOV A 4218 is illustrated as presenting content 4220 in line with the field-of-view vector. This may be caused by the lower speed of movement vector A 4202; it may also be due to the prediction that a greater amount of time 4204 will be spent looking off angle A. FOV B 4222 is illustrated as presenting content 4224 in line with the movement vector. This may be due to the higher speed of movement vector B 4210; it may also be due to the prediction that a shorter amount of time 4214 will be spent looking off angle B.
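A minimal sketch of the anchor selection suggested by fig. 42 follows; the 2 m/sec crossover is an assumed threshold, and a gradual blend could replace the hard switch.

# Slow movement anchors content to the field-of-view heading (the wearer
# looks around freely); fast movement anchors it to the movement heading
# (the wearer mostly looks where they are going).
def content_heading(view_heading_deg, move_heading_deg, speed_m_s,
                    crossover_m_s=2.0):
    if speed_m_s < crossover_m_s:
        return view_heading_deg       # FOV A case: follow where user looks
    return move_heading_deg           # FOV B case: lock to direction of travel

print(content_heading(30.0, 0.0, speed_m_s=0.5))  # 30.0 (view-locked)
print(content_heading(30.0, 0.0, speed_m_s=5.0))  # 0.0  (movement-locked)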
Another aspect of the invention relates to damping the rate of change of the content's position within the field of view. As illustrated in fig. 43, the field-of-view vector may undergo rapid changes 4304. A rapid change may be an isolated event, or it may occur at or near the time of other field-of-view vector changes; the wearer's head may, for example, be turning back and forth for some reason. In embodiments, rapid successive changes in the field-of-view vector may cause a reduced rate of change 4308 in the position of the content within the FOV 4302. For example, content may be positioned with respect to the field-of-view vector, as described herein, and a rapid change in the field-of-view vector would ordinarily cause a rapid change in the position of the content; however, because the field-of-view vector is changing continuously, the rate at which the position changes with respect to the field-of-view vector may be slowed, or stopped. The rate of position change may be modified based on the rate of change of the field-of-view vector, an average of the field-of-view vector changes, or otherwise.
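One way to realize this damping is a low-pass filter whose strength depends on the recent rate of heading change, as in the following sketch; the smoothing constants are illustrative assumptions.

# The content position tracks the field-of-view heading through a
# low-pass filter: rapid heading changes get a small alpha, so
# back-and-forth head turns barely move the content, while a sustained
# turn eventually re-centers it.
class DampedPosition:
    def __init__(self, alpha_slow=0.02, alpha_fast=0.3, rate_limit=60.0):
        self.pos = 0.0
        self.prev_heading = 0.0
        self.alpha_slow, self.alpha_fast = alpha_slow, alpha_fast
        self.rate_limit = rate_limit     # deg/s treated as "rapid"

    def update(self, heading_deg, dt_s):
        rate = abs(heading_deg - self.prev_heading) / dt_s
        self.prev_heading = heading_deg
        alpha = self.alpha_slow if rate > self.rate_limit else self.alpha_fast
        self.pos += alpha * (heading_deg - self.pos)
        return self.pos

damper = DampedPosition()
for h in (0, 20, -20, 20, -20):                # rapid back-and-forth
    print(round(damper.update(h, dt_s=0.1), 2))  # position barely moves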
Another aspect of the invention relates to simultaneously presenting more than one piece of content in the field of view of the see-through optical system of HWC 102, with one piece of content positioned according to the field-of-view orientation and another positioned according to the movement orientation. Fig. 44 illustrates two FOVs, A 4414 and B 4420, which correspond to the two identified field-of-view vectors A 4402 and B 4404, respectively. Fig. 44 also illustrates an object in the environment 4408 at a position relative to the field-of-view vectors A 4402 and B 4404. When the person is looking along field-of-view vector A 4402, the environmental object 4408 can be seen through field of view A 4414 at position 4412. As illustrated, the field-of-view oriented content is presented as "TEXT" proximate to the environmental object 4412. Meanwhile, other content 4418 is presented in field of view A 4414 at a position aligned with the movement vector. As the speed of movement increases, the content 4418 may be shifted as described herein. When the person's field-of-view vector is field-of-view vector B 4404, the environmental object 4408 is not seen in field of view B 4420. As a result, the field-of-view aligned content 4410 is not presented in field of view B 4420; however, the movement-aligned content 4418 is still presented, and its position still depends on the speed of the motion.
FIG. 45 shows an example data set for a person moving through an environment along a path that starts with a movement orientation of 0 degrees and ends with a movement orientation of 114 degrees, during which the speed of movement varies from 0 m/sec to 20 m/sec. It can be seen that, while the person is moving, the field-of-view orientation varies to either side of the movement orientation as the person looks from side to side. Large changes in the field-of-view orientation occur when the movement speed is 0 m/sec and the person is standing still, followed by a step change in the movement orientation.
Embodiments provide a process for determining a display orientation that takes into account the manner in which the user is moving through the environment, and that makes it easy for the user to find the displayed information while also providing an unobstructed see-through view of the environment in response to different movements, speeds of movement, or types of information being displayed.
Fig. 46 illustrates a see-through view as may be seen when using an HWC, with information overlaid onto the see-through view of the environment. The tree and the building are actually present in the environment, and the text is presented on the see-through display so that it appears overlaid onto the environment. In addition to textual information (such as, for example, instructions and weather information), some augmented reality information relating to nearby objects in the environment is shown.
In embodiments, the display orientation is determined based on the speed of movement. At low speeds, the display orientation may be substantially the same as the field-of-view orientation, while at high speeds the display orientation may be substantially the same as the movement orientation. In embodiments, as long as the user remains stationary, the displayed information is presented directly in front of the user and the HWC. However, as the speed of movement increases (e.g., above a threshold, or continuously, etc.), the display orientation becomes substantially the same as the movement orientation regardless of the direction in which the user is looking, so that the displayed information is directly in front of the user and the HMD when the user looks in the direction of movement, and is not visible when the user looks to the side.
A rapid change in the field-of-view orientation may be followed by a slow change in the display orientation, to provide a damped response to head rotation. Alternatively, the display orientation can be substantially a time-averaged field-of-view orientation, so that the displayed information is presented at an orientation that is in the middle of a series of field-of-view orientations over a period of time. In this embodiment, if the user stops moving their head, the display orientation gradually becomes the same as the field-of-view orientation, and the displayed information moves into the display field of view in front of the user and the HMD. In embodiments, when there is a high rate of change of the field-of-view orientation, the process delays the effect of the time-averaged field-of-view orientation on the display orientation. In this way, the effect of rapid head movements on the display orientation is reduced, and the position of the displayed information within the display field of view is laterally stabilized.
In another embodiment, the display orientation is determined based on the speed of movement, where at high speeds the display orientation is substantially the same as the movement orientation. At medium speeds, the display orientation is substantially the same as the time-averaged field-of-view orientation, so that rapid head rotations are gradually damped out and the display orientation sits in the middle of the back-and-forth head movement.
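The speed-dependent selection among field-of-view orientation, time-averaged field-of-view orientation, and movement orientation might be sketched as follows; the window length and speed thresholds are assumptions, and angle wrap-around handling is omitted for brevity.

# Display heading selection: slow -> follow gaze, medium -> running time
# average of the field-of-view heading (damps back-and-forth rotation),
# fast -> lock to the direction of travel.
from collections import deque

class DisplayHeading:
    def __init__(self, window=30):              # e.g., ~1 s at 30 Hz
        self.history = deque(maxlen=window)

    def update(self, view_heading_deg, move_heading_deg, speed_m_s):
        self.history.append(view_heading_deg)
        if speed_m_s > 4.0:                      # fast: lock to travel
            return move_heading_deg
        if speed_m_s > 1.0:                      # medium: time average
            return sum(self.history) / len(self.history)
        return view_heading_deg                  # slow/still: follow gaze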
In yet another embodiment, the type of information being displayed is taken into account in determining how the information should be displayed. Augmented reality information that is connected to objects in the environment is given a display orientation that substantially matches the field-of-view orientation. In this way, as the user turns their head, the augmented reality information comes into view in association with the objects present in the see-through view of the environment. Meanwhile, information that is not connected to objects in the environment is given a display orientation determined based on the type of movement and the speed of movement, as previously described in this specification.
In yet another embodiment, when the movement speed is determined to be above a threshold, the displayed information is moved downward in the display field of view, so that the upper portion of the display field of view has less or no information displayed, to provide the user with an unobstructed see-through view of the environment.
Figs. 47 and 48 show illustrations of see-through views that include overlaid displayed information. Fig. 47 shows the see-through view immediately following a rapid change in the field-of-view orientation from that associated with the see-through view shown in fig. 46, where the change in field-of-view orientation comes from a head rotation. In this case, the change in the display orientation is delayed. Fig. 48 shows the see-through view at a later time, after the display orientation has caught up to the field-of-view orientation. The augmented reality information is maintained at positions within the display field of view where the associations with objects in the environment can easily be made by the user.
Fig. 49 shows an illustration of an example see-through view that includes overlaid displayed information that has been shifted downward in the display field of view to provide an unobstructed see-through view in the upper portion. At the same time, the augmented reality tags have been maintained at positions within the display field of view so that they can easily be associated with objects in the environment.
In a further embodiment, in an operating mode such as when the user is moving through the environment, the digital content is presented at the side of the user's see-through FOV, so that the user can only view the digital content by turning their head. In this case, when the user is looking straight ahead, such as when the movement orientation matches the field-of-view orientation, the see-through FOV does not include digital content. The user then accesses the digital content by turning their head to the side, whereupon the digital content moves laterally into the user's see-through FOV. In another embodiment, the digital content is ready for presentation, and is presented when an indication for its presentation is received. For example, information may be ready for presentation, and the content may then be presented when a predetermined position or field-of-view orientation of HWC 102 is reached. The wearer can look to the side, and the content is then presented. In another embodiment, the user may move the content into an area of the field of view by looking in one direction for a predetermined period of time, by blinking, or by performing some other pattern that can be captured through eye imaging techniques (e.g., as described elsewhere herein).
In yet another embodiment, an operating mode is provided in which the user can define field-of-view orientations for which the associated see-through FOV does or does not include digital content. In an example, this mode of operation can be used in an office environment where digital content is provided within the FOV when the user looks at a wall, and the FOV is kept unobstructed by digital content when the user looks toward the hallway. In another example, digital content is provided within the FOV when the user looks horizontally, but is removed from the FOV when the user looks down (e.g., to view a desktop or a cell phone).
Another aspect of the invention relates to collecting and using eye position and field-of-view orientation information. Head-worn computing with movement orientation, field-of-view orientation, and/or eye position prediction (sometimes referred to herein as "eye orientation") may be used to identify what appears to be of significant interest to the wearer of HWC 102, and this information may be captured and used. In embodiments, the information may be characterized as viewing information, because it relates to what the wearer is apparently viewing. The viewing information may be used to develop a personal profile for the wearer, which may indicate what the wearer tends to view. Viewing information from several or many HWCs 102 may be captured, so that group or crowd viewing tendencies can be established. For example, if the movement orientation and field-of-view orientation are known, a prediction of what the wearer is looking at may be made and used in generating a portion of a crowd profile or a personal profile. In another embodiment, if eye orientation and position, field-of-view orientation, and/or movement orientation are known, what is being viewed can be predicted. The prediction may involve knowing what is near the wearer, which may be learned by establishing the wearer's position (e.g., through GPS or another positioning technology) and establishing what mapped objects are known in the area. The prediction may involve interpreting images captured by a camera or other sensors associated with HWC 102. For example, if the camera captures an image of a sign and the camera is aligned with the field-of-view orientation, the prediction may involve assessing the likelihood that the wearer is viewing the sign. The prediction may involve capturing an image or other sensory information and then performing object discrimination analysis to determine what is being viewed. For example, the wearer may be walking down a street, a camera in HWC 102 may capture an image, and a processor, either onboard HWC 102 or remote from it, may recognize a face, an object, a sign, an image, etc., and it may be determined that the wearer was likely looking at it or toward it.
Fig. 50 illustrates a cross section of the eyeball of a wearer of an HWC, with focal points that can be associated with the eye imaging system of the invention. The eyeball 5010 includes an iris 5012 and a retina 5014. Because the eye imaging system of the invention provides on-axis eye imaging through the display system, images of the eye can be captured from a perspective directly in front of the eye and in line with where the wearer is looking. In embodiments of the invention, the eye imaging system can focus on the wearer's iris 5012 and/or retina 5014 to capture images of the outer surface of the iris 5012 or of the inner portions of the eye, which include the retina 5014. Fig. 50 shows light rays 5020 and 5025 associated with capturing images of the iris 5012 or the retina 5014, respectively, where the optics associated with the eye imaging system are focused on the iris 5012 or the retina 5014, respectively. Illumination can also be provided in the eye imaging system to illuminate the iris 5012 or the retina 5014. Fig. 51 shows an illustration of an eye including an iris 5130 and a sclera 5125. In embodiments, the eye imaging system can be used to capture images that include the iris 5130 and a portion of the sclera 5125. The images can then be analyzed to determine the colors, shapes, and patterns associated with the user. In a further embodiment, the focus of the eye imaging system is adjusted to enable capture of images of the iris 5012 or the retina 5014. The illumination can also be adjusted to illuminate the iris 5012, or to pass through the pupil of the eye to illuminate the retina 5014. The illumination can be visible light to enable capture of the colors of the retina 5014 or the iris 5012, or it can be ultraviolet (e.g., 340 nm), near infrared (e.g., 850 nm), or mid-wave infrared (e.g., 5000 nm) light to enable capture of hyperspectral characteristics of the eye.
Fig. 53 illustrates a display system that includes an eye imaging system. The display system includes a polarized light source 2958, a DLP 2955, a quarter wave film 2957, and a beam splitting plate 5345. The eye imaging system includes a camera 3280, illumination light 5355, and the beam splitting plate 5345. The beam splitting plate 5345 can be a reflective polarizer on the side facing the polarized light source 2958 and a hot mirror on the side facing the camera 3280, where the hot mirror reflects infrared light (e.g., wavelengths of 700 to 2000 nm) and transmits visible light (e.g., wavelengths of 400 to 670 nm). The beam splitting plate 5345 can be composed of multiple laminated films, a coated substrate film, or a rigid transparent substrate with films on either side. Because of the reflective polarizer on the one side, light from the polarized light source 2958 is reflected toward the DLP 2955, where it passes once through the quarter wave film 2957, is reflected by the DLP mirrors in accordance with the image content being displayed by the DLP 2955, and then passes back through the quarter wave film 2957. In doing so, the polarization state of the light from the polarized light source is changed, so that it is transmitted by the reflective polarizer on the beam splitting plate 5345, and the image light 2971 passes into the lower optics module 204, where the image is displayed to the user. At the same time, infrared light 5357 from the illumination light 5355 is reflected by the hot mirror, so that the infrared light passes into the lower optics module 204, where it illuminates the user's eye. A portion of the infrared light is reflected by the user's eye as light 2969, which passes back through the lower optics module 204, is reflected by the hot mirror on the beam splitting plate 5345, and is captured by the camera 3280. In this embodiment, the image light 2971 is polarized, while the infrared light 5357 and 2969 may be unpolarized. In an embodiment, the illumination light 5355 provides two different infrared wavelengths, and the eye images are captured in pairs, where the paired eye images are analyzed together to improve the accuracy of identifying the user based on iris analysis.
FIG. 54 shows an illustration of a further embodiment of a display system with an eye imaging system. In addition to the features of fig. 53, this system includes a second camera 5460, which is provided to capture eye images at visible wavelengths. Illumination of the eye can be provided by the displayed image or by see-through light from the environment. When an image of the eye is to be captured, portions of the displayed image can be modified to provide improved illumination of the user's eye, such as by increasing the brightness of the displayed image or by increasing the white areas within the displayed image. Further, the modified display image can be presented briefly for the purpose of capturing an eye image, and the display of the modified image can be synchronized with the capture of the eye image. As shown in fig. 54, the visible light 5467 captured by the second camera 5460 is polarized, because it passes through the beam splitter 5445 and the beam splitter 5445 is a reflective polarizer on the side facing the second camera 5460. In such an eye imaging system, a visible-light eye image can be captured by the second camera 5460 at the same time as an infrared eye image is captured by the camera 3280. The characteristics of the camera 3280 and the second camera 5460, and of the respective captured images, can differ in resolution and capture rate.
Figs. 52a and 52b illustrate captured eye images where the eye is illuminated with structured light patterns. In fig. 52a, the eye 5220 is shown with a projected structured light pattern 5230, where the pattern is a grid of lines. A light pattern such as 5230 can be provided by the light source 5355 shown in fig. 53 by modifying the light 5357 with a diffractive or refractive device, as known to those skilled in the art. A visible light source can also be included for the second camera 5460 shown in fig. 54, and that light 5467 can likewise be modified by diffraction or refraction to provide a light pattern. Fig. 52b illustrates how the structured light pattern 5230 is distorted 5235 when the user's eye 5225 looks to the side. This distortion comes from the fact that the human eye is not spherical in shape; rather, the iris protrudes slightly from the eyeball, forming a bulge in the region of the iris. As a result, when images of the eye are captured from a fixed position, the shape of the eye, and the associated shape of the reflected structured light pattern, differ depending on the direction in which the eye is pointed. The changes in the structured light pattern can then be analyzed in the captured eye images to determine the direction in which the eye is looking.
The eye imaging system can also be used to assess aspects of the user's health. In this case, the information obtained by analyzing captured images of the iris 5012 is different from the information obtained by analyzing captured images of the retina 5014. The light 5357 that illuminates the inner portions of the eye, including the retina 5014, is used to capture images of the retina 5014. The light 5357 can be visible light, but in an embodiment, the light 5357 is infrared light (e.g., wavelengths of 1 to 5 microns) and the camera 3280 is an infrared light sensor (e.g., an InGaAs sensor) or a low resolution infrared image sensor used to determine the relative amount of the light 5357 that is absorbed, reflected, or scattered by the inner portions of the eye. Most of the absorption, reflection, and scattering can be attributed to the materials in the inner portions of the eye, including the retina, where there are densely packed blood vessels with thin walls, so that the absorption, reflection, and scattering are caused by the material composition of the blood. These measurements can be made automatically while the user is wearing the HWC, either at regular intervals, following an identified event, or when prompted by an external communication. In a preferred embodiment, the illuminating light is near infrared or mid infrared (e.g., wavelengths of 0.7 to 5 microns) to reduce the chance of thermal damage to the wearer's eye. In another embodiment, the absorptive polarizer 3285 is anti-reflection coated to reduce reflections of the light 5357, the light 2969, or the light 3275 from that surface, and thereby increase the sensitivity of the camera 3280. In a further embodiment, the light source 5355 and the camera 3280 together constitute a spectrometer, where the relative intensity of the light reflected by the eye is analyzed over a series of narrow wavelengths within the wavelength range provided by the light source 5355, to determine a characteristic spectrum of the light absorbed, reflected, or scattered by the eye. For example, the light source 5355 can provide a broad range of infrared light to illuminate the eye, and the camera 3280 can include a grating that laterally disperses the light reflected from the eye into a series of narrow wavelength bands captured by a linear photodetector, so that the relative intensities by wavelength can be measured and a characteristic absorption spectrum for the eye can be determined over the broad infrared range. In a further example, the light source 5355 can provide a series of narrow-wavelength lights (ultraviolet, visible, or infrared) to sequentially illuminate the eye, and the camera 3280 includes a photodetector selected to measure the relative intensities of the series of narrow wavelengths in a series of sequential measurements that together can be used to determine a characteristic spectrum of the eye. The determined characteristic spectrum is then compared to known characteristic spectra for different materials to determine the material composition of the eye. In yet another embodiment, the illuminating light 5357 is focused onto the retina 5014, a characteristic spectrum of the retina 5014 is determined, and the spectrum is compared to known spectra for materials that may be present in the user's blood. For example, at visible wavelengths, 540 nm is useful for detecting hemoglobin and 660 nm is useful for distinguishing oxyhemoglobin. In a further example, in the infrared, a wide variety of materials can be identified, including glucose, urea, alcohol, and controlled substances, as known to those skilled in the art.
Fig. 55 shows a series of example spectra for several controlled substances, as measured using a form of infrared spectroscopy (Thermo Scientific Application Note 51242, by C. Petty, B. Garland, and the Mesa Police Department Forensic Laboratory, which is hereby incorporated by reference herein). FIG. 56 shows an infrared absorption spectrum for glucose (Hewlett Packard Company, 1999; G. Hopkins, G. Mauze, "In-vivo NIR Diffuse-reflectance Tissue Spectroscopy of Human Subjects", which is hereby incorporated by reference herein). U.S. patent 6675030, which is hereby incorporated by reference herein, provides a near infrared blood glucose monitoring system that includes infrared scanning of a body part, such as a foot. U.S. patent publication 2006/0183986, which is hereby incorporated by reference herein, provides a blood glucose monitoring system that includes optical measurements of the retina. Embodiments of the present invention provide a method for automatically measuring particular materials in the user's blood by illuminating into the iris of the wearer's eye at one or more narrow wavelengths, measuring the relative intensities of the light reflected by the eye to identify a relative absorption spectrum, and comparing the measured absorption spectrum to known absorption spectra for the particular materials, such as illuminating at 540 and 660 nm to determine the level of hemoglobin present in the user's blood.
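The comparison of a measured spectrum against known reference spectra might be sketched as follows; the reference values are placeholders for illustration, not measured data.

# Match a measured eye-reflectance spectrum to reference spectra, in the
# spirit of the hemoglobin example above (540 nm for total hemoglobin,
# 660 nm to distinguish oxyhemoglobin). Reference values are hypothetical.
import numpy as np

WAVELENGTHS_NM = np.array([540.0, 660.0])
REFERENCE_ABSORPTION = {
    "oxyhemoglobin":   np.array([0.90, 0.10]),
    "deoxyhemoglobin": np.array([0.85, 0.45]),
}

def best_match(measured: np.ndarray) -> str:
    """Compare the normalized measured spectrum with each reference and
    return the closest by Euclidean distance."""
    m = measured / np.linalg.norm(measured)
    scores = {name: float(np.linalg.norm(m - ref / np.linalg.norm(ref)))
              for name, ref in REFERENCE_ABSORPTION.items()}
    return min(scores, key=scores.get)

print(best_match(np.array([0.88, 0.15])))   # -> "oxyhemoglobin"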
In a further embodiment, a method is provided for identifying changes in the focus distance associated with the user's eyes by measuring changes in the magnitude of the eye glint. The ability to identify changes in the focus distance of the user's eyes can be useful for determining what the user is looking at in the surrounding environment when using an HMD that provides a see-through view of that environment. Identifying changes in the focus distance of the user's eyes can also be useful for automatic display mode selection (e.g., selecting whether the displayed image should be bright or dim), by determining whether the user is looking at the displayed image content (where the focus distance of the user's eyes matches the focus distance of the displayed image) or at the surroundings (where the focus distance of the user's eyes differs from the focus distance of the displayed image).
The lens of the human eye changes its spherical radius to provide a change in accommodation, or focus distance (see S. Plainis, W. N. Charman, I. Pallikaris, "The Physiologic Mechanism of Accommodation," Cataract & Refractive Surgery Today Europe, April 2014, pp. 23-29). FIG. 56e shows a chart of the measured anterior and posterior sphere radii of a 29 year old eye over an accommodation range of 0 to 6 diopters, as presented in the paper by Plainis. This change in accommodation corresponds to a change in focus distance from infinity to approximately 0.16 meters, based on the diopter change in the focal length of the lens of the eye, where the relationship between the focal length and the diopter value of the lens is given in Equation 1 below.
FL = 1/δ (Equation 1)
Where FL is the focal length of the lens in meters and δ is the diopter value of the lens. The anterior surface of the ocular lens is covered by the cornea and iris, so the surface of the ocular lens is not exposed to the environment; however, a change in the spherical radius of the ocular lens is accompanied by a change in the spherical radius of the outer surface of the cornea. These changes in the spherical radius of the cornea affect the size of the reflected eye glints that can be seen on the surface of the eye. As the spherical radius of the ocular lens decreases to provide a higher level of accommodation, the outer spherical radius of the cornea also decreases, which reduces the magnitude of the reflected eye glints. Similarly, as the spherical radius of the ocular lens increases to provide a lower level of accommodation, the outer spherical radius of the cornea increases, which increases the magnitude of the reflected eye glints. Thus, a measure of the magnitude of eye glints may be used to identify changes in the focus distance of the user's eyes, where the measure of the magnitude of the eye glints may be obtained from an image of the user's eye captured by the eye camera.
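As a minimal numerical illustration of Equation 1 (the function name focus_distance_m is ours, not the patent's):

```python
def focus_distance_m(diopters):
    """Equation 1: FL = 1/delta, focal length in meters for a lens with the
    given diopter value (infinite focus distance at zero accommodation)."""
    return float("inf") if diopters == 0 else 1.0 / diopters

# The 0-to-6 diopter accommodation range cited above spans focus
# distances from infinity down to roughly 0.16 m:
for d in (0, 1, 3, 6):
    print(d, "D ->", "inf" if d == 0 else round(focus_distance_m(d), 3), "m")
```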
To provide a reliable detection of the size of eye glints, an illumination source of constant size should be provided, which is then reflected by the outer surface of the cornea in the form of eye glints. Figs. 56a and 56b show example eye glints 5612 and 5614 produced by reflection of an illumination source, such as an LED, that illuminates the user's eye for eye imaging purposes such as eye tracking or iris identification. In this case, the LED is a circular illumination source, so the eye glints are also circular. Fig. 56a shows a larger diameter eye glint 5612 within the image of the eye 5610, which corresponds to a greater focus distance than the focus distance indicated by the smaller diameter eye glint 5614 shown in Fig. 56b. The LED illumination source may provide visible wavelength light or infrared wavelength light to produce the eye glint and provide illumination for eye imaging, so long as an eye camera in the HMD is capable of capturing images of the eye that include the wavelengths provided by the illumination source (e.g., as described in the eye imaging and eye illumination systems disclosed herein). Figs. 56c and 56d show another example of eye glints 5616 and 5618 where the illumination source is the displayed image, such that the eye glints 5616 and 5618 are rectangular in shape. Fig. 56c shows an eye glint 5616 that is larger than the eye glint 5618 shown in Fig. 56d, indicating that the focus distance of the eye 5610 in Fig. 56c is greater than the focus distance of the eye 5610 shown in Fig. 56d. In an embodiment, the light emitted to reflect from the eye may be in a known pattern, so that changes in the pattern may be assessed to detect focus changes.
Measurement of the magnitude of eye glints from a known illumination source is readily achieved with an eye camera in an HMD (e.g., as disclosed herein). The resolution with which the size of the eye glints can be measured is determined by the angular size of the pixels in the eye image, i.e., the eye camera's field of view divided by the number of horizontal pixels in the eye image. Thus, it is advantageous to have more pixels in a smaller eye camera field of view. This method of identifying a change in the focus distance of a user's eye may be used with any type of HMD optics that includes an eye camera for imaging the user's eye, including, for example, refractive, reflective, holographic, beam splitter, waveguide, grating, or multi-reflector optics.
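A sketch of how the glint magnitude might be measured from a captured eye image, under the simplifying assumption of a single bright, roughly circular glint extracted by thresholding; the threshold, field of view, and image size below are illustrative placeholders, not values from the patent:

```python
import numpy as np

def glint_diameter_px(eye_image, threshold=240):
    """Estimate the diameter, in pixels, of the glint in a grayscale eye
    image by thresholding bright pixels and treating them as one round blob."""
    ys, xs = np.nonzero(eye_image >= threshold)
    if xs.size == 0:
        return 0.0
    return 2.0 * np.sqrt(xs.size / np.pi)  # d = 2*sqrt(area/pi)

def pixel_angular_size_deg(camera_fov_deg=30.0, image_width_px=640):
    """Angular resolution per pixel: camera field of view divided by the
    horizontal pixel count, as discussed above."""
    return camera_fov_deg / image_width_px

# A decreasing glint diameter across frames suggests the cornea has
# steepened, i.e., the user's focus distance has decreased.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[200:210, 300:310] = 255  # synthetic 10x10 glint
print(glint_diameter_px(frame) * pixel_angular_size_deg(), "degrees")
```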
In further embodiments, an illumination source that provides reflected eye glints may provide structured light having a pattern. The size or spacing of the pattern may then be used to identify a change in focus distance.
In another embodiment, the identified change in the focus distance of the user's eyes is used to identify a convergence distance for the displayed image. Providing a convergence distance and focus distance for the displayed image that match the focus distance of the user's eyes may be used for displayed images meant to be viewed intently, such as movies. Conversely, a convergence distance and focus distance for the displayed image that differ from the focus distance of the user's eyes may be used for displayed images that are not meant to be viewed intently, such as a battery indicator, directional information, assembly instructions, or augmented reality objects.
In embodiments, the glint size and placement measurements may be used in conjunction with other techniques (such as convergence measurements) to determine the focus distance of the user.
Fig. 57 illustrates a scenario in which a person walks with an HWC 102 mounted on his head. In this scenario, the person's geospatial location 5704 is known from a GPS sensor (or another positioning system), and his movement orientation, field of view orientation 5714, and eye orientation 5702 are known and can be recorded (e.g., by the systems described herein). Objects and people are present in the scene. Person 5712 may be recognized by the wearer's HWC 102 system; the person may be mapped (e.g., the GPS location of the person may be known or recognized) or otherwise known. The person may be wearing a discernible garment or device. For example, the garment may have a certain pattern, and the HWC may discern the pattern and record that it is being looked at. The scene also includes a mapped object 5718 and a discerned object 5720. As the wearer moves through the scene, the field of view orientation and/or eye orientation may be recorded and communicated from the HWC 102. In an embodiment, the time during which the field of view orientation and/or eye orientation is held in a particular position may be recorded. For example, if a person appears to view an object or person for a predetermined period of time (e.g., 2 seconds or more), the information may be transmitted as gaze duration information, as an indication that the person may have been interested in the object.
In embodiments, the field of view orientation may be used in combination with the eye orientation, or the eye orientation and/or the field of view orientation may be used separately. The field of view orientation is a good predictor of the direction in which the wearer is looking, because much of the time the eyes look forward, in the same general direction as the field of view orientation. In other scenarios, eye orientation may be the more desirable metric because the eye and field of view orientations are not always aligned. In examples herein, the term "eye/field of view" orientation indicates that either or both of eye orientation and field of view orientation may be used in that example.
Fig. 58 illustrates a system for receiving, developing, and using movement orientation, field of view orientation, eye orientation, and/or gaze duration information from HWC(s) 102. A server 5804 may receive the orientation and/or gaze duration information (referred to as duration information 5802) for processing and/or use. The orientation and/or gaze duration information may be used to generate a personal profile 5808 and/or a group profile 5810. The personal profile 5808 may reflect the general viewing tendencies and interests of the wearer. The group profile 5810 may be a collection of orientation and duration information from different wearers, used to create an impression of general group viewing tendencies and interests. The group profile 5810 may be divided into different groups based on other information (such as gender, likes, dislikes, biometric information, etc.) to enable certain groups to be distinguished from other groups. This may be useful in advertising, as advertisers may be interested in what male adult athletes, as opposed to young females, generally watch. The profiles 5808 and 5810, as well as the raw orientation and duration information, may be used by retailers 5814, advertisers 5818, trainers, and the like. For example, an advertiser may have an advertisement placed in the environment and may be interested in knowing how many people viewed the advertisement, how long they viewed it, and where they went after viewing it. This information may be used as conversion information to evaluate the value of the advertisement, and thus the payment to be received for the advertisement.
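A possible shape for the server-side aggregation, sketched in Python; the class and method names are hypothetical, and the patent does not specify a data model:

```python
from collections import defaultdict

class ViewingProfiles:
    """Accumulate gaze duration records into personal and group tallies."""

    def __init__(self):
        self.personal = defaultdict(lambda: defaultdict(float))
        self.group = defaultdict(float)

    def record(self, wearer_id, object_id, seconds):
        self.personal[wearer_id][object_id] += seconds
        self.group[object_id] += seconds

    def top_interests(self, wearer_id, n=3):
        views = self.personal[wearer_id]
        return sorted(views, key=views.get, reverse=True)[:n]

profiles = ViewingProfiles()
profiles.record("wearer-1", "advertisement-A", 2.4)  # gaze held over 2 s
profiles.record("wearer-2", "advertisement-A", 3.1)
print(profiles.top_interests("wearer-1"), profiles.group["advertisement-A"])
```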
In an embodiment, the process involves collecting eye and/or field of view orientation information from a plurality of head-worn computers that come into proximity with an object in the environment. For example, many people may be walking through an area, each wearing a head-worn computer with the ability to track the position of the wearer's eyes and possibly the wearer's field of view and movement orientations. The various HWC-wearing individuals may then walk, ride, or otherwise come into proximity with some object in the environment (e.g., a store, sign, person, vehicle, box, bag, etc.). As each person passes or otherwise approaches the object, the eye imaging system may determine whether the person is looking toward the object. All of the eye/field of view orientation information may be collected and used to form an impression of how the crowd reacted to the object. For example, the store may be having a sale, and so the store may put out a sign indicating as much. Store owners and managers may be very interested in knowing whether anyone is looking at their sign. The sign may be set as an object of interest in the area, and as people navigate near the sign (possibly determined by their GPS locations), the eye/field of view orientation determination system may record information relative to the environment and the sign. Once, or while, the eye/field of view orientation information is collected and associations between the eye orientations and the sign are determined, feedback may be sent to the store owner, manager, advertiser, etc. as an indication of how appealing the sign is. In an embodiment, the effectiveness of the sign in attracting attention, as indicated by the eye/field of view orientations, may be treated as a conversion metric and affect the economic value of one or more sign placements.
In an embodiment, a map of an environment containing an object (e.g., a sign) may be generated by mapping the positions and movement paths of people in a crowd as they navigate near the object. Indications of the various eye/field of view orientations may be layered on the map. This may be useful in understanding where wearers were relative to the object when they viewed it. The map may also indicate how long people viewed the object from the various locations in the environment and where they went after seeing it.
In an embodiment, the process involves collecting a plurality of eye/field of view orientations from head-worn computers, wherein each of the plurality of eye/field of view orientations is associated with a different predetermined object in the environment. This technique can be used to determine which of the different objects draws more attention from people. For example, if three objects are placed in the environment and a person enters and navigates through it, he may view one or more of the objects, and his eye/field of view orientation may dwell longer on some objects than on others. This may be used to develop or refine the person's personal attention profile, and/or it may be used in conjunction with data from other such persons for the same or similar objects to form an impression of how a group or crowd reacts to the objects. Testing advertisements in this manner may provide good feedback on the effectiveness of an advertisement.
In an embodiment, the process may involve capturing the eye/field of view orientation once there is substantial alignment between the eye/field of view orientation and an object of interest. For example, a person with an HWC may be navigating through the environment, and once the HWC detects substantial alignment, or an imminent substantial alignment, between the eye/field of view orientation and the object of interest, the occurrence and/or its duration may be recorded for use.
In embodiments, the process may involve collecting eye/field of view orientation information from a head-worn computer, along with an image captured by the head-worn computer at substantially the same time the eye/field of view orientation information was captured. These two pieces of information may be used in combination to gain an understanding of what the wearer was looking at and may have been interested in. The process may further involve associating the eye/field of view orientation information with an object, person, or other thing found in the captured image. This may involve processing the captured image to find objects or patterns. In an embodiment, gaze time or duration may be measured and used in conjunction with the image processing. The process may still involve object and/or pattern recognition, but it may also involve identifying which portion of the image the person looked at over a period of time, by more specifically relating the orientation information to regions in the image.
In an embodiment, the process may involve setting predetermined eye/field of view orientations for predetermined geospatial locations and using them as triggers. When a head-worn computer enters such a geospatial location and the eye/field of view orientation associated with the head-worn computer aligns with the predetermined eye/field of view orientation, the system may record the fact that there was significant alignment and/or may record how long the eye/field of view orientation remained substantially aligned with the predetermined eye/field of view orientation to form duration statistics. This may eliminate or reduce the need for image processing, since the triggers can be used without having to image the area. In other embodiments, image capture and processing is performed in conjunction with the triggers. In an embodiment, the trigger may be a series of geospatial locations with corresponding eye/field of view orientations, such that many locations may serve as triggers indicating when a person enters an area in proximity to an object of interest and/or when the person actually appears to be viewing the object.
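One plausible way to implement such a trigger, assuming a flat-earth bearing approximation over short ranges; the function names, tolerances, and the trigger tuple format are our own illustrative choices:

```python
def heading_difference_deg(a, b):
    """Smallest absolute difference between two compass headings."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def trigger_fired(wearer_lat, wearer_lon, wearer_heading_deg, trigger,
                  pos_tol_deg=1e-4, heading_tol_deg=10.0):
    """A trigger is a (lat, lon, expected_heading) tuple: it fires when the
    wearer is near the predetermined geospatial location (about 11 m per
    1e-4 degree of latitude) and the eye/FOV orientation is substantially
    aligned with the predetermined orientation."""
    t_lat, t_lon, t_heading = trigger
    near = (abs(wearer_lat - t_lat) < pos_tol_deg
            and abs(wearer_lon - t_lon) < pos_tol_deg)
    return near and heading_difference_deg(wearer_heading_deg,
                                           t_heading) <= heading_tol_deg

sign_trigger = (40.74190, -73.98970, 255.0)  # looking WSW toward a sign
print(trigger_fired(40.74185, -73.98965, 250.0, sign_trigger))  # True
```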
In embodiments, eye imaging may be used to capture images of both of the wearer's eyes to determine the amount of convergence of the eyes (e.g., by techniques described elsewhere herein) in order to understand what focal plane the wearer is focused on. For example, if the convergence measurement suggests that the focal plane is within 15 feet of the wearer, it may be determined that the wearer is not viewing an object more than 15 feet away, even though the eye/field of view orientation is aligned with it. If the object is within the implied 15 foot focal plane, it may be determined that the wearer is viewing the object.

FIG. 59 illustrates environmental position-locked digital content 5912 indicating the position 5902 of a person. In this disclosure, the term "blue force" is generally used to indicate a member or team member whose geospatial location is known and can be used. In an embodiment, "blue force" is a term used to indicate members of a tactical armed team (e.g., a police force, special force, security force, military force, national security force, intelligence force, etc.). In many embodiments herein, one member may be referred to as the primary blue force member or the first blue force member, and in many described embodiments it is this member that is wearing the HWC. It should be understood that this terminology is used to aid the reader and to facilitate a clear presentation of the various scenarios; other members of the blue force or others may also have HWCs 102 with similar capabilities. In this embodiment, the first person is wearing a head-worn computer 102 having a see-through field of view ("FOV") 5914. The first person can view the surrounding environment through the FOV, and digital content can also be presented in the FOV, so that the first person sees the actual surroundings in a digitally enhanced view through the FOV. The location of another blue force member is known and indicated at a point 5902 within a building. This position is known in three dimensions (longitude, latitude, and altitude), which may have been determined via GPS along with an altimeter associated with the other blue force member. Similarly, the location of the first person wearing the HWC 102 is also known, indicated as point 5908 in FIG. 59. In this embodiment, the compass heading 5910 of the first person is also known, from which the direction in which the first person is looking at the surroundings can be estimated. A virtual target line between the first person's position 5908 and the other person's position 5902 can be established in three-dimensional space and used by the HWC 102 to position content in the FOV 5914. The three-dimensionally oriented virtual target line can then be used to present environmental position-locked digital content in the FOV 5914 indicating the position 5902 of the other person. The environmental position-locked digital content 5912 can be positioned within the FOV 5914 such that the first person wearing the HWC 102 perceives the content 5912 as locked in position within the environment, marking the position 5902 of the other person.
The three-dimensionally oriented virtual target line can be recalculated periodically (e.g., every millisecond, second, or minute) to reposition the environmental position-locked content 5912 so that it remains in line with the virtual target line. This creates the illusion that the content 5912 stays positioned within the environment at the point associated with the other person's position 5902, independent of the position 5908 of the first person wearing the HWC 102 and independent of the compass heading of the HWC 102.
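A sketch of the virtual target line calculation and its use to place environment-locked content, assuming positions are expressed in a local east/north/up frame derived from GPS and altimeter readings; the function names and frame convention are illustrative assumptions:

```python
import math

def target_line(p_wearer, p_other):
    """Azimuth/elevation (degrees) of the 3D line from the wearer to the
    other person; both points are (east, north, up) in meters in a local
    frame derived from GPS plus altimeter readings."""
    de, dn, du = (b - a for a, b in zip(p_wearer, p_other))
    azimuth = math.degrees(math.atan2(de, dn)) % 360.0
    elevation = math.degrees(math.atan2(du, math.hypot(de, dn)))
    return azimuth, elevation

def fov_offset_deg(azimuth, elevation, compass_deg, pitch_deg):
    """Offset of the target line from the center of the see-through FOV;
    content drawn at this offset appears locked to the environment."""
    yaw = (azimuth - compass_deg + 180.0) % 360.0 - 180.0
    return yaw, elevation - pitch_deg

# Recalculated periodically as the wearer moves, so the content stays at
# the point associated with the other person's position:
az, el = target_line((0.0, 0.0, 1.8), (40.0, 95.0, 6.0))
print(fov_offset_deg(az, el, compass_deg=20.0, pitch_deg=0.0))
```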
In an embodiment, an object 5904 may lie between the first person's position 5908 and the other person's position 5902, such that the virtual target line intersects the object 5904 before intersecting the other person's position 5902. In an embodiment, the environment-locked digital content 5912 may be associated with the intersecting object 5904. In an embodiment, the intersecting object 5904 may be identified by comparing the positions 5902 and 5908 of the two people with obstructions identified on a map. In an embodiment, the intersecting object 5904 may be identified by processing images captured from a camera or other sensor associated with the HWC 102. In an embodiment, the digital content 5912 may have an appearance indicating that the other person's position 5902 is behind the intersecting object 5904, to provide a clearer indication of the other person's position 5902 in the FOV 5914.
Fig. 60 illustrates how and where digital content may be positioned within the FOV 6008 based on the virtual target line between the position 5908 of the first person wearing the HWC 102 and the position 5902 of the other person. In addition to positioning the content within the FOV 6008 in line with the virtual target line, the digital content may be presented such that it comes into focus for the first person when the first person focuses at a certain plane or distance in the environment. Presented object A 6018 is digitally generated content presented as an image at content position A 6012. The position 6012 is based on the virtual target line. The presented object A 6018 is presented not only along the virtual target line but also at focal plane B 6014, such that when the first person's eye 6002 focuses on something in the surrounding environment at the distance of focal plane B 6014, the content at position A 6012 in the FOV 6008 comes into focus for the first person. Setting the focal plane of the presented content means the content is not in focus until the eye 6002 focuses at the set focal plane. In an embodiment, this allows the content at position A to be presented yet remain out of focus when the compass of the HWC indicates that the first person is looking in the direction of the other person 5902; the content only comes into focus when the first person focuses in the direction of the other person 5902 and at the focal plane of the other person 5902.
Presented object B 6020 is aligned with a different virtual target line than presented object A 6018. Presented object B 6020 is also presented at content position B 6004, at a different focal plane than content position A 6012. Presented content B 6020 is presented at a farther focal plane, indicating that the associated other person 5902 is physically at a greater distance. If the focal planes are sufficiently different, the content at position A will come into focus at a different time than the content at position B, since the two focal planes require different accommodation from the eye 6002.
Fig. 61 illustrates several blue force members at locations with various viewpoints from the first person's perspective. In embodiments, relative positions, distances, and obstructions may cause the digital content indicating the positions of the other people to be altered. For example, if another person is visible to the first person through the first person's FOV, the digital content may be locked at that person's location, and the digital content may be of a type that indicates that the person's location is being effectively marked and tracked. If another person is in relatively close proximity but cannot be seen by the first person, the digital content may be locked to an intersecting object or region, and the digital content may indicate that the person's actual location cannot be seen but the marker is tracking the person's general location. If another person is not within a predetermined proximity or is otherwise more significantly obscured from the first person's view, the digital content may generally indicate the direction and area in which that person is located, and may indicate that the person's location is not closely identified or tracked by the digital content, though the person is in the general area.
With continued reference to fig. 61, several blue force members are presented at various locations within the area where the first person is located. The primary blue force member 6102 (also referred to, for purposes of example, as the first person, or the person whose HWC FOV is being described) can directly see the blue force member in the open field 6104. In an embodiment, the digital content provided in the FOV of the primary blue force member may be based on a virtual target line and virtually locked at an environmental position indicating the open-field location 6104 of that blue force member. The digital content may also indicate that the open-field blue force member's location is marked and being tracked. The digital content may change form if the blue force member becomes obscured from the primary blue force member's vision or otherwise becomes unavailable for direct viewing.
Blue force member 6108 is obscured from the view of the primary blue force member 6102 by an obstruction that is in close proximity to the obscured member 6108. As depicted, the obscured member 6108 is in a building, but near one of its front walls. In this scenario, the digital content provided in the FOV of the primary member 6102 may indicate the general position of the obscured member 6108, and may indicate that, while the position of the other member is fairly well marked, it is obscured and therefore not as accurate as it would be if the member were in direct view. In addition, the digital content may be virtually position-locked to some feature on the outside of the building where the obscured member is located. This may make the environmental lock more stable and also provide an indication that the person's location is to some extent unknown.
Blue force member 6110 is obscured by multiple obstructions: the member 6110 is in a building, and there is another building 6112 between the primary member 6102 and the obscured member 6110. In such a scenario, the digital content in the FOV of the primary member may be spatially far from the actual position of the obscured member 6110, and as such the digital content may need to be presented in a manner that indicates that the obscured member 6110 is in a general direction but that the digital marker is not a reliable source for the specific location of the obscured member 6110.
Fig. 62 illustrates yet another method for positioning digital content within the FOV of an HWC, where the digital content is intended to indicate the position of another person. This embodiment is similar to the embodiments described above in connection with figs. 59 and 60. In this embodiment, the primary additional element is the additional step of verifying the distance between the first person 5908 (the person wearing the HWC whose FOV presents the position-indicating digital content) and the other person at position 5902. Here, a rangefinder may be included in the HWC to measure the distance along the angle represented by the virtual target line. In the case where the rangefinder finds an object obscuring the path of the virtual target line, the digital content presentation in the FOV may indicate this (e.g., as described elsewhere herein). In the event that the rangefinder confirms the presence of a person or object at the angle and prescribed distance defined by the virtual target line, the digital content may indicate that the appropriate location has been marked, as described elsewhere herein.
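A minimal sketch of the rangefinder verification step, assuming positions in a local metric frame; the tolerance and return labels are illustrative:

```python
import math

def verify_marking(wearer_pos, other_pos, measured_range_m, tol_m=2.0):
    """Compare a rangefinder reading taken along the virtual target line
    with the expected straight-line distance between the two positions
    (both given in a local metric frame)."""
    expected = math.dist(wearer_pos, other_pos)
    if measured_range_m < expected - tol_m:
        return "obscured"  # something blocks the line short of the person
    return "marked"        # presence confirmed at the prescribed distance

print(verify_marking((0, 0, 1.8), (30, 40, 1.8), measured_range_m=50.1))
```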
Another aspect of the invention relates to predicting blue force member movement to maintain proper virtual marking of blue force member locations. Fig. 63 illustrates a scenario in which the primary blue force member 6302 is tracking the locations of other blue force members through an augmented environment using an HWC 102, as described elsewhere herein (e.g., as described in connection with the above figures). The primary blue force member 6302 may have knowledge of a tactical movement plan 6308. The tactical movement plan may be maintained locally (e.g., on the HWC 102, with the plan shared among the blue force members) or remotely (e.g., on a server and communicated to the HWCs 102, or communicated to a subset of HWCs 102 for sharing among the HWCs 102). In this case, the tactical plan involves the blue force group moving generally in the direction of arrow 6308. The tactical plan may affect the presentation of digital content in the FOV of the primary blue force member's HWC 102. For example, the tactical plan may assist in predicting the locations of the other blue force members, and the virtual target lines may be adjusted accordingly. In an embodiment, areas in the tactical movement plan may be shaded, colored, or otherwise marked with digital content in the FOV so that the primary blue force member can manage his activities with respect to the plan. For example, he may perceive that one or more blue force members are moving along the tactical path 6308. He may also perceive movement in the tactical path that does not appear to be associated with a blue force member.
Fig. 63 also illustrates that an internal IMU sensor in the HWC worn by a blue force member may provide an indication 6304 of the member's movements. This is helpful in identifying when the GPS position, and thus the position of the virtual marker in the FOV, should be updated. It may also be helpful in assessing the validity of the GPS position. For example, if the GPS position is not updating but there is significant IMU sensor activity, the system may suspect the accuracy of the identified position. IMU information may also be useful for tracking a member's position when GPS information is unavailable. For example, if the GPS signal is lost, dead reckoning may be used, and the virtual marker in the FOV may both indicate the team member's reckoned movement and indicate that the position identification is not ideal. The current tactical plan 6308 may be periodically updated, and the updated plan may further refine what is presented in the FOV of the HWC 102.
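A sketch of the dead-reckoning fallback, assuming the IMU yields detected strides with a step length and compass heading; the class shape and the "degraded" flag are our own illustrative assumptions:

```python
import math

class DeadReckoner:
    """Propagate the last known GPS fix with IMU-derived stride data while
    the GPS signal is unavailable, flagging the estimate as degraded."""

    def __init__(self, last_fix_east_north):
        self.pos = list(last_fix_east_north)
        self.degraded = False

    def imu_stride(self, step_length_m, heading_deg):
        self.pos[0] += step_length_m * math.sin(math.radians(heading_deg))
        self.pos[1] += step_length_m * math.cos(math.radians(heading_deg))
        self.degraded = True  # FOV marker should show reduced confidence

    def gps_fix(self, east_north):
        self.pos = list(east_north)
        self.degraded = False

dr = DeadReckoner((100.0, 250.0))
dr.imu_stride(0.8, heading_deg=45.0)  # one detected stride heading NE
print(dr.pos, "degraded:", dr.degraded)
```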
FIG. 64 illustrates a blue force tracking system according to the principles of the present invention. In an embodiment, a blue force HWC 102 may have a directional antenna that transmits a relatively low power directional RF signal so that other blue force members within range of the relatively low power signal can receive it and evaluate its direction and/or distance based on the strength and changing strength of the signal. In an embodiment, tracking of such RF signals can be used to alter the presentation of the virtual markers of people's positions within the FOV of the HWC 102.
Another aspect of the invention relates to monitoring the health of blue force members. Each blue force member may be automatically monitored for health and stress events. For example, a member may have a watch band or other wearable biometric monitoring device, as described elsewhere herein, and the device may continuously monitor biometric information and predict health concerns or stress events. As another example, an eye imaging system described elsewhere herein may be used to monitor pupil dilation, as compared to normal conditions, to predict head trauma. Each eye may be imaged to check for differences in pupil dilation as an indication of head trauma. As another example, the IMU in the HWC 102 may monitor a person's walking gait for a change in pattern, which may indicate head or other trauma. Biometric feedback from a member indicating a health or stress concern may be uploaded to a server for sharing with other members, or the information may be shared with local members. Once shared, the digital content in the FOV indicating the location of the person having the health or stress event may include an indication of the health event.
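As an illustrative sketch of the pupil-comparison idea (the threshold values here are arbitrary placeholders, not clinical criteria):

```python
def pupil_asymmetry_alert(left_mm, right_mm, baseline_ratio=1.0, tol=0.15):
    """Flag a possible trauma indicator when the left/right pupil diameter
    ratio deviates from the wearer's own baseline; the tolerance here is
    an arbitrary placeholder, not a clinical threshold."""
    return abs(left_mm / right_mm - baseline_ratio) > tol

print(pupil_asymmetry_alert(4.8, 3.6))  # True -> consider sharing an alert
```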
Fig. 65 illustrates a scenario in which the primary blue force member 6502 is monitoring the location of a blue force member 6504 who has had a health event, with a health alert sent from the HWC 102. As described elsewhere herein, the FOV of the primary blue force member's HWC 102 may include an indication of the location of the blue force member 6504 with the health concern. The digital content in the FOV may also include an indication of the health condition in association with the location indication. In embodiments, non-biometric sensors (e.g., IMUs, cameras, rangefinders, accelerometers, altimeters, etc.) may be used to provide health and/or situational condition information to the blue force team or to other local or remote persons interested in the information. For example, if one of the blue force members is detected as having dropped quickly to the ground from a standing position, an alert may be sent indicating, for example, that the person fell, is in trouble and had to lie down, was hit, etc.
Another aspect of the invention relates to virtually marking various prior acts and events. For example, as depicted in fig. 66, the techniques described elsewhere herein may be used to construct a virtual prior movement path 6604 for a blue force member. The virtual path may be displayed as digital content in the FOV of the primary blue force member 6602 using methods described elsewhere herein. As the blue force member moved along the path 6604, he may have virtually placed an event marker 6608 so that when another member views the location, the marker can be displayed as digital content. For example, a blue force member may check and clear an area and then use an external user interface or gesture to indicate that the area has been cleared; that location is then virtually marked and shared with the blue force members. Later, when someone wants to know whether the location has been checked, he can view the information for the location. As indicated elsewhere herein, if the location is visible to a member, the digital content may be displayed in a manner that indicates the particular location; if the location is not visible from the person's perspective, the digital content may differ somewhat, in that it may not specifically mark the location.
Returning to optical configurations, another aspect of the present invention relates to an optical configuration that provides digitally displayed content to the eye of a person wearing a head-worn display (e.g., as used in an HWC 102) and that allows the person to see through the display, so that the digital content is perceived by the person as augmenting the see-through view of the surrounding environment. The optical configuration may have a variable transmission optical element in line with the person's see-through view, such that the transmission of the see-through view can be increased or decreased. This is helpful in scenarios where the person wants, or would be better served by, a highly transmissive see-through view, and in other scenarios, with the same HWC 102, a less transmissive see-through view. Lower see-through transmission may be used in bright conditions and/or in conditions where higher contrast for the digitally presented content is desirable. The optical system may also have a camera that images the surroundings by receiving light from the surroundings that is reflected off an optical element in line with the person's see-through view. In embodiments, the camera may further be aligned with a dark light trap such that light reflected and/or transmitted in the direction of the camera but not captured by it is trapped, to reduce stray light.
In an embodiment, an HWC 102 is provided that includes a camera coaxially aligned with the direction in which the user is looking. Fig. 67 shows a diagram of an optical system 6715 that includes an absorptive polarizer 6737 and a camera 6739. The image source 6710 can include a light source, a display and reflective surfaces, and one or more lenses 6720. Image light 6750 is provided by the image source 6710, wherein a portion of the image light 6750 is reflected by the partially reflective combiner 6735 toward the user's eye 6730. At the same time, a portion of the image light 6750 may be transmitted by the combiner 6735 so that it is incident on the absorptive polarizer 6737. In this embodiment, the image light 6750 is polarized light, with the polarization state of the image light 6750 oriented relative to the transmission axis of the absorptive polarizer 6737 such that the incident image light 6750 is absorbed by the absorptive polarizer 6737. In this manner, face glow produced by escaping image light 6750 is reduced. In an embodiment, the absorptive polarizer 6737 includes an antireflective coating to reduce reflections from the surface of the absorptive polarizer 6737.
Fig. 67 further shows the camera 6739 for capturing images of the environment in the direction the user is looking. The camera 6739 is positioned behind the absorptive polarizer 6737 and below the combiner 6735 such that a portion of the light 6770 from the environment is reflected by the combiner 6735 toward the camera 6739. The light 6770 from the environment can be unpolarized, so that a portion of the light 6770 reflected by the combiner 6735 passes through the absorptive polarizer 6737 and is captured by the camera 6739. As a result, the light captured by the camera has a polarization state opposite to that of the image light 6750. Further, the camera 6739 is aligned relative to the combiner 6735 such that the field of view associated with the camera 6739 is coaxial with the display field of view provided by the image light 6750. At the same time, a portion of the scene light 6760 from the environment is transmitted by the combiner 6735 to provide the user's eye 6730 with a see-through view of the environment. Since the display field of view associated with the image light 6750 typically coincides with the see-through field of view associated with the scene light 6760, the camera 6739 field of view and the see-through field of view are at least partially coaxial. Because the camera 6739 is attached to the lower portion of the optical system 6715, the field of view of the camera 6739, illustrated by the light 6770 from the environment, moves as the user moves their head, so that the images captured by the camera 6739 correspond to the area of the environment the user is viewing. By coaxially aligning the camera field of view with the displayed image and the user's view of the scene, an augmented reality image with improved alignment to objects in the scene can be provided, because the images captured by the camera 6739 provide an accurate representation of the user's perspective view of the scene. As an example, when the user sees an object in the scene in the middle of the see-through view of the HWC, the object will be in the middle of the image captured by the camera, and any augmented reality imagery to be associated with the object can be placed in the middle of the displayed image. As the user moves their head, the relative positions of the objects seen in the see-through view change, and the positions of the augmented reality imagery can be changed within the displayed image in a corresponding manner. When a camera 6739 is provided for each of the user's eyes, an accurate representation of the 3D view of the scene can also be provided. This is an important advantage provided by the present invention, because images captured by a camera located in the frame of the HWC (e.g., between the eyes or at a corner) are laterally offset from the user's perspective of the scene, and as a result it is difficult to align augmented reality imagery with objects in the scene as seen from the user's perspective.
In the optical system 6715 shown in fig. 67, the absorptive polarizer 6737 simultaneously functions as: a light trap for escaping image light 6750, a blocker of the image light 6750 for the camera 6739, and a window for the light 6770 from the environment to reach the camera 6739. This is possible because the polarization state of the image light 6750 is perpendicular to the transmission axis of the absorptive polarizer 6737, while the light 6770 from the environment is unpolarized, so that the portion of the light 6770 from the environment with the opposite polarization state to the image light is transmitted by the absorptive polarizer 6737. The combiner 6735 can be any partially reflective surface, including a simple partial mirror, a notch mirror, and a holographic mirror. The reflectivity of the combiner 6735 can be selected to be greater than 50% (e.g., 55% reflectivity and 45% transmission over the visible wavelength band), whereby a majority of the image light 6750 will be reflected toward the user's eye 6730 and a majority of the light 6770 from the environment will be reflected toward the camera 6739; such a system provides a brighter displayed image and a brighter captured image, with a darker see-through view of the environment. Alternatively, the reflectivity of the combiner 6735 can be selected to be less than 50% (e.g., 20% reflectivity and 80% transmission over the visible wavelength band), whereby a majority of the image light 6750 will be transmitted by the combiner 6735 and a majority of the light 6770 from the environment will be transmitted to the user's eye 6730; such a system provides a brighter see-through view of the environment, with a darker displayed image and a darker captured image. As such, the system can be designed to favor the user's intended use.
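The trade-off can be made concrete with simple arithmetic, neglecting absorption losses; this sketch is illustrative and the function name is ours:

```python
def combiner_budget(reflectivity):
    """Split the light paths for a partially reflective combiner with the
    given visible-band reflectivity, assuming transmission = 1 - R (i.e.,
    absorption neglected)."""
    r = reflectivity
    return {
        "image light to eye": r,            # reflected toward the eye
        "environment light to camera": r,   # reflected toward the camera
        "see-through brightness": 1.0 - r,  # transmitted scene light
    }

# 55/45 favors display and camera brightness; 20/80 favors see-through:
print(combiner_budget(0.55))
print(combiner_budget(0.20))
```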
In an embodiment, the combiner 6735 is planar, with sufficient optical flatness to enable a sharp displayed image and a sharp captured image, such as a flatness of less than 20 waves of light within the visible wavelengths. However, in embodiments, the combiner 6735 may be curved, in which case both the displayed image and the captured image will be distorted, and the distortions must be digitally corrected by the associated image processing system. In the case of the displayed image, the image is digitally distorted by the image processing system in the direction opposite to the distortion imparted by the curved combiner, so that the two distortions cancel each other and the user sees an undistorted displayed image. In the case of the captured image, the image is digitally distorted after capture to counteract the distortion imparted by the curved combiner, so that the image appears undistorted after image processing.
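A first-order sketch of the cancellation idea, using a simple radial distortion model as a stand-in for whatever distortion a particular curved combiner imparts; the model and coefficient are illustrative assumptions:

```python
import numpy as np

def combiner_distort(xy, k1):
    """Stand-in model of the curved combiner: simple first-order radial
    distortion of normalized image coordinates."""
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2)

def radial_predistort(xy, k1):
    """Digitally distort the image in the opposite direction so that, to
    first order, the combiner's distortion cancels it out."""
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
    return xy * (1.0 - k1 * r2)

pts = np.array([[0.5, 0.5]])
# Pre-distorted content, after reflecting off the curved combiner, lands
# approximately back at its intended position:
print(combiner_distort(radial_predistort(pts, 0.08), 0.08))  # ~[[0.5, 0.5]]
```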
In an embodiment, the combiner 6735 is an adjustable partial mirror in which the reflectivity can be changed by the user, or automatically, to function better within different environmental conditions or different use cases. The adjustable partial mirror can be an electrically controllable mirror, such as, for example, the e-TransFlector available from Kent Optronics (http://www.kentoptronics.com/mirror.html), where the reflectivity can be adjusted based on an applied voltage. The adjustable partial mirror can also be a fast switchable mirror (e.g., a switching time of less than 0.03 seconds), where the perceived transparency is derived from the duty cycle of the mirror switching rapidly between a reflective state and a transmissive state. In an embodiment, image captures by the camera 6739 can be synchronized to occur when the fast switchable mirror is in the reflective state, to provide an increased amount of light to the camera 6739 during image capture. As such, the adjustable partial mirror allows the transmission of the partial mirror to be changed in correspondence with environmental conditions: for example, the transmission can be low when the environment is bright and high when the environment is dark.
In a further embodiment, the combiner 6735 includes a hot mirror coating on the side facing the camera 6739, wherein visible wavelength light is substantially transmitted while a spectral wavelength band of infrared light is substantially reflected, and the camera 6739 captures images that include at least a portion of the infrared wavelength light. In these embodiments, the image light 6750 includes visible wavelength light, and the portion of the visible wavelength light transmitted by the combiner 6735 is then absorbed by the absorptive polarizer 6737. A portion of the scene light 6760 is composed of visible wavelength light, and it is also transmitted by the combiner 6735 to provide the user with a see-through view of the environment. The light 6770 from the environment is composed of visible wavelength light and infrared wavelength light. A portion of the visible wavelength light, along with substantially all of the infrared wavelength light within the spectral wavelength band associated with the hot mirror, is reflected by the combiner 6735 toward the camera 6739, passing through the absorptive polarizer 6737. In an embodiment, the camera 6739 is selected to include an image sensor that is sensitive to infrared wavelengths of light, and the absorptive polarizer 6737 is selected to substantially transmit infrared wavelengths of light of both polarization states (e.g., the ITOS XP44 polarizer, which transmits both polarization states of light with wavelengths above 750 nm: see http://www.itos.de/english/polarisoren/linear.php), such that an increased percentage of the infrared light is captured by the camera 6739. In these embodiments, the absorptive polarizer 6737 acts as a light trap for the escaping image light 6750, thereby blocking the visible-wavelength image light 6750 from the camera 6739, while at the same time acting as a window to the camera 6739 for the infrared wavelength light 6770 from the environment.
By coaxially aligning the camera field of view with the displayed image and the user's view of the scene, an augmented reality image with improved alignment to objects in the scene can be provided, because the captured image from the camera provides an accurate representation of the user's perspective view of the scene. In an embodiment, the camera coaxially aligned with the user's view captures an image of the scene; a processor then identifies an object in the captured image and identifies the object's position within the camera field of view, which can be compared to the corresponding position within the displayed field of view; the digital content is then displayed relative to the position of the object.
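A sketch of the coordinate mapping this enables, under the simplifying assumption that the camera and display fields of view fully overlap; the resolutions below are placeholders:

```python
def image_to_display(px, py, cam_w=1920, cam_h=1080, disp_w=800, disp_h=600):
    """With the camera field of view coaxial with (and here assumed equal
    to) the display field of view, an object's pixel position in the
    captured image maps to display coordinates by proportional scaling."""
    return px * disp_w / cam_w, py * disp_h / cam_h

# An object detected at the center of the captured image lands at the
# center of the displayed image, so the AR annotation overlays the object:
print(image_to_display(960, 540))  # -> (400.0, 300.0)
```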
Another aspect of the invention relates to an optical assembly that uses a reflective display, in which the reflective display is illuminated with a front light arranged to direct the illumination at an angle of about 90 degrees to the active reflective surface of the reflective display. In embodiments, the optical configuration is lightweight, compact, and produces high quality images in a head-worn see-through display.
Fig. 68 provides a cross-sectional illustration of a compact optical display assembly for an HWC 102, along with illustrative light rays showing how the light passes through the assembly, in accordance with the principles of the present invention. The display assembly consists of upper optics and lower optics. The upper optics include a reflective image source 6810, a quarter wave film 6815, a field lens 6820, a reflective polarizer 6830, and a polarized light source 6850. The upper optics convert illumination light 6837 into image light 6835. The lower optics include a beam splitter plate 6870 and a rotationally curved partial mirror 6860. The lower optics deliver the image light to the user wearing the HWC 102. The compact optical display assembly provides the user with image light 6835 conveying the displayed image and scene light 6865 providing a see-through view of the environment, so that the user sees the displayed image overlaid onto the view of the environment.
In the upper optics, linearly polarized light is provided by the polarized light source 6850. The polarized light source 6850 can include one or more lights, such as LEDs, QLEDs, laser diodes, fluorescent lights, etc. The polarized light source 6850 can also include a backlight assembly with light scattering surfaces or diffusers to spread the light uniformly across the output area of the polarized light source. Light control films or light control structures can also be included to control the distribution of the light (also known as the cone angle) provided by the polarized light source 6850. The light control films can include, for example, diffusers, elliptical diffusers, prism films, and lenticular lens arrays. The light control structures can include prism arrays, lenticular lenses, cylindrical lenses, Fresnel lenses, refractive lenses, diffractive lenses, or other structures that control the angular distribution of the illumination light 6837. The output surface of the polarized light source 6850 includes a polarizer film to ensure that the illumination light 6837 provided to the upper optics is linearly polarized.
The illumination light 6837 provided by the polarized light source 6850 is reflected by the reflective polarizer 6830. The reflective polarizer 6830 and the polarizer on the output surface of the polarized light source 6850 are oriented so that their respective transmission axes are perpendicular to each other. As a result, the majority of the illumination light 6837 provided by the polarized light source 6850 is reflected by the reflective polarizer 6830. Further, the reflective polarizer 6830 is angled so that the illumination light 6837 is reflected toward the reflective image source 6810, thereby illuminating the reflective image source 6810, as shown in fig. 68.
The illumination light 6837 passes through the field lens 6820 and is then incident on the reflective image source 6810 (otherwise referred to elsewhere herein as a reflective display), which reflects it. The reflective image source 6810 can include a liquid crystal on silicon (LCOS) display, a ferroelectric liquid crystal on silicon (FLCoS) display, a reflective liquid crystal display, a cholesteric liquid crystal display, a bistable nematic liquid crystal display, or other such reflective display. The display can be a monochrome reflective display used with sequential red/green/blue illumination light 6837, or a full color display used with white illumination light 6837. The reflective image source 6810 locally changes the polarization state of the illumination light 6837 in correspondence with the pixel-by-pixel image content displayed by the reflective image source 6810, thereby forming image light 6835. If the reflective image source 6810 is a normally white display, the areas of the image light 6835 corresponding to bright areas of the image content end up with a polarization state opposite to that of the illumination light, and the dark areas of the image light 6835 end up with the same polarization state as the illumination light 6837 (it should be noted that the invention can also be used with normally black displays, which provide the opposite effect on polarization in the image light). As such, the image light 6835 as initially reflected by the reflective image source 6810 has a mixed polarization state pixel by pixel. The image light 6835 then passes through the field lens 6820, which modifies the distribution of the image light 6835 while preserving the wavefront to match the requirements (such as, for example, magnification and focus) of the lower optics. As the image light 6835 passes through the reflective polarizer 6830, the bright areas of the image light 6835 that have a polarization state opposite to that of the illumination light 6837 are transmitted through the reflective polarizer 6830, and the dark areas of the image light 6835 that have the same polarization state as the illumination light 6837 are reflected back toward the polarized light source 6850; as a result, after passing through the reflective polarizer 6830 the image light 6835 is linearly polarized with a single polarization state across all the pixels of the image, but now with different intensities from pixel to pixel. Thus, the reflective polarizer 6830 acts first as a reflector of the illumination light 6837 and then, in a second pass, as an analyzer polarizer for the image light 6835.
As such, the optical axis of the illumination light 6837 coincides with the optical axis of the image light 6835 between the reflective polarizer 6830 and the reflective image source 6810. Both the illumination light 6837 and the image light 6835 pass through the field lens 6820, but in opposite directions, the field lens acting to expand the illumination light 6837 so it illuminates the entire active area of the reflective image source 6810, and also to expand the image light 6835 so it fills the eyebox 6882 after passing through the compact optical display system. By overlapping the portion of the compact optical display assembly associated with the illumination light 6837 and the portion associated with the image light 6835, the overall size of the compact optical display assembly is reduced. Given that the focal length associated with the field lens 6820 requires a certain space within the compact optical display assembly, the reflective polarizer 6830 and the polarized light source 6850 are located in space that would otherwise be unused, making the overall size of the display assembly more compact.
The reflective polarizer 6830 can be a relatively thin film (e.g., 80 microns) or sheet (e.g., 0.2 mm), as shown in fig. 68. The reflective polarizer 6830 can be a wire grid polarizer, such as that available under the name WGF from Asahi Kasei, or a multilayer dielectric film polarizer, such as that available under the name DBEF from 3M. As previously described, the reflective polarizer 6830 serves two functions. First, it reflects the illumination light 6837 provided by the polarized light source 6850 and redirects it toward the reflective image source 6810. Second, it acts as an analyzer polarizer for the image light 6835, converting the image light 6835 of mixed polarization states above the reflective polarizer 6830 into linearly polarized light with a single polarization state below the reflective polarizer 6830. While the illumination light 6837 is incident on only a relatively small portion of the reflective polarizer 6830, the image light 6835 is incident on the majority of the area of the reflective polarizer 6830. Accordingly, the reflective polarizer 6830 extends at least across the entire area of the field lens 6820, and may extend across the entire area between the field lens 6820 and the beam splitter 6870, as shown in fig. 68. Further, the reflective polarizer 6830 is angled, at least in the portion where the illumination light 6837 is incident, to redirect the illumination light 6837 toward the reflective image source 6810. However, in a preferred embodiment, because a reflective polarizer such as a wire grid polarizer can be relatively insensitive to the angle of incidence, the reflective polarizer 6830 is a flat surface angled to redirect the illumination light 6837 toward the reflective image source 6810, where the flat surface extends as one continuous flat surface across substantially the entire area between the field lens 6820 and the beam splitter 6870 to facilitate manufacturing. The film or sheet of the reflective polarizer 6830 can be held at the edges to position it at the desired angle and to keep the surface flat.
The systems and methods described herein with respect to figs. 68-71 have a number of advantages. By avoiding glancing angles for the illumination light 6837 and the image light 6835 at all the surfaces in the compact optical display assembly, scattering of light within the assembly is reduced and, as a result, the contrast of the image presented to the user's eye 6880 is higher, with darker blacks. Further, the reflective image source 6810 can include a compensating retarder film 6815, as is known to those skilled in the art, so that the reflective image source 6810 can provide a higher contrast image with more uniform contrast over the area of the displayed image. Further, by providing an optical display assembly whose interior is essentially air, the weight of the compact optical display assembly is substantially reduced. By using coincident optical axes for the illumination light 6837 and the image light 6835 and overlapping the illumination light 6837 and the image light 6835 over a large portion of the optical display assembly, the overall size of the compact optical display assembly is reduced, the coincident optical axes being provided by passing the illumination light 6837 and the image light 6835 through the field lens 6820 in opposite directions. To maintain a uniform polarization state in the illumination light 6837, the field lens 6820 is made from a low birefringence material, such as glass or a plastic such as OKP4 available from Osaka Gas Chemicals. By positioning the polarized light source 6850 and the associated illumination light 6837 below the field lens 6820, and by folding the optical path of the illumination light 6837 at the reflective polarizer 6830 and of the image light 6835 at the beam splitter 6870, the overall height of the compact optical display assembly is greatly reduced. For example, the overall height of the compact optical display assembly, measured from the reflective image source 6810 to the bottom edge of the rotationally curved partial mirror 6860, can be less than 24 mm for a display that provides a 30 degree diagonal field of view with a 6 x 10 mm eyebox.
Preferably, the light control structure in the polarized light source 6850 includes a positive lens, such as, for example, a positive fresnel lens, a positive diffractive lens, or a positive refractive lens. Of these, a positive fresnel lens or a positive diffractive lens is preferred because they can be very thin. The illumination light 6837 is thus focused to form a smaller area or pupil at the reflective polarizer 6830, which has a direct relationship to the area of the eyebox 6882 at the other end of the optics where the image light 6835 is provided to the user's eye 6880, as shown in fig. 68. The positive lens concentrates the illuminating light 6837 from the polarized light source 6850, both in intensity and angular distribution, to match the etendue of the optical system and thereby fill the eyebox with image light 6835. By using the positive lens to concentrate the light from the polarized light source 6850 provided to the reflective polarizer 6830, and then using the field lens 6820 to expand the illuminating light 6837 to illuminate the active area of the reflective image source 6810, efficiency is improved because the illuminating light 6837 is delivered substantially only where it is needed to form the image light 6835. Further, the illumination light 6837 outside the pupil can be controlled by the positive lens and clipped by the masking edge of the positive lens. By focusing the illumination light 6837 and clipping the light outside the pupil, the illumination light 6837 is prevented from striking adjacent surfaces at glancing angles in the compact optical display assembly, which reduces scattering of light and thereby improves contrast in the image provided to the user's eye 6880 by providing darker blacks.
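The role of etendue matching in this arrangement can be made concrete with a small calculation. The sketch below is illustrative only: the 6 x 10 mm eyebox comes from the description above, while the pupil size and cone angle are hypothetical placeholders.

```python
import math

def etendue_mm2_sr(area_mm2, half_angle_deg):
    """Etendue of a surface radiating into a cone: G = pi * A * sin^2(theta)."""
    return math.pi * area_mm2 * math.sin(math.radians(half_angle_deg)) ** 2

# Eyebox side of the optics: the 6 x 10 mm eyebox filled over a modest cone.
eyebox_etendue = etendue_mm2_sr(6 * 10, half_angle_deg=15)  # hypothetical half angle

# Pupil formed at the reflective polarizer 6830 by the positive lens: a
# smaller area must carry the same etendue, so the angular spread is wider.
pupil_area_mm2 = 4 * 4  # hypothetical 4 x 4 mm pupil
sin_sq = eyebox_etendue / (math.pi * pupil_area_mm2)
pupil_half_angle = math.degrees(math.asin(math.sqrt(sin_sq)))

print(f"eyebox etendue: {eyebox_etendue:.1f} mm^2 sr")
print(f"required half angle at the pupil: {pupil_half_angle:.1f} deg")
```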
It should be noted that while figs. 68, 69, and 70 show optical arrangements in which the illumination light 6837 is provided from behind the rotationally curved partial mirror 6860, other optical arrangements are possible within the present invention. For example, the polarized light source 6850 can instead be positioned at the side of the rotationally curved partial mirror 6860, with the reflective polarizer 6830 oriented to receive the illumination light 6837 from the side and reflect it (not shown) toward the reflective image source 6810.
In a further embodiment, a portion of the image light 6835 that is reflected back toward the polarized light source 6850 is recycled in the polarized light source 6850 to increase the efficiency of the polarized light source 6850. In this case, a diffuser and a reflective surface are provided behind the polarized light source 6850, so that the polarization of the light is scrambled and the light is reflected back toward the reflective polarizer 6830.
In yet another embodiment, another reflective polarizer is provided in the polarized light source 6850, behind the previously disclosed linear polarizer, with the respective transmission axes of the reflective polarizer and the linear polarizer parallel to each other. The additional reflective polarizer reflects light whose polarization state would not be transmitted by the linear polarizer back into the backlight. The light reflected back into the backlight passes through a diffuser associated with the polarized light source 6850, where its polarization state is scrambled, and is re-emitted, thereby recycling the light and increasing efficiency.
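The efficiency benefit of this recycling can be estimated with a simple geometric-series model. All of the numbers below are hypothetical assumptions, not values from this description:

```python
def recycling_gain(polarizer_reflectance=0.95, backlight_return=0.70, cycles=20):
    """Each cycle, the rejected polarization is reflected back into the
    backlight, depolarized by the diffuser, and re-emitted; half of the
    re-emitted light is then in the useful polarization state."""
    useful = 0.5      # first pass: half of the unpolarized light is useful
    rejected = 0.5    # the other half is reflected back for recycling
    for _ in range(cycles):
        returned = rejected * polarizer_reflectance * backlight_return
        useful += 0.5 * returned
        rejected = 0.5 * returned
    return useful / 0.5  # gain relative to a non-recycling light source

print(f"estimated brightness gain: {recycling_gain():.2f}x")  # ~1.5x
```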
In another embodiment, a system according to the principles of the present invention includes an eye imaging system. Fig. 69 is an illustration of a compact optical display assembly that includes an eye imaging camera 6992 that captures an image of the user's eye 6880 coaxially with the display image provided to the user, so that a full image of the user's iris can be reliably captured. The field of view of the eye imaging camera 6992 is reflected into the lower optics by the reflective polarizer 6930, which includes a notch mirror coating facing the eye imaging camera 6992 that reflects the wavelengths of light captured by the eye imaging camera 6992 (e.g., near infrared wavelengths) while transmitting the wavelengths associated with the image light 6835 (e.g., visible wavelengths). The eye light 6995 shown in fig. 69 illustrates that the field of view associated with the eye imaging camera 6992 is relatively narrow, as it is multiply reflected by the lower optics to capture an image of the user's eye 6880. However, for the eye imaging camera 6992 to be able to focus on the user's eye 6880, the eye imaging camera 6992 needs to have a very close focusing distance (e.g., 35 mm). In addition, the field of view and focus distance of the eye imaging camera must take into account the reduction in refractive power it experiences from the rotationally curved partial mirror 6860. To improve the efficiency of capturing light reflected from the user's eye 6880, and thereby enable brighter eye images, the rotationally curved partial mirror 6860 can be coated with a partial mirror coating that is more fully reflective at the wavelengths captured by the eye imaging camera 6992; for example, the coating can reflect 50% of the visible light associated with the image light and 90% of the near infrared light associated with the eye light 6995. The reflections and associated changes in polarization state are similar to those associated with the image light 6835, but in reverse order, because the eye light 6995 comes from the user's eye 6880. LEDs or other miniature light sources are provided adjacent to the user's eye 6880 to illuminate the user's eye 6880, wherein the wavelengths associated with the LEDs or other miniature light sources are different from the wavelengths associated with the image light 6835, such as, for example, near infrared wavelengths (e.g., 850 nm, 940 nm, or 1050 nm). Alternatively, the image light 6835 is used to illuminate the user's eye 6880, and a reflective polarizer 6930 with a low extinction ratio in reflection (e.g., a reflective extinction ratio < 15) is used so that some of the eye light reflects toward the eye imaging camera 6992.
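The benefit of such a wavelength-selective partial mirror coating can be sketched numerically. In the snippet below, only the 50% visible and 90% near infrared reflectivities come from the description above; the eye reflectance and the lumped path losses are hypothetical:

```python
coating_reflectivity = {"visible": 0.50, "near_ir": 0.90}  # from the text above

def eye_signal(band, eye_reflectance=0.05, other_losses=0.60):
    """Relative eye-image signal after one reflection from the rotationally
    curved partial mirror, with all other path losses lumped together."""
    return eye_reflectance * coating_reflectivity[band] * other_losses

for band, r in coating_reflectivity.items():
    print(f"{band}: mirror reflectivity {r:.0%}, relative signal {eye_signal(band):.3f}")

gain = coating_reflectivity["near_ir"] / coating_reflectivity["visible"]
print(f"near infrared eye image is ~{gain:.1f}x brighter")
```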
In an alternative embodiment, the reflective and partially reflective surfaces can extend laterally to the sides of the area used for displaying the image to the user. In this case, the eye imaging camera can be located adjacent to the field lens and pointed in a direction to image the user's eye after reflection from the beam splitter and the rotationally curved partial mirror, as shown in fig. 70, which shows the eye imaging camera 7092 positioned to the side of the field lens 6820 and the reflective polarizer 6830. The eye imaging camera 7092 is directed such that the field of view captured by the eye imaging camera 7092 includes the user's eye 6880, as illustrated by the eye light 7095. The quarter wave film 6890 is also laterally extended to change the polarization state of the eye light 7095 in the same manner that the polarization state of the image light is changed, such that the eye light that passes through the beam splitter 6870 and the quarter wave film 6890 is partially reflected by the rotationally curved partial mirror 6860, then reflected by the beam splitter 6870, and captured by the eye imaging camera 7092. By positioning the eye imaging camera 7092 to the side of the field lens 6820 and the reflective polarizer 6830, the complexity of the optics associated with displaying an image to the user is reduced. In addition, the space available for the eye imaging camera 7092 is increased because interference with the display optics is reduced. Because the eye imaging camera 7092 is positioned adjacent to the display optics, the eye image is captured nearly coaxially with the display image.
In yet another embodiment, a system in accordance with the principles of the present invention includes a field lens having an internal reflective polarizer and one or more surfaces having optical power. Fig. 71 is an illustration of upper optics including a field lens 7121 composed of an upper prism 7122 and a lower prism 7123. The upper prism 7122 and the lower prism 7123 can be molded, or ground and polished. A reflective polarizer 7124 is inserted on the flat surface between the upper prism 7122 and the lower prism 7123. The reflective polarizer 7124 can be a wire grid polarizer film or a multilayer dielectric polarizer as previously mentioned. The reflective polarizer 7124 can be bonded in place with a transparent UV cured adhesive having the same index of refraction as the upper prism 7122 or the lower prism 7123. Typically, the upper prism 7122 and the lower prism 7123 will have the same refractive index. The upper prism 7122 includes an angled surface through which the illuminating light 6837 is provided to illuminate the reflective image source 6810. The illumination light is provided by a light source comprising a lamp (such as an LED) as previously described, a backlight 7151, a diffuser 7152, and a polarizer 7153. The lower prism 7123 includes a curved exit surface for controlling the wavefront of the image light 6835 supplied to the lower optics. The upper prism 7122 may also include a curved surface on the upper surface adjacent to the reflective image source 6810, as shown in fig. 71, for manipulating the chief ray angle of the light at the surface of the reflective image source 6810. The illuminating light 6837 is polarized by the polarizer 7153 before entering the upper prism 7122. The transmission axes of the polarizer 7153 and the reflective polarizer 7124 are perpendicular to each other, so that the illuminating light 6837 is reflected by the reflective polarizer 7124 and thereby redirected toward the reflective image source 6810. The polarization state of the illuminating light 6837 is then changed by the reflective image source 6810 in correspondence with the image content to be displayed, as previously described, and the resulting image light 6835 then passes through the reflective polarizer 7124 to form the bright and dark regions associated with the image displayed to the user's eye 6880.
In another embodiment, the field lens 7121 of fig. 71 comprises a polarizing beam splitter cube comprising two prisms, an upper prism 7122 and a lower prism 7123. In this case, the reflective polarizer 7124 is replaced by a polarization sensitive coating such that light of one polarization state (typically, for example, S-polarized light) is reflected and light of the other polarization state is transmitted. The illuminating light 6837 is then provided with the polarization state that is reflected by the coating, and the image light is provided with the polarization state that is transmitted by the coating. As shown in fig. 71, the beam splitter cube includes one or more curved surfaces in either the upper prism 7122 or the lower prism 7123. The beam splitter cube can also include one or more angled surfaces through which the illumination light is supplied. The angled surface can include light control structures, such as a microlens array to improve the uniformity of the illumination light 6837, or a lenticular array to collimate the illumination light 6837.
In yet another embodiment, the curved surface(s) or angled surface(s) illustrated in fig. 71 can be molded onto a rectangular beam splitter cube by: casting a UV curable material (e.g., UV curable acrylic) onto a planar surface of the beam splitter cube; placing a transparent mold with a cavity having the desired curvature onto the planar surface to force the UV curable material into the desired curvature; and applying UV light to cure the UV curable material. The beam splitter cube can be made of a material having the same or a different index of refraction than the UV curable material.
In a further embodiment, a polarization sensitive reflective coating (such as a dielectric partially reflective mirror coating) can be used in place of a reflective polarizer or beam splitter as shown in fig. 68. In this case, the reflective film and plate, including the reflective polarizer 6830 and beam splitter 6870, include a polarization sensitive coating that substantially reflects light having one polarization state (e.g., S-polarization) and substantially transmits light having the other polarization state (e.g., P-polarization). Since the illumination source includes the polarizer 7153, the illumination light 6837 has a single polarization state, and it is not important that the reflective polarizer 7124 be highly sensitive to polarization state in reflection; the polarization state need only be maintained and be uniformly present at the surface of the reflective image source 6810. However, it is important that the reflective polarizer 7124 be highly sensitive to polarization state in transmission (e.g., extinction ratio > 200) so that it is an effective polarization analyzer and provides a high contrast image (e.g., contrast ratio > 200) to the user's eye 6880.
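The connection between the analyzer's transmission extinction ratio and the achievable image contrast can be sketched with a rough leakage model. This is an illustrative simplification that ignores scatter and other leakage paths in the optics; only the > 200 figures come from the text above:

```python
# Rough model: the dark state is set by light leaking through the analyzer
# plus leakage from the image source itself; leakages add approximately in
# parallel, so the combined contrast is below either ratio alone.
def system_contrast(er_analyzer, er_source):
    """Approximate system contrast for cascaded leakage paths."""
    return 1.0 / (1.0 / er_analyzer + 1.0 / er_source)

# With an ideal image source, contrast is limited by the analyzer alone:
print(f"{system_contrast(200, float('inf')):.0f}:1")  # -> 200:1
# A hypothetical image source with its own 500:1 contrast lowers the result:
print(f"{system_contrast(200, 500):.0f}:1")           # -> ~143:1
```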
In a further embodiment, the field lens 7121 shown in fig. 71 can include a reflective polarizer 7124 having a curved surface (not shown) instead of a flat surface, and wherein the reflective polarizer 7124 is not a film but a polarization sensitive coating, a printed wire grid polarizer, or a molded wire grid pattern (which is then metalized). In this case, the upper prism 7122 and the lower prism 7123 are fabricated as a matched pair having matched curved surfaces that together form the surface of the reflective polarizer. The polarization sensitive coating, printed wire grid, or molded wire grid pattern is applied to the mating curved surface associated with the upper prism 7122 or the lower prism 7123, and a transparent adhesive is applied to the other mating surface to join the upper prism 7122 and the lower prism 7123 together to form the field lens 7121 with the internal curved reflective polarizer 7124.
Another aspect of the invention relates to making and providing an optical element for use in a see-through computer display system. In embodiments, the optical element is lightweight, low cost, and of high optical quality.
In a head mounted display, a beam splitter can be used to direct illumination light from a light source toward a reflective image source (such as an LCOS or DLP). It is desirable to have a low weight beam splitter with a flat partially reflective surface to provide good image quality. The flat partially reflective surface is particularly important when an eye camera is provided for eye imaging, since the eye camera utilizes the flat partially reflective surface to direct its field of view toward the user's eye.
Systems and methods provide a lightweight beam splitter composed of molded plastic elements and an internal plate element that provides a flat partially reflective surface. Together, these pieces form a triplet beam splitter optic comprising two molded elements and a plate element. By providing the plate element inside the beam splitter, the mating surfaces of the molded elements need not be optically flat; instead, the plate element provides the flat surface, and an index matching material is used to bond the plate element and the molded elements together. All three elements can be plastic elements to reduce the weight and cost of the lightweight beam splitter. In order to provide a more uniform refractive effect, the molded elements and the plate element are preferably made of plastic materials having similar refractive indices and low birefringence.
Fig. 72 shows a representation of two molded elements 7210 and 7220. These molded elements are molded with a relatively uniform thickness to provide uniform flow of the plastic material during molding (injection molding, compression molding, or casting) and thereby enable low birefringence in the molded element. To further reduce birefringence in the molded part, materials having low viscosity and low stress optical coefficients are preferred, including: OKP4 from Osaka Gas Chemicals; Zeonex F52R, K26R, or 350R from Zeon Chemicals; and PanLite SP3810 from Teijin Chemicals.
The molded elements 7210 and 7220 can include a flat surface and a surface having optical power, wherein the surface having optical power can be a spherical or aspheric curved surface, a diffractive surface, or a fresnel surface. A flat surface, a diffractive surface, or a fresnel surface is preferred for the surface associated with the light illuminating the image source, and a flat surface, a spherical surface, or an aspherical surface is preferred for the surface associated with the image light. The molded element 7210 is shown with a spherical or aspherical surface 7215, and the molded element 7220 is shown with a flat surface 7225; however, any of the surfaces shown can be molded as a flat surface or a surface with optical power.
After molding, the molded elements 7210 and 7220 are machined to provide mating angled surfaces. The molded element 7210 is shown in fig. 73, where a milling cutter 7328 is shown machining an angled surface 7329. Fig. 74 shows an illustration of the molded elements 7210 and 7220 after they have been machined to provide beam splitter elements 7430 and 7440, respectively, the beam splitter elements 7430 and 7440 being prisms. The angled surfaces of the beam splitter elements 7430 and 7440 are machined to the desired angles. Alternatively, the beam splitter elements 7430 and 7440 can be machined from sheet material or molded disks (pucks). In either case, whether a machined angled surface or a molded angled surface is used in the beam splitter element, the surface will not be optically flat.
Fig. 75 shows an illustration of an assembled triplet beam splitter optic in which the beam splitter elements 7430 and 7440 have been assembled with a partially reflective plate element 7560 to form a beam splitter cube. The beam splitter elements 7430 and 7440 are made of the same material or of different materials having very similar indices of refraction (e.g., within 0.05 of each other). An index matching material is used at the interfaces between the beam splitter elements and the plate element. The index matching material can be a fluid, a light-cured adhesive, a moisture-cured adhesive, or a heat-cured adhesive. The index matching material should have a refractive index that is very similar (e.g., within 0.1) to the refractive index of the beam splitter elements.
The partially reflective plate element 7560 can be a transparent plate with a partially reflective layer that is a partially reflective coating or a laminated partially reflective film. The transparent plate is preferably a cast sheet (such as cell cast acrylic with low birefringence) or a molded plaque of a low birefringence material (such as OKP4, Zeonex F52R, Zeonex K26R, Zeonex 350R, or PanLite SP3810). Furthermore, the transparent plate should be optically flat (e.g., flat to within 20 microns across the surface and with a surface finish of less than 15 nanometers), and optically flat surfaces are readily available in sheet stock. By using an index matching material at the interfaces between the beam splitter elements 7430 and 7440 and the partially reflective plate element 7560, any lack of optical flatness in the surfaces of the beam splitter elements 7430 and 7440 is filled by the index matching material, so that the flatness of the reflective surface is determined by the more readily achievable flatness of the partially reflective plate element 7560, thereby providing a manufacturing advantage. The partially reflective layer can be a partially reflective mirror, a reflective polarizer, or a wire grid polarizer, where the reflective polarizer can be a coating or a film, and the wire grid polarizer can be a molded structure or a film partially coated with a conductive layer. Suitable reflective polarizer films are available from 3M under the trade name DBEFQ, and wire grid polarizer films are available from Asahi Kasei under the trade name WGF. In a preferred embodiment, the transparent plate of the partially reflective plate element 7560 has a refractive index that is very similar (e.g., within 0.1) to the refractive index of the beam splitter elements 7430 and 7440.
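The tolerances stated above can be collected into a simple assembly check. The helper below is a hypothetical sketch; only the numeric limits (indices within 0.05 between prisms and within 0.1 for the adhesive and plate, 20 micron flatness, 15 nanometer finish) come from the description:

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    n: float  # refractive index

def check_assembly(prism_a, prism_b, plate, adhesive_n,
                   flatness_um=10.0, finish_nm=8.0):
    """Return True if the triplet meets the tolerances given above."""
    ok = abs(prism_a.n - prism_b.n) <= 0.05   # prism indices within 0.05
    ok &= abs(adhesive_n - prism_a.n) <= 0.1  # adhesive index within 0.1
    ok &= abs(plate.n - prism_a.n) <= 0.1     # plate index within 0.1
    ok &= flatness_um <= 20.0                 # optically flat: 20 microns
    ok &= finish_nm <= 15.0                   # surface finish: 15 nanometers
    return ok

print(check_assembly(Element("7430", 1.53), Element("7440", 1.53),
                     Element("7560", 1.49), adhesive_n=1.52))  # -> True
```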
Fig. 76 shows a diagram of an optical system for a head mounted display system. The system includes a reflective display as the image source 7667 and a light source 7665, where the light source 7665 can be a sequential color light source or a white light source, as appropriate for the image source 7667. The light source 7665 provides illumination light 7674, which can be polarized light, so long as a quarter wave layer is associated with the image source 7667 or the partially reflective plate element 7560 such that the polarization of the illumination light 7674 is changed before it becomes image light 7672. The illumination light 7674 is reflected by the surface of the partially reflective plate element 7560 and then reflected by the image source 7667, whereupon it passes through the partially reflective plate element 7560, thereby becoming image light 7672. The image light 7672 is then reflected by the partially reflective combiner 7682 such that the image light is directed toward the user's eye 7680 to display an image to the user while providing a see-through view of the environment. In the optical system, an index matching material can be used at the interface between the image source 7667 and the beam splitter element 7440, so that the surface of the beam splitter element 7440 need not be flat. It is contemplated by the present invention that the optical system may include additional lenses and other optical structures not shown to improve image quality or change the form factor of the optical system.
In another embodiment, the beam splitter elements 7430 and 7440 are directly molded using injection molding or casting. The molded beam splitter elements are then assembled as shown in fig. 75, as previously described herein.
In further embodiments, the surfaces of the beam splitter elements are molded or machined to have additional structure in order to provide further features. Fig. 77 shows an illustration of a lightweight beam splitter 7750 comprising an extended partially reflective plate element 7760 and an extended beam splitter element 7740, wherein the partially reflective surface is extended to provide additional area for the illumination light 7674 to be reflected toward the image source 7667. An extended partially reflective surface is particularly useful when the image source 7667 is a DLP and the illumination light 7674 must be delivered at an oblique angle. Fig. 78 shows a lightweight beam splitter 7850 that includes an entrance surface 7840 for the illumination light 7674, the entrance surface being angled so that the illumination light 7674 passes substantially perpendicularly through the entrance surface 7840.
In yet another embodiment, the beam splitter elements 7430 and 7440 are machined from a single molded element, where the single molded element is designed to provide the desired optical surfaces. For example, the molded element 7210 as shown in fig. 72 has surfaces that can be used for both surfaces 7215 and 7225. A series of molded elements 7210 can then be molded, and some will be used to make machined beam splitter elements 7430 and some to make beam splitter elements 7440. The partially reflective plate element 7560 is then bonded to the beam splitter elements 7430 and 7440 using an index matching adhesive, as previously described herein. Alternatively, a single molded element 7210 can be designed with additional thickness across the dimension where the partially reflective plate element 7560 is to be added, such that the single molded element 7210 can be sawn, machined, or laser cut into the beam splitter elements 7430 and 7440.
In another embodiment, a first molded optical element is molded in a geometry that enables improved optical properties, including low birefringence and more accurate replication of the molded optical surfaces (reduced power error and irregularity). The first molded optical element is then cut into a different shape, with the cutting process leaving an optically rough surface finish. A second optical element having an optically smooth surface is then bonded to the optically rough surface of the first molded optical element using an index matching adhesive to provide a combined optical element. The index matching adhesive fills the optically rough surface on the first molded optical element such that the optically rough surface is no longer visible, and an optically smooth surface is provided in the combined optical element by the second optical element. The optical properties of the combined optical element are improved compared to a directly molded optical element having the geometry of the combined optical element. The cut surface can be flat or curved, so long as the cut surface of the first molded optical element is substantially similar to the bonding surface of the second optical element. In addition, both the first molded optical element and the second optical element can provide optical surfaces with independent optical characteristics (such as optical power, optical wedge, diffraction, grating, dispersion, filtering, and reflection). For example, optically flat surfaces can be difficult to mold on plastic lenses. The lens can instead be molded to provide a spherically curved surface and another surface that is subsequently milled away to provide a flat surface with a rough surface finish. An optically flat sheet can then be bonded to the milled surface using an index matching adhesive to provide a combined optical element having an optically flat surface.
In yet another embodiment, the surface of the beam splitter element includes molded or machined features to collimate, converge, diverge, diffuse, partially absorb, redirect, or polarize the illumination light 7674 or the image light 7672. In this way, the number of components in the lightweight beam splitter is reduced and manufacturing complexity and cost are reduced.
Multi-piece lightweight solid-state optics have been described in connection with certain embodiments; it should be understood that the multi-piece lightweight solid-state optics may be used in conjunction with other optical arrangements (e.g., other see-through head-mounted display optical configurations described elsewhere herein).
In embodiments, the present invention provides methods for aligning images, as well as methods and apparatus for controlling light within optics of a display assembly associated with an HMD to prevent scattering and also to capture excess light to thereby improve the quality of images provided to a user.
Fig. 79a is a schematic illustration of a cross section of a display assembly for an HMD, wherein the display assembly includes upper optics 795 and lower optics 797 that operate together to display images to the user while providing a see-through view of the environment. Aspects of the upper optics 795 will be discussed in more detail herein. The lower optics 797 can include optical elements to manipulate the image light 7940 from the upper optics 795 and thereby present the image light 7940 to the user's eye 799. The lower optics 797 can include one or more lenses 7950 and a combiner 793. The combiner 793 presents the image light 7940 to the user's eye 799 while allowing light 791 from the environment to pass through to the user's eye 799, so that the user sees the displayed image superimposed on the view of the environment.
Fig. 79 is a schematic depiction of a cross section of the upper optics 795 for an HMD. Included are a light source 7910, a partially reflective layer 7930, a reflective image source 7935, and a lens 7950. The light source 7910 provides illumination light 7920 to the HMD. The illuminating light 7920 is redirected by the partially reflective layer 7930 to illuminate the reflective image source 7935. The illumination light 7920 is then reflected by the reflective image source 7935 in correspondence with the image content in the display image, such that it passes through the partially reflective layer 7930 and thereby forms image light 7940. The image light 7940 is optically manipulated by the lens 7950 and other optical elements (not shown) in the lower optics 797 so that a display image is provided to the user's eye 799. Together, the light source 7910, the partially reflective layer 7930, and the reflective image source 7935 form a front-lit image source. The reflective image source 7935 can be an LCOS, FLCOS, DLP, or other reflective display. Figs. 79, 80, 82, and 83 show the illumination light 7920 provided such that it is incident on the reflective image source 7935 at an oblique angle, as required for a DLP. Figs. 84c, 84d, 85, 86, 87, 88, and 89 show the illumination light 7920 provided perpendicular to the reflective image source 8535, as required for an LCOS or FLCOS. The principles of the invention described herein are applicable to any type of reflective image source where reduction of veiling glare is desired. The light source 7910 can include a light emitter, such as an LED, laser diode, or other light source (e.g., as described herein), as well as various light control elements, including: diffusers, prism films, lenticular lens films, fresnel lenses, refractive lenses, and polarizers. A polarizer included in the light source 7910 polarizes the light exiting the light source 7910 such that the illumination light 7920 is polarized. The partially reflective layer 7930 can be a partially reflective mirror coating on a substrate, or it can be a reflective polarizer film, such as the wire grid film supplied by Asahi Kasei under the name WGF or the multilayer polarizer film supplied by 3M under the name DBEF. When the partially reflective layer 7930 is a reflective polarizer, the illumination light 7920 is supplied as polarized light with the polarization axis of the illumination light 7920 oriented relative to the polarization axis of the reflective polarizer such that the illumination light 7920 is substantially reflected. The reflective image source 7935 then includes a quarter-wave retarder (e.g., a quarter-wave film) such that the polarization state of the illuminating light 7920 is reversed during reflection by the reflective image source 7935. This enables the reflected illumination light 7920 to then be substantially transmitted by the reflective polarizer. After passing through the partially reflective layer 7930, the light becomes image light 7940. The image light 7940 then passes into the lens 7950, which is part of the lower optics 797 or display optics that manipulate the light to provide the display image to the user's eye. Although the partially reflective layer 7930 is illustrated as a planar surface, the inventors contemplate that the surface may be curved, shaped, or have simple or complex angles, and such surface shapes are encompassed by the principles of the present invention.
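The polarization bookkeeping in this front light can be illustrated with a short Jones-calculus sketch. This is an idealized model in a single fixed basis (lossless components, with the reflection treated as identity), not a simulation of the actual optics:

```python
import numpy as np

H = np.array([1, 0], dtype=complex)  # polarization reflected by the reflective polarizer
QWP_45 = (1 / np.sqrt(2)) * np.array([[1, -1j], [-1j, 1]])  # quarter wave, fast axis at 45 deg

# Illumination light 7920 arrives H-polarized and is reflected toward the
# reflective image source 7935. The round trip through the quarter-wave
# retarder acts as a half-wave plate at 45 degrees and converts H to V, so
# the returning light is transmitted by the reflective polarizer and becomes
# image light 7940.
state = QWP_45 @ QWP_45 @ H
print(np.round(np.abs(state) ** 2, 3))  # -> [0. 1.]: orthogonal state, transmitted
```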
In HMDs that provide images to both eyes of a user, it is desirable to provide the images so that they are aligned with each other. This is particularly important when the images are viewed as a stereoscopic image, where the perceived alignment of the images seen by each eye is critical to achieving depth perception. To provide accurate image alignment, active alignment of the optics may be performed after the optics have been assembled into the rigid frame of the HMD. Active alignment comprises aligning the images for each eye with each other by moving portions of the display assembly and then attaching the portions in position relative to each other. To this end, fig. 79 shows a space 7952 that extends around the reflective image source 7935 to enable the reflective image source 7935 to move laterally and rotationally. The light source 7910 and the partially reflective layer 7930 are arranged to illuminate an area including the reflective image source 7935 and a portion of the adjacent space 7952. As a result, the reflective image source 7935 can move within the space 7952 during the active alignment process without losing illumination or degrading the brightness of the displayed image. Movement of the reflective image source 7935 during active alignment can include movements corresponding to horizontal, vertical, and rotational movements of the image provided to one eye relative to the image provided to the other eye of the user. When the reflective image source 7935 is approximately 5 x 8.5 mm in size, the movement can be, for example, 0.5 mm (equivalent to approximately 10% of the reflective image source size), and as such the space 7952 can be 0.5 mm wide or wider.
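An alignment station could implement this as a simple closed loop. The sketch below is hypothetical: capture_misalignment() stands in for a camera-based measurement of the left/right image offset, move_image_source() for the positioning fixture, and only the 0.5 mm travel limit comes from the description above:

```python
SPACE_MM = 0.5  # travel allowed by the space 7952 (~10% of a 5 x 8.5 mm source)

def active_align(capture_misalignment, move_image_source, tol_mm=0.005):
    """Iteratively move one image source until the left and right images
    coincide, clamping the commanded position to stay within space 7952."""
    x = y = theta = 0.0
    for _ in range(50):
        dx, dy, dtheta = capture_misalignment()  # measured offset and rotation
        if max(abs(dx), abs(dy)) < tol_mm:
            break
        x = max(-SPACE_MM, min(SPACE_MM, x - dx))
        y = max(-SPACE_MM, min(SPACE_MM, y - dy))
        theta -= dtheta
        move_image_source(x, y, theta)
    return x, y, theta
```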
However, by including the space 7952 in the illuminated area, visible artifacts can occur due to light reflected or scattered from the edges of the reflective image source 7935 or from structures adjacent to the space 7952. Accordingly, a mask 8055 is provided that extends across the space 7952 from the edge of the active area of the reflective image source 7935 to cover the structures adjacent to the space 7952 and the edge of the reflective image source 7935, as shown in fig. 80. The mask 8055 is black and non-reflective so as to absorb incident illumination light 7920. Furthermore, the mask 8055 is designed not to interfere with the movement of the reflective image source 7935 that occurs during active alignment. To this end, the mask 8055 can be rigid (e.g., black plastic or black coated metal) and designed to slide under adjacent structures, such as the light source 7910, the edges of the partially reflective layer 7930, and the sides of the housing containing the front light. Alternatively, the mask 8055 can be flexible (e.g., a black plastic or rubber film or tape) such that the mask 8055 deforms when it contacts an adjacent structure. Fig. 81a shows a representation of the reflective image source 7935, the light source 7910, and the space 7952 as viewed from above. As is typical for all kinds of image sources, there is a mask 8168 applied to the surface of the image source surrounding the active area 8165; however, the mask 8168 does not cover the space 7952. Fig. 81b shows a further illustration of the system shown in fig. 81a, where a mask 8055 is applied to the reflective image source 7935 such that it attaches within the mask 8168 in a manner that covers the space 7952 without blocking the active area 8165.
In another embodiment, the image produced by the image source does not use all of the active display area of the image source, so there is space to shift the image in the x and/or y directions within the active display area for aligning the content. For example, if misalignment is observed (as indicated above), the images are digitally shifted in the x and/or y directions, instead of or in addition to physically moving the image source, to create better combined content alignment. The initially unused display area around the content may be referred to as a content shift buffer.
In a further embodiment for aligning images in a see-through HMD, a first image containing features is provided to one eye of the user using a display assembly similar to that shown in fig. 79a or fig. 85. A second image containing the same features in the same locations is provided to the other eye of the user. The position of at least one of the image sources is then moved within the space provided for adjustment to align the first image with the second image as seen by the user's eyes. This image alignment can also be done using cameras instead of the user's eyes.
Where the first and second images are smaller than the active area of the reflective image source, a digital space is left adjacent to the images that can be used for digitally shifting the images to make further alignment adjustments. This adjustment can be used in combination with physical movement of the reflective image source to align the first image with the second image.
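A digital shift within the content shift buffer amounts to re-compositing the content at a pixel offset within the panel's active area. The sketch below is illustrative; the panel and content dimensions are hypothetical:

```python
import numpy as np

panel = np.zeros((800, 480), dtype=np.uint8)         # hypothetical active area (rows, cols)
content = np.full((760, 440), 255, dtype=np.uint8)   # content leaves a 20-pixel buffer

def place_content(panel, content, shift_x=0, shift_y=0):
    """Composite the content onto the panel, offset from center by whole pixels."""
    out = np.zeros_like(panel)
    y0 = (panel.shape[0] - content.shape[0]) // 2 + shift_y
    x0 = (panel.shape[1] - content.shape[1]) // 2 + shift_x
    assert 0 <= y0 and y0 + content.shape[0] <= panel.shape[0], "exceeds buffer"
    assert 0 <= x0 and x0 + content.shape[1] <= panel.shape[1], "exceeds buffer"
    out[y0:y0 + content.shape[0], x0:x0 + content.shape[1]] = content
    return out

# Offsets would come from the measured left/right misalignment:
aligned = place_content(panel, content, shift_x=7, shift_y=-3)
```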
Fig. 82 is an illustration of upper optics 825 that include the elements of the upper optics 795 plus a trim polarizer 8260. The polarization axis of the trim polarizer 8260 is oriented such that the image light 7940 is transmitted to the lower optics (not shown). Light having the opposite polarization state compared to the image light 7940 is absorbed by the trim polarizer 8260. As such, light scattered from surfaces (such as the walls of the housing 8262), which typically has a mixed polarization state, will be partially absorbed by the trim polarizer 8260. When the trim polarizer 8260 is located after the lens 7950, the trim polarizer 8260 is also able to absorb a portion of the colored light caused by birefringence in the lens 7950. In this case, the trim polarizer 8260 absorbs the light whose polarization state was changed by the birefringence and transmits the light that retains the polarization state the image light 7940 had before the lens 7950. In some cases, it can be advantageous to change the polarization state of the image light 7940 to improve the reflection of the image light 7940 from the combiner 793, in which case a half-wave retarder is used in addition to the trim polarizer 8260. For proper operation, the half-wave retarder is positioned with its fast axis oriented at 45 degrees to the transmission axis of the trim polarizer 8260. In this case, it is advantageous to position the half-wave retarder (not shown) below the trim polarizer 8260 so that the trim polarizer can absorb any elliptically polarized light that may be present due to birefringence in the lens 7950 before the image light is acted upon by the half-wave retarder. In this manner, any change in retardation with wavelength that may be present in the half-wave retarder will not act to increase elliptical polarization or color artifacts in the image light 7940 caused by birefringence in the lens 7950. In an example, the trim polarizer can be a polarizer film laminated to a half-wave retarder film, and an anti-reflective coating can be applied to the outer surfaces.
In fig. 83, the partially reflective layer 8330 is a laminated multiple polarizer film composed of a reflective polarizer film 8332 laminated to an absorptive polarizer film 8331. The reflective polarizer film 8332 is only large enough to reflect the illumination light 7920 that illuminates the active area 8165 of the reflective image source 7935. The absorptive polarizer film 8331 is larger than the reflective polarizer film 8332 and extends across the entire aperture between the reflective image source 7935 and the lens 7950, such that no edge of the absorptive polarizer film 8331 is visible and all light reflected from the reflective image source 7935 passes through the absorptive polarizer film 8331. For the case when the reflective image source 7935 is an LCOS, the absorptive polarizer film 8331 acts as an analyzing polarizer that transmits only the polarization state of the image light. As such, the reflective polarizer film 8332 covers only a portion of the absorptive polarizer film 8331. The polarization axes of the reflective polarizer film 8332 and the absorptive polarizer film 8331 are aligned such that polarized light transmitted by the reflective polarizer film 8332 is also transmitted by the absorptive polarizer film 8331. Conversely, polarized light reflected by the reflective polarizer film 8332 is absorbed by the absorptive polarizer film 8331. Thus, illumination light 7920 incident on the reflective polarizer film 8332 is reflected toward the reflective image source 7935, where its polarization state is reversed, such that it is transmitted by both the reflective polarizer film 8332 and the absorptive polarizer film 8331 as it becomes image light 7940. Meanwhile, the illumination light 7920 incident on the absorptive polarizer film 8331 in the region surrounding the reflective polarizer film 8332 is absorbed by the absorptive polarizer film 8331. By absorbing this excess illumination light 7920 that would not illuminate the active area 8165 of the reflective image source 7935, stray light within the display assembly is reduced, and as a result, the contrast in the image presented to the user's eye is increased. Because the polarization axes of the reflective polarizer film 8332 and the absorptive polarizer film 8331 are aligned, transmission is reduced by only approximately 12% in the area including both the reflective polarizer film 8332 and the absorptive polarizer film 8331 compared to the area including only the absorptive polarizer film 8331. Given the location of the partially reflective layer 8330 in the optical system and the fact that it is far from the reflective image source 7935, this local difference in transmission across the laminated multiple polarizer layer will have very little effect on the brightness uniformity in the image provided to the user's eye. In addition, the distance of the partially reflective layer 8330 from the reflective image source 7935 blurs the edges of the reflective polarizer film 8332 as seen by the user.
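The local transmission difference between the two regions can be sketched with simple bookkeeping. The pass-state transmissions below are hypothetical values chosen to be consistent with the approximately 12% figure above:

```python
T_ABS_PASS = 0.85   # hypothetical pass-state transmission, absorptive polarizer
T_REFL_PASS = 0.88  # hypothetical pass-state transmission, reflective polarizer

only_absorptive = T_ABS_PASS
laminated = T_REFL_PASS * T_ABS_PASS  # axes aligned, so pass-state losses multiply

print(f"absorptive-only region: {only_absorptive:.2f}")
print(f"laminated region:       {laminated:.2f}")
print(f"local reduction:        {1 - laminated / only_absorptive:.0%}")  # ~12%
```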
Figs. 84a and 84b show diagrams of examples of the partially reflective layer 8330 composed of reflective polarizer films 8430 and 8431 laminated to an absorptive polarizer film 8432. The reflective polarizer films 8430 and 8431 are cut into shapes that cover only the area where the illumination light 7920 will be reflected to illuminate the active area 8165 of the reflective image source 7935. The required shape of the reflective polarizer film will vary depending on the type of front light. For the front light shown in fig. 83, where the partially reflective layer 8330 is positioned adjacent to the reflective image source 7935, the shape of the reflective polarizer film 8431 will be rectangular or elliptical, such as shown in fig. 84b. For the front light included in the display assembly shown in fig. 85, where the lens 8550 is located between the partially reflective layer 8530 and the reflective image source 8535, the passage of the illumination light 8520 through the lens 8550 changes the distribution of illumination light 8520 required from the light source 8510. As a result, the illumination light 8520 may need to cover only a part of the partially reflective layer 8530, and the use of a laminated multiple polarizer is advantageous. In an embodiment, the reflective polarizer film can cover less than 80% of the area of the absorptive polarizer film in the laminated partially reflective layer. In a further embodiment, the reflective polarizer film can cover less than 50% of the area of the absorptive polarizer film in the laminated partially reflective layer. In this case, the partially reflective layer 8530 can include a reflective polarizer film 8430 having a shape similar to that shown in fig. 84a. In any case, the shape of the reflective polarizer film is selected in concert with the optics in the front light and the display optics associated with the display assembly of the HMD.
Fig. 84c shows an example illustration of a front light for a display assembly similar to that shown in fig. 85, where the laminated multiple polarizer film 8436 is shown with a complex curved shape, similar to an S shape with a central flat portion and curved end portions. The laminated multiple polarizer 8436 includes a reflective polarizer film 8438 and an absorptive polarizer film 8437. The illuminating light 8520 includes rays 8522 incident on the reflective polarizer film 8438 and rays 8521 incident on the absorptive polarizer film 8437. Due to the alignment of the polarization of the illumination light 8520 with the polarization axes of the reflective polarizer film 8438 and the absorptive polarizer film 8437 (as previously described herein), the rays 8522 are reflected by the reflective polarizer film 8438 and the rays 8521 are absorbed by the absorptive polarizer film 8437. In this way, the rays 8521 are prevented from contributing stray light. Absorbing the rays 8521 is beneficial because they cannot contribute to the image light 8540: if they were reflected by the laminated multiple polarizer 8436, they would be incident on the reflective image source 8535 outside the active area 8165, and if they were transmitted by the laminated multiple polarizer 8436, they would be incident on the housing sidewall 8262. Thus, by absorbing the rays 8521, the laminated multiple polarizer 8436 reduces stray light and thereby improves contrast in the image displayed to the user.
Fig. 84d shows a further example illustration of a front light for a display assembly similar to that shown in fig. 79, where the partially reflective layer 7930 comprises a laminated multiple polarizer film having curved surfaces. The laminated polarizer includes an absorptive polarizer film 8442 and a laminated reflective polarizer film 8441. The reflective polarizer film 8441 is positioned in the central portion of the absorptive polarizer film 8442, where the illuminating light 7920 is reflected toward the reflective image source 7935. The polarization axes of the reflective polarizer film 8441 and the absorptive polarizer film 8442 are aligned parallel to each other and perpendicular to the polarization axis of the illumination light 7920 provided by the polarized light source 7910. Rays 8421 of the illuminating light 7920 incident on the partially reflective layer 7930 outside of the reflective polarizer film 8441 are absorbed by the absorptive polarizer film 8442. The reflective image source 8535 includes a quarter wave layer 8443 such that the polarization state of the illumination light 7920 is changed during reflection from the reflective image source 8535. As a result, the reflected illumination light 7920 is transmitted by the reflective polarizer film 8441 and the absorptive polarizer film 8442, thereby becoming image light 7940. By absorbing the rays 8421 before they are incident on external surfaces (such as housing walls) or other optical surfaces, stray light is reduced, and as a result, contrast in the image provided to the user's eye is increased. It should be noted that while figs. 84c and 84d show the reflective polarizer film positioned to reduce stray light from the left and right sides as drawn, the reflective polarizer can similarly be positioned to reduce stray light in the directions into and out of the page. Figs. 84a and 84b show the reflective polarizer films 8430 and 8431 positioned in the central region of the absorptive polarizer 8432 so that stray light can be reduced in all directions. An important aspect of the present invention is that this stray light reduction is achieved without reducing the brightness of the image presented to the user's eye, since the reflective polarizer films 8430 and 8431 reflect the illumination light over the entire area required to fully illuminate the reflective image source.
Fig. 85 shows a schematic illustration of a display assembly for an HMD in which the optical elements of the front light overlap the display optics, since the lens 8550 is located between the partially reflective layer 8530 and the reflective image source 8535. The display assembly is composed of upper optics and lower optics. The upper optics include the reflective image source 8535, the lens 8550, the partially reflective layer 8530, and the light source 8510. The upper optics convert the illumination light 8520 into image light 8540. As shown, the lower optics include a beam splitter plate 8580, a quarter wave film 8575, and a rotationally curved partial mirror 8570 (lower optics similar to those shown in fig. 79a are also possible). The lower optics deliver the image light 8540 to the user's eye 8582. As previously stated herein, the display assembly provides image light 8540 to convey a displayed image to the user, along with scene light 8583 to provide a see-through view of the environment, so that the user sees the displayed image overlaid onto the view of the environment.
Fig. 85 shows a display assembly in which the partially reflective layer 8530 is a single flat film. However, it can be advantageous to use a segmented partially reflective layer 8630, such as that shown in fig. 86. In this way, the angle of the central portion 8631 of the partially reflective layer 8630 can be selected to position the light source 8610 differently, thereby reducing clipping of the illumination light 8620 by the lens 8550 or other portions of the support structure associated with the display assembly, and thereby improving brightness uniformity in the displayed image seen by the user's eye 8582. In this regard, comparing fig. 85 with fig. 86 shows that by changing the angle of the central portion of the partially reflective film, the position of the light source 8610 is moved downward and the clearance between the illumination light 8620 and the lens 8550 is increased.
The segmented partially reflective layer can be used with a variety of geometries and compositions. Fig. 86 shows a segmented partially reflective layer 8630 comprising a folded Z shape having three flat segments. Fig. 87 shows a segmented partially reflective layer comprising an S shape with curved end sections and a central flat section 8731, similar to the shape shown in fig. 84c. The segmented partially reflective layer can comprise a single partially reflective layer (such as a reflective polarizer film or a partially reflective mirror film). Further, the illumination light 8620 can be reflected from only the central flat segment, or it can be reflected from the central flat segment plus one or more of the other segments in the segmented partially reflective layer. Alternatively, the partially reflective layer 8630 can include multiple polarizer films to provide the reflective portion only where it is actually needed to reflect the illumination light and uniformly illuminate the reflective image source 7935, as previously described herein. Fig. 88 shows a display assembly in which a partially reflective layer 8830 is composed of a laminated multiple polarizer film having a central portion 8831 that includes a reflective polarizer film, the remainder of which is an absorptive polarizer, as previously described herein; the segmented shape of the partially reflective layer 8830 is similar to that shown in fig. 86. Fig. 89 shows a display assembly in which the partially reflective layer 8930 is composed of a laminated multiple polarizer film having a central portion 8931 that includes a reflective polarizer film, the remainder of which is an absorptive polarizer, as previously described herein; the segmented shape of the partially reflective layer 8930 is similar to the shape shown in fig. 87. Although figs. 88 and 89 show the reflective polarizer film occupying only the flat central sections of the segmented partially reflective layers 8830 and 8930, respectively, the reflective polarizer can extend into adjacent sections as needed to reflect the illumination light 8620 in the pattern required to uniformly illuminate the reflective image source 8535. Alternatively, the segments associated with the segmented partially reflective layers 8830 and 8930 can have a three-dimensional shape, with the reflective polarizer portion shaped like that shown in fig. 84a to keep the reflective polarizer 8430 portion flat.
In a further embodiment, the reflective polarizer film is laminated to a flexible transparent carrier film to increase flexibility, and the absorptive polarizer film is provided as a separate layer. Fig. 90 shows a partially reflective layer 9030 consisting of a reflective polarizer film 8441 laminated to a flexible transparent carrier film 9043. The flexible transparent carrier film 9043 neither reflects the illumination light 7920 nor changes its polarization state, and as a result, the rays 8421 pass through the flexible transparent carrier film 9043. The purpose of the flexible transparent carrier film is to support the reflective polarizer film 8441 while allowing the partially reflective layer 9030 to remain substantially as flexible as the reflective polarizer film 8441 alone. The absorptive polarizer film 9042 is then provided as a separate layer positioned adjacent to the partially reflective layer 9030. While the absorptive polarizer film 9042 can be flat or curved as needed to fit within the available space, in a preferred embodiment the absorptive polarizer film 9042 is curved so that it is better positioned to absorb the rays 8421 incident on the partially reflective layer 9030 outside of the reflective polarizer film 8441, as shown in fig. 90.
In yet another embodiment, the reflective polarizer film is modified such that the portions on which illumination light is incident that does not illuminate the active area of the reflective image source are transparent and non-reflective, and a separate absorptive polarizer is provided to absorb the light transmitted through the non-reflective portions. Fig. 91 is an illustration of a partially reflective layer 9130 comprised of a reflective polarizer film, where portion 9143 has been modified to be transparent and non-reflective, and portion 9141 remains a reflective polarizer. As such, the polarized illumination light 7920 is reflected by the reflective polarizer portion 9141 and transmitted by the modified portion 9143. The absorptive polarizer 9042 is provided as a separate layer adjacent to the partially reflective layer 9130, such that the rays 8421 of the illuminating light 7920 are transmitted by the modified portion 9143 and absorbed by the absorptive polarizer. The transmission axis of the reflective polarizer portion 9141 is aligned parallel to the transmission axis of the absorptive polarizer 9042. When the reflective polarizer film is a wire grid polarizer, the modification can be achieved by etching the reflective polarizer film to remove the metal wires of the wire grid in the modified portion. Alternatively, the wire grid polarizer can be masked during the metal deposition step of fabrication to provide shaped portions of the wire grid polarizer. An advantage provided by modifying the reflective polarizer film is that the flexibility of the partially reflective layer 9130 is not substantially altered by the modification, and as a result, the partially reflective layer 9130 remains uniformly flexible in both the modified portion 9143 and the reflective polarizer portion 9141. Another advantage of using a modified reflective polarizer film is that the transition from the modified portion 9143 to the reflective polarizer portion 9141 does not include sharp edges that might cause visible artifacts in the image provided to the user's eye due to scattering at the edges or changes in optical density arising from thickness changes. Such embodiments can also be applied to other types of display assemblies, such as, for example, the display assembly shown in fig. 85.
In yet another embodiment, the partially reflective layer includes a reflective polarizer film laminated to an absorptive polarizer, and the partially reflective layer includes a flat portion and curved portions. Fig. 92 is an illustration of a front light for a display assembly similar to that shown in fig. 79a, with a laminated partially reflective layer 9230 having a portion 9241 that is a reflective polarizer laminated to an absorptive polarizer. The partially reflective layer 9230 is segmented with flat segments and curved segments. By including a flat segment in the portion of the partially reflective layer 9230 that is the reflective polarizer 9241, the uniformity of the illumination light 7920 reflected onto the reflective image source 7935 is improved because a greater portion of the light source 7910 is mapped onto the image, as can be seen in fig. 92. Mapping a large portion of the light source area is important to avoid darker or lighter lines across the image created by dark and bright spots on the light source when using small scale light sources and associated light control films, such as diffusers. The inclusion of flat segments in the partially reflective layer 9230 also reduces local distortion in the image presented to the user's eye caused by local changes in optical path length or local refraction due to changes in the surface angle to which the light is exposed. Such embodiments can also be applied to other types of display assemblies, such as, for example, the display assembly shown in fig. 85.
In a head mounted display that provides a displayed image overlaid onto a see-through view of the environment, it is advantageous to have high see-through transmission, so that the user is better able to interact with the environment and so that people in the environment can see the user's eyes and thereby feel more engaged with the user. It is also advantageous to have a thin optics module with a low height, to make the head mounted display more compact and thus more attractive.
Fig. 93 shows an illustration of an optics module that provides a user with a displayed image while providing high see-through transmission. In this way, the user is provided with a displayed image overlaid onto a clear view of the environment. The optics module comprises a combiner 9320, which can have a partially reflective mirror coating that transmits most of the available light from the environment (greater than 50% visible light transmission), with transmission above 70% being preferred. For example, combiner 9320 can have a broadband partial mirror that reflects less than 30% and transmits more than 70% across the visible wavelength band. Alternatively, combiner 9320 can have a notch mirror coating, wherein the reflection band of the notch mirror coating is matched to the wavelength band provided by light source 9340, wherein light source 9340 can include one or more LEDs, QLEDs, diode lasers, or other light sources each having a narrow wavelength band (e.g., 50 nm bandwidth or less, full width at half maximum). The notch mirror coating can provide, for example, greater than 20% reflectivity (e.g., 50% reflectivity) in the wavelength band provided by light source 9340 while providing greater than 80% transmission in the remainder of the visible band. For a full color image to be provided by the optics module, at least three LEDs of complementary colors (such as red, green, and blue light, or cyan, magenta, and yellow light) are required. In a preferred embodiment, combiner 9320 has a three-color notch mirror that reflects more than 50% of the light within the wavelength bands provided by light source 9340 and transmits an average of more than 80% across the entire visible wavelength band. In this manner, the three-color notch mirror coating provides improved efficiency compared to the previously described partial mirror coating. In an example, if the combiner were to provide 75% transmission of visible light 9362 from the environment, a partially reflective mirror coating would only reflect 25% of the image light 9360, such that 75% of the image light would be transmitted through the combiner and would not contribute to the brightness of the image provided to the user's eye 9310. In contrast, a three-color notch mirror coating can be used to reflect more than 50% of image light 9360 over the wavelengths of light provided by the LEDs in light source 9340 while transmitting more than 90% of the remaining wavelengths of visible light not provided by the LEDs, such that the average transmission over the full visible range is more than 75%. Thus, the three-color notch mirror is twice as efficient as the partial mirror in terms of its ability to reflect the image light 9360 toward the user's eye 9310.
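The efficiency comparison above can be made concrete with a short sketch (Python is used here purely for illustration). The notch widths, in-band reflectivity, and out-of-band transmission below are assumptions chosen to be consistent with the figures quoted above, not values specified herein:

```python
# Minimal sketch comparing a broadband partial mirror to a three-color
# notch mirror combiner. Band widths and reflectivities are illustrative
# assumptions, not values from the embodiments above.

VISIBLE_BAND_NM = 300.0          # ~400-700 nm

def notch_avg_transmission(notch_widths_nm, t_notch, t_rest):
    """Average see-through transmission across the visible band for a
    notch mirror with the given notch widths."""
    notch_total = sum(notch_widths_nm)
    rest_total = VISIBLE_BAND_NM - notch_total
    return (t_notch * notch_total + t_rest * rest_total) / VISIBLE_BAND_NM

# Broadband partial mirror: 25% reflection everywhere.
partial_mirror_image_efficiency = 0.25   # fraction of image light reflected to the eye
partial_mirror_see_through = 0.75        # fraction of environment light transmitted

# Notch mirror: assume three ~30 nm LED bands, 50% reflection in-band,
# 92% transmission out of band.
notch_image_efficiency = 0.50
notch_see_through = notch_avg_transmission([30, 30, 30], t_notch=0.50, t_rest=0.92)

print(f"partial mirror: image efficiency {partial_mirror_image_efficiency:.0%}, "
      f"see-through {partial_mirror_see_through:.0%}")
print(f"notch mirror:  image efficiency {notch_image_efficiency:.0%}, "
      f"see-through {notch_see_through:.0%}")
# The notch mirror reflects twice as much image light while keeping
# the average see-through transmission above 75%.
```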
To enable the optics module to operate with the combiner 9320 as shown in fig. 93, the image light 9360 is provided to a lens 9330, which focuses the image light 9360 at the user's eye 9310. While lens 9330 is shown as a single lens element for simplicity, multiple lens elements are possible. Image light 9360 is produced from illumination light 9364 provided by a light source 9340. The illumination light 9364 is reflected by the beam splitter 9352 toward the reflective image source 9350. The image source 9350 can be a liquid crystal on silicon (LCOS) display, a ferroelectric liquid crystal on silicon (FLCOS) display, or other such reflective display. A polarizer 9342 can be associated with the light source 9340 to provide polarized illumination light 9364. The beam splitter 9352 can then be a reflective polarizer oriented to substantially reflect the polarized illumination light 9364. When it reflects the light, image source 9350 changes the polarization state of the illumination light 9364 to form image light 9360 having a polarization state opposite to that of the illumination light 9364. Because of this change in polarization state, the image light 9360 is transmitted by the reflective polarizer of the beam splitter 9352. It is important to note that image light 9360 is polarized to enable a folded illumination system, not because combiner 9320 requires polarized light. In fact, combiner 9320 cannot include a polarizer if it is to provide greater than 50% transmission of light 9362 from the environment.
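The polarization bookkeeping of this folded illumination path can be checked with a toy Jones-matrix model, sketched below under idealized assumptions (lossless polarizers, an image source that exactly exchanges the two polarization states); the axis labels and matrix names are hypothetical, chosen only for the sketch:

```python
import numpy as np

# Toy Jones-calculus model of the folded illumination path of fig. 93.
x_pol = np.array([1.0, 0.0])           # assumed transmission axis of beam splitter 9352
y_pol = np.array([0.0, 1.0])           # assumed axis of polarizer 9342

transmit_x = np.array([[1.0, 0.0], [0.0, 0.0]])   # reflective polarizer: transmits x...
reflect_y  = np.array([[0.0, 0.0], [0.0, 1.0]])   # ...and reflects y
lcos_flip  = np.array([[0.0, 1.0], [1.0, 0.0]])   # idealized reflective image source: swaps x and y

illumination = y_pol                              # polarized illumination light 9364
toward_image_source = reflect_y @ illumination    # reflected by beam splitter 9352
image_light = lcos_flip @ toward_image_source     # reflected with flipped polarization by image source 9350
to_lens = transmit_x @ image_light                # transmitted toward lens 9330

print(to_lens)   # -> [1. 0.]: essentially all image light reaches the lens
```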
Fig. 94 is an illustration of an optics module that includes multiple folded optics to reduce the overall height of the optics module. In this case, illumination light 9464 is transmitted by beam splitter 9452 such that it passes directly toward image source 9450, where beam splitter 9452 is a reflective polarizer and light source 9340 comprises polarizer 9342 oriented such that the transmission axis of said polarizer 9342 is parallel to the transmission axis of beam splitter 9452. Illumination light 9464 is then reflected by image source 9450 and changes polarization state such that image light 9360 with its changed polarization state is reflected by beam splitter 9452 toward lens 9330. As can be seen by comparing fig. 93 to fig. 94, the overall height of the optics module shown in fig. 94 is greatly reduced.
However, the orientation of the additional fold in the optical path of image light 9360 in the optics module of fig. 94 increases the thickness of the optics module, where the thickness is defined as the distance from the rear surface of the optics module nearest the user's eye to the front surface of the optics module farthest from the user's eye. Fig. 95 and 96 show illustrations of optics modules in which the added fold in the optical path of image light 9360 is oriented perpendicular to the fold shown in fig. 94. In this case, the optics module in fig. 95 and 96 is wider but thinner than that shown in fig. 94. Fig. 95 shows the optics module from the side, and fig. 96 shows the optics module from the position of the user's eye 9310. As such, in the multi-fold optics shown in fig. 95 and 96, the optical axis 935 associated with the illumination light 9464 is perpendicular to both the optical axis 934 associated with the image light 9360 as the image light 9360 passes through the lens 9330 and the optical axis 933 associated with the image light 9360 as the image light 9360 continues toward the user's eye 9310 in the eyebox. In the case of a head mounted display, having a thin optics module can be very important because a thick optics module can cause the head mounted display to protrude outward from the user's forehead, which can be uncomfortable and unattractive. Thus, the multi-fold optics module shown in fig. 95 and 96 is shorter and thinner than the optics module shown in fig. 93. The optics module shown in fig. 95 and 96 is wider than the optics module shown in fig. 93, but in an eyewear configuration of the head mounted display, the wider optics module can be better incorporated into the eyewear frame than a taller or thicker optics module.
A further advantage provided by an optics module comprising multiple folded optics is that a twist can be introduced at the folded surface to modify the orientation of different parts of the optics module relative to each other. This can be important when the optics module needs to fit into a thin curved eyeglass frame, visor, or helmet, where the increased width associated with the upper portion of the multiple-fold optics module can make it more difficult to fit into a structure that is not parallel to the combiner. In this case, for example (based on fig. 96), the upper part comprising light source 9340, polarizer 9342, beam splitter 9452, and image source 9450 can be twisted with respect to the lower part comprising lens 9330 and combiner 9320. The twist of the upper portion about axis 934 must be combined with a corresponding twist of the lower portion about axis 933 to avoid image distortion due to compound angles between the folded surfaces. In this way, the effect of the increased width of the upper portion of the multi-fold optics can be reduced when fitting the optics module into a curved structure (such as a spectacle frame, goggles frame, or helmet structure).
Fig. 99 shows a further embodiment in which the lens 9930 includes a diffractive surface 9931 to enable a more compact and shorter optical design with reduced chromatic aberration. The diffractive surface 9931 can be comprised of a series of small curved annular segments, as in a Fresnel lens, for example. Diffractive surface 9931 can be flat as shown in fig. 99, or it can have a base curve to provide additional optical power. The diffractive surface 9931 can be a single-order or multiple-order diffractive surface. To reduce scattering of wide-angle illumination light 9964 that can be incident on the diffractive surface 9931, an absorptive polarizer 9932 is provided and oriented with its transmission axis perpendicular to the transmission axis of the reflective polarizer of the beam splitter 9452. In this manner, illumination light 9964 transmitted by the beam splitter 9452 in a direction that would cause it to be incident on the diffractive surface 9931 is absorbed by the absorptive polarizer 9932 before it can be scattered by the diffractive surface 9931. At the same time, the image light 9360 has a polarization state opposite to that of the illumination light 9964, such that the image light 9360 is reflected by the beam splitter 9452 and transmitted by the absorptive polarizer 9932 as it passes into the lens 9930.
Fig. 100 shows an illustration of an optics module that includes a reduced angle between the beam splitter 9452 and the lens 9930 to reduce the overall height of the optics module. The fold angle (the angle of deflection between axes 934 and 1005) of the image light 9360 is then greater than 90 degrees, and as a result the upper edge of the beam splitter is closer to the lens 9930, thereby providing a reduced overall height of the optics module.
Fig. 100 also shows a compact planar light source 10040 consisting of a thin edge-lit backlight, similar to those provided in displays for mobile devices such as cellular phones. The compact planar light source 10040 is positioned directly behind the beam splitter 9452 to reduce the overall size of the optics module. A compact planar light source can include a light guiding film or plate having a reflector and an edge-emitting lamp (such as one or more LEDs) on the side opposite the beam splitter 9452. The compact planar light source can include a polarizer so that the illumination light 10064 is polarized as previously described herein. To direct the illumination light 10064 toward the image source 9450 for improved efficiency, a turning film 10043 is positioned between the compact planar light source 10040 and the beam splitter 9452. A 20-degree prismatic turning film can be obtained, for example, from Luminit LLC (Torrance, CA) under the name DTF. To obtain a greater turning angle, such as 40 degrees, multiple layers of turning film 10043 can be stacked together so long as they are oriented such that the turning effect is additive. A diffuser layer (not shown) can be used in addition to the turning film 10043 to reduce artifacts typically associated with turning films, such as linear shadows that can be associated with the prismatic structures. Fig. 101 shows an illustration of the optics module as seen from the position of the user's eye, which is similar to the illustration shown in fig. 100, but with an added vertically oriented fold in the illumination light 10164 to reduce the thickness of the optics module, as previously described herein. As in the optics modules shown in fig. 95 and 96, the multi-fold optics shown in fig. 101 has an optical axis 1005 associated with the illumination light 10164 that is perpendicular to both the optical axis 934 associated with the image light 9360 as the image light 9360 passes through the lens 9330 and the optical axis 933 associated with the image light 9360 as the image light 9360 continues toward the user's eye 9310 in the eyebox. As a result, the optics module of fig. 101 is thinner and shorter than that of fig. 93. Fig. 101 also includes a field lens 10130 to improve the optical performance of the optics module. The addition of a second lens element is possible because of the change in folding orientation, so the field lens 10130 does not increase the thickness of the optics module; rather, the optical path length added by the field lens 10130 occurs in the width of the optics module, where space is more readily available in a head mounted display.
Fig. 102 shows a diagram of an optics module similar to that shown in fig. 99, but with a different orientation of the upper portion of the optics module relative to the combiner to enable the combiner 10220 to be more vertical. This rearrangement of elements within the optics module can be important to achieve a good fit of the head mounted display to the user's face. By making the combiner 10220 more vertical, the optics module can be made to have less interference with the user's cheekbones.
Fig. 103, 103a, and 103b show illustrations of optics modules, as seen from the position of the user's eye, that include multi-fold optics and a digital light projector (DLP) image source 10350. In this case, illumination light 10364 is provided to the image source 10350 at an oblique angle, as required by the micro-mirrors in the DLP, to reflect the image light 9360 along the optical axis 934 of the lens 9930. With a DLP image source 10350, the image light 9360 is composed of on-state light reflected along optical axis 934 by the on-state micro-mirrors in DLP image source 10350, corresponding to the pixel brightness in the image to be displayed to the user's eye 9310 in the eyebox. The micro-mirrors in the DLP image source 10350 also reflect off-state light 10371, corresponding to dark image content, to the sides of the optics module, and as a result, light traps 10372 are provided in the optics module to absorb light 10371. The light trap 10372 can be a black absorptive surface or a textured black surface. The purpose of the light trap 10372 is to absorb the incident light 10371, thereby reducing stray light and consequently improving the contrast of the image displayed to the user's eye 9310. As previously described in other embodiments herein, multiple folded optical paths are used so that the light source 10340 can be provided at the side of the optics module to reduce the overall thickness and height of the optics module. Fig. 103 provides a DLP image source 10350 at the top of the optics module such that image light 9360 continues along the optical axis 934, through the lens 9930, and on to the combiner 9320, where it is reflected toward the user's eye 9310 located in the eyebox. Polarizer 10341 is provided with light source 10340 such that polarized illumination light 10364 is reflected by beam splitter 9452 to illuminate DLP image source 10350. The beam splitter 9452 is in this case a reflective polarizer aligned with the polarizer 10341 such that the polarized illumination light 10364 is reflected by the beam splitter 9452 and the image light 9360 is transmitted by the beam splitter 9452. Quarter wave film 10351 is located adjacent to the surface of the DLP image source 10350 such that the polarization state of the image light 9360, after reflection by the DLP image source 10350, is opposite to the polarization state of the illumination light 10364. Light source 10340 and the reflective polarizer of beam splitter 9452 are angularly arranged so that illumination light 10364 is incident on DLP image source 10350 at the desired oblique angle, and so that, after reflection by the on-state micro-mirrors in DLP image source 10350, image light 9360 continues along the optical axis 934 of lens 9930. A field lens (similar to 10130 as shown in fig. 101) or other lens element may be included in the optics of fig. 103 but is not shown; in that case, the illumination light 10364 and image light 9360 may pass through the field lens or other lens element in opposite directions.
Fig. 103a is an illustration of another optics module with multiple folded optical paths that includes a DLP image source 10350 and is shown from the position of the user's eye. The light source 10340 is again provided at the side of the optics module in order to reduce the thickness of the optics module. In this case, light source 10340 is provided on the same side of lens 9930 and combiner 9320 as DLP image source 10350. The lens 9930 can optionally include one or more diffractive surfaces 9931. The light source 10340 directly illuminates the DLP image source 10350, with illumination light 10364 incident on the DLP image source 10350 at an oblique angle such that the image light 9360 continues along the folded optical axis 934 after being reflected by the on-state micro-mirrors in the DLP image source 10350. At least one light trap 10372 is also provided to absorb light 10371 reflected from off-state micro-mirrors in the DLP and thereby improve the contrast of the displayed image as seen by the user. A field lens 10332 is provided between the DLP image source 10350 and the fold mirror 10352. The illumination light 10364 can be unpolarized in this case, and the fold mirror 10352 can then consist of a fully reflective mirror coating (e.g., a coating that reflects the full visible spectrum) on a substrate. The field lens 10332 can be a single lens element as shown in fig. 103a, or it can include multiple lens elements as desired. The field lens 10332 is designed to provide a larger air gap between the field lens 10332 and the DLP image source 10350 to enable illumination light 10364 to be introduced into the optics module to directly illuminate the active area of the DLP image source 10350. By using unpolarized illumination light 10364, the optics module shown in fig. 103a has improved efficiency over the optics modules with a DLP image source 10350 shown in fig. 103 and 103b.
Fig. 103b is an illustration of another optics module with a multiply folded optical path that includes a DLP image source 10350 and is shown from the position of the user's eye 9310 in the eyebox. As in the optics modules shown in fig. 103 and 103a, the optics module of fig. 103b has the light source 10340 positioned at the side of the optics module to reduce the height and thickness of the optics module. The DLP image source 10350 is positioned opposite the light source 10340; however, they do not share an optical axis in this embodiment. The illumination light 10364 passes through a beam splitter 10352, which in this case can be a first reflective polarizer. A second reflective polarizer 10332 is positioned adjacent to the lens 9930 such that the illumination light 10364 is reflected toward the DLP image source 10350. To reflect the illumination light 10364 in this way, the first reflective polarizer (beam splitter 10352) and the second reflective polarizer 10332 are oriented with their transmission axes perpendicular to each other. A quarter wave film 10351 (or a quarter wave coating on the DLP cover glass) is provided adjacent to the DLP image source 10350 such that the polarization state of the illumination light 10364 is changed upon reflection from the DLP image source 10350 as it becomes image light 9360. As a result, the polarization of the illumination light 10364 is opposite to the polarization of the image light 9360. Thus, the illumination light 10364 is transmitted by the beam splitter 10352 and reflected by the second reflective polarizer 10332, while the image light 9360 is reflected by the beam splitter 10352 and transmitted by the second reflective polarizer 10332. The light source 10340 is oriented relative to the second reflective polarizer 10332 such that the illumination light 10364 is reflected onto the DLP image source 10350 at the oblique angle required for the image light 9360 reflected from the on-state micro-mirrors in the DLP image source 10350 to travel along the folded optical axis 934. The second reflective polarizer 10332 can extend beyond the lens 9930 to provide the tilt angle required to fully illuminate the DLP image source 10350, as shown in fig. 103b. Because the light source 10340 is located behind the beam splitter 10352, which is a reflective polarizer, the light source 10340 does not affect the image light 9360, and as a result, the light source 10340 can have a different size and orientation than the beam splitter 10352. One or more light traps 10372 are provided to absorb light 10371 reflected from off-state micro-mirrors in DLP image source 10350 and thereby improve the contrast of the displayed image. In this case, the light trap 10372 can be positioned below the second reflective polarizer 10332 because the polarization state of the light 10371 is such that it is reflected by the beam splitter 10352 and transmitted by the second reflective polarizer 10332. The combined orientation of the light source 10340, beam splitter 10352, and DLP image source 10350 provides an optics module that is relatively thin and relatively short compared to optics modules in which the image source or light source is positioned above the fold mirror or beam splitter (such as the optics module shown in fig. 103).
Fig. 97 and 98 show illustrations of optics modules similar to those shown in fig. 94, but with the addition of an eye imaging camera 979 for capturing images of the user's eye 9310 during use. In these cases, the light source 9340 and the image source 9450 are positioned opposite each other to enable the eye imaging camera 979 to be positioned directly above the lens 9330 such that the optical axis 934 is shared between the optics module and the eye imaging camera 979. By sharing a common optical axis, the eye imaging camera 979 is able to capture images of the user's eye 9310 with a view from directly in front of the user's eye 9310. The image light 9360 can then be used to illuminate the user's eye 9310 during image capture. The portion of the light reflected from the user's eye 9310 (which can be unpolarized) passes through the beam splitter 9452 before being captured by the eye imaging camera 979. Because the eye imaging camera 979 is located above the beam splitter 9452, if the beam splitter 9452 is a reflective polarizer, the polarization state of the image light 9360 will be opposite that of the light 978 captured by the eye imaging camera 979. The eye imaging camera 979 can be used to capture still images or video. The video images can be used to track the movement of the user's eye while viewing the displayed image or the see-through view of the environment. The still images can be used to capture an image of the user's eye 9310 for the purpose of identifying the user based on the pattern of the iris. In view of the small size of available camera modules, the eye imaging camera 979 can be added to the optics module with little impact on the overall size of the optics module. Additional illumination can be provided adjacent to the combiner 9320 to illuminate the user's eye. The additional illumination can be infrared so that it does not interfere with the user's simultaneous viewing of the displayed visible-light image. If the additional illumination is infrared, the eye imaging camera 979 must be able to capture images at matching infrared wavelengths. By capturing images of the user's eye from a perspective directly in front of the user's eye, an undistorted image of the user's eye can be obtained over a wide range of eye movements.
Fig. 120 shows an illustration of another embodiment of an eye imaging camera, here associated with the optics module shown in fig. 101; however, the eye imaging camera can be similarly included in other optics modules, such as those shown in fig. 99, 100, and 103b. These optics modules include an absorptive polarizer 9932 to reduce stray light, as previously disclosed herein. These optics modules can also include a diffractive surface, but diffractive surface 9931 is not necessary for the operation of the eye imaging camera 979. In this embodiment, the polarization state of the image light 9360 is the same as the polarization state of the light reflected by the user's eye and captured by the eye imaging camera 979, as both pass through the absorptive polarizer 9932. In this embodiment, eye imaging camera 979 is positioned adjacent to the beam splitter 9452 and the compact planar light source 10040, and between the beam splitter and the field lens 10130. The optical axis 12034 of the light reflected by the eye is then angled somewhat relative to the optical axis 934 of the image light 9360 so that the associated eyebox and the center of the user's eye 9310 are within the field of view of the eye imaging camera 979. In this manner, the eye imaging camera 979 captures images of the user's eye from almost directly in front of the user's eye 9310 and only slightly to the side, as shown in fig. 120. Although fig. 120 shows the eye imaging camera 979 positioned adjacent to the end of the beam splitter 9452, it is also possible to position the eye imaging camera 979 adjacent to the side of the beam splitter 9452. An advantage of this embodiment is that the eye imaging camera 979 is provided with a simple optical path, so that high image quality is possible in captured images of the user's eye 9310. It should be noted that the optics associated with the eye imaging camera must account for the effects of the lens 9930, as light reflected by the user's eye 9310 and captured by the eye imaging camera passes through the lens 9930. Moreover, the addition of the eye imaging camera 979 does not substantially increase the volume of the optics module, as can be seen by comparing fig. 120 with fig. 101.
Fig. 121 shows an illustration of a further embodiment of an optics module that includes an eye imaging camera 979. Similar to the embodiment shown in fig. 120, this optics module also includes an absorptive polarizer 9932 to reduce stray light, and a diffractive surface 9931 may be included but is not required. In this embodiment, eye imaging camera 979 is positioned between the beam splitter 9452 and the field lens 10130 and directed toward the beam splitter 9452. In this manner, light reflected by the user's eye 9310 is reflected upward by the combiner 9320, passes through the lens 9930 and the absorptive polarizer 9932, and is then reflected laterally by the beam splitter 9452 toward the eye imaging camera 979. The light captured by the eye imaging camera 979 thus has the same polarization state as the image light 9360, such that the captured light is reflected by the beam splitter 9452 and transmitted by the absorptive polarizer 9932. The light reflected by the user's eye 9310 can be unpolarized as initially reflected; however, after passing through the absorptive polarizer 9932, it becomes polarized in the same polarization state as the image light 9360. An advantage of this embodiment is that it is even more compact than the embodiment shown in fig. 120. This arrangement of the eye imaging camera 979 is also possible in the optics modules shown in fig. 99, 100, 103a, and 103b.
In the embodiments shown in fig. 120 and 121, the user's eye 9310 and the associated eyebox can be illuminated by the image light 9360, or additional light sources can be provided, for example LEDs positioned adjacent to the combiner 9320. The LEDs can provide visible or infrared light, so long as the eye imaging camera is capable of capturing at least a portion of the wavelengths of light provided by the LEDs.
In an alternative embodiment for the optics module shown in fig. 103a, the light source 10340 provides polarized illumination light 10364 and the fold reflector 10352 is a reflective polarizer plate to enable an eye camera (not shown) to be positioned above the fold mirror 10352 and along the optical axis 934 for capturing images of the user's eye 9310, similar to that shown in fig. 97 and 98. The eye camera and optics module then share a common optical axis 934, such that an image of the user's eye 9310 is captured from directly in front of the eye. In this arrangement, the polarization state of the image light 9360 is opposite to the polarization state of the light captured by the eye camera, because the image light 9360 is reflected by the fold mirror 10352 and the light captured by the eye camera is transmitted by the fold mirror 10352.
Fig. 104 shows an illustration of the optics module of fig. 95 with the addition of controllable light blocking elements to improve contrast in portions of a displayed image and also to improve the apparent opacity of a displayed object (such as an augmented reality object). The controllable light blocking element can operate by absorbing incident light or scattering incident light, as provided, for example, by an electrochromic element, a polymer-stabilized liquid crystal, or a ferroelectric liquid crystal. Examples of suitable light blocking elements include 3G switchable film from Scienstry (Richardson, TX) and switchable mirrors or switchable glass from Kent Optronics (Hopewell Junction, NY). The controllable light blocking element 10420 is shown in fig. 104 attached to the lower surface of the combiner 9320 such that the controllable light blocking element 10420 blocks the see-through light 9362 from the environment without interfering with the displayed image. As long as the combiner is flat, the controllable light blocking element 10420 is easily added adjacent to the combiner 9320 by direct attachment to the combiner or to a side wall of the optics module housing. The controllable light blocking element 10420 can have a single region that can be used to block a selectable portion of the light transmitted from the environment across the area of the combiner 9320, thereby providing a selectable optical density. Alternatively, the controllable light blocking element 10420 can provide an array of regions 10520, as shown in fig. 105, which can be individually and selectably controlled to block portions of the combiner 9320 area corresponding to regions of the displayed image where high contrast is desired. Fig. 105 shows an illustration of an array 10520 of individually controllable light blocking elements. Fig. 106a, 106b, and 106c are illustrations of how an array 10520 of individually controllable light blocking elements can be used. Fig. 106a shows how an array 10520 of individually controllable light blocking elements can be put into a blocking mode in area 10622 and into a non-blocking mode in area 10623. The blocking mode area 10622 corresponds to an area where information or objects are to be displayed, as shown in the corresponding area in the illustration of fig. 106b. Fig. 106c shows what the user sees when the image of fig. 106b is displayed with the array 10520 of controllable light blocking elements operated in the blocking mode 10622 and the non-blocking mode 10623. The user then sees the displayed information or object overlaid onto the see-through view of the environment, but in the area where the information or object is displayed, the see-through view is blocked to improve the contrast of the displayed information or object and to provide a sense of solidity to the displayed information or object.
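As a rough sketch of how such an array of regions might be driven, the hypothetical helper below puts every blocking region that overlaps the bounding box of a displayed object into blocking mode. The grid size, display resolution, and function names are assumptions for illustration, not details specified herein:

```python
def blocking_mask(grid_w, grid_h, display_w, display_h, content_boxes):
    """Return a grid of booleans: True = put that light blocking region
    into blocking mode because displayed content overlaps it."""
    mask = [[False] * grid_w for _ in range(grid_h)]
    cell_w = display_w / grid_w
    cell_h = display_h / grid_h
    for (x0, y0, x1, y1) in content_boxes:        # pixel bounding boxes of displayed objects
        for gy in range(grid_h):
            for gx in range(grid_w):
                cx0, cy0 = gx * cell_w, gy * cell_h
                cx1, cy1 = cx0 + cell_w, cy0 + cell_h
                # standard axis-aligned rectangle overlap test
                if x0 < cx1 and x1 > cx0 and y0 < cy1 and y1 > cy0:
                    mask[gy][gx] = True
    return mask

# e.g., an assumed 8x5 array of regions behind a 1280x720 display field of
# view, with one augmented reality object displayed at (200,100)-(500,400):
mask = blocking_mask(8, 5, 1280, 720, [(200, 100, 500, 400)])
```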
Further, fig. 104 shows a rear optical element 10490, which can be a protective plate or corrective optics. A protective plate can be attached to the side wall or other structural element to make the positioning of the combiner 9320 more robust and to prevent dust and dirt from reaching the inner surface of the combiner 9320. The corrective optics can incorporate the user's ophthalmic prescription (e.g., refractive power and astigmatism) to improve the viewing experience.
Head mounted displays provide users with the freedom to move their heads while viewing the displayed information. A see-through head mounted display also provides the user with a see-through view of the environment, upon which the displayed information is overlaid. While head mounted displays can include various types of image sources, image sources that provide a sequential color display typically provide higher perceived resolution relative to the number of pixels in the displayed image, because each pixel provides image content for each of the colors, and the image perceived by the user as a displayed full color image frame is actually the sum of a series of rapidly displayed sequential color sub-frames. For example, the image source can sequentially provide sub-frame images consisting of a red image, a green image, and then a blue image, all derived from a single full color frame image. In this case, the full color image is displayed at an image frame rate, with each frame comprising a series of at least three sequential color sub-frames displayed at a sub-frame rate that is at least 3 times the image frame rate. Sequential color image sources include reflective image sources such as LCOS and DLP.
Color breakup occurs in sequential color displays because the different color sub-frame images that together provide a full color frame image to the user are displayed at different times. The inventors have realized that, with sequential color display in a head mounted display, when there is movement of the head mounted display or movement of the user's eyes such that the user's eyes do not move in synchronization with the displayed image, the perceived position of each of the sequential color sub-frames is different within the user's field of view. This can occur when the user moves his head and the user's eyes do not follow the same trajectory as the head mounted display, which may be caused by the user's eyes moving in an uneven trajectory as the eyes pause to view objects in the see-through view of the environment. It can also happen if an object passes through the see-through view of the environment and the user's eyes follow the movement of that object. Due to the differences in perceived position within the user's field of view, the user sees the sequential color images slightly separated at the edges of objects. This color separation at the edges of objects is called color breakup. Color breakup can be easily perceived during certain movements, since the sequential colors appear brightly colored in areas where they do not overlap each other. The faster the user moves his head or the faster the user's eyes move across the display field of view, the more noticeable color breakup becomes, as the different color sub-frame images are separated by a greater distance within the field of view. Color breakup is particularly noticeable with a see-through head mounted display because the user is able to see the environment, and the user's eyes tend to linger on objects seen in the environment as the user turns his head. Thus, even if the user turns his head at a steady rate of rotation, the user's eye movement tends to be jerky, and this creates a condition in which color breakup is observed. As such, there are two different conditions that tend to be associated with color breakup: fast head movement and fast eye movement.
It is important to note that when the user is not moving his head and the head mounted display is not moving on the user's head, color breakup will not be observed, because the sub-frame images are provided at the same location within the user's field of view. Likewise, if the user moves his head and moves his eyes in synchronization with the head movement, no color breakup will be observed. Movement of the head mounted display therefore indicates the conditions that can lead to color breakup, and also indicates the degree of color breakup that can occur if the user moves his eyes relative to the head mounted display. Color breakup is less problematic with head mounted displays that do not provide a see-through view of the environment, because only the displayed image content is visible to the user, and the displayed image content moves in synchronization with the movement of the head mounted display. Color breakup is also not a problem if a monochrome image is displayed with a monochromatic light source (i.e., there are no sequential color sub-frames, only single color frames), since all displayed images are composed of the same color. Thus, color breakup is most problematic in head mounted displays that provide a see-through view of the environment.
Systems and methods in accordance with the principles of the present invention reduce color breakup and thereby improve the viewing experience provided by a see-through head mounted display when the user is moving through the environment.
In an embodiment, systems and methods are provided in which the head mounted display detects its own speed of movement and, in response, the resolution of the image or the bit depth of the image is reduced while the image frame rate and the associated sub-frame rate at which the image is displayed are increased correspondingly. In this way, the bandwidth associated with the display of images can be held constant despite the increased frame rate. By increasing the frame rate associated with the display of the images, the time between the display of each sequential color sub-frame image is reduced, and as a result, the perceived separation between the sequential color images is reduced. Similarly, the image frame rate can be reduced while the sub-frame rate is simultaneously increased by increasing the number of sub-frames displayed for each image frame.
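A minimal sketch of the bandwidth bookkeeping behind this tradeoff, assuming a fixed bit depth per pixel (the resolution and bit depth below are illustrative choices, not values specified herein):

```python
def display_bandwidth(pixels_per_subframe, subframe_rate_hz, bits_per_pixel=8):
    """Pixel data bandwidth in bits/s for a sequential color display."""
    return pixels_per_subframe * subframe_rate_hz * bits_per_pixel

base = display_bandwidth(1280 * 720, 180)          # 60 Hz frames x 3 sub-frames
fast = display_bandwidth(1280 * 720 // 2, 360)     # half the pixels, double the rate
assert base == fast   # bandwidth unchanged while the sub-frame separation halves
```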
In further embodiments, systems and methods are provided in which the sequential color sub-frame images are shifted laterally or vertically relative to each other by a number of pixels corresponding to the detected movement of the head mounted display. In this manner, the sequential color sub-frame images are displayed to the user such that they visually overlay one another within the display field of view. This compensates for the separation between sub-frames and thereby reduces color breakup.
In yet another embodiment, systems and methods are provided in which an eye imaging camera in the head mounted display is used to track the movement of the user's eyes, while the movement of the head mounted display may be measured simultaneously. Adaptations in the presentation of the image are then made to reduce color breakup. For example, the resolution and frame rate of the image may be changed, or the image frame rate can be reduced while the sub-frame rate is increased, corresponding to the difference between the movement of the user's eyes and the movement of the head mounted display. As another example, the sub-frames may be shifted into alignment with each other by an amount corresponding to the determined difference in movement between the user's eyes and the head mounted display. As a further example, the color saturation of the content may be reduced to reduce the perception of color breakup, because the colors, although separated in position as perceived by the user, are then less separated in color space. In yet another example, the content can be converted to a monochrome image that is displayed as a single color image (e.g., white) during the detected movement so that color breakup is invisible.
Fig. 107 shows an example of a full color image 10700 that includes an array of pixels with red, green, and blue portions. For sequential color display, three sub-frame images are created that each consist of only one color (such as only red, only green, or only blue). Those skilled in the art will recognize that the sequential color images that together provide the perceived full color image can also be comprised of cyan, magenta, and yellow sub-frames. These sub-frame images are displayed rapidly in sequence on the head mounted display so that the user perceives a full color image that combines all three colors. With a reflective display (such as LCOS or DLP), each sub-frame image is displayed by setting the reflective display to the image content associated with that sub-frame and then illuminating it with the associated color light, whereupon the light is reflected into the optics of the head mounted display and from there to the user's eye.
If the sub-frame images are precisely aligned with each other, the full color image perceived by the user will be full color up to the edges of the image and will show no color breakup. This is what is typically seen by a user of a head mounted display when the head mounted display is stationary on the user's head and the user is not moving his eyes. However, if the user moves his head, or the head mounted display moves on the user's head (such as due to vibration), and the user's eyes do not move in concert with the displayed image, the user perceives the sub-frame images as laterally (or vertically) offset with respect to each other, as shown by illustrations 10802 and 10804 in fig. 108A and 108B. The perceived offset between the displayed sub-frame images is related to the speed of movement of the head mounted display and the time between the display of sequential sub-frame images (also referred to as the sub-frame time, or 1/sub-frame rate). The lateral offset between sub-frame images perceived by the user is color breakup, and it is perceived as color fringes at the edges of objects. When the user moves his head (or eyes) quickly and the sub-frame rate is slow, the color breakup can be substantial, as illustrated in fig. 108A. If the user moves his head slowly or the sub-frame rate is higher, the color breakup is smaller, as illustrated in fig. 108B. If the color breakup is less than one pixel of lateral offset, the user will perceive no color breakup.
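This relationship between head speed, sub-frame time, and perceived offset can be captured in a small helper; a hedged sketch, with the 26 degree / 1280 pixel values anticipating the worked examples later in this section:

```python
def color_breakup_pixels(head_speed_deg_s, subframe_rate_hz, fov_deg, h_pixels):
    """Perceived offset (in pixels) between adjacent color sub-frames for a
    given head rotation speed."""
    pixel_pitch_deg = fov_deg / h_pixels            # angular size of one pixel
    subframe_time_s = 1.0 / subframe_rate_hz        # time between sub-frames
    return head_speed_deg_s * subframe_time_s / pixel_pitch_deg

print(color_breakup_pixels(3.6, 180, 26, 1280))    # -> ~1 pixel: at the visibility threshold
print(color_breakup_pixels(15.0, 180, 26, 1280))   # -> ~4 pixels: clearly visible breakup
```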
The display frame rate in a head mounted display is typically limited by the bandwidth of the processor and associated electronics, or by the power required to drive the processor and associated electronics (which translates into battery life). The bandwidth required to display images at a given frame rate is related to the number of frames displayed in a given period of time and the number of pixels in each frame image. As such, simply increasing the frame rate to reduce color breakup is not always a good solution, as it requires higher bandwidth that the processor or associated electronics may not be able to support, and power usage will increase, thereby reducing battery life. Alternatively, systems and methods in accordance with the principles of the present invention provide a method of display in which the number of pixels in each sub-frame image is reduced, thereby reducing the bandwidth required to display each sub-frame image, while the sub-frame rate is increased by a corresponding amount to maintain bandwidth while reducing color breakup. This embodiment is suited to scenarios in which sub-frame images can be provided with different numbers of pixels and at different frame rates. For example, it would be suitable for a camera and display system in which the capture conditions can be changed to provide an image with a lower resolution (which can then be displayed at a faster sub-frame rate). Still images (such as text or graphics) can be displayed at a lower image frame rate and a faster sub-frame rate to reduce color breakup, since the image content does not change rapidly. Alternatively, the image can be modified to be displayed at a lower resolution (fewer pixels) and a faster frame rate or sub-frame rate to reduce color breakup.
Fig. 109 shows an illustration of the timing of sequential color images, consisting of the sequential display of a red sub-frame image 10902, followed by a green sub-frame image 10904, followed by a blue sub-frame image 10908, in a repeating process. As long as the sub-frames together are displayed at a full color image frame rate greater than approximately 24 frames/second, such that the sequential color sub-frames are displayed at a sub-frame rate greater than 72 sub-frames/second, the human eye will perceive a full color moving image without judder. This condition is suitable for displaying video images without color breakup when the head mounted display is stationary or moving relatively slowly. However, if the user moves his head such that the head mounted display moves quickly, color breakup will occur. This is because fast head movement is typically the user's reaction to something appearing in the environment (e.g., a loud noise), such that the user's eyes search the environment during the fast head movement, which results in jerky eye movements and substantial color breakup.
Movement of the head mounted display can be detected by an inertial measurement unit (IMU), which can include accelerometers, gyroscope sensors, magnetometers, tilt sensors, vibration sensors, and the like. Only movement within the plane of the display field of view (e.g., x and y movement without z movement) is important for detecting conditions in which color breakup may occur. In an embodiment, if movement of the head mounted display is detected above a predetermined threshold that predicts color breakup (e.g., greater than 9 degrees/second), the resolution of the image may be reduced (thereby reducing the number of pixels in the image and effectively making each pixel larger within the display field of view) and the sub-frame rate may be increased accordingly. Note that the sub-frame rate can be increased without changing the image frame rate by increasing the number of sub-frames that are sequentially displayed; for example, six sub-frames can be displayed for each image frame, with each sequential color sub-frame image displayed twice. By increasing the number of sub-frames displayed for each image frame, the sub-frame rate can be increased without having to increase the image frame rate, which can be more difficult to change, as the image frame rate is typically set by the source of the image content, such as a movie. Fig. 110 shows an illustration of a faster sub-frame rate, where the display time for each sub-frame 11002, 11004, and 11008 is reduced and the time between the display of each sequential sub-frame is also reduced. Fig. 110 shows a sub-frame rate that is approximately twice as fast as the sub-frame rate shown in fig. 109. The associated image frame rate in fig. 110 can be twice as fast as in fig. 109, in which case both the image frame rate and the sub-frame rate are doubled. Alternatively, as previously described, the image frame rate can be unchanged between fig. 109 and 110, with only the sub-frame rate doubled to reduce color breakup. To make the bandwidth associated with the display of the images shown in fig. 110 approximately the same as the bandwidth associated with the display of the sub-frame images shown in fig. 109, the resolution (the number of pixels in each sub-frame image) is reduced to approximately one half of the original.
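A minimal sketch of this sub-frame repetition scheme (the function name and schedule representation are assumptions for illustration):

```python
def subframe_schedule(frame_rate_hz, repeats_per_color):
    """Sequential color schedule: returns (sub-frame order, sub-frame rate).
    Repeating each color sub-frame raises the sub-frame rate without
    changing the image frame rate."""
    order = ["R", "G", "B"] * repeats_per_color
    subframe_rate = frame_rate_hz * len(order)
    return order, subframe_rate

print(subframe_schedule(60, 1))   # (['R', 'G', 'B'], 180)
print(subframe_schedule(60, 2))   # (['R', 'G', 'B', 'R', 'G', 'B'], 360)
```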
While reducing the resolution of the displayed sub-frame images in step with an increase in the sub-frame rate may appear to degrade the image quality perceived by the user, the human eye is not able to perceive high resolution when there is substantial movement. As such, when the eye is moving, color breakup is more visible than the reduction in image resolution. Thus, the systems and methods of the present invention trade reduced image resolution for an increased image frame rate to reduce color breakup without an appreciable perceived loss of resolution, while maintaining bandwidth. This technique can be used, for example, to reduce color breakup to 1/16 of its original extent, where the resolution of the displayed image is reduced to 1/16 of the original resolution and the frame rate of the displayed image is increased by a factor of 16.
In another embodiment of the present invention, when movement of the head mounted display is detected, the sub-frame images associated with a full color frame image are digitally shifted relative to each other, in a direction opposite to the detected direction of movement and by an amount corresponding to the detected speed of movement. This effectively compensates for the perceived offset between the displayed sub-frame images that causes color breakup. The digital shift is applied only to the sub-frames that together comprise a full color frame image. This is different from typical digital image stabilization, in which full color frame images are digitally shifted relative to each other to compensate for movement, as described, for example, in U.S. patent publication 2008/0165280. By applying a digital shift to the sub-frames making up a single full color frame image, the amount of digital shift required to reduce color breakup is typically only a few pixels, even when the detected speed of movement is high. In contrast, in typical digital image stabilization, rapid movement causes an accumulated shift of the frame images that can effectively move the image outside the display field of view or limit the amount of digital stabilization that can be applied. Fig. 111a and 111b illustrate such an embodiment. Fig. 111a shows how the sequentially displayed sub-frame images 11102, 11104, and 11108 would be perceived by the user when there is substantial movement: the different colors associated with the sub-frames are individually visible along object edges, evenly spaced across the field of view in the direction of movement. In contrast, fig. 111b shows how the visibility of the sub-frames changes when they are digitally shifted to compensate for the detected movement, thereby reducing the separation between sub-frames across the field of view; as a result, the user perceives a series of full color frame images 11120 with reduced color breakup. As shown in fig. 111b, the full color frame images themselves are not digitally shifted or stabilized in response to the detected movement.
In an embodiment, the direction and speed of movement of the head mounted display is detected by the IMU sensor immediately prior to display of each full color frame image. If the speed of movement is above a predetermined threshold, the sequentially displayed color sub-frames associated with each full-color frame are digitally shifted relative to each other so that they are displayed in an aligned position within the display field of view. The magnitude of the shift corresponds to the speed of the detected movement and the direction of the shift is opposite to the direction of the detected movement.
In an example, movement of the head mounted display is detected immediately prior to the display of the first sub-frame associated with a full color frame image. The first sub-frame associated with the full color frame image can then be displayed without shifting. The second sub-frame can be shifted by an amount and direction that compensates for the movement that occurs between the display of the first and second sub-frames, and then displayed. The third sub-frame can be shifted by an amount and direction that compensates for the movement that occurs between the display of the first and third sub-frames, and then displayed. The movement of the head mounted display is then detected again to determine the shifts to be applied to the sub-frames associated with the next full color frame image. Alternatively, the sub-frames can be shifted by an amount that compensates for only a portion of the movement that occurs between sub-frames.
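The per-sub-frame compensating shift described in this example can be sketched as follows, assuming a constant velocity over the frame as reported by the IMU (the function and its parameters are hypothetical, for illustration only):

```python
def subframe_shifts(velocity_deg_s, subframe_rate_hz, fov_deg, h_pixels, n_subframes=3):
    """Pixel shift to apply to each sub-frame so the sub-frames of one
    full color frame land on top of each other. The shift opposes the
    detected motion; the first sub-frame is the unshifted reference."""
    pixel_pitch_deg = fov_deg / h_pixels
    drift_px_per_subframe = velocity_deg_s / subframe_rate_hz / pixel_pitch_deg
    return [round(-i * drift_px_per_subframe) for i in range(n_subframes)]

# 15 deg/s head rotation, 180 Hz sub-frames, 26 deg / 1280 px display:
print(subframe_shifts(15.0, 180, 26, 1280))   # -> [0, -4, -8]
```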
In a further example, the direction and speed of movement of the head mounted display are detected immediately prior to the display of a reference sub-frame. Subsequent sub-frames are then shifted to compensate for the movement that occurs between the time the reference sub-frame is displayed and the time each subsequent sub-frame is displayed. The time between the display of the reference sub-frame and the display of a subsequent sub-frame may be up to 5 frame times.
The advantages of this embodiment are illustrated by examining the effective frame rates associated with color breakup and with blurring of the image. If a full color image is displayed at an image frame rate of 60 frames/second, the sub-frames will typically be displayed at a sub-frame rate of 180 sub-frames/second to provide three sub-frames for each image frame. The described systems and methods effectively shift the sub-frames so that they are positioned on top of each other, so that color breakup is reduced to an amount corresponding to 180 frames/second. At the same time, the blur perceived by the user between image frames corresponds to 60 frames/second, since each of the sub-frames is derived from the same full color frame image.
In a further embodiment, a digital shift of the sub-frames based on the movement detected immediately prior to the display of each full color frame image can be combined with digital image stabilization applied between the full color frame images.
In yet another embodiment, the method of digitally shifting sub-frames is combined with the method of increasing the frame rate while reducing the resolution of the image. These two methods of reducing color breakup operate on different aspects of the image processing associated with displaying images in a head mounted display; as such, they can be applied independently and in either order in an image processing system associated with a processor.
In yet another embodiment, the head mounted display includes a camera for detecting the user's eye movement relative to the movement of the head mounted display (e.g., as described herein). The eye camera can be used to measure the speed and direction of eye movement. In an embodiment, the resolution of the eye camera can be relatively low (e.g., QVGA or VGA) so that the frame rate can be relatively high (e.g., 120 frames/second) without introducing bandwidth limitations. The detected eye movement relative to the head mounted display can be used to determine when to apply a method of reducing color breakup, including, for example, increasing the frame rate or digitally shifting the sub-frames, as previously described herein. For example, if the detected eye movement is above a predetermined angular velocity, the resolution of the displayed image may be reduced and the sub-frame rate increased. In another example, the detected eye movement can be used to determine the amount and direction of the digital shifts applied to the sub-frames within an image frame prior to their display. In yet another example, the measured eye movement can be used in conjunction with the detected movement of the head mounted display to determine the amount and direction of the digital shifts applied to the sub-frames within an image frame prior to their display. The amount and direction of the digital shift applied to a sub-frame may correspond to the difference between the detected movement of the head mounted display and the detected eye movement of the user. Detection of the condition in which the user's eyes are moving in one direction while the head mounted display is moving in the opposite direction indicates a scenario in which especially severe color breakup can occur. In this case, a combined method for reducing color breakup is advantageous.
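A hedged sketch of combining the two measurements; the sign convention (positive values for movement in a common reference direction) is an assumption for illustration:

```python
def combined_drift_deg_s(head_velocity_deg_s, eye_velocity_deg_s):
    """Effective drift of the displayed image across the retina: the
    difference between head (display) movement and eye movement. Eyes
    moving opposite the display make the drift, and the breakup, worse."""
    return head_velocity_deg_s - eye_velocity_deg_s

print(combined_drift_deg_s(10.0, 10.0))    # 0: eyes track the display, no breakup
print(combined_drift_deg_s(10.0, -5.0))    # 15: opposing movement, worst case
# The result can be fed to subframe_shifts() above in place of the raw
# IMU velocity to size the per-sub-frame digital shift.
```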
In yet another embodiment, when the movement of the head mounted display or the eye movement is detected to be above a predetermined threshold, the image is changed from a full color image displayed in sequential color to a monochrome image. The monochrome image can be comprised of the combined image content from each of the sequential color sub-frames associated with each full color image frame. The monochrome image can be a grayscale image or a luminance image, where the luminance code value (Y) for each pixel can be calculated, for example, as given below in equation 1, as derived from http://en.wikipedia.org/wiki/Grayscale with reference to the CIE 1931 standard for digital photography:
Y = 0.2126 R + 0.7152 G + 0.0722 B    (Equation 1)
Where R is the red code value for a pixel, G is the green code value, and B is the blue code value. Alternatively, a monochrome image can be composed of a single color image (such as the green sub-frame image), and this image can be displayed in a single color or, preferably, with all sequential colors (e.g., red, green, and blue) applied simultaneously, such that the illumination applied to the reflective image source is white light and, as a result, the displayed image appears to be a grayscale image.
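A minimal sketch of this monochrome conversion using Equation 1 (the helper names and the nested-list frame representation are assumptions for illustration):

```python
def luminance(r, g, b):
    """Equation 1: luminance code value from RGB code values."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def to_monochrome(rgb_frame):
    """Collapse a full color frame into a single luminance sub-frame that
    can be shown with all colors illuminated simultaneously."""
    return [[luminance(r, g, b) for (r, g, b) in row] for row in rgb_frame]

frame = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
print(to_monochrome(frame))   # approximately [[54.2, 182.4], [18.4, 255.0]]
```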
Several specific examples are provided below.
Example 1:
for a 26 degree display field of view and a 1280 pixel wide image, one pixel occupies 0.020 degrees within the display field of view. If the frame rate of the full-color image is 60 Hz, the sub-frame time is 0.006 seconds in the case of three color sequential sub-frame images. The rotation speed of the head mounted display required to produce one pixel of color breakup is then 3.6 degrees/second. If the number of horizontal pixels in the display field of view is reduced to 640 pixels and, at the same time, the frame rate of the full-color image is increased to 120 Hz, then, in the case of three color sequential sub-frame images, the sub-frame time is reduced to 0.003 seconds, the angular size of a pixel is increased to 0.041 degrees, and the rotation speed that produces one pixel of color breakup becomes 14.6 degrees/second.
Example 2:
for a 26 degree display field of view and a 1280 pixel wide image, one pixel is 0.020 degrees within the display field of view. If the minimum color breakup that the user can detect is one pixel wide, a rotation speed of more than 3.6 degrees/second is required before color breakup is detected by the user (if the sub-frame rate is 180 Hz). Although color breakup is an analog effect, the user's eye does not have the resolution to detect the color fringes that exist during movement below this speed. Therefore, below this rotation speed, color breakup management is not required.
Example 3:
for a 26 degree display field of view and a 1280 pixel wide image, one pixel is 0.020 degrees within the display field of view. If the user is able to detect color breakup as small as one pixel wide, a rotation speed of 3.6 degrees/second would require that the sub-frames be shifted one pixel with respect to each other (if the sub-frame rate is 180 Hz) to align the sub-frames so that the color breakup is not visible to the user. If the user turns his head at 15 degrees/second, the sub-frames will need to be shifted 4 pixels relative to each other to align the sub-frames so that the color breakup is not visible. If the image frame begins with the display of the red sub-frame image, no digital shift is required for the red sub-frame image; a 4 pixel shift is required for the green sub-frame image; and an 8 pixel shift is required for the blue sub-frame image. The next red sub-frame, associated with the next image frame, will then effectively be shifted by 12 pixels within the field of view relative to the previous red sub-frame.
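The arithmetic in Examples 1 through 3 can be reproduced directly; the sketch below (function and variable names are illustrative) computes the pixel angular size, sub-frame time, critical rotation speed, and per-sub-frame shifts.

```python
def breakup_numbers(fov_deg, pixels_h, frame_rate_hz, n_colors=3):
    pixel_deg = fov_deg / pixels_h            # angular size of one pixel
    subframe_s = 1.0 / (frame_rate_hz * n_colors)
    critical_deg_s = pixel_deg / subframe_s   # speed giving 1 pixel of breakup
    return pixel_deg, subframe_s, critical_deg_s

# Example 1: 1280 px at 60 Hz vs 640 px at 120 Hz.
print(breakup_numbers(26, 1280, 60))  # ~(0.020 deg, 0.0056 s, 3.7 deg/s; text rounds to 3.6)
print(breakup_numbers(26, 640, 120))  # ~(0.041 deg, 0.0028 s, 14.6 deg/s)

# Example 3: shifts at 15 deg/s head rotation with a 180 Hz sub-frame rate.
pixel_deg, subframe_s, _ = breakup_numbers(26, 1280, 60)
shift_px = round(15 * subframe_s / pixel_deg)
print([i * shift_px for i in range(3)])  # red 0, green 4, blue 8 pixels
# The next frame's red sub-frame lands at 12 pixels, as described above.
```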
Each of the color breakup reduction techniques described herein may be used in combination with each of the other color breakup reduction techniques.
The inventors have appreciated that fitting a see-through computer display into certain head-mounted form factors is a challenge, even when the display is reduced in size as described herein. A further advantage provided by an optics module comprising multiple folds is the ability to introduce twists at the folded surfaces to modify the orientation of different portions of the optics module relative to each other. This can be important when the optics module needs to fit into a thin curved eyeglass frame, visor, or helmet, where the increased width associated with the upper portion of the multiple-fold optics module can make it more difficult to fit in a configuration that is not parallel to the combiner. As such, another aspect of the invention relates to twisting certain optical components within a see-through computer display so that the optical components better fit certain form factors (e.g., eyeglasses), yet continue to perform as a high quality image display. In an embodiment, an optics system (e.g., the optical systems described herein with respect to figs. 6 and 93-106) having a two-mirror system to fold the optical path is provided, in which an image-producing module (e.g., an upper module) including a first image light reflecting surface is rotated about a first optical axis leading from the upper module to a lower module, in a direction that causes the upper module to fit more compactly into the frame of the head-worn computer. At the same time, to avoid distorting the image provided to the user's eye, image delivery optics (e.g., a lower module) including a second image light reflecting surface is rotated about a second optical axis leading to the user's eye, in the opposite direction relative to the image, thereby introducing a compound angle between the first and second image light reflecting surfaces. As long as the first and second optical axes are perpendicular to each other in the untwisted state, distortion in the image associated with the twist about the first axis is compensated by a twist of the same angular magnitude about the second axis, so that the image presented to the user's eye is not distorted by the twists.
FIG. 112 illustrates a head-worn computer with a see-through display in accordance with the principles of the present invention. The head-worn computer has a frame 11202 that houses/holds the optics modules in position in front of the user's eyes. As illustrated in fig. 112, the frame 11202 holds two optics modules 11204 and 11208, each of which has upper and lower optics modules. The optics module 11204 is non-twisted and is presented to illustrate the difficulty of fitting a non-twisted version into the frame. Note that the dashed box representing the outer boundary of the optics module 11204 does not fit within the boundary of the frame 11202. Fitting the optics module 11204 into the frame 11202 would generally require the frame 11202 to become thicker from front to back, which would offset the eyewear form factor further from the user's face; this is less desirable and less compact. In contrast, the optics module 11208 is a twisted optics module, where the upper module is twisted (or rotated) to better fit into the confines of the frame 11202, as shown in fig. 112. Fig. 113 shows a more detailed illustration of the twists imparted within the multiple-fold optics of the optics module 11208. The upper module 11214 is twisted relative to the lower module 11218 about the optical axis 934 to better fit into the frame 11202; it is this twist that enables the optics module 11208 to fit within the frame 11202, as shown in fig. 112, and as a result the frame 11202 can be thinner and more compact than if a non-twisted optics module were used. To avoid distorting the image presented to the user, a second twist is required to introduce a compound angle between the first reflective surface 11225 in the upper module 11214 and the second reflective surface 11226 in the lower optics module 11218. The second twist is imparted to the second reflective surface about the optical axis 933, in the opposite direction relative to the image from the twist in the upper module 11214. In this manner, the effect of the increased width of the upper portion of the multiple-fold optics can be reduced when fitting the optics module into a curved structure (such as a spectacle frame, a goggle frame, or a helmet structure). It is preferred, but not required, that the optical axis 934 be perpendicular to the optical axis 933, so that the magnitude of the angular twist imparted to the first reflective surface 11225 can be the same as the twist imparted to the second reflective surface 11226, thereby providing an image to the user's eye that is not distorted by the twists.
Another aspect of the invention relates to the configuration of the optics and electronics in the head-mounted frame such that the frame maintains a minimal form factor resembling standard eyeglasses. In an embodiment, a see-through optical display with multiple-fold optics (e.g., as described herein) to provide reduced thickness may be mounted in the frame. In embodiments, the multiple-fold optical configuration may be twisted (e.g., as described herein) at the fold surfaces to better fit the optics into the frame. In embodiments, the electronics that operate the display (processor, memory, sensors, etc.) are positioned between, above, below, or to the sides of the optical modules, and oriented to provide a reduced thickness in the frame to match the thickness of the optics. Orienting the board may be particularly important when the board includes large components that limit the width of the board (such as, for example, a processor chip). For example, the electronics board, and the components on it, may be mounted in a vertical orientation between and/or above the optical modules to reduce the thickness of the electronics when mounted in the frame. In another configuration, the board may be mounted between the optical modules at a height near the top of the optical modules to minimize the height of the eyeglass frame. In yet another configuration, the board may be mounted so that it extends over the optical modules to minimize the thickness of the frame. In a further embodiment, the boards may be mounted in an angled configuration to enable the thickness and height of the frame to be reduced simultaneously. In an embodiment, the electronics may be divided among multiple boards; for example, a longer board above a shorter board, with the space between the optical modules used for the lower board. This configuration uses some of the space between the eyes for some of the electronics.
Fig. 114 illustrates a top view and a front view of a configuration including an optical module 11208, an electronics board 11402, and a heat sink 11404. The board 11402 is mounted in a vertical orientation to maintain a thin frame section across the user's brow. As illustrated, the optical module 11208 includes a second reflective surface 11226 in front of the user's eye and an upper module 11214. The upper module may have a flat reflective surface, and the upper module 11214 may be rotated or twisted relative to the second reflective surface 11226, as described herein. The second reflective surface 11226 may be a partial mirror, notch filter, holographic filter, or the like, to reflect at least a portion of the image light to the user's eye while allowing the scene light to transmit through to the eye.
Fig. 115 illustrates a front view of a configuration including the optics illustrated in fig. 114; however, the electronics board 11402 is mounted in the space between the optical modules, at a height similar to that of the optical modules. This configuration reduces the overall height of the frame.
Fig. 116 illustrates a front view of a configuration including the optics illustrated in figs. 114 and 115. The layout of the electronics in this configuration is accomplished with multiple boards 11402, 11602, and 11604. The multiple-board configuration allows the boards to be thinner from front to back, thereby enabling the brow section of the frame to be thinner. A heat sink 11404 (not shown in fig. 116) may be mounted on the front face between the optical modules. This arrangement also allows heat to be drawn away in a direction away from the user's head. In an embodiment, the processor, which is the primary heat generator in the electronics, is mounted vertically (e.g., on board 11604), and the heat sink 11404 may be mounted in front so that it contacts the processor. In this configuration, the heat sink 11404 spreads the heat away from the user's head toward the front of the device. In other embodiments, the processor is mounted horizontally (e.g., on board 11602 or 11402). In embodiments, the board(s) may be tilted from front to back (e.g., 20 degrees) to create an even thinner brow section.
Another aspect of the invention relates to hiding the optics so that the optical modules, the electronics, and the boards are not clearly visible to a person viewing the user. For example, in the configurations described herein, the optical module is suspended from the brow section of the head-worn device frame, and the electronics board(s) also hang downward so as to partially block the see-through view. To hide these features, and thereby give the head-worn computer the appearance of conventional eyeglasses, an external lens may be included in the eyeglass frame such that the external lens covers the portion of the frame containing the optical modules and electronics, and the external lens may include a progressive tint from top to bottom. In an embodiment, the tint may have lower transmission at the top, to hide the portion of the frame comprising the optical modules and electronics boards, and higher transmission below the hidden components, in order to maintain a high see-through transmission.
One aspect of the invention provides multiple-fold optics to reduce the thickness of the optics module, vertically oriented or angled electronics to reduce the mounting thickness of the electronics, and a progressively tinted outer lens to conceal portions of the optics and electronics. In this manner, the head-worn computer is provided with a thinner form factor and the appearance of conventional eyeglasses.
Another aspect of the invention relates to an intuitive user interface mounted on HWC 102, where the user interface includes tactile feedback to provide the user with indications of engagement and change. In an embodiment, the user interface is a rotating element on the temple section of the glasses form factor of HWC 102. The rotating element may include detented segments so that it positively engages at predetermined angles, providing tactile feedback to the user. As the user turns the element, it 'clicks' through its predetermined steps or angles, and each step causes the displayed user interface content to change; for example, the user may cycle through a set of menu items or selectable applications. In an embodiment, the rotating element further comprises a selection element, such as a pressure sensitive section that the user can press to make a selection.
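A minimal sketch of such a detented rotating element follows; the class name, detent angle, and menu items are assumptions, not part of the described device.

```python
# Hypothetical model of the segmented rotating element: each 'click'
# (one detent step) changes the displayed user interface content.

class RotaryMenu:
    def __init__(self, items, degrees_per_detent=30.0):
        self.items = items
        self.degrees_per_detent = degrees_per_detent
        self.index = 0

    def on_rotation(self, delta_degrees: float) -> str:
        """Advance one menu entry per detent step; return the item to display."""
        steps = int(delta_degrees // self.degrees_per_detent)
        self.index = (self.index + steps) % len(self.items)
        return self.items[self.index]

    def on_press(self) -> str:
        """The pressure sensitive section acts as the selection element."""
        return f"selected: {self.items[self.index]}"

menu = RotaryMenu(["Navigation", "Camera", "Messages", "Settings"])
print(menu.on_rotation(60))  # two clicks forward -> "Messages"
print(menu.on_press())
```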
Fig. 117 illustrates a human head wearing a head-worn computer in a glasses form factor. The eyewear has a temple section 11702 and a rotational user interface element 11704. The user can rotate the rotational element 11704 to cycle through options presented as content in the see-through display of the glasses. Fig. 118 illustrates several examples of different rotational user interface elements 11704a, 11704b, and 11704c. Rotational element 11704a is mounted at the front end of the temple and has significant side and top exposure for user interaction. Rotational element 11704b is mounted farther back and also has significant exposure (e.g., 270 degrees of touch). Rotational element 11704c has less exposure and is exposed for interaction on top of the temple. Other embodiments may have side or bottom exposure.
As discussed above, specially designed lenses may be used to conceal portions of the optics module and/or the electronics. Fig. 119 illustrates an embodiment of one such lens 11902. The two lenses 11902 are illustrated with a Base 6 curvature and a thickness of 1.3 mm, but other geometries, such as geometries with different curvatures and thicknesses, can be used. Lens 11902 is shaped to look like a conventional eyeglass lens, with features including special tinting and a magnetic mounting arrangement, so that portions of lens 11902 conceal the opaque structures (such as electronics) behind the lens.
Lens 11902 includes blind holes 11904 for mounting a magnetic attachment system (not shown). The magnetic attachment system may include magnets, magnetic materials, dual magnets, oppositely polarized magnets, etc., to enable the lens 11902 to be removed from and reinstalled on a head-worn computer (e.g., HWC 102). In the magnetic attachment system, the lens 11902 is magnetically secured in the frame of the HWC. Magnets can be inserted into the blind holes 11904 or into the frame of the HWC at corresponding mating locations, as long as the mating location on either the lens 11902 or the frame of the HWC includes a magnet and the other location has a similarly sized piece of magnetic material, or another magnet oriented to attract the lens 11902 and secure it in the frame of the HWC. In addition, the frame of the HWC may provide a guide feature to position the lens 11902 in front of the optics module in the HWC. The guide feature can be a ridge or flange in which the lens sits, so that lens 11902 cannot move laterally while held in place by the magnetic attachment system. In this manner, the only function of the magnetic attachment system is to secure lens 11902 in place, while the guide features position lens 11902. The guide features can be made robust, to hold lens 11902 in place when dropped or even when subjected to an impact, while the force provided by the magnetic attachment system remains relatively low (to enable lens 11902 to be easily removed by the user for cleaning or replacement). Easy replacement enables lenses having different optical characteristics (e.g., polarized, photochromic, different optical densities) or different appearances (e.g., color, level of tinting, mirror coating) to be exchanged by the user as desired.
Fig. 119 also illustrates an example of how lens 11902 may be tinted to hide, or at least partially hide, certain components that are not see-through (e.g., opaque components), such as electronics, electronics boards, auxiliary sensors (such as infrared cameras), and/or other components. As illustrated, the blind holes 11904 may also be hidden or at least partially hidden by the tinting. As illustrated in fig. 119, the top portion 11908 (approximately 15 mm tall as illustrated) may be more heavily tinted (e.g., 0 to 30% transmission) or mirrored to better hide the non-see-through portions of the optics and other components. Below the top portion 11908, lens 11902 may have a gradient zone 11909, in which the level of tinting gradually changes from top to bottom and blends into the lower region 11910. The lower region 11910 covers the area through which the user primarily views the see-through surroundings, and this region may be tinted to suit the viewing application. For example, if the application requires high see-through transmission, the lower region 11910 may be tinted to between 90% and 100% transmission. If the application calls for some see-through tint, the lower region may be more heavily tinted or mirrored (e.g., 20% to 90%). In embodiments, the lower region 11910 may be a photochromic layer, an electrochromic layer, a controllable mirror, or another variable transmission layer. In embodiments, all or a portion of the lens may have a variable transmission layer, such as a photochromic layer, an electrochromic layer, or a controllable mirror, or the like. In an embodiment, any or all regions of lens 11902 may include polarization.
Another aspect of the invention relates to cooling the internal components through micro-holes that are sized to be large enough to allow gas to escape, but small enough to keep water from passing through (e.g., 25 μm, 0.2 mm, 0.3 mm, etc.). The micro-holes may be included in the heat sink, for example. The heat sink, or another area, may be populated with hundreds or thousands of such micro-holes. The micro-holes may be, for example, laser cut or CNC drilled holes that are small enough to keep large water droplets out of the device, but allow air to be exchanged through the heat sink. In addition to increasing the surface area of the heat sink, matching holes on the underside of the frame enable convective cooling, where cool air is drawn in from the bottom as heat rises from the top (like a chimney); as such, a heat sink with micro-holes is preferably located on the top or sides of the frame of the HWC. In an embodiment, the micro-holes are aligned in slots formed by fins on top of the heat sink. The exiting air then flows through the slots, thereby increasing heat transfer from the fins. In embodiments, the micro-holes may be angled, so that the length of the holes in the heat sink material is increased and the air flow can be directed away from the user's head. Further, the micro-holes may be of a size that causes turbulence in the air flow as it passes through them, where the turbulence substantially increases the heat transfer associated with the air flow through the heat sink. In an embodiment, the thermal management system of the HWC 102 is passive, including no active cooling system such as a fan or other energized mechanical cooling system to force air through the micro-holes. In other embodiments, the thermal management system includes mechanical cooling (such as one or more fans or other systems) that is activated to force air through the HWC and the micro-holes.
Another aspect of the invention relates to discovering items in the surrounding environment based on similarity to an identified item. Augmented reality is often strictly defined in terms of what is included and how it is used; it would be advantageous to provide a more flexible interface, so that people can use augmented reality to do anything they want it to do. An example is the use of the HWC camera, image analysis, and display to specify items to be discovered. Fig. 122 shows an illustration of an image 12210 of a scene containing an object that the user wants the HWC to help find as the user moves through the environment. In this example, the user has circled the object being sought 12220, in this case a cat. The HWC then analyzes the circled region of the image for shape, pattern, and color to identify the target to search for. The HWC then uses the camera to capture images of the scene as the user walks. The HWC analyzes each image and compares the shapes, patterns, and colors in the captured scene image with those of the target. When there is a match, the HWC alerts the user to the potential discovery. The alert may be a vibration, a sound, or a visual cue in an image displayed in the HWC, such as a circle, a flash, or an indicator corresponding to the location of the potential discovery in the scene. This method provides a versatile and flexible augmented reality system in which an item is visually described and the HWC is given a command to "find something like this." Examples of ways to identify the object to be searched for include: circling an item in a previously captured image stored on the HWC (as shown in fig. 122); pointing to an item in a physical image held in front of the camera in the HWC; pointing to an item in a live image provided by the camera in the HWC and viewed in the see-through display of the HWC; and so on. Alternatively, text may be entered into the HWC with a command to "find words like this" (e.g., street signs or items in a store), and the HWC can then search for the text as the user moves through the environment. In another example, the user can indicate a color with a "find a color like this" command. The camera used to search for items may even be a hyperspectral camera in the HWC, so that infrared or ultraviolet light can be used in the search, thereby extending the visual search beyond what the user can see. This method can be extended to any pattern that the user can identify to the HWC, such as sounds, vibrations, movements, etc., and the HWC can then search for the identified target pattern using any of the sensors included in the HWC. As such, the discovery system provided by the invention is very flexible and can react to any pattern that can be recognized by the sensors in the HWC; all the user has to do is provide an example of the pattern to look for as a target. In this way, the discovery system assists the user, and the user can do other things while the HWC looks for the target. The discovery system can be provided as a mode of operation in the HWC, where the user selects the mode and then enters a pattern to be used by the HWC as a search target. Examples of items that can be searched for include: household objects, animals, plants, street signs, weather activity (e.g., clouds), people, voices, singing, bird calls, specific sounds, spoken words, temperatures, wind direction shifts as identified by wind sounds relative to the compass orientation, vibrations, objects to purchase, brand names in a store, labels on items in a warehouse, barcodes or numbers on objects, and colors of objects to match. In further embodiments, the rate of search (e.g., how often the analysis is performed) can be selected by the user, or can be selected automatically by the HWC in response to the rate of change of conditions related to the target. In yet another embodiment, the sensors in the HWC include a range finder, or a camera capable of generating a depth map, to measure the distance to an object in an image captured by the camera. The HWC can then analyze the image along with the distance to determine the size of the object. The user can then input the size of the object into the discovery system as a characteristic of the target pattern, to enable the HWC to more accurately identify potential discoveries.
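As one possible concrete realization (the patent names no particular algorithm), the "find something like this" loop could be sketched with normalized template matching standing in for the shape/pattern/color comparison; the file names, rectangle, and threshold below are assumptions.

```python
import cv2

def make_target(image, circled_rect):
    """Crop the region the user circled to serve as the search target."""
    x, y, w, h = circled_rect
    return image[y:y + h, x:x + w]

def search_scene(scene_bgr, target_bgr, threshold=0.8):
    """Return the best match location if it exceeds the threshold, else None."""
    result = cv2.matchTemplate(scene_bgr, target_bgr, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None

# As the user walks, each camera frame is compared with the target; a hit
# would trigger the vibration, sound, or visual cue described above.
target = make_target(cv2.imread("stored_image.png"), (200, 150, 80, 60))
frame = cv2.imread("camera_frame.png")  # stand-in for one live camera frame
hit = search_scene(frame, target)
if hit is not None:
    print("potential discovery at", hit)
```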
Another aspect of the invention relates to assisting a person in reading text presented in physical form (such as a book or magazine), on a computer screen or a telephone screen, or the like. In an embodiment, a camera on the HWC images the page, and a processor in the HWC recognizes the words on the page. A line, box, or other indicator may be presented in the HWC to indicate which words are being captured and recognized. The user thus always views the page of words through the see-through display with an indication of which words have been recognized. The recognized words can then be translated, or converted from text, and presented to the user in the see-through display. Alternatively, the recognized words can be converted from text to speech, which is then presented to the user through a head-mounted speaker, headphones, a visual display, and so forth. This gives the user a better understanding of the accuracy of the text recognition underlying the converted text or converted speech.
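A hedged sketch of this read-assist flow is given below, using pytesseract for word recognition and pyttsx3 for speech purely as stand-ins; the patent does not name libraries, and the file name is illustrative.

```python
import cv2
import pytesseract
import pyttsx3

page = cv2.imread("page_from_hwc_camera.png")
data = pytesseract.image_to_data(page, output_type=pytesseract.Output.DICT)

words = []
for i, word in enumerate(data["text"]):
    if word.strip():
        words.append(word)
        # Box each recognized word so the user, looking through the
        # see-through display, sees which words have been identified.
        x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
        cv2.rectangle(page, (x, y), (x + w, y + h), (0, 255, 0), 1)

engine = pyttsx3.init()  # speech through a head-mounted speaker or headphones
engine.say(" ".join(words))
engine.runAndWait()
```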
In a further aspect of the invention, the combiner is provided with a magnetic attachment structure to make the combiner removable. In optics associated with the HWC 102 (such as, for example, the optics shown in fig. 6), it is important that the combiner 602 be accurately positioned and rigidly held below the frame of the HWC and the upper optical module 202 located within the frame. At the same time, the combiner 602 may become damaged so that it needs to be replaced, or it may need periodic cleaning, so it is useful for the combiner to be removable. Fig. 123 shows an illustration of a cross-section of a single combiner 12360 with a magnetic attachment structure, shown from the side to show the angle of the combiner 12360. Fig. 124 shows an illustration of two combiners 12360, where magnetic attachment structures attach the combiners 12360 to the frame of the HWC 12350, shown from the front of the HWC. The combiner 12360 has two or more pins 12365 attached to the combiner 12360 such that the pins have parallel axes. The pins 12365 are shown inserted into holes drilled through the combiner 12360 and attached in place with an adhesive, such as a UV-cured adhesive. The pins 12365 are made of a magnetic material, such as, for example, stainless steel. The pins 12365 extend into parallel bores in the frame of the HWC 12350 so that the combiner 12360 is held in a fixed position relative to the frame of the HWC 12350. The attachment and orientation of the pins 12365 establish the angle between the combiner 12360 and the optics in the frame of the HWC 12350. Magnets 12370 are embedded in the frame of the HWC 12350 such that the pins 12365 are attracted to the magnets 12370, and the pins 12365 and the attached combiner 12360 are thereby held in place relative to the frame of the HWC 12350. The magnets 12370 are selected such that the force exerted by the magnets 12370 on the pins 12365 is strong enough to hold the combiner 12360 in place during normal use, but weak enough that the combiner 12360 can be removed by the user. Because the pins 12365 and associated bores are parallel, the combiner 12360 can be easily removed for cleaning, or replaced if damaged. To provide a more rigid and repeatable connection between the combiner 12360 and the frame of the HWC 12350, the pins can fit into extended tight bores in the frame of the HWC 12350. In addition, the pins 12365 can include flanges, as shown, that seat onto associated flat surfaces of the frame of the HWC 12350 or flat surfaces of the magnets 12370, to further establish the vertical position of the combiner 12360 and the angle of the combiner 12360. In a preferred embodiment, the magnet 12370 is a ring magnet and the pin 12365 extends through the center of the ring magnet. The magnets 12370 may also be included in an insert (not shown) that further includes a precision bore for accurately aligning and guiding the pins 12365. The insert can be made of a hardened material (such as ceramic) to provide the pins 12365 with bores that resist wear during repeated removal and reinstallation of the combiner 12360. The pins can be accurately positioned within the combiner using a jig that holds the pins and the combiner. The holes for the pins in the combiner are made larger than the pins, so that there is clearance to allow the combiner and pins to be fully positioned by the jig. An adhesive (such as a UV-cured adhesive) is then introduced into the holes and cured in place to secure the pins to the combiner in the position established by the jig.
In a further embodiment, the combined structure of the pins 12365 and the combiner 12360 is designed to break in the event of a high impact force, to thereby protect the user from injury. The pins 12365 or the combiner are designed to break at a preselected impact force that is less than the impact force required to break the frame of the HWC 12350, so that a damaged combiner 12360 with its attached pins 12365 can simply be replaced. In yet another embodiment, by providing a method for easily replacing the combiner 12360, different types of combiners can also be offered to the user, such as: polarized combiners, combiners with different tints, combiners with different spectral properties, combiners with different physical properties, combiners with different shapes or sizes, combiners that are partially reflective mirrors or notch mirrors, and combiners with features to block facial glow, as previously described herein.
In a typical computer display system, automatic brightness control is a one-dimensional control: when the ambient brightness is high, the display or light source brightness is increased, and when the ambient brightness is low, the display or light source brightness is decreased. The inventors have found that this one-dimensional paradigm has significant limitations when using a see-through computer display. Aspects of the invention relate to improving the performance of a head-worn computer by having the head-worn computer know the relative brightness of the content to be presented in addition to the brightness of the surrounding environment, and then adjusting the brightness of the content based on both factors to produce a viewing experience with appropriate viewability.
One aspect of the invention relates to improving the viewability of content displayed in a see-through head mounted display. Viewability involves a number of factors. The inventors have found that, in addition to image resolution, contrast, sharpness, etc., the viewability of an image presented in a see-through display is affected by: (1) the surrounding scene forming the background for the image, and (2) the relative or apparent brightness of the displayed image. If the user is looking at a bright scene, for example, the presented content may be washed out or difficult to see if the display settings are unchanged, and if the content itself is relatively low in brightness (e.g., the content has many dark or black areas in it), it continues to be washed out unless the content is also changed. In this case, the brightness of the display may be increased even beyond what the ambient brightness alone would indicate, in order to compensate for the dark content of the image. As an additional example, if the user is looking toward a dark scene, the presented content may be perceived by the user as too bright, washing out the scene or making it difficult to interact with the scene, if the display settings are not changed. In addition, if the content itself is relatively bright (e.g., consists primarily of light or white areas), the content may require further alteration to obtain proper viewability. In this case, the display brightness may be reduced further than ambient lighting conditions alone would indicate, to make the viewability of the content appropriate. In an embodiment, the head-worn computer is adapted to measure the scene forming the background for the content being presented, to know the relative brightness of the content itself to be presented (i.e., the intrinsic content brightness), and then to adjust the presentation of the content based on both the scene brightness and the intrinsic content brightness, thereby achieving the desired content viewability.
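A minimal sketch of this two-factor control follows, assuming illustrative constants and a normalized 0..1 brightness setting; a real system would calibrate both factors.

```python
import numpy as np

def display_brightness(ambient_lux: float, content_rgb: np.ndarray) -> float:
    """Set display brightness from scene brightness AND intrinsic content brightness."""
    ambient = min(ambient_lux / 10000.0, 1.0)  # normalized scene level (assumed scale)
    luma = content_rgb @ np.array([0.2126, 0.7152, 0.0722])
    content = float(luma.mean()) / 255.0       # intrinsic content brightness
    # Dark content over a bright scene is driven harder than the ambient level
    # alone would suggest; bright content over a dark scene is driven softer.
    correction = 1.5 - content
    return float(np.clip(ambient * correction, 0.05, 1.0))

dark_ui = np.zeros((100, 100, 3)) + 30  # content with many dark areas
print(display_brightness(ambient_lux=8000, content_rgb=dark_ui))  # driven near maximum
```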
While embodiments herein use the terms "content brightness" and "display brightness" in the context of altering the viewability of content, it should be understood that the step of altering the content and/or display to meet viewability requirements may include leaving the image content untouched and increasing the brightness of the display's light source, digitally increasing the brightness of the image content, adjusting parameters of the overall display through the display driver, adjusting the actual content being displayed, and so forth. The viewability adjustment can be made by: adjusting the illumination system used to illuminate a reflective display (e.g., changing the pulse width modulation duty cycle of the LEDs, changing the power delivered to the illumination system, etc.), changing the brightness setting of an emissive display, changing how the display presents all content by adjusting settings in the display driver, changing the content itself by image processing of all content or of selected regions of content (e.g., changing brightness, hue, saturation, color values (e.g., red, green, blue, cyan, yellow, magenta, etc.), exposure, contrast, tint, etc.), handling types of content that are shown simultaneously but have inherent differences in visibility regardless of location, and so forth.
To improve the viewing experience of the user when viewing content in a see-through head mounted display, the visual interaction between the displayed image and the see-through view of the environment must be considered. The viewability of a given displayed image depends strongly on attributes such as its size, color, contrast, and brightness, and on the perceived brightness as seen by the user. The color and brightness of the displayed image may be determined from the pixel code values (e.g., the average pixel code value) within the digital image. Alternatively, the brightness of the displayed image may be determined from the luma of the displayed image (see "Brightness Calculation in Digital Image Processing", Sergey Bezryadin et al., Technologies for Digital Fulfillment 2007, Las Vegas, NV). Other attributes of the displayed image may be calculated from the distribution of code values in the image, similarly to luma. Depending on the mode of operation, the type of activity the user is engaged in, and the perceived brightness of the image being displayed, it may be important to have the displayed image match, contrast with, or blend into the see-through view of the environment. Content adaptation may be based on perceived user needs, in addition to the scene that will form the background for the content. Embodiments provide methods and systems to automatically adjust the viewability of an image depending on, for example, the following factors (a sketch combining several of them follows the list):
1. The percentage of the display field of view covered by the displayed content (where, in a see-through head mounted display, the black portions of the displayed image are considered portions without displayed content, the user instead being provided with a see-through view of the environment in those portions);
2. a brightness metric (e.g., hue, saturation, color, individual color distribution (e.g., red content, blue content, green content), average brightness, maximum brightness, minimum brightness, statistically calculated brightness (e.g., mean, median, mode, range, distribution concentration), etc.) of the displayed image;
3. sensor feedback indicative of a user usage scenario (e.g., the amount of motion measured by a sensor in the IMU in the head mounted display is used to determine that the user is stationary, walking, running, in a car, etc.);
4. The mode of operation of the head mounted display (which may be selected by the user, or automatically selected by the head mounted display based on, for example, environmental conditions, GPS location, time or date, or an indicated or determined user context);
5. The type of content (e.g., still images (e.g., high or low contrast, monochrome or color, such as icons or markers), moving images (e.g., high or low contrast, monochrome or color, such as a scrolling icon or a bouncing marker on a start screen), video content (e.g., markers in which the location and intensity of pixels are constantly changing, such as jumping and flashing markers, or more typical video content, such as Hollywood movies, step-by-step tutorials, or your last run down a ski slope recorded on your glasses), text (e.g., small, large, monochrome, outlined, flashing, etc.), and/or the like); and/or
6. The user usage scenario (e.g., based on sensor feedback, the running application, user settings, or predicted scenarios), such as sitting still in a secure location (such as your living room) and watching a movie (e.g., where it may not be necessary to defeat the environment), walking around and getting notifications or viewing turn-by-turn directions (e.g., where the approach may depend on how much of the display is covered, but matching the environment may be best), driving in a car and erasing blind spots such as the vertical pillars (e.g., where matching the environment may be necessary), driving in a car and attempting to view HUD data against exterior lighting (e.g., where defeating the environment may be necessary), and getting instructions about maintaining an engine (e.g., where certain areas, such as pages from a service manual, need to defeat the environment, and certain areas, such as an augmented overlay where you still need to see what you are working on, need to match it), and so on.
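The sketch below combines several of these factors into a single rendering decision; the factor names, thresholds, and goals ("match" versus "defeat" the environment) are illustrative assumptions.

```python
import numpy as np

def viewability_policy(frame_rgb, imu_motion, mode):
    """Pick a rendering goal from coverage, brightness, motion, and mode."""
    luma = frame_rgb @ np.array([0.2126, 0.7152, 0.0722])
    # Factor 1: black pixels show the see-through view, not displayed content.
    coverage = float((luma > 10).mean())
    # Factor 2: a simple brightness metric of the displayed image.
    brightness = float(luma[luma > 10].mean()) / 255.0 if coverage > 0 else 0.0
    # Factors 3, 4, and 6: sensor feedback, operating mode, usage scenario.
    if mode == "movie" and imu_motion < 0.1:
        goal = "defeat_environment"   # seated viewing: the image dominates
    elif mode == "navigation":
        goal = "match_environment" if coverage < 0.3 else "defeat_environment"
    else:
        goal = "match_environment"
    return {"coverage": coverage, "brightness": brightness, "goal": goal}
```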
For example, in a night vision mode using a camera with a live feed to the head mounted display, a sensor associated with the head mounted display may indicate that the user is moving at speed, with up and down movements indicating bumpy terrain. As a result, the head mounted display may automatically determine that the displayed image should be provided with a brightness that gives a good view without regard to the see-through view of the surroundings, since it is too dark for the user to see the see-through view anyway. In addition, the head mounted display can switch the displayed image from full color to a monochrome image, such as green, to which the human eye is more sensitive and responds more quickly.
In another example of a mode, when eye tracking is being used in the user interface, the brightness of the displayed image is increased relative to the see-through view of the surrounding environment. In this embodiment, the type of user interface being used determines the brightness of the displayed image relative to the brightness of the see-through view of the surrounding environment. In this way, the see-through view is made darker than the displayed image, so that the see-through view is less obtrusive to the user. By making the see-through view less obtrusive, the user can more easily move his eyes to control the user interface without being distracted by the see-through view of the surrounding environment. This approach reduces the nervous eye movement that is typically encountered when using eye tracking in head-mounted displays that also provide the user with a see-through view of the environment. Fig. 126 is a graph showing the luminance (L) perceived by the human eye versus the measured luminance of the scene. In this graph it can easily be seen that the human eye has a nonlinear response to luminance: the eye is more sensitive to differences at lower levels and less sensitive to differences at higher levels. In an embodiment, when using a mode that includes eye tracking control of the user interface, the displayed image may be provided with an average brightness that is perceived as being 2 times or more brighter than the average brightness of the see-through view of the environment (i.e., L of the displayed image is 2 times L of the see-through view).
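Since fig. 126 is not reproduced here, the sketch below uses the CIE 1976 L* curve as an assumed stand-in for the eye's nonlinear response, to show how the "perceived as 2 times brighter" rule translates into a required display luminance.

```python
def cie_lightness(y_ratio: float) -> float:
    """CIE 1976 L*: perceived lightness for a luminance ratio Y/Yn."""
    return 116 * y_ratio ** (1 / 3) - 16 if y_ratio > 0.008856 else 903.3 * y_ratio

def required_display_luminance(see_through_y: float, yn: float = 1.0) -> float:
    """Luminance whose perceived lightness doubles that of the see-through view."""
    target_l = 2 * cie_lightness(see_through_y / yn)
    # Invert L* = 116 (Y/Yn)^(1/3) - 16 on the cube-root branch.
    return yn * ((target_l + 16) / 116) ** 3

# Because the response is nonlinear, a display only ~4.9x the see-through
# luminance already appears twice as bright at this level.
print(required_display_luminance(see_through_y=0.1))
```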
Further, the displayed image may be changed in response to the average color, hue, or spatial frequency of the environment surrounding the user. In this case, a camera in the head mounted display may be used to capture an image of the environment that includes at least a portion of the see-through field of view as seen by the user. Attributes of the captured image of the environment may then be digitally analyzed (as previously described herein) to compute attributes of the displayed image. The attributes of the captured image of the environment may include the average brightness, the color distribution, or the spatial frequency of the see-through view of the environment. The computed attributes of the environment may then be compared against the attributes of the image being displayed, to determine how much the see-through view will visually compete with the type of image being displayed. The attributes of the displayed image may then be modified in color, hue, or spatial frequency to improve viewability in the see-through head-mounted display. This comparison of image content to the see-through view, and the associated modifications to the displayed image, may be applied over large blocks of the field of view or over small localized blocks (each consisting of only a few pixels, such as may be required for certain types of augmented reality objects). The captured image of the environment (used to compute the attributes of at least the portion of the see-through view of the environment provided to the user) need not have the same resolution as the displayed image. In further embodiments, a brightness sensor or color sensor included in the head mounted display may be used to measure the average brightness or average color within a portion of the see-through field of view of the environment. By using a dedicated sensor to measure brightness or color, the attributes of the see-through view of the environment can be computed with very little processing power, thus reducing the required power and increasing the computation speed.
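A minimal sketch of this attribute analysis follows; the mean gradient magnitude stands in for a spatial-frequency measure, and the contrast rule is an illustrative assumption.

```python
import numpy as np

def scene_attributes(scene_rgb: np.ndarray) -> dict:
    """Average brightness, color distribution, and a spatial-frequency proxy."""
    luma = scene_rgb @ np.array([0.2126, 0.7152, 0.0722])
    gy, gx = np.gradient(luma)
    return {
        "avg_brightness": float(luma.mean()),
        "color_distribution": scene_rgb.reshape(-1, 3).mean(axis=0).tolist(),
        "spatial_frequency": float(np.hypot(gx, gy).mean()),  # crude proxy
    }

def adjust_content(content_rgb, scene):
    """Example modification: raise contrast when the scene is visually busy."""
    gain = 1.2 if scene["spatial_frequency"] > 5.0 else 1.0
    mid = content_rgb.mean()
    return np.clip((content_rgb - mid) * gain + mid, 0, 255)
```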
Color perception is often considered to be very subjective, for several reasons, including the dependence on the ambient lighting of the environment, the proximity of other colors, and whether you are viewing with one or both eyes. To compensate for these effects, the head mounted display may use a light sensor or a camera to measure the color balance and intensity of the ambient light, to infer how the colors of objects in the environment will look, and the colors of the displayed image may then be modified to improve viewability in the see-through head mounted display. In the case of augmented reality objects, viewability may be improved by rendering the augmented reality object so that it contrasts better with the environment (e.g., for markers), or so that it blends into the environment (e.g., when viewing an architectural model). To this end, a light sensor may be provided to determine the brightness and color balance of the ambient lighting in front of the user, or from other directions in the environment (such as above the user). Additionally, objects in the environment that typically have standard colors (e.g., stop signs are red) may be identified, and these colors may be measured in the captured image to determine the color balance of the ambient lighting.
The color perception of the human eye becomes even more complex at the extremes of very bright and very dark conditions, because the human eye responds nonlinearly. For example, in direct sunlight, colors begin to wash out as nerves in the brain begin to saturate and lose the ability to detect subtle differences in color. On the other hand, when the environment is dim, the contrast perceived by the human eye decreases. Thus, when a bright condition is detected, color may be enhanced in the displayed image; when a dim condition is detected, the contrast in the displayed image may be enhanced to provide a better viewing experience for the user. The contrast can be enhanced by digitally sharpening the image, by increasing the difference in code values between adjacent regions in the digital image, or by adding narrow lines of complementary colors around the edges of displayed objects.
In dim conditions, the color sensitivity of the human eye also varies by color, such that blue appears brighter than red. As a result, in dim viewing conditions, the perceived colors of objects shift toward blue. Thus, when the displayed image is provided as a dim image, such as when using a head mounted display in dim lighting (where the viewability of both the displayed image and the see-through view is important), the color balance of the image may be shifted toward red to provide a more accurate color rendering of the displayed image as perceived by the user. If the image is displayed as a very dim image, the image may be further changed to monochrome red to better preserve the user's night vision.
In an embodiment, the head mounted display uses a sensor or camera to determine the brightness of the surrounding environment. The type of image to be displayed is then determined, and the brightness of the image is adjusted corresponding to the type of image and the operating mode of the head-mounted display. A combined brightness is determined, consisting of the brightness of the see-through view combined with the brightness of the displayed image. The operating region of the human eye is then determined based on the combined luminance and the known sensitivity of the human eye, as shown in fig. 125. Attributes of the image (e.g., color balance, contrast, color of objects, size of text) are then adjusted corresponding to the determined operating region, type of image, and mode of operation to improve viewability (a sketch of this zone-based adjustment follows the zone descriptions below).
FIG. 125 shows a graph of the sensitivity of the human eye versus luminance, as provided in chapter 2.1, page 38, of "Digital Image Processing, Second Edition" (Prentice Hall, 2002, ISBN 0-201-…). As can be seen, the sensitivity is quite nonlinear. To make this nonlinearity easier to understand, the graph has been broken down into four regions.
Zone 1: top of photopic vision (glare limitation), where the relative difference in brightness is less noticeable and the color shifts to red. The sharpness of the focus is good in the case of a constricted pupil but glare in the eye begins to blur the details.
To improve viewability in this zone, the displayed image is modified to increase contrast and to increase green and/or blue.
Zone 2: standard range of color vision, where cones dominate the human eye. The color perception is substantially uniform and the brightness perception follows a standard gamma curve. Maximum sharpness is possible due to the small pupil and manageable brightness level. In the case of standard brightness and color, the viewability is good.
Zone 3: transition from cone to rod for primary color sensitivity. The color perception becomes non-linear because the red cones lose sensitivity faster than the blue and green. The contrast perception is reduced due to the flattened response to changes in brightness. In the case of a larger pupil, especially in older eyes that are less able to adapt freely, the sharpness of focus also starts to decrease. Viewability is improved by increasing font and object size for legibility and reducing blue and green while increasing red and increasing contrast.
Zone 4: the bottom end of low-light vision, where the rod is dominant in sensitivity and motion is more pronounced than the content. Viewability is improved by changing the displayed image to eliminate high spatial frequencies (such as small text) and instead providing iconography (iconography) and using motion or flicker to increase the visibility of key items.
In further embodiments, changes in the operating mode are taken into account, such that if the user changes the operating mode, the displayed image is modified to improve viewability in correspondence with the mode change and the environmental conditions. This may be a temporary state while the user's eyes adapt to the new operating mode and the associated change in viewing conditions. For example, if the display settings are based on an environmental condition darker than the one detected when the head mounted display wakes up, the brightness of the displayed image is modified to match the environmental condition, thereby avoiding hurting the user's eyes. In another example, an entertainment mode is used, and the brightness of the displayed image is slowly increased from ambient conditions up to the level of optimal viewability for video with saturated colors and high sharpness (zone 2). In yet another example, if the displayed image includes an icon of a restricted area or white-on-black text for night viewing, the brightness is reduced to account for increased brightness perception before a photograph or a page with a white background is shown.
In yet another embodiment, an eye camera is used to determine which part of the displayed image the user is looking directly at, and the properties of the displayed image are adjusted corresponding to the brightness of that part of the displayed image. In this way, the properties of the image are adjusted corresponding to the portion of the image that the user's eyes are actually looking at. This approach recognizes that the human eye adapts very quickly to local changes in brightness in the area the eye is looking at. When the brightness increases rapidly, such as when a light is turned on in a dark room, the pupil diameter can decrease by 30% within 0.4 seconds, as shown in the Pamplona study (Pamplona, V.F., Oliveira, M.M., and Baranoski, G.V.G., "Photorealistic models for pupil light reflex and iridal pattern deformation", ACM Trans. Graph. 28, 4, Article 106 (August 2009), 12 pages). As a result, the user's eyes may quickly adapt to local changes in brightness as the user moves his eyes to look at different parts of the displayed image or different parts of the see-through view of the surrounding environment. To provide a more consistent perceived brightness for different portions of a displayed image, a system or method according to the principles of the present invention adjusts the overall brightness of the displayed image in correspondence with the local brightness of the portion of the displayed image, or the portion of the see-through view, that the user's eyes are looking at. In this way, changes in the size of the pupil of the user's eye are reduced, and the user is provided with a more consistent brightness experience within the displayed image. The portion of the see-through view or the portion of the displayed image that the user's eyes are looking at is determined by analyzing images of the user's eyes captured by the eye camera. The eye camera may be used in a video mode to continuously capture images of the user's eyes, and the captured images are then continuously analyzed to track the position of the user's eyes over time. The position of the user's eyes within the captured eye images correlates to the portion of the see-through view or the portion of the displayed image that the user is looking at. The overall brightness of the displayed image may then be adjusted corresponding to the local brightness of that portion. The rate of adjustment of the overall brightness of the displayed image may further be correlated to the measured diameter of the user's pupil, or to measured changes in the diameter of the user's pupil, as determined from analysis of the captured images of the user's eye.
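A hedged sketch of this gaze-driven equalization follows; the window size, target mapping, and rate limit are assumptions standing in for the pupil-response behavior described above.

```python
import numpy as np

def gaze_adapted_brightness(display_img, gaze_xy, current_setting,
                            window=40, max_step=0.05):
    """Ease overall display brightness toward the local luma at the gaze point."""
    x, y = gaze_xy
    patch = display_img[max(0, y - window):y + window,
                        max(0, x - window):x + window]
    local = float((patch @ np.array([0.2126, 0.7152, 0.0722])).mean()) / 255.0
    target = 0.3 + 0.7 * (1.0 - local)  # darker local area -> brighter display
    # Rate-limit the change, as the pupil itself adapts over ~0.4 s.
    step = float(np.clip(target - current_setting, -max_step, max_step))
    return current_setting + step       # applied once per eye-camera frame
```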
In yet another embodiment, the adjustment of the properties of the overall image may be based on local properties of the portion of the see-through view, or the portion of the displayed image, that the user's eyes are looking at. The adjusted attributes of the displayed image may include: color, color balance, contrast, sharpness, spatial frequency, and resolution. An eye camera is used to capture an image of the user's eye, which is then analyzed to determine the portion of the see-through view or the portion of the displayed image that the user's eye is looking at. That portion is then analyzed to determine the relative strength of the attribute. The overall displayed image is then adjusted to improve viewability corresponding to the local strength of the attribute in the region the user's eyes are looking at. A camera in the head mounted display may be used to capture an image of the surrounding environment that corresponds at least in part to the see-through view provided to the user's eyes.
In an embodiment, the head-worn computer has an outward facing camera to capture the scene in front of the person wearing the head-worn computer. The camera, and the image processing used to determine the area in the surrounding scene to be used for brightness and/or color considerations when adjusting the displayed content, may take a variety of forms (a sketch of the overlap-region case follows this list). For example:
The camera may be positioned to capture the forward facing scene: the brightness metric will take the captured scene into account and determine the relevant brightness and/or color. For example, the entire scene's average color/brightness may be considered, bright or color-saturated portions may be considered, dark regions may be considered, etc.;
the front facing camera may have a larger field of view than the field of view of the see-through display and image processing may be used to assess the overlap region so that the captured image brightness and/or color may represent the field of view brightness and/or color of the see-through display;
the front facing camera may have a field of view similar to that of the see-through display, such that the captured image brightness and/or color may represent the field of view brightness and/or color of the see-through display;
a front facing camera may have a narrow field of view to better target a scene directly in front of the user;
the forward facing camera may be a mechanically movable camera that follows the eye position (e.g., as determined by eye imaging as described herein) to capture a scene that follows the user's eyes;
a front facing camera may have a wide field of view to capture a scene. Once the image is captured, a segment of the image may be identified as the segment that the user is looking at (e.g., from eye imaging information), and the brightness and/or color in that segment may then be considered;
objects in the captured scene images may be identified (e.g., based on eye imaging and position determination) and their brightness and/or color may be considered; and
objects in the captured scene image may be identified as objects to which the displayed content will relate (e.g. advertisements to be associated with a shop) and may take into account the brightness and/or color of the objects.
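The overlap-region case in the list above could be sketched as follows, assuming a linear mapping between field of view and pixels and illustrative field-of-view values.

```python
import numpy as np

CAM_FOV_DEG = (60.0, 45.0)      # forward-facing camera field of view (h, v), assumed
DISPLAY_FOV_DEG = (26.0, 15.0)  # see-through display field of view (h, v), assumed

def display_overlap(camera_img: np.ndarray) -> np.ndarray:
    """Crop the camera image to the region behind the display field of view."""
    h, w = camera_img.shape[:2]
    fw = DISPLAY_FOV_DEG[0] / CAM_FOV_DEG[0]
    fh = DISPLAY_FOV_DEG[1] / CAM_FOV_DEG[1]
    x0, y0 = int(w * (1 - fw) / 2), int(h * (1 - fh) / 2)
    return camera_img[y0:y0 + int(h * fh), x0:x0 + int(w * fw)]

def region_metrics(region: np.ndarray) -> dict:
    """Brightness and color metrics representing the display field of view."""
    luma = region @ np.array([0.2126, 0.7152, 0.0722])
    return {"brightness": float(luma.mean()),
            "color": region.reshape(-1, 3).mean(axis=0).tolist()}
```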
In a further embodiment, the invention provides a method for improving the alignment of the displayed image with the see-through view of the surrounding environment. The method may also be used to correlate eye tracking with where the user is looking in the see-through view of the surrounding environment. This is an important capability for adjusting properties of the displayed image when the adjustment is based on local properties in the portion of the see-through view that the user is looking at. The adjustment process may be run for each user of the head mounted display, to improve the viewing experience of different individuals and to compensate for variations in head shape or eye position between individuals. Alternatively, the adjustment process may be used to fine-tune the viewing experience of a single individual each time the user puts on the head mounted display, to compensate for different positioning of the head mounted display on the user's head. The method can also be important for improving the accuracy of the positioning of augmented reality objects. The method includes using an outward facing camera in the head mounted display to capture an image of the surrounding environment that includes at least a portion of the user's see-through field of view of the surrounding environment. A visible marker, such as, for example, a crosshair, is provided in a corner of the captured image to provide a first target image. The first target image is then displayed to the user, so that the user simultaneously sees the first target image overlaid onto the see-through view of the surrounding environment. The user looks at the visible marker and then uses the eye tracking control to move the displayed image to a position where the portion of the displayed image adjacent to the visible marker is aligned with the corresponding object in the see-through view of the environment. The eye tracking control includes an eye camera to determine movements of the user's eyes and blinks of one or both eyes (head movement may be used in conjunction with eye control in the user interface), which are used by the user interface as control inputs. A second image of the surrounding environment is then captured, and a visible marker is provided in a corner to provide a second target image, where the visible marker in the second target image is positioned in the corner opposite the visible marker in the first target image. The second target image is then displayed to the user. The user then looks at the visible marker in the second target image and uses the eye control to move the displayed image so as to align the portion of the second target image adjacent to the visible marker with the corresponding object in the see-through view of the environment. During the period when the user is viewing the first and second target images, it is important that the user not move his head relative to the environment. The displayed image is then adjusted corresponding to the relative amounts by which the first and second target images had to be moved in order to align the portions of the displayed images with the corresponding portions of the see-through view of the surrounding environment.
Fig. 127 shows an illustration of a see-through view of the surrounding environment, with outlines showing that the display field of view 12723 is smaller than the typical see-through field of view 12722.
Fig. 128 shows an illustration of a captured image of the surrounding environment, which may have a substantially larger field of view than the displayed image, such that a cropped version of the captured image of the environment may be used for the alignment process.
Fig. 129a shows a representation of a first target image 12928, and fig. 129b shows a representation of a second target image 12929, wherein the target images 12928 and 12929 include visible markers 12926 and 12927, respectively, positioned in opposite corners.
Fig. 130 shows an illustration of the first target image 12928 overlaid onto the see-through view, where the first target image 12928 has been moved using the eye tracking controls to align the portion of the first target image adjacent to the visible marker 12926 with the corresponding object in the see-through view. Note that the objects in the displayed image are shown in fig. 130 as being smaller in overall size than in the see-through view before being adjusted to improve alignment, but it is equally possible for the overall size to be larger before adjustment.
Fig. 131 shows an illustration of the second target image 12929 overlaid onto the see-through view, where the second target image 12929 has been moved using the eye tracking controls to align the portion of the second target image adjacent to the visible marker 12927 with the corresponding object in the see-through view. The movements required to align the first target image 12928 and the second target image 12929 are then used to determine adjustments to the displayed image, such that the accuracy of the alignment of the displayed image field of view 12723 with the see-through field of view 12722 is improved. The determined adjustments to the displayed image may include adjustments to overall size, cropping of the image, and the vertical and horizontal position of the displayed image within the displayed image field of view 12723. By adding at least one more visible marker to the target image and using at least one more step to position the target image relative to the see-through view of the environment, a rotational adjustment may be determined to further improve the alignment of the displayed image with the see-through view of the environment. A separate figure showing a representation of the displayed image sized and aligned to match the see-through view of the surrounding environment is not provided, as it would appear similar to fig. 127. The determined adjustments may then be used to improve the alignment of other displayed images with the see-through view of the surrounding environment, such that regions in the displayed images can be mapped to the corresponding regions in the see-through view that are behind the displayed images when viewed in the head mounted display. The determined adjustments may also be used to map movements of the user's eyes to areas in the see-through view of the environment, as captured in the image of the surrounding environment from the outward facing camera, so that it can be determined where the user is looking in the surrounding environment. Furthermore, by analyzing the captured image of the environment, it can be determined what the user is looking at in the surrounding environment.
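The adjustment computation described above can be illustrated with a short sketch. The following Python code is illustrative only and is not taken from the patent: it derives horizontal and vertical scale factors and a translation for the displayed image from the shifts the user applied at the two opposite-corner markers, and the function and variable names are hypothetical.

```python
# Hypothetical sketch of the two-marker alignment computation described
# above. Marker positions and user-applied shifts are in display pixels.

def compute_display_adjustment(marker_a, shift_a, marker_b, shift_b):
    """Derive the scale and offset that map the displayed image onto the
    see-through view, given the shifts the user applied while aligning
    the image near two opposite-corner markers."""
    # After alignment, each marker position maps to marker + shift.
    ax, ay = marker_a
    bx, by = marker_b
    axp, ayp = ax + shift_a[0], ay + shift_a[1]
    bxp, byp = bx + shift_b[0], by + shift_b[1]

    # Independent horizontal and vertical scale factors: two points
    # determine an axis-aligned scale plus a translation.
    sx = (bxp - axp) / (bx - ax)
    sy = (byp - ayp) / (by - ay)

    # Translation that places the scaled image correctly.
    tx = axp - sx * ax
    ty = ayp - sy * ay
    return sx, sy, tx, ty

# Example: the user moved the image +12,+8 px at the upper-left marker
# and -10,-6 px at the lower-right marker of a 1280x720 displayed image.
print(compute_display_adjustment((0, 0), (12, 8), (1280, 720), (-10, -6)))
```

A rotational term, as noted above, would require at least one additional marker correspondence beyond the two used in this sketch.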
In yet another embodiment, eye tracking controls are used by the user to adjust the size of the displayed image and to adjust the position of the displayed image to match the see-through view of the surrounding environment. In this approach, an image of the surrounding environment is captured by the outward facing camera in the head mounted display. The image of the surrounding environment is then displayed to the user within the displayed image field of view 12723, so that the user simultaneously sees the displayed image of the surrounding environment overlaid onto the see-through view of the surrounding environment. The user then performs two adjustments to the displayed image, using the eye tracking controls, to improve the alignment of the displayed image of the surrounding environment with the see-through view of the surrounding environment. The first adjustment is to adjust the size of the displayed image of the surrounding environment relative to the size of the see-through view of the surrounding environment. This adjustment may be performed by the user, for example, by a long blink of the eye to initiate the adjustment, followed by a sliding movement of the eye to increase or decrease the size of the displayed image; another long blink ends the resizing process. The second adjustment is to position the displayed image to improve the alignment of the displayed image of the surrounding environment with the see-through view of the surrounding environment. This adjustment may be performed by the user, for example, by a long blink of the eye to initiate the adjustment, followed by a sliding directional movement of the eye to indicate the movement needed to align the displayed image with the see-through view of the environment. This adjustment process may be performed for one eye at a time, so that the images displayed to the left and right eyes can be independently positioned for improved viewing of stereoscopic images. The determined adjustments are then used with other displayed images to improve their alignment with the see-through view of the environment and to determine a mapping to the see-through view as seen behind the displayed images in the head mounted display. The determined adjustments may also be used to map movements of the user's eyes to areas in the see-through view of the environment, as captured in the image of the surrounding environment from the outward facing camera, so that it can be determined where the user is looking in the surrounding environment. Furthermore, by analyzing the captured image of the environment, it can be determined what the user is looking at in the surrounding environment.
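The blink-and-slide interaction described above can be sketched as a simple event loop. The following is a minimal illustration, assuming an eye camera that reports blink durations and slide vectors; the event format, blink threshold, and resize sensitivity are assumptions for the example, not details from the patent.

```python
# Illustrative sketch (not the patent's implementation) of the blink-and-
# slide interaction: a long blink starts an adjustment, eye slides resize
# or reposition the displayed image, and another long blink ends it.

LONG_BLINK_S = 0.5  # assumed duration distinguishing a long blink

def apply_eye_adjustments(events, image):
    """events: sequence of ('blink', duration_s) or ('slide', dx, dy).
    image: dict with 'x', 'y' (position, px) and 'scale'."""
    mode = None           # currently active adjustment, if any
    next_mode = 'resize'  # per the text, sizing is adjusted first
    for event in events:
        if event[0] == 'blink' and event[1] >= LONG_BLINK_S:
            if mode is None:
                mode, next_mode = next_mode, 'move'  # start an adjustment
            else:
                mode = None                          # long blink ends it
        elif event[0] == 'slide' and mode is not None:
            _, dx, dy = event
            if mode == 'resize':
                image['scale'] *= 1.0 + 0.001 * dy   # slide up/down to resize
            else:
                image['x'] += dx                     # slide to reposition
                image['y'] += dy
    return image

print(apply_eye_adjustments(
    [('blink', 0.6), ('slide', 0, 50), ('blink', 0.7),
     ('blink', 0.6), ('slide', 12, -4), ('blink', 0.7)],
    {'x': 0, 'y': 0, 'scale': 1.0}))
```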
While some of the above embodiments have been described in connection with using eye tracking input for display content control and adjustment, it should be understood that an external user interface may be used in connection with or in place of eye tracking controls. For example, a touchpad, joystick, button device, or the like may be used to align content with the surrounding environment when the displayed content is presented in the field of view of the head mounted display.
In embodiments, the displayed content may be color adjusted depending on the contextual background to be located behind the displayed content in the see-through display to compensate for the color of the contextual background so that the displayed content appears to be properly color balanced. For example, if the scene background (on which the displayed content will overlay) is red (e.g., a red brick wall), the displayed content may be adjusted to reduce its red content because some of the scene's red content will be seen through the displayed content and thus contribute to the red content in the displayed content.
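As an illustration of this compensation, the following sketch assumes that the displayed light and the transmitted scene light add linearly at the eye, with an assumed see-through transmission factor; the values and function names are illustrative rather than taken from the patent.

```python
# A minimal sketch, assuming linear additive blending in the see-through
# display: the eye sees displayed light plus transmitted scene light, so
# content can be pre-compensated by subtracting the expected scene
# contribution behind it. Values are linear RGB in 0..1.

def compensate_for_background(content_rgb, background_rgb, transmission=0.5):
    """Reduce each channel of the displayed content by the scene light
    expected to leak through it, clamping at zero."""
    return tuple(
        max(0.0, c - transmission * b)
        for c, b in zip(content_rgb, background_rgb)
    )

# Example: white content over a red brick wall; the red channel is
# reduced the most, as described above.
print(compensate_for_background((1.0, 1.0, 1.0), (0.8, 0.3, 0.2)))
```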
In embodiments, the displayed content may be adjusted as described herein (e.g., distinguished from or mixed with the scene viewed through the see-through display), by: adjusting the color and/or intensity of light produced by an illumination system adapted to illuminate a reflective display, adjusting image content by software image processing, adjusting the intensity of one or more colors of an emissive display, and so forth.
In embodiments, the see-through scene brightness and/or color may be based on an average brightness and/or color of the scene as viewed through the display or otherwise proximate to the head mounted display, on the brightness and/or color of objects apparent in the view through the see-through display, on eye orientation (based on eye position from eye imaging as described herein), on compass orientation, and so forth.
The inventors have found that, in head mounted displays that include multiple fold optics, it can be advantageous to use solid prisms with included fold surfaces to improve image quality and enable a more compact form factor. They have also found that manufacturing solid prisms by molding can be challenging due to sink marks, which typically occur on planar surfaces. In addition, providing illumination light into a solid prism at a desired angle requires special consideration. Imaging of the user's eye can be an important feature in head mounted displays, both for user identification and as part of a user interface. Eye imaging apparatus are provided herein for use with various head mounted displays.
An aspect of the invention relates to a solid prism having improved manufacturability and design modifications that enable illumination light to be efficiently supplied into the solid prism at a desired angle to illuminate an image source.
One aspect of the present invention relates to a solid prism having a folding surface platform, wherein an optically flat folding surface is mounted on the folding surface platform of the prism such that the folding surface maintains a high optical flatness despite deviations in the flatness of the folding surface platform of the prism.
One aspect of the invention relates to providing additional optical features in a solid prism for capturing images of a user's eye with an eye imaging camera.
One aspect of the invention relates to providing a folding surface for a solid prism, wherein the solid prism comprises shaped input and/or output surfaces that provide optical power to the optical system.
An aspect of the invention relates to a solid prism having a power generating surface together with an additional power lens above the combiner, such that the physical size of the power lens above the combiner is reduced, thereby reducing the overall size of the optical system.
One aspect of the invention relates to a solid prism having a surface with optical power at an image light receiving end of an optical path from a display, wherein an additional field lens with optical power is positioned between the display and the surface with optical power to further increase the optical power of the optical system.
One aspect of the invention relates to a solid prism having a folded surface comprising input and/or output surfaces with optical power and material selection among the relevant optical materials adapted to reduce lateral chromatic aberration and thereby improve the image quality provided to a user.
One aspect of the invention relates to an angled backlight assembly that redirects illumination light toward an image source by including a prism film, where the prisms are positioned on the side of the backlight such that the prisms act as Fresnel wedges.
One aspect of the invention relates to a stray light management system adapted to manage stray light generated by a prism film used in a backlight system, wherein the prism film causes significant stray light and an analyzing polarizer film is positioned in the image light optical path to absorb such stray light.
One aspect of the invention relates to an emissive display system that projects image light into a solid prism having a folded surface for delivering the image light to an eye of a user.
An aspect of the invention relates to projecting illumination light through a portion of the display optics and toward the combiner surface, where the illumination light reflects off of the combiner surface and directly toward the eye of the user, illuminating the eye for eye imaging. In an embodiment, the display optics comprise a solid prism, and the light source is mounted over a folded surface of the solid prism.
One aspect of the invention relates to capturing eye images directly from a combiner, wherein an eye imaging camera is mounted above the combiner. In an embodiment, the eye light is positioned at the top edge of the combiner, so the eye is directly illuminated.
An aspect of the invention relates to a surface applied to the combiner, wherein said surface is applied outside the field of view of the see-through display and is adapted to reduce stray light reflections off the combiner toward the eye of the user.
An aspect of the invention relates to a surface applied to a combiner, wherein the surface is adapted to reflect infrared light and to pass visible light, such that visible stray light reflection towards the eye of a user is minimized, and such that infrared light from an infrared light source is reflected towards the eye of the user. The infrared reflection can then be used for eye imaging.
An aspect of the invention relates to eye imaging by waveguide optics adapted to transmit image light and be see-through to a user view of surroundings, wherein an eye imaging camera is positioned to receive an eye image through the waveguide optics such that the image is captured from a location in front of the user's eye.
One aspect of the invention relates to eye imaging by capturing reflected light exiting from an outer surface of waveguide optics adapted to transmit image light and be transparent to a user view of surroundings.
Fig. 132 shows a diagram of multi-fold optics for a head mounted display, including a solid prism 13250. The solid prism 13250 includes a planar surface 13254 (i.e., a first fold surface) that is reflective to redirect the image light 13230 and thereby provide a first fold of the optical axis 13235, enabling the multi-fold optics to be more compact than optics that do not include the fold. As shown in fig. 132, a second fold of the optical axis 13235 is provided in the lower portion of the multi-fold optics, where the image light 13230 is reflected by the combiner 13210 (i.e., the second fold surface) so that the image light 13230 is directed into the eyebox 13220 where the user's eye is located, as described previously herein. The planar surface 13254 may be a fully reflective mirror such that all of the image light 13230 is reflected, in which case the image source 13260 must be a self-emissive image source (such as an OLED) or a backlit image source (such as an LCD), so that the image light 13230 is provided directly by the image source 13260. However, if the image source 13260 is a reflective image source (such as LCOS, FLCOS, or DLP), illumination light must be supplied, which is then reflected by the image source 13260 to provide the image light 13230. In the case where the reflective image source is an LCOS or FLCOS, where the illumination light needs to be at a high angle of incidence, the planar surface 13254 may be a partial mirror, so that the illumination light can be provided from a light source located behind the planar surface 13254 and directed toward the image source 13260. In the case where the reflective image source is a DLP, where the illumination light needs to be at an angle commensurate with the mirror angle, the planar surface 13254 may be extended, or additional surfaces may be provided, so that light can be provided from a light source located behind the planar surface 13254 or the additional surfaces. In an embodiment, a first advantage provided by the solid prism 13250 is that the cone angle of the image light 13230 is reduced within the solid prism 13250, which extends the optical path length so that the fold can be provided in the optical axis 13235, enabling a more compact size for the multi-fold optics. A second advantage of the solid prism 13250 is that the planar surface 13254 provides internal reflection, so that dust cannot collect on the reflective surface. A third advantage of the solid prism 13250 is that stray light is more easily controlled by blackening the outer surfaces that do not need to transmit light.
In addition to folding the optical axis 13235 by reflecting off of the planar surface 13254, the solid prism 13250 may provide optical power because the input and output surfaces 13252 may be curved. Fig. 132 shows two surfaces 13252 having optical power. By providing some of the optical power required in the multi-fold optic, the power lens 13240 need not provide as much optical power, and as a result, the power lens 13240 is thinner and the overall size of the multi-fold optic is thereby reduced. A field lens 13270 may also be provided to function in conjunction with the solid prism 13250 and the power lens 13240. By selecting the materials of the field lens 13270, solid prism 13250, and power lens 13240 to be different in refractive index and Abbe number (combining flint and crown glass properties, as known to those skilled in the art), lateral chromatic aberration in the image light 13230 provided to the eyebox 13220 may be substantially reduced, thereby improving the sharpness of the image perceived by the user, particularly in the corners of the image.
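The role of the Abbe number selection can be illustrated with the classic thin-lens achromat relation, in which the chromatic contributions of two elements of differing dispersion cancel. The sketch below is textbook arithmetic, not the patent's actual lens prescription, and the material values are assumed typical plastic-optics numbers; the same Abbe-number reasoning underlies the reduction of lateral color described above.

```python
# Illustrative thin-lens achromat arithmetic (textbook relation): split a
# total optical power P between two materials with Abbe numbers V1
# (crown-like) and V2 (flint-like) so that the chromatic contributions
# cancel: P1/V1 + P2/V2 = 0 with P1 + P2 = P.

def achromat_powers(p_total, v1, v2):
    p1 = p_total * v1 / (v1 - v2)   # crown-like element carries extra power
    p2 = -p_total * v2 / (v1 - v2)  # flint-like element offsets the color
    return p1, p2

# Example with assumed values: V1 = 56 (PMMA-like), V2 = 30
# (polycarbonate-like), total power 20 diopters.
print(achromat_powers(20.0, 56.0, 30.0))  # -> (~43.08, ~-23.08) diopters
```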
In multi-fold optics, the surfaces (13254 and 13210) of folded optical axis 13235 are preferably optically flat (e.g., flatness better than 10 microns) to maintain the wavefront of image light 13230 and thereby provide a high quality image to the user. These surfaces may be tilted with respect to the optical axis 13235 to compensate for twisting of the upper portion of the optics (extending from the image source 13260 to the bottom surface of the solid prism) with respect to the lower portion of the optics (extending from the power lens to the eyebox), as has been previously described in the text.
Manufacturing the plastic solid prism 13250 by molding can be difficult because the solid prism 13250 has a non-uniform thickness and can include both curved surfaces and flat surfaces. Injection molding of curved surfaces requires different process settings than those required for injection molded flat surfaces. In particular, when the thickness of the plastic under a flat surface is not uniform (as is the case with the solid prism 13250), an optically flat surface can be very difficult to injection mold without sink marks. To overcome this difficulty, the present disclosure provides a separate reflective sheet 13275 for creating an improved planar surface 13254. The reflective sheet 13275 may be manufactured using a sheet manufacturing process to provide a high degree of optical flatness. In a preferred embodiment, the reflective sheet 13275 is a glass sheet that has been coated to provide reflectivity. The coating may be a fully reflective mirror if the image source 13260 is a self-emissive display, or a partially reflective mirror if the image source 13260 is a reflective display. In a further preferred embodiment, the reflective sheet 13275 comprises a glass sheet with a reflective polarizer, such as a ProFlux wire grid polarizer from Moxtek (Orem, UT), such that light of one polarization state is reflected and light of the opposite polarization state is transmitted.
The reflective sheet 13275 may be bonded to the planar surface 13254 of the solid prism 13250 by using a transparent adhesive having a refractive index very similar to that of the solid prism material (e.g., within +/-0.05), also referred to as index matching. By matching the refractive index of the adhesive to that of the solid prism 13250, the interface between the solid prism material and the adhesive becomes optically invisible. In this way, the adhesive can fill any space between the reflective sheet 13275 and the planar surface 13254 of the solid prism 13250 caused by sink marks, scratches, grooves, or other non-planarity of the planar surface of the solid prism. The flatness of the planar surface molded on the solid prism 13250 is then not important to the optical performance of the multi-fold optics; instead, the flatness of the reflective sheet 13275 determines the new planar surface 13254 with improved flatness. In this way, the manufacture of the solid prism 13250 becomes easier and less expensive, because the planar surface 13254 does not have to be a molded (or otherwise manufactured) optically flat surface, and the manufacturing process for making the solid prism 13250 can be optimized for the surfaces 13252 with optical power. In addition, by bonding the reflective surface of the reflective sheet 13275 to the planar surface 13254, the optically flat reflective surface is protected from damage during further assembly of the multi-fold optics.
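A quick Fresnel-reflectance estimate illustrates why the stated +/-0.05 index match renders the interface optically invisible. The numbers below use the standard normal-incidence reflectance formula; the 1.5 prism index is an assumed typical value, not a figure from the patent.

```python
# Standard normal-incidence Fresnel reflectance for an index step.

def normal_reflectance(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

# Worst case allowed by the stated +/-0.05 match for a 1.5-index prism:
print(normal_reflectance(1.50, 1.45))  # ~0.00029, i.e. <0.03% reflected
# Compare a bare prism/air surface:
print(normal_reflectance(1.50, 1.00))  # ~0.04, i.e. ~4% reflected
```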
Fig. 133a, 133b, and 133c show illustrations of the steps associated with joining the reflective sheet 13275 to the solid prism 13250. As shown in fig. 133a, the solid prism 13250 is mounted for joining such that the planar surface 13254 is approximately horizontal. A drop 13377 of relatively low viscosity (e.g., 200 centipoise) clear adhesive is then applied to the planar surface 13254, wherein the adhesive is selected to have a refractive index very similar to that of the solid prism 13250 material, such that the adhesive and the solid prism are index matched. The reflective sheet 13275 is then brought into contact with the drop 13377, as shown in fig. 133b. The adhesive is then allowed to wick across the interface between the reflective sheet 13275 and the planar surface 13254 until the entire interface is covered by the adhesive, as shown in fig. 133c. Importantly, in an embodiment, no pressure is applied to the reflective sheet 13275 during the bonding process, so that the reflective sheet 13275 is not deformed and its optical flatness is maintained. The drop 13377 used is relatively small, so that the interface is covered without adhesive bleeding out at the edges. The adhesive is then cured by waiting an appropriate length of time, applying heat, or applying UV light (as appropriate for the adhesive). In a preferred embodiment, a UV curable adhesive is used to provide rapid curing. An advantage of bonding the reflective sheet 13275 to the solid prism 13250 is that the adhesive can fill in any sink marks that may be present on the planar surface of the prism, such that the surface of the reflective sheet 13275 creates a planar surface 13254 having improved flatness and the desired level of reflectivity to reflect the image light 13230. Since the adhesive is index matched to the material of the solid prism 13250, the image light 13230 passes from the solid prism 13250 through the adhesive layer to the surface of the reflective sheet 13275 without disturbance to the wavefront of the image light 13230, so that high image quality is maintained.
Fig. 134 shows a diagram of the multi-fold optics for a reflective image source, with a backlight assembly positioned behind the reflective sheet 13275. As shown in fig. 134, the reflective sheet 13275 is a partially reflective mirror that transmits at least a portion of the light from the backlight to illuminate the image source 13260 and then reflects at least a portion of the image light 13230. In a preferred embodiment, the reflective sheet 13275 is a reflective polarizer that transmits one polarization state and reflects the opposite polarization state. In this case, the illumination light 13432 is provided with a first polarization state (e.g., P polarization) and the image light 13230 has a second polarization state (e.g., S polarization). If the image source 13260 is, for example, a normally white LCOS, this change in polarization state occurs in the bright areas of the displayed image when the illumination light 13432 is reflected by the image source 13260. As a result, image light 13230 in the bright areas of the displayed image is reflected by the reflective polarizer of the reflective sheet 13275, and image light in the dark areas of the displayed image is transmitted by the reflective polarizer, such that only image light of the second polarization state passes into the lens 13240. The backlight assembly includes a prism film 13477 to deflect at least a portion of the illumination light 13432 provided by the light guide 13480 toward the image source 13260. The prism film 13477 may be a turning film, such as the DTF film provided by Luminit (Torrance, CA), or alternatively the prism film may be a brightness enhancement film, such as Vikuiti BEF4-GT-90 provided by 3M (St. Paul, MN). A diffuser film 13478 is also included in the backlight assembly to provide the desired cone angle of light within the illumination light 13432. A light source 13479 is also included in the backlight assembly to provide light to the light guide 13480, where the light source 13479 may be one or more LEDs. The light source 13479 may provide white light or sequential color illumination (e.g., a repeating sequence of red then blue then green illumination, or cyan then magenta then yellow illumination), depending on whether the reflective image source includes a color filter array.
In the solid prism 13250, the angle at which the illumination light 13432 can be provided is limited by refraction at the interface where the light enters the solid prism 13250. Refraction across an interface follows Snell's law:
n1 sin θ1 = n2 sin θ2
where n1 is the refractive index of the first medium from which the light originates, θ1 is the angle of the light relative to the surface normal in the first medium, n2 is the refractive index of the second medium into which the light passes, and θ2 is the angle of the light relative to the surface normal in the second medium. As an example, to provide illumination light 13432 within the solid prism at an angle of approximately 30 degrees from the normal to the interface, as shown in fig. 134, if the prism material has a refractive index of 1.5, light from the backlight assembly would have to be provided to the interface at approximately 50 degrees. Providing illumination light at an angle of 50 degrees from the backlight assembly can be difficult, because turning films that deflect light at such large angles are not available. To reduce the refraction effect, the prism film 13477 is used as a Fresnel wedge, with its smooth side bonded to the reflective sheet 13275 and its prism structures directed toward the backlight assembly. Fig. 135 shows a representation of the prism film 13477 bonded to the reflective sheet 13275, where the prism film 13477 shown is a brightness enhancement film having linear prism surfaces oriented at approximately 45 degrees to the interface (thereby forming linear prisms with 90 degree included angles), and an optically clear adhesive 13578 (such as 8142KCL available from 3M (St. Paul, MN)) is used to bond the prism film 13477 to the reflective sheet 13275. It should be noted that this orientation, in which the prism structures are directed toward the light source, is opposite to the orientation typically used for brightness enhancement films (which are typically used to collimate light in backlights). With the orientation shown in fig. 135, the 45 degree surfaces of the brightness enhancement film split the incoming light into two cones of light (illustrated in fig. 135a as light cones 13532 and 13533) having deflection angles of approximately +/-17 degrees inside the prism film 13477, relative to the incident illumination light from the diffuser that is approximately perpendicular to the plane of the light guide 13480 and the plane of the reflective sheet 13275, following Snell's law as previously described herein. Importantly, the prism film provides a greatly reduced amount of light between the two light cones. The cone angle of the light within each light cone is determined by the cone angle of the diffuser 13478. The deflection angles of the illumination light 13432 can be modified by adding a turning film (not shown) on top of the prism film, wherein the turning film changes the angle of the illumination light provided to the prism film 13477. A typical turning film, such as the DTF film available from Luminit (Torrance, CA), provides a 20 degree deflection of the light. The illumination light is then incident on one surface of the prism film at 65 degrees and on the other surface of the prism film at 25 degrees, and the two illumination cones inside the prism film have deflection angles of +28 and -8 degrees relative to the incident illumination light from the diffuser, which is approximately perpendicular to the plane of the light guide 13480 and the plane of the reflective sheet 13275.
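The angles quoted above can be checked numerically with Snell's law alone. The short sketch below assumes a refractive index of 1.5 for both the prism and the prism film (an assumption for illustration); it reproduces the approximately 50 degree input requirement and the approximately 17 degree deflection inside the prism film.

```python
# Numeric check of the angles quoted above, using only Snell's law.
import math

def refract(n1, theta1_deg, n2):
    """Refracted angle (degrees) in medium n2 for incidence theta1 in n1."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

# Entering a 1.5-index prism: ~30 degrees inside requires ~49 degrees outside.
print(refract(1.5, 30.0, 1.0))   # ~48.6 degrees in air

# A 45-degree prism facet hit at normal incidence to the film plane sees
# the light at 45 degrees to the facet normal; inside a 1.5-index film it
# refracts to ~28 degrees from the facet normal, i.e. ~17 degrees from
# the film normal (45 - 28).
theta_in_film = refract(1.0, 45.0, 1.5)
print(theta_in_film, 45.0 - theta_in_film)  # ~28.1, ~16.9 degrees
```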
Since the prism film 13477 is bonded to the reflective sheet 13275 and the reflective sheet is bonded to the solid prism 13250, the angle of the light inside the prism film 13477 is substantially maintained into the solid prism 13250, as long as the refractive indices of the prism film 13477, the reflective sheet 13275, and the solid prism 13250 are reasonably similar. In this manner, the system deflects the illumination light 13432 provided by the backlight assembly in a direction that directs it toward the image source 13260. The image source 13260 is thus illuminated by the light guide 13480 in a manner that allows the multi-fold optics to have the more compact form factor provided by the multiple folding of the optical axis 13235. In manufacture, the prism film 13477 may be bonded to the reflective sheet 13275 before or after the reflective sheet 13275 is bonded to the solid prism 13250.
Fig. 135a shows a representation of the multi-fold optics showing the two illumination light cones 13532 and 13533 provided by the prism film 13477. While the illumination light 13532 illuminates the image source 13260, the illumination light 13533 is a form of stray light in the multi-fold optics that must be controlled to provide high contrast image light 13230 to the eyebox 13220, so that the user experiences a high contrast image. The advantage provided by the prism film 13477 is that approximately half of the illumination light (13532) is deflected toward the image source 13260, while the other half of the illumination light (13533) is deflected in a direction in which stray light can be controlled, and little light is provided between 13532 and 13533, where controlling stray light would be more difficult. Fig. 135a includes an analyzing polarizer 13582 to absorb the portion of the light (13533) from the backlight that is not used to illuminate the image source 13260. The analyzing polarizer 13582 is shown positioned between the power lens 13240 and the combiner 13210; however, the analyzing polarizer 13582 may also be positioned in the gap between the solid prism 13250 and the power lens 13240. The analyzing polarizer 13582 is oriented with its transmission axis such that light having the polarization state of the bright areas of the image light is transmitted, while light having the polarization state of the illumination light 13533 and of the dark areas of the image light is absorbed. The analyzing polarizer thus serves a dual purpose, reducing stray light associated with the illumination light 13533 and with the image light in dark regions of the image.
In multi-fold optics with solid prisms, additional optical elements may be added for imaging the user's eye, for the purpose of eye tracking in the user interface or eye recognition for security purposes. Fig. 136, 137 and 138 show illustrations of different embodiments of additional optical elements included with a solid prism for imaging the user's eye. Fig. 136 and 137 show illustrations of various views of an optical element 13612 attached to the side of the solid prism 13250 so that an eye camera 13610 can image the user's eye in the eyebox 13220. The optical element 13612 is shown as a single lens surface that is angled with respect to the optical axis 13235 to provide a field of view that includes light 13613 reflected from the user's eye. In this manner, the light 13613 reflected from the user's eye is multiply folded in a manner similar to the image light 13230. However, the optical element 13612 may include more than one lens surface and more than one lens element to improve the resolution of the eye image. Fig. 137 shows how the optical element 13612 may be positioned adjacent to the surface 13252 on the solid prism 13250. With the optical element 13612 positioned as shown in fig. 136 and 137, the eye camera 13610 is provided with a field of view that includes the light 13613 reflected by the eye, and the field of view associated with the optical element 13612 tends to extend to the upper portion of the user's eye. The user's eye can be passively illuminated by the image light 13230 or actively illuminated by an additional light (not shown) adjacent to the eyebox 13220 or adjacent to the optical element 13612. The additional light may be an infrared light, as long as the eye camera 13610 can capture an infrared image of the user's eye. Fig. 138 shows an illustration of another solid prism 13250 having an optical element 13812 positioned adjacent to the top of the solid prism 13250 to enable an eye camera 13814 to image the eyebox 13220. In this case, the optical element 13812 is attached to the solid prism 13250 and designed to provide a field of view that includes light 13813 reflected from the user's eye. The light 13813 is reflected by the user's eye and captured by the eye camera 13814 following a single-fold path. The field of view associated with the optical element 13812 positioned as shown in fig. 138 tends to extend to the side of the user's eye. In the embodiments shown in fig. 136, 137 and 138 for imaging the eye, the optical elements 13612 and 13812 are designed to take into account the fact that the light reflected by the user's eye passes through the power lens 13240 and at least a portion of the solid prism 13250. From a manufacturing perspective, the optical elements 13612 and 13812 may be made as attachments to the solid prism 13250 or as an integral part of the solid prism 13250 that is molded along with the other surfaces of the solid prism 13250.
In further embodiments, eye imaging is included in the multi-fold optics shown in fig. 132. Fig. 139 shows a diagram of an eye imaging system for the multi-fold optics, where the image source is a self-emissive display, such as, for example, an OLED or a backlit LCD. In this case, the reflective sheet 13275 is a partial mirror that is bonded to the planar surface 13254 of the solid prism 13250, as previously described herein. Alternatively, a partially reflective mirror coating may be applied directly to the planar surface 13254, as long as the planar surface 13254 is optically flat. The partial mirror then reflects a portion of the image light 13230, redirecting it toward the lens 13240 and the combiner 13210, where the image light 13230 is reflected a second time and thereby redirected toward the user's eye to provide an image at the eyebox 13220. At the same time, a portion of the light 13923 reflected by the user's eye is transmitted by the partial mirror and captured by the eye camera 13922. The user's eye may be passively illuminated by the image light 13230, and additional active illumination light 13913 may be provided by an eye light 13912 to illuminate the user's eye. In a preferred embodiment, the eye light 13912 provides infrared illumination 13913 and the eye camera 13922 is sensitive to infrared light, so that the illumination 13913 does not interfere with the image displayed to the user by the image light 13230. In a further preferred embodiment, the partial mirror is a cold mirror that reflects a large portion of visible light (e.g., more than 80% of visible light, 400 to 700 nm) and transmits a large portion of the infrared light provided by the eye light (e.g., more than 80%, 800 to 1000 nm). In yet another preferred embodiment, the combiner is at least partially coated with a hot mirror coating, which reflects infrared light and transmits visible light. For example, the hot mirror coating may reflect greater than 80% of the infrared light provided by the eye light and transmit greater than 50% of the visible light associated with the see-through view of the surrounding environment. By including a cold mirror on the reflective sheet 13275 or planar surface, along with a hot mirror on the combiner 13210, the loss of light 13923 reflected by the user's eye can be reduced, thereby enabling capture of bright images of the user's eye and reducing the power required for active illumination of the user's eye by the eye light 13912.
Fig. 140a and 140b show illustrations of folded optics with a combiner 14010 that redirects image light 13230 provided by the upper optics 1406, the upper optics 1406 comprising an image source and associated optics. A camera 14022 is provided for imaging the user's eye 1408 when positioned adjacent to the eyebox 13220. Eye light 14012 is provided to supply illumination light 14013, which is reflected by the combiner 14010 and thereby directed toward the user's eye 1408. The camera 14022 is positioned to one side of the upper optics 1406 such that light 14023 reflected by the user's eye is reflected by the combiner 14010 and captured by the camera 14022. As previously described herein, the eye light 14012 can provide infrared illumination 14013 (e.g., 850 nm), and the combiner 14010 can include a hot mirror coating to reflect a majority of the infrared illumination 14013 while providing a see-through view of the surrounding environment. The eye light 14012 can be positioned to one side of the upper optics 1406, and preferably the eye light 14012 is positioned adjacent to the camera 14022, such that the illumination light 14013 produces light 14023 reflected from the user's eye with a distribution that can be efficiently captured by the camera 14022. For example, the eye light 14012 and the camera 14022 may be positioned on adjacent sides of the upper optics 1406: in fig. 140a and 140b, the eye light 14012 is shown positioned on the back side of the upper optics 1406, so that the illumination light is reflected by the combiner back toward the user's eye 1408, and the camera 14022 is shown on the left side of the upper optics 1406, although other arrangements are possible. In a preferred embodiment, the eye light 14012 is a small LED mounted on the lower front edge of the upper optics 1406 and directed straight back toward the user's eye 1408.
In an embodiment, the combiner 14010 includes a surface that prevents reflection of visible light outside the field of view. The surface may comprise an anti-reflection coating and it may only be applied outside the field of view. This arrangement may be useful in preventing ambient stray light from reflecting into the user's eye. Without such a surface, light from the environment may reflect off the combiner surface and into the user's eye.
Fig. 141a and 141b show illustrations of folded optics including a waveguide 14132 having an angled partially reflective surface 14135 and a reflective surface 14136 with optical power. The image source 14153 provides image light 14130, which is reflected by the reflective plate 14175 and conveyed by the waveguide 14132 to the partially reflective surface 14135, where it is transmitted through to the reflective surface 14136 with optical power, which focuses the image light and reflects it back toward the partially reflective surface 14135. The partially reflective surface then reflects and redirects the image light such that the image light 14130 is provided to the user's eye 1408. In the embodiment shown in fig. 141a, the eye light 14112 is positioned adjacent to one end of the waveguide 14132 such that the illumination light 14113 is directed toward the user's eye 1408. The camera 14122 is positioned behind the reflective plate 14175, where the reflective plate reflects at least a portion of the image light 14130 and transmits at least a portion of the light 14123 reflected by the user's eye 1408. The reflective plate 14175 may be a partial mirror or a reflective polarizer, or, in a preferred embodiment, the reflective plate 14175 is a cold mirror that reflects visible light and transmits infrared light (e.g., the cold mirror reflects greater than 80% of visible light, 400 to 700 nm, and transmits greater than 80% of the infrared light provided by the eye light, 800 to 1000 nm). It will be noted that in some cases the reflective plate may be replaced by a coating applied directly to the underlying planar surface of the waveguide 14132, as long as that planar surface is optically flat. As previously described herein, the eye light 14112 may provide infrared illumination 14113, as long as the camera 14122 is sensitive to infrared light. By positioning the camera 14122 behind the angled reflective plate 14175, the image light 14130 and the light 14123 reflected by the user's eye 1408 can be coaxial, such that the captured image of the user's eye 1408 is from a perspective directly in front of the user. Fig. 141b illustrates another embodiment, in which the eye light 14112 is positioned adjacent to the camera 14122, such that the illumination light 14113 is transmitted by the reflective plate 14175 and conveyed by the waveguide 14132 in a similar manner as the image light 14130, so that the illumination light 14113 is redirected toward the user's eye 1408.
Fig. 142a and 142b show illustrations of folding optics for a head-mounted display including a waveguide 14232 having at least one holographic optical element 14242 and an image source 14253. In this embodiment, the image source 14253 provides image light 14230 to the waveguide 14232 (not shown) such that the holographic optical element 14242 may redirect the image light 14230 toward the user's eye 1408 at approximately 90 degrees. A camera 14222 is provided to capture images of the user's eye 1408. The eye light 14212 provides illumination light 14213 to the user's eye 1408. The light 14223 is reflected by the user's eye 1408 and captured by the camera 14222. As shown in fig. 142a, the eye light 14212 is positioned to one side of the waveguide 14232 and adjacent to the camera 14222. A heat mirror coating (where its reflection spectrum matches the infrared spectrum provided by the eye light 14212) is applied to at least a portion 14224 of the waveguide 14232 so that most of the light 14223 is reflected toward the camera 14222, and at the same time provides a bright perspective view of the surrounding environment. Fig. 142b shows a diagram of similar folding optics for a head-mounted display, where the waveguide 14232 is positioned at an angle to the user's eye 1408 to provide a closer fit of the folding optics to the user's head. In this case, the holographic optical element 14242 is designed to redirect the image light 14230 at approximately 110 degrees to the waveguide and toward the user's eye 1408. The camera 14222 is then positioned at an end of the waveguide 14232 opposite the image source 14253 so that the angle between the light 14223 reflected from the user's eye 1408 and the illumination light 14213 can be reduced. In this manner, an image of the user's eye 1408 having more uniform brightness may be captured by the camera 14222. As previously described herein, at least a portion 14224 of the waveguide 14232 may be a heat mirror to reflect a majority of the light 14223 reflected by the user's eye 1408, while providing a bright perspective view of the surrounding environment.
Fig. 143 shows a diagram of folded optics for a head mounted display in which illumination light is injected into the waveguide and redirected by a holographic optical element such that the user's eye is illuminated. Eye light 14312 is positioned at one end of the waveguide 14232, so that illumination light 14313 can be injected into the waveguide 14232 and conveyed to the holographic optical element 14242 along with the image light 14230. The holographic optical element 14242 then redirects the image light 14230 and the illumination light 14313 toward the user's eye 1408. The holographic optical element 14242 must therefore be able to redirect both the image light 14230 and the illumination light 14313, where the image light 14230 is visible light and the illumination light 14313 may be infrared light. Light 14223 reflected by the user's eye is then reflected by the waveguide surface and captured by the camera 14222. A hot mirror coating, whose reflection spectrum matches the infrared spectrum provided by the eye light, is applied to at least a portion 14224 of the waveguide 14232, such that a majority of the light 14223 is reflected toward the camera 14222 while at the same time a bright see-through view of the surrounding environment is provided. An advantage of this design is that the illumination system comprising the eye light 14312 can be made more compact. Fig. 144 shows a diagram of folded optics for a head mounted display similar to the system shown in fig. 143, where a series of angled partial mirrors 14442 is included in the waveguide instead of a holographic optical element. In this case, illumination light 14413 is injected into the waveguide 14432 along with the image light 14230 provided by the image source 14253. The illumination light 14413 and the image light 14230 are conveyed by the waveguide 14432 to the series of angled partial mirrors 14442, which redirect the illumination light 14413 and the image light 14230 toward the user's eye 1408. The light 14223 reflected by the user's eye 1408 is reflected by a hot mirror coating applied to at least a portion 14224 of the waveguide 14432, where the reflection spectrum of the hot mirror matches the infrared spectrum of the illumination light 14413. An advantage of this design is that the illumination system is compact and that the series of angled partial mirrors can readily be made to operate on both the visible image light 14230 and the infrared illumination light 14413.
When using a head mounted display for augmented reality applications, and in particular when the head mounted display provides a see-through view of the surrounding environment, it can be important to be able to change the depth of focus at which the displayed image is presented. It is also important to present stereoscopic images at the proper vergence distance to provide the intended depth perception to the user. The focus distance is the distance at which the user's eyes must focus to view a sharp image, and the vergence distance is the distance at which the user's two eyes view the same point in an image or on a real object together. Within a stereoscopic image, objects intended to be perceived as being at different depths are presented with a rendered lateral shift between the relative positions of the objects within the left and right images, which is referred to as disparity. The rendering of typical stereoscopic imagery as viewed in a theater or on a television relies mostly on disparity mapping of the objects to produce the 3D effect, because the focus distance is fixed at the theater screen or the television (see "Nonlinear disparity mapping for stereoscopic 3D", M. Lang, A. Hornung, O. Wang, S. Poulakos, A. Smolic, M. Gross, ACM Transactions on Graphics, 29(4), July 2010, DOI: 10.1145/1833349.1778812). To make the stereoscopic viewing experience more comfortable for the user, the vergence distance associated with viewing an augmented reality object should closely match the focus distance associated with the same augmented reality object, thereby enabling the augmented reality object to appear more like a real object to the user of the head mounted display. Systems and methods in accordance with the principles of the present invention provide ways to vary the focus distance and vergence distance associated with augmented reality objects and images viewed in a head mounted display, in a manner that more closely matches real objects in the see-through view of the surrounding environment.
The focus distance of the image displayed in any head mounted display is determined by elements in the optics of the head mounted display. The focus distance of the image can be changed by changing elements in the optics or by changing the relative positioning of some elements in the optics. The vergence distance associated with a stereoscopic image is determined by the lateral positioning of the images within the fields of view of the user's left and right eyes. The vergence distance can be changed by laterally shifting the left and right images relative to each other within the user's field of view, either by re-pointing the left and right optics to establish a different convergence point between them or by digitally shifting the displayed images within the display fields of view. In order to provide a stereoscopic viewing experience of an augmented reality object that more closely resembles the viewing experience associated with a real object, it is important that the focus distance match the vergence distance of the augmented reality object in the stereoscopic images displayed in the head mounted display, within the limits of the user's eyes. Given that augmented reality objects are often positioned at different distances within a stereoscopic image, and that different augmented reality activities take place at different distances, the inventors have found that there is a need for a method of changing the focus distance with a corresponding change in vergence distance in all types of head mounted displays.
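The relationship between the lateral shift of the left and right images and the resulting vergence distance follows from simple triangulation. The sketch below is illustrative only: the interpupillary distance, display field of view, and resolution are assumed values used to convert a pixel disparity into a vergence distance, and are not taken from the patent.

```python
# Hedged sketch of the vergence geometry: for an interpupillary distance
# and a total lateral disparity between the left and right images, the
# vergence distance follows from similar triangles.
import math

IPD_M = 0.0635    # assumed interpupillary distance, 63.5 mm
FOV_DEG = 30.0    # assumed horizontal display field of view
H_PIXELS = 1280   # assumed horizontal display resolution

def vergence_distance(disparity_pixels):
    """Distance at which the two eyes' lines of sight converge for a
    given total lateral disparity between the left and right images."""
    deg_per_pixel = FOV_DEG / H_PIXELS
    half_angle = math.radians(disparity_pixels * deg_per_pixel / 2.0)
    return (IPD_M / 2.0) / math.tan(half_angle)

# Example: a 155-pixel total disparity places the vergence distance at
# about 1.0 m under these assumptions.
print(vergence_distance(155))
```

Run in reverse, the same relation gives the digital lateral shift needed to place the vergence distance at a desired working distance.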
Fig. 145 shows a diagram (shown from the side and from the eye position) of a beam splitter-based optical module for a head-mounted display, including upper optics 14510 and combiner 14520. Wherein the upper optics 14510 includes an image source, a light source, and one or more lens elements. Combiner 14520 is a beam splitter that reflects a portion of the image light associated with the displayed image toward the user's eye while also allowing light from the surrounding environment to be transmitted so that the user sees the displayed image superimposed onto a see-through view of the surrounding environment. Fig. 146 shows a diagram (also shown from the side and from the eye position) of an optical module for a head-mounted display that has been modified to change the focus distance by adding a focus-shifting element 14625. Where the focus-shifting element 14625 is a thin lens having optical power. For example, the focus shifting element 14625 required to change the focus distance from infinity to 1 meter needs to provide an optical power of-1 diopter. Thus, the focus shifting element 14625 may be a portion of a refractive lens, such as an ophthalmic lens, that is 1 to 1.5mm thick. Alternatively, the focus-shifting element 14625 may be a Fresnel lens, which may be thinner than a refractive lens. By positioning the focus-shifting element 14625 above the combiner 14520, the optical power of the focus-shifting element 14625 acts only on the displayed image and does not change the perspective view of the surrounding environment. The method may be used in any type of optics for a head-mounted display (e.g., projection optics with a see-through combiner, holographic image projection apparatus with a see-through combiner, see-through optics with a see-through waveguide, TIR waveguide, etc.), where space is available for inserting a focus-shifting element with optical power into the optical path such that the focus distance is changed without changing the see-through view. Where upper optics 14510 utilizes polarized image light, polarization control elements 14515 may be included to modify the polarization state of the image light. Wherein the polarization control element may comprise one or more of: polarizers to remove unwanted polarization states, retarders to change the image light to circular polarization (such as quarter wave films), or half wave films to change the polarization state.
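The diopter arithmetic behind the focus shifting element is straightforward: for display optics nominally focused at infinity, an added power of P diopters places the displayed image at a focus distance of -1/P meters. A minimal sketch:

```python
# Focus-shift power for display optics nominally focused at infinity.

def focus_shift_power(target_distance_m):
    """Power (diopters) needed to move the displayed-image focus from
    infinity to the target distance."""
    return -1.0 / target_distance_m

print(focus_shift_power(1.0))   # -1.0 diopter, as in the example above
print(focus_shift_power(0.6))   # about -1.67 diopters; compare the
                                # -1.6 diopter arm's-length example below
```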
For cases where the user's eyes are unable to focus at the focus distance associated with the displayed image, a corrective lens element may be provided behind the optics module to improve the sharpness of the displayed image perceived by the user. In this case, the corrective lens element is based on the user's ophthalmic prescription and the corrective lens element improves both the see-through view of the surrounding environment and the view of the displayed image for the user. Fig. 146a shows an illustration of a side view of an optics module including a corrective lens element 14624. The corrective lens element 14624 may have a positive or negative refractive power, as required by the user for viewing the displayed image at the focusing distance. Additionally, the corrective lens elements may also include astigmatism and wedges, as included in the user's ophthalmic prescription. The corrective lens elements 14624 for the left and right eyes may be connected to each other to provide a corrective unit that is attached and aligned with the optics module or the frame of the head-mounted display with built-in interpupillary spacing or flexible interpupillary spacing. Alternatively, the left and right corrective lens elements 14624 may be separate and separately attached and aligned with the optics module or the frame of the head mounted display. For example, for applications in which the displayed image is presented with a focus distance and a vergence distance of 0.6 meters such that augmented reality objects or information may be provided for tasks performed at arm length, focus-shifting element 14625 may have a refractive power of-1.6 diopters and may provide only refractive power, and corrective lens element 14624 may have a refractive power of +2 diopters and also provide corrections for astigmatism and wedge per the user's ophthalmic prescription. Where the +2 diopter correcting lens element 14624 would be a fairly typical prescription for reading glasses for a person aged about 55 and would thus enable the person to clearly view objects and images positioned at arm length. The corrective lens element 14624 shown in fig. 146a is a refractive lens, but other types of lenses are possible, such as a fresnel lens.
Although lenses having fixed optical power are shown for the focus shifting element 14625 and the corrective lens element 14624, lenses having adjustable optical power may also be used. Adjustable lenses using sliding lens elements (see U.S. patent 3,305,294) or liquid injection are available, for example, from Adlens (Oxford, UK; https://www.adlens.com/). Electrically adjustable lenses may also be used as corrective lenses, such as liquid crystal lenses available from LensVector (Sunnyvale, CA) or liquid lenses available from Varioptic (Lyon, France).
In addition, the optics modules may be mounted in the frame of the head mounted display such that they point slightly toward each other (also referred to as toe-in) to provide a convergence distance. In this way, the convergence distance is established by the structural arrangement of the optics in the head mounted display, and the vergence distance can then be adjusted by lateral digital shifting of similar portions of the displayed left and right images to produce disparity for a portion of the images. The convergence distance thereby establishes the baseline vergence distance perceived by the user for a stereoscopic image rendered without disparity. To provide an improved stereoscopic viewing experience, the convergence distance associated with the structural arrangement of the optics must be taken into account when rendering the disparity associated with a displayed object in a stereoscopic image. This is particularly important in head mounted display systems where the focus distance and vergence distance are matched for augmented reality objects in stereoscopic images. Consequently, the rendering of stereoscopic images initially rendered for viewing in a theater may need to be adjusted for improved viewing in the head mounted display. The convergence distance may also be used to establish a perceived distance to the entire image if the stereoscopic image is rendered without disparity, which can be useful for applications such as head mounted computing where a desktop screen associated with the computer is perceived at the distance established by the convergence distance. However, the convergence distance cannot be too close to the user, because the left and right images will then exhibit opposite versions of keystone distortion. For example, if the user's eyes are separated by approximately 63.5 mm, a convergence distance of 2.4 meters can be provided by toeing in each of the optics modules by 0.75 degrees. The inventors have found that a toe-in of 0.75 degrees results in a negligible level of keystone distortion. Closer convergence distances require a greater degree of toe-in, so that keystone distortion between the left and right images degrades the perceived sharpness in the corners of the stereoscopic image. The keystone distortion can be compensated by rendering the left and right images with matching and opposite levels of keystone pre-distortion.
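The toe-in example can be verified with a one-line calculation: the inward angle of each optics module is the arctangent of half the interpupillary distance divided by the convergence distance. An illustrative sketch:

```python
# Toe-in angle per optics module for a given convergence distance.
import math

def toe_in_degrees(ipd_m, convergence_distance_m):
    return math.degrees(math.atan((ipd_m / 2.0) / convergence_distance_m))

print(toe_in_degrees(0.0635, 2.4))  # ~0.76 degrees, matching the text
```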
Fig. 147 shows a depiction of left and right optics modules connected together in chassis 14727, with the depiction shown from the back of chassis 14727 where the user's eyes would be. Chassis 14727 allows the optics module to be built as a separate unit that is assembled into the head mounted display. By making chassis 14727 structurally rigid, optics modules can be physically aligned relative to each other and the focus distance and convergence distance can be checked and adjusted, if necessary, before being assembled into a head-mounted display, providing additional manufacturing flexibility.
Fig. 147 also shows the focus shift elements 14625 for the left and right optics modules connected in a focus shift element pair 14731. By connecting the focus shift elements 14625 together, it is easier to add a pair of focus shift elements when needed for augmented reality imaging at different distances. The connection between the focus shift elements 14625 in the focus shift element pair 14731 may be rigid, as shown in fig. 147, or may be flexible to enable the focus shift element pair 14731 to adjust to different spacings between the left and right optics modules, as with chassis having different widths for users having different spacings between their eyes. Focus shift elements 14625 having various optical powers are used to provide different focus distances for the displayed image for augmented reality activities requiring images to be displayed at different working distances. The focus shift elements 14625 may also be different for the left and right eyes to provide different focus distances for the left and right eyes. The focus shift elements 14625 may also be provided without optical power, in which case they act as protective windows for the upper optics 14510.
In the simplest form, the mode change associated with changing the focus distance and vergence distance may be implemented by a user entering information through a user interface (such as a button or graphical user interface) and selecting an option. Confirmation of the mode change may then be provided to the user in the displayed image, such as, for example, a colored box around the edge of the display field of view or a message stating "mode change initiated for arm's length display". In a more automated mode change, a sensor 14730 may be provided that senses the focus shift element pair 14731 so that the image may be automatically presented with a lateral shift that provides a different vergence distance that matches the focus distance provided by the focus shift element 14625. The sensor 14730 may simply sense whether the focus shift element pair 14731 is present. Alternatively, the sensor 14730 may detect a code (e.g., a barcode) on the focus shift element pair 14731 that corresponds to the focus distance or optical power provided by the focus shift element 14625, so that the displayed image may be automatically digitally shifted laterally to provide a matching vergence distance. The sensor may be located in the center as shown in fig. 147, but other locations are possible, such as on one side. The code may be on one of the optical surfaces or on an edge of the focus shift element 14625, and the sensor 14730 may be oriented in a corresponding manner to read the code. If the focus shift elements 14625 are not connected in a focus shift element pair 14731, two sensors 14730 may be provided, with one sensor 14730 on each side. When the focus shift element 14625 is detected, the displayed image may be automatically changed in response to the change in the mode of operation implied by the detected presence of the focus shift element 14625. In addition to the lateral shifting that changes the vergence distance as previously discussed herein, other changes may be made to the presentation of the displayed image when a focus shift element is present, including: size, magnification, format (e.g., 4:3 instead of 16:9), color, contrast, dynamic range, or resolution. These changes to the image are made to improve the viewing experience for the user when operating at different display distances, such as in augmented reality activities. The changes in magnification and format are particularly important in the case of such mode changes, because the lateral shifting of the image that changes the vergence distance results in some cropping of the available display field of view, and the power associated with the focus shift element 14625 changes the overall power of the display optics.
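The automated mode change described above reduces to reading the sensor, mapping the detected code to the optical power of the focus shift element, and reconfiguring the rendering accordingly. The following minimal sketch assumes a hypothetical renderer object and code-to-power table; none of these names are from the patent:

    # Hypothetical mapping from a barcode on the focus shift element pair to the
    # optical power (diopters) of the focus shift elements.
    CODE_TO_POWER_D = {"FS-00": 0.0, "FS-16": -1.6}

    def on_focus_shift_element_detected(code, renderer):
        # Apply the display-side changes implied by a detected focus shift element.
        power_d = CODE_TO_POWER_D.get(code)
        if power_d is None:
            return                                   # unknown element: no change
        if power_d == 0.0:
            focus_distance_m = float("inf")          # protective window only
        else:
            focus_distance_m = 1.0 / abs(power_d)    # e.g. -1.6 D -> 0.625 m
        renderer.set_vergence_distance(focus_distance_m)   # lateral image shift
        renderer.set_scale_for_power(power_d)              # magnification change
        renderer.show_message("mode change initiated for arm's length display")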
Fig. 148 and 149 show how the displayed image can be digitally shifted laterally within the display field of view to change the vergence distance seen by the user. Fig. 148 shows left and right images, 14841 and 14843, respectively, provided at the nominal vergence distance within the left and right display fields of view (14840 and 14842, respectively). The nominal vergence is established by the alignment of the optics modules relative to each other in the head-mounted display. The nominal vergence distance may, for example, be infinity, in which case the optical axes of the left and right display fields of view would be parallel to each other. In a preferred embodiment, the optical axes of the left and right display fields of view (14840 and 14842) are each tilted inward by approximately 0.75 degrees, such that for a typical user with an interpupillary spacing of 63.5mm between their eyes, a nominal vergence distance is established at approximately 2.4 meters. Fig. 149 shows how left image 14941 and right image 14943 are laterally shifted toward each other within left display field of view 14940 and right display field of view 14942, respectively, to provide a shorter vergence distance. By shifting left image 14941 and right image 14943 toward each other, the user's eyes must point slightly toward each other to view left image 14941 and right image 14943 as a stereo pair with a shorter vergence distance. For improved comfort when viewing stereoscopic pairs, the focus distance should match the vergence distance. Upon laterally shifting left and right images 14941 and 14943, portions of the left and right display fields of view (shown as 14945 and 14946) become unavailable for stereoscopic imaging because those regions do not overlap in the user's field of view. Thus, when the head-mounted display is used with a vergence distance that is different from the nominal vergence distance, the available sizes of the left display field of view 14940 and the right display field of view 14942 are reduced. The advantage of performing a digital shift of left image 14941 and right image 14943 to provide different vergence distances is that switching from the nominal vergence distance to a different vergence distance may be done without having to change the physical arrangement of the optics modules in the head-mounted display. To reduce the cropping of the display field of view, additional pixels that are not typically used to display an image may be provided on the image source; these pixels are used when operating in a mode in which a lateral shift of the image is required. For example, an image source having 1310 x 768 pixels may typically be used to display an image having 1280 x 720 pixels, such that the additional pixels around the edge are only used when the displayed image is digitally shifted to change the vergence distance. Due to vignetting, the brightness of the portions of the displayed image displayed with the pixels around the edge may need to be increased.
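The lateral shift corresponding to a target vergence distance follows from the same geometry: shift each image by the difference between the target vergence angle and the nominal toe-in angle, converted to pixels. A minimal sketch using the 63.5mm interpupillary spacing and 0.75 degree toe-in from the example; the 30 degree per-eye field of view spanning 1280 pixels is an assumption:

    import math

    def shift_px(target_distance_m, ipd_m=0.0635, nominal_toe_in_deg=0.75,
                 fov_deg=30.0, fov_px=1280):
        # Per-eye lateral shift, in pixels toward the nose, for a target vergence.
        target_deg = math.degrees(math.atan((ipd_m / 2.0) / target_distance_m))
        return (target_deg - nominal_toe_in_deg) * (fov_px / fov_deg)

    # Changing from the 2.4 m nominal vergence distance to 0.6 m (arm's length):
    print(round(shift_px(0.6)))    # ~97 pixels per eye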
As previously mentioned herein, a change in focus distance may also be provided by changing the relative positioning of certain elements in the optics. Fig. 150a and 150b illustrate a mechanism for moving image source 15040 relative to one or more lens elements 15012 in upper optics 14510 to provide a change in focus distance of the displayed image. Typically, moving image source 15040 up, as shown in fig. 150b, moves the focus distance further away, and vice versa. The mechanism shown includes an upper wedge 15042 and a lower wedge 15043, as well as solenoids 15035 and 15036 acting on cores 15037 and 15038, respectively. The cores 15037 and 15038 are made of ferromagnetic material and attached to the lower wedge 15043. Solenoids 15035 and 15036 comprise cylindrical windings of conductive wire, such that when current is applied to a winding, the corresponding core 15037 or 15038 is attracted into the solenoid and the attached lower wedge is thereby moved to one side or the other. The solenoids are fixed in position relative to the housing of the upper optics 14510. As the lower wedge 15043 moves laterally, the upper wedge 15042 moves up and down along with the image source 15040 attached to the upper wedge 15042. Thus, when current is applied to the solenoid 15035, the lower wedge 15043 is moved to the left, as shown in fig. 150b, and as a result the upper wedge 15042 is moved upward along with the image source 15040, and the focus distance is increased. Similarly, when current is applied to the solenoid 15036, the lower wedge 15043 is moved to the right, as shown in fig. 150a, the upper wedge 15042 is moved downward along with the image source 15040, and the focus distance is reduced. By using an upper wedge 15042 and a lower wedge 15043 with relatively shallow wedge angles (e.g., 5 to 15 degrees), the wedges tend to stay in place when the current to the solenoid is turned off. Opposing permanent magnets (not shown) may be added to the wedges 15042 and 15043 to increase friction between the wedges and thereby help hold the wedges in place when the current to the solenoid is turned off. In this way, the power required to operate the solenoids (15035 and 15036) can be very small, even though a relatively large current is required to generate sufficient force to move the lower wedge 15043. By alternately applying current to solenoids 15035 and 15036, the focus distance may be alternately switched between two focus distances, such as between focus distances of 2.4 meters and 0.6 meters. This method of changing the focus distance can be used with any optics that use a microdisplay at the focal plane of the optics, such as waveguide-based optics or beam splitter cube-based optics. This arrangement may also be used with pulsed application of current to solenoids 15035 and 15036 to cause step-wise changes in wedge position and associated step-wise changes in focus distance, spanning a continuous range, multiple step-wise ranges, and so forth. Additionally, guidance for movement of the image source 15040 may be provided by sliding pins that pass through the upper wedge 15042 or an associated structure (not shown), wherein the pins allow vertical movement and prevent lateral movement.
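Mechanically, the wedge pair is a motion reducer: the vertical travel of the image source equals the lateral travel of the lower wedge multiplied by the tangent of the wedge angle, which is why a shallow wedge angle gives fine focus control and helps the wedges hold position. A minimal sketch of that relationship:

    import math

    def vertical_travel_mm(lateral_travel_mm, wedge_angle_deg):
        # Rise of the image source for a given lateral slide of the lower wedge.
        return lateral_travel_mm * math.tan(math.radians(wedge_angle_deg))

    def lateral_travel_needed_mm(rise_mm, wedge_angle_deg):
        # Slide of the lower wedge needed for a given rise of the image source.
        return rise_mm / math.tan(math.radians(wedge_angle_deg))

    # With a 10 degree wedge, 0.5 mm of focus travel needs about 2.8 mm of slide:
    print(round(lateral_travel_needed_mm(0.5, 10.0), 2))   # 2.84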
Fig. 151a and 151b show illustrations of the upper wedge 15042 and the lower wedge 15043 from the location of the image source 15040. As shown, the wedges 15042 and 15043 comprise rectangular structures with their centers removed, like a window frame, so that illumination light and image light can pass through the wedges (15042 and 15043) to enable display of an image. This is important when the mechanism that moves image source 15040 is positioned below image source 15040 (along the optical path of the image light). Fig. 151a corresponds to the wedge positioning shown in fig. 150a, and fig. 151b corresponds to the wedge positioning shown in fig. 150b. The advantage of the layout shown in fig. 150a and 150b is that the wedges 15042 and 15043, as well as the other pieces in the mechanism, do not increase the overall height of the upper optics 14510.
In an alternative embodiment (not shown), the mechanism for moving image source 15040 is positioned above image source 15040, in which case the wedges (15042 and 15043) may be solid or may have a central portion removed to enable leads to be connected to image source 15040. An advantage of positioning the wedges and the other pieces of the mechanism above image source 15040 is that the image source may be positioned closer to lens element 15012, which may be important in certain optical designs.
In another embodiment, the wedges (15042 and 15043) may be transparent and may cover the entire aperture of the image source 15040. The transparent wedges (15042 and 15043) may operate as previously described to move the image source 15040. Additionally, as the wedges are moved laterally, the combined optical thickness of the two wedges in the area covering the active area of image source 15040 is a function of the relative wedge positions. This is because the transparent wedges have a higher refractive index than the air they replace. Since the wedges are matched in slope, the combined optical thickness of the region where the wedges overlap is uniform. Thus, the change in the combined optical thickness of the overlapping wedges contributes to the change in the focus distance.
To further improve the repeatability of the movement of image source 15040 and upper wedge 15042 when lower wedge 15043 is moved, spring clips may be used to apply a force to image source 15040 or upper wedge 15042 to ensure that contact is maintained between the surfaces. Fig. 152 shows an illustration of spring clips 15250 and 15252 applying force to the image source 15040, where the image source 15040 is attached to the upper wedge 15042. Spring clips 15250 and 15252 are attached to the housing of upper optics 14510 using screws 15253, ultrasonic welding, adhesive, or other connection systems. To reduce lateral movement of the image source 15040 when the lower wedge 15043 is moved, one or both of the spring clips 15250 and 15252 may be connected to the image source 15040 or the upper wedge 15042. In this way, vertical movement (as shown) for changing the focus distance is allowed by bending of the spring clips 15250 and 15252, while lateral movement is prevented due to the higher stiffness of the spring clips in the lateral direction, especially if both spring clips 15250 and 15252 are connected to the image source 15040 or the upper wedge 15042.
In another embodiment, the movement of the lower wedge 15043 is controlled by a motor and lead screw (rather than a solenoid), where the motor is connected to the housing of the optics module and the lead screw or core is connected to the lower wedge 15043. The motor may be a conventional rotary motor, a linear motor, a vibrating piezoelectric motor, an induction motor, or the like. The motor may also be controlled to move the lower wedge 15043 different distances to provide various focus distances. The motor may be a stepper motor, where the number of steps determines the distance moved. Sensors may also be provided to detect movement of the lower wedge, lead screw, or core, thereby improving the accuracy of the movement and the associated accuracy of the focus distance change.
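For the stepper motor variant, the focus travel per step follows from the lead screw pitch and the motor's steps per revolution. A minimal sketch with assumed values (a 0.5 mm lead and 200 steps per revolution; these figures are not from the patent):

    def travel_per_step_mm(lead_mm_per_rev=0.5, steps_per_rev=200):
        # Lower wedge travel produced by one motor step.
        return lead_mm_per_rev / steps_per_rev

    def steps_for_travel(travel_mm, lead_mm_per_rev=0.5, steps_per_rev=200):
        return round(travel_mm / travel_per_step_mm(lead_mm_per_rev, steps_per_rev))

    print(travel_per_step_mm())      # 0.0025 mm of wedge travel per step
    print(steps_for_travel(2.84))    # 1136 steps to traverse a 2.84 mm slide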
In yet another embodiment, the movement of the lower wedge 15043 is provided by a manually operated knob (not shown). The knob is connected to a lead screw that is threaded into the lower wedge 15043. The user turns the knob to move the lower wedge and thereby effect a change in focus distance. This can be used to fine tune the sharpness of the displayed image, to change the focus distance to match a given vergence distance, or to match the focus distance to a real object in the see-through view of the surrounding environment.
In further embodiments, the corrective lens element 14624 may include a mechanism (not shown) to enable the corrective lens element 14624 to slide or swing upward or to one side, to thereby be moved out of the display field of view while still attached to the head-mounted display. In this manner, corrective lens element 14624 remains readily available for use with the head-mounted display. This may be useful because the correction acts on both the displayed image and the see-through view of the surrounding environment. There may be times when the user would like to be able to change the focus distance of the displayed image or change the focus of the see-through view of the surrounding environment, depending on the activity he is engaged in, and having a readily available corrective lens element 14624 would enable this. In particular, a user may need a corrective lens when operating at extreme focus distances, such as arm's length or closer, or at infinity. In embodiments, the corrective lens 14624 may be manually or automatically shifted into position.
In yet another embodiment, an eye camera is included in each of the left and right optics modules to determine the respective directions in which the user's eyes are looking. This information may then be used to determine the portion of the displayed image that the user is looking at. The focus distance may then be adjusted to match the vergence distance associated with the augmented reality object in that portion of the displayed image. The focus distance is then automatically adjusted as the user moves his eyes to different augmented reality objects, or different portions of augmented reality objects, within the displayed image. Alternatively, the eye cameras may be used to determine the vergence of the user's eyes and thereby determine the distance at which the user is looking in the see-through view of the surrounding environment. The focus distance or vergence distance may then be adjusted to correspond to the distance at which the user is looking. The focus distance or vergence distance may be automatically adjusted to match the distance at which the user is looking in the see-through view of the surrounding environment, or adjusted to be at a different distance so that the displayed image does not interfere with the user's view of the surrounding environment.
Fig. 153a, 153b, and 154 show illustrations of example display optics including eye imaging. Fig. 153a and 153b show display optics including upper optics 14510 and a combiner 14520 to provide image light 15370 to an eyebox 15366 where the user's eye will be located when viewing a displayed image overlaid onto a see-through view of the surrounding environment. An eye camera 15364 is provided on a side of the upper optics 14510 and angled toward the combiner 14520 to capture light 15368 from the user's eye in the eyebox 15366 that is reflected by the combiner 14520. One or more LEDs 15362 are provided adjacent to the upper optics 14510 and are directed to provide illumination light 15367 to the eyebox 15366 and the user's eye (either directly or reflected from an optical surface such as the combiner 14520) when the head-mounted display is being used by the user. The LED 15362 may provide infrared light 15367 as long as the eye camera 15364 is sensitive to infrared light.
Fig. 154 shows a representation of display optics, including projection optics 15410, waveguide 15415, and holographic optical elements 15417 and 15413, as viewed from above. The projection optics 15410 may include one or more optical elements 15412 to modify the image light 15470 as needed to couple the image light 15470 into the holographic optical element 15413 and into the waveguide 15415. The optical element 15412 may change the wavelength of the image light 15470, change the format of the image light 15470, change the size of the image light 15470, or pre-distort the image light 15470 as needed to enable the image light 15470 to be presented to the user's eye 15466 in a desired format with reduced distortion. Optical element 15412 may include: refractive lenses, diffractive lenses, toroidal lenses, free-form lenses, gratings, or filters. The holographic optical element 15413 deflects image light 15470 that has been provided by the projection optics 15410 into the waveguide 15415, where the image light is transmitted to the holographic optical element 15417. Holographic optical element 15417 then deflects image light 15470 toward the user's eye 15466, where the displayed image is viewed as an image superimposed onto a see-through view of the surrounding environment. An eye camera 15464 is provided for capturing images of the user's eye reflected by the surface of the waveguide when the head-mounted display is being used by the user. One or more LEDs 15462 are provided adjacent to the waveguide 15415 to illuminate the user's eye 15466 (either directly or reflected from the surface of the waveguide) and thereby increase the brightness of the captured image of the user's eye 15466. The LED 15462 may provide infrared light as long as the eye camera 15464 is sensitive to infrared light.
To improve the efficiency of the eye imaging system shown in fig. 153a, 153b and 154, a coating may be applied to the surface that reflects light from the eye toward the eye camera. The coating may be a heat reflective mirror coating that reflects infrared light and transmits visible light. In this way, the eye camera may capture a bright image of the user's eyes while providing the user with a bright perspective view of the surrounding environment.
The eye camera (15364 or 15464) may include auto-focus to automatically adjust a focus setting of the eye camera when the user's eye is in a different position, such as when the head mounted display is positioned differently on the user's head or when a different user is using the head mounted display. Wherein the autofocus adjusts the relative position of the lens elements or adjusts the refractive power associated with the adjustable lens elements in the optics associated with the eye camera to provide higher contrast in the image of the user's eye. Additionally, when the corrective lens 14624 is present, the auto-focusing may automatically adjust the focus and thereby compensate for the corrective lens 14624. In this case, the metadata saved with the image of the user's eye records the relative focus setting of the eye camera (15364 or 15464), and the change in the metadata can be used to determine whether corrective lens 14624 is present. If the corrective lens 14624 is present, an adjustment to the focus distance of the display optics may be made that takes into account the presence of the corrective lens 14624.
The images of the user's eyes may be used to determine the direction in which the user is looking by determining the relative position of the user's pupils within the eyebox or within the field of view of the eye camera 15364. From this information, the relative directions in which the left and right eyes are looking can be determined. The relative direction information may be used to identify which portion of the displayed image the user is looking at. By comparing the relative directions of the user's left and right eyes within simultaneously captured images, the difference in relative direction between the left and right eyes and the interpupillary distance between the user's eyes may be used to determine the vergence viewing distance at which the user is looking. The vergence viewing distance may be used to determine the desired focus distance and vergence distance in the displayed image to provide the user with a sharply focused augmented reality object in the displayed image. The determined vergence viewing distance may also be compared to the vergence distance associated with the portion of the displayed image that the user is looking at to determine whether the user is looking at the displayed image or at the see-through view of the surrounding environment. Adjustments may be made to the focus distance and vergence distance for different portions of the displayed image to present a sharply focused image to the user in the portion of the image the user is looking at, or to present a blurred image to the user in the portion of the image the user is looking at when required by the mode of operation or use case. Digital blurring of portions of an image may be used to make those portions appear to have a focus distance that is closer or further away than the portions of the image that are left sharp. In addition, the vergence viewing distance may be compared to the disparity associated with the portion of the stereoscopic image that the user is looking at. The disparity of the stereoscopic image may then be adjusted locally at the portion of the image that the user is looking at, or scaled over the entire stereoscopic image to present the user with an adjusted stereoscopic depth over the entire image.
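The vergence viewing distance computation described here is the inverse of the toe-in geometry: the inward rotation of each eye, extracted from the pupil positions in simultaneously captured images, gives the distance at which the two lines of sight cross. A minimal sketch, assuming roughly symmetric gaze:

    import math

    def vergence_viewing_distance_m(ipd_m, left_inward_deg, right_inward_deg):
        # Distance at which the lines of sight cross, from per-eye inward rotation.
        mean_inward_deg = 0.5 * (left_inward_deg + right_inward_deg)
        if mean_inward_deg <= 0.0:
            return float("inf")   # parallel or diverging eyes: distant gaze
        return (ipd_m / 2.0) / math.tan(math.radians(mean_inward_deg))

    # Each eye rotated inward by about 3 degrees corresponds to arm's length:
    print(round(vergence_viewing_distance_m(0.0635, 3.0, 3.1), 2))   # ~0.6 m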
The head mounted display may include an inertial measurement unit to determine the position, movement, and gaze direction of the head mounted display. Wherein the inertial measurement unit may include: position determining systems such as GPS, electronic compasses (to determine gaze direction in compass direction), accelerometers and gyroscopes (to determine movement), and tilt sensors (to determine vertical gaze direction). Comparing the viewing direction determined from the image of the user's eyes with the gaze direction determined by the inertial measurement unit may allow a compass heading to be determined for the direction the user is looking. Combining the determined position with the compass orientation of the direction the user is looking at may allow identifying objects in the surroundings the user is looking at. This identification can be further improved by: the vergence viewing distance and compass heading for the direction the user is looking at is compared to objects in the surrounding environment known to be at that distance and direction from the user. This type of determination may be important for augmented reality and display of augmented reality objects relative to real objects.
In order for the focus distance to be adjustable as the user moves his eyes around the field of view, the vergence viewing distance must be determined quickly, and a fast focus adjustment system is required. Vergence and parallax within the stereoscopic image must be adjusted corresponding to the determined change in the vergence viewing distance. Response times of 0.033 seconds or less are typically required for imaging modifications within head-mounted display systems to prevent the user's viewing experience from being adversely affected by latency, such as nausea experienced by the user (see the paper "Tolerance of Temporal Delay in Virtual Environments", R. Allison, L. Harris, M. Jenkin, U. Jasiobedzka, J. Zacher, IEEE Virtual Reality 2001, March 2001, p247-254, ISBN 0-7695-). When a person's gaze changes from a far object to a near object, the human eye can change the vergence viewing distance quickly, while the focus adjusts more slowly. To enable this, a fast frame rate (e.g., 60 frames/second or more) is required for capturing images of the user's eyes, and the images need to have high contrast to enable fast image analysis to determine the relative positions of the user's eyes. The user's viewing direction and vergence viewing distance can then be determined to further determine where and at what the user is looking. A fast focus distance adjustment system is then needed to adjust the focus distance in 0.5 seconds or less as the user moves his eyes.
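The timing requirements above amount to a simple latency budget: the stages of eye image capture, analysis, and image modification must together fit within 0.033 seconds, while the mechanical focus change may take up to 0.5 seconds. A minimal sketch; the individual stage durations used below are assumed placeholders:

    IMAGE_BUDGET_S = 0.033   # maximum latency for imaging modifications
    FOCUS_BUDGET_S = 0.5     # maximum latency for the focus distance adjustment

    def within_budget(capture_s, analysis_s, render_s, focus_move_s=0.0):
        image_latency_s = capture_s + analysis_s + render_s
        return image_latency_s <= IMAGE_BUDGET_S and focus_move_s <= FOCUS_BUDGET_S

    # 60 frame/s eye capture (16.7 ms) plus 5 ms analysis and 8 ms rendering fits:
    print(within_budget(1 / 60, 0.005, 0.008, 0.4))   # True
    # A 30 frame/s eye camera alone pushes the pipeline over the image budget:
    print(within_budget(1 / 30, 0.005, 0.008, 0.4))   # False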
Fig. 153a, 153b, and 154 show display optics including a focus distance adjustment module 15360 in the upper optics 14510 and in the projection optics 15410, respectively. The focus distance adjustment module 15360 may provide a fast mechanism for moving the position of the image source relative to the remaining lens elements, thereby changing the focus distance of the displayed image. It is important to recognize that the focus adjustment module can be used with any type of display optics for head-mounted displays that uses a microdisplay at the focal plane of the optics (e.g., wedge waveguides, waveguides with multiple reflective strips, or holographic projection systems, but not laser scanning projection systems, because they are not focused). Because moving the image source relative to the other display optics is a fundamental way to adjust the focus distance, the focus adjustment module is broadly applicable to head-mounted displays.
Fig. 155a, 155b, 156a, 156b, 157a, 157b, 158a, 158b, 159a, and 159b show illustrations of focus adjustment modules 15360 with mechanisms capable of providing fast focus distance adjustment. To make fast focus distance adjustment in a head-mounted display effective, the focus adjustment module 15360 needs to be fast, quiet, and compact; provide approximately 0.5mm of travel; provide guidance to maintain alignment between the image source 15040 and the remaining optics without tilt; be controllable over a range of focus distances; and be low in cost and weight.
In a preferred embodiment, to provide a change in focus distance without changing the size of the displayed image, display optics are provided that are telecentric at the image source. Telecentric display optics provide parallel bundles of light rays, such that the area of the image source imaged by the display optics remains constant regardless of the change in distance between the image source and the remaining optics required to change the focus distance of the displayed image. In some embodiments, the image source is reflective, and the illumination provided by the illumination source may also be telecentric. Telecentric illumination light may be provided by an illumination source that is at least the same size as the image source and provides a wider cone of light, of which only the telecentric portion is reflected by the image source. Thus, display optics that are telecentric at the image source provide an improved viewing experience for augmented reality, particularly when providing rapid changes to the focus distance as the user moves their eyes around the field of view. In this use case scenario, the use of non-telecentric display optics at the image source will result in displayed augmented reality objects that change slightly in size each time the user moves their eyes, which will likely produce nausea. In contrast, by using telecentric display optics, the focus distance can be continuously and comfortably changed as the user moves their eyes around the field of view. Fig. 161 provides an illustration of an example of non-telecentric display optics, where the ray bundles of image light 16150 converge as the image light 16150 travels from image source 15040 toward display optics including prism 16140 with optical power. As a result, if the image source 15040 is moved closer to the lens 16145 and the prism 16140 with optical power in the display optics, the image appears to become smaller when viewed by the user from the position of the eye-box 16155, and vice versa. In contrast, fig. 162 shows a representation of example telecentric optics comprising, for example, a prism 16240 with optical power and a lens 16245, where the ray bundles of image light 16250 are parallel to each other. Thus, as the image source 15040 moves closer to or further from the prism 16240, the image remains the same size in the displayed image as viewed by the user from the eye box.
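The size change that telecentricity eliminates can be quantified: with non-telecentric optics the chief rays leave the image source at a nonzero angle, so moving the source along the axis shifts the imaged edge position by the travel times the tangent of the chief ray angle, while a telecentric design (chief ray angle of zero) shows no change. A minimal sketch with an assumed 5 degree chief ray angle, which is not a figure from the patent:

    import math

    def edge_shift_mm(source_travel_mm, chief_ray_angle_deg):
        # Displacement of the imaged edge when the source moves along the axis.
        return source_travel_mm * math.tan(math.radians(chief_ray_angle_deg))

    print(round(edge_shift_mm(0.5, 5.0), 4))   # 0.0437 mm per edge: visible size change
    print(edge_shift_mm(0.5, 0.0))             # 0.0: telecentric, size stays constant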
Fig. 155a, 155b, 156a, 156b, 157a, 157b, 158a, and 158b illustrate actuators and guide mechanisms positioned between the image source and the remaining optics. Each of figs. 155, 156, 157, and 158 illustrates a different mechanism in two states. In contrast, figs. 159a and 159b show an actuator and guide mechanism positioned between image source 15040 and the top of the housing of the focus adjustment module 15360. Any of the actuators and guide mechanisms shown may be used in either position with certain modifications (not shown). The choice of where to position the actuator and guide mechanism depends on where space is available in the display optics and in the housing of the head-mounted display. If the space for the actuator and guide mechanism is limited in the display optics, the actuator and guide mechanism are positioned above the image source, as shown in figs. 159a and 159b. However, by positioning the actuator and guide mechanism above the image source, the height of the display optics may be substantially increased. Thus, in a preferred embodiment, multiply folded (also referred to as compound folded) display optics are included, so the actuator and guide mechanism can be positioned adjacent to the image source, and as a result the height of the display optics is reduced. Fig. 160 shows a diagram of an example of multiply folded optics as viewed from the eye position, where the optical axis is folded to the side in the upper optics 16010 to reduce the height of the upper optics 16010. Image source 15040 is then positioned to the side of upper optics 16010, and image source 15040 is approximately vertical rather than horizontal. The example folded optics shown in fig. 160 include one or more lenses 16012 and a fold mirror 16013 that redirects image light 15370 from the upper optics 16010 toward a combiner 14520, which combiner 14520 redirects the image light toward an eye-box 15366 and the user's eye. In the folded optics shown in fig. 160, the fold mirror 16013 is a reflective polarizer, such that a backlight 16014 can be positioned behind the fold mirror 16013 to provide P-polarized illumination light 16071, which illuminates a reflective image source, such as an LCOS, in the focus adjustment module 15360. Upon reflecting the illumination light 16071, the image source 15040 changes the polarization state from P to S, thereby providing S-polarized image light 15370, which is reflected by the fold mirror 16013. By using multiply folded optics, the focus adjustment module 15360, including the actuator and guide mechanism, may be positioned to one side of the upper optics 16010, where more space may be available in the frame of the head-mounted display. Alternatively, the fold mirror may be included in a prism, as shown in figs. 161 and 162, which may also include a surface having optical power to further reduce the size of the display optics. As a result, multiply folded display optics provide the advantage that a more compact head-mounted display can be achieved when the display optics include the focus adjustment module 15360.
Fig. 155a and 155b show a representation of a focus adjustment module including a set of wedges 15042 and 15043 as the actuator, where the lower wedge 15043 is moved laterally to move image source 15040 vertically (as shown) in order to change the position of image source 15040 relative to the remaining optics, including lens element 15012 or lens element 15412. Solenoids 15035 and 15036 are provided to act on ferromagnetic cores 15037 and 15038, respectively, wherein the cores 15037 and 15038 are attached to the lower wedge 15043. Since wedges 15042 and 15043 are positioned between image source 15040 and the remaining optics of the display optics, wedges 15042 and 15043 are made with a central window, as shown in figs. 151a and 151b, so that light can pass from the remaining optics to image source 15040. Applying current to the solenoid 15035 will attract the core 15037 and move the lower wedge 15043 to the left, thereby moving the upper wedge 15042 and the attached image source 15040 downward, which reduces the focus distance, as shown in fig. 155a. Similarly, applying current to the solenoid 15036 will attract the core 15038 and move the lower wedge 15043 to the right, thereby moving the upper wedge 15042 and the attached image source 15040 upward, which increases the focus distance, as shown in fig. 155b. Leaf springs 15570 are provided to apply a force against upper wedge 15042 or image source 15040 to help maintain alignment during wedge movement. The leaf springs may also be attached to the housing of focus adjustment module 15360 and to image source 15040 or upper wedge 15042 to prevent lateral movement of the image source during movement of the wedge, thereby providing guidance to the image source during focus adjustment.
Fig. 156a and 156b show an illustration of a focus adjustment module 15360 that includes a pair of bimorph piezoelectric actuators 15675 and 15676 to move image source 15040 for focus adjustment. A bimorph piezoelectric actuator comprises two laminated strips of piezoelectric material arranged such that when a voltage is applied to the two strips, one side of the bimorph contracts and the other side expands, thereby causing the actuator to change from flat to curved. Bimorph piezoelectric actuators are advantageous for use in the focus adjustment module 15360 because they are fast acting and compact, and they can provide much more displacement than piezoelectric stack actuators. In the case of bimorph piezoelectric actuators 15675 and 15676 shown in figs. 156a and 156b, one end is attached to the housing of focus adjustment module 15360 and the other end pushes on a carrier 15677 attached to image source 15040. Fig. 156a shows the flat state of bimorph piezoelectric actuators 15675 and 15676, while fig. 156b shows the curved state of bimorph piezoelectric actuators 15675 and 15676. Carrier 15677 supports image source 15040 around the edges, and the central portion of the carrier is removed to form a window so that light, including illumination light and image light, can pass from image source 15040 to the remaining optics, as previously described herein for wedges 15042 and 15043. When a voltage is applied to the two bimorph piezoelectric actuators 15675 and 15676, the two actuators 15675 and 15676 curl upward, thereby moving the carrier 15677 and attached image source 15040 upward, as shown in fig. 156b, and the focus distance is then increased. If more voltage is applied, bimorph piezoelectric actuators 15675 and 15676 curl more. When the voltage is removed, the bimorph piezoelectric actuators 15675 and 15676 return to the flat state, as shown in fig. 156a, and the focus distance decreases. The actuators are shown arranged to lift opposite corners of the carrier to provide a vertical lifting force. If a faster response is desired when moving from the curved state shown in fig. 156b to the flat state shown in fig. 156a, the voltage applied to the bimorph piezoelectric actuators can be reversed in sign for a short period of time. However, if the reversed voltages are applied for a time sufficient for actuators 15675 and 15676 to reach a steady state, the actuators will bend in the reverse direction, which will cause carrier 15677 and the attached image source 15040 to be lowered slightly. Additionally, as shown in figs. 156a and 156b, a four-bar linkage 15679 has been provided. The four-bar linkage 15679 is attached to a side wall of the housing of the focus adjustment module 15360 and to four points on the carrier 15677. The function of the four-bar linkage 15679 is to provide guidance of the carrier 15677 and attached image source 15040 so that the image source 15040 does not move laterally or tilt relative to the remaining optics, in order to maintain alignment during movement associated with focus adjustment. The four-bar linkage 15679 shown in figs. 156a and 156b is a thin metal or plastic structure with flexible fingers that extend from the side wall attachments to attachment points on carrier 15677. The flexibility of the fingers allows unhindered vertical movement while preventing lateral movement.
The carrier 15677 is designed to provide vertically spaced attachment points, as shown, thereby enabling the fingers of the four-bar linkage 15679 to prevent tilting of the carrier and attached image source during vertical movement. The four-bar linkage 15679 may also be designed as a leaf spring such that a slight downward force is applied to carrier 15677 to ensure that carrier 15677 remains in contact with bimorph piezoelectric actuators 15675 and 15676 during focus adjustment. An advantage of this arrangement of the bimorph piezoelectric actuator is that a large displacement can be provided for greater focus adjustment. In an embodiment, the linkage 15679 may have a stop at the upper position to more accurately stop translation of the carrier 15677 in the upper position. In embodiments, the stops may be otherwise positioned to create an upper boundary for the carrier. In further embodiments, the voltage applied to the bimorph piezoelectric actuator may be reversed to cause the bimorph piezoelectric actuator to bend in opposite directions (not shown) and thereby extend the available displacement range for focus adjustment.
Fig. 157a and 157b show an illustration of another version of a focus adjustment module 15360 that includes bimorph piezoelectric actuators 15781 and 15782. In this case, the lower bimorph piezoelectric actuator 15781 is attached at its middle to the lower surface of the housing of the focus adjustment module 15360, and the upper bimorph piezoelectric actuator 15782 is attached at its middle to the lower surface of the carrier 15677. The ends of the upper and lower bimorph piezoelectric actuators 15782 and 15781 are attached together. Fig. 157a shows the flat state, where no voltage is applied to the bimorph piezoelectric actuators 15781 and 15782. When a voltage is applied to bimorph piezoelectric actuators 15781 and 15782, both of them change to a curved state, which moves carrier 15677 and the image source vertically, thereby increasing the focus distance. As more voltage is applied, the bending of actuators 15781 and 15782 becomes more pronounced, and the movement of carrier 15677 and the change in focus distance are increased. An advantage of this arrangement of the bimorph piezoelectric actuators 15781 and 15782 is that a greater lifting force and faster movement can be provided, although with less displacement. Because the bimorph piezoelectric actuators 15781 and 15782 are arranged back-to-back, they flex in opposite directions when a voltage is applied, thereby doubling the displacement of the carrier for a given voltage. It is possible to use more than two bimorph piezoelectric actuators (e.g., four bimorph piezoelectric actuators) in a stack. As previously described herein, a four-bar linkage is provided to guide the movement of carrier 15677 and attached image source 15040 to prevent lateral movement or tilting during focus adjustment.
Fig. 158a and 158b show an illustration of a focus adjustment module 15360 that includes one or more scissor jack actuators 15883. The scissor jack actuator includes a frame that flexes such that the upper point moves farther upward as the central shaft 15885 shortens. In this way, the frame of the scissor jack actuator 15883 acts as a displacement amplifier, such that the movement of the carrier 15677 is greater than the change in length of the central shaft 15885. Fig. 158a shows the situation where the central shaft 15885 is long, so that the upper point is lower and the carrier 15677 on the scissor jack actuator 15883 is lower, and as a result the focus distance is closer to the user. Fig. 158b shows the situation where the central shaft 15885 is short, so that the upper point is higher and the carrier 15677 located on the scissor jack actuator 15883 is also higher, and as a result the focus distance is further away from the user. The central shaft 15885 may be any of various devices effective to change the distance between the ends of the scissor jack actuator 15883; for example, the central shaft 15885 may be a piezoelectric stack actuator actuated with an applied voltage, or a screw actuated manually by hand rotation or electrically by a motor. In any case, the scissor jack actuator 15883 pushes on carrier 15677 to raise image source 15040, thereby increasing the focus distance. As previously described herein, a four-bar linkage 15679 may be provided to guide the carrier during focus adjustment to preserve alignment of the image source 15040 relative to the remaining optics in the upper optics 14510. Piezoelectric stack actuators can provide very fast and precise movement, so that if a piezoelectric stack actuator is used as the central shaft 15885, the focus adjustment module 15360 can provide very fast and precise focus adjustment.
Fig. 159a and 159b show a graphical representation of a focus adjustment module 15360 having a voice coil motor actuator 15987. In this case, image source 15040 is shown positioned below the actuator and guide mechanism, as previously described herein. Carrier 15977 is attached to image source 15040 to support image source 15040 and provide attachment points for the four-bar linkage 15679. The four-bar linkage 15679 provides guidance of the carrier 15977 and attached image source 15040 during movement associated with focus adjustment. The voice coil motor 15987 has an outer portion attached to the upper surface of the housing of the focus adjustment module 15360 (as shown) and an inner portion attached to the carrier 15977. Fig. 159a shows the relative positions of the components when no voltage is applied to the voice coil motor 15987. As shown in fig. 159a, the inner portion of voice coil motor 15987 is extended so that the carrier is in a lower position, thereby providing a reduced focus distance. Fig. 159b shows the relative positions of the components when a voltage is applied to the voice coil motor 15987. Under these conditions, as shown in fig. 159b, the inner portion of voice coil motor 15987 is retracted so that the carrier is in a raised position, thereby providing an increased focus distance. As more voltage is applied to the voice coil motor 15987, the inner portion of the voice coil motor 15987 is retracted further, thereby providing a greater change in focus distance. A spring (not shown) may be included in the focus adjustment module 15360 to apply a force to the carrier to reduce the time for the carrier to move back to the position shown in fig. 159a when the voltage is removed from the voice coil motor 15987. The spring can also help hold the carrier 15977 in the position shown in fig. 159a to provide a default focus setting when no power is applied to the voice coil motor 15987, thereby providing a low-power mode of operation.
A position measurement device (not shown) may be added to any of the focus adjustment modules 15360 shown in fig. 155a, 155b, 156a, 156b, 157a, 157b, 158a, 158b, 159a, and 159b to measure the relative position of the image sources. The position measurement device may then provide measurements that may be used in a control system for the focus distance (which may be a closed loop control system) to improve the accuracy and repeatability of the focus distance adjustment.
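A closed-loop focus control system of the kind suggested here repeatedly compares the measured image source position against the target position and drives the actuator to null the error. The following minimal proportional-control sketch uses hypothetical sensor and actuator callables as stand-ins for the position measurement device and any of the actuators described above:

    def settle_focus(target_mm, read_position_mm, drive_actuator,
                     gain=0.8, tolerance_mm=0.005, max_steps=50):
        # One closed-loop settle: command steps proportional to the remaining error.
        for _ in range(max_steps):
            error_mm = target_mm - read_position_mm()
            if abs(error_mm) <= tolerance_mm:
                return True          # position (and focus distance) reached
            drive_actuator(gain * error_mm)
        return False

    # Simulated check with an ideal stage that moves exactly as commanded:
    state = {"pos_mm": 0.0}
    ok = settle_focus(0.5,
                      lambda: state["pos_mm"],
                      lambda step: state.update(pos_mm=state["pos_mm"] + step))
    print(ok, round(state["pos_mm"], 3))   # True 0.496 (within tolerance of 0.5)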
In yet another embodiment, the positions of the image sources 15040 in the left and right optics modules may be adjusted in an alignment step to provide a reliable convergence distance. The alignment step includes positioning the chassis 14727 in a fixture aligned with a target located in front of the fixture at the desired convergence distance. Matched images are then displayed on the image sources 15040, and each image source 15040 is moved to align the displayed image with the target viewed through the optics module. The advantage of adjusting the position of image source 15040 in the alignment step is that the effects of variations in the dimensions of chassis 14727, upper optics 14510, and combiner 14520 may be compensated for in a manufacturing environment to provide a reliable convergence distance.
In another embodiment, one or more of the following elements may be connected to provide a removable assembly: a focus shift element, a combiner, and a corrective lens element. This provides a more easily replaceable component, which may be changed when damage occurs, when the use changes, or when the user changes. In particular, it is useful to change the focus shift element and the corrective lens element when changing from a use case in which the vergence viewing distance is longer to one in which it is shorter (and vice versa). In such use cases, one or the other of the vergence viewing distances may lie beyond what the user's eyes can comfortably focus on. For example, if the user is myopic, correction is needed when the vergence viewing distance is longer, and no correction is needed when the vergence viewing distance is shorter.
The inventors have discovered that a less than optimal experience may result when world-locked digital content is displaced from the field of view of the user's head-mounted see-through computer display. For example, when the user turns his head away from the point in the world where the digital content is locked, the digital content shifts toward the side of the field of view. When the user turns his head even further, the content shifts out of the field of view and breaks abruptly at the edge of the field of view. The abruptness of the change in appearance, and the eventual complete loss of the content once the head is rotated far enough, does not create a natural impression of content that is fixed in the real world. Normally, when looking at actual objects in our environment, an object remains visually present, even if only faintly, until we have completely shifted our vision away from it. An object that is displaced to the side of our direct line of sight may be slightly blurred due to the nature of our foveated vision, but it remains present to some extent. In a typical see-through head-mounted display, the field of view has a limited area (e.g., width and height). Typically, people can see through to the environment outside the field of view, and it therefore looks strange when the content abruptly cuts off and eventually disappears from the user's vision while the user is still able to see the environment in which the content was once present and locked.
One aspect of the invention relates to generating a smooth transition for world-locked augmented reality content shifted out of a see-through field of view. In an embodiment, world-locked content is modified to appear less obvious to the user as the content is shifted toward the edge of the field of view. This may take the form of de-focusing, blurring, reducing the resolution, reducing the brightness, reducing the sharpness, reducing the contrast, etc. of the content as it is shifted toward the edge. The content may gradually decrease in appearance as it approaches the edge, such that when it shifts past the edge its appearance is minimal or non-existent, so that it appears to fade away from the user's view. This may work particularly well in systems with a field of view large enough to accommodate sharp content in the middle while leaving edge regions that the user does not rely on heavily. For example, in a system with a 60 degree horizontal field of view, the outer 10 degrees on each side may be used as a transition region where world-locked content is managed to reduce its appearance in preparation for its disappearance from the field of view.
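The transition region management described above can be expressed as a per-frame appearance weighting: content in the central region is rendered at full appearance, and the weight ramps down across the outer transition band. A minimal sketch for the 60 degree field of view with 10 degree transition regions from the example; the linear ramp is one possible choice:

    def appearance_factor(content_angle_deg, half_fov_deg=30.0, transition_deg=10.0):
        # 1.0 in the central region, ramping linearly to 0.0 at the edge.
        # content_angle_deg is the content's angle from the center of the field of view.
        a = abs(content_angle_deg)
        inner_edge_deg = half_fov_deg - transition_deg
        if a <= inner_edge_deg:
            return 1.0
        if a >= half_fov_deg:
            return 0.0
        return (half_fov_deg - a) / transition_deg

    # The factor can scale brightness, contrast, sharpness, or resolution:
    for angle in (0, 20, 25, 29, 31):
        print(angle, appearance_factor(angle))   # 1.0, 1.0, 0.5, 0.1, 0.0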
In one embodiment of a system for generating a smooth transition of world-locked augmented reality content shifted out of a see-through field of view, a head-mounted see-through display including a see-through optical element mounted such that it is positioned in front of a user's eye when the head-mounted see-through display is worn by the user further includes a processor adapted to present digital content in the field of view on the see-through optical element. The digital content may have a location within the field of view that depends on the location in the surrounding environment. The processor may be further adapted to modify the appearance of the content as the content approaches the edges of the field of view such that the content appears to disappear as the content approaches the edges of the field of view. The appearance modification may be a change in content brightness, a change in content contrast, a change in content sharpness, or a change in content resolution. The processor may include a display driver or an application processor. The processor may also be adapted to generate a secondary field of view (e.g., by an additional optical system as described herein) through which the user views the presented digital content and through which the user sees the surroundings, the processor further adapted to transition the content from the field of view to the secondary field of view. In this further adaptation, the appearance of the content in the secondary field of view may be diminished compared to the appearance of the content in the field of view. In this further adaptation, the secondary field of view may have a lower resolution than the resolution of the field of view and may be generated by one of: reflecting the image light onto a combiner that directs the image light directly to the user's eye or towards a final partial mirror that reflects the image light towards the user's eye, an OLED on the combiner that projects light onto the combiner, an LED array on the combiner, or an edge-emitting LCD on the combiner. In this further adaptation, the secondary field of view may be presented by a see-through panel positioned directly in front of the eyes of the user, wherein the see-through panel is mounted on the combiner and/or vertically. The see-through panel may be an OLED or an edge-lit LCD. The processor may be further adapted to predict when the content will approach an edge of the field of view and base the appearance transition at least in part on the prediction. The prediction may be based at least in part on an eye image.
In an embodiment, the prediction that content will approach and/or pass the edge of the field of view may be determined based on: a compass in the head-mounted computer (e.g., monitoring the compass heading compared to the world-locked position of the content), the movement of the content within the field of view (e.g., monitoring where the content is within the field of view and the direction and speed with which it moves toward the edge), or eye position (e.g., monitoring eye position and movement as an indication of how the head-mounted computer may move). A sketch of the compass-based approach follows.
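One way to implement the compass-based prediction is to estimate the time until the world-locked content crosses the field-of-view edge from the current angular offset and the head yaw rate, and to begin the appearance transition when that time falls below a threshold. A minimal sketch; the function name and the numeric choices are assumptions:

    def time_to_exit_s(content_bearing_deg, heading_deg, yaw_rate_deg_s,
                       half_fov_deg=30.0):
        # Seconds until world-locked content drifts past the field-of-view edge,
        # or None if the current head rotation is not carrying it outward.
        offset_deg = ((content_bearing_deg - heading_deg + 180.0) % 360.0) - 180.0
        # Content appears to move opposite to head rotation, so it drifts outward
        # only when the offset and the yaw rate have opposite signs:
        if offset_deg * yaw_rate_deg_s >= 0.0 or abs(yaw_rate_deg_s) < 1e-6:
            return None
        margin_deg = max(half_fov_deg - abs(offset_deg), 0.0)
        return margin_deg / abs(yaw_rate_deg_s)

    # Content locked at bearing 10 degrees, head at 0 degrees turning away at 40 deg/s:
    print(time_to_exit_s(10.0, 0.0, -40.0))   # 0.5 s: start the fade transition now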
In one embodiment of a system for prediction-based transition of world-locked content, a head-mounted see-through display may include a see-through optical element mounted such that it is positioned in front of a user's eye when the head-mounted see-through display is worn by the user, and a processor adapted to present digital content in a field of view on the see-through optical element, wherein the digital content has a position within the field of view that depends on a position in a surrounding environment. The processor may be further adapted to predict when the digital content will shift out of the field of view due to a change in position of the head-mounted see-through display, and modify an appearance of the content as the content approaches an edge of the field of view such that the content appears to disappear as the content approaches the edge of the field of view. The prediction may be based on a compass orientation indicating a forward facing direction of the head mounted see-through display or tracked eye movement of the user, where the tracked eye movement indicates that the user is to turn the user's head. The appearance modification may be a change in brightness of the content, a change in contrast of the content, a change in sharpness of the content, or a change in resolution of the content. The processor may include a display driver or an application processor. The processor may be further adapted to generate a secondary field of view through which the user views the presented digital content and sees the surroundings, the processor being further adapted to transition the content from the field of view to the secondary field of view. In this further adaptation, the appearance of the content in the secondary field of view may be diminished compared to the appearance of the content in the field of view. In this further adaptation, the secondary field of view may have a lower resolution than the resolution of the field of view and may be generated by one of: reflecting the image light onto a combiner that directs the image light directly to the user's eye or towards a final partial mirror that reflects the image light towards the user's eye, an OLED on the combiner that projects light onto the combiner, an LED array on the combiner, or an edge-emitting LCD on the combiner. In this further adaptation, the secondary field of view may be presented by a see-through panel positioned directly in front of the eyes of the user, wherein the see-through panel is mounted on the combiner and/or vertically. The see-through panel may be an OLED or an edge-lit LCD. The processor may be further adapted to predict when the content will approach an edge of the field of view and base the appearance transition at least in part on the prediction. The prediction may be based at least in part on an eye image.
Fig. 163A illustrates an abrupt change in the appearance of content 16302 in the field of view of a see-through display. Fig. 163B illustrates a managed appearance system in which the content is reduced in appearance as it enters a transition region 16304 near the edge of the field of view.
One aspect of the invention relates to a hybrid see-through display system in which a high quality display system presents content to a field of view centered on the user's direct forward line of sight and another, lower quality system is used to present content outside the direct forward line of sight. The content appearance transition may then be managed partly in the central field of view and partly in the extended field of view. The extended field of view may also have more than one section, so that image content may be presented in the portion near the edge and lighting effects in the more distant portion.
To illustrate, a front-lit reflective display, an emissive display, or a holographic display (e.g., as described herein) may be used to present high quality content in a 40 degree field of view, and another display system may be used to present content or visually perceived effects from the edge of the 40 degree field (or overlapping it, or with a gap) outward to some other angle (e.g., 70 degrees). In an embodiment, the external field of view (often referred to as an "external display") may operate through an optical system in the upper module, proximate to the main field of view display system, and the optical path may include a fold (e.g., as generally described herein). In other embodiments, the external display may be a guided system in which, for example, image light or effect light is generated and guided to a combiner. For example, the display may be mounted above the combiner and arranged to direct the lighting effect directly to the combiner.
In an embodiment, the external display may be included within the main display. For example, the lens system in the upper module may be adapted to generate high quality content in the middle but then lower quality towards the edges of the larger field of view. In this system, there may be only one display (e.g., LCoS, OLED, DLP, etc.) and the content towards the edges of the display may be managed to achieve appearance transitions.
Fig. 164 illustrates a hybrid field of view including a centered field of view 16402 for presenting sharp and transitional content, and an extended field of view 16404 positioned at, near, or overlapping an edge of the centered field of view 16402 and adapted to provide lower appearance content and/or lighting effects that facilitate the transition of world-locked content as it is displaced out of the centered field of view 16402.
Fig. 165 illustrates a hybrid display system in which a primary, centered field of view is generated by optics in upper module 16502 (e.g., as described elsewhere herein) and an extended field of view is generated by display system 16504 mounted above the combiner and providing image content and/or lighting effects in the extended area. In an embodiment, extended field of view display 16504 may include an OLED, an edge-lit LCD, LEDs, or another display, and the display may include microlenses, macro lenses, or other optics to properly align and focus the light. In embodiments, the extended field of view display may include a single illuminating element (such as an LED), a row of elements, an array of elements, or the like.
In still other embodiments, the extended field of view region may be created by mounting a see-through display on the combiner. For example, a see-through OLED display, an edge-lit LCD, or the like may be mounted in the extended field of view region and controlled to produce transitional images and/or lighting effects.
In an embodiment, the head-mounted see-through display may be adapted to transition the content to an extended FOV with reduced display resolution. The head-mounted see-through display may include a see-through optical element mounted such that it is positioned in front of the user's eyes when the head-mounted see-through display is worn by a user, and a processor adapted to present digital content in a primary field of view on the see-through optical element in which the user views the presented digital content and through which the user sees the surroundings, the processor further adapted to present digital content in an extended field of view in which the user views the presented digital content and through which the user sees the surroundings. The primary field of view may have a higher resolution than the extended field of view, and the processor may be further adapted to present world-locked digital content in the primary field of view and to transition presentation of the world-locked digital content to the extended field of view as the head-mounted display changes position such that the world-locked digital content shifts out of the primary field of view. The processor may include a display driver or an application processor. The extended field of view may have a resolution low enough that the content appears significantly blurred compared to content presented in the primary field of view. The extended field of view may be generated by: reflecting image light onto the combiner, which directs the image light directly to the user's eye; reflecting image light onto the combiner, which directs the image light toward a final partial mirror that reflects the image light to the user's eye; an OLED that projects light onto the combiner; an LED array that projects light onto the combiner; an edge-lit LCD that projects light onto the combiner; or a see-through panel positioned directly in front of the user's eye. The panel may be mounted on the combiner or vertically, and may be an OLED or an edge-lit LCD. The processor may be further adapted to predict when the content will approach an edge of the field of view and base the appearance transition at least in part on the prediction. The prediction may be based at least in part on an eye image.
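A minimal sketch of the handoff logic described above, assuming the example geometry of a 40 degree primary field of view nested in a 70 degree extended field of view (the function names, angular limits, and pooling factor are illustrative assumptions):

```python
import numpy as np

def assign_display(offset_deg, primary_half=20.0, extended_half=35.0):
    """Route world-locked content by its angular offset from the line of
    sight: full resolution in the primary FOV, reduced resolution in the
    extended FOV, dropped once it leaves both."""
    a = abs(offset_deg)
    if a <= primary_half:
        return "primary"
    return "extended" if a <= extended_half else None

def render_extended(bitmap, factor=8):
    """Crude resolution reduction for the extended FOV: average-pool a 2-D
    bitmap and re-expand it, mimicking the significant blur relative to the
    primary field of view."""
    h, w = bitmap.shape
    bitmap = bitmap[:h - h % factor, :w - w % factor]
    small = bitmap.reshape(h // factor, factor,
                           w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
```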
Fig. 166A-166D illustrate examples of extended display, or extended image content optics, configurations. As illustrated, the extended display configuration may be adapted to produce extended content and/or lighting effects around each side of the central display, on multiple sides of the central display, or on one side of the central display.
Fig. 167 illustrates another optical system using a hybrid optical system including a main display optical system 16502 and an extended field of view optical system 16504. In this embodiment, both optical systems project image light, extended image light, and/or illumination effects to a combiner that reflects the light to a front curved partial mirror, which in turn reflects the light toward the wearer's eye.
In yet another embodiment, the extended field of view display may be provided by a see-through display positioned in front of the user's eyes, such that the user views directly through the see-through display. For example, a see-through OLED display or an edge-lit transparent LCD display may be positioned on either side of the combiner (e.g., as illustrated in Figs. 168C and 168E), or on either side of a waveguide or other display system (e.g., as illustrated in Figs. 8a, 8b, 8c, 141a, 141b, 142a, 142b, 143, and 144).
In an embodiment, the head-mounted see-through display may be adapted to provide an extended FOV for large content. The head-mounted see-through display may include a see-through optical element mounted such that it is positioned in front of the user's eyes when the head-mounted see-through display is worn by a user, and a processor adapted to present digital content in a main field of view on the see-through optical element in which the user views the presented digital content and through which the user sees the surroundings, the processor adapted to present digital content in an extended field of view in which the user views the presented digital content and through which the user sees the surroundings. The main field of view may have a higher resolution than the extended field of view. The processor may be further adapted to present a first portion of the digital content in the main field of view and a second portion of the digital content in the extended field of view. For example, when the digital content is too large to fit in the main field of view, the processor may generate a soft transition between the first portion of the digital content in the main field of view and the second portion of the digital content in the extended field of view so that the content does not appear abruptly truncated at the edges of the main field of view. The processor may be adapted to generate a soft appearance toward the edges of the main field of view. The processor may modify how pixels toward the display edge render the content. The head-mounted display may also include a display driver that modifies how pixels toward the edge of the head-mounted display render content. The head-mounted display may have pixels toward its edge that render content differently than pixels toward its center portion; pixels toward the edge may have less gain than pixels toward the center portion. Pixels toward the edges of the main field of view may be digitally altered by a content transition algorithm. The extended field of view may be generated by: reflecting image light onto the combiner, which directs the image light directly to the user's eye; reflecting image light onto the combiner, which directs the image light toward a final partial mirror that reflects the image light to the user's eye; an OLED that projects light onto the combiner; an LED array that projects light onto the combiner; an edge-lit LCD that projects light onto the combiner; or a see-through panel positioned directly in front of the user's eye. The panel may be mounted on the combiner or vertically. The see-through panel may be an OLED or an edge-lit LCD. The processor may be further adapted to predict when the content will approach an edge of the field of view and base the appearance transition at least in part on the prediction. The prediction may be based at least in part on an eye image.
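One way to realize the reduced edge gain described above is a per-column gain ramp applied to the frame buffer. The raised-cosine shape and ramp width below are illustrative assumptions; the description only requires that edge pixels render with less gain than central pixels:

```python
import numpy as np

def edge_gain_profile(width_px, rolloff_px=64):
    """Unity gain in the center of the panel, ramping smoothly to zero at
    the left and right edges so content fades instead of truncating."""
    gain = np.ones(width_px)
    ramp = 0.5 - 0.5 * np.cos(np.linspace(0.0, np.pi, rolloff_px))
    gain[:rolloff_px] = ramp          # left edge ramps up from zero
    gain[-rolloff_px:] = ramp[::-1]   # right edge ramps down to zero
    return gain

frame = np.ones((720, 1280))          # stand-in frame buffer
frame_soft = frame * edge_gain_profile(1280)[None, :]
```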
In an embodiment, the head-mounted see-through display may be adapted to adjust the content for transition to the extended FOV. The head-mounted see-through display may include a see-through optical element mounted such that it is positioned in front of the user's eyes when the head-mounted see-through display is worn by the user, and a processor adapted to present digital content in a primary field of view in which the user views the presented digital content and through which the user sees the surrounding environment. The processor may be further adapted to present digital content in an extended field of view in which the user views the presented digital content and through which the user sees the surrounding environment. The primary field of view may have a higher resolution than the extended field of view. The processor may be further adapted to present the digital content in the primary field of view and to reduce the appearance of the content as the content approaches an edge of the primary field of view. The processor may be further adapted to further reduce the appearance of the content when presented in the extended field of view. As the content approaches the edges of the extended field of view, the processor may gradually reduce the appearance of the content in the extended field of view, such that when the content is at the edge of the extended field of view, the content may be substantially imperceptible. The appearance reduction may be a reduction in content brightness, a reduction in content contrast, a reduction in content sharpness, or a reduction in content resolution. The extended field of view may be generated by: reflecting image light onto the combiner, which directs the image light directly to the user's eye; reflecting image light onto the combiner, which directs the image light toward a final partial mirror that reflects the image light to the user's eye; an OLED that projects light onto the combiner; an LED array that projects light onto the combiner; an edge-lit LCD that projects light onto the combiner; or a see-through panel positioned directly in front of the user's eye. The panel may be mounted on the combiner or vertically. The see-through panel may be an OLED or an edge-lit LCD. The processor may be further adapted to predict when the content will approach an edge of the field of view and base the appearance transition at least in part on the prediction. The prediction may be based at least in part on an eye image.
Fig. 168A-168E illustrate various embodiments in which a see-through display panel 16802 (e.g., an OLED, edge-lit transparent LCD display) is positioned directly in front of a user's eye in a head-worn computer to provide an extended and/or overlapping field of view in a hybrid display system. Fig. 168A illustrates a system in which an extended field of view is provided by a transparent display panel 16802 mounted on or near the combiner optics. In this embodiment, the see-through display panel 16802 is mounted on or near the back of the combiner so that it does not interfere with the central display system, which reflects image light off the combiner directly to the user's eyes. Fig. 168B illustrates a hybrid display system in which the see-through extended field of view display panel 16802 is positioned vertically proximate to the combiner. FIG. 168C illustrates a hybrid display system in which a see-through extended field of view display panel 16802 is mounted vertically in front of a curved partial mirror of a main field of view display.
Figs. 168D and 168E illustrate the hybrid display system from the rear (i.e., from the user's perspective). Fig. 168D illustrates a system in which the see-through extended field of view display panel 16802 surrounds the main field of view. Fig. 168E illustrates a system in which the extended field of view see-through display panel 16802 is on one side of the main field of view display system. It should be understood that the inventors contemplate that the extended field of view display panel may be configured in many different ways to provide extension on one or more sides of the main field of view, and to provide extension in a balanced configuration (i.e., extending similarly on more than one side) or an unbalanced configuration (i.e., extending more or less on one or more sides). It should also be understood that the inventors contemplate that the extended field of view may overlap the main field of view, appear adjacent to the main field of view, have a gap between the main field of view and the extended field of view, etc., depending on the particular needs of the situation.
While the configurations described herein with respect to the extended field of view have illustrated systems that create smooth transitions in world-locked content, these configurations may further be used to create additional lighting and/or shading effects for content displayed in the main field of view. For example, in a configuration in which the extended field of view see-through display overlaps the main field of view, the extended field of view display may provide a background for content displayed in the main field of view. The background may be, for example, a lighting effect behind or near the content that provides context for the content. The background may also be a non-lighting effect in which pixels of the see-through display (e.g., pixels of a see-through LCD) are changed to be opaque or less transparent to provide a dark background behind or adjacent to the content (e.g., to create the appearance of a shadow). In such embodiments, the extended field of view system may overlap the primary field of view, and may or may not extend beyond the edges of the primary field of view.
In an embodiment, the head-mounted see-through display may be adapted to provide a hybrid multi-FOV display. In an aspect, an optical system of a head-mounted see-through display may include primary image content optics for producing central eye image content, extended image content optics for producing off-center eye image content, and a combiner positioned to present content to a user and through which the user views the ambient environment, wherein each of the primary image content optics and the extended image content optics is positioned to project its respective image light to the combiner, which reflects the respective image light to an eye of the user. The combiner may directly reflect the respective image light to the user's eye, or may indirectly reflect the respective image light to the user's eye by reflecting it toward a curved partial mirror. The central eye image content and the off-center eye image content may pass through at least one fold in the optical system before reflecting off the combiner. The extended image content optics may be mounted directly above the combiner such that the off-center eye image content is projected directly onto the combiner. The optical system may further include a processor adapted to coordinate smooth vanishing transitions of world-locked content as the content moves from the field of view of the primary image content optics to the field of view of the extended image content optics and on to the edges of the field of view of the extended image content optics. The extended image content optics may be an OLED, an LCD display, or an LED array, and may be linear, two-dimensional, or curved. The extended image content optics may generate an illumination effect corresponding to the image content. The extended image content optics may include a lens system to modify the projection, and the lens system may comprise a microlens array.
In an embodiment, the head-mounted see-through display may be a hybrid display with see-through panels. In an aspect, the head-mounted see-through display may include a primary image content display adapted to produce image light and project the image light in a direction to be reflected by the see-through combiner so that it reaches the user's eyes, and a secondary image content display, wherein the secondary image content display is a see-through panel positioned directly in front of the user's eyes and used to enhance the visual experience delivered by the primary image content display. The secondary display may provide content or effects in an area outside the primary field of view produced by the primary image display; the outer region may be adjacent to, surround, or overlap the primary field of view. The secondary display may also provide content or effects in a region that overlaps the primary field of view produced by the primary image display. The secondary display may be mounted on a combiner adapted to reflect image light to the user's eye, or may be mounted vertically out of the image light optical path established by the primary image display. The head-mounted display may further comprise a processor adapted to track the user's eye position, the processor further adapted to alter the position of content presented in the secondary display. The altered position may substantially maintain alignment of the primary image display and the secondary image display from the perspective of the user as the user's eyes move. The see-through panel may be an OLED or an edge-lit LCD.
In an embodiment, the head-mounted see-through display may be adapted to mix types of content. In an aspect, a head-mounted see-through display may include a field of view generated by an image display, wherein a user views digital content in the field of view and sees through the field of view to view a surrounding environment, and a processor adapted to generate two types of content, wherein the two types of content are presented in the field of view. The first type of content may be world-locked content having a field-of-view location that depends on a location in the surrounding environment, wherein the appearance of the first type of content diminishes as it approaches an edge of the field of view. The second type of content may not be world-locked, wherein the second type of content maintains a substantially constant appearance as it approaches the edges of the field of view. The diminished appearance may include a reduction in resolution, brightness, or contrast, and may be adjusted by a display driver, by an application processor, or by altering pixels of the display generating the field of view. The head-mounted display may further include a secondary field of view generated by the image display in which the user views the presented digital content and through which the user sees the surroundings, the processor being further adapted to transition the content from the field of view to the secondary field of view. The appearance of the content in the secondary field of view may be diminished compared to the appearance of the content in the field of view. The secondary field of view may have a lower resolution than the resolution of the field of view. The secondary field of view may be generated by: reflecting image light onto a combiner that directs the image light directly to the user's eye; reflecting image light onto a combiner that directs the image light toward a final partial mirror that reflects the image light to the user's eye; an OLED that projects light onto the combiner; an LED array that projects light onto the combiner; an edge-lit LCD that projects light onto the combiner; or a see-through panel positioned directly in front of the user's eye. The panels may be mounted on the combiner or vertically. The see-through panel may be an OLED or an edge-lit LCD. The processor may be further adapted to predict when the content will approach an edge of the field of view and base the appearance transition at least in part on the prediction. The prediction may be based at least in part on an eye image.
In an embodiment, the head mounted see-through display may be adapted to adjust the FOV alignment. The head-mounted see-through display may include a hybrid optical system adapted to generate a primary see-through field of view for presenting content at a high resolution and a secondary see-through field of view for presenting content at a lower resolution, wherein the primary and secondary fields of view are presented in close proximity to each other, a processor adapted to adjust a relative proximity of the primary and secondary fields of view, and an eye position detection system adapted to detect a position of a user's eye, wherein the processor adjusts the relative proximity of the primary and secondary fields of view based on the position of the user's eye. The secondary field of view may be produced on a see-through OLED panel positioned directly in front of the user's eyes, on a see-through edge-lit LCD panel positioned directly in front of the user's eyes, or on a see-through combiner positioned directly in front of the user's eyes. The relative proximity may be a horizontal proximity or a vertical proximity. Relative proximity may define a measure of overlap between the primary and secondary fields of view or a measure of separation between the primary and secondary fields of view. The eye position detection system may image the eye from a perspective substantially in front of the eye with a reflection off the see-through optics in a region including the primary field of view or with a reflection off the see-through optics in a region including the secondary field of view.
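The proximity adjustment described above reduces to shifting the secondary-panel content to compensate for parallax between the near see-through panel and the more distant primary image as the eye moves. A minimal sketch; the scale factor and sign convention would have to be calibrated for the actual panel-and-eye geometry, and both are assumptions here:

```python
def align_secondary_fov(eye_offset_mm, base_offset_px, px_per_mm=3.0):
    """Shift secondary-FOV content as the eye moves laterally so that, from
    the user's perspective, the primary and secondary fields of view keep
    their intended relative proximity (overlap or separation).

    eye_offset_mm: lateral eye displacement reported by the eye position
    detection system; px_per_mm: assumed calibration constant mapping eye
    motion to panel pixels (sign depends on whether the panel sits nearer
    or farther than the primary image plane).
    """
    return base_offset_px - eye_offset_mm * px_per_mm
```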
When using a head-mounted display (HMD) (e.g., as part of HWC 102) for purposes such as augmented reality imaging, it is desirable to provide a wide field of view (e.g., 60 degrees). However, viewing images with a head-mounted display is different from viewing images on a screen that is rigidly mounted in the environment (e.g., a wall-mounted television or cinema screen). In the case of a head-mounted display, as the user moves their head, the head-mounted display and its associated display field of view also move in relation to the surrounding environment. This makes it difficult for a user of the HMD to view the edges or corners of an image displayed with a wide field of view, because head movement does not assist the user; only eye movement can be used to view the corners of the image. To improve the viewing experience when using an HMD to view an image displayed with a wide field of view, the relationship between head movement and eye movement used by a person when viewing the surrounding environment should be substantially replicated. For example, when viewing an image with a wide field of view on a rigidly mounted screen (such as looking at the edge of a movie screen in a movie theater), the viewer will normally rotate their head (at least slightly), as opposed to just moving their eyes to the edge. The inventors have found that some accommodation must be made to provide comfortable and intuitive viewing of the area toward the outer edge of the wide field of view in an HMD system. In embodiments, the content displayed in the wide field of view may not necessarily be world-locked (i.e., where the location of the content in the field of view depends on the location of objects in the environment such that the content appears to the user to be positionally connected to the environment), but may still include a process of shifting the location of the rendered content based on the position or motion of the user's eyes or head.
Because the head-mounted display is worn on the head of the user, compactness is important to provide a comfortable viewing experience. Compact optical systems typically include short focal length optics with a low f# to reduce physical size. Optics with these characteristics typically require a wide cone angle of light from the image source, where a wide cone angle is associated with an image source that emits image light from its front surface, as for example in a small display or microdisplay such as an OLED, backlit LCD, etc. These displays may emit unpolarized or polarized image light. The optical system receives image light from the image source and then manipulates the image light to form a converging cone of image light that forms an image at the user's eye with an associated wide field of view. To enable a user to interact with the displayed image and the surroundings simultaneously, it is advantageous to provide an undistorted and bright see-through view of the surroundings along with a bright and sharp displayed image. However, providing an undistorted and bright see-through view and a bright and sharp displayed image can be conflicting requirements, especially when providing a wide field of view image.
For the purpose of viewing augmented reality imagery, it is desirable to provide a wide field of view of 50 degrees or more. However, the design of compact optics with a wide field of view suitable for use in compact head-mounted displays can be challenging. This is further complicated by the fact that the human eye has high resolution capability only in a very narrow portion of the field of view, called the fovea, and much lower resolution capability at the periphery of the field of view. To view the entire area of a high resolution image, a person must move their eyes over a wide field of view.
The inventors have found that there is a need for an optical system that provides high transparency to the surrounding environment to provide an undistorted and bright view of the surrounding environment while also displaying a bright and sharp image over a wide display field of view. To provide a comfortable viewing experience, the optical system should take into account how the user moves his eyes and his head to view the environment. This is particularly important when the user is viewing an augmented reality image.
Systems and methods in accordance with the principles of the present invention provide an HMD that displays an image with a wide field of view overlaid onto a see-through view of the surrounding environment, with an improved see-through view and a high contrast displayed image. An optical system is provided that includes: upper optics, which include an emissive image source (e.g., OLED, backlit LCD, etc.), one or more lenses, and a stray light trap; and non-polarizing lower optics, which include a planar angled beam splitter and a curved partially reflecting mirror. The emissive image source provides image light that includes one or more narrow spectral bands of image light, wherein one or more of the reflective surfaces on the beam splitter and the curved partially reflecting mirror are treated to reflect a majority of incident light within the narrow spectral bands and transmit a majority of incident light across the rest of the visible spectrum, thereby providing a bright displayed image and a bright see-through view of the surrounding environment (e.g., using a tri-color notch mirror on the beam splitter).
A stray light trap is also provided to enable higher contrast images to be displayed consistent with a highly transmissive view of the surrounding environment. Stray light may come from various sources, including: see-through light from the surrounding environment; image light that has been reflected back into the optics by the curved partial mirror; or light from below that has passed through the beam splitter. By capturing this stray light, the contrast of the displayed image seen by the user is greatly improved.
A display mode of operation is also provided for improved viewing of wide field of view images, wherein the display image is laterally shifted within the display field of view in correspondence with movement of the user's head. Wherein a lateral shift of the display image is triggered by detecting eye movement followed by head movement in the same direction. The display image is then laterally displaced in correspondence with and in the opposite direction to the subsequent head movement. The purpose of this mode is to enable the user to view the peripheral portion of the image without having to move their eyes to the full extent of the wide display field of view. Whereby the user views a wide field of view of the displayed image through a combination of eye movement and head movement for a more comfortable viewing experience.
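A sketch of the display mode just described, with illustrative thresholds and gain (none of these values come from the patent): a fast eye movement followed by head movement in the same direction triggers a lateral image shift opposite to the head motion, and the shift relaxes back toward center when the trigger condition ends:

```python
def update_image_shift(eye_rate_dps, head_rate_dps, shift_deg, dt,
                       eye_trigger=50.0, head_trigger=20.0, gain=0.5,
                       max_shift=10.0):
    """Lateral shift (degrees) of the displayed image within the display
    field of view, updated once per frame of duration dt seconds."""
    triggered = (abs(eye_rate_dps) > eye_trigger and
                 abs(head_rate_dps) > head_trigger and
                 eye_rate_dps * head_rate_dps > 0)     # same direction
    if triggered:
        shift_deg -= gain * head_rate_dps * dt         # oppose head motion
    else:
        shift_deg *= 0.9                               # relax back to center
    return max(-max_shift, min(max_shift, shift_deg))
```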
Systems and methods in accordance with the principles of the present invention provide a head mounted display having a highly transmissive see-through view of the surrounding environment and a high contrast display image overlaid onto the see-through view of the surrounding environment. In this way, the systems and methods provide a head mounted display that is well suited for use with augmented reality imagery because the user is provided with a bright and sharp display image while still being able to easily view the surrounding environment. The system and method also provide a wide field of view having a sharpness corresponding to the acuity distribution of the human eye when typical eye movements and head movements are taken into account. Wherein the wide field of view head mounted display may provide a display field of view of, for example, at least +/-25 degrees (50 degree included angle). Furthermore, the compact optics are provided with a reduced thickness to improve the compact form factor of the head mounted display. An operation mode is provided that takes into account viewing conditions of a head mounted display in which the display is attached to the head of a user.
FIG. 169 shows a cross-sectional illustration of an example optical assembly 16900 for a head-mounted display. The optical assembly 16900 includes: upper optics 16903, which include an emissive image source 16910, one or more lenses 16920, and a light trap 16930; and lower optics 16907, which include an angled beam splitter 16950 and a curved partially reflecting mirror 16960. Emissive image source 16910 provides image light 16940 with image content that is optically manipulated by lens 16920 and curved partial mirror 16960 to form a wide field of view that is presented to the user's eye in eyebox 16970, wherein the eyebox is defined as the area in which the user's eye can see the displayed image. The optics are folded to make the optical assembly 16900 more compact, so that the optics have a first optical axis 16946 extending perpendicularly from the emissive image source 16910. Angled beam splitter 16950 redirects a portion of image light 16940 by reflection such that image light 16940 passes out along second optical axis 16943. Curved partial mirror 16960 reflects a portion of image light 16940 such that it passes back along second optical axis 16943 and toward eyebox 16970. At the same time, scene light 16973 from the ambient environment is transmitted by curved partially reflecting mirror 16960 and angled beam splitter 16950 to provide a see-through view of the ambient environment to eyebox 16970. Thus, the curved partial mirror 16960 acts as a combiner in which the user sees the displayed image provided by image light 16940 superimposed onto the see-through view of the ambient environment provided by scene light 16973.
Emissive image source 16910 can be any type of light emitting display that does not require the application of supplemental light (e.g., a front light as described elsewhere herein) within upper optics 16903, including: OLEDs, backlit LCDs, micro-sized LED arrays, laser diode arrays, edge-lit LCDs, and plasma displays. Typically, emissive displays provide image light in narrow wavelength bands within the visible range. For example, for a full color display, the bands may include red, green, and blue bands (e.g., centered at approximately 615 nm, 535 nm, and 470 nm) that each have a narrow full width at half maximum (FWHM) bandwidth. Further, emissive image source 16910 provides a wide cone of image light (e.g., 100 degrees or more). There are several advantages associated with using an emissive image source 16910 with a wide cone angle: the optical system can be designed with a shorter focal length and a faster f# (e.g., 2.5 or faster), which enables the optics to be much more compact. Furthermore, by eliminating the need for an illumination system to apply light to the front surface of an image source, as is typically required for a reflective image source (such as LCOS or DLP), the overall size of the upper optics can be greatly reduced.
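The compactness argument can be made concrete with the thin-lens field-of-view relation FOV = 2*atan(w / 2f): for a fixed panel width, a shorter focal length yields both a wider display field of view and, at a fixed f-number, a smaller aperture. The panel width and focal lengths below are illustrative numbers, not values from the patent:

```python
import math

w_mm = 12.0   # assumed width of the emissive microdisplay panel
for f_mm in (20.0, 14.0, 10.0):
    fov_deg = 2 * math.degrees(math.atan(w_mm / (2 * f_mm)))
    aperture_mm = f_mm / 2.5   # aperture diameter at the f/2.5 cited above
    print(f"f = {f_mm:4.1f} mm -> FOV = {fov_deg:4.1f} deg, "
          f"aperture at f/2.5 = {aperture_mm:4.1f} mm")
```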
In an embodiment, to provide a highly transmissive (e.g., greater than 50% transmission of scene light to the eye) see-through view of the surrounding environment, the lower optics are of a non-polarizing design, wherein the optical surfaces allow a portion of unpolarized visible light to be transmitted. This avoids the more than 50% light loss that occurs when an absorptive polarizer or a reflective polarizer is used in transmission along the optical path of the scene light 16973. Instead, the reflective surfaces on angled beam splitter 16950 and curved partially reflecting mirror 16960 are treated to be partially reflective. The partial reflection treatment may be a base partial mirror having a relatively uniform level of reflectivity across the entire visible range, or it may be a notch mirror that provides a higher level of reflectivity in one or more narrow wavelength bands selected to match the output bands of the emissive image source, and a higher level of transmissivity at wavelengths between the narrow bands (e.g., as described elsewhere herein). The partial reflection treatment may be a coating (such as a multilayer coating), a phase-matched nanostructure, or a film (such as a multilayer film or a coated film) having partial mirror properties or notch mirror properties.
By using non-polarizing lower optics 16907 in the portion of the optics where the see-through view of the surrounding environment is provided, there is an added benefit: chromatic effects are avoided when viewing polarized light sources in the environment, such as a liquid crystal television or computer monitor, or polarized natural sources such as clouds and reflections, which could otherwise be very distracting to the user. These chromatic effects typically take the form of a rainbow pattern with bright colors that can be very distracting to the head-worn experience. They are caused by interference between any polarizer or circular polarizer present in the see-through portion of the optics and the polarized light of the polarized source. As a result, the described systems and methods provide non-polarizing optics in the see-through portion of the optics to enable a user to view a polarized light source (such as a liquid crystal computer monitor) while wearing the head-mounted display without being exposed to a rainbow pattern.
With a highly transmissive see-through view of the surrounding environment, a high level of scene light 16973 passes through the lower optics en route to the eyebox 16970. This creates the possibility of a loss of contrast in the displayed image due to stray light, both from the portion of scene light 16973 reflected back toward the emissive image source 16910 by angled beam splitter 16950, and from the portion of image light 16940 reflected back toward the emissive image source 16910 by angled beam splitter 16950. The combined stray light from these portions of image light 16940 and scene light 16973 is scattered off the sidewalls in the upper optics 16903 and reflected by the surfaces of the emissive image source 16910, such that the combined stray light adds to the image light 16940 presented to the eyebox 16970 for viewing by the user. Since this stray light has no image content, the net effect is that the contrast of the displayed image is reduced. To reduce stray light from these two sources, a light trap 16930 is provided.
Fig. 170 shows an illustration of light trap 16930 operating to reduce stray light. Light trap 16930 includes a sandwich structure comprising quarter-wave films 17032 and 17034 on either side of linear polarizer film 17033. The sandwich structure may be loosely attached or laminated together with an adhesive layer. Light trap 16930 functions by allowing unpolarized image light 17025 from emissive image source 16910 to pass through quarter-wave film 17032, which does not affect image light 17025 because it is unpolarized. Image light 17025 then passes through polarizer 17033, which linearly polarizes the image light. The linearly polarized image light then passes through quarter-wave film 17034, which turns it into circularly polarized image light 17026. A portion of the circularly polarized image light 17026 is reflected by angled beam splitter 16950 toward curved partial mirror 16960, while another portion is transmitted by angled beam splitter 16950 and becomes face glow. Curved partially reflecting mirror 16960 reflects a portion of the circularly polarized image light 17026 back toward angled beam splitter 16950 while transmitting another portion, which becomes eye glow. A portion of the circularly polarized image light 17026 passing back toward angled beam splitter 16950 is transmitted to eyebox 16970, and another portion is reflected by angled beam splitter 16950 so that it passes toward emissive image source 16910. However, when the returned circularly polarized image light 17026 passes through quarter-wave film 17034, it is transformed into linearly polarized light with the opposite polarization orientation compared to image light 17025, so that polarizer 17033 absorbs the returned light. Thus, given that a typical absorptive polarizer absorbs approximately 99.99% of light having the opposite polarization state, the portion of image light reflected back toward emissive image source 16910 can be essentially eliminated by light trap 16930.
Scene light 17045 is unpolarized and is transmitted by curved partial mirror 16960. When unpolarized scene light 17045 encounters angled beam splitter 16950, a portion is transmitted toward eyebox 16970 to provide a see-through view of the environment and a portion is reflected toward emissive image source 16910. The unpolarized scene light 17045 passes through quarter-wave film 17034 unchanged. When the scene light passes through polarizer 17033, it becomes linearly polarized light. When it then passes through quarter-wave film 17032, it becomes circularly polarized scene light 17046. Circularly polarized scene light 17046 is reflected by the surface of emissive image source 16910. When the returning circularly polarized scene light 17046 passes back through quarter-wave film 17032, it is transformed into linearly polarized scene light with the opposite polarization state, which is then absorbed by polarizer 17033.
The net effect of light trap 16930 is that stray light from the returned image light and scene light is essentially eliminated and, as a result, the contrast of the displayed image is greatly increased. This is particularly important when the head-mounted display is used in bright environments where the incoming scene light 17045 is substantial. By using the sandwich structure of light trap 16930 (quarter-wave films 17032 and 17034 on either side of linear polarizer film 17033), stray light from unpolarized light 17025 and 17045 traveling in opposite directions can be effectively trapped. The effect on the portion of image light 17025 that is reflected by angled beam splitter 16950, reflected by curved partial mirror 16960, and transmitted by angled beam splitter 16950 to become the displayed image viewed by the user is that image light 16940 is circularly polarized light. Further, there is an approximately 50% reduction in brightness as image light 17025 passes through polarizer film 17033. However, the increase in contrast is much greater, so that the perceived image quality of the displayed image is greatly improved, especially in bright environments. The inventors have performed measurements of the effectiveness of such a light trap positioned over an OLED display surrounded by a black textured plastic frame, wherein the quarter-wave film was selected to have a retardation level that provides excellent extinction of stray light after the stray light passes through the quarter-wave film twice, without imparting a color shift to the remaining stray light. The result is that the light reflected from the OLED display surface is reduced by a factor of 117 and the light reflected from the black textured plastic is reduced by a factor of 6.
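The double-pass extinction can be checked with a small Jones-calculus sketch. Field vectors are written in a fixed lab frame, and an ideal normal-incidence reflection is modeled as the identity matrix in that frame (which is what makes the handedness flip of circular light appear on the return pass); these conventions, and the ideal lossless components, are simplifying assumptions rather than the patent's analysis:

```python
import numpy as np

P = np.array([[1, 0], [0, 0]], dtype=complex)   # linear polarizer 17033, x axis
Q = 0.5 * np.array([[1 + 1j, 1 - 1j],
                    [1 - 1j, 1 + 1j]])          # quarter-wave film 17034, fast axis 45 deg
M = np.eye(2, dtype=complex)                    # ideal reflection (fixed-frame convention)

E = P @ np.array([1, 1], dtype=complex)         # image light after the polarizer
out = Q @ E                                     # circularly polarized light 17026 leaving the trap
back = P @ (Q @ (M @ out))                      # returned light after re-entering the trap

print(np.sum(np.abs(out) ** 2))                 # 1.0: light delivered toward the beam splitter
print(np.sum(np.abs(back) ** 2))                # ~0.0: returned stray light is absorbed
```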
Light trap 16930 can also be simplified to a circular polarizer by eliminating one of the quarter-wave films. In this case, light trap 16930 acts on only one of the unpolarized stray light sources. If quarter-wave film 17032 is eliminated, light trap 16930 captures only stray light from image light 17025, and the scene light reflected back toward image source 16910 is then merely polarized rather than trapped. Alternatively, if quarter-wave film 17034 is eliminated, light trap 16930 captures only stray light from scene light 17045, and image light 17026 leaving the trap is linearly rather than circularly polarized.
In an alternative embodiment, light trap 16930 may be positioned on the surface of image source 16910. The light trap may be a polarizer 17033 sandwiched between quarter-wave films 17032 and 17034 to capture stray light from both image light 17025 and scene light 17045 reflected back toward image source 16910. By positioning light trap 16930 directly on the surface of image source 16910, stray light from scene light 17045 is trapped very effectively, because birefringence in lens 16920 does not affect the polarization state of circularly polarized scene light 17046. Thus, light trap 16930 may be a circular polarizer positioned on image source 16910, with the quarter-wave film of the circular polarizer proximate to the surface of image source 16910, to capture only the stray light associated with scene light 17045, as previously described herein. Light trap 16930 can be sized to cover the surface of image source 16910 and, in addition, to cover adjacent reflective portions of the image source package or the adjacent housing, to capture stray light associated with light reflected from these surfaces.
To capture stray light from image light 17025 that is reflected back toward image source 16910, a second circular polarizer (e.g., including polarizer 17033 and quarter-wave film 17034) may be positioned between lens 16920 and the lower optics, with quarter-wave film 17034 of the second circular polarizer positioned to face the lower optics. The polarization axis of the first circular polarizer should be aligned with the polarization axis of the second circular polarizer to transmit maximum image light 17025. This second circular polarizer provides a high efficiency light trap for stray light from image light 17025 that is reflected back toward image source 16910 by curved partial mirror 16960 and angled beam splitter 16950. However, if first and second circular polarizers are included, birefringence in lens 16920 in the upper optics will affect the brightness uniformity and contrast uniformity of the image seen by the user. This is because image light 17025 will be polarized by the first circular polarizer, and the image light will then pass through lens 16920, where any birefringence present will cause a portion of the image light to become elliptically polarized. The elliptically polarized image light will then pass through the second circular polarizer, where the elliptically polarized portions of the image light will be filtered in proportion to the degree of elliptical polarization present. If lens 16920 has low birefringence (e.g., less than 50 nm retardation), using two circular polarizers will produce nearly imperceptible degradation of brightness uniformity and contrast uniformity; however, if the birefringence is high, brightness uniformity and contrast uniformity will be noticeably degraded.
Table 1 below shows a comparative analysis of various non-polarizing partial reflection treatments for angled beam splitter 16950 and curved partial mirror 16960, where all numbers are presented as percentages of the image light 17025 emitted by image source 16910. The analysis shows the effect of using a notch mirror treatment, and the effect of light trap 16930, compared to a base partial mirror treatment (i.e., a partial mirror that reflects all visible wavelengths substantially equally) on angled beam splitter 16950 and curved partial mirror 16960. Phase-matched nanostructures that reflect light in narrow wavelength bands may be provided as an embossed film or as an in-molded structure on an optical surface to provide a notch mirror treatment, but they are not shown in Table 1. In this analysis, the reflectivities of angled beam splitter 16950 and curved partial mirror 16960 have been selected to deliver at least 50% of the "see-through light to the eye" (this is the scene light 16973 that reaches eyebox 16970) and at least 20% of the "see-through light at the image-light wavelengths", which takes into account the narrow reflection bands provided by any notch mirror treatment on the reflective surfaces. Case 1 includes a tri-color notch mirror treatment (a triple notch mirror for reflecting red, green, and blue light) on both angled beam splitter 16950 and curved partial mirror 16960, and does not include light trap 16930. In this analysis, the notch mirrors were assumed to reflect at a selected percentage of reflectivity within a 20 nm wide band for each color (e.g., a tri-color notch mirror can provide high reflectivity in bands centered at 470 nm for blue, 535 nm for green, and 615 nm for red) and to transmit the remaining visible light at 95%. Case 2 includes the tri-color notch mirror treatment on angled beam splitter 16950 and curved partial mirror 16960 plus light trap 16930. Case 3 includes a base partial mirror treatment for curved partial mirror 16960, a tri-color notch mirror treatment for angled beam splitter 16950, and light trap 16930. Case 4 includes a base partial mirror treatment for angled beam splitter 16950, a tri-color notch mirror treatment for curved partial mirror 16960, and light trap 16930. Case 5 includes a base partial mirror treatment for both angled beam splitter 16950 and curved partial mirror 16960, and light trap 16930.
Table 1

| Case number | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Treatment on beam splitter | Tri-color notch mirror | Tri-color notch mirror | Tri-color notch mirror | Base partial mirror | Base partial mirror |
| Treatment on curved partial mirror | Tri-color notch mirror | Tri-color notch mirror | Base partial mirror | Tri-color notch mirror | Base partial mirror |
| Quarter-wave/polarizer light trap for light reflected back to display | No | Yes | Yes | Yes | Yes |
| Beam splitter reflectance for image light (%) | 50 | 50 | 60 | 30 | 75 |
| Total beam splitter transmission (%) | 83 | 83 | 80.6 | 65 | 28 |
| Curved partial mirror reflectance for image light (%) | 80 | 80 | 33 | 75 | 67 |
| Total curved partial mirror transmission (%) | 75.8 | 75.8 | 62 | 77 | 15 |
| Display panel reflectance (%) | 15 | 15 | 15 | 15 | 15 |
| Image light to the eye (%) | 20.0 | 8.4 | 3.3 | 6.1 | 1.8 |
| See-through light to the eye (%) | 62.9 | 62.9 | 50.0 | 50.1 | 50.3 |
| Eye glow (%) | 10.0 | 4.2 | 16.9 | 3.2 | 5.6 |
| Face glow (%) | 50.0 | 21.0 | 16.8 | 27.3 | 31.5 |
| Light from below reflected toward the eye (%) | 12.0 | 12.0 | 14.4 | 30.0 | 20.0 |
| Image light returned to panel (%) | 20.0 | 0.00084 | 0.000499 | 0.000284 | 0.00005 |
| See-through light at image-light wavelengths returned to panel (%) | 10.0 | 4.2 | 15.6 | 3.2 | 5.6 |
| See-through light at image-light wavelengths returned to panel and reflected back toward the eye (%) | 1.5 | 0.000063 | 0.000234 | 4.73E-05 | 0.00008 |
| See-through light to the eye at image-light wavelengths (%) | 20.1 | 20.1 | 31.0 | 25.2 | 50.3 |
| Ratio: image light to eye / image light returned to panel | 7 | 10000 | 6667 | 21667 | 37500 |
| Ratio: image light to eye / see-through light at image-light wavelengths returned to panel and reflected back into the system | 13 | 133333 | 14194 | 130000 | 20896 |
The effect of light trap 16930 on image contrast can be seen in the bottom two rows of Table 1, which express contrast as the ratio of "image light to the eye" (representing the brightness of the displayed image) divided by the light returned to the image source, whether from image light reflected back to the image source or from scene light reflected back to the image source. In both sets of numbers, the ratio is significantly higher (1000 times or more) in Cases 2-5 (where light trap 16930 is present) compared to Case 1 (where there is no light trap). The light loss produced by including the light trap can also be seen in the numbers for "image light to the eye", where Case 1 shows an approximately 2x higher number, indicating a brighter displayed image.
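The first-order bookkeeping behind Table 1 can be reproduced by multiplying the stated reflectances and transmissions along the image-light path. The sketch below recomputes several Case 1 and Case 2 entries; the 42% polarizer transmission of unpolarized light is the typical value cited in the text, and the 50%/20% notch-band transmissions follow from the stated reflectances:

```python
# Fractions of the image light emitted by the image source (Cases 1-2).
R_bs, T_bs = 0.50, 0.50    # angled beam splitter at the image-light wavelengths
R_pm, T_pm = 0.80, 0.20    # curved partial mirror at the image-light wavelengths
T_pol = 0.42               # typical polarizer transmission of unpolarized light

image_to_eye_c1 = R_bs * R_pm * T_bs        # 0.200 -> Table 1 "image light to the eye" = 20.0
eye_glow_c1 = R_bs * T_pm                   # 0.100 -> "eye glow" = 10.0
face_glow_c1 = T_bs                         # 0.500 -> "face glow" = 50.0
image_to_eye_c2 = T_pol * image_to_eye_c1   # 0.084 -> 8.4 with the light trap present
face_glow_c2 = T_pol * face_glow_c1         # 0.210 -> 21.0 with the light trap present

print(image_to_eye_c1, eye_glow_c1, face_glow_c1, image_to_eye_c2, face_glow_c2)
```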
The effect of the notch mirror treatments on the numbers for "image light to the eye" (image light 16940) and "see-through light to the eye" (scene light 16973) can be seen by comparing Cases 2-4, with their various combinations of tri-color notch mirror treatments, to Case 5 with base partial mirror treatments on angled beam splitter 16950 and curved partial mirror 16960. A tri-color notch mirror treatment on one or both reflective surfaces increases the portion of image light 16940 delivered to eyebox 16970 while also increasing the portion of scene light 16973 provided to the eye. Using base partial mirror treatments for angled beam splitter 16950 and curved partial mirror 16960 reduces the efficiency of the optics in delivering image light to the user's eye by a factor of approximately 2 to 4.5. It should be noted that if angled beam splitter 16950 or curved partial mirror 16960 included a polarizer (either absorptive or reflective), only about 42% of the scene light would be transmitted to the user's eye, based on the typical transmission of unpolarized light by a polarizer. And if one of the surfaces were a polarizer and the other a 50% partial mirror, only about 21% of the scene light would be transmitted to the user's eye.
Other light losses are also shown by the numbers in Table 1. "Eye glow" is the portion of image light 16940 that is transmitted by curved partial mirror 16960. "Face glow" is the portion of the image light transmitted by angled beam splitter 16950. Determining which case is better in terms of eye glow and face glow for a given head-mounted display will depend on whether there are other controls to mitigate the eye glow or face glow. Case 3 may be the best choice if an eye glow control is present, as its face glow is low. Case 4 may be the best choice if a face glow control is present, as it has a lower eye glow.
In general, Case 2, which has tri-color notch mirror treatments on both angled beam splitter 16950 and curved partial mirror 16960, has a good combination of characteristics for providing a bright and high contrast image together with high see-through transmission to the user's eye. This is because Case 2 has relatively good figures for: efficiency of delivering image light to the eye, high see-through transmission, low eye glow, low face glow, acceptable see-through transmission at the image-light wavelengths, and excellent contrast.
A tri-color notch mirror treatment can be obtained that reflects more S-polarized light than P-polarized light. However, given the narrow reflection bands provided by the tri-color notch mirror treatment, the transmitted portion of the light may be substantially unpolarized and thus still provide transmission of more than 50% of the scene light, along with a view of polarized light sources that is free of chromatic effects such as rainbow patterns. In this scenario, Case 4 may be more efficient in delivering image light to the eye while providing high see-through transmission.
In many use cases, such as, for example, augmented reality imaging, it is desirable to use a head mounted display that provides a wide field of view (e.g., greater than 40 degrees). However, it can be difficult to design any type of optic that provides a uniformly high MTF for a uniformly sharp image over an entire wide field of view. As a result, the optics may be very complex, and the physical size of the optics may become undesirably large for use in a head-mounted display. To avoid this problem, it is important to know the acuity of the human eyes in the peripheral part of the field of view, and to know the angular range of eye movement typically used before a person moves his head.
FIG. 172 shows a graph of the acuity of a typical human eye relative to angular position in the field of view (S. J. Anderson, K. T. Mullen, R. F. Hess; "Human peripheral spatial resolution for achromatic and chromatic stimuli: limits imposed by optical and retinal factors"; Journal of Physiology (1991), 442, pp. 47-64). The fovea at the center of the human eye provides very high acuity over an angular range of approximately 2 degrees. Acuity decreases rapidly with increasing angular position (also called eccentricity) in the field of view. Furthermore, chromatic sensitivity is substantially lower than achromatic sensitivity. As shown in FIG. 172, achromatic sensitivity falls from approximately 50 cycles/degree at the fovea to 5 cycles/degree at 15 degrees, and chromatic sensitivity falls from approximately 30 cycles/degree at the fovea to 3 cycles/degree at 15 degrees. FIG. 173 shows a graph of typical acuity of a human eye versus eccentricity in simplified form, emphasizing the decrease in acuity with eccentricity and the difference between achromatic and chromatic acuity.
The acuity experienced by the user must, however, take into account the rapid movements of the eyes within the field of view. These rapid eye movements effectively extend the highly sensitive portion of the field of view seen by the user. In augmented reality applications, head movements by the user must also be taken into account. When a user perceives an object near the edge of the field of view of the eyes, the user first moves their eyes toward the object and then moves their head. By reducing the angular movement of the eyes, these combined movements enable the user to view a wider field of view while also making viewing of objects at the edges of the field of view more comfortable. People tend to move their eyes only a limited amount before they move their head. FIG. 174 shows a typical graph of eye movement and head movement given in radians versus time (A. Doshi, M. Trivedi; "Head and eye gaze dynamics in visual attention and context learning"; 2009 IEEE, 978-1-4244-). As seen in the data presented in FIG. 174, the user's head tends to move quickly to re-center the eyes within the field of view so that the head and eyes have the same angle. The angular disparity between the eyes and the head tends to be limited to less than approximately 0.25 radians (which is equal to approximately 15 degrees), except for very brief shifts. This is different from the head movements that occur when a person reacts to a sound, where the eyes and head move together with minimal disparity. If a user wants to view an object in a direction greater than approximately 15 degrees from where the head is pointing, the user will first move their eyes and then move their head to reduce the angular disparity between the eyes and the head to less than 15 degrees to view the object. This relationship between eye movement and head movement is important to take into account when designing and operating a head-mounted display with a wide display field of view. Based on the acuity of the human eye and eye movement relative to head movement, a sharp image with high resolution and high contrast within the central +/-15 to +/-20 degree portion of the display field of view is required to provide the user with an image perceived as sharp and high contrast. This is the central region of the display field of view where the user will move their eyes to view the image with the fovea. Outside this region of the display field of view, the displayed image does not have to be as sharp, because the user typically will not directly view that region of the display field of view. For example, instead of viewing an augmented reality object located 30 degrees from the center of the display field of view, the user would move their eyes approximately 15 degrees toward the object and then rotate their head the remaining 15 degrees toward the object. If the augmented reality image is world-locked (i.e., where the object is displayed in a constant position relative to real objects in the surrounding environment), then when the user moves their head, the augmented reality object will move toward the center of the display field of view, and thus into the central sharp region of the display field of view.
FIG. 175 is a graph illustrating the effective achromatic acuity provided by a typical human eye across the field of view when eye movement is taken into account, compared to the foveal acuity. The relative acuity is equal to the acuity provided by the fovea within the +/-15 degree portion of the field of view that is viewed with the fovea by moving the eye. Beyond the portion of the field of view seen with the fovea, the acuity decreases at a rate associated with eccentricity in the eye, as shown in fig. 173. This acuity chart corresponds to the sharpness profile that needs to be provided by a head mounted display having a wide field of view. As long as the display image is provided with a relative sharpness that is higher than the sharpness distribution shown in fig. 175, the human eye perceives the display image as uniformly sharp. This is because the acuity of the eye is greatly reduced in the portions of a wide field of view that lie beyond what can be comfortably viewed with the fovea. For example, based on the acuity chart in FIG. 175, an image may be presented with a central sharp zone of +/-15 degrees to +/-20 degrees in size, and as long as the image sharpness decreases to no less than 20% of the sharpness of the sharp zone at approximately +/-25 degrees, the image will be perceived by the user as uniformly sharp. FIG. 176 is a graph showing the minimum design MTF versus angular field position required for an image that provides a uniformly sharp appearance in a wide field of view display image. In this figure, the design MTF is given as the spatial frequency at which the MTF is 20%, expressed relative to Nyquist, where full Nyquist resolution is 100% and reduced resolution is a smaller percentage. The graph shows a uniform design MTF at 100% of Nyquist across the central sharp zone (+/-15 degrees) and a rapidly decreasing design MTF in the peripheral zone (greater than 15 degrees). By providing a reduced design MTF in the outer portion of the angular field, the optics can be greatly simplified, thereby reducing cost and reducing the overall size of the optics.
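As an illustration only, such a minimum design-MTF profile can be sketched as below; the linear fall-off between the stated values (100% of Nyquist inside +/-15 degrees, about 20% near +/-25 degrees) and the 10% floor beyond 30 degrees are assumptions fitted to the examples in the text, not the patent's exact curve.

```python
def required_mtf20_fraction(field_deg, sharp_zone_deg=15.0):
    """Minimum spatial frequency (as a fraction of Nyquist) at which the
    optics must still deliver 20% MTF, versus angular field position."""
    a = abs(field_deg)
    if a <= sharp_zone_deg:
        return 1.0                       # full Nyquist resolution at 20% MTF
    # assumed linear fall-off reaching 20% of Nyquist at 25 degrees
    return max(1.0 - 0.08 * (a - sharp_zone_deg), 0.1)

for angle in (0, 15, 20, 25, 30, 45):
    print(angle, required_mtf20_fraction(angle))  # 1.0, 1.0, 0.6, 0.2, 0.1, 0.1
```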
FIG. 177 is a graph showing the relative MTF that needs to be provided by display optics for a wide field-of-view display in order to provide a sharpness that matches the acuity of the human eye in the peripheral region of the display field of view, where the resolvable sharpness for the optics is determined as the spatial frequency at which the MTF is 20%. In this figure, simple two-point MTF curves (100% MTF and 20% MTF) are shown for various angular field positions in the display field of view: 0 to 15 degrees (the top right curve), 20 degrees, 25 degrees, 30 degrees, 35 degrees, 40 degrees, 45 degrees and 50 degrees (the bottom left curve). These curves show the minimum MTF (from fig. 176) that needs to be provided across the display field of view to match the acuity of the human eye. As can be seen, the results show that the MTF for wide field-of-view optics can drop substantially in the outer portion of the display field of view. For example, the MTF of wide field-of-view optics can be higher than 20% at the Nyquist frequency of the image source in the central sharp region, while the MTF can be much lower in the peripheral region, such as 2% at the Nyquist frequency or 20% at 1/2 of the Nyquist frequency. It should also be noted that, since the color acuity of the human eye is lower than the achromatic acuity, a large amount of lateral color (e.g., 5 pixels at 25 degrees or more) can be present in the peripheral area of the wide field-of-view display image without the lateral chromatic aberration being noticeable. Lateral chromatic aberration in the peripheral region of the displayed wide field-of-view image would normally reduce the perceived sharpness of the image, but the low sensitivity of the eye in the peripheral region makes this loss in sharpness imperceptible. Similarly, the low acuity of the human eye in the peripheral region makes distortion less perceptible there. Together, the loss of acuity and the reduction in color acuity reduce the image quality that needs to be provided in the peripheral region of the display field of view.
By way of example, FIG. 171 shows a diagram of a simple optical system that provides a 60 degree display field of view (i.e., +/-30 degrees from center). This system includes an emissive image source 169910, a single lens element 1697, an angled beam splitter 16950, and a curved partially reflecting mirror 1960, as previously described herein. The optical system provides a display image to an eyebox 16970 with a display field of view having an included angle of approximately 60 degrees. At the same time, the user is provided with a see-through view of the surrounding environment through the angled beam splitter 16950 and the curved partially reflecting mirror 1960, and this see-through view may be larger than the display field of view by extending through the areas adjacent to the angled beam splitter 16950 and the curved partially reflecting mirror 1960. FIG. 178 shows the modeled MTF curves associated with the optical system of fig. 171 for various angular positions within the display field of view. The MTF curves for the 15, 6 degree positions in the display field of view (expressed as horizontal, vertical within the field of view) are indicated in fig. 178 with arrows, where it can be seen that the 15 degree MTF curve ends with a 20% MTF at the Nyquist point for the image source, which in this case corresponds to the right hand end of the spatial frequency axis, or 75 cycles/mm. The MTF curves below the indicated 15, 6 degree MTF curve are 30 degree MTF curves. For a 30 degree point in the display field of view to have the same perceived sharpness as a 15 degree point, the 30 degree MTF curve needs to have at least 20% MTF at 7.5 cycles/mm (10% of Nyquist) according to fig. 176. It can be seen that all the 30 degree MTF curves shown in fig. 178 are easily above 20% MTF at the 7.5 cycles/mm point, so that the image will be perceived by the human eye as sharp in the peripheral zone when the limited movement of the human eye is considered. Thus, even though the MTF curves corresponding to peripheral angular positions in the display field of view shown in fig. 178 do not satisfy the performance condition of 20% MTF at the 75 cycles/mm Nyquist frequency, the peripheral points in the field of view will still be perceived by the user as providing the same level of sharpness as the central angular points in the field of view.
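A short numeric check of this comparison, reusing the required_mtf20_fraction sketch above; the MTF readings passed in are assumed values for illustration, not data taken from fig. 178.

```python
NYQUIST_CYC_PER_MM = 75.0   # Nyquist frequency of the image source, from the text

def meets_sharpness_requirement(field_deg, mtf20_freq_cyc_per_mm):
    """mtf20_freq is the spatial frequency (read off a modeled MTF curve
    such as fig. 178) at which the optic's MTF falls to 20% at this angle."""
    needed = NYQUIST_CYC_PER_MM * required_mtf20_fraction(field_deg)
    return mtf20_freq_cyc_per_mm >= needed

# Assumed readings, not the patent's data:
print(meets_sharpness_requirement(15, 75.0))  # True: central zone reaches Nyquist
print(meets_sharpness_requirement(30, 12.0))  # True: periphery only needs 7.5 cyc/mm
```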
FIG. 179 is an illustration of a resolution chart in which the sharpness of an image has been reduced by blurring the peripheral portions of the image to simulate an image from optics providing a central sharp zone of +/-15 degrees with a peripheral zone that is less sharp. Looking directly at different parts of the image, it can be seen that the outer portion 179100 is much less sharp than the central region 17920. However, if the image is viewed at a distance where the central region 17920 between the vertical bars occupies approximately +/-15 degrees in the viewer's field of view, the image will appear to be uniformly sharp at the outer edges as long as the viewer remains gazing at the inner edges of the vertical bars.
As a result, the systems and methods described herein in accordance with the principles of the present invention may be used to design any type of optics for a head-mounted display with a wide field of view, including optics with beam splitters, optics with waveguides, or projection optics with holographic optical elements, wherein a central sharp zone is provided that delivers MTF levels corresponding to the acuity of the fovea, and a peripheral zone adjacent to the central sharp zone provides a reduced level of sharpness matched to the acuity of the human eye when the limited movement of the eye is considered. In an embodiment, the central sharp zone comprises +/-15 degrees (a 30 degree included angle) relative to the optical axis, and the peripheral zone extends beyond the central sharp zone to the edge of the field of view of the displayed image. The MTF in the central sharp zone should be higher than 20% at the Nyquist frequency of the display to provide a sharp image. The MTF in the peripheral zone can decrease with increasing angle, at a rate no greater than the decrease in acuity of the human eye with increasing eccentricity. For example, if the peripheral zone extends from +/-15 degrees to +/-30 degrees (a 60 degree included angle), the spatial frequency at which the MTF is 20% can be as low as 10% of the Nyquist frequency. By limiting the angular zone in which high MTF is required and reducing the design MTF in the peripheral zone, the optics can include fewer and simpler elements made with lower cost materials, thereby reducing the overall cost of the optics; in addition, the optics can be made more compact, enabling wide field-of-view optics to better fit into a head-mounted display. This effect is illustrated by the compact optics shown in fig. 171 which, as stated previously herein, provide a 60 degree field of view while including only a single plastic field lens, a beam splitter, and a curved partially reflecting mirror. Treatments for the beam splitter and the curved partially reflecting mirror have been discussed previously herein to provide high see-through transmission, with an unpolarized lower portion to eliminate rainbow artifacts when viewing polarized light sources. Furthermore, optical traps may be added to the compact optics to increase contrast, also as discussed previously herein.
The systems and methods described herein in accordance with the principles of the invention may be used to fabricate compact optical elements for head mounted displays with wide display fields of view, improved contrast, and high transparency for see-through views of the surrounding environment. By using an emissive display, the need for a frontlight is eliminated, reducing the space between the emissive image source and the lower optics. By limiting the high MTF zone to a central sharp zone surrounded by a lower MTF peripheral zone, the number of lens elements required to display a wide field of view is reduced, thereby also reducing the size of the optical assembly. As shown in fig. 171, a 60 degree field of view is possible with only one or two lens elements in the upper portion. As a result, the height of the optical assembly can be reduced.
In an embodiment, the angular size of the display field of view and the emissive image source 169910 are selected such that individual pixels in the emissive image source 169910 subtend an angle in the display image that is smaller than the achromatic acuity of the fovea of the human eye, such that the black and white portions of the display image do not have a pixelated appearance when viewed by a user. This provides the user with an image having smooth lines and curves, without the jagged appearance produced when individual black and white pixels can be resolved. For example, based on the data shown in figs. 172 and 173, the human eye has an achromatic acuity of approximately 50 cycles/degree, and the display field of view should be less than 38 x 22 degrees (43 degrees diagonal) for adjacent black and white pixels to be individually unresolvable in the sharp area of a display image comprising 1920 x 1080 pixels (1080p).
In an embodiment, the angular size of the display field of view and the emissive image source 169910 are selected such that individual pixels in the emissive image source 169910 subtend an angle in the display image that is smaller than the color acuity of the human eye, such that colored portions of the display image do not have a pixelated appearance when viewed by a user. This provides the user with an image having smooth lines and curves over the colored areas, without the jagged appearance produced when individual colored pixels can be resolved. For example, based on the data shown in figs. 172 and 173, the human eye has a color acuity of approximately 30 cycles/degree, and the display field of view should be less than 64 x 36 degrees (73 degrees diagonal) for adjacent colored pixels to be individually unresolvable in the sharp area of a display image comprising 1920 x 1080 pixels.
In an embodiment, the angular size of the display field of view and the emissive image source 169910 are selected such that the subpixels making up each pixel in the emissive image source (typically each full color pixel comprises adjacent red, green and blue subpixels, and the relative brightnesses of the subpixels together determine the perceived color of the pixel) subtend an angle smaller than can be resolved by the human eye, such that each pixel appears to be a single color and the subpixels are not visible to the user. This provides the user with an image comprising uniform color patches, without the speckled appearance that can be perceived when the individual subpixels can be resolved. For example, based on the data shown in figs. 172 and 173, the human eye has an achromatic acuity of approximately 50 cycles/degree, and the display field of view should be less than 115 x 64 degrees (131 degrees diagonal) for subpixels to be unresolvable in an image comprising 1920 x 1080 pixels.
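For illustration, the three field-of-view limits above follow from the same arithmetic, sketched below; it uses the document's convention of one pixel (or subpixel) per cycle of acuity, the function name is ours, and the diagonals differ from the text's 43 and 131 degree figures only by rounding.

```python
import math

def max_fov_deg(pixels_h, pixels_v, acuity_cpd, subpixels_per_pixel=1):
    """Largest display field of view for which the pixel (or subpixel)
    structure stays below the given acuity in cycles/degree."""
    h = pixels_h * subpixels_per_pixel / acuity_cpd
    v = pixels_v * subpixels_per_pixel / acuity_cpd
    return round(h), round(v), round(math.hypot(h, v))

print(max_fov_deg(1920, 1080, 50))     # (38, 22, 44): black and white pixels
print(max_fov_deg(1920, 1080, 30))     # (64, 36, 73): colored pixels
print(max_fov_deg(1920, 1080, 50, 3))  # (115, 65, 132): subpixels
```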
In an embodiment, the optics comprise a telecentric region in the optical path of the image light, within which lens elements can be moved relative to each other to effect a change in focus distance without changing the magnification of the displayed image. Changes in focus distance can be achieved in head mounted displays in various ways by changing the spacing between optical elements. For example, focus adjustment may be achieved by moving the image source relative to the remainder of the optical system. However, in a display system with a wide field of view, the image light 16940 emitted by the emissive image source 169910 must spread in area to fill the area of the curved partially reflecting mirror 1960, which establishes the angular size of the display field of view as seen from the eyebox 16970, as shown in fig. 171. To this end, the cone of light between the emissive image source 169910 and the lens element 1697 diverges rapidly (e.g., at an included angle of 100 degrees or more). Because of this diverging light, any change in the spacing between the emissive image source 169910 and the lens element 1697 (made to change the focus distance or focus quality) is accompanied by a change in the visual size of the display image seen by the user. In head mounted displays presenting augmented reality imagery, especially when focus adjustments are made automatically as the user moves or as the augmented reality object moves, it is important that the visual size of the augmented reality object remains consistent through the movement to provide comfortable viewing conditions for the user. A change in the visual size of the displayed image may also cause the image to be cropped by a portion of the housing adjacent to the optics, such that the edges of the displayed image are not viewable from the eyebox or the effective size of the eyebox is reduced. Thus, the ability to change the focus distance of the displayed image, or of a portion of the displayed image, without changing the visual size of the image is an important feature for head mounted displays used to display augmented reality imagery. The telecentric region may be provided at multiple locations within the optics, such as between lenses in the upper optics or between the upper and lower optics. FIG. 171 shows a telecentric region 17140 between the upper and lower optics, where the centerlines of the ray bundles are parallel. Within this telecentric region 17140, focus adjustments may be made by moving the lens element 1697 and the emissive image source 169910 as a first unit, relative to a second unit comprising the angled beam splitter 16950 and the curved partially reflecting mirror 1960, to change focus. As an example, for the optics shown in fig. 171, a 0.5mm decrease in the separation between the upper optics 169903 and the lower optics 169907 can change the focus distance from infinity to 1 meter (the same effect as adding a 1 diopter corrective lens behind the optics). This ability to adjust the focus distance may be used to fine tune the sharpness of the displayed image for the user, or to change the apparent distance at which the displayed image is presented to the user. Changes in the apparent distance of the display image may be used in augmented reality use cases where the display image is presented at a distance that matches an object in the environment, or at a particular distance such as arm's length.
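A minimal sketch of this relation, assuming linearity and calibrated only by the single data point given above (0.5 mm of spacing change per diopter for the optics of fig. 171); the function name is illustrative, not from the patent.

```python
MM_PER_DIOPTER = 0.5   # calibration point from the text: 0.5 mm -> 1 diopter

def spacing_change_mm(focus_distance_m):
    """Reduction in upper/lower optics spacing needed to pull focus from
    infinity to the given distance (diopters = 1 / distance in meters)."""
    return (1.0 / focus_distance_m) * MM_PER_DIOPTER

print(spacing_change_mm(1.0))   # 0.5 mm -> focus at 1 m, as in the text
print(spacing_change_mm(2.0))   # 0.25 mm -> focus at 2 m (assumed linear)
```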
A manual mechanism such as a screw or cam may be positioned to change the spacing in the telecentric region by moving the associated optical element. Manual adjustment is useful for adjusting focus during manufacturing, or for enabling a user to fine tune the focus to their eyeglass prescription. An electronic actuator may be included to automatically adjust the spacing in the telecentric region for augmented reality applications or for mode changes that include changes in focus distance.
In embodiments, no telecentric region may be provided, or the region may be only nearly telecentric, and focal plane adjustments may be made by moving the optical elements while also digitally adjusting the content to compensate for the magnification effects caused by moving elements in a non-telecentric region.
In an embodiment, a mode is provided for viewing wide angle display images (e.g., greater than a 50 degree included angle) with any type of head mounted display, in which the images are moved laterally within the display field of view in correspondence with detected eye movement followed by head movement by the user. This mode mimics the experience of sitting in the front row of a movie theater where, in order to view a wide angle movie image, the viewer cannot comfortably view the entire movie screen with eye movement alone and instead must move their eyes along with their head to see the peripheral areas of the movie screen. To achieve this mode, the head mounted display requires a means for detecting eye movement associated with the optical assembly 169900, as well as an inertial measurement unit to detect head movement. This mode thus detects the user's desire to view a peripheral portion of the displayed image with the higher acuity portion of the eye's field of view, by detecting eye movement followed by head movement in the same direction.
The display image is then moved laterally within the display field of view in a direction opposite to the detected movement of the eyes and head, where the magnitude and speed of the lateral movement correspond to the magnitude and speed of the detected movement of the eyes and head. This lateral movement of the display image provides the user with an improved view of the peripheral portion of the display image by moving that peripheral portion into the central sharp region of the display field of view, to a position in which the user's eyes are relatively centered. Further, the lateral movement of the display image within the display field of view may be limited to the movement required to center the edges of the display image within the display field of view. This mode addresses the fact that it is uncomfortable for the user to hold their eyes at an angle of approximately 15 to 20 degrees or more relative to their head for more than a short period of time, and that, since the head mounted display is attached to the user's head, eye movement is the only way to view different portions of the display field of view. This makes it difficult for a user of the head-mounted display to comfortably view an image having a visual size greater than a 30 to 40 degree included angle. The disclosed mode overcomes this limitation by detecting when a user would like to view a peripheral portion of the display image and then moving the display image laterally within the display field of view to a position where the peripheral portion of the display image can be viewed more comfortably, with improved sharpness and higher contrast.
This mode differs from a world-locked or body-locked presentation of the display images, in which lateral movement of the images occurs in correspondence with head movement independent of eye movement; here, the lateral movement of the display images within the display field of view is triggered by detecting a combined eye movement in a certain direction followed by a head movement in the same direction. A description of the body-locking of virtual objects in head-mounted displays is provided, for example, in U.S. patent publication 2014204759. In an embodiment, the lateral movement of the display image within the display field of view is limited to the movement required to position the edge of the display image at the center of the display field of view, or at some other comfortable point within the field of view. An example where lateral movement of the image would not be desired is when the user only temporarily looks toward an edge or corner (e.g., a warning light flashes in a corner of the image, and the user only briefly moves their eyes to verify the flashing light). In this case, the user does not move their head, so no lateral movement of the image is triggered, and the display image remains stationary within the display field of view.
After eye movement above a predetermined threshold has been detected, followed by head movement in the same direction, the display image is moved laterally within the display field of view in correspondence with, and in the opposite direction to, the detected angular movement of the user's head (note that the method may also be used in a corresponding manner for trans-axial or radial movement of the display image within the display field of view). Eye movement may be detected with an eye camera (e.g., as disclosed elsewhere herein) that captures images of the user's eye while the user views the display image, or by detecting changes in the electric field associated with the eye. Angular movement of the user's head may be detected by a motion sensor (e.g., an IMU), relative to the world, relative to the user's body, etc. Fixing the display image in association with the environment is good for viewing a wide-angle image when the user is sitting or standing still. Fixing the display image in association with the user's body is good for viewing wide angle images when the user is walking, running, or sitting in a car. The angular movement of the user's head relative to the environment may be measured, for example, by an inertial measurement unit in the head mounted display or by image tracking of objects in the environment with a camera in the head mounted display. Angular movement of the user's head relative to the user's body may be measured by a downward facing camera, which may, for example, capture an image of a portion of the user's body. The image of the portion of the user's body is then analyzed to detect relative changes that indicate movement of the user's head relative to the user's body. Alternatively, two inertial measurement units may be used, one attached to the head mounted display and one attached to the user's body, with differential measurements used to determine the movement of the user's head relative to the user's body. After eye movement above the threshold has been detected, and movement of the user's head above a threshold has been detected following the eye movement, the display image begins to move laterally within the display field of view. The speed of the lateral movement of the display image corresponds to the detected head movement that follows, and the direction is opposite to the head movement. The lateral movement of the display image continues until the edge of the display image reaches the center of the display field of view, or until the eyes are detected to be viewing the center of the display field of view (or within a predetermined threshold of the center), indicating that the peripheral portion of the image that the user wants to view has been reached.
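The trigger logic described in the last few paragraphs can be sketched as follows; this is our illustrative reading of the method, not the patent's implementation, and the thresholds and class name are assumptions.

```python
class WideFovImageShifter:
    """Shifts the display image opposite to head movement after an eye
    movement beyond a threshold is followed by head movement in the same
    direction; the shift is clamped so the image edge stops at the center
    of the display field of view."""

    EYE_THRESHOLD_DEG = 15.0    # assumed comfort limit, from the text
    RECENTER_DEG = 2.0          # assumed "eyes back near center" band

    def __init__(self, half_image_deg):
        self.max_shift_deg = half_image_deg   # image edge reaches center
        self.shift_deg = 0.0
        self.armed_dir = 0                    # -1, 0, or +1

    def update(self, eye_angle_deg, head_delta_deg):
        # Arm on a large eye excursion; remember its direction.
        if abs(eye_angle_deg) > self.EYE_THRESHOLD_DEG:
            self.armed_dir = 1 if eye_angle_deg > 0 else -1
        # Subsequent head movement in the same direction drives the image
        # the opposite way, at the speed of the head movement.
        if self.armed_dir != 0 and head_delta_deg * self.armed_dir > 0:
            self.shift_deg -= head_delta_deg
            self.shift_deg = max(-self.max_shift_deg,
                                 min(self.max_shift_deg, self.shift_deg))
        # Disarm once the eyes are detected back near the center.
        if abs(eye_angle_deg) < self.RECENTER_DEG:
            self.armed_dir = 0
        return self.shift_deg
```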
FIGS. 180 and 181 are diagrams showing how an image shifts within the display field of view when a user moves their head. Note that the user's head is shown to the side of the image because the image is actually presented to the user inside the head mounted display. FIG. 180 shows an image 18055 centered within the display field of view and a user head 18050 pointing straight ahead. FIG. 181 shows the user's head 18150 pointing to the side; as a result, the image 18155 is shifted within the display field of view in the direction opposite to the movement of the user's head, leaving a blank portion 18130 in which no image content is displayed. In fig. 182, the blank portion 18230 of the display field of view, from which the image has been shifted away, is displayed as a dark region to enable the user to see through to the surrounding environment in the blank portion. However, in different use cases, it may be advantageous to display the blank portion as neutral gray or colored.
In an embodiment, a user of a wide field of view head mounted display is provided with an option to select the size (e.g., angular size) of the displayed image associated with a different image or application. The display image is then resized to provide the selected angular image size for display to the user. For example, in a movie viewing mode, the user may select a display image approximately 30 degrees in size, which mimics the experience of sitting in the back row of a movie theater, where it is comfortable for the user to view the entire display image with eye movement alone. Alternatively, the user may select a display image 50 degrees in size, which mimics the experience of sitting in the front row of a movie theater, where the display image needs to be viewed with a combination of eye movement and head movement, along with the image shifting described previously herein, to comfortably view the entire display image. FIG. 183 shows an illustration of a wide display field of view 18360 where a user may select to display a smaller field of view 18365 for a given image or application (e.g., a game) to improve the personal viewing experience. The smaller field of view 18365 enables the user to view the image or application without having to move their eyes excessively to see the entire image.
In an embodiment, the display format is selected to have a narrow vertical field of view relative to the horizontal field of view, so that the thickness of the optics, as measured across the lower optics, can be reduced. Due to the angular orientation of the angled beam splitter 16950 in the lower optics, the vertical field of view in the displayed image is directly proportional to the thickness of the optical assembly. For a given display field of view measured along the diagonal, reducing the vertical field of view, and thereby increasing the format ratio of the display image, enables the thickness of the optical assembly to be reduced. For example, for a 16:9 format image with a 50 degree diagonal field of view, the thickness 18410 of the optical assembly 18415 may be approximately 17mm, as illustratively shown in FIG. 184. If the format of the display image is increased to 30:9 with a 50 degree diagonal field of view, the thickness 18510 of the optical assembly 18515 may be approximately 10mm, as illustratively shown in FIG. 185. This represents an approximately 40% reduction in the thickness of the optical assembly, provided by the change to a higher format ratio. FIG. 186 shows a 30:9 format field of view 18620 and a 22:9 format field of view 18625, where both fields of view have the same vertical field of view and different horizontal fields of view. By using a higher format ratio, a wide field of view may be displayed in a relatively thin head mounted display for use with augmented display images, improving the form factor of the head mounted display. The high format ratio may be obtained by using a high format ratio emissive display, or by using a normal format ratio emissive display (e.g., 4:3, 16:9, or 22:9) and then using only a portion of the upper or lower regions of the emissive display. For example, the head mounted display may include a 1080p emissive display having 1920 x 1080 pixels and may display a 30:9 image by using 1920 x 576 pixels on the emissive display. A thin optical assembly is then provided that can display an image comprising only 576 pixels in the vertical direction, but up to 1920 pixels horizontally. Where an image having a different format is to be displayed, the image is resized to fit the available display space (e.g., a 16:9 format image may be displayed as a 1024 x 576 pixel image and a 22:9 image may be displayed as a 1408 x 576 pixel image, or any other size consistent with the number of pixels available horizontally or vertically and the format of the image being displayed). In a preferred embodiment, the display field of view has a format ratio greater than 22:9. With a format ratio such as 30:9, for example, the center portion may be used to display a 22:9 image such as a movie, while the area 18627 outside the 22:9 display field of view may be used to display auxiliary information that need not be readily viewable or presented in high resolution, such as battery life, time, temperature, directional heading, or whether new email or text messages are available.
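The resizing arithmetic in this example can be sketched as follows; the function name is ours, and the panel band is the 1920 x 576 pixel region described above.

```python
PANEL_W, PANEL_H = 1920, 576   # 30:9 band of a 1080p emissive panel

def fit_image(aspect_w, aspect_h):
    """Scale an image of the given format to fill the vertical band,
    clamping to the panel width for formats wider than 30:9."""
    w = round(PANEL_H * aspect_w / aspect_h)
    if w <= PANEL_W:
        return w, PANEL_H
    return PANEL_W, round(PANEL_W * aspect_h / aspect_w)

print(fit_image(16, 9))   # (1024, 576), as in the text
print(fit_image(22, 9))   # (1408, 576), as in the text
print(fit_image(30, 9))   # (1920, 576), the full band
```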
In another embodiment, the central sharp region of the display may be used to display a different type of image than the outer peripheral region. For example, the central sharp region may be used to display a 22:9 or 16:9 movie image that is resized to fit the number of pixels contained in the central sharp region. The outer peripheral region may then be used like a second display, on which other types of information are displayed that can be viewed at lower resolution for short periods of time, such that the uncomfortable gaze position required to view them is acceptable.
In yet another embodiment, the information displayed in the outer peripheral region is rendered differently than in the central sharp region. This may include using larger font sizes, higher contrast settings, or different colors to make the information presented in the outer peripheral region easier to view.
In further embodiments, the display image is adjusted corresponding to the change in focus distance. To enable measurement of the focus distance, sensors may be provided to measure the distance between the optical elements that are moved to change the focus distance, such as between the image source 169910 and the lens element 1697, or between the lens element 1697 and the lower optics. The display image may be digitally adjusted to be larger or smaller to compensate for the magnification change that can occur if the light rays between the optical elements are not telecentric. The display image may also be digitally adjusted for distortion that can occur when the optical elements are moved to change the focus distance. A change in focus distance may be associated with an augmented reality mode of operation, such as a mode in which the focus distance needs to be at a particular distance (such as, for example, arm's length) to allow the user to interact with a displayed augmented reality object.
In yet another embodiment, the optical assembly is designed to provide telecentric light to an optical surface that includes a triple notch mirror treatment, to reduce the angular range of the incident light and thereby improve the performance of the triple notch mirror. The telecentric light may be incident on the angled beam splitter or on the curved partially reflecting mirror. This embodiment can be particularly important when the head-mounted display provides a wide field of view, because a triple notch mirror is designed for use at a particular angle with a limited angular distribution around that angle. By providing telecentric light to the triple notch mirror, color uniformity and brightness uniformity can be improved. In a further refinement, the wide-angle display image may be rendered to compensate for radial color shifts and luminance roll-off by radially increasing the digital luminance (e.g., radially increasing code values and the associated illumination in the image) and radially changing the color balance (e.g., the color rendering) in the image. In this way, the user is provided with an image that is perceived as having uniform brightness and uniform color, even though the angular limitations of the triple notch mirror treatment affect the displayed image over a wide display field of view.
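As an illustration of the digital compensation step, the sketch below boosts code values radially against an assumed quadratic transmission roll-off; a real system would calibrate the roll-off (and a per-channel color version of it) per device, and none of these names come from the patent.

```python
import numpy as np

def compensate_rolloff(image, edge_transmission=0.7):
    """image: float array in [0, 1], shape (H, W) or (H, W, 3).
    Divides by an assumed radial transmission model so the displayed
    image is perceived with uniform brightness."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - w / 2.0, yy - h / 2.0) / np.hypot(w / 2.0, h / 2.0)
    transmission = 1.0 - (1.0 - edge_transmission) * r ** 2  # assumed model
    gain = 1.0 / transmission
    if image.ndim == 3:
        gain = gain[..., None]
    return np.clip(image * gain, 0.0, 1.0)
```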
Another aspect of the invention relates to a display panel included in a head-worn computer that has the ability to present an image wider than is required for the use case, such that the edges of the panel can be left blank to allow for shifts in the display content. The display content can then be fully rendered even when shifted, because the content shifts into the normally blank areas of the panel. For example, the panel may be selected such that it can produce a 50 degree field of view while the digital content consumes only a 45 degree field of view, so that the entire image can still be viewed if it is shifted by up to 2.5 degrees in either direction. As explained elsewhere herein, in a wide field of view head mounted display system, the content may need to be shifted if the user attempts to look at the far edge of the content. In such a case, the system can start with a reserved blank area on the edge(s) of the field of view to accommodate the entire content shift. In other embodiments, the shift into the reserved edge(s) may be used when compensating the content for the focal plane, convergence, etc.
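A minimal sketch of that reservation, assuming the 50 degree panel and 45 degree content example above (names are ours):

```python
def clamp_shift_deg(shift_deg, panel_fov_deg=50.0, content_fov_deg=45.0):
    """Clamp a requested content shift to the blank margin reserved on each
    side of the panel, so shifted content is never cropped."""
    margin = (panel_fov_deg - content_fov_deg) / 2.0   # 2.5 degrees here
    return max(-margin, min(margin, shift_deg))

print(clamp_shift_deg(1.0))   # 1.0: within the reserved margin
print(clamp_shift_deg(4.0))   # 2.5: clamped at the blank edge
```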
In an embodiment, the content presented in the field of view is of a type intended to occupy the full field of view (such as when watching a movie). When a movie is presented, it is intended to occupy as much of the field of view as can be comfortably viewed. In an embodiment, it is this type of full display content that is presented within the middle section of the field of view, with edges that are intentionally left blank. This arrangement allows the full display content to be shifted into the unused edges for the adaptations described herein.
In an embodiment, the wide-field-of-view display is used to enable the display image to be laterally shifted, by a digital shift of the image on the image source 110, to change the convergence distance associated with the viewing of stereoscopic images, and thereby change the perceived distance to the display image. The convergence distance may be changed corresponding to the type of image being displayed, the type of use case associated with the augmented reality object being displayed, or in response to a detected characteristic of the user's eyes (e.g., as may be detected with an eye camera in the head mounted display), such as the convergence distance or focus distance of the user's eyes. FIG. 187 shows an illustration of a user's eyes 18721 looking through display fields of view 18723. In this case, the user's eyes 18721 have parallel lines of sight 18725, such that the convergence point associated with the stereoscopic images is approximately at infinity. The central portion of each display field of view 18723 is then used to display an image (shown as a dark area in the display field of view 18723) that does not occupy the entire display field of view 18723. In this way, from the convergence cue associated with the convergence distance, the user perceives a stereoscopic image, comprising overlapping left and right images, presented at approximately infinity. Preferably, the focus distance is the same as the convergence distance, such that the focus cues are consistent with the convergence cues, thereby providing the user with stereoscopic images having consistent stereoscopic cues for a more comfortable viewing experience. Importantly, in fig. 187, portions of each display field of view 18723 to the sides of the display image are unused, because the display image does not occupy the entire horizontal angular range of the display field of view 18723. Accordingly, it is possible to shift the left and right images laterally within the display fields of view 18723 (as shown in fig. 188) to provide a closer convergence distance. The user's eyes 18821 are then shown in a slightly rotated position, such that the lines of sight 18825 are angled toward each other when viewed through the centers of the left and right display images. This geometry is created by shifting the left and right display images toward each other within their respective display fields of view 18823. FIG. 189 shows an illustration of the left and right display images (18911 and 18910) as they would be presented within the display fields of view 18723 for the case when the convergence distance is approximately at infinity. FIG. 190 shows an illustration of the left and right display images (19012 and 19014) as they would be presented within the display fields of view 18823 for the case when the convergence distance is closer. Thus, providing a narrow vertical field of view along with the wide display fields of view 18723 and 18823 provides the additional benefit of convergence distance adjustment by digitally shifting the display images within the display field of view. Convergence distance adjustment may be used to provide augmented reality images that are perceived as being at different distances, as required for a given application or desired viewing experience.
This feature is particularly useful when the display image has a lower format ratio than the display field of view (e.g., the display image has a 22:9 format and the display field of view has a 30:9 format), so that portions of the display field of view are not used when displaying the left and right images. In an example, a 16:9 format stereoscopic image is displayed in an optic that provides a 25:9 format display field of view, where the stereoscopic image is displayed without cropping such that a vertical angular range of the displayed stereoscopic image matches a vertical angular range of the display field of view of the optic. To change the convergence distance from 8 feet to 2 feet requires that the left and right display images be digitally shifted toward each other by approximately 10% of the horizontal angular range of each of the display images (e.g., for a 1280 x 720 pixel image, the digital shift totals 146 pixels). Such example changes in convergence distance are well suited to changing between imaging use cases, such as for changing from viewing a movie with an image perceived at 8 feet to interacting with an augmented reality object that requires the image to be perceived within the reach of the user.
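The shift arithmetic in this example can be reproduced approximately with small-angle stereo geometry; the interpupillary distance and pixels-per-degree values below are assumptions (chosen so the result lands near the ~146 pixel figure in the text), and the function name is ours.

```python
import math

IPD_MM = 63.0              # assumed interpupillary distance
PIXELS_PER_DEGREE = 32.0   # assumed: 1280 pixels spanning ~40 degrees

def convergence_shift_pixels(from_distance_m, to_distance_m):
    """Total digital shift (left plus right image, moving toward each other)
    needed to move the convergence point between the two distances."""
    def half_angle_deg(d_m):
        return math.degrees(math.atan((IPD_MM / 2000.0) / d_m))
    delta_deg = half_angle_deg(to_distance_m) - half_angle_deg(from_distance_m)
    return 2.0 * delta_deg * PIXELS_PER_DEGREE

FT = 0.3048
print(round(convergence_shift_pixels(8 * FT, 2 * FT)))  # ~142 pixels
```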
In yet another embodiment, a wide field-of-view display is used to enable the display images to be laterally shifted, by digital shifting of the images on the image source 110, to adjust the separation of the display images for the interpupillary distance of the user. FIG. 191 shows a diagram of user eyes 19121 looking through display fields of view 19123, where the user eyes 19121 have a larger interpupillary distance than the user eyes 18721 shown in fig. 187. In both fig. 187 and fig. 191, the user eyes 18721 and 19121 have parallel lines of sight 18725 and 19125, respectively, such that the convergence point associated with the stereoscopic images is approximately at infinity. The central portion of each display field of view 18723 and 19123 is used to display an image (shown as a dark area within the display fields of view 18723 and 19123) that does not occupy the entire display field of view. However, since the user eyes 19121 have a wider interpupillary distance in this case, the left and right images are shifted laterally within the display fields of view 19123 (as shown in fig. 191), by digitally shifting the images on the image source, to position the images farther apart as seen by the user within the display fields of view 19123 and thereby provide the desired lines of sight 19125. The user eyes 19121 are shown in parallel positions, such that the lines of sight 19125 are parallel when viewed through the centers of the left and right display images. This geometry is generated by shifting the left and right display images away from each other within their respective display fields of view 19123. FIG. 192 shows a depiction of the left and right display images (19212 and 19214) as they would be presented within the display fields of view 19123, or as seen on the image source 110, for the case when the convergence distance is approximately at infinity and the user eyes 19121 have a large interpupillary distance. Still further, providing a wide horizontal display field of view 19123 along with a narrow vertical field of view provides the added benefit of a digital method of adjusting for interpupillary distance by digitally shifting the display images within the display field of view.
In a preferred embodiment, the portion of the display field of view reserved for lateral shifting of the images amounts to 10% or more of the display field of view. While these portions of the display field of view are not used to display the images themselves, they are used to position the images for the purpose of providing a desired convergence distance or adjusting for the interpupillary distance of the displayed left and right images. As can be seen in figs. 189, 190 and 192, when the display images are shifted laterally within the display field of view by digitally shifting the images on the image source, the blank or unused portions to the left and right sides of each display image change in relative size while their total remains constant. In further preferred embodiments, the total amount of blank or unused portions of the display field of view amounts to 10% or more of the display field of view.
Although embodiments of the HWC have been described in language specific to features, systems, computer processes, and/or methods, the appended claims are not necessarily limited to the specific features, systems, computer processes, and/or methods described. Rather, the specific features, systems, computer processes, and/or methods are disclosed as non-limiting example embodiments of the HWC. All documents referred to herein are hereby incorporated by reference.
Additional statements of the disclosure
In some embodiments, the systems and methods may be described in the following clauses or otherwise described herein.
Clause group A
Clause 1. a method for controlling image presentation in a see-through head-mounted display system with improved viewability for a user, comprising: determining a brightness of a see-through view of a surrounding environment; determining a brightness metric for an image to be displayed; and adjusting an attribute of the image based on the brightness metric of the image and the brightness of the see-through view of the surrounding environment, such that the image and the see-through view are presented to the user at a predetermined combined brightness.
Clause 2. the method of clause 1, further comprising determining a light responsive operating region of the user's eye, and wherein the step of adjusting the attribute of the image is further based on the operating region.
Clause 3. the method of clause 2, wherein the step of adjusting the attribute of the image improves the see-through viewability for the user.
Clause 4. the method of clause 2, wherein the step of adjusting the attribute of the image improves the user's view of the image.
Clause 5. the method of clause 2, wherein the operating zone of the human eye is selected from one of photopic vision, transitional vision, and scotopic vision.
Clause 6. the method of clause 1, wherein the attribute comprises changing contrast in the image.
Clause 7. the method of clause 1, wherein the attribute comprises changing a color of an object in the image.
Clause 8. the method of clause 1, wherein the attribute comprises changing a color balance in the image.
Clause 9. the method of clause 1, wherein the attribute comprises changing a font size of text in the image.
Clause 10. the method of clause 1, wherein the attribute comprises changing the image to a monochromatic image.
Clause 11. the method of clause 10, wherein the monochrome image is a red image.
Clause 12. the method of clause 10, wherein the monochromatic image is a green image.
Clause 13. the method of clause 1, wherein the head-mounted display system further comprises a camera for capturing an image of the surrounding environment, the image comprising at least a portion of the see-through view of the surrounding environment.
Clause 14. the method of clause 13, further comprising the steps of: analyzing the captured image to identify attributes of the surrounding environment; and wherein the step of adjusting the property of the image comprises adjusting the property further based on the identified property of the surrounding environment.
Clause 15. the method of clause 14, wherein the attribute of the surrounding environment is a type of the object.
Clause 16. the method of clause 14, wherein the attribute of the ambient environment is a range of colors.
Clause 17. the method of clause 14, wherein the attribute of the surrounding environment is a local range of colors within a portion of the see-through view, and the adjusted attribute of the image is a local color within a portion of the displayed image.
Clause 18. the method of clause 1, wherein the head-mounted display further comprises a camera for capturing an image of the user's eyes to determine the direction the user is looking; and wherein the step of determining the brightness of the see-through view of the surroundings involves determining the brightness of the see-through view of the surroundings in said direction.
Clause 19. the method of clause 18, wherein the image is adjusted differently in a portion of the image corresponding to the direction.
Clause 20. the method of clause 1, wherein the image to be displayed includes a user interface controlled by eye tracking, and the image is adjusted to be brighter than the see-through view to reduce jittery eye movement.
Clause 21. the method of clause 20, wherein the image is adjusted to be at least 2 times as bright as the see-through view.
Clause 22. a method for providing improved alignment of a display image with a see-through view of a surrounding environment in a head mounted display that provides a display image overlaid onto a see-through view of a surrounding environment, wherein the head mounted display includes an eye camera for imaging a user's eye and an externally facing camera for imaging the surrounding environment, the method comprising: capturing an image of the surrounding environment with the externally facing camera; displaying the captured image of the environment to the user in the head mounted display so that the user simultaneously sees the displayed image of the surrounding environment overlaid onto the see-through view of the surrounding environment; using the eye camera to collect control inputs from the user to adjust the positioning of the displayed image to improve alignment of the displayed image of the surrounding environment with the see-through view of the surrounding environment; and using the adjusted positioning of the displayed image to improve alignment of other displayed images with the see-through view of the surrounding environment.
Clause 23. the method of clause 22, wherein the other display image comprises an augmented reality image.
Clause group A1
Clause 1. a method for providing improved alignment of a display image with a see-through view of a surrounding environment in a head mounted display that provides a display image overlaid onto a see-through view of a surrounding environment, wherein the head mounted display includes an eye camera for imaging a user's eye and an externally facing camera for imaging the surrounding environment, the method comprising: capturing an image of the surrounding environment with the externally facing camera; displaying the captured image of the environment to the user in the head mounted display so that the user simultaneously sees the displayed image of the surrounding environment overlaid onto the see-through view of the surrounding environment; using the eye camera to collect control inputs from the user to adjust the positioning of the displayed image to improve alignment of the displayed image of the surrounding environment with the see-through view of the surrounding environment; and using the adjusted positioning of the displayed image to improve alignment of other displayed images with the see-through view of the surrounding environment.
Clause 2. the method of clause 1, wherein the other display image comprises an augmented reality image.
Clause group B
Clause 1. a head mounted display with multiple folding optics providing an improved displayed image overlaid onto a see-through view of a surrounding environment, the head mounted display comprising: a solid prism having an angled planar surface; a self-emissive display providing image light associated with a displayed image; and a flat reflective plate bonded to the angled planar surface of the solid prism, which reflects the image light associated with the displayed image.
Clause 2. the head-mounted display according to clause 1, wherein the self-emissive display is an OLED.
Clause 3. the head mounted display of clause 1, wherein the self-emissive display is a back-lit LCD.
Clause 4. the head mounted display of clause 1, wherein the flat reflective plate is bonded to the solid prism with an index-matching adhesive.
Clause 5. the head-mounted display of clause 4, wherein the index-matching adhesive has an index of refraction within +/-0.05 of the index of refraction of the solid prism.
Clause group B1
Clause 1. a head mounted display with multiple folding optics providing an improved displayed image overlaid onto a see-through view of a surrounding environment, the head mounted display comprising: a solid prism having an angled planar surface; a flat partial reflector plate bonded to the planar surface, which transmits a portion of the illumination light and reflects a portion of the image light associated with the display image; a reflective image source that reflects illumination light to provide image light; and a backlight attached to the angled planar surface, including a prismatic film that directs a portion of the illumination light toward the reflective image source.
Clause 2. the head mounted display of clause 1, wherein the reflective image source is LCOS.
Clause 3. the head mounted display of clause 1, wherein the flat partially reflective plate comprises a reflective polarizer.
Clause 4. the head mounted display of clause 1, wherein the prismatic film is bonded to the flat partially reflective plate.
Clause 5. the head mounted display of clause 1, wherein the prismatic film is oriented with the prisms facing away from the reflective image source and facing toward the light source for the illumination light.
Clause 6. the head mounted display of clause 5, wherein the prismatic film separates the illumination light into two cones, and only one cone illuminates the reflective image source.
Clause 7. the head mounted display of clause 6, further comprising an analyzing polarizer that absorbs a cone of illuminating light that does not illuminate the reflective image source.
Clause 8. the head-mounted display of clause 7, wherein the analyzing polarizer further absorbs image light associated with dark portions in the displayed image.
Clause 9. the head mounted display of clause 1, wherein the flat partially reflective plate is bonded to the planar surface with a transparent adhesive that is index matched to the solid prism.
Clause 10. the head-mounted display of clause 9, wherein the refractive index of the transparent adhesive is within +/-0.05 of the refractive index of the solid prism.
Clause group B2
Clause 1. a method for manufacturing a solid prism having a flat reflective surface for a head-mounted display, comprising: molding a solid prism of optical material having one or more surfaces with optical power and at least one planar surface; after molding, applying to the planar surface a drop of a transparent UV-curable adhesive having a refractive index that matches the refractive index of the optical material; applying a flat reflective plate to the drop of adhesive; allowing the adhesive to wick across the interface between the planar surface and the flat reflective plate, without applying pressure to the flat reflective plate so as to avoid deforming it, until the entire interface is covered by the adhesive; and applying UV light to the interface to cure the adhesive and form a solid prism having a flat reflective surface.
Clause 2. the method of clause 1, wherein the flat reflective plate is applied such that the reflective side of the flat reflective plate contacts the adhesive, such that the reflective side is inside the solid prism having a flat reflective surface.
Clause 3. the method of clause 1, wherein the flat reflective plate is a mirror.
Clause 4. the method of clause 1, wherein the flat reflective plate is a partial mirror.
Clause 5. the method of clause 1, wherein the flat reflective plate is a reflective polarizer.
Clause 6. the method of clause 5, wherein the reflective polarizer is in contact with the adhesive, such that the reflective polarizer is inside the solid prism having a flat reflective surface.
Clause 7. the method of clause 1, wherein the adhesive fills sink marks on the planar surface.
Clause group B3
Clause 1. a head mounted display for displaying a displayed image overlaid onto a see-through view of a surrounding environment, comprising multi-fold optics, the head mounted display comprising: a solid prism having at least one surface with refractive power and a planar surface providing a first internal fold of the optical axis of the image light associated with the displayed image; a combiner providing a second fold of the optical axis associated with the image light; and a flat plate reflector bonded to the planar surface of the solid prism to provide a flat reflective surface to the solid prism.
Clause 2. the head mounted display of clause 1, wherein the combiner is a flat partial mirror.
Clause 3. the head mounted display of clause 1, wherein the multi-fold optics further comprise a backlit display or a self-emissive image source providing the image light, and the flat plate reflector is a fully reflective mirror.
Clause 4. the head mounted display of clause 1, wherein the multiple fold optics further comprises a reflective image source and the illumination light is provided by a backlight positioned behind a flat-plate reflector, wherein the flat-plate reflector is a partially reflective mirror and the illumination light is transmitted through the flat-plate reflector.
Clause 5. the head mounted display of clause 1, wherein the flat plate reflector is bonded to the planar surface of the solid prism with a UV-cured transparent adhesive that is index matched to the solid prism.
Clause 6. the head mounted display of clause 5, wherein the adhesive fills sink marks on the planar surface.
Clause 7. the head-mounted display of clause 5, wherein the refractive index of the UV-cured transparent adhesive is within +/-0.05 of the refractive index of the solid prism.
Clause group B4
Clause 1. a head mounted display with multiple folding optics for providing a displayed image overlaid onto a see-through view of a surrounding environment, the head mounted display comprising: a solid prism providing a first internal fold of image light associated with a display image; a combiner providing a second fold of image light while providing a see-through view of the surrounding environment; and an optical element associated with the solid prism that provides a field of view of reflected light from the user's eye to the eye imaging camera.
Clause 2. the head mounted display of clause 1, wherein the optical element provides a field of view of light reflected from the user's eye that is multiply folded.
Clause 3. the head mounted display of clause 1, wherein the optical element provides a field of view of singly folded light reflected from a user's eye.
Clause 4. the head mounted display of clause 1, wherein the optical element and the solid prism are molded together.
Clause 5. the head mounted display of clause 1, wherein the optical element is attached to the solid prism after the solid prism has been molded.
Clause group B5
Clause 1. a head mounted display with folding optics for providing a displayed image overlaid onto a see-through view of a surrounding environment, the head mounted display comprising: a combiner that provides a see-through view of the surrounding environment and simultaneously redirects the image light towards the user's eyes to provide an overlaid display image; a light source that illuminates a user's eye; a camera that captures an image of a user's eye including light reflected by the user's eye when the user views a display image overlaid onto the see-through view; and wherein the light source provides infrared light and the camera is sensitive to the infrared light.
Clause 2. the head mounted display of clause 1, wherein a portion of the combiner comprises a heat reflective mirror coating that is reflective to infrared light and transmissive to visible light.
Clause 3. the head mounted display of clause 2, wherein the camera is positioned to one side of the combiner.
Clause 4. the head mounted display of clause 3, wherein the combiner comprises a partially reflective mirror coating that reflects a portion of the image light.
Clause 5. the head mounted display of clause 3, wherein the combiner comprises a notch mirror coating that reflects a portion of the image light.
Clause 6. the head mounted display of clause 3, wherein the combiner comprises a holographic optical element that reflects a portion of the image light.
Clause 7. the head mounted display of clause 3, wherein the light source is positioned to one side of the combiner.
Clause 8. the head mounted display of clause 1, wherein the combiner comprises a waveguide that transmits image light from an image source to a display area.
Clause 9. the head mounted display of clause 8, wherein the waveguide comprises an angled partial reflector that redirects a portion of the image light for viewing by a user and also redirects a portion of the light reflected by the user's eye for capture by a camera.
Clause 10. the head mounted display of clause 9, wherein the image light and the light reflected by the user's eye are coaxial.
Clause 11. the head mounted display of clause 10, wherein the waveguide comprises: a first partial reflector that redirects a portion of the image light along the waveguide while transmitting a portion of the light reflected by the user's eye to the camera; and a second partial reflector that redirects a portion of the image light toward the user's eye and also redirects a portion of the light reflected by the user's eye along the waveguide.
Clause 12. the head mounted display of clause 11, wherein light from the light source is directed along the waveguide and a portion of that light is redirected by the second partial reflector toward the user's eye.
Clause 13. the head mounted display of clause 3, wherein the combiner comprises a waveguide that transmits image light from an image source to a display area.
Clause 14. the head mounted display of clause 13, wherein illumination light is transmitted from a light source through the waveguide to illuminate the user's eye.
Clause 15. the head mounted display of clause 14, wherein the combiner comprises a holographic optical element that redirects image light toward the user's eye.
Clause 16. the head mounted display of clause 2, wherein the coating reflects greater than 80% of infrared light and transmits greater than 50% of visible light.
Clause group B6
Clause 1. a compact head mounted display with multiple folding optics providing a displayed image overlaid onto a see-through view of a surrounding environment, the compact head mounted display comprising: a solid prism of a first material having an angled planar surface that folds an optical axis and one or more surfaces that provide optical power; one or more additional lens elements having optical power and made of a second material different from that of the solid prism; an image source providing image light associated with the displayed image; a combiner that folds the optical axis and directs the image light toward a user's eye; and wherein the multiple fold optics provide a more compact head mounted display, and the first material and the second material are selected to reduce lateral chromatic aberration in the displayed image.
Clause group B7
Clause 1. a compact head-mounted display having display optics with a multiply folded optical axis, providing a displayed image overlaid onto a see-through view of a surrounding environment, the compact head-mounted display comprising: an image source providing image light associated with the displayed image; an upper optics module comprising a plurality of optical elements providing optical power and a first fold of the optical axis; a lower optics module comprising a combiner that provides a second fold of the optical axis to direct the image light toward the user's eye while also allowing light from the surrounding environment to pass through to the user's eye, such that the displayed image is seen by the user as superimposed onto the see-through view of the surrounding environment; and wherein the upper optics module includes a solid prism providing optical power and a mirror folding the optical axis and directing the image light toward the lower optics module.
Clause 2. the compact head mounted display of clause 1, wherein the image source is a self-emitting microdisplay.
Clause 3. the compact head mounted display of clause 1, wherein the image source is an OLED microdisplay.
Clause 4. the compact head mounted display of clause 1, wherein the image source is a backlit LCD microdisplay.
Clause 5. the compact head-mounted display of clause 3, wherein the mirror of the solid prism is a partial mirror and a camera for capturing an image of the user's eye is positioned behind the partial mirror.
Clause 6. the compact head-mounted display of clause 5, wherein the partial mirror is a cold mirror that reflects visible light and transmits infrared light, and the camera captures an infrared image of the user's eye.
Clause 7. the compact head mounted display of clause 1, wherein the image source is a reflective microdisplay.
Clause 8. the compact head mounted display of clause 7, wherein the image source is an LCOS or FLCOS microdisplay.
Clause 9. the compact head mounted display of clause 7, wherein the image source is a DLP.
Clause 10. the compact head-mounted display of clause 8, wherein the mirror of the solid prism is a partial mirror having a backlight positioned behind the partial mirror to provide illumination light to the image source.
Clause 11. the compact head-mounted display of clause 10, wherein the partial mirror is a reflective polarizer that transmits a portion of the illumination light, and the polarization state of the illumination light changes when reflected by the image source to provide image light, such that the image light is reflected by the partial mirror.
Clause 12. the compact head-mounted display of clause 2, wherein the material of the solid prism has a different index of refraction than the material of the other optical elements in the upper optics module.
Clause 13. the compact head mounted display of clause 12, wherein the materials and optical powers associated with the plurality of optical elements of the upper optics module are selected to reduce lateral chromatic aberration in the displayed image provided to the user's eye.
Clause 14. the compact head-mounted display of clause 13, wherein the material of the solid prism has a higher index of refraction than the material of the other optical elements in the upper optics module.
Clause 15. the compact head-mounted display of clause 14, wherein the optical power of the solid prism is selected to reduce the overall size of the upper optics module.
Clause 16. the compact head-mounted display of clause 2, wherein the upper optics module further comprises a field lens adjacent to the image source and a power lens adjacent to the lower optics module.
Clause 17. the compact head-mounted display of clause 16, wherein the fold angle associated with the solid prism is selected to reduce the overall size of the upper optics module.
Clause 18. the compact head-mounted display of clause 16, wherein the solid prism is selected to reduce an overall height of the upper optics module.
Clause group C
Clause 1. a head-mounted see-through display, comprising: a see-through optical element mounted such that it is positioned in front of the user's eyes when the head-mounted see-through display is worn by the user; a processor adapted to present digital content in a field of view on the see-through optical element, wherein the digital content has a position within the field of view that depends on a position in the surrounding environment; and the processor is further adapted to modify the appearance of the content as it approaches the edges of the field of view, such that the content appears to disappear as it approaches the edges of the field of view.
Clause 2. the head mounted display of clause 1, wherein the processor comprises a display driver.
Clause 3. the head mounted display of clause 1, wherein the processor comprises an application processor.
Clause 4. the head mounted display of clause 1, wherein the appearance modification is a change in brightness of the content.
Clause 5. the head mounted display of clause 1, wherein the appearance modification is a change in contrast of the content.
Clause 6. the head mounted display of clause 1, wherein the appearance modification is a change in content sharpness.
Clause 7. the head mounted display of clause 1, wherein the appearance modification is a change in resolution of the content.
Clause 8. the head mounted display of clause 1, wherein the processor is further adapted to generate a secondary field of view in which the user views the presented digital content and through which the user sees the surrounding environment, the processor being further adapted to transition the content from the field of view to the secondary field of view.
Clause 9. the head mounted display of clause 8, wherein the appearance of the content in the secondary field of view is diminished compared to the appearance of the content in the field of view.
Clause 10. the head mounted display of clause 8, wherein the secondary field of view has a lower resolution than a resolution of the field of view.
Clause 11. the head mounted display of clause 10, wherein the secondary field of view is generated by reflecting image light onto a combiner that directs the image light directly to the user's eye.
Clause 12. the head mounted display of clause 10, wherein the secondary field of view is generated by reflecting image light onto a combiner that directs the image light toward an end partial mirror that reflects the image light to the user's eye.
Clause 13. the head mounted display of clause 8, wherein the secondary field of view is generated by an OLED that projects light onto the combiner.
Clause 14. the head mounted display of clause 8, wherein the secondary field of view is generated by an array of LEDs that project light onto the combiner.
Clause 15. the head mounted display of clause 8, wherein the secondary field of view is generated by an edge-lit LCD that projects light onto the combiner.
Clause 16. the head mounted display of clause 8, wherein the secondary field of view is presented by a see-through panel positioned directly in front of the user's eyes.
Clause 17. the head mounted display of clause 16, wherein the see-through panel is mounted on a combiner.
Clause 18. the head mounted display of clause 16, wherein the see-through panel is mounted vertically.
Clause 19. the head mounted display of clause 16, wherein the see-through panel is an OLED.
Clause 20. the head mounted display of clause 16, wherein the see-through panel is an edge-lit LCD.
Clause 21. the head mounted display of clause 1, wherein the processor is further adapted to predict when content is about to approach an edge of a field of view and base the appearance transition at least in part on the prediction.
Clause 22. the head mounted display of clause 21, wherein the prediction is based, at least in part, on an eye image.
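For illustration, the edge-fade behavior recited in clauses 1 and 4-7 above can be sketched as a scale factor applied to content brightness, contrast, or sharpness as the content nears the field-of-view edge. This is a minimal sketch under assumed numbers; the fade-band width, the linear taper, and the function name are illustrative, not taken from the clauses.

```python
# Hypothetical sketch of the edge fade in clauses 1 and 4-7 (not the
# patent's implementation). Content appearance is scaled down as its
# angular position approaches the edge of the field of view.

def edge_fade_factor(x_deg: float, half_fov_deg: float,
                     fade_band_deg: float = 5.0) -> float:
    """Return a 0..1 appearance scale for content centered x_deg off-axis:
    1.0 well inside the field of view, tapering to 0.0 at its edge."""
    distance_to_edge = half_fov_deg - abs(x_deg)
    if distance_to_edge <= 0.0:
        return 0.0                       # at or beyond the edge: vanished
    if distance_to_edge >= fade_band_deg:
        return 1.0                       # well inside: unmodified appearance
    return distance_to_edge / fade_band_deg  # linear taper within the band

# Example: content 13 degrees off-axis in a +/-15 degree field of view
# is shown at 40% appearance.
print(edge_fade_factor(13.0, 15.0))  # 0.4
```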
Clause group C1
Clause 1. a head-mounted see-through display, comprising: a see-through optical element mounted such that it is positioned in front of the user's eyes when the head-mounted see-through display is worn by the user; a processor adapted to present digital content in a field of view on the see-through optical element, wherein the digital content has a position within the field of view that depends on a position in the surrounding environment; and the processor is further adapted to predict when the digital content is about to shift out of the field of view due to a change in position of the head-mounted see-through display, and to modify an appearance of the content as the content approaches an edge of the field of view, such that the content appears to disappear as the content approaches the edge of the field of view.
Clause 2. the head mounted display of clause 1, wherein the prediction is based on a compass orientation indicating a forward facing direction of the head mounted see-through display.
Clause 3. the head mounted display of clause 1, wherein the prediction is based on a tracked eye movement of the user, wherein the tracked eye movement indicates that the user is about to turn the user's head.
Clause 4. the head mounted display of clause 1, wherein the appearance modification is a change in brightness of the content.
Clause 5. the head mounted display of clause 1, wherein the appearance modification is a change in contrast of the content.
Clause 6. the head mounted display of clause 1, wherein the appearance modification is a change in content sharpness.
Clause 7. the head mounted display of clause 1, wherein the appearance modification is a change in resolution of the content.
Clause 8. the head mounted display of clause 1, wherein the processor is further adapted to generate a secondary field of view in which the user views the presented digital content and through which the user sees the surrounding environment, the processor being further adapted to transition the content from the field of view to the secondary field of view.
Clause 9. the head mounted display of clause 8, wherein the appearance of the content in the secondary field of view is diminished compared to the appearance of the content in the field of view.
Clause 10. the head mounted display of clause 8, wherein the secondary field of view has a lower resolution than a resolution of the field of view.
Clause 11. the head mounted display of clause 10, wherein the secondary field of view is generated by reflecting image light onto a combiner that directs the image light directly to the user's eye.
Clause 12. the head mounted display of clause 10, wherein the secondary field of view is generated by reflecting image light onto a combiner that directs the image light toward an end partial mirror that reflects the image light to the user's eye.
Clause 13. the head mounted display of clause 8, wherein the secondary field of view is generated by an OLED that projects light onto the combiner.
Clause 14. the head mounted display of clause 8, wherein the secondary field of view is generated by an array of LEDs that project light onto the combiner.
Clause 15. the head mounted display of clause 8, wherein the secondary field of view is generated by an edge-lit LCD that projects light onto the combiner.
Clause 16. the head mounted display of clause 8, wherein the secondary field of view is presented by a see-through panel positioned directly in front of the user's eyes.
Clause 17. the head mounted display of clause 16, wherein the panel is mounted on a combiner.
Clause 18. the head mounted display of clause 16, wherein the panel is mounted vertically.
Clause 19. the head mounted display of clause 1, wherein the processor is further adapted to predict when content is about to approach an edge of a field of view and base the appearance transition at least in part on the prediction.
Clause 20. the head mounted display of clause 19, wherein the prediction is based, at least in part, on an eye image.
Clause 21. the head mounted display of clause 16, wherein the see-through panel is an OLED.
Clause 22. the head mounted display of clause 16, wherein the see-through panel is an edge-lit LCD.
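Clauses 1-3 above describe predicting that world-locked content is about to shift out of the field of view from a change in position of the display. The sketch below shows one way such a prediction could be computed from a head yaw rate (e.g., derived from compass orientation or tracked eye movement preceding a head turn); the helper name, sign convention, and constant-rate model are assumptions for illustration.

```python
# Hedged sketch of the prediction in clauses 1-3: estimate when world-locked
# content will cross the field-of-view edge given the current head yaw rate,
# so the fade can begin before the content is clipped.

def predict_time_to_edge(content_az_deg: float,
                         head_yaw_rate_dps: float,
                         half_fov_deg: float):
    """Seconds until content at content_az_deg (relative to the display's
    forward direction) exits a +/-half_fov_deg field of view, or None if
    the head is not rotating the content toward an edge."""
    if head_yaw_rate_dps == 0.0:
        return None
    # A rightward head turn (+) sweeps world-locked content left (-) in view.
    exit_edge = -half_fov_deg if head_yaw_rate_dps > 0 else half_fov_deg
    seconds = (content_az_deg - exit_edge) / head_yaw_rate_dps
    return seconds if seconds > 0 else None

# Example: content 10 degrees right of center, head turning right at
# 40 deg/s, +/-15 degree field of view -> the content reaches the left
# edge in (10 - (-15)) / 40 = 0.625 s.
print(predict_time_to_edge(10.0, 40.0, 15.0))  # 0.625
```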
Clause group C2
Clause 1. a head-mounted see-through display, comprising: a see-through optical element mounted such that it is positioned in front of the user's eyes when the head-mounted see-through display is worn by the user; a processor adapted to present digital content in a primary field of view on the see-through optical element, the presented digital content being viewed by a user in the primary field of view and the surrounding environment being viewed by the user through the primary field of view; the processor is further adapted to present digital content in an extended field of view in which the user views the presented digital content and through which the user views the surrounding environment; the primary field of view has a higher resolution than the extended field of view; and the processor is further adapted to present world-locked digital content in the primary field of view and to transition the presentation of the world-locked digital content to the extended field of view when the head-mounted display changes position such that the world-locked digital content moves out of the primary field of view.
Clause 2. the head mounted display of clause 1, wherein the processor comprises a display driver.
Clause 3. the head mounted display of clause 1, wherein the processor comprises an application processor.
Clause 4. the head mounted display of clause 1, wherein the extended field of view has a resolution that produces substantial blurring of content when compared to content presented in the primary field of view.
Clause 5. the head mounted display of clause 1, wherein the extended field of view is generated by reflecting image light onto a combiner that directs image light directly to a user's eye.
Clause 6. the head mounted display of clause 1, wherein the extended field of view is generated by reflecting image light onto a combiner that directs the image light toward an end partial mirror that reflects the image light to the user's eye.
Clause 7. the head mounted display of clause 1, wherein the extended field of view is generated by an OLED that projects light onto the combiner.
Clause 8. the head mounted display of clause 1, wherein the extended field of view is generated by an array of LEDs that project light onto a combiner.
Clause 9. the head mounted display of clause 1, wherein the extended field of view is generated by an edge-lit LCD that projects light onto the combiner.
Clause 10. the head mounted display of clause 1, wherein the extended field of view is generated by a see-through panel positioned directly in front of the user's eyes.
Clause 11 the head mounted display of clause 10, wherein the panel is mounted on a combiner.
Clause 12. the head mounted display of clause 10, wherein the panel is mounted vertically.
Clause 13. the head mounted display of clause 1, wherein the processor is further adapted to predict when content is about to approach an edge of a field of view and base the appearance transition at least in part on the prediction.
Clause 14. the head mounted display of clause 13, wherein the prediction is based, at least in part, on an eye image.
Clause 15. the head mounted display of clause 10, wherein the see-through panel is an OLED.
Clause 16. the head mounted display of clause 10, wherein the see-through panel is an edge-lit LCD.
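The handoff in clause 1 above, where world-locked content moves from the high-resolution primary field of view to the lower-resolution extended field of view as the display changes position, can be sketched as a selection on the content's angular offset. The field-of-view sizes and names below are assumed for illustration only.

```python
# Minimal sketch of the primary-to-extended handoff in clause 1
# (hypothetical names and field-of-view sizes).

PRIMARY_HALF_FOV_DEG = 15.0    # assumed size of the primary field of view
EXTENDED_HALF_FOV_DEG = 35.0   # assumed size of the extended field of view

def select_display_region(content_az_deg: float) -> str:
    """Choose which field of view should present world-locked content."""
    offset = abs(content_az_deg)
    if offset <= PRIMARY_HALF_FOV_DEG:
        return "primary"       # high-resolution central presentation
    if offset <= EXTENDED_HALF_FOV_DEG:
        return "extended"      # lower-resolution peripheral presentation
    return "none"              # beyond both fields of view

for az in (5.0, 22.0, 50.0):
    print(az, "->", select_display_region(az))
# 5.0 -> primary, 22.0 -> extended, 50.0 -> none
```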
Clause group C3
Clause 1. a head-mounted see-through display, comprising: a see-through optical element mounted such that it is positioned in front of the user's eyes when the head-mounted see-through display is worn by the user; a processor adapted to present digital content in a primary field of view on the see-through optical element, the presented digital content being viewed by a user in the primary field of view and the surrounding environment being viewed by the user through the primary field of view; the processor is adapted to present digital content in an extended field of view in which the user views the presented digital content and through which the user sees the surrounding environment; the primary field of view has a higher resolution than the extended field of view; and the processor is further adapted to present a first portion of the digital content in the primary field of view and a second portion of the digital content in the extended field of view.
Clause 2. the head mounted display of clause 1, wherein when the digital content is too large to fit within the primary field of view, the processor creates a soft transition between the first portion of the digital content in the primary field of view and the second portion of the digital content in the extended field of view such that the digital content does not appear to be abruptly interrupted at the edges of the primary field of view.
Clause 3. the head mounted display of clause 1, wherein the processor is adapted to generate a soft appearance towards the edges of the primary field of view.
Clause 4. the head mounted display of clause 1, wherein the processor modifies how pixels toward an edge of the display render content.
Clause 5. the head mounted display of clause 1, further comprising a display driver that modifies how pixels toward an edge of the head mounted display render content.
Clause 6. the head mounted display of clause 1, wherein the head mounted display has pixels toward an edge of the head mounted display that render content differently than pixels toward a center portion of the head mounted display.
Clause 7. the head mounted display of clause 6, wherein the pixels toward the edge have a smaller gain than the pixels toward the center portion of the head mounted display.
Clause 8. the head mounted display of clause 1, wherein pixels toward the edge of the primary field of view are digitally altered by a content transition algorithm.
Clause 9. the head mounted display of clause 1, wherein the extended field of view is generated by reflecting image light onto a combiner that directs image light directly to a user's eye.
Clause 10. the head mounted display of clause 1, wherein the extended field of view is generated by reflecting image light onto a combiner that directs the image light toward an end partial mirror that reflects the image light to the user's eye.
Clause 11. the head mounted display of clause 1, wherein the extended field of view is generated by an OLED that projects light onto the combiner.
Clause 12. the head mounted display of clause 1, wherein the extended field of view is generated by an array of LEDs that project light onto a combiner.
Clause 13. the head mounted display of clause 1, wherein the extended field of view is generated by an edge-lit LCD that projects light onto the combiner.
Clause 14. the head mounted display of clause 1, wherein the extended field of view is generated by a see-through panel positioned directly in front of the user's eyes.
Clause 15. the head mounted display of clause 14, wherein the panel is mounted on a combiner.
Clause 16. the head mounted display of clause 14, wherein the panel is mounted vertically.
Clause 17. the head mounted display of clause 14, wherein the see-through panel is an OLED.
Clause 18. the head mounted display of clause 14, wherein the see-through panel is an edge-lit LCD.
Clause 19. the head mounted display of clause 1, wherein the processor is further adapted to predict when content is about to approach an edge of a field of view and base the appearance transition at least in part on the prediction.
Clause 20. the head mounted display of clause 19, wherein the prediction is based, at least in part, on an eye image.
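Clauses 4-8 above describe a soft edge in which pixels toward the boundary of the primary field of view render content with reduced gain. A minimal per-column sketch follows; the ramp width and linear profile are assumptions, since the clauses do not specify them.

```python
# Hypothetical sketch of the reduced edge-pixel gain in clauses 4-8:
# gain is 1.0 in the center of the panel and ramps to 0.0 at either edge,
# so content is not abruptly cut off at the primary/extended boundary.

def pixel_gain(col: int, width: int, ramp_px: int = 64) -> float:
    """Gain for a pixel column, ramping over the outermost ramp_px columns."""
    from_edge = min(col, width - 1 - col)   # columns from the nearer edge
    return min(1.0, from_edge / ramp_px)

print([round(pixel_gain(c, 1920), 2) for c in (0, 32, 64, 960)])
# [0.0, 0.5, 1.0, 1.0]
```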
Clause group C4
Clause 1. a head-mounted see-through display, comprising: a see-through optical element mounted such that it is positioned in front of the user's eyes when the head-mounted see-through display is worn by the user; a processor adapted to present digital content in a primary field of view in which a user views the presented digital content and through which the user sees the surrounding environment; the processor is further adapted to present digital content in an extended field of view in which the user views the presented digital content and through which the user views the surrounding environment; the primary field of view has a higher resolution than the extended field of view; the processor is further adapted to present the digital content in the primary field of view and to reduce an appearance of the content as the content approaches an edge of the primary field of view; and the processor is further adapted to reduce the appearance of the content when rendered in the extended field of view.
Clause 2. the head mounted display of clause 1, wherein the processor tapers the appearance of the content in the extended field of view as the content becomes closer to the edge of the extended field of view.
Clause 3. the head mounted display of clause 2, wherein the content is substantially invisible when the content is at the edge of the extended field of view.
Clause 4. the head mounted display of clause 1, wherein the appearance reduction is a reduction in brightness of the content.
Clause 5. the head mounted display of clause 1, wherein the appearance reduction is a reduction in content contrast.
Clause 6. the head mounted display of clause 1, wherein the reduction in appearance is a reduction in content sharpness.
Clause 7. the head mounted display of clause 1, wherein the reduction in appearance is a reduction in resolution of the content.
Clause 8. the head mounted display of clause 1, wherein the extended field of view is generated by reflecting image light onto a combiner that directs image light directly to a user's eye.
Clause 9. the head mounted display of clause 1, wherein the extended field of view is generated by reflecting image light onto a combiner that directs the image light toward an end partial mirror that reflects the image light to the user's eye.
Clause 10. the head mounted display of clause 1, wherein the extended field of view is generated by an OLED that projects light onto the combiner.
Clause 11. the head mounted display of clause 1, wherein the extended field of view is generated by an array of LEDs that project light onto a combiner.
Clause 12. the head mounted display of clause 1, wherein the extended field of view is generated by an edge-lit LCD that projects light onto the combiner.
Clause 13. the head mounted display of clause 1, wherein the extended field of view is generated by a see-through panel positioned directly in front of the user's eyes.
Clause 14. the head mounted display of clause 13, wherein the panel is mounted on a combiner.
Clause 15. the head mounted display of clause 13, wherein the panel is mounted vertically.
Clause 16. the head mounted display of clause 13, wherein the see-through panel is an OLED.
Clause 17. the head mounted display of clause 13, wherein the see-through panel is an edge-lit LCD.
Clause 18. the head mounted display of clause 1, wherein the processor is further adapted to predict when content is about to approach an edge of the primary field of view and base the reduction in appearance at least in part on the prediction.
Clause 19. the head mounted display of clause 18, wherein the prediction is based, at least in part, on an eye image.
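Taken together, clauses 1-3 above describe a piecewise appearance profile: full appearance in the center, a fade toward the primary-field-of-view edge, a reduced appearance in the extended field of view, and a taper to essentially nothing at the extended edge. A sketch under assumed angles and levels (none of the numbers come from the clauses):

```python
# Hypothetical piecewise appearance curve for clauses 1-3; all angles and
# the 0.5 extended-region base level are illustrative assumptions.

def appearance(offset_deg: float,
               primary_half: float = 15.0,
               extended_half: float = 35.0,
               extended_base: float = 0.5) -> float:
    off = abs(offset_deg)
    if off <= primary_half:
        fade_start = primary_half - 5.0          # begin fading 5 deg early
        if off <= fade_start:
            return 1.0
        t = (off - fade_start) / (primary_half - fade_start)
        return 1.0 + t * (extended_base - 1.0)   # taper 1.0 -> base level
    if off <= extended_half:
        t = (off - primary_half) / (extended_half - primary_half)
        return extended_base * (1.0 - t)         # taper base level -> 0.0
    return 0.0

print([round(appearance(d), 2) for d in (0, 12, 15, 25, 35)])
# [1.0, 0.8, 0.5, 0.25, 0.0]
```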
Clause group C5
Clause 1. an optical system of a head-mounted see-through display, comprising: primary image content optics for producing central eye image content; extended image content optics for producing off-center eye image content; a combiner positioned to present an image to a user and through which the user views the surrounding environment; and wherein each of the primary image content optics and the extended image content optics is positioned to project its respective image light to the combiner, which reflects the respective image light to the user's eye.
Clause 2. the optical system of clause 1, wherein the combiner reflects the respective image light directly to the user's eye.
Clause 3. the optical system of clause 1, wherein the combiner indirectly reflects the respective image light to the user's eye.
Clause 4. the optical system of clause 3, wherein the combiner reflects the respective image light toward the final partial mirror.
Clause 5. the optical system of clause 1, wherein the center eye image content and the off-center eye image content pass through at least one fold in the optical system before reflecting off the combiner.
Clause 6. the optical system of clause 1, wherein the extended image content optics are mounted directly above the combiner such that the off-center eye image is projected directly onto the combiner.
Clause 7. the optical system of clause 1, further comprising a processor adapted to coordinate a smooth vanishing transition of the world-locked content as the content moves from the field of view of the primary image content optics to the field of view of the extended image content optics and to an edge of the field of view of the extended image content optics.
Clause 8. the optical system of clause 1, wherein the extended image content optic is an OLED.
Clause 9. the optical system of clause 1, wherein the extended image content optic is an LCD display.
Clause 10. the optical system of clause 1, wherein the extended image content optic is an array of LEDs.
Clause 11. the optical system of clause 1, wherein the extended image content optics are linear.
Clause 12. the optical system of clause 1, wherein the extended image content optics are two-dimensional.
Clause 13. the optical system of clause 1, wherein the extended image content optics are curved.
Clause 14. the optical system of clause 1, wherein the extended image content optics generate a lighting effect corresponding to the image content.
Clause 15. the optical system of clause 1, wherein the extended image content optics comprise a lens system for modifying the projection.
Clause 16. the optical system of clause 15, wherein the lens system comprises an array of microlenses.
Clause group C6
Clause 1. a head-mounted see-through display, comprising: a primary image content display adapted to produce image light and project the image light in a direction to be reflected by a see-through combiner such that the image light reaches the user's eyes; and a secondary image content display, wherein the secondary image content display is a see-through panel positioned directly in front of the user's eyes and used to enhance the visual experience delivered by the primary image content display.
Clause 2. the head mounted display of clause 1, wherein the secondary display provides content or effects in an area outside the primary field of view produced by the primary image display.
Clause 3. the head mounted display of clause 2, wherein the outer region is adjacent to a primary field of view.
Clause 4. the head mounted display of clause 2, wherein the outer region surrounds a primary field of view.
Clause 5. the head mounted display of clause 2, wherein the outer region overlaps the primary field of view.
Clause 6. the head mounted display of clause 1, wherein the secondary display provides content or an effect in a region that overlaps with the primary field of view produced by the primary image display.
Clause 7. the head mounted display of clause 1, wherein the secondary display is mounted on a combiner adapted to reflect image light to the user's eye.
Clause 8. the head mounted display of clause 1, wherein the secondary display is mounted vertically, outside of the image light optical path established by the primary image display.
Clause 9. the head mounted display of clause 1, further comprising a processor adapted to track a user's eye position, the processor further adapted to alter the position of content presented in the secondary display.
Clause 10 the head mounted display of clause 9, wherein the altered position substantially maintains alignment of the primary image display and the secondary image display from the user's perspective as the user's eye moves.
Clause 11. the head mounted display of clause 1, wherein the see-through panel is an OLED.
Clause 12. the head mounted display of clause 1, wherein the see-through panel is an edge-lit LCD.
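Clauses 9-10 above call for re-positioning content on the secondary see-through panel as the user's eye moves so that the primary and secondary displays stay aligned from the user's perspective. The sketch below uses a simple similar-triangles parallax model with assumed distances; neither the model nor the numbers come from the clauses.

```python
# Hypothetical parallax-alignment sketch for clauses 9-10: the see-through
# panel sits much closer to the eye than the apparent distance of the
# primary image, so panel content must shift with lateral eye movement.

def panel_shift_mm(eye_shift_mm: float,
                   panel_distance_mm: float = 25.0,    # assumed panel-to-eye
                   image_distance_mm: float = 1000.0   # assumed image depth
                   ) -> float:
    """Lateral shift of panel content that keeps a feature at the primary
    image's apparent distance on the user's line of sight."""
    return eye_shift_mm * (1.0 - panel_distance_mm / image_distance_mm)

print(round(panel_shift_mm(2.0), 3))  # 1.95 mm shift for a 2 mm eye shift
```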
Clause group C7
Clause 1. a head-mounted see-through display, comprising: a field of view generated by an image display, wherein a user views digital content in the field of view and sees through the field of view to view the surrounding environment; and a processor adapted to generate two types of content, wherein the two types of content are presented in the field of view; wherein the first type of content is world-locked content having a field of view position that is dependent on a location in the surrounding environment, wherein an appearance of the first type of content diminishes as the first type of content approaches an edge of the field of view; and wherein the second type of content is not world-locked, wherein the second type of content maintains a substantially constant appearance as it approaches the edges of the field of view.
Clause 2. the head mounted display of clause 1, wherein the diminished appearance comprises a reduction in resolution.
Clause 3. the head mounted display of clause 1, wherein the diminished appearance comprises a decrease in brightness.
Clause 4. the head mounted display of clause 1, wherein the diminished appearance comprises a reduction in contrast.
Clause 5. the head mounted display of clause 1, wherein the diminished appearance is adjusted by a display driver.
Clause 6. the head mounted display of clause 1, wherein the diminished appearance is adjusted by the application processor.
Clause 7. the head mounted display of clause 1, wherein the diminished appearance is adjusted by the altered pixels of the display that generate the field of view.
Clause 8. the head mounted display of clause 1, further comprising a secondary field of view generated by the image display, in which the user views the presented digital content and through which the user sees the surrounding environment, the processor further adapted to transition the content from the field of view to the secondary field of view.
Clause 9. the head mounted display of clause 8, wherein the appearance of the content in the secondary field of view is diminished compared to the appearance of the content in the field of view.
Clause 10. the head mounted display of clause 8, wherein the secondary field of view has a lower resolution than a resolution of the field of view.
Clause 11. the head mounted display of clause 10, wherein the secondary field of view is generated by reflecting image light onto a combiner that directs the image light directly to the user's eye.
Clause 12. the head mounted display of clause 10, wherein the secondary field of view is generated by reflecting image light onto a combiner that directs the image light toward an end partial mirror that reflects the image light to the user's eye.
Clause 13. the head mounted display of clause 8, wherein the secondary field of view is generated by an OLED that projects light onto the combiner.
Clause 14. the head mounted display of clause 8, wherein the secondary field of view is generated by an array of LEDs that project light onto the combiner.
Clause 15. the head mounted display of clause 8, wherein the secondary field of view is generated by an edge-lit LCD that projects light onto the combiner.
Clause 16. the head mounted display of clause 8, wherein the secondary field of view is generated by a see-through panel positioned directly in front of the user's eyes.
Clause 17. the head mounted display of clause 16, wherein the panel is mounted on a combiner.
Clause 18. the head mounted display of clause 16, wherein the panel is mounted vertically.
Clause 19. the head mounted display of clause 16, wherein the see-through panel is an OLED.
Clause 20. the head mounted display of clause 16, wherein the see-through panel is an edge-lit LCD.
Clause 21. the head mounted display of clause 1, wherein the processor is further adapted to predict when content is about to approach an edge of a field of view and base the appearance transition at least in part on the prediction.
Clause 22. the head mounted display of clause 21, wherein the prediction is based, at least in part, on an eye image.
Clause group C8
Clause 1. a head-mounted see-through display, comprising: a hybrid optical system adapted to generate a primary see-through field of view for presenting content at a high resolution and a secondary see-through field of view for presenting content at a lower resolution, wherein the primary and secondary fields of view are presented in proximity to each other; a processor adapted to adjust a relative proximity of the primary and secondary fields of view; and an eye position detection system adapted to detect a position of a user's eye, wherein the processor adjusts the relative proximity of the primary and secondary fields of view based on the position of the user's eye.
Clause 2. the head mounted display of clause 1, wherein the secondary field of view is produced on a see-through OLED panel positioned directly in front of the user's eye.
Clause 3. the head mounted display of clause 1, wherein the secondary field of view is produced on a see-through edge-lit LCD panel positioned directly in front of the user's eye.
Clause 4. the head mounted display of clause 1, wherein the secondary field of view is generated on a see-through combiner positioned directly in front of the user's eyes.
Clause 5. the head mounted display of clause 1, wherein the relative proximity is a horizontal proximity.
Clause 6. the head mounted display of clause 1, wherein the relative proximity is a vertical proximity.
Clause 7. the head mounted display of clause 1, wherein the relative proximity defines a measure of overlap between the primary field of view and the secondary field of view.
Clause 8. the head mounted display of clause 1, wherein the relative proximity defines a measure of separation between the primary field of view and the secondary field of view.
Clause 9. the head mounted display of clause 1, wherein the eye position detection system images the eye from a viewing angle substantially in front of the eye.
Clause 10. the head mounted display of clause 1, wherein the eye position detection system images the eye with reflections off the see-through optics of the hybrid optical system in a region including the primary field of view.
Clause 11. the head mounted display of clause 1, wherein the eye position detection system images the eye with reflections off the see-through optics of the hybrid optical system in a region including the secondary field of view.
Clause group D
Clause 1. a head mounted display with an improved high-transmission see-through view of the surrounding environment, the see-through view having an overlaid high-contrast displayed image, the head mounted display comprising:
an upper optic having a first optical axis, comprising: an emissive image source providing image light comprising one or more narrow spectral bands of light; one or more lenses; a stray light trap; and
a non-polarizing lower optic having a second optical axis, comprising: a planar beam splitter angled with respect to the first and second optical axes; and a curved partial mirror; wherein one or more of the reflective surfaces are treated to reflect a majority of incident light within the one or more narrow spectral bands and transmit a majority of incident visible light from the surrounding environment.
Clause 2. the head mounted display of clause 1, wherein the emissive image source is an OLED display.
Clause 3. the head mounted display of clause 1, wherein the emissive image source is a backlit LCD.
Clause 4. the head mounted display of clause 1, wherein the emissive image source is a micro LED display.
Clause 5. the head mounted display of clause 1, wherein the emissive image source is a plasma display.
Clause 6. the head mounted display of clause 1, wherein the treatment is a notch mirror coating that reflects a majority of at least one of the narrow spectral bands provided by the emissive image source and transmits a majority of visible light.
Clause 7. the head mounted display of clause 6, wherein the coating is a notch mirror coating that reflects a majority of the plurality of narrow spectral bands provided by the emissive image source and transmits a majority of visible light.
Clause 8. the head mounted display of clause 6, wherein the notch mirror coating is applied to a planar beam splitter.
Clause 9. the head mounted display of clause 6, wherein the notch mirror coating is applied to the curved partial mirror.
Clause 10. the head mounted display of clause 1, wherein the treatment is a multilayer film that reflects a majority of at least one of the narrow spectral bands provided by the emissive image source and transmits a majority of visible light.
Clause 11. the head mounted display of clause 1, wherein the stray light trap comprises one or more circular polarizers.
Clause 12. the head mounted display of clause 1, wherein the stray light trap comprises a polarizer with quarter wave films on both sides.
Clause 13. the head mounted display of clause 12, wherein the stray light trap is positioned between the lens and the beam splitter.
Clause 14. the head mounted display of clause 11, wherein the circular polarizer is positioned adjacent to the image source.
Clause 15. the head mounted display of clause 11, wherein the first circular polarizer is positioned adjacent to the image source and the second circular polarizer is positioned between the lens and the beam splitter.
Clause 16. the head mounted display of clause 13, wherein the lens provides telecentric image light, which passes through the stray light trap, to the beam splitter.
Clause 17. the head mounted display of clause 1, wherein the stray light trap absorbs light that passes through the lower optics toward the image source, whether reflected image light or light from the see-through view of the surrounding environment.
Clause 18. the head mounted display of clause 1, wherein the see-through view of the environment as seen by the user includes at least 50% of the available light from the surrounding environment.
Clause 19. the head mounted display of clause 1, wherein the see-through view of the environment as seen by the user includes at least 60% of the available light from the surrounding environment.
Clause 20. the head mounted display of clause 1, wherein the see-through view of the environment as seen by the user includes at least 20% of the available light from the surrounding environment within the narrow spectral band.
Clause group D1
Clause 1. a head mounted display with a wide display field of view providing a highly transmissive see-through view of the surrounding environment with an overlaid high contrast display image, comprising:
an upper optic having a first optical axis, comprising: an emissive image source providing image light comprising one or more narrow spectral bands of light; one or more lenses; a stray light trap; and
a non-polarizing lower optic having a second optical axis, comprising: a planar beam splitter angled with respect to the first and second optical axes; and a curved partial mirror; wherein the upper and lower optics are designed to provide a limited central sharp region and a less sharp peripheral region, corresponding to the acuity of the human eye as the eye moves.
Clause 2. the head mounted display of clause 1, wherein the central sharp region is +/-15 degrees or less.
Clause 3. the head mounted display of clause 1, wherein the central sharp region is +/-20 degrees or less.
Clause 4. the head mounted display of clause 1, wherein the peripheral region extends to at least +/-25 degrees from the central sharp region.
Clause 5. the head mounted display of clause 1, wherein lateral chromatic aberration of multiple pixels is present in a portion of the displayed image corresponding to the peripheral region.
Clause 6. the head mounted display of clause 5, wherein the lateral chromatic aberration is 5 pixels or more.
Clause 7. the head mounted display of clause 1, wherein the emissive image source provides a cone of image light with an included angle of 100 degrees or greater.
Clause 8. the head mounted display of clause 1, wherein the optics have an f/# of 2.5 or faster.
Clause 9. the head mounted display of clause 1, wherein the emissive display comprises pixels and subpixels, and the emissive display resolution and display field of view are selected such that each subpixel subtends less than approximately 1/50 of a degree within the wide display field of view, making adjacent subpixels indistinguishable.
Clause 10. the head mounted display of clause 1, wherein the emissive display comprises pixels, and the emissive display and the display field of view are selected such that each pixel subtends less than approximately 1/30 of a degree within the wide display field of view to render adjacent colored pixels unresolvable.
Clause 11. the head mounted display of clause 10, wherein the emissive display is a 1080p display and the display field of view is less than 73 degrees diagonal.
Clause 12. the head mounted display of clause 1, wherein the emissive display comprises pixels, and the emissive display and the display field of view are selected such that each pixel subtends less than approximately 1/50 of a degree within the wide display field of view to render adjacent black and white pixels unresolvable.
Clause 13. the head mounted display of clause 1, further comprising a light trap that captures light reflected back to the image source.
Clause 14. the head mounted display of clause 13, wherein the light trap comprises a sandwich of a polarizer with quarter wave films on both sides.
Clause 15. the head mounted display of clause 13, wherein the light trap is positioned between the upper optic and the lower optic.
Clause 16. the head mounted display of clause 1, wherein the wide field of view comprises an included angle of 50 degrees or more.
Clause 17. the head mounted display of clause 1, wherein the wide field of view comprises a format ratio greater than 22:9 to enable a reduced thickness of the optical device.
Clause 18. the head mounted display of clause 1, wherein an outer edge of the central sharp region of the optic has an MTF greater than 20% at a nyquist frequency of the image source, and an outer edge of the peripheral region has an MTF less than 20% at the nyquist frequency of the image source.
Clause 19. the head mounted display of clause 1, wherein an outer edge of the central sharp region of the optics has an MTF greater than 20% at a nyquist frequency of the image source, and an outer edge of the peripheral region has an MTF less than 20% at a 1/2 nyquist frequency of the image source.
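The angular-subtense figures in clauses 9-12 can be checked against the 1080p example of clause 11: spreading a 73-degree diagonal field of view across a 1080p panel's diagonal pixel count gives just under 1/30 of a degree per pixel.

```python
# Worked check of clauses 9-12 using the clause 11 example (1080p panel,
# 73-degree diagonal display field of view).
import math

h_px, v_px = 1920, 1080
diag_px = math.hypot(h_px, v_px)       # ~2203 pixels along the diagonal
fov_diag_deg = 73.0

deg_per_pixel = fov_diag_deg / diag_px
print(round(deg_per_pixel, 4))         # ~0.0331 degrees per pixel
print(deg_per_pixel < 1 / 30)          # True: adjacent pixels unresolvable
```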
Clause group D2
Clause 1. a method for operating a head mounted display having a wide display field of view, the head mounted display providing improved comfort for viewing displayed wide field of view images, the method comprising: detecting eye movement and head movement of a user; detecting eye movement above a first predetermined threshold followed by head movement above a second predetermined threshold, wherein the detected eye movement and the detected head movement are in the same direction; and shifting the displayed wide field of view image within the wide display field of view corresponding to subsequent detected movement of the user's head, to thereby move a peripheral portion of the displayed wide field of view image into a central portion of the wide display field of view, where the peripheral portion of the displayed wide field of view image is viewable by the user with the user's eyes in a more central position.
Clause 2. the method of clause 1, wherein the head-mounted display further comprises an eye camera, and eye movement is detected with the eye camera.
Clause 3. the method of clause 1, wherein the head mounted display further comprises an inertial measurement unit, and head movement relative to the environment is detected with the inertial measurement unit.
Clause 4. the method of clause 1, wherein the head mounted display further comprises a camera that captures an image of a portion of the surrounding environment, and head movement relative to the environment is detected by analyzing relative changes in the image of the environment.
Clause 5. the method of clause 1, wherein the head-mounted display further comprises a camera that captures an image of a portion of the user's body, and head movement relative to the user's body is detected by analyzing relative changes in the image of the user's body.
Clause 6. the method of clause 5, wherein the camera in the head mounted display is pointed downward.
Clause 7. the method of clause 1, wherein the head mounted display further comprises a first inertial measurement unit in the head mounted display and a second inertial measurement unit attached to the user's body, and the movement of the user's head relative to the user's body is determined from a differential change between the first inertial measurement unit and the second inertial measurement unit.
Clause 8. the method of clause 3, wherein the measurement is performed by time-weighted averaging of measurements from the inertial measurement unit in the head-mounted display.
Clause 9. the method of clause 1, further comprising: stopping the shifting of the displayed wide field of view image within the wide display field of view when the user's eyes are detected to be looking at the central portion of the wide display field of view.
Clause 10. the method of clause 1, further comprising: stopping the shifting of the displayed wide field of view image within the wide display field of view when the edge of the wide field of view image is determined to have shifted to the central portion of the wide display field of view.
Clause 11. a method for operating a head mounted display having a wide display field of view with improved comfort when viewing different applications, wherein the head mounted display includes optics that provide a central sharp region, the method comprising: a user selecting an application having an image for viewing in the head mounted display; the user selecting a field of view for viewing the image; the head mounted display resizing the image into the selected field of view and displaying the image; and, if the resized image is angularly larger than a predetermined size: detecting movement of the user's eyes followed by movement of the user's head, wherein the eye movement and the head movement are in the same direction relative to a fixed reference point; and shifting the displayed wide field of view image within the field of view corresponding to the detected movement of the user's head and in a direction opposite to the detected movement of the user's head, to move a peripheral portion of the image into the central sharp region, where the peripheral portion of the image is viewable by the user with reduced movement of the user's eyes.
Clause 12. the method of clause 11, wherein the fixed reference point is an environment.
Clause 13. the method of clause 12, wherein the head-mounted display comprises an eye camera for detecting eye movement and an inertial measurement unit for detecting head movement.
Clause 14. the method of clause 12, wherein the head-mounted display comprises an eye camera for detecting eye movement and a camera for detecting head movement.
Clause 15. the method of clause 11, wherein the fixed reference point is the user's body.
Clause 16. the method of clause 15, wherein the head-mounted display comprises an eye camera for detecting eye movement and an inertial measurement unit for detecting head movement.
Clause 17. the method of clause 16, wherein the measurement is performed by comparing measurements from a first inertial measurement unit in the head-mounted display and a second inertial measurement unit attached to the body of the user.
Clause 18. the method of clause 15, wherein the measurement is performed by time-weighted averaging of measurements from the inertial measurement unit in the head-mounted display.
Clause 19. the method of clause 15, wherein the central sharp region is less than +/-20 degrees in the displayed wide field of view.
Clause 20. the method of clause 15, wherein the displayed wide field of view is at least +/-25 degrees.
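The method of clauses 1-10 can be summarized in a short control sketch: eye movement above a first threshold followed by same-direction head movement above a second threshold triggers an image shift opposite to the head motion, and the shift stops when the gaze returns to center or the image edge reaches the center. The thresholds and per-frame formulation below are assumptions for illustration.

```python
# Hedged sketch of the shift logic in clauses 1-10 (hypothetical thresholds
# and helper name; rates are signed degrees/second, positive = rightward).

EYE_THRESHOLD_DPS = 30.0     # assumed first predetermined threshold
HEAD_THRESHOLD_DPS = 20.0    # assumed second predetermined threshold

def image_shift_step(eye_rate_dps: float, head_rate_dps: float,
                     gaze_centered: bool, edge_at_center: bool,
                     dt: float) -> float:
    """Image shift in degrees to apply this frame."""
    same_direction = eye_rate_dps * head_rate_dps > 0
    triggered = (abs(eye_rate_dps) > EYE_THRESHOLD_DPS
                 and abs(head_rate_dps) > HEAD_THRESHOLD_DPS
                 and same_direction)
    if not triggered or gaze_centered or edge_at_center:
        return 0.0           # clauses 9-10: stop conditions
    # Shift opposite the head motion so peripheral content moves centrally.
    return -head_rate_dps * dt

print(image_shift_step(45.0, 30.0, False, False, dt=1 / 60))  # -0.5
```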
Clause group D3
Clause 1. a head mounted display with improved form factor providing a wide field of view and a high-transmission see-through view of the surrounding environment with an overlaid high-contrast displayed image, comprising:
an upper optic having a first optical axis, comprising: an emissive image source providing image light; one or more lenses; a stray light trap; and
a non-polarizing lower optic having a second optical axis, comprising: a planar beam splitter angled with respect to the first and second optical axes; and a curved partial mirror; wherein the head mounted display provides a rectangular image having a reduced vertical field of view and a format greater than 22:9, thereby enabling reduced thickness of the upper and lower optics for a given diagonal display field of view.
Clause 2. the head mounted display of clause 1, wherein the wide field of view comprises a display field of view having an included angle of 50 degrees or greater.
Clause 3. the head mounted display of clause 1, wherein the emissive image source provides a cone of image light at an included angle greater than 100 degrees.
Clause 4. the head mounted display of clause 1, wherein the head mounted display further comprises separate stereoscopic displays for the left and right eyes, each of which provides a display field of view of 50 degrees or greater, and the display fields of view for the left and right eyes only partially overlap such that the combined field of view is larger than that of either stereoscopic display.
Clause 5. the head mounted display of clause 1, wherein the upper optic comprises two or more lenses, a telecentric region is provided between the two or more lenses in the upper optic, and a change in focus distance is provided by changing the spacing between the lenses without a change in magnification of the displayed image.
Clause 6. the head mounted display of clause 5, wherein a mechanism for manual change of focus distance is provided.
Clause 7. the head mounted display of clause 5, wherein an electrically driven actuator is included to provide an automatic change in focus distance.
Clause 8. the head mounted display of clause 7, wherein the automatic change in focus distance is provided to position a portion of the display image at a viewing distance associated with the augmented reality application.
Clause 9. the head mounted display of clause 1, wherein a telecentric region is provided between the upper optic and the lower optic, and a change in focus distance is provided by changing the spacing between the upper optic and the lower optic without a change in magnification of the displayed image.
Clause 10 the head mounted display of clause 9, wherein a mechanism is provided for manual change of focus distance.
Clause 11. the head mounted display of clause 9, wherein an electrically driven actuator is included to provide an automatic change in focus distance.
Clause 12. the head mounted display of clause 11, wherein the automatic change in focus distance is provided to position a portion of the display image at a viewing distance associated with the augmented reality application.
Clause 13. the head mounted display of clause 1, wherein the emissive image source is an OLED display.
Clause 14. the head mounted display of clause 1, wherein the emissive image source is a backlit LCD display.
Clause 15. the head mounted display of clause 1, wherein the emissive image source is a micro LED array display.
Clause 16. the head mounted display of clause 1, wherein the stray light trap captures stray light reflected back to the emissive image source.
Clause 17. the head mounted display of clause 1, wherein the stray light trap increases contrast in the display image seen by a user by capturing stray light in the upper optic.
Clause 18. the head mounted display of clause 1, wherein a central region of the display field of view is used to display an image having a lower format ratio than the display field of view, and a portion of the display field of view outside the displayed image is used to display additional information.
Clause 19. the head mounted display of clause 1, wherein the emissive image source provides a cone of image light that is 100 degrees or greater of included angle, and the upper optic has an f# of 2.5 or faster.
Clause 20. the head mounted display of clause 1, wherein the emissive image source comprises pixels and subpixels, and the emissive image source and the display field of view are selected such that each pixel subtends less than approximately 1/30 of a degree within the wide display field of view such that adjacent colored subpixels are not resolvable by the human eye.
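As a rough illustration of the geometry behind clauses 1 and 20 above, the sketch below (assuming a flat image plane, an example 50 degree diagonal field of view, and an assumed 1080p emissive image source; none of these example values come from the clauses except the 22:9 and 1/30 degree figures) shows how a 22:9 format trades vertical field of view against 16:9, and what angle a single pixel then subtends:

```python
import math

def fov_components(diagonal_fov_deg, width_ratio, height_ratio):
    """Split a diagonal field of view into horizontal/vertical components
    using a flat-tangent-plane approximation of the display."""
    diag = math.hypot(width_ratio, height_ratio)
    half_tan = math.tan(math.radians(diagonal_fov_deg / 2))
    h = 2 * math.degrees(math.atan(half_tan * width_ratio / diag))
    v = 2 * math.degrees(math.atan(half_tan * height_ratio / diag))
    return h, v

for ratio in ((16, 9), (22, 9)):
    h, v = fov_components(50.0, *ratio)  # assumed 50 degree diagonal
    print(f"{ratio[0]}:{ratio[1]} -> horizontal {h:.1f} deg, vertical {v:.1f} deg")

# Pixel angular subtense for clause 20: pixels along the diagonal of an
# assumed 1080p emissive image source spread over the diagonal field of view.
pixels_diag = math.hypot(1920, 1080)
print(f"per-pixel subtense: {50.0 / pixels_diag:.4f} deg "
      f"(1/30 deg = {1 / 30:.4f} deg)")
```

The wider format yields a noticeably smaller vertical extent for the same diagonal, which is what permits the thinner optics recited in clause 1.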
Clause group D4
Clause 1. a head mounted display providing a user with a high transmission see-through view of the surrounding environment with improved visual uniformity of the displayed image and a wide display field of view, comprising:
an upper optic having a first optical axis, comprising: an emissive image source providing image light; one or more lenses; and
a non-polarizing lower optic having a second optical axis, comprising: a planar partially reflective beam splitter angled with respect to the first and second optical axes; and a curved partial mirror; and wherein at least one of the partially reflective surfaces comprises a notch mirror treatment.
Clause 2. the head mounted display of clause 1, wherein the upper optic provides telecentric light to the lower optic to improve uniformity in the displayed image by reducing an angular range of light associated with the displayed image incident on the surface comprising the notch mirror treatment.
Clause 3. the head mounted display of clause 1, wherein the display image is digitally modified to radially increase the brightness in the display image to compensate for radial brightness roll-off from the notch mirror treatment.
Clause 4. the head mounted display of clause 1, wherein the display image is digitally modified to radially adjust the color rendering in the display image to compensate for the change in primary colors with angle caused by the notch mirror treatment.
Clause 5. the head mounted display of clause 1, wherein the notch mirror treatment reflects narrow bands of red, green, and blue light.
Clause 6. the head mounted display of clause 5, wherein the narrow bands of red, green, and blue light match the bands of colored light output by the emissive display.
Clause 7. the head mounted display of clause 1, wherein the notch mirror treatment has a higher reflectivity for S-polarized incident light.
Clause 8. the head mounted display of clause 1, wherein the notch mirror treatment has a reflectivity insensitive to polarization of incident light.
Clause 9. the head mounted display of clause 1, further comprising a plurality of optical elements, and wherein the distance between adjacent optical elements can be adjusted to change the focus distance of the displayed image.
Clause 10. the head mounted display of clause 9, wherein a sensor is provided to measure the distance between adjacent optical elements when the focus distance is adjusted.
Clause 11 the head mounted display of clause 10, wherein the display image is digitally modified prior to being displayed to compensate for changes in the size of the display image associated with changes in the focus distance.
Clause 12. the head mounted display of clause 1, wherein the notch mirror treatment is a coating.
Clause 13. the head mounted display of clause 1, wherein the notch mirror treatment is a film.
Clause 14. the head mounted display of clause 1, wherein the notch mirror treatment is phase matched nanostructures.
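Clauses 3 and 4 above describe digital compensation for the radial roll-off introduced by the notch mirror treatment. A minimal sketch of such a compensation, assuming a simple quadratic gain profile rising toward the corners (a real profile would be measured from the actual coating; the 1.25 edge gain is an illustrative assumption):

```python
import numpy as np

def radial_gain_map(height, width, edge_gain=1.25):
    """Digital gain that rises from 1.0 at the image center to edge_gain at
    the corners, counteracting an assumed radial roll-off of the notch
    mirror's reflectivity."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2, (width - 1) / 2
    r = np.hypot(ys - cy, xs - cx) / np.hypot(cy, cx)  # 0 at center, 1 at corner
    return 1.0 + (edge_gain - 1.0) * r ** 2

def compensate(image):
    """Apply the gain per pixel and clip to the displayable range."""
    gain = radial_gain_map(*image.shape[:2])
    return np.clip(image * gain[..., None], 0, 255).astype(np.uint8)

frame = np.full((1080, 1920, 3), 128, dtype=np.float64)  # flat gray test frame
out = compensate(frame)  # corners boosted ~25% relative to center
```

A per-channel variant of the same gain map would serve clause 4, compensating the angle-dependent shift in primary colors rather than overall brightness.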
Clause group E
Clause 1. an apparatus, comprising: a platform mechanically adapted to be worn on a user's head; a see-through display mounted on the platform and adapted to present image content to the eyes of the user, wherein the see-through display further provides the user with a see-through view of the environment proximate to the user; a removable and replaceable eye cover adapted to be connected to the platform, wherein the eye cover is also adapted to contain light escaping from an interior cavity formed at an interface with the user's face; and a removable and replaceable outer lens mounted on the platform along an optical axis in line with the user's see-through view of the environment.
Clause 2. the apparatus of clause 1, wherein the eye cover is adapted to be attached to the platform with a magnet.
Clause 3. the apparatus of clause 1, wherein the eye cover is adapted to be connected to the platform with a mechanical attachment.
Clause 4. the apparatus of clause 1, wherein the eye cover is adapted to be connected to the platform at least in part by being held in place by an arm attached to the platform, wherein the arm is adapted to hold the platform on the user's head.
Clause 5. the apparatus of clause 1, wherein the outer lens has a see-through transparency between 2% and 5% to create an immersive environment for the user while creating a transparency that maintains a visual connection for the user to the surrounding environment.
Clause 6. the apparatus of clause 1, wherein the outer lens has a see-through transparency between 5% and 10% to create an immersive environment for the user while creating a transparency that maintains a visual connection for the user to the surrounding environment.
Clause 7. the apparatus of clause 1, wherein the outer lens has a see-through transparency between 10% and 20% to create an immersive environment for the user while creating a transparency that maintains a visual connection for the user to the surrounding environment.
Clause 8. the device of clause 1, wherein the outer lens has a see-through transparency of greater than 20%.
Clause 9. the device of clause 1, wherein the outer lens has an electronically adjustable transparency.
Clause 10. the device of clause 9, wherein the electronically adjustable transparency is based on a liquid crystal system.
Clause 11. the apparatus of clause 1, wherein the outer lens has a low transparency in an area behind the field of view presented to the user's eyes and a higher transparency outside the area behind the field of view.
Clause 12. the device of clause 1, wherein the outer lens has a mechanically adjustable crossed polarizer for adjusting the see-through transparency.
Clause 13. the device of clause 1, wherein the eye cover has a region of transparency that provides the user with a see-through view of the surrounding environment.
Clause 14. the device of clause 13, wherein the transparency is partial transparency.
Clause 15. the apparatus of clause 1, wherein the eye shield further comprises an interior lighting effect system that provides a peripheral lighting effect that is coordinated with content presented in the field of view.
Clause 16. the apparatus of clause 15, wherein the interior lighting effect system produces the lighting effects with LEDs or OLEDs.
Clause 17. the apparatus of clause 16, wherein the lighting effect is a colored lighting effect.
Clause 18. the apparatus of clause 15, wherein the interior lighting effect system comprises a light source substantially surrounding the interior cavity for directionally controlled delivery of the lighting effect.
Clause 19. the apparatus of clause 1, wherein the eye cover further comprises a haptic effect system adapted to achieve a haptic effect that is coordinated with content presented in the field of view.
Clause 20. the apparatus of clause 19, wherein the haptic effect system comprises a piezoelectric vibration system.
Clause 21. the apparatus of clause 20, wherein the piezoelectric vibration system is positioned on one side of the eye cover.
Clause 22. the apparatus of clause 20, wherein the piezoelectric vibration system is positioned on the top of the eye cover.
Clause 23. the apparatus of clause 20, wherein the piezoelectric vibration system is positioned on the bottom of the eye cover.
Clause 24. the device of clause 1, further comprising a data connection between the eye cover and the platform.
Clause 25. the apparatus of clause 24, wherein the data connection is established by an attachment system that mechanically connects the eye cover to the platform.
Clause 26. the apparatus of clause 24, wherein the data connection is a wireless data connection.
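The mechanically adjustable crossed polarizer of clause 12 follows Malus's law: transmission falls as the cosine squared of the offset angle between the polarizers. A small sketch, assuming an example 38% aligned transmission for the polarizer pair (real polarizers pass well under half of unpolarized light), finds the rotation angles that land in the transparency bands recited in clauses 5 through 7:

```python
import math

def transmission(theta_deg, t_max=0.38):
    """Malus's law for a polarizer pair offset by theta; t_max is the
    assumed transmission with the polarizers aligned."""
    return t_max * math.cos(math.radians(theta_deg)) ** 2

def angle_for(target, t_max=0.38):
    """Rotation angle that yields a target see-through transparency."""
    return math.degrees(math.acos(math.sqrt(target / t_max)))

for target in (0.02, 0.05, 0.10, 0.20):  # transparency bands from clauses 5-7
    print(f"{target:.0%} transparency at ~{angle_for(target):.1f} deg offset")
```

Under these assumptions roughly 43 degrees of rotation reaches the 20% band and about 77 degrees reaches the 2% band, so the full recited range is covered well short of the 90 degree extinction position.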
Clause group E1
Clause 1. a stray light suppression system, comprising: a shroud mechanically adapted to be removably and replaceably mounted to a see-through display system adapted to be worn on a user's head; the shroud is further adapted to inhibit light emitted from the see-through display system and to inhibit ambient light from entering a side of the see-through display system when mounted on the see-through display system; and the shroud further comprises a peripheral lighting effect system.
Clause 2. the system of clause 1, wherein the shroud is adapted to be removably and replaceably mounted to the see-through display with a magnet.
Clause 3. the system of clause 1, wherein the shroud is adapted to be removably and replaceably mounted to the see-through display with a mechanical attachment.
Clause 4. the system of clause 1, wherein the shroud is adapted to be removably and replaceably mounted to the see-through display and is at least partially held in place by an arm attached to the see-through display, wherein the arm is adapted to hold the see-through display on the user's head.
Clause 5. the system of clause 1, wherein the light suppression system further comprises an external lens, wherein the external lens has a see-through transparency between 2% and 5% to create an immersive environment for the user while creating a transparency that maintains a visual connection for the user to the surrounding environment.
Clause 6. the system of clause 1, wherein the light suppression system further comprises an external lens, wherein the external lens has a see-through transparency between 5% and 10% to create an immersive environment for the user while creating a transparency that maintains a visual connection for the user to the surrounding environment.
Clause 7. the system of clause 1, wherein the light suppression system further comprises an external lens, wherein the external lens has a see-through transparency between 10% and 20% to create an immersive environment for the user while creating a transparency that maintains a visual connection for the user to the surrounding environment.
Clause 8. the system of clause 1, wherein the light containment system further comprises an outer lens, wherein the outer lens has a see-through transparency of greater than 20%.
Clause 9. the system of clause 1, wherein the light containment system further comprises an outer lens, wherein the outer lens has a low transparency in an area behind the field of view presented to the user's eye and a higher transparency outside the area behind the field of view.
Clause 10. the system of clause 1, wherein the shroud has an area of transparency that provides a user with a view of the surrounding environment.
Clause 11. the system of clause 10, wherein the transparency is partial transparency.
Clause 12. the system of clause 1, wherein the shroud further comprises a haptic effect system adapted to achieve a haptic effect that is coordinated with content presented in the field of view.
Clause 13. the system of clause 12, wherein the haptic effect system comprises a piezoelectric vibration system.
Clause 14. the system of clause 13, wherein the piezoelectric vibration system is positioned on one side of the shroud.
Clause 15. the system of clause 13, wherein the piezoelectric vibration system is positioned on the top of the shroud.
Clause 16. the system of clause 13, wherein the piezoelectric vibration system is positioned on the bottom of the shroud.
Clause 17. the system of clause 1, further comprising a data connection adapted to connect the shield and the see-through display.
Clause 18. the system of clause 17, wherein the data connection is established by an attachment system that mechanically connects the shroud to the see-through display.
Clause 19. the system of clause 17, wherein the data connection is a wireless data connection.
Clause 20. the system of clause 1, wherein the peripheral lighting effect system produces the lighting effect with an LED or OLED.
Clause 21. the system of clause 20, wherein the lighting effect is a colored lighting effect.
Clause 22. the system of clause 1, wherein the peripheral lighting effect system comprises a light source substantially surrounding an internal cavity for controlled delivery of the effect lighting direction.
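One plausible way to coordinate the peripheral lighting effect of clause 1 with the displayed content is to drive the LEDs of clause 20 from the average colors along the edges of the current frame, extending the on-screen image outward. The sketch below assumes example LED counts and border widths that are not specified in the clauses:

```python
import numpy as np

def edge_led_colors(frame, leds_per_side=8, border=32):
    """Average the outer border of the displayed frame into per-LED RGB
    values so the peripheral lighting effect extends the on-screen content."""
    strips = {
        "top": frame[:border, :, :],
        "bottom": frame[-border:, :, :],
        "left": frame[:, :border, :].transpose(1, 0, 2),
        "right": frame[:, -border:, :].transpose(1, 0, 2),
    }
    colors = {}
    for side, strip in strips.items():
        # split the strip along its long axis into one segment per LED
        segments = np.array_split(strip, leds_per_side, axis=1)
        colors[side] = [seg.reshape(-1, 3).mean(axis=0).astype(int)
                        for seg in segments]
    return colors

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
led_colors = edge_led_colors(frame)  # dict of per-side LED color lists
```

The per-side lists map naturally onto a light source that substantially surrounds the interior cavity, as recited in clause 22.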
Clause group E2
Clause 1. a stray light suppression system, comprising: a shroud mechanically adapted to be removably and replaceably mounted to a see-through display system adapted to be worn on a user's head; the shroud is further adapted to inhibit light emitted from the see-through display system and to inhibit ambient light from entering a side of the see-through display system when mounted on the see-through display system; and the shroud further comprises a haptic effect system.
Clause 2. the system of clause 1, wherein the shroud is adapted to be removably and replaceably mounted to the see-through display with a magnet.
Clause 3. the system of clause 1, wherein the shroud is adapted to be removably and replaceably mounted to the see-through display with a mechanical attachment.
Clause 4. the system of clause 1, wherein the shroud is adapted to be removably and replaceably mounted to the see-through display and is at least partially held in place by an arm attached to the see-through display, wherein the arm is adapted to hold the platform on the head of the user.
Clause 5. the system of clause 1, wherein the light suppression system further comprises an external lens, wherein the external lens has a see-through transparency between 2% and 5% to create an immersive environment for the user while creating a transparency that maintains a visual connection for the user to the surrounding environment.
Clause 6. the system of clause 1, wherein the light suppression system further comprises an external lens, wherein the external lens has a see-through transparency between 5% and 10% to create an immersive environment for the user while creating a transparency that maintains a visual connection for the user to the surrounding environment.
Clause 7. the system of clause 1, wherein the light suppression system further comprises an external lens, wherein the external lens has a see-through transparency between 10% and 20% to create an immersive environment for the user while creating a transparency that maintains a visual connection for the user to the surrounding environment.
Clause 8. the system of clause 1, wherein the light containment system further comprises an outer lens, wherein the outer lens has a see-through transparency of greater than 20%.
Clause 9. the system of clause 1, wherein the light containment system further comprises an outer lens, wherein the outer lens has a low transparency in an area behind the field of view presented to the user's eye and a higher transparency outside the area behind the field of view.
Clause 10. the system of clause 1, wherein the shroud has an area of transparency that provides a user with a view of the surrounding environment.
Clause 11. the system of clause 10, wherein the transparency is partial transparency.
Clause 12. the system of clause 1, wherein the haptic effect system comprises a piezoelectric vibration system.
Clause 13. the system of clause 12, wherein the piezoelectric vibration system is positioned on one side of the shroud.
Clause 14. the system of clause 12, wherein the piezoelectric vibration system is positioned on the top of the shroud.
Clause 15. the system of clause 12, wherein the piezoelectric vibration system is positioned on the bottom of the shroud.
Clause 16. the system of clause 1, further comprising a data connection adapted to connect the shield and the see-through display.
Clause 17. the system of clause 16, wherein the data connection is established by an attachment system that mechanically connects the shield to a see-through display.
Clause 18. the system of clause 16, wherein the data connection is a wireless data connection.
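For the haptic effect system of clause 1, a content-coordinated effect can be sketched as a decaying drive waveform for the piezoelectric element of clause 12, scaled by the strength of the on-screen event. All numeric values below are illustrative assumptions; real drive electronics impose their own limits:

```python
import math

def piezo_waveform(intensity, duration_s=0.15, freq_hz=220, rate_hz=8000):
    """Build a drive waveform (amplitude samples in -1..1) for a piezo
    vibration element, scaled by an event intensity in 0..1 and faded out
    linearly so the effect decays with the on-screen event."""
    n = int(duration_s * rate_hz)
    return [intensity * (1 - i / n) *
            math.sin(2 * math.pi * freq_hz * i / rate_hz)
            for i in range(n)]

# e.g. a collision rendered in the display field of view at 70% strength
samples = piezo_waveform(0.7)
```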
Clause group F
Clause 1. an apparatus, comprising: an eye cover adapted to be removably mounted on a head-worn computer having a see-through computer display; and an audio headset having an adjustable mount connected to the eye cover, wherein the adjustable mount provides extension and rotation to provide a mechanism for a user of the head-worn computer to align the audio headset with the user's ear.
Clause 2. the device of clause 1, wherein the audio headset includes an audio cord connected with a connector on the eye cover, the eye cover connector adapted to removably mate with a connector on the head-worn computer.
Clause 3. the apparatus of clause 1, wherein the audio headset is adapted to receive an audio signal from the head-worn computer over a wireless connection.
Clause 4. the device of clause 1, wherein the head-worn computer has a removable and replaceable front lens.
Clause 5. the device of clause 1, wherein the eye cover comprises a battery to power a system inside the eye cover.
Clause 6. the apparatus of clause 1, wherein the eye cover comprises a battery to power a system inside the head-worn computer.
Clause 7. the apparatus of clause 1, wherein the eye cover comprises a fan adapted to exchange air between the interior space partially defined by the user's face and the external environment to cool the air in the interior space.
Clause 8. the apparatus of clause 1, wherein the audio headset comprises a vibration system adapted to provide haptic feedback to a user in coordination with digital content presented in the see-through computer display.
Clause 9. the apparatus of clause 1, wherein the head-worn computer comprises a vibration system adapted to provide haptic feedback to a user in coordination with digital content presented in the see-through computer display.
Clause 10. an apparatus, comprising: an eye cover adapted to be removably mounted on a head-worn computer having a see-through computer display; and a flexible audio headset mounted to the eye cover, wherein the flexibility provides a mechanism for a user of the head-worn computer to align the audio headset with the user's ear.
Clause 11. the device of clause 10, wherein the flexible audio headset is mounted to the eye cover with a magnetic connection.
Clause 12. the device of clause 10, wherein the flexible audio headset is mounted to the eye cover with a mechanical connection.
Clause group G
Clause 1. a head mounted display providing a displayed stereoscopic image overlaid onto a see-through view of a surrounding environment, wherein a focus distance and a vergence distance associated with the displayed stereoscopic image can be changed, the head mounted display comprising: left and right display optics each comprising upper optics and a partially reflective combiner, wherein each upper optics comprises an image source, an illumination source, and one or more lenses; two removable focus-shifting elements, each positioned between one of the image sources and its respective combiner to change a focus distance of the displayed stereoscopic image; and an integrated processor providing the stereoscopic images to the left and right display optics and adapted to position the stereoscopic images at a vergence distance that changes in correspondence with the focus distance.
Clause 2. the head mounted display of clause 1, wherein the focus-shifting element changes the focus distance to a focus distance that is within arm's length of the user.
Clause 3. the head mounted display of clause 2, wherein the displayed stereoscopic image includes one or more augmented reality objects.
Clause 4. the head mounted display of clause 1, wherein the removable focus-shifting element is separate for the left display optic and the right display optic.
Clause 5. the head mounted display of clause 1, wherein the removable focus-shifting element is shared between the left display optics and the right display optics.
Clause 6. the head mounted display of clause 1, further comprising a sensor that detects the presence of the removable focus-shifting element.
Clause 7. the head mounted display of clause 6, wherein the sensor also detects information about a focus shift associated with the removable focus shift element.
Clause 8. the head mounted display of clause 1, wherein the integrated processor changes the vergence distance of a portion of the stereoscopic image by rendering the stereoscopic image with changed disparity.
Clause 9. the head mounted display of clause 8, wherein the disparity of the entire stereoscopic image is changed to provide a different vergence viewing distance for the stereoscopic image.
Clause 10. the head mounted display of clause 1, further comprising a corrective lens positioned behind the combiner to improve the view of both the displayed image and the see-through view in accordance with the user's vision.
Clause 11. the head mounted display of clause 10, wherein the corrective lens, the combiner, and the removable focus-shifting element are connected.
Clause 12. the head mounted display of clause 7, wherein the vergence distance of the displayed stereoscopic images is automatically changed to correspond to the detected focus shift associated with the removable focus-shifting element.
Clause 13. the head mounted display of clause 7, wherein the focus shifting element further comprises a bar code describing an amount of focus shift provided by the focus shifting element, and the bar code is detected by the sensor.
Clause 14 the head mounted display of clause 7, wherein a visual indicator is added to the display image when the removable focus shift element is detected.
Clause 15 the head mounted display of clause 7, wherein the presentation of the display image is modified when the removable focus shift element is detected.
Clause 16. the head mounted display of clause 1, wherein the focus-shifting element is a Fresnel lens.
Clause 17. the head mounted display of clause 10, wherein the corrective lens is a Fresnel lens.
Clause 18. the head mounted display of clause 10, wherein the corrective lens is an adjustable lens.
Clause 19. the head mounted display of clause 10, wherein the corrective lens is attached to a frame of the head mounted display such that the corrective lens can swing out of the display field of view when not in use.
Clause 20. the head mounted display of clause 10, wherein the focus shifting element and the corrective lens are attached together in a removable assembly.
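Clauses 8 and 9 above change vergence distance by re-rendering with altered disparity. The relation between pixel disparity and vergence distance is plain triangulation against the interpupillary distance; a sketch with assumed example values (63 mm IPD, 30 pixels per degree, virtual image plane at 2 m — none of these figures come from the clauses):

```python
import math

def disparity_pixels(target_m, screen_m, ipd_m=0.063, px_per_deg=30.0):
    """Horizontal disparity (in pixels) to render so a stereo object
    converges at target_m when the display's virtual image sits at
    screen_m."""
    def vergence_deg(d):
        # full convergence angle of the two eyes fixating at distance d
        return 2 * math.degrees(math.atan(ipd_m / (2 * d)))
    return (vergence_deg(target_m) - vergence_deg(screen_m)) * px_per_deg

# pull an object from the 2 m virtual image plane in to 0.5 m (arm's length)
print(f"{disparity_pixels(0.5, 2.0):.1f} px of added crossed disparity")
```

Under these assumptions, bringing an object to arm's length costs on the order of 160 pixels of crossed disparity, which is why clause 1 pairs the rendered vergence change with a matching optical focus change.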
Clause group G1
Clause 1. an apparatus for a head mounted display that provides a change in viewing distance of a displayed image overlaid onto a see-through view of a surrounding environment, the apparatus comprising: an upper optic comprising an image source, an illumination source, one or more lenses, and a focus changing module; a partially reflective combiner; an integrated processor; and wherein the focus change module moves the image source between the predetermined positions, thereby providing two or more focus distances for the displayed image.
Clause 2. the apparatus of clause 1, wherein the focus change module moves the image source between two predetermined positions that enable the displayed image to be provided with one focus distance within arm's length of the user and another focus distance beyond arm's length of the user.
Clause 3. the apparatus of clause 1, wherein the focus changing module comprises a wedge that moves laterally to move the image source between predetermined positions.
Clause 4. the device of clause 3, wherein one or more solenoids are used to move the wedge laterally.
Clause 5. the device of clause 3, wherein a manually operated screw is used to move the wedge laterally.
Clause 6. the device of clause 3, wherein a motor is used to turn a screw that moves the wedge laterally.
Clause 7. the apparatus of clause 3, wherein the wedge is positioned between the image source and the lens, and the wedge includes a window for light to pass through to the lens and the combiner.
Clause 8. the device of clause 3, wherein the wedge is positioned over the image source.
Clause 9. the apparatus of clause 1, wherein the focus change module comprises a guide mechanism associated with the image source to guide movement of the image source between the predetermined positions.
Clause 10. the device of clause 9, wherein the guide mechanism is a pin.
Clause 11. the device of clause 9, wherein the guide mechanism is a leaf spring.
Clause 12. the device of clause 9, wherein the guide mechanism is a four-bar linkage mechanism.
Clause 13. the apparatus of clause 3, wherein the wedge is transparent and lateral movement of the wedge changes the optical thickness across the wedge, which changes the focus distance of the displayed image.
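Clause 13 ties focus distance to the optical thickness swept by the transparent wedge. Two back-of-envelope relations make this concrete: displacing the image source from the lens focal plane by approximately f²/D pulls the virtual image in from infinity to distance D (the Newtonian approximation), and a transparent plate of index n and thickness t shifts the apparent source position by t(1 − 1/n). The focal length and index below are assumed example values:

```python
def source_shift_mm(focal_mm, focus_m):
    """Newtonian approximation: displacing the image source by f^2/D from
    the lens focal plane moves the virtual image from infinity in to D."""
    return focal_mm ** 2 / (focus_m * 1000.0)

def wedge_equivalent_shift_mm(thickness_mm, n=1.5):
    """Apparent source shift t(1 - 1/n) contributed by a transparent wedge
    of index n inserted in the optical path (clause 13)."""
    return thickness_mm * (1 - 1 / n)

f = 25.0  # assumed effective focal length of the upper optic, mm
for d in (0.5, 1.0, 2.0):  # target focus distances, m
    print(f"{d} m -> move source {source_shift_mm(f, d):.2f} mm")
print(f"3 mm wedge at n=1.5 -> {wedge_equivalent_shift_mm(3.0):.2f} mm shift")
```

With these assumptions the full far-to-arm's-length range needs only about a millimeter of equivalent shift, which is why a laterally sliding wedge driven by a solenoid or screw (clauses 4 through 6) suffices.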
Clause group G2
Clause 1. a method of adjusting a focus distance of a displayed image corresponding to eye movement of a user in a head mounted display, the head mounted display comprising an integrated processor, an inertial measurement unit, and left and right optics modules, each of the left and right optics modules comprising: display optics having a focus adjustment module, one or more lenses, and a combiner; and a camera to capture an image of a user's eye, the method comprising: capturing an image of at least one of the user's eyes with a camera; analyzing, with the processor, the captured image to determine a relative direction in which the user's eyes are looking; determining, from the determined relative direction, the portion of the displayed image that the user is looking at; and controlling the focus adjustment module to adjust a focus distance for the displayed image corresponding to the determined portion of the image the user is looking at.
Clause 2. the method of clause 1, further comprising the steps of: determining the direction in which the head mounted display is pointing with the inertial measurement unit, and comparing the relative direction in which the user's eyes are looking to the determined pointing direction to determine the compass direction in which the user is looking.
Clause 3. the method of clause 1, further comprising two cameras capturing images of the left and right eyes of the user.
Clause 4. the method of clause 3, further comprising analyzing the images of the left and right eyes of the user to determine the relative direction in which the left and right eyes of the user are looking.
Clause 5. the method of clause 4, further comprising the steps of: analyzing a difference between relative directions in which a left eye and a right eye of a user are looking to determine a vergence viewing distance of the user; and adjusting a vergence distance associated with a portion of the displayed image corresponding to the vergence viewing distance.
Clause 6. the method of clause 5, further comprising the steps of: the vergence viewing distance is compared to a vergence distance associated with the determined portion of the image the user is looking at to determine whether the user is looking at the displayed image or a perspective view of the surrounding environment.
Clause 7. the method of clause 1, wherein the focus distance is continuously adjusted in response to a change in the relative direction in which the user's eyes are looking.
Clause 8. the method of clause 5, wherein the display image is a stereoscopic image, and further comprising the steps of: the focus distance and the vergence distance are adjusted corresponding to the determined portion of the stereoscopic image at which the user's eyes are looking and the disparity with which the portion of the stereoscopic image is associated.
Clause 9. the method of clause 6, wherein the head-mounted display further comprises a camera that captures images of the surrounding environment in front of the user, and the method further comprises the steps of: if the user is looking at the see-through view of the surrounding environment, identifying the object the user is looking at in the captured image of the surrounding environment based on the determined relative directions in which the user's left and right eyes are looking.
Clause 10. the method of clause 9, wherein the display image is modified based on an object identified in the ambient environment at which the user is looking.
Clause 11. the method of clause 1, wherein the camera further comprises autofocus, and the method further comprises the step of: analyzing a focus setting of the autofocus, from metadata associated with the captured image of the user's eye, to determine the presence of a corrective lens.
Clause 12. the method of clause 11, wherein the adjustment of the focus adjustment module is modified depending on whether a corrective lens is determined to be present.
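The control loop of clause 1 can be sketched end to end: map the analyzed gaze direction to a pixel in the displayed image, look up that region's depth, and command the focus adjustment module. The field-of-view and resolution figures below are assumed example values, and depth_map / set_focus_m are hypothetical stand-ins for renderer and actuator hooks that the clause does not specify:

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    yaw_deg: float    # relative horizontal gaze direction from the eye image
    pitch_deg: float  # relative vertical gaze direction from the eye image

def gaze_to_pixel(g, fov_h=50.0, fov_v=28.0, res=(1920, 1080)):
    """Map a gaze direction to the displayed-image pixel it falls on,
    assuming a linear angle-to-pixel mapping over the display field of view."""
    x = (g.yaw_deg / fov_h + 0.5) * res[0]
    y = (g.pitch_deg / fov_v + 0.5) * res[1]
    return (int(min(max(x, 0), res[0] - 1)),
            int(min(max(y, 0), res[1] - 1)))

def update_focus(gaze, depth_map, set_focus_m):
    """One iteration of the clause 1 loop: find the image region under the
    gaze and drive the focus adjustment module to that region's depth."""
    x, y = gaze_to_pixel(gaze)
    set_focus_m(depth_map[y][x])

depth_map = [[2.0] * 1920 for _ in range(1080)]  # per-pixel depth, meters
update_focus(GazeSample(5.0, -2.0), depth_map,
             lambda d: print(f"focus -> {d} m"))
```

Clause 7's continuous adjustment corresponds to running update_focus every frame; clause 5's vergence step would additionally re-render the stereo pair at the same distance.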
Clause group G3
Clause 1. a head mounted display system for viewing an augmented reality image, the augmented reality image comprising a displayed image of an object overlaid onto a see-through view of a surrounding environment, the head mounted display system comprising: an integrated processor providing an image; a focus adjustment module comprising an image source, an actuator, and a guide mechanism; multi-fold display optics comprising one or more lenses; an illumination source; and a combiner that reflects a portion of the light from the image source and transmits a portion of the light from the surrounding environment.
Clause 2. the head mounted display of clause 1, wherein the multi-fold display optics comprise a fold mirror between the focus adjustment module and the combiner.
Clause 3. the head mounted display of clause 2, wherein the fold mirror folds the display optics to the side of the combiner to reduce the height of the display optics.
Clause 4. the head mounted display of clause 1, wherein the multi-fold display optics comprise a prism.
Clause 5. the head mounted display of clause 1, wherein the display optics are telecentric at an image source.
Clause 6. the head mounted display of clause 5, wherein the illumination source provides telecentric illumination of the image source.
Clause 7. the head mounted display of clause 1, further comprising a camera that captures an image of the user's eye.
Clause 8. the head mounted display of clause 7, wherein the captured image of the user's eye is reflected from the combiner before being captured by the camera.
Clause 9. the head-mounted display of clause 7, further comprising an LED to illuminate the user's eye.
Clause 10 the head mounted display of clause 9, wherein the LED provides infrared light to which the camera is sensitive.
Clause 11. the head mounted display of clause 1, further comprising an inertial measurement unit.
Clause 12 the head mounted display of clause 1, wherein the actuator is a solenoid coupled to a pair of wedges.
Clause 13. the head mounted display of clause 1, wherein the actuator is a pair of bimorph piezoelectric actuators.
Clause 14. the head mounted display of clause 1, wherein the actuator is a scissor jack actuator.
Clause 15. the head mounted display of clause 1, wherein the actuator is a voice coil motor.
Clause 16. the head mounted display of clause 1, wherein the guide mechanism is a leaf spring.
Clause 17. the head mounted display of clause 1, wherein the guide mechanism is a four-bar linkage mechanism.
Clause 18. the head mounted display of clause 3, wherein the focus adjustment module is located to the side of the combiner, the image source being oriented approximately vertically and closer to the multi-fold display optics than the actuator and guide mechanism.
Clause 19. the head mounted display of clause 7, wherein the eye camera comprises autofocus.
Clause 20. the head mounted display of clause 19, wherein a focus setting associated with the autofocus is used to make an automatic adjustment of the focus adjustment module.
Clause group H
Clause 1. a method, comprising: illuminating the user's eye with an illumination source in the head-mounted display; capturing an image of a user's eye with an eye camera in a head-mounted display, wherein the image comprises eye glints produced by light from an illumination source reflected from a surface of the user's eye; and identifying a change in focus distance for the user's eye corresponding to the change in the size of the eye glints.
Clause 2. the method of clause 1, wherein the illumination source is an LED.
Clause 3. the method of clause 1, wherein the illumination source is a display image from a head mounted display.
Clause 4. the method of clause 1, wherein the identified change in focus distance is used to determine what the user is looking at in the surrounding environment.
Clause 5. the method of clause 1, wherein the identified change in focus distance is used to automatically select a display mode for the head mounted display.
Clause 6. the method of clause 5, wherein the display mode comprises whether the displayed image should be made brighter or dimmer.
Clause 7. the method of clause 1, wherein the identified change in focus distance is used to determine whether the user is looking at the display image or the user is looking at the surrounding environment.
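A minimal sketch of the glint-size method of clause 1, assuming a grayscale eye image and a simple saturated-pixel estimate of glint diameter (a real implementation would segment connected components rather than count all bright pixels, and the 10% change threshold is an assumption, not a value from the clauses):

```python
def glint_diameter_px(eye_image, threshold=240):
    """Estimate the glint diameter as the equivalent diameter of the bright
    specular blob in a grayscale eye image, by counting saturated pixels
    and treating them as one circular spot."""
    bright = sum(1 for row in eye_image for v in row if v >= threshold)
    return 2 * (bright / 3.14159) ** 0.5

def focus_changed(prev_d, new_d, rel_tol=0.10):
    """Flag a change in the user's focus distance when the glint size moves
    by more than an assumed 10% between frames; per clause 1, the glint
    size changes as the eye refocuses."""
    return abs(new_d - prev_d) > rel_tol * prev_d

d0 = glint_diameter_px([[250] * 3] * 3)  # 9 saturated pixels -> ~3.4 px
print(focus_changed(d0, d0 * 1.2))       # True: 20% growth exceeds threshold
```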

Claims (20)

1. A head mounted display with a wide display field of view providing a highly transmissive see-through view of the surrounding environment with an overlaid high contrast display image, comprising:
a. an upper optic having a first optical axis, comprising:
i. an emissive image source providing image light comprising one or more narrow spectral bands of light;
ii. one or more lenses;
iii. a stray light trap; and
b. a non-polarizing lower optic having a second optical axis, comprising:
i. a planar beam splitter angled with respect to the first and second optical axes; and
ii. a curved partial mirror;
wherein the upper and lower optics are designed to provide a central sharp area and a less sharp peripheral area around the central sharp area, corresponding to the acuity of the human eye as the eye moves.
2. The head mounted display of claim 1, wherein the central sharp region is +/-15 degrees or less.
3. The head mounted display of claim 1, wherein the central sharp region is +/-20 degrees or less.
4. The head mounted display of claim 1, wherein the peripheral region extends to at least +/-25 degrees from a central sharp region.
5. The head mounted display of claim 1, wherein lateral chromatic aberration of multiple pixels is present in portions of the displayed image corresponding to the peripheral region.
6. The head mounted display of claim 5, wherein the lateral chromatic aberration is 5 pixels or more.
7. The head mounted display of claim 1, wherein the emissive image source provides a cone of image light that is 100 degrees or greater of included angle.
8. The head mounted display of claim 1, wherein the upper optic has an f # of 2.5 or faster.
9. The head mounted display of claim 1, wherein the emissive image source comprises pixels and sub-pixels, and the emissive image source resolution and display field of view are selected such that each sub-pixel subtends less than 1/50 of a degree within the wide display field of view to make adjacent sub-pixels unresolvable.
10. The head mounted display of claim 1, wherein the emissive image source comprises pixels, and the emissive image source and the display field of view are selected such that each pixel subtends less than approximately 1/30 of a degree within the wide display field of view to render adjacent colored pixels indistinguishable.
11. The head mounted display of claim 10, wherein the emissive image source is a 1080p display and the display field of view is less than 73 degrees diagonal.
12. The head mounted display of claim 1, wherein the emissive image source comprises pixels, and the emissive image source and the display field of view are selected such that each pixel subtends less than approximately 1/50 of a degree within the wide display field of view to render adjacent black and white pixels indistinguishable.
13. The head mounted display of claim 1, further comprising a light trap that captures light reflected back to the emissive image source.
14. The head mounted display of claim 13, wherein the light trap comprises a polarizer sandwiched between quarter-wave films on both sides.
15. The head mounted display of claim 13, wherein the light trap is positioned between the upper optic and the lower optic.
16. The head mounted display of claim 1, wherein the wide display field of view comprises an included angle of 50 degrees or more.
17. The head mounted display of claim 1, wherein the wide display field of view comprises a format ratio greater than 22:9 to enable reduced thickness of the optics.
18. The head mounted display of claim 1, wherein an outer edge of the central sharp region has an MTF above 20% at the Nyquist frequency of the emissive image source, and an outer edge of the peripheral region has an MTF less than 20% at the Nyquist frequency of the emissive image source.
19. The head mounted display of claim 1, wherein an outer edge of the central sharp region of the optics has an MTF above 20% at the Nyquist frequency of the emissive image source and an outer edge of the peripheral region has an MTF less than 20% at 1/2 the Nyquist frequency of the emissive image source.
20. The head mounted display of claim 1, wherein the lower optics comprise a notch mirror.
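The 73-degree bound in claim 11 follows directly from the 1/30-degree pixel subtense of claim 10; a quick arithmetic check:

```python
import math

# Claims 10-11: a 1080p emissive image source has sqrt(1920^2 + 1080^2),
# about 2203, pixels along its diagonal. If each pixel may subtend at most
# 1/30 of a degree, the largest diagonal display field of view that keeps
# adjacent colored pixels unresolvable is about 2203/30, roughly 73 degrees,
# matching the bound recited in claim 11.
diag_px = math.hypot(1920, 1080)
print(f"diagonal pixels: {diag_px:.0f}")
print(f"max diagonal FOV at 1/30 deg/pixel: {diag_px / 30:.1f} deg")
```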
CN201680002425.3A 2015-02-17 2016-02-16 See-through computer display system Active CN106662750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110186961.6A CN113671703A (en) 2015-02-17 2016-02-16 See-through computer display system

Applications Claiming Priority (17)

Application Number Priority Date Filing Date Title
US14/623,932 US20160239985A1 (en) 2015-02-17 2015-02-17 See-through computer display systems
US14/623932 2015-02-17
US14/635,390 US20150205135A1 (en) 2014-01-21 2015-03-02 See-through computer display systems
US14/635390 2015-03-02
US14/670677 2015-03-27
US14/670,677 US20160286203A1 (en) 2015-03-27 2015-03-27 See-through computer display systems
US14/741943 2015-06-17
US14/741,943 US20160018645A1 (en) 2014-01-24 2015-06-17 See-through computer display systems
US14/813969 2015-07-30
US14/813,969 US9494800B2 (en) 2014-01-21 2015-07-30 See-through computer display systems
US14/851,755 US9651784B2 (en) 2014-01-21 2015-09-11 See-through computer display systems
US14/851755 2015-09-11
US14/861,496 US9753288B2 (en) 2014-01-21 2015-09-22 See-through computer display systems
US14/861496 2015-09-22
US14/884,567 US9836122B2 (en) 2014-01-21 2015-10-15 Eye glint imaging in see-through computer display systems
US14/884567 2015-10-15
PCT/US2016/018040 WO2016133886A1 (en) 2015-02-17 2016-02-16 See-through computer display systems

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110186961.6A Division CN113671703A (en) 2015-02-17 2016-02-16 See-through computer display system

Publications (2)

Publication Number Publication Date
CN106662750A CN106662750A (en) 2017-05-10
CN106662750B true CN106662750B (en) 2021-03-12

Family

ID=56692324

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201680002425.3A Active CN106662750B (en) 2015-02-17 2016-02-16 See-through computer display system
CN202110186961.6A Pending CN113671703A (en) 2015-02-17 2016-02-16 See-through computer display system

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110186961.6A Pending CN113671703A (en) 2015-02-17 2016-02-16 See-through computer display system

Country Status (3)

Country Link
EP (1) EP3259632A4 (en)
CN (2) CN106662750B (en)
WO (1) WO2016133886A1 (en)

Families Citing this family (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9952664B2 (en) 2014-01-21 2018-04-24 Osterhout Group, Inc. Eye imaging in head worn computing
US9229233B2 (en) 2014-02-11 2016-01-05 Osterhout Group, Inc. Micro Doppler presentations in head worn computing
US9298007B2 (en) 2014-01-21 2016-03-29 Osterhout Group, Inc. Eye imaging in head worn computing
US9965681B2 (en) 2008-12-16 2018-05-08 Osterhout Group, Inc. Eye imaging in head worn computing
US9400390B2 (en) 2014-01-24 2016-07-26 Osterhout Group, Inc. Peripheral lighting for head worn computing
US10254856B2 (en) 2014-01-17 2019-04-09 Osterhout Group, Inc. External user interface for head worn computing
US10684687B2 (en) 2014-12-03 2020-06-16 Mentor Acquisition One, Llc See-through computer display systems
US9575321B2 (en) 2014-06-09 2017-02-21 Osterhout Group, Inc. Content presentation in head worn computing
US9746686B2 (en) 2014-05-19 2017-08-29 Osterhout Group, Inc. Content position calibration in head worn computing
US9299194B2 (en) 2014-02-14 2016-03-29 Osterhout Group, Inc. Secure sharing in head worn computing
US11103122B2 (en) 2014-07-15 2021-08-31 Mentor Acquisition One, Llc Content presentation in head worn computing
US10649220B2 (en) 2014-06-09 2020-05-12 Mentor Acquisition One, Llc Content presentation in head worn computing
US9841599B2 (en) 2014-06-05 2017-12-12 Osterhout Group, Inc. Optical configurations for head-worn see-through displays
US10191279B2 (en) 2014-03-17 2019-01-29 Osterhout Group, Inc. Eye imaging in head worn computing
US9594246B2 (en) 2014-01-21 2017-03-14 Osterhout Group, Inc. See-through computer display systems
US9810906B2 (en) 2014-06-17 2017-11-07 Osterhout Group, Inc. External user interface for head worn computing
US9939934B2 (en) 2014-01-17 2018-04-10 Osterhout Group, Inc. External user interface for head worn computing
US9829707B2 (en) 2014-08-12 2017-11-28 Osterhout Group, Inc. Measuring content brightness in head worn computing
US20160019715A1 (en) 2014-07-15 2016-01-21 Osterhout Group, Inc. Content presentation in head worn computing
US11737666B2 (en) 2014-01-21 2023-08-29 Mentor Acquisition One, Llc Eye imaging in head worn computing
US9532715B2 (en) 2014-01-21 2017-01-03 Osterhout Group, Inc. Eye imaging in head worn computing
US20150205135A1 (en) 2014-01-21 2015-07-23 Osterhout Group, Inc. See-through computer display systems
US9651784B2 (en) 2014-01-21 2017-05-16 Osterhout Group, Inc. See-through computer display systems
US11487110B2 (en) 2014-01-21 2022-11-01 Mentor Acquisition One, Llc Eye imaging in head worn computing
US9651788B2 (en) 2014-01-21 2017-05-16 Osterhout Group, Inc. See-through computer display systems
US9494800B2 (en) 2014-01-21 2016-11-15 Osterhout Group, Inc. See-through computer display systems
US9766463B2 (en) 2014-01-21 2017-09-19 Osterhout Group, Inc. See-through computer display systems
US9836122B2 (en) 2014-01-21 2017-12-05 Osterhout Group, Inc. Eye glint imaging in see-through computer display systems
US11892644B2 (en) 2014-01-21 2024-02-06 Mentor Acquisition One, Llc See-through computer display systems
US9811159B2 (en) 2014-01-21 2017-11-07 Osterhout Group, Inc. Eye imaging in head worn computing
US11669163B2 (en) 2014-01-21 2023-06-06 Mentor Acquisition One, Llc Eye glint imaging in see-through computer display systems
US9753288B2 (en) 2014-01-21 2017-09-05 Osterhout Group, Inc. See-through computer display systems
US20150241964A1 (en) 2014-02-11 2015-08-27 Osterhout Group, Inc. Eye imaging in head worn computing
US9401540B2 (en) 2014-02-11 2016-07-26 Osterhout Group, Inc. Spatial location presentation in head worn computing
US20160187651A1 (en) 2014-03-28 2016-06-30 Osterhout Group, Inc. Safety for a vehicle operator with an hmd
US9672210B2 (en) 2014-04-25 2017-06-06 Osterhout Group, Inc. Language translation with head-worn computing
US9651787B2 (en) 2014-04-25 2017-05-16 Osterhout Group, Inc. Speaker assembly for headworn computer
US10853589B2 (en) 2014-04-25 2020-12-01 Mentor Acquisition One, Llc Language translation with head-worn computing
US10663740B2 (en) 2014-06-09 2020-05-26 Mentor Acquisition One, Llc Content presentation in head worn computing
US9684172B2 (en) 2014-12-03 2017-06-20 Osterhout Group, Inc. Head worn computer display systems
USD751552S1 (en) 2014-12-31 2016-03-15 Osterhout Group, Inc. Computer glasses
USD753114S1 (en) 2015-01-05 2016-04-05 Osterhout Group, Inc. Air mouse
US20160239985A1 (en) 2015-02-17 2016-08-18 Osterhout Group, Inc. See-through computer display systems
TWI760955B (en) * 2016-04-26 2022-04-11 荷蘭商露明控股公司 Led flash ring surrounding camera lens and methods for providing illumination
US20190278090A1 (en) * 2016-11-30 2019-09-12 Nova-Sight Ltd. Methods and devices for displaying image with changed field of view
CN106803988B (en) * 2017-01-03 2019-12-17 苏州佳世达电通有限公司 Information transmission system and information transmission method
KR102242282B1 (en) * 2017-06-26 2021-04-20 후아웨이 테크놀러지 컴퍼니 리미티드 Power supply
EP3531244A1 (en) * 2018-02-26 2019-08-28 Thomson Licensing Method, apparatus and system providing alternative reality environment
CN109932806B (en) * 2017-12-18 2021-06-08 中强光电股份有限公司 Optical lens
US20190204910A1 (en) * 2018-01-02 2019-07-04 Microsoft Technology Licensing, Llc Saccadic breakthrough mitigation for near-eye display
US10726765B2 (en) 2018-02-15 2020-07-28 Valve Corporation Using tracking of display device to control image display
JP7067185B2 (en) * 2018-03-27 2022-05-16 セイコーエプソン株式会社 Display device
CN110596889A (en) * 2018-06-13 2019-12-20 托比股份公司 Eye tracking device and method of manufacturing an eye tracking device
WO2020023266A1 (en) * 2018-07-23 2020-01-30 Magic Leap, Inc. Systems and methods for external light management
US20200033560A1 (en) * 2018-07-30 2020-01-30 Apple Inc. Electronic Device System With Supplemental Lenses
EP3850420A4 (en) * 2018-09-14 2021-11-10 Magic Leap, Inc. Systems and methods for external light management
CN110618529A (en) * 2018-09-17 2019-12-27 武汉美讯半导体有限公司 Light field display system for augmented reality and augmented reality device
JP2020053874A (en) 2018-09-27 2020-04-02 セイコーエプソン株式会社 Head-mounted display device and cover member
JP6958530B2 (en) * 2018-10-10 2021-11-02 セイコーエプソン株式会社 Optical module and head-mounted display
CN109615648B (en) * 2018-12-07 2023-07-14 深圳前海微众银行股份有限公司 Depth of field data conversion method, device, equipment and computer readable storage medium
CN109683317A (en) * 2018-12-28 2019-04-26 北京灵犀微光科技有限公司 Augmented reality eyepiece device and augmented reality display device
CN110174767B (en) * 2019-05-13 2024-02-27 成都工业学院 Super-multi-view near-to-eye display device
US10884241B2 (en) * 2019-06-07 2021-01-05 Facebook Technologies, Llc Optical element for reducing stray infrared light
CN112433327B (en) * 2019-08-06 2022-12-20 三赢科技(深圳)有限公司 Lens module and electronic device
CN114270229A (en) 2019-08-21 2022-04-01 奇跃公司 Flat spectral response grating using high refractive index materials
CN110865458A (en) * 2019-11-29 2020-03-06 联想(北京)有限公司 Head-mounted equipment
TWI745867B (en) * 2020-02-18 2021-11-11 宏碁股份有限公司 Method for adjusting blue light components and head-mounted display
GB2597099A (en) * 2020-07-15 2022-01-19 Sony Interactive Entertainment Inc Head mounted display
CN112147639A (en) * 2020-07-17 2020-12-29 中国工程物理研究院应用电子学研究所 MEMS one-dimensional laser radar and digital camera surveying and mapping device and method
CN111816113A (en) 2020-07-29 2020-10-23 昆山工研院新型平板显示技术中心有限公司 Brightness compensation method, device and equipment of display panel
CN114093293B (en) 2020-07-29 2024-03-19 昆山工研院新型平板显示技术中心有限公司 Luminance compensation parameter determination method, device and equipment
CN112150940B (en) * 2020-09-22 2024-05-14 郑州胜龙信息技术股份有限公司 Grating type screen display system for information safety protection
JP2022163813A (en) * 2021-04-15 2022-10-27 キヤノン株式会社 Wearable information terminal, control method for the same, and program
TWI798853B (en) * 2021-10-01 2023-04-11 佐臻股份有限公司 Augmented reality display device
WO2023096807A1 (en) * 2021-11-29 2023-06-01 Corning Incorporated Triple notched filter coverglass for high ambient contrast display
CN114322944B (en) * 2021-12-24 2023-09-12 中国科学院长春光学精密机械与物理研究所 Coaxial foldback type navigation and spectrum integrated optical system
CN116413914A (en) * 2022-01-05 2023-07-11 华为技术有限公司 Display device and vehicle
WO2023131980A1 (en) * 2022-01-07 2023-07-13 Tesseract Imaging Limited Apparatus for viewing optical images and method thereof
WO2024023502A1 (en) * 2022-07-27 2024-02-01 Bae Systems Plc A coating for optical surfaces

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5483307A (en) * 1994-09-29 1996-01-09 Texas Instruments, Inc. Wide field of view head-mounted display
JP4048844B2 (en) * 2002-06-17 2008-02-20 カシオ計算機株式会社 Surface light source and display device using the same
US7791889B2 (en) * 2005-02-16 2010-09-07 Hewlett-Packard Development Company, L.P. Redundant power beneath circuit board
US8570656B1 (en) * 2009-04-06 2013-10-29 Paul Weissman See-through optical system
DE112012001032T5 (en) * 2011-02-28 2014-01-30 Osterhout Group, Inc. Lighting control in displays to be worn on the head
JP6111636B2 (en) * 2012-02-24 2017-04-12 セイコーエプソン株式会社 Virtual image display device
TWI498771B (en) * 2012-07-06 2015-09-01 Pixart Imaging Inc Gesture recognition system and glasses with gesture recognition function
US9069115B2 (en) * 2013-04-25 2015-06-30 Google Inc. Edge configurations for reducing artifacts in eyepieces
US20150205135A1 (en) * 2014-01-21 2015-07-23 Osterhout Group, Inc. See-through computer display systems
CN203858415U (en) * 2014-04-03 2014-10-01 中航华东光电(上海)有限公司 Head-mounted display structure

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020060851A1 (en) * 2000-09-27 2002-05-23 Shoichi Yamazaki Image display apparatus and head mounted display using it
US20070075917A1 (en) * 2003-11-21 2007-04-05 Kenji Nishi Image display device and simulation device
US20120242697A1 (en) * 2010-02-28 2012-09-27 Osterhout Group, Inc. See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light
CN202948204U (en) * 2012-12-07 2013-05-22 香港应用科技研究院有限公司 Perspective-type head-mounted display optical system

Also Published As

Publication number Publication date
WO2016133886A1 (en) 2016-08-25
CN106662750A (en) 2017-05-10
EP3259632A1 (en) 2017-12-27
CN113671703A (en) 2021-11-19
EP3259632A4 (en) 2018-02-28

Similar Documents

Publication Publication Date Title
US11353957B2 (en) Eye glint imaging in see-through computer display systems
US11947126B2 (en) See-through computer display systems
US11947120B2 (en) Image expansion optic for head-worn computer
US11768417B2 (en) Electrochromic systems for head-worn computer systems
CN106662750B (en) See-through computer display system
US10866420B2 (en) See-through computer display systems
US10073266B2 (en) See-through computer display systems
US20180035101A1 (en) See-through computer display systems
US11669163B2 (en) Eye glint imaging in see-through computer display systems
WO2017066556A1 (en) Compact optical system for head-worn computer
WO2017151872A1 (en) Speaker systems for head-worn computer systems
US20240192502A1 (en) Image expansion optic for head-worn computer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211103

Address after: Florida, USA

Patentee after: Mentor Acquisition One, LLC

Address before: Delaware, USA

Patentee before: JGB mortgage Co.,Ltd.

Effective date of registration: 20211103

Address after: Delaware, USA

Patentee after: JGB mortgage Co.,Ltd.

Address before: California, USA

Patentee before: Osterhout Group Inc.