CN114675418A - Ultra lightweight wearable display device and method for display device

Info

Publication number: CN114675418A (application CN202210167596.9A)
Authority: CN (China)
Prior art keywords: image, optical, display device, resolution, images
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 胡大文
Current Assignee: Individual
Original Assignee: Individual
Priority date: 2021-05-08 (priority claimed from US17/315,302, now US11231589B2)
Filing date: 2022-02-23
Publication date: 2022-06-28
Application filed by Individual

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type

Abstract

Techniques for reducing the weight of wearable display devices are disclosed. In one embodiment of the invention, the wearable display device, instead of containing any electronic components, is coupled to a housing box by a transmission line comprising one or more optical fibers, which transport content or optical images from one end to the other via total internal reflection within the fibers. The optical image is picked up by a focusing lens from a microdisplay in the housing. The optical image travels through the fiber at a lower resolution but at twice the normal refresh rate (e.g., 120Hz versus 60Hz). Two successive lower-resolution images are combined at or near the other end of the fiber to produce a higher-resolution image, and the combined image refreshes at the generally normal refresh rate.

Description

Ultra lightweight wearable display device and method for display device
Technical Field
The present invention relates generally to the field of display devices, and more particularly to the architecture and design of display devices fabricated as a pair of glasses that can be used in a variety of applications including virtual reality and augmented reality. In particular, the present invention uses thinner optical fibers to transmit optical images across the eyewear, further reducing the weight of the eyewear and reducing the potential effects on the transmission line caused by devices often attached between the eyewear and the frame, or by the load carried by the wearer of the eyewear.
Background
Virtual reality, or VR, is generally defined as a realistic and immersive simulation of a three-dimensional environment, created using interactive software and hardware and experienced or controlled by movement of the body. A person using virtual reality equipment is typically able to look around the artificially generated three-dimensional environment, walk around in it, and interact with features or objects depicted on a screen or in goggles. Virtual reality artificially creates sensory experiences, which may include sight, touch, hearing, and, less commonly, smell.
Augmented Reality (AR) is a technology that layers computer-generated enhancements on top of existing reality to make it more meaningful through the ability to interact with it. AR is developed into applications and used on mobile devices to blend digital components into the real world in such a way that they augment each other but can also be easily told apart. AR technology is quickly becoming mainstream. It is used to display score overlays on televised sports games and to pop up 3D emails, photos, or text messages on mobile devices. Leaders of the technology industry are also using AR to do exciting and revolutionary things with holograms and motion-activated commands.
Separately, virtual reality and augmented reality differ in their delivery methods. As of 2016, most virtual reality is displayed on a computer monitor, a projector screen, or through a virtual reality headset (also known as a head-mounted display or HMD). HMDs typically take the form of head-mounted goggles with a screen in front of the eyes. Virtual reality actually brings the user into the digital world by cutting off external stimuli, so that the user is focused solely on the digital content displayed in the HMD. Augmented reality is increasingly used on mobile devices, such as laptops, smartphones, and tablets, to change how the real world and digital images and graphics intersect and interact.
Indeed, VR and AR are not always opposed, because they do not always operate independently of each other; they are often mixed together to create a more immersive experience. For example, haptic feedback, vibration and sensation added to interaction with graphics, is considered an augmentation. However, it is commonly used within virtual reality scenes to make the experience more realistic through touch.
Virtual reality and augmented reality are prominent examples of experiences and interactions driven by the desire to become immersed in a simulated platform for entertainment and gaming, or to add a new dimension to the interaction between digital devices and the real world. Individually or mixed together, they are undoubtedly opening up both real and virtual worlds.
Fig. 1A shows exemplary goggles for delivering or displaying VR or AR applications as are common on the market today. Regardless of the design of the goggles, they appear to be bulky and cumbersome and create inconvenience when worn by the user. Furthermore, most goggles are not see-through. In other words, when the user wears the goggles, he or she will not be able to see or do anything else. Therefore, there is a need for a device that can display VR and AR and also allow the user to perform other tasks when needed.
Various wearable devices are being developed for VR/AR and holographic applications. Fig. 1B shows a simplified diagram of HoloLens from Microsoft. At 579g (1.2lbs), it makes the wearer uncomfortable after being worn for a period of time. In fact, the products available on the market are generally big and heavy compared with normal eyeglasses (25g-100g). There are reports that wearable devices based on the Microsoft HoloLens would be supplied to the United States military. If soldiers were truly so equipped, the weight of the wearable device would likely greatly affect their movements, particularly on a battlefield where rapid movement is required. Thus, there is a further need for a wearable AR/VR viewing or display device that looks similar to a pair of ordinary eyeglasses while also allowing a smaller footprint, enhanced impact performance, low-cost packaging, and an easier manufacturing process.
Many eyeglass-type display devices use a common design that places an image-forming component (e.g., LCoS) in front of or near the lens frame, which is desirable for reducing image transmission loss and using fewer components. However, such designs often unbalance the eyeglass-type display: the front portion is much heavier than the rear portion, adding pressure on the nose. Therefore, there is still a need to distribute the weight of such display devices when they are worn by the user.
Regardless of how the wearable display device is designed, there are still many components, wires, and even batteries that must be used to make the display device functional and operable. While much effort has been made to move as many parts as possible into an attachable device or housing that drives the display device from the user's waist or pocket, copper wires or the like must still be used where necessary to transmit various control signals and image data. The wires, typically in the form of cables, have a weight that adds stress on the wearer of such display devices. Therefore, there is still a need for a transmission medium that is as light as possible without sacrificing the desired functionality.
There are many other needs not individually listed here; one of ordinary skill in the art will readily appreciate that one or more embodiments of the present invention detailed herein clearly satisfy them.
Disclosure of Invention
This section summarizes some aspects of the present invention and briefly introduces some preferred embodiments. Simplifications or omissions may be made in this section, as well as in the abstract and the title, to avoid obscuring their purpose. Such simplifications or omissions are not intended to limit the scope of the present invention.
The present invention relates generally to the architecture and design of wearable devices that can be used for virtual reality and augmented reality applications. According to one aspect of the invention, a display device is made in the form of a pair of eyeglasses and includes a minimum number of parts to reduce its complexity and weight. A separate shell or housing is provided that is portable and can be attached to a user (e.g., at a pocket or belt). The housing contains all the necessary parts and circuitry to generate content for virtual reality and augmented reality applications, so that a minimum number of parts is required on the eyewear, making the eyewear smaller in footprint, enhanced in impact performance, lower in packaging cost, and easier to manufacture. The content is optically picked up by a fiber optic cable and transported to the eyewear through the optical fibers in the cable, where the content is respectively projected onto tailored lenses to display the content in front of the wearer's eyes.
According to another aspect of the invention, the glasses contain no electronic components and are coupled to a housing by a transmission line comprising one or more optical fibers (singular and plural forms are used interchangeably hereafter), wherein the optical fibers are responsible for transporting the content or optical image from one end of the fiber to the other by total internal reflection within the fiber. An optical image is picked up by a focusing lens from the microdisplay in the housing.
According to yet another aspect of the invention, the optical image is at a lower resolution in the fiber but travels at twice the normal refresh rate (e.g., 120Hz versus 60Hz). Two successive frames of the lower-resolution images are combined near the other end of the fiber to produce a higher-resolution image, and the combined image refreshes at the generally normal refresh rate.
According to yet another aspect of the invention, each lens comprises a prism formed so as to propagate an optical image projected onto one edge of the prism along an optical path where a user can see an image formed from the optical image. The prism is further integrated with or stacked on an optical corrective lens that is complementary or reciprocal in shape to the prism, forming an integrated lens of the eyewear. The optical corrective lens is provided to correct the optical path from the prism, allowing the user to view through the integrated lens without optical distortion.
According to yet another aspect of the invention, an exemplary prism is a waveguide. Each of the integrated lenses includes an optical waveguide that propagates an optical image projected onto one end of the waveguide to the other end through an optical path where an image formed from the optical image is visible to a user. The waveguide may also be integrated with or stacked on an optical corrective lens to form an integrated lens of the eyewear.
According to yet another aspect of the invention, the integrated lens may also be coated with a multilayer film having optical properties to enhance the optical image in front of the user's eye.
According to yet another aspect of the invention, the glasses include several electronic devices (e.g., sensors or microphones) to enable various interactions between the wearer and the displayed content. The signal captured by the device (e.g., depth sensor) is transmitted to the housing by wireless means (e.g., RF wireless or bluetooth) to eliminate the wired connection between the glasses and the housing.
According to yet another aspect of the present invention, an optical conduit is used to convey an optical image received from an image source (e.g., a miniature display). The optical conduit is enclosed in or integrated with a temple of the display device. Depending on the embodiment, the optical conduit including the bundle or array of optical fibers may be twisted, thinned, or otherwise deformed to fit the fashion design of the temple while transporting the optical image from one end of the temple to the other.
According to yet another aspect of the present invention, the portable device may be implemented as a stand-alone device or a docking unit to receive a smartphone. The portable device is essentially a control box connected to a network, such as the internet, and generates control and command signals when controlled by a user. When the smartphone is received in the docking unit, many of the functions provided in the smartphone, such as the network interface and the touch screen, may be used to receive input from the user.
The present invention may be embodied as an apparatus, a method, or a system, and different embodiments may yield different benefits, objects, and advantages. In one embodiment, the present invention is a display device comprising: a frame for glasses; at least one integrated lens comprising an optical waveguide lens, wherein the integrated lens is framed in the eyeglass frame; at least one temple attached to the eyeglass frame; and a set of optical fibers having a first end and a second end, wherein the first end receives a sequence of two-dimensional optical images that are transported from the first end to the second end by total internal reflection within the optical fibers, and wherein no other power-driven electronic components are required in the display device to receive the two-dimensional optical images, which are delivered to the integrated lens and formed in the optical waveguide lens to be viewed by a viewer looking at the integrated lens. In one embodiment, the data image that produces the two-dimensional optical images is at a first refresh rate and a first resolution, and two successive two-dimensional optical images displayed in the integrated lens result in a combined composite optical image at a second refresh rate and a second resolution, where the first refresh rate is 2 × the second refresh rate and the first resolution is 1/2 × the second resolution (e.g., the first refresh rate is 120Hz, and the first resolution is 640 × 480).
In another embodiment, the present invention is a method for a display device, the method comprising: providing a set of optical fibers having a first end and a second end, wherein the first end receives a sequence of two-dimensional optical images projected thereon and the second end is coupled to an integrated lens, the integrated lens comprising an optical waveguide lens and being framed in an eyeglass frame to which at least one temple is attached; transporting the two-dimensional optical images sequentially within the optical fibers from the first end to the second end by total internal reflection; and projecting the two-dimensional optical images into the optical waveguide lens, where they are formed for viewing by a viewer looking at the integrated lens, wherein no other power-driven electronic components are required in the display device to receive the two-dimensional optical images delivered to the integrated lens.
There are other objects, together with the foregoing, that are attained in the exercise of the invention in the following description and result in the embodiments illustrated in the accompanying drawings.
Drawings
These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
FIG. 1A illustrates exemplary goggles for delivering or displaying VR or AR applications that are common on the market today;
FIG. 1B shows a simplified diagram of HoloLens from Microsoft;
FIG. 2A shows an exemplary pair of eyeglasses that may be used for VR applications according to one embodiment of the present invention;
FIG. 2B illustrates the use of an optical fiber to transport light along a curved path from one location to another in a more efficient manner by total internal reflection within the optical fiber;
FIG. 2C illustrates two exemplary ways of encapsulating an optical fiber or fibers according to one embodiment of the invention;
FIG. 2D shows how an image is carried from the microdisplay to the imaging medium by a fiber optic cable;
FIG. 2E illustrates an exemplary set of Variable Focus Elements (VFE) to accommodate adjustment of the projection of an image onto an optical object (e.g., an imaging medium or prism);
FIG. 2F illustrates an exemplary lens that may be used in the eyewear shown in FIG. 2A, wherein the lens comprises two portions, a prism and an optical corrective lens or corrector;
FIG. 2G shows internal reflections from multiple sources (e.g., a sensor, an imaging medium, and multiple light sources) in an irregular prism;
FIG. 2H shows a comparison of such an integrated lens with a coin and a ruler;
fig. 2I shows a shirt with the cable enclosed within or attached to the shirt;
FIG. 3A illustrates how three single color images are visually combined and perceived by human vision as a full color image;
FIG. 3B shows that three different color images are produced under three lights at wavelengths λ 1, λ 2, and λ 3, respectively, and the imaging medium contains three films, each film coated with one type of phosphor.
FIG. 4 illustrates the use of a waveguide to transport an optical image from one end of the waveguide to its other end;
FIG. 5A illustrates an exemplary functional block diagram that may be used with a separate shell or housing to generate content for virtual reality and augmented reality for display on the exemplary eyewear of FIG. 2A;
FIG. 5B shows an embodiment according to which an exemplary circuit is used in a single housing device box (also referred to herein as an image engine);
FIG. 5C shows an exemplary embodiment showing how a user may wear a pair of designed display glasses according to one embodiment of the invention.
Fig. 5D shows an exemplary functional block diagram of circuitry for the image engine of fig. 5B, which employs the technique disclosed in U.S. patent No. US 10,147,350, according to one embodiment.
FIG. 6A illustrates an exemplary array of multiple pixel cells, each of which shows four sub-pixel cells.
FIG. 6B illustrates the concept of generating an expanded image from which two frames are derived;
FIG. 6C shows an example of expanding an image into a double-sized image with sub-pixel units, where each pixel value is written to all (four) sub-pixel units in a group, and the expanded image is then separated in two passes to form two frames;
FIG. 6D illustrates how an image is separated by light intensity to produce two frames of the same size as the original image;
FIG. 6E shows another embodiment that expands an input image into two considerably shrunken and interleaved images;
FIG. 7 illustrates how an optical image is generated using an optical cube in one embodiment; and
fig. 8 shows that the display glasses do not include any other power-driven electronic components to provide images or video to the integrated lens.
Detailed Description
The detailed description of the invention is presented largely in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations that are directly or indirectly analogous to data processing devices coupled to a network. These process descriptions and representations are generally used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, the order in which the blocks in the process flow diagrams or illustrations represent one or more embodiments of the invention is not intended to indicate any particular order nor imply any limitations in the invention.
Embodiments of the invention are discussed herein with reference to fig. 2A-7. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes as the invention extends beyond these limited embodiments.
Referring now to the figures, in which like numerals refer to like parts throughout the several views. FIG. 2A shows an exemplary pair of eyeglasses 200 for use in VR/AR applications in accordance with an embodiment of the present invention. The eyewear 200 does not differ significantly in appearance from a normal pair of eyeglasses, but includes two flexible cables 202 and 204 extending from temples 206 and 208, respectively. According to one embodiment, the two flexible cables 202 and 204 are integrated with or removably connected to the temples 206 and 208, respectively, at one end, and include one or more optical fibers. In this context, a temple arm may also simply be referred to as a temple, understood as the supporting side piece of the frame.
Both flexible cables 202 and 204 are coupled at their other ends to a portable computing device 210, where the computing device 210 generates images on a microdisplay that are picked up by the cables. An image is transported through the optical fibers by total internal reflection within the flexible cable 202 to the other end of the fibers, where it is projected onto the lenses in the glasses 200.
According to one embodiment, each of the two flexible cables 202 and 204 contains one or more optical fibers. Optical fibers are used to transmit light from one place to another along a curved path in a more efficient manner, as shown in fig. 2B. In one embodiment, the optical fiber is formed from thousands of strands of very fine quality glass or quartz with an index of refraction on the order of about 1.7. Each strand is extremely thin. The strands are coated with a layer of material of lower refractive index, and their ends are polished and, after careful alignment, clamped firmly together. When light is incident on one end at a small angle, it is refracted into the strand (or fiber) and strikes the interface between the fiber and its coating. At angles of incidence greater than the critical angle, the light undergoes total internal reflection and is essentially transported from one end to the other, even when the fiber is bent. Depending on the embodiment of the present invention, a single optical fiber or a plurality of optical fibers arranged in parallel may be used to transport an optical image projected onto one end of the fiber to its other end. In general, a higher-resolution image requires more optical fibers to transmit. As will be described below, the number of fibers can be kept small by transmitting images at a first (lower) resolution; two such images (e.g., two successive images), sent at double the refresh rate, are combined after transmission to produce a viewable image at a second (higher) resolution.
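By way of illustration only, the confinement condition can be computed directly from the refractive indices. The following sketch assumes the core index of about 1.7 stated above and an illustrative cladding index of 1.5 (the text only requires that the cladding index be lower); the function name is arbitrary.

```python
import math

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Angle of incidence at the core/cladding interface, measured from
    the surface normal, above which light is totally internally reflected."""
    return math.degrees(math.asin(n_clad / n_core))

# Core index ~1.7 per the text; the cladding index 1.5 is an assumed value.
theta_c = critical_angle_deg(1.7, 1.5)
print(f"critical angle ~ {theta_c:.1f} degrees")
# Rays striking the interface at incidence angles above this angle stay
# confined within the strand, even where the fiber is gently bent.
```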
Fig. 2C shows two exemplary ways of encapsulating an optical fiber or fibers according to one embodiment of the invention. The encapsulated optical fiber may be used as cable 202 or 204 in fig. 2A and extends through each of the non-flexible temples 206 and 208 to its end. According to one embodiment, the temples 206 and 208 are made of a material type common to a pair of ordinary eyeglasses (e.g., plastic or metal), a portion of the cable 202 or 204 is embedded or integrated in the temples 206 or 208, thereby creating a non-flexible portion, while the other portion of the cable 202 or 204 is still flexible. According to another embodiment, the non-flexible portion and the flexible portion of the cable 202 or 204 may be removably connected by an interface or connector.
Reference is now made to fig. 2D, which illustrates how an image is conveyed from the microdisplay 240 to the imaging medium 244 through a fiber optic cable 242. As will be described further below, the imaging medium 244 can be a physical thing (e.g., a film) or a non-physical thing (e.g., air). A microdisplay is a display with a very small screen (e.g., less than one inch). This type of tiny electronic display system was introduced commercially in the late 1990s. The most common applications for microdisplays include rear-projection TVs and head-mounted displays. A microdisplay may be reflective or transmissive, depending on the way light is allowed to pass through the display cells. Through the lens 246, an image (not shown) displayed on the microdisplay 240 is picked up by one end of the optical fiber cable 242, which transports the image to its other end. Another lens 248 is provided to collect the image from the fiber optic cable 242 and project it onto the imaging medium 244. Depending on the implementation, there are different types of microdisplays and imaging media; some embodiments of each are described in detail below.
Fig. 2E illustrates a set of exemplary Variable Focus Elements (VFE) 250 to accommodate adjustment of the projection of an image onto an optical object, such as an imaging medium or prism. To facilitate the description of the various embodiments of the present invention, the presence of an imaging medium is assumed. As shown in fig. 2E, the image 252 conveyed through the fiber optic cable reaches an end surface 254 of the cable. The image 252 is focused onto an imaging medium 258 by a set of lenses 256, referred to herein as a Variable Focus Element (VFE). The VFE 256 is adjusted to ensure that the image 252 is accurately focused onto the imaging medium 258. Depending on the implementation, the adjustment of the VFE 256 may be done manually or automatically based on input (e.g., measurements obtained from sensors). According to one embodiment, the adjustment of the VFE 256 is performed automatically according to a feedback signal derived from a signal sensed by a sensor aimed at the eye (pupil) of the wearer of the eyeglasses 200 of fig. 2A.
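The patent does not specify how the automatic adjustment is carried out; the sketch below is one plausible closed-loop reading, in which the hypothetical callbacks read_defocus (the eye-facing sensor readout) and set_focus (the VFE actuator) stand in for hardware not described here.

```python
def adjust_vfe(read_defocus, set_focus, focus=0.0, gain=0.5,
               tol=0.01, max_iter=100):
    """Nudge the variable-focus element until the signed defocus error
    reported by the sensor falls within tolerance (simple P-control)."""
    for _ in range(max_iter):
        error = read_defocus()      # signed defocus error from the sensor
        if abs(error) < tol:
            break                   # image is focused well enough
        focus -= gain * error       # proportional correction step
        set_focus(focus)            # drive the VFE to the new setting
    return focus
```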
Referring now to FIG. 2F, an exemplary lens 260 that may be used in the eyewear shown in FIG. 2A is shown. The lens 260 includes two portions: a prism 262 and an optical correction lens or corrector 264. The prism 262 and the corrector 264 are stacked to form the lens 260. As the name implies, the optical corrector 264 is provided to correct the optical path from the prism 262 such that light passing through the prism 262 travels straight through the corrector 264. In other words, the refracted light from the prism 262 is corrected or released from refraction by the corrector 264. In optics, a prism is a transparent optical element with flat, polished surfaces that refract light, at least two of which must form an angle between them; the exact angle between the surfaces depends on the application. The traditional geometry is a triangular prism with a triangular base and rectangular sides, and in colloquial use "prism" usually refers to this type. A prism may be made of any material that is transparent at the wavelengths for which it is designed; typical materials include glass, plastic, and fluorspar. According to one embodiment, the prism 262 is not actually shaped as a geometric prism and is therefore referred to herein as an arbitrarily-shaped prism, which requires the corrector 264 to take a shape that is complementary, reciprocal, or conjugate to that of the prism 262 to form the lens 260.
On one edge of the lens 260, i.e., the edge of the prism 262, there are at least three items that utilize the prism 262. Designated 267 is an imaging medium corresponding to the imaging medium 244 of fig. 2D or the imaging medium 258 of fig. 2E. Depending on the embodiment, the image conveyed by the optical fiber 242 of fig. 2D may be projected directly onto the edge of the prism 262, or formed on the imaging medium 267 before being projected onto the edge of the prism 262. In either case, depending on the shape of the prism 262, the projected image is refracted in the prism 262 and subsequently seen by the eye 265. In other words, a user wearing a pair of eyeglasses using the lens 260 may see an image through, or displayed in, the prism 262.
A sensor 266 is provided to image the position or movement of the pupil of the eye 265. Based on the refraction provided by the prism 262, the sensor 266 can locate the pupil. In operation, an image of the eye 265 is captured and analyzed to derive how the pupil views the image shown through or in the lens 260. In AR applications, the position of the pupil may be used to activate some action. Optionally, a light source 268 is provided to illuminate the eye 265 to facilitate image capture by the sensor 266. According to one embodiment, the light source 268 is a nearly invisible source (e.g., near-infrared), so that neither the user nor his eye 265 is disturbed by the light source 268 when it is turned on.
Fig. 2G shows internal reflections from multiple sources (e.g., sensor 266, imaging medium 267, and light source 268) in an irregular prism. Since the prism is uniquely designed, with specifically shaped edges, light from each source is reflected several times within the prism 262 before impinging on the eye 265. For completeness, fig. 2H shows a size comparison of such an integrated lens with a coin and a ruler.
As described above, there are different types of microdisplays and, therefore, different imaging media. The following table summarizes some of the microdisplays that may be used to facilitate the creation of an optical image that may be transported from one end to the other end by one or more optical fibers by total internal reflection within the optical fibers.
[Table: a summary of microdisplay and light-source combinations and their corresponding imaging media; the table image is not reproduced here.]
LCoS = liquid crystal on silicon;
LCD = liquid crystal display;
OLED = organic light emitting diode;
RGB = red, green, and blue; and
SLM = spatial light modulator.
In the first case shown in the above table, a full color image is actually displayed on a silicon substrate. As shown in fig. 2D, the full color image can be picked up by a focusing lens or set of lenses that project the full image onto exactly one end of the optical fiber. The image is transported within the fiber and again picked up by another focusing lens at the other end of the fiber. The imaging medium 244 of fig. 2D may not be physically needed because the delivered image is visible and full color. The color image may be projected directly onto one edge of the prism 262 of fig. 2F.
In the second case shown in the table above, the LCoS is used with different light sources. In particular, there are at least three colored light sources (e.g., red, green, and blue) used sequentially. In other words, each light source produces a single color image. The image picked up by the optical fiber is only a single color image. A full color image can be reproduced when all three different single color images are combined. The imaging medium 244 of fig. 2D is provided to reproduce a full color image from three different single color images respectively conveyed by the optical fibers.
Fig. 2I shows a shirt 270 with a cable 272 enclosed within or attached to shirt 270. Shirt 270 is an example of a fabric material or a multi-layer piece. Such relatively thin cables may be embedded in the multilayer. When a user wears such a shirt made or designed according to one embodiment, the cable itself has less weight and the user is more free to move around.
Fig. 3A shows how three single color images 302 are visually combined and perceived by human vision as a full color image 304. According to one embodiment, three colored light sources are used, for example red, green, and blue light sources that are switched on in sequence. More specifically, when the red light source is turned on, only a red image is produced (e.g., from a microdisplay). The red image is optically picked up and transported by an optical fiber and then projected into the prism 262 of fig. 2F. As the green and blue light sources are then turned on in sequence, green and blue images are generated, conveyed separately by the optical fibers, and then projected into the prism 262 of fig. 2F. It is well known that human vision possesses the ability to combine three single color images and perceive them as a full color image. With all three single color images projected in sequence into the prism and perfectly aligned, the eye sees a full color image.
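A minimal sketch of this field-sequential scheme follows, assuming full-color frames arrive as H x W x 3 arrays; the monochrome fields it produces are what the fiber would carry, projected at three times the original frame rate so that human vision fuses them into one full color image.

```python
import numpy as np

def field_sequential(frames_rgb):
    """Split each full-color frame into sequential R, G, B fields."""
    for frame in frames_rgb:            # frame: H x W x 3 array
        for ch in range(3):             # 0 = red, 1 = green, 2 = blue
            field = np.zeros_like(frame)
            field[..., ch] = frame[..., ch]
            yield field                 # one single-color image at a time
```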
Also in the second case shown above, the light sources may be nearly invisible. According to one embodiment, three light sources generate light close to the UV band. Under such illumination, three different color images may still be generated and delivered, but are not fully visible. Before the color images can be presented to the eye or projected into a prism, they are converted into three primary-color images, which can then be perceived as a full color image. According to one embodiment, the imaging medium 244 of FIG. 2D is provided for this purpose. FIG. 3B shows that three different color images 310 are produced under three light sources at wavelengths λ1, λ2, and λ3, respectively, and the imaging medium 312 includes three film layers 314, each coated with one type of phosphor, i.e., a substance that exhibits luminescence. In one embodiment, three types of phosphors at wavelengths 405nm, 435nm, and 465nm are used to convert the three different color images produced under the three light sources in the near-UV band. In other words, when one such color image is projected onto the film layer coated with the 405nm phosphor, that single color image is converted into a red image that is then focused and projected into the prism. The process is the same for the other two single color images passing through the film layers coated with the 435nm or 465nm phosphors, producing green and blue images. When such red, green, and blue images are projected sequentially into the prism, they are perceived together by human vision as a full-color image.
In the third or fourth case shown in the table above, instead of light in or near the visible spectrum of the human eye, the light source is a laser source; both visible and invisible lasers exist. Operating much like the first and second cases, the third and fourth cases use so-called spatial light modulation (SLM) to form a full color image. "Spatial light modulator" is a general term describing devices that modulate the amplitude, phase, or polarization of light waves in space and time. In other words, an SLM with sequential RGB lasers can produce three separate single-color images; when the color images are combined, with or without an imaging medium, a full color image can be reproduced. In the case of an SLM with invisible lasers, an imaging medium is provided to convert the invisible images into a full color image, in which case appropriate film layers may be used as shown in fig. 3B.
Referring now to FIG. 4, a waveguide 400 is shown transporting an optical image 402 from one end 404 to the other end 406 of the waveguide 400, where the waveguide 400 may be stacked with one or more sheets of glass or lenses (not shown) or coated with one or more film layers to form a lens suitable for a pair of glasses displaying images from a computing device. As known to those skilled in the art, an optical waveguide is a spatially inhomogeneous structure for guiding light, i.e., for restricting the spatial region in which light can propagate; the waveguide contains a region of increased refractive index compared with the surrounding medium (usually called the cladding).
Waveguide 400 is transparent and shaped at end 404 in a suitable manner to allow the image 402 to propagate along the waveguide 400 to end 406, where a user 408 can look through the waveguide 400 to see the propagated image 410. According to one embodiment, one or more film layers are disposed on the waveguide 400 to magnify the propagated image 410 such that the eye 408 sees a significantly magnified image 412. One example of such a film is known as a metalens, essentially an array of thin titania nanosheets on a glass substrate.
Referring now to fig. 5A, an exemplary functional block diagram 500 is shown that may be used with a separate shell or housing to generate virtual reality and augmented reality related content for display on the exemplary eyewear of fig. 2A. As shown in fig. 5A, two micro-displays 502 and 504 are provided to supply content to two lenses in the glasses of fig. 2A, essentially a left image going to the left lens and a right image going to the right lens. Examples of such content are 2D or 3D images and video or holograms. Each of the micro-displays 502 and 504 is driven by a corresponding driver 506 or 508.
The entire circuit 500 is controlled and driven by a controller 510 programmed to produce the content. According to one embodiment, the circuit 500 is designed to communicate with the internet (not shown), receiving the content from other devices. Specifically, the circuit 500 includes an interface that receives the sensing signal wirelessly (e.g., RF or bluetooth) from a remote sensor (e.g., sensor 266 of fig. 2F). The controller 510 is programmed to analyze the sensing signals and provide feedback signals to control certain operations of the glasses, such as the projection mechanism, which includes a focusing mechanism that automatically focuses and projects an optical image onto the edge of the prism 262 of fig. 2F. Further, audio is provided to synchronize with the content, and the audio may be wirelessly transmitted to headphones.
Fig. 5A illustrates an exemplary circuit 500 that generates content for display in a pair of glasses contemplated in one embodiment of the present invention. The circuit 500 shows that there are two micro-displays 502 and 504 for providing two corresponding image or video streams to the two lenses of the glasses in fig. 2A. According to one embodiment, only one microdisplay may be used to drive both lenses of the glasses in fig. 2A. Such circuitry is not provided herein, as those skilled in the art know how the circuitry can be designed or how to modify the circuitry 500 of fig. 5A.
Fig. 5B shows an embodiment according to which an exemplary circuit 500 is used in a single housing device box 516 (also referred to herein as an image engine). The image engine 516 receives an image source or video from the smartphone 518 and also serves as a controller, providing the required interface to allow the wearer or user to manipulate what is received and shown on the display glasses and how to interact with the display. FIG. 5C illustrates an exemplary embodiment showing how a user may wear such display glasses. The display glasses 520 according to this embodiment contain no active (power-driven) electronic components; a pair of optical fibers 522 delivers the images or video. The accompanying sound can be provided by the smartphone 518 directly to a headset (earbud or Bluetooth headset). As will be further described below, the thickness or number of the fibers 522 from the image engine 516 to the glasses 520, used to transmit or transport the low-resolution images and video, can be further reduced.
Fig. 5D illustrates an exemplary circuit 530 according to one embodiment, which employs the technology disclosed in U.S. patent No. US10,147,350, the contents of which are hereby incorporated by reference. As shown in FIG. 5D, the circuit 530 essentially produces two low-resolution images (e.g., 640x480) that are displaced diagonally by half a pixel and have a refresh rate of 120Hz, twice the commonly used "standard" refresh rate of 60Hz in the United States. A refresh rate of 60Hz is typical for most TVs, PC monitors, and smartphones; it means the display is refreshed 60 times a second, in other words, the displayed image is updated (or refreshed) every 16.67 milliseconds (ms). When two such images are refreshed at twice the standard refresh rate, the image resolution perceived by the user on the integrated glasses is doubled, i.e., nominally up to 1280x960.
According to one embodiment, when transmitting video, a displayed image on the display glasses is at a native (first) resolution, e.g., 640x480, or a resolution preset for efficient transmission over the optical fiber, and at a first refresh rate. If the source image is at a higher resolution than the first resolution, it may be reduced to the lower resolution. According to US10,147,350, a duplicate image, diagonally displaced by half a pixel, is produced, resulting in a second image also at the first resolution; both images are projected into the fiber 522 in sequence at twice the refresh rate of the original image, that is to say, the second refresh rate equals twice (2X) the first refresh rate. When the images emerge sequentially from the other end of the fiber, they are seen in the waveguide as an image at the second resolution, which is twice the first resolution.
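As a sketch of this sequencing (and not the patented algorithm itself, which US10,147,350 defines), the half-pixel diagonal displacement is approximated below by bilinear resampling, since a half-pixel offset cannot be represented exactly on the original pixel grid; edge pixels are simply replicated.

```python
import numpy as np

def half_pixel_shift(img: np.ndarray) -> np.ndarray:
    """Approximate a diagonal half-pixel displacement: each output pixel
    averages the 2x2 neighborhood its shifted grid cell would straddle."""
    padded = np.pad(img, ((0, 1), (0, 1)), mode="edge")
    return (padded[:-1, :-1] + padded[1:, :-1] +
            padded[:-1, 1:] + padded[1:, 1:]) / 4.0

def fiber_frame_stream(frames_60hz):
    """Each source frame yields two frames sent at double the refresh
    rate: the original and its half-pixel-shifted duplicate."""
    for frame in frames_60hz:
        yield frame
        yield half_pixel_shift(frame)
```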
Figs. 6A-6E reproduce FIGS. 16A-16E of U.S. patent No. US10,147,350. As described above, the optical image output from the optical fiber in one embodiment of the present invention will be seen at twice the spatial resolution of the input image. Referring to FIG. 6A, a pixel cell array 600 (forming an image or a data image) is shown in which each pixel cell 602 has four sub-pixel cells 604A, 604B, 604C, and 604D. When an input image (e.g., 500x500) at a first resolution is received and displayed at the first resolution, each pixel value is stored in one pixel cell. In other words, the sub-pixel cells 604A, 604B, 604C, and 604D are all written or stored with the same value and addressed simultaneously. As shown in FIG. 6A, a word line (e.g., WL0, WL1, or WL2) addresses the sub-pixels belonging to two columns of the pixel 602 at the same time, and a bit line (e.g., BL0, BL1, or BL2) addresses the sub-pixels belonging to two rows of the pixel 602 at the same time. At any instant, a pixel value is written into the pixel 602 with all of the sub-pixel cells 604A, 604B, 604C, and 604D selected. As a result, the input image is displayed at the first resolution (e.g., 500x500), i.e., at its native resolution.
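Writing one pixel value into every cell of its 2x2 sub-pixel group, as just described, can be emulated with a Kronecker product; the tiny array below is purely illustrative.

```python
import numpy as np

img = np.arange(6, dtype=float).reshape(2, 3)    # toy 2x3 "input image"
subpixel_panel = np.kron(img, np.ones((2, 2)))   # each pixel fills a 2x2
# group of sub-pixel cells, giving a 4x6 panel at the native resolution
```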
Now assume that an input (data) image at the first resolution (e.g., 500x500) is received but is to be displayed at a second resolution (e.g., 1000x1000), where the second resolution is twice the first resolution. According to one embodiment, the sub-pixel cells are used to achieve this viewable resolution. It is important to understand that the improved spatial resolution is a resolution as viewed by the human eye, not an actual doubling of the resolution of the input image. To facilitate the description of the invention, FIGS. 6B and 6C are used to illustrate how an input image is expanded to achieve the viewable resolution.
Now assume that an input image 610 is at a resolution of 500x500. The input image 610 is expanded via data processing 612 (e.g., enlarging and sharpening) to an image 614 of size 1000x1000. FIG. 6C shows an example in which image 616 is expanded to a double-sized image 618 with sub-pixel cells. In operation, each pixel of image 616 is written to a group including all (four) sub-pixel cells (the exemplary sub-pixel group is 2x2). Those skilled in the art will appreciate that the description herein applies immediately to other sub-pixel structures (3x3, 4x4, 5x5, etc.), yielding even higher viewable resolution. According to one embodiment, a sharpening process (e.g., part of the data processing 612 of FIG. 6B) is applied to the expanded image 618 (e.g., filtering, thinning, or sharpening image edges) in preparation for generating two frames from the expanded image 618. In one embodiment, the value of each sub-pixel is computationally recalculated to achieve a better-defined edge, producing image 620; in another embodiment, the values of adjacent pixels are referenced to obtain a sharp edge.
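A sketch of this expansion step under the 2x2 sub-pixel assumption is given below; the unsharp-mask kernel and its strength are illustrative assumptions, as the patent does not fix a particular sharpening method, and intensities are assumed normalized to [0, 1].

```python
import numpy as np

def expand_image(img: np.ndarray, sharpen: float = 0.25) -> np.ndarray:
    """Expand an NxM image onto a 2Nx2M sub-pixel grid (each pixel fills
    its 2x2 group), then lightly sharpen so edges stay well defined."""
    big = np.kron(img, np.ones((2, 2)))          # replicate into sub-pixels
    blur = (np.roll(big, 1, 0) + np.roll(big, -1, 0) +
            np.roll(big, 1, 1) + np.roll(big, -1, 1)) / 4.0
    return np.clip(big + sharpen * (big - blur), 0.0, 1.0)
```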
The processed image 620 is then separated into two images 622 and 624 via a separation process 625. Both images 622 and 624 have the same resolution as the input image (e.g., 500x500), where the sub-pixel cells within each pixel of images 622 and 624 are written or stored with the same value. The pixel-cell boundaries in image 622 are intentionally different from the pixel-cell boundaries in image 624. In one embodiment, the pixel boundaries are offset by half a pixel in the vertical direction (equivalent to one sub-pixel in a 2x2 sub-pixel array) and also by half a pixel in the horizontal direction (equivalent to one sub-pixel in a 2x2 sub-pixel array). The separation process 625 proceeds such that, when images 622 and 624 overlap, the combined image best fits image 620, which has four times the pixel count of the input image 616. In the example of FIG. 6C, to maintain the fixed intensity of the input image 610, the separation process 625 also includes reducing the intensity of each of the two images 622 and 624 by 50%. In operation, the intensity of the first image is reduced by N percent, where N is an integer ranging from 1 to 100 but set to about 50 in practice; as a result, the intensity of the second image is reduced to (100-N) percent. Each of the two images 622 and 624 is displayed at twice the refresh rate of the input image 610. In other words, if the input image is displayed at 50Hz, each of the two images 622 and 624 is displayed at 100Hz. Due to the offset pixel boundaries and the processing of the data, the combined image perceived by the viewer approximates image 620. The offset of the pixel boundaries between the two images 622 and 624 has the effect of "shifting" the pixels. According to another embodiment, as shown by the two pixels 626 and 628, the example illustrated in FIG. 6C is similarly shifted by one (sub-)pixel in the southeast direction.
According to one embodiment, the separation process 625 may be performed by an image algorithm or by a pixel shift, where the pixel shift is by one sub-pixel in the sub-pixel structure shown in FIG. 6A. There are many ways to separate one NxM image by intensity into two images that are each still NxM, so that displaying the two in rapid alternation at double the refresh rate yields the best perceived result. For example, one exemplary approximation is to keep a modified copy of the original image at reduced intensity as the first frame, while the remainder is used to generate the second frame, also at reduced intensity. In another embodiment, the approximation is to shift the first frame (obtained from the original or a modified version) by half (1/2) a pixel (e.g., horizontally and vertically, i.e., diagonally) to generate the second frame; further details are provided later. FIG. 6C shows that the two images 622 and 624 are generated from the processed expanded image 620 by a graphics algorithm, while the two pixels 626 and 628 illustrate diagonally shifting the pixels of the first frame to generate the second frame. It should be noted that separation here means that two frames of the same size as the original image are generated by dividing the image by intensity. FIG. 6D shows an image of two pixels, one at full intensity (shown as black) and the other at half of full intensity (shown as gray). When this two-pixel image is separated into two frames of the same size as the original, the first frame has two pixels both at half of full intensity (shown as gray), and the second frame also has two pixels, one at half of full intensity (shown as gray) and the other at almost zero percent of full intensity (shown as white). There are now twice as many pixels as in the original image, arranged in a checkerboard pattern like a checkers board. Since each pixel is refreshed 60 times per second rather than 120, each pixel has only half the brightness; but because there are twice as many pixels, the brightness of the image as a whole remains the same.
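One plausible reading of the separation process 625 is sketched below under the 2x2 sub-pixel assumption: the two output frames sample the processed sub-pixel image on pixel grids offset diagonally by one sub-pixel, and the intensity is split N percent / (100-N) percent between them. The use of np.roll wraps at the image borders, a detail glossed over in this sketch.

```python
import numpy as np

def separate(expanded: np.ndarray, n: float = 0.5):
    """Split a processed 2Nx2M sub-pixel image into two NxM frames whose
    pixel grids are offset diagonally by one sub-pixel (half a pixel);
    each frame carries part of the intensity so the rapidly alternating
    pair preserves the original overall brightness."""
    def downsample(a: np.ndarray) -> np.ndarray:
        # average each aligned 2x2 sub-pixel group into one pixel
        return (a[0::2, 0::2] + a[1::2, 0::2] +
                a[0::2, 1::2] + a[1::2, 1::2]) / 4.0
    frame1 = n * downsample(expanded)
    shifted = np.roll(expanded, (-1, -1), axis=(0, 1))  # one sub-pixel SE
    frame2 = (1.0 - n) * downsample(shifted)
    return frame1, frame2   # display alternately at 2x the refresh rate
```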
Referring now to FIG. 6E, another embodiment is shown for expanding the input image 610, still assumed to be at 500x500 resolution. Via data processing 612, the input image 610 is expanded to a size of 1000x1000. In this embodiment, it should be appreciated that 1000x1000 is not the resolution of the expanded image; the expanded image holds two considerably shrunken 500x500 images 630 and 632. The expanded view 634 of the shrunken images 630 and 632 shows that the pixels of one image are shrunk enough to allow the pixels of the other image to be generated between them. According to one embodiment of the invention, the first grid image is derived from the input image, and the second image is derived from the first image. As shown in the expanded view 634 of FIG. 6E, an exemplary pixel 636 in the second image 632 is derived from three pixels 638A, 638B, and 638C. In the same way, a displacement of half (1/2) a pixel along a set direction can be applied to generate all the pixels of the second image. At the end of the data processing 612, there is an interleaved image holding the two images 630 and 632, each 500x500. The separation process 625 is then applied to the interleaved image to generate or store the two images 630 and 632.
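A minimal sketch of deriving the interleaved second image follows, assuming the three-neighbor relationship illustrated by pixels 638A, 638B, and 638C; the plain three-pixel average is a stand-in for whatever weighting the patent intends, and np.roll again wraps at the borders.

```python
import numpy as np

def derive_second_image(first: np.ndarray) -> np.ndarray:
    """Estimate each pixel of the second image, which sits half a pixel
    diagonally from the first image's grid, from three nearby source
    pixels (itself, its right neighbor, and its lower neighbor)."""
    right = np.roll(first, -1, axis=1)
    down = np.roll(first, -1, axis=0)
    return (first + right + down) / 3.0
```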
Referring now to one embodiment illustrated in FIG. 7, an optical image is generated using an optical cube 702. An image illuminated by the light source 704 and displayed on a microdisplay (e.g., LCoS or OLED) 706 is projected as an optical image and picked up by the lens 708. The optical image is then transported to the other end via the optical fiber 710, where it is projected into a waveguide or integrated lens 712 via another lens (e.g., a collimator) 714. The optical image is ultimately viewed by a human eye 716 in the waveguide lens 712. Fig. 8 shows that the display glasses 720 do not contain any other power-driven electronic components to provide images or video to the integrated lens.
The invention has been described with a certain degree of particularity. It will be understood by those of skill in the art that the present disclosure of the embodiments is by way of example only, and that various changes in the arrangement and combination of parts may be made without departing from the spirit and scope of the invention. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description of the embodiments.

Claims (14)

1. A display device includes:
a frame for glasses;
at least one integrated lens comprising an optical waveguide lens, wherein the integrated lens is framed in the eyewear frame;
at least one temple attached to said eyeglass frame;
a set of optical fibers having a first end and a second end, wherein the first end receives a sequence of two-dimensional optical images that are transported from the first end to the second end by total internal reflection among the optical fibers,
wherein no other power-driven electronic components are required in the display device to receive the two-dimensional optical image that is transmitted to the integrated lens, the two-dimensional optical image being formed in the optical waveguide lens to be seen by a viewer looking at the integrated lens.
2. The display device according to claim 1, wherein: the optical fiber is disposed along the temple.
3. The display device according to claim 2, wherein: the optical fiber is enclosed within the temple.
4. The display device according to claim 2, wherein: the optical fiber is part of the temple.
5. The display device according to claim 1, wherein: The data image that produces the two-dimensional optical image is at a first refresh rate and a first resolution, and two successive two-dimensional optical images are displayed in the integrated lens, resulting in a combined composite optical image at a second refresh rate and a second resolution.
6. The display device according to claim 5, wherein: the first refresh rate is 2 x the second refresh rate, and the first resolution is 1/2 x the second resolution.
7. The display device according to claim 5, wherein: the two successive two-dimensional optical images from the optical fibers are used to produce a composite optical image viewed by a viewer of the display device.
8. A method for a display device, the method comprising:
a set of optical fibers having a first end and a second end, wherein the first end receives the sequence of two-dimensional optical images projected thereon and the second end is coupled to an integrated lens, the integrated lens comprising an optical waveguide lens, the integrated lens being framed on an eyeglass frame to which at least one temple is attached;
transporting the two-dimensional optical images sequentially within the optical fiber from the first end to the second end by total internal reflection;
projecting the two-dimensional optical image into the optical waveguide lens and forming the two-dimensional optical image in the optical waveguide lens for viewing by a viewer looking at the integrated lens, wherein no other power-driven electronic components are required in the display device to receive the two-dimensional optical image, which is transmitted to the integrated lens, in which the two-dimensional optical image is formed.
9. The method of claim 8, wherein: the optical fiber is disposed along the temple.
10. The method of claim 8, wherein: the optical fiber is enclosed within the temple.
11. The method of claim 8, wherein: the optical fiber is part of the temple.
12. The method of claim 8, wherein: The data image that produces the two-dimensional optical image is at a first refresh rate and a first resolution, and two successive two-dimensional optical images are displayed in the integrated lens, resulting in a combined composite optical image at a second refresh rate and a second resolution.
13. The method of claim 12, wherein: the first refresh rate is 2 x the second refresh rate, and the first resolution is 1/2 x the second resolution.
14. The method of claim 12, wherein: the two successive two-dimensional optical images from the optical fibers are used to produce a composite optical image viewed by a viewer of the display device.
CN202210167596.9A 2021-05-08 2022-02-23 Ultra lightweight wearable display device and method for display device Pending CN114675418A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/315302 2021-05-08
US17/315,302 US11231589B2 (en) 2016-12-08 2021-05-08 Ultralight wearable display device

Publications (1)

Publication Number Publication Date
CN114675418A (en) 2022-06-28

Family

ID=82073061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210167596.9A Pending CN114675418A (en) 2021-05-08 2022-02-23 Ultra lightweight wearable display device and method for display device

Country Status (1)

Country Link
CN (1) CN114675418A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101175223A (en) * 2007-07-10 2008-05-07 天津大学 Multi-view point stereoscopic picture synthesizing method for LCD free stereoscopic display device based on optical grating
US20080152216A1 (en) * 2004-08-31 2008-06-26 Visre, Inc. Methods for and apparatus for generating a continuum of three dimensional image data
WO2009097782A1 (en) * 2008-01-30 2009-08-13 Yong Di Method of an image synthesized by multiple images and its display components
CN103533336A (en) * 2012-07-06 2014-01-22 薄淑英 High-resolution auto-stereoscopic display
CN103686044A (en) * 2012-09-05 2014-03-26 想象技术有限公司 Pixel buffering
US20150205126A1 (en) * 2013-11-27 2015-07-23 Magic Leap, Inc. Virtual and augmented reality systems and methods
US20150234477A1 (en) * 2013-07-12 2015-08-20 Magic Leap, Inc. Method and system for determining user input based on gesture
US20160004060A1 (en) * 2014-06-10 2016-01-07 Purdue Research Foundation High frame-rate multichannel beam-scanning microscopy
US20160033771A1 (en) * 2013-03-25 2016-02-04 Ecole Polytechnique Federale De Lausanne Method and apparatus for head worn display with multiple exit pupils
US20160270656A1 (en) * 2015-03-16 2016-09-22 Magic Leap, Inc. Methods and systems for diagnosing and treating health ailments
CN110196494A (en) * 2018-06-03 2019-09-03 胡大文 Wearable display system and method for feeding optical image
CN111290581A (en) * 2020-02-21 2020-06-16 京东方科技集团股份有限公司 Virtual reality display method, display device and computer readable medium
US20210049976A1 (en) * 2019-08-15 2021-02-18 Coretronic Corporation Projector and projection method thereof
CN112578564A (en) * 2020-12-15 2021-03-30 京东方科技集团股份有限公司 Virtual reality display equipment and display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination