CN117642678A - Optical assembly of head-mounted display

Publication number: CN117642678A
Application number: CN202380012722.6A
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Inventors: 赖俊颖, 郑钰洁, 郑肯羽, 陈国轩, 叶逢春, 陈台国
Assignee: Haisi Zhicai Holding Co ltd
Application filed by Haisi Zhicai Holding Co ltd
Priority claimed from PCT/US2023/013719 (WO2023164065A1)

Abstract

An optical assembly for a head-mounted display includes a light redirecting layer disposed in a first optical path between a first light emitter and a first eye of a viewer, the light redirecting layer including a plurality of three-dimensional geometric patterns annularly disposed on a surface of the light redirecting layer. The light redirecting layer includes a plurality of subunit sections, each including a plurality of three-dimensional geometric patterns having different physical dimensions for respectively receiving first light signals emitted by the first light emitter and redirecting light of different wavelengths to the first eye of the viewer at different angles of incidence, the first light signals corresponding to first pixels of an image. The plurality of three-dimensional geometric patterns includes columnar three-dimensional nanostructures protruding from the surface of the light redirecting layer.

Description

Optical assembly of head-mounted display
RELATED APPLICATIONS
The present application claims priority to U.S. Provisional Patent Application No. 63/313,741, filed in February 2022 and entitled "OPTICAL COMBINER WITH META-SURFACE FOR HEAD WEARABLE DISPLAY DEVICE", and U.S. Provisional Patent Application No. 63/435,030, filed on December 23, 2022 and entitled "DEVELOPMENT OF THE OPTICAL COMBINER WITH META-SURFACE FOR HEAD WEARABLE DISPLAY DEVICE".
Technical Field
The present invention relates to an optical assembly for presenting virtual images in a head-mounted display, such as an augmented reality or mixed reality display; more particularly, the present invention provides optical assemblies incorporating a metasurface for improving the virtual image rendering performance of a head-mounted display.
Background
Most head-mounted displays, such as augmented reality or virtual reality glasses, currently employ conventional optical components for directing light from light emitters to the eyes of a viewer. To control the direction of light, the curvature of the surface of the optical assembly is adjusted to change the direction of light reflected by the optical assembly. Apart from adjusting the surface curvature of the optical assembly and other optical elements, there are few other ways to influence their optical performance. The virtual images presented by conventional optical components are therefore often subject to distortion and field-of-view limitations.
Meanwhile, metasurfaces, also called artificial impedance surfaces, are known for their ability to manipulate and control the behavior of electromagnetic waves. In recent years, they have been the subject of intensive research and development due to their potential applications in a wide range of fields, including telecommunications, optics, and biomedicine. By carefully designing the electromagnetic properties of a metasurface, the behavior of an incident electromagnetic wave can be manipulated in a predictable and controllable manner.
One of the main advantages of metasurfaces is their ability to redirect and control the direction of electromagnetic waves in an efficient manner. This can be achieved by designing the metasurface to have a specific impedance profile, which causes the incident wave to be redirected in a specific direction. This feature is of great importance for a wide range of applications, including the development of optical lenses for head-mounted devices.
In the past few years, significant progress has been made in the development of head-mounted displays (e.g., glasses) for augmented reality (AR) environments. Nevertheless, some challenges remain to be overcome. One of the biggest technical challenges faced by AR glasses is miniaturization. AR glasses require complex hardware components (including displays, cameras, sensors, and processors) to be incorporated into a compact form factor. In particular, the optical components of AR glasses play an important role in both the performance and the overall physical dimensions of AR glasses. Metasurfaces have the potential to make a significant impact in the field of head-mounted devices. Although metasurfaces offer possible solutions to problems of image distortion, field of view, physical dimensions, eye relief, and volume reduction, little progress has been made in applying them to AR glasses.
Disclosure of Invention
The present invention provides an optical assembly for a head-mounted display presenting augmented reality, mixed reality, and virtual reality environments. The optical assembly employs a metasurface to improve optical performance.
In one embodiment, the present invention provides a process for designing a metasurface having the optical properties desired for the optical assembly of the present invention:
step 1: converting the input light and the output light into tensors; converting the desired output light into a susceptibility tensor;
step 2: defining the key parameters for the calculation, which may be:
1. the wavelengths of the red, green, and blue light emitted by the light emitters of the head-mounted display;
2. the transmittance of the optical assembly bearing the metasurface to ambient visible light;
3. the light profile of the light emitted by the light emitter, such as the cross-sectional shape and area of the light beam emitted by the emitter of the head-mounted display;
step 3: determining a suitable computational model and supplying the parameters defined above;
step 4: performing a computer-aided calculation using the model to determine, per unit area, the optimal geometry of the three-dimensional geometric patterns of the metasurface corresponding to the final desired light profile; the profile (shape, size, rotation angle, distance between adjacent metastructures, etc.) of the three-dimensional geometric patterns across the predetermined region of the metasurface is then determined by linear estimation.
The three-dimensional (3D) metastructures of the present invention are specifically designed to be compatible with the 3D image rendering method used in head-mounted displays. The 3D nanostructures on the metasurface are columnar structures having a variety of cross-sectional shapes (e.g., circular, elliptical, rectangular, etc.). The metasurface is divided into a plurality of subunit sections, wherein each subunit section is responsible for redirecting the light signals that form one pixel of the binocular virtual image.
According to an embodiment of the invention, the configuration of the subunit sections and the different areas receiving light of different colors are shown. In this embodiment, the cross-sectional area of the light signal projected on the light redirecting layer is approximately the same as the area of one of the subunit sections. Each of the plurality of subunit sections includes a first region B for receiving and redirecting blue light, a second region G for receiving and redirecting green light, and a third region R for receiving and redirecting red light. Upon receiving light of different wavelengths, the first region B, the second region G, and the third region R redirect the light to the same location on the retina of the first eye of the viewer. Thus, an accurate color representation of the pixel can be reconstructed and received by the viewer. Note that in some embodiments, two of the first region B, second region G, and third region R have the same horizontal position on the light redirecting layer, and the remaining region is arranged horizontally or vertically with respect to the other two regions.
According to an embodiment of the invention, the cross-sectional area of the light signal impinging on the light redirecting layer may be larger than the area of one of the subunit sections. Each subunit section is spaced apart from its adjacent subunit sections.
According to an embodiment of the present invention, a subunit section may share the same first region G, second region B, or third region R with another adjacent subunit section. As shown, the first subunit section is marked as a square and the second subunit section as a square with bold lines. The first and second subunit sections share the same first region G and second region B. In this case, the cross-sectional area of the light signal projected on the light redirecting layer may be approximately the same as the area of a subunit section. With this arrangement, the total area of the metasurface can be reduced, thereby reducing its manufacturing cost.
Alternatively, in another embodiment of the present invention, each of the plurality of subunit sections may include a first region R&G for receiving and redirecting light of two colors simultaneously, and a second region B for receiving and redirecting light of the remaining color. In this embodiment, a single set of three-dimensional nanostructures may be used to receive light of two colors having relatively similar wavelengths. For example, a single set of three-dimensional nanostructures may be used to receive and redirect red and green light, while a separate set of three-dimensional nanostructures may be required to receive and redirect blue light. In this way, fabrication of the three-dimensional nanostructures can be simplified by reducing the number of regions within a subunit section for receiving light of different colors. Similar to the previous embodiments, upon receiving light of different wavelengths, the first region R&G and the second region B redirect the light to the same location on the retina of the first eye of the viewer.
In another embodiment of the present invention, each of the plurality of subunit sections may include a first region G&B for receiving and redirecting light of two colors simultaneously, and a second region R for receiving and redirecting light of the remaining color.
In some embodiments, the light signals emitted by the light emitters may be received uniformly by the subunit sections. In certain other embodiments, each color contained in a single light signal may be projected onto a different area of a subunit section. For different areas on a subunit section to receive light of the corresponding color, the light emitters may need to be configured specifically and correspondingly.
In some embodiments, the optical assembly may be implemented with optical power, so that viewers with myopia, hyperopia, etc. can see real objects in the environment. The refractive surface may comprise a convex or concave surface, the curvature of which is determined by the viewer's prescription. If the optical assembly is integrally formed, the surface whose curvature is formulated according to the prescription may be the outer surface of the optical assembly. The inner surface of the optical assembly may be provided with a light redirecting layer having a metasurface for directing the light signals to the eye of the viewer.
Head-mounted displays employing optical assemblies with a metasurface may have the following advantages over existing head-mounted displays:
1. Extended field of view (FOV): the metasurface of the optical assembly may be designed so that the NA (numerical aperture) is increased relative to existing optical assemblies. For example, existing optical components may have an NA of less than 0.5, whereas an optical assembly with a metasurface may have an NA of up to 0.9. Furthermore, the use of a metasurface may extend the FOV of the optical assembly from less than 40 degrees to greater than 80 degrees. In some cases, optical assemblies with a metasurface may achieve a negative refractive index, which is not possible in existing optical components.
2. Shortened eye relief: "eye relief" refers to the distance between the glasses (or optical assembly) and the pupil. The poor NA performance of existing optical assemblies limits the eye relief of existing head-mounted displays, which is typically 2.5 cm or more. With an optical assembly having a metasurface, the eye relief may be less than 1.5 cm.
3. Reduced volume and weight: with an optical assembly having a metasurface, the volume and weight of the head-mounted display may be reduced, since fewer conventional optical elements are required to present the virtual image.
4. Better aesthetic design (physical dimensions): existing head-mounted displays are bulky because they require a large eye relief. As the eye relief decreases, the physical dimensions of a headset employing an optical assembly with a metasurface may more closely approximate conventional eyeglasses.
5. Less image distortion: existing optical components and optical elements in a head-mounted device may produce different path lengths for light of different wavelengths from a light emitter, or for light having different angles of incidence. As a result, the shape of the final image frame projected into the viewer's eye may appear distorted. The distortion may be corrected using the metastructures, so that the final image presented to the viewer's eye is undistorted.
Drawings
FIG. 1 illustrates the input tensor and output tensor used for constructing the metastructures of the present invention.
FIG. 2 illustrates a schematic diagram of a head mounted display and a method of presenting binocular virtual images according to an embodiment of the present invention.
Fig. 3 illustrates a method of the present invention for presenting depth perception.
Fig. 4 illustrates a method of the present invention for presenting depth perception.
Fig. 5A illustrates a first embodiment of the metastructure arrangement of the present invention.
Fig. 5B illustrates a second embodiment of the metastructure arrangement of the present invention.
Fig. 5C illustrates a variation of the second embodiment of the metastructure arrangement of the present invention.
Fig. 5D illustrates a third embodiment of the metastructure arrangement of the present invention.
Fig. 5E illustrates a fourth embodiment of the metastructure arrangement of the present invention.
Fig. 6A illustrates a fifth embodiment of the metastructure arrangement of the present invention.
Fig. 6B illustrates a sixth embodiment of the metastructure arrangement of the present invention.
Fig. 7A illustrates a possible configuration of metastructures for micro LED light emitters.
Fig. 7B is another diagram illustrating a possible configuration for micro LED light emitters.
FIG. 8A is a schematic diagram of an embodiment of an optical assembly having optical power.
FIG. 8B is another schematic diagram of an embodiment of an optical assembly having optical power.
Detailed Description
The terminology used in the following description is intended to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with the detailed description of certain specific embodiments. Certain terms may even be emphasized below; however, any terminology intended to be interpreted in a restricted manner is specifically defined as such in this detailed description.
To design suitable metastructures on a metasurface that provide the above advantages and optical functions for an optical assembly in a head-mounted display (assuming the final metasurface has the desired rectangular shape), several points (e.g., nine points in total) are used as anchor locations for the metasurface design. Referring to fig. 1, the light output at these points is used to check whether the light profile of these outputs meets the required specifications (e.g., intensity, angle of incidence, spot size, etc.) for rendering a virtual image of the head-mounted display. For example, an optical combiner with an embedded metasurface of size 20×11.25 mm may correspond to a virtual image frame having a resolution of 1280×720 pixels. When reaching the pupil of the viewer, the size of the virtual image frame is reduced to 2×1.125 mm. When the light profiles of the light outputs at the nine anchor points all meet the requirements, linear estimation can be used to determine the specifications (shape, size, rotation angle, distance between adjacent metastructures, etc.) of the metastructures for the intermediate regions between the nine anchor points. The effect of the metasurface on input light from the environment is also checked to ensure the transmittance of the optical assembly and the metasurface for ambient light. The specification of the metasurface may be adjusted if necessary.
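As an illustration of the anchor-point approach, the following sketch (not part of the patent; the 3×3 anchor grid and the pillar-diameter values are hypothetical) linearly estimates one nanostructure specification for an intermediate region from nine anchor points spread over the 20×11.25 mm combiner:

```python
# Illustrative sketch (not from the patent): linearly estimating metasurface
# specifications between the nine design anchor points described above.
# The anchor grid and the pillar-diameter values are assumptions.
import numpy as np

# Nine anchor points on a 20 x 11.25 mm combiner: x in {0, 10, 20} mm,
# y in {0, 5.625, 11.25} mm. Each anchor stores an optimized nanostructure
# parameter found by the computer-aided calculation.
anchor_x = np.array([0.0, 10.0, 20.0])    # mm
anchor_y = np.array([0.0, 5.625, 11.25])  # mm
diameter_nm = np.array([                  # hypothetical pillar diameters
    [180.0, 200.0, 180.0],
    [210.0, 240.0, 210.0],
    [180.0, 200.0, 180.0],
])                                        # indexed [iy, ix]

def estimate_diameter(x_mm: float, y_mm: float) -> float:
    """Bilinear (linear in x and y) estimate between anchor points."""
    ix = min(max(np.searchsorted(anchor_x, x_mm, side="right") - 1, 0), 1)
    iy = min(max(np.searchsorted(anchor_y, y_mm, side="right") - 1, 0), 1)
    tx = (x_mm - anchor_x[ix]) / (anchor_x[ix + 1] - anchor_x[ix])
    ty = (y_mm - anchor_y[iy]) / (anchor_y[iy + 1] - anchor_y[iy])
    top = (1 - tx) * diameter_nm[iy, ix] + tx * diameter_nm[iy, ix + 1]
    bot = (1 - tx) * diameter_nm[iy + 1, ix] + tx * diameter_nm[iy + 1, ix + 1]
    return (1 - ty) * top + ty * bot

print(estimate_diameter(5.0, 2.8125))  # estimate midway between anchors
```

In practice, each anchor point would store the full set of optimized specifications (shape, size, rotation angle, spacing), with one such estimation per specification.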
Image distortion is caused by the fact that light of different wavelengths (red, green, and blue) experiences different refractive indices, and light of different incident angles travels along different optical paths, while passing through various optical elements. The light profile can be adjusted using the metasurface, so that the metasurface can be used to correct the intensity distribution and shape of the image as it is formed on the retina of the viewer. In some cases, the metasurface design starts by using the distorted image as the input tensor and an image with the correct shape and uniformity as the output tensor, in order to determine the anti-distortion susceptibility tensor.
In some cases, it may be desirable to improve the spot size and/or shape (cross-sectional area and shape) of the light emitted by the light emitters. For example, if a light signal having an elliptical cross-sectional shape needs to be made circular, a corresponding metasurface capable of correcting the elliptical light profile can be obtained by setting the elliptical light profile as the input tensor and the circular light profile as the output tensor. This calculation may also use nine anchor points (or only five anchor points) to linearly estimate the metastructures over the rest of the metasurface.
The general procedure for designing a metasurface having the optical properties desired for the optical assembly of the present invention is described below (a minimal computational sketch of these steps follows the list):
Step 1: converting the input light and the output light into tensors; converting the desired output light into a susceptibility tensor;
step 2: defining the key parameters for the calculation, which may be:
1. the wavelengths of the red, green, and blue light emitted by the light emitters of the head-mounted display;
2. the transmittance of the optical assembly bearing the metasurface to ambient visible light;
3. the light profile of the light emitted by the light emitter, such as the cross-sectional shape and area of the light beam emitted by the emitter of the head-mounted display;
step 3: determining a suitable computational model and supplying the parameters defined above;
step 4: performing a computer-aided calculation using the model to determine, per unit area, the optimal geometry of the three-dimensional geometric patterns of the metasurface corresponding to the final desired light profile; the profile (shape, size, rotation angle, etc.) of the three-dimensional geometric patterns across the predetermined region of the metasurface is then determined by linear estimation.
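The four steps can be sketched end-to-end as follows. This is a heavily simplified illustration, not the patent's implementation: the "computational model" of step 3 is replaced by a toy phase model (a real design would call a full-wave electromagnetic solver such as FDTD or RCWA), and all numeric values are assumptions.

```python
import numpy as np

# Step 1: represent the input light and the desired output light as tensors.
input_tensor = np.ones((4, 4), dtype=complex)              # uniform plane wave
target_tensor = input_tensor * np.exp(1j * 2 * np.pi / 3)  # desired phase shift

# Step 2: key parameters for the calculation (hypothetical values).
wavelength_nm = 530.0                                      # green emitter

# Step 3: toy computational model of one unit area. A real design would
# evaluate the 3D geometric pattern with a full-wave solver here.
def simulate(diameter_nm: float) -> np.ndarray:
    phase = 2.0 * np.pi * 0.9 * diameter_nm / wavelength_nm  # toy dispersion
    return input_tensor * np.exp(1j * phase)

# Step 4: computer-aided search for the pillar geometry whose simulated
# output best matches the target tensor for this unit area; such results
# would then be linearly estimated across the predetermined region.
candidates = np.linspace(100.0, 300.0, 201)                # pillar diameters, nm
errors = [np.abs(simulate(d) - target_tensor).sum() for d in candidates]
best = candidates[int(np.argmin(errors))]
print(f"best pillar diameter for this unit area: {best:.1f} nm")
```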
The present invention employs a metasurface on the optical assembly of a head-mounted display; however, one of ordinary skill in the art will also know, in light of the teachings of the present invention, how to apply a metasurface to various other optical elements. Several possible ways to employ a metasurface in a head-mounted display are described below.
1. Optical assembly: the metasurface may be applied to the optical assembly as an optical film for altering the optical properties of the optical assembly and the optical path of light.
2. Correcting image distortion: a metasurface may be employed on at least one optical element of the head-mounted display to compensate for image distortion, thus allowing the final image to appear in the desired shape (e.g., rectangular).
3. Correcting chromatic aberration: a metasurface may be employed on at least one optical element of the head-mounted display to perform chromatic aberration correction of the image.
4. Improving the uniformity of the light intensity distribution: optical path differences between the plurality of light signals cause the intensities of the light signals to differ when reaching the eye of the viewer; a metasurface may be employed on at least one optical element of the head-mounted display to perform optical path correction between the plurality of light signals.
5. Improving the beam cross-sectional shape: the metasurface can be used to produce a more desirable cross-sectional shape of the light signal to increase the resolution and pixel density of the final image presented on the viewer's retina; for example, the beam shape of an EEL (edge-emitting laser) may be changed from elliptical to circular.
The three-dimensional (3D) metastructures of the present invention are specifically designed to be compatible with the 3D image rendering method used in head-mounted displays. The method of the present invention for rendering a 3D image with depth of field is described below. The main advantage of the 3D image rendering technique described herein is that the depth of field of the rendered virtual image coincides with the position at which the viewer's eyes gaze, eliminating the vergence-accommodation conflict (VAC) and focal rivalry. Referring to fig. 2, the head-mounted display has a first light emitter 11, a first optical assembly 21, a second light emitter 12, and a second optical assembly 22. In many embodiments, the optical components may be divided into a first optical assembly 21 and a second optical assembly 22, one for each eye of the viewer. For clarity, the phrases "first optical assembly 21" and "second optical assembly 22" are used below to describe embodiments of the present invention. Those of ordinary skill in the art will recognize that the two optical assemblies may be integrally formed as a single optical assembly that is substantially identical in function and result to separate optical assemblies. The first light emitter 11 generates a plurality of first light signals 100 for an object. The first optical assembly 21 receives the plurality of first light signals 100 and redirects them to a retina of the viewer to display a plurality of first pixels of the object. The second light emitter 12 generates a plurality of second light signals 200 for the object. The second optical assembly 22 receives the plurality of second light signals 200 and redirects them to the other retina of the viewer to display a plurality of second pixels of the object. The viewer perceives each pair of a first light signal 100 and its corresponding second light signal 200 as a binocular virtual pixel of the binocular virtual image, having a depth of field related to the angle between the optical path extensions of the first light signal 100 and the corresponding second light signal 200. When each remaining first light signal 100 and its corresponding second light signal 200 are seen with different depths of field, a binocular virtual image having multiple depths of field is perceived, each depth of field depending on the angle between the optical path extensions of a first light signal 100 and its corresponding second light signal 200.
To aid understanding of the present invention, the detailed technique for displaying a single binocular virtual pixel having a depth of field is described below. The term "optical convergence angle" refers to the angle between the incident first light signal 100 and second light signal 200 (i.e., the angle between the optical path extensions of the first light signal 100 and the second light signal 200); the term "convergence angle" refers to the angle between the visual axes of the viewer's eyes. The position of the binocular virtual pixel seen by the viewer may be determined by the intersection between the optical path extension of the first light signal 100 from the viewer's eye to the first optical assembly 21 and the optical path extension of the corresponding second light signal 200 from the viewer's other eye to the second optical assembly 22. Referring to fig. 3, according to the nature of binocular vision, when a viewer looks at an object, the viewer's eyes gaze at the object and both visual axes are directed toward it. The visual axis is the line extending from the object being viewed, through the center of the pupil, to the macula of the retina. The human perception of depth of field depends in part on the convergence angle between the two visual axes. That is, when the convergence angle between the two visual axes (when looking at an object) is relatively large, the object is seen as relatively close to the viewer (small depth of field); conversely, when the convergence angle is relatively small, the object is seen as relatively far from the viewer.
Similarly, when the head-mounted display is used to generate the binocular virtual image, the optical convergence angle of the incident first light signal 100 and second light signal 200 relative to the viewer can be adjusted, so that the viewer gazes at the binocular virtual image formed by the first light signal 100 and the second light signal 200 with the desired convergence angle, thereby producing the target depth-of-field perception. In an embodiment, this may be achieved by making the convergence angle of the two eyes the same as the optical convergence angle between the first light signal 100 and the second light signal 200. Thus, the depth of field of the presented virtual image coincides with the position at which the viewer's eyes gaze, and the vergence-accommodation conflict (VAC) and focal rivalry can be eliminated.
When the head-mounted display is used to present the binocular virtual image, the horizontal and vertical positions of the binocular virtual image seen by the user in 3D space are directly related to the horizontal and vertical positions on the first and second retinas at which the first light signal 100 (emitted by the first light emitter) and the second light signal 200 (emitted by the second light emitter) are received, respectively. With reference to fig. 4, the perception of the horizontal, vertical, and depth positions of an object in 3D space by natural human binocular vision is illustrated. To facilitate the explanation of human vision and retinal imaging, the retinas of the user's first and second eyes are depicted as matrices, each matrix element corresponding to a particular horizontal and vertical position on the retina. In natural vision, a first (right-eye) light instance R1 from an object reaches matrix element R22 of the first retina, and the corresponding second (left-eye) light instance L1 from the object reaches matrix element L22 of the second retina. In addition to the object parallax information contained in R1 and L1, the user's perception of depth of field depends on the optical convergence angle CA1 between the first light instance R1 and the second light instance L1. As the depth of field of the object seen by the viewer increases, the optical convergence angle decreases; conversely, as the depth of field decreases, the optical convergence angle increases. Specifically, as shown in fig. 3, assuming that the object moves from position p1 to p2, the optical convergence angle changes from CA1 to CA2 (CA2 > CA1); meanwhile, the position on the first retina receiving the first light instance changes from R22 to R32, and the position on the second retina receiving the second light instance changes from L22 to L12. As shown in fig. 3, it is apparent that the depth-of-field perception of an object is at least partially related to the optical convergence angle between the first and second light instances entering the viewer's eyes (in addition to the parallax images). In natural vision, there may be countless first and second light instances from a point of the object due to light scattering; however, owing to the eye's lens, all of the first and second instances converge to a single location on each retina; thus, fig. 4 shows only one instance of each.
According to the present invention, the depth-of-field perception of the binocular virtual pixel is controlled by adjusting the optical convergence angle formed between the optical path extension of the first light signal 100 and that of the second light signal 200. The direction of these optical path extensions can be changed by controlling the projection directions of the first light emitter 11 and the second light emitter 12. This method of creating depth-of-field perception for the virtual image is consistent with natural human vision, since the human brain determines the depth of field of an object in 3D space at least in part from the gaze angle of the eyes, which is directly related to the convergence angle formed between the optical path extension of the first light signal 100 and that of the second light signal 200.
Referring back to fig. 2, the method for presenting depth perception according to the present invention is further described. The viewer sees a virtual image 70 composed of a plurality of binocular pixels (e.g., a first binocular virtual pixel 72 and a second binocular virtual pixel 74). The first binocular virtual pixel 72 is displayed at a first depth of field D1 and the second binocular virtual pixel 74 is displayed at a second depth of field D2. The convergence angle of the first binocular virtual pixel 72 is θ1 (first convergence angle); the convergence angle of the second binocular virtual pixel 74 is θ2 (second convergence angle). The first depth of field D1 is associated with the first convergence angle θ1. In particular, the first depth of field of the first binocular virtual pixel of the object may be determined by the first convergence angle θ1 between the optical path extensions of the first light signal 101 and the corresponding second light signal 201. The first depth of field D1 of the first binocular virtual pixel 72 may be calculated schematically by the following relation:
D1 = IPD / (2 · tan(θ1/2)),
where IPD, the inter-pupillary distance, is the distance between the right pupil and the left pupil. Likewise, the second depth of field D2 is related to the second convergence angle θ2. In particular, the second depth of field D2 of the second binocular virtual pixel of the object may be determined schematically by the second angle θ2 between the optical path extensions of the first light signal 102 and the corresponding second light signal 202, using the same formula. The second angle θ2 is smaller than the first angle θ1 because the second binocular virtual pixel 74 seen by the viewer is farther from the viewer (i.e., has a greater depth of field) than the first binocular virtual pixel 72. Further, the angle between the redirected right-eye light signal and the corresponding left-eye light signal is determined by the relative horizontal distance between the right-eye pixel and the left-eye pixel. Accordingly, the depth of field of a binocular virtual pixel is inversely related to the relative horizontal distance between the right-eye pixel and the corresponding left-eye pixel forming that binocular virtual pixel. In other words, the deeper the binocular virtual pixel is seen by the viewer, the smaller the relative horizontal distance along the X-axis between the right-eye pixel and the left-eye pixel forming it. In some variations of the present invention, the depth perception of binocular virtual image frames or binocular pixels may be a combination of the foregoing method and the existing parallax method (partially by the method disclosed herein and partially by the parallax method). However, in some embodiments, depth perception may be presented primarily by the methods disclosed herein.
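For illustration, the relation above can be evaluated numerically. This is a minimal sketch; the 63 mm inter-pupillary distance is an assumed typical value, not a value from the patent:

```python
# Illustrative sketch: depth of field D of a binocular virtual pixel from
# the convergence angle theta and the inter-pupillary distance, using
# D = IPD / (2 * tan(theta / 2)) as stated above.
import math

def depth_of_field_m(convergence_angle_deg: float, ipd_m: float = 0.063) -> float:
    """Depth at which the two visual axes intersect for a given angle."""
    return ipd_m / (2.0 * math.tan(math.radians(convergence_angle_deg) / 2.0))

print(depth_of_field_m(1.0))  # ~3.61 m: small angle, large depth of field
print(depth_of_field_m(4.0))  # ~0.90 m: larger angle, pixel appears closer
```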
The above-described method for rendering binocular virtual pixels with depth of field may be applied to various display systems, including laser-projector-based light emitters and micro-LED light emitters. Each binocular virtual pixel in the binocular virtual image may be rendered with a different depth of field. According to some embodiments, when the light emitters are micro LEDs, the head-mounted display may further comprise collimators, to better align the light signals in a specific direction or to reduce the spatial cross-section of the light beam. An optical assembly may be disposed on one side of the micro LED, and a collimator may be disposed between the light emitter and the optical assembly. The collimator may be a curved mirror or a lens.
The following discusses the placement (i.e., configuration) of the metastructures on the metasurface of the optical assembly of the invention. In general, the dimensions of metastructures on a metasurface may vary widely, from nanometers to millimeters, depending on the particular application and design; the physical dimensions (e.g., length, width, height) and shape of a metastructure determine its specific effect on light. For example, metastructures in the hundreds-of-microns range can be used to create diffraction gratings, while larger metastructures in the millimeter range can be used to control light polarization. The specific dimensions of a metastructure are determined by the wavelength of the light with which it is to interact and the level of control required over that light. In the present invention, the metastructures typically have dimensions in the nanometer range; for clarity, they may be referred to herein as 3D nanostructures. The 3D nanostructures on the metasurface can change the direction of incident light; the amount of change depends on the specific geometric profile of the 3D nanostructure and the wavelength of the light it receives. In general, the 3D nanostructures on the metasurface are columnar structures having various cross-sectional shapes (e.g., circular, elliptical, rectangular, etc.). In addition, the height and cross-sectional area of a 3D nanostructure also affect the exit propagation angle of the incident light. Furthermore, even for the same 3D nanostructure, light of different wavelengths may be affected differently; that is, the amount of change in the exit angle differs when light of different wavelengths is received by the same 3D nanostructure (i.e., the exit angle depends on the wavelength). In the field of head-mounted displays, the projection angle of the light signal is critical to the image quality of the virtual image, and the 3D effect depends largely on precise control of the projection angle; it is therefore necessary to design the arrangement of the 3D nanostructures specifically and purposefully to achieve the best possible image quality.
Generally, a head-mounted display generates a binocular virtual image having a plurality of binocular virtual pixels, each with a different color formed by mixing red, green, and blue light. In order to process the light of different wavelengths for each pixel of the binocular virtual image, the present invention divides the metasurface into a plurality of subunit sections, wherein each subunit section is responsible for changing the direction of the light signals forming one binocular virtual image pixel (each pixel/light signal consisting of blue light, green light, red light, or any combination thereof). Thus, for example, if the binocular virtual image generated by the head-mounted display includes 1280×720 pixels, the metasurface is divided into 1280×720 subunit sections, each responsible for receiving and redirecting one of the 1280×720 pixels. A single set of 3D nanostructures may in principle be used to simultaneously receive and redirect light of all colors in the light signal. However, as previously described, since the redirection angle depends largely on the wavelength of the incident light, if uniform 3D nanostructures were used to redirect light of all colors, chromatic aberration would appear at the receiving end of the rendered binocular virtual image (e.g., the retina of the viewer). To correct this problem and improve the efficiency of redirecting light of different wavelengths to the desired locations on the viewer's retina, each subunit section may be configured to include specific 3D nanostructures for respectively redirecting the blue, green, and red light contained in a single light signal (corresponding to a single pixel of the binocular virtual image). Furthermore, the same subunit section receives light of different wavelengths belonging to the same pixel and redirects it to the same location on the retina of the first eye of the viewer.
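The one-to-one correspondence between pixels and subunit sections can be sketched as follows. This is illustrative only, not from the patent; the 16×9 mm metasurface dimensions are an assumption, chosen to match the 12.5 µm pitch of the worked example later in this description:

```python
# Illustrative sketch: mapping each pixel of a 1280x720 binocular virtual
# image to its own subunit section of the metasurface. Surface dimensions
# are assumed values.
from dataclasses import dataclass

@dataclass
class SubunitSection:
    """One section of the metasurface, responsible for one image pixel."""
    x_mm: float       # left edge of the section on the metasurface
    y_mm: float       # top edge
    width_mm: float
    height_mm: float

IMAGE_W, IMAGE_H = 1280, 720             # pixels in the binocular virtual image
SURFACE_W_MM, SURFACE_H_MM = 16.0, 9.0   # assumed metasurface dimensions

def section_for_pixel(px: int, py: int) -> SubunitSection:
    """Return the subunit section that receives and redirects pixel (px, py)."""
    w = SURFACE_W_MM / IMAGE_W           # 12.5 um per section horizontally
    h = SURFACE_H_MM / IMAGE_H           # 12.5 um per section vertically
    return SubunitSection(px * w, py * h, w, h)

print(section_for_pixel(640, 360))       # section for the center pixel
```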
Referring to figs. 5A-5E, the configurations of the subunit sections are discussed below. The circle drawn with a dotted line represents the projection area (cross-sectional area) of the light signal of a single pixel.
Referring to FIG. 5A, the configuration of the subunit sections and the different areas for receiving light of different colors is shown according to an embodiment of the present invention. In this embodiment, the cross-sectional area of the light signal projected on the light redirecting layer is approximately the same as the area of one of the subunit sections. Each of the plurality of subunit sections includes a first region B for receiving and redirecting blue light, a second region G for receiving and redirecting green light, and a third region R for receiving and redirecting red light. Upon receiving light of different wavelengths, the first region B, the second region G, and the third region R redirect the light to the same location on the retina of the first eye of the viewer. Thus, an accurate color representation of the pixel can be reconstructed and received by the viewer. Note that in some embodiments, two of the first region B, second region G, and third region R have the same horizontal position on the light redirecting layer, and the remaining region is arranged horizontally or vertically with respect to the other two regions.
Referring to fig. 5B, in some embodiments, a single subunit section may include a plurality of first regions B, a plurality of second regions G, and a plurality of third regions R distributed throughout the subunit section. In this embodiment, when light of all colors in the light signal uniformly illuminates the subunit section, the light is uniformly redirected by the subunit section to the particular location on the viewer's retina. The efficiency of redirecting light of different colors can thus be improved. Furthermore, when redirected to the retina of the viewer, the light of different colors may mix in a more uniform manner, which in turn reconstructs the original color of the pixel more accurately.
Referring to fig. 5C, according to an embodiment of the invention, the cross-sectional area of the light signal projected onto the light redirecting layer may be larger than the area of one of the subunit sections. Each subunit section is spaced apart from its adjacent subunit sections. Similar to the previous embodiments, a single subunit section may include a plurality of first regions B, a plurality of second regions G, and a plurality of third regions R.
Referring to fig. 5D, when adjacent pixels of an image frame partially overlap each other, the area where two adjacent light signals impinge on a subunit section may have 3D nanostructures capable of redirecting the two adjacent light signals to their respective locations on the retina. In the figure, the area of the first subunit section close to the second, adjacent subunit section (the area where two pixels impinge on the first subunit section) comprises three-dimensional nanostructures that redirect two consecutive first light signals of two pixels to their respective positions on the viewer's retina. This embodiment may be advantageous when the cross-sectional area of the light signal of a pixel is substantially larger than the area of a subunit section. Still referring to fig. 5D, a subunit section may share the same first region G, second region B, or third region R with another adjacent subunit section. As shown, the first subunit section is marked as a square and the second subunit section as a square with bold lines. The first and second subunit sections share the same first region G and second region B. In this case, the cross-sectional area of the light signal projected on the light redirecting layer may be approximately the same as the area of a subunit section. With this arrangement, the total area of the metasurface can be reduced, thereby reducing the cost of manufacturing the metasurface. Similar to the previous embodiments, a single subunit section may include a plurality of first regions B, a plurality of second regions G, and a plurality of third regions R. Alternatively, in another embodiment of the present invention (see fig. 5E), each of the plurality of subunit sections may include a first region R&G for receiving and redirecting light of two colors simultaneously, and a second region B for receiving and redirecting light of the remaining color. In this embodiment, a single set of 3D nanostructures may be used to receive light of two colors that are closer in wavelength. For example, a single set of 3D nanostructures may be used to receive and redirect red and green light, while a separate set of 3D nanostructures may be required to receive and redirect blue light. In this way, fabrication of the 3D nanostructures can be simplified by reducing the number of regions within a subunit section for receiving light of different colors. Similar to the previous embodiments, upon receiving light of different wavelengths, the first region R&G and the second region B redirect the light to the same location on the retina of the first eye of the viewer. A single subunit section may likewise include a plurality of first regions R&G and a plurality of second regions B. In another embodiment of the present invention, depending on the requirements of image quality, each of the plurality of subunit sections may be designed to include a first region G&B for receiving and redirecting light of two colors simultaneously and a second region R for receiving and redirecting light of the remaining color. The first region G&B and the second region R redirect light of different wavelengths to the same location on the retina of the first eye of the viewer.
In the previous embodiments, the subunit sections may uniformly receive the light signals (each representing a single pixel) emitted by the light emitters. In this case, the first, second, and third regions included in the subunit sections receive the red, green, and blue light uniformly; in other words, all regions may receive red, green, and blue light simultaneously. However, since two or three sets of 3D nanostructures are employed in the different regions of a subunit section, and each region is designed to redirect light of a particular color more efficiently, the different regions of the subunit section are able to redirect light of their particular colors to the target location. This is in contrast to the prior art, which uses a single type of nanostructure to redirect light of all colors. In the present invention, each region in a subunit section redirects its corresponding color to the target location. In general, the target location may correspond to a particular projection angle for presenting a particular depth-of-field perception for the pixel. Nonetheless, all colors redirected by the different regions of a subunit section (corresponding to a single pixel) arrive at the same location on the retina, so that the viewer sees the pixel with a particular convergence angle.
Referring to fig. 6A and 6B, in certain other embodiments, each color contained in a single light signal may be projected onto a different area of a subunit section. For different areas on a subunit section to receive light of the corresponding color, the light emitters may need to be configured specifically and correspondingly. For example, where the light emitters are laser projectors (e.g., laser beam scanning (LBS) projectors), the emitters may be configured to project the red, green, and blue light forming a single pixel at different times (not simultaneously). The LBS projector uses a MEMS mirror to change the projection direction; while projecting an image frame, the projection angles in the vertical and horizontal directions change continuously. By configuring the projector to project light of different colors at different times, the different colors are projected to different horizontal or vertical positions. As a result, the blue, green, and red light are received by different areas on the subunit section, as illustrated by the sketch below. With respect to the arrangement of the first, second, and third regions for receiving light of different colors, according to some embodiments of the invention, the first, second, and third regions have the same horizontal position on the light redirecting layer; in other embodiments, they have the same vertical position on the light redirecting layer.
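To illustrate why time-multiplexed colors land on different areas, consider the following back-of-the-envelope sketch. All timing values are assumptions (a 60 Hz, 1280×720 raster scan across a 16 mm wide metasurface), not figures from the patent; emitting R, G, and B at successive thirds of one pixel period offsets each color by one third of the pixel pitch:

```python
# Illustrative timing sketch (all values assumed): an LBS projector that
# emits the red, green, and blue components of one pixel at successive
# times lands each color at a slightly different horizontal position,
# because the MEMS mirror keeps sweeping between emissions.
frame_rate_hz = 60.0
lines_per_frame = 720
pixels_per_line = 1280
sweep_mm = 16.0                    # horizontal sweep across the metasurface

line_period_s = 1.0 / (frame_rate_hz * lines_per_frame)
sweep_speed_mm_s = sweep_mm / line_period_s
pixel_period_s = line_period_s / pixels_per_line

# R, G, B emitted at successive thirds of one pixel period:
color_gap_s = pixel_period_s / 3.0
offset_um = sweep_speed_mm_s * color_gap_s * 1000.0
print(f"color-to-color landing offset: {offset_um:.2f} um")
# ~4.17 um, i.e., one third of the 12.5 um pixel pitch, so each color
# falls on a different region of the same subunit section.
```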
In the embodiments of the invention described above, since different regions of the subunit sections receive and redirect light of different colors, the light of different colors is redirected to the eye of the viewer at slightly different angles of incidence. However, as previously described, these light components are redirected to the same location on the retina of the viewer's eye to present a single pixel with a particular color.
In the case where the light emitters are micro LEDs, the pixels of the virtual image may be generated by pixel units comprising blue micro LEDs, green micro LEDs, red micro LEDs, or any combination thereof. The virtual image is presented by an array of such pixel units. Referring to fig. 7A and 7B, the optical assembly may include a metasurface comprising an array of subunit sections. The position of each subunit section corresponds to the position of a pixel unit. Further, in some cases, the location of the first, second, or third region of a subunit section may correspond to the location of a blue, green, or red micro LED of the pixel unit. As shown, the array of pixel units may be placed on one side of the array of subunit sections to redirect and aim the light to specific locations on the viewer's retina.
In the present invention, each of the plurality of subunit sections includes a plurality of three-dimensional geometric patterns having different physical dimensions for respectively receiving and redirecting light of different wavelengths in the emitted light signal. Although each subunit section may contain a first region, a second region, or a third region, the physical size and shape of these regions may differ between subunit sections. This is because each subunit section needs to redirect light signals (representing different pixels) arriving from different angles to different locations on the retina.
One of the main features of the 3D nanostructures on the metasurface of the present invention is their unique optical behavior relative to existing optical elements, for which Snell's law and the law of reflection are the fundamental principles. When light is incident on the 3D nanostructures on the metasurface, the angle of incidence of the light signal with respect to the portion of the metasurface that receives the light signal is not equal to the angle of reflection of the light signal with respect to that portion. Accordingly, a better-performing planar optical assembly for a head-mounted display (e.g., smart glasses) may be designed, thereby greatly reducing the overall size of the head-mounted display.
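This behavior can be described by the generalized law of reflection for phase-gradient metasurfaces, sin θr = sin θi + (λ/2π)·dΦ/dx, which reduces to the ordinary law of reflection when the phase gradient dΦ/dx is zero. The following sketch is not from the patent (the phase-gradient value is an assumption); it only illustrates how a reflection angle can differ from the incidence angle:

```python
# Illustrative sketch of the generalized law of reflection:
#   sin(theta_r) = sin(theta_i) + (lambda / (2*pi)) * dPhi/dx
# The phase-gradient value below is an assumption.
import math

def anomalous_reflection_deg(theta_i_deg, wavelength_nm, dphi_dx_rad_per_um):
    lam_um = wavelength_nm / 1000.0
    s = math.sin(math.radians(theta_i_deg)) + (lam_um / (2 * math.pi)) * dphi_dx_rad_per_um
    if abs(s) > 1.0:
        return None  # no propagating reflected order for this gradient
    return math.degrees(math.asin(s))

# 530 nm green light at 20 deg incidence on a surface imposing a
# 2 rad/um phase gradient: reflected ~30.7 deg, not the specular 20 deg.
print(anomalous_reflection_deg(20.0, 530.0, 2.0))
```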
For clarity, an example embodiment according to one context of the invention is described below. An existing 1280×720-pixel LBS projector capable of generating red, green, and blue light may be provided. The three colors of light are projected coaxially to form the pixels of an image. Assuming that the LBS projector produces a FOV 40 degrees wide and the eye relief (distance between the eye and the optical assembly) is 22 mm, the width of the metasurface on the optical assembly should be at least 16 mm (equal to 2·tan(20°)×22 mm). Thus, the center-to-center distance (horizontal pitch) of two adjacent pixels projected on the metasurface is 12.5 µm (=16 mm/1280), and the difference in reflection angle toward the pupil between adjacent pixels is 0.03125° (=40°/1280). In retinal-scanning-based wearable display applications, the optical assembly is required to focus the light of each emission angle onto the retina of the viewer. The 3D metastructures on the metasurface of the optical assembly accurately redirect each light signal (representing a single pixel) of the 2D image frame to a specific location on the retina. Generally, each pixel on the retina is about 20-30 µm in diameter; a higher VA (visual acuity) requires a smaller spot size (pixel diameter) on the retina.
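The arithmetic of this example can be checked directly (a sketch reproducing only the numbers stated above):

```python
# Illustrative check of the worked example: metasurface width, pixel pitch,
# and angular step for a 40-degree, 1280x720 LBS projector at 22 mm eye relief.
import math

fov_deg, h_pixels, eye_relief_mm = 40.0, 1280, 22.0

surface_w_mm = 2.0 * math.tan(math.radians(fov_deg / 2.0)) * eye_relief_mm
pitch_um = surface_w_mm / h_pixels * 1000.0
angle_step_deg = fov_deg / h_pixels

print(f"metasurface width: {surface_w_mm:.1f} mm")   # ~16.0 mm
print(f"pixel pitch:       {pitch_um:.2f} um")       # ~12.51 um
print(f"angular step:      {angle_step_deg} deg")    # 0.03125 deg
```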
Referring to fig. 8A and 8B, in some embodiments, the optical assembly 500 may be implemented with optical power, so that a viewer with myopia, hyperopia, etc. is able to see real objects in the environment. This is particularly advantageous when the optical assembly 500 is used for augmented or mixed reality displays. Accordingly, the optical assembly 500 may further include a refractive surface 50 disposed on one side of the optical assembly 500, with the light redirecting layer 60 disposed on the other side. The refractive surface 50 may comprise a convex or concave surface, whose curvature is determined by the viewer's refractive prescription. For example, the surface whose curvature is formulated according to the prescription may be the outer surface of the optical assembly 500 (farther from the eye of the viewer). The inner surface of the optical assembly 500 (nearer the eye of the viewer) may be provided with the light redirecting layer 60 having a metasurface for directing the light signals to the eye of the viewer. The refractive surface 50 may be an aspheric lens surface or a free-form surface.
Furthermore, in some embodiments, a spacer layer may be provided between the metasurface and the optical assembly for protecting the nanostructures on the metasurface from damage; alternatively, a spacer layer may be provided on one side of the light redirecting layer for protecting the three-dimensional nanostructures on the metasurface.
In some embodiments, the head-mounted display may include a support structure that may be worn on the head of a viewer to carry the first light emitter 11, the second light emitter 12, the first optical assembly 21, and the second optical assembly 22. The first optical assembly 21 and the second optical assembly 22 are disposed within the field of view of the viewer. In particular, the head-mounted display may be implemented in the form of a pair of glasses, referred to as smart glasses. In this case, the optical assembly may be combined with a prescription lens for correcting myopia, hyperopia, etc. When the head-mounted display is implemented as smart glasses, the optical components of the smart glasses may have refractive characteristics for correcting the viewer's vision in addition to their combiner function. The smart glasses may have optical components with refractive characteristics formulated according to a prescription to meet the vision-correction needs of nearsighted or farsighted viewers. In these cases, the optical components of the smart glasses (which may likewise be split into two pieces, one for each eye) may include a refractive surface 50. The refractive surface 50 and the optical component may be integrally formed as one piece using the same or different types of materials; they may also be made as separate pieces and then assembled together. In some cases, the light redirecting layer is disposed on one side of the optical component and the refractive surface 50 is disposed on the opposite side, as shown in fig. 8A and 8B. For example, the light redirecting layer may be on the inside (near the eye) of the optical component and the refractive surface 50 on the outside.
The previous embodiments are provided to enable any person skilled in the art to make or use the subject matter of the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art. The generic principles disclosed herein may be applied to other embodiments without the exercise of inventive faculty. The subject matter recited in the claims is not limited to the embodiments disclosed herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. Additional embodiments are contemplated as falling within the spirit and true scope of the disclosed subject matter. Accordingly, it is intended that the present invention cover the modifications and variations of this invention, provided they come within the scope of the appended claims and their equivalents.

Claims (20)

1. An optical assembly of a head mounted display, comprising:
a light redirecting layer disposed in a first optical path between a first light emitter and a first eye of a viewer, the light redirecting layer comprising a plurality of three-dimensional geometric patterns disposed on a surface of the light redirecting layer;
wherein the light redirecting layer comprises a plurality of subunit segments, each of the plurality of subunit segments respectively comprising a plurality of three-dimensional geometric patterns having different physical dimensions for respectively receiving light of different wavelengths in a plurality of first light signals emitted by the first light emitter and redirecting the light to the first eye of the viewer at different angles of incidence, each of the first light signals corresponding to a first pixel of an image,
wherein the plurality of three-dimensional geometric patterns comprise columnar three-dimensional nanostructures protruding from the surface of the light redirecting layer.
2. The optical assembly of the head mounted display of claim 1, wherein the first light emitter is configured to emit at least one of blue light, green light, and red light, or any combination thereof, and each of the plurality of subunit segments is configured to respectively receive the first light signal composed of any combination of the blue light, green light, or red light and redirect that combination to the first eye of the viewer at different angles of incidence.
3. The optical assembly of the head mounted display of claim 2, wherein the blue light, green light, or red light is received at the same location on the subunit segment.
4. The optical assembly of the head mounted display of claim 2, wherein the blue light, green light, or red light is received at different locations on the subunit segment.
5. The optical assembly of the head mounted display of claim 2, wherein the blue light, green light, or red light composing each of the first light signals is not emitted simultaneously.
6. The optical assembly of the head mounted display of claim 1, wherein an angle of incidence of any of the first light signals with respect to the portion of the light redirecting layer that receives that first light signal is not equal to an angle of reflection of the first light signal with respect to that portion of the light redirecting layer.
7. The optical assembly of the head mounted display of claim 1, wherein a cross-sectional area of the first light signal projected onto the light redirecting layer is substantially the same as an area of one of the subunit segments.
8. The optical assembly of the head mounted display of claim 7, wherein light of different wavelengths in a first light signal received by the same subunit segment of the plurality of subunit segments is redirected to the same location on the retina of the first eye of the viewer.
9. The optical assembly of the head mounted display of claim 2, wherein each of the plurality of subunit segments further comprises a first region for receiving and redirecting the blue light, a second region for receiving and redirecting the green light, or a third region for receiving and redirecting the red light.
10. The optical assembly of the head mounted display of claim 9, wherein two of the first, second, and third regions have the same horizontal or vertical position on the light redirecting layer, and the remaining one of the first, second, and third regions is aligned horizontally or vertically with those two regions.
11. The optical assembly of the head mounted display of claim 1, wherein the first light emitter is a micro light emitting diode, the optical assembly being disposed on one side of the first light emitter.
12. The optical assembly of the head mounted display of claim 1, wherein the light redirecting layer is disposed on one side of the optical assembly.
13. The optical assembly of the head mounted display of claim 11, wherein the first light signal is directed to the first eye of the viewer after passing through the light redirecting layer.
14. The optical assembly of the head mounted display of claim 1, wherein the optical assembly is configured to receive the first light signal emitted by the first light emitter and conducted through a light direction modifier for dynamically changing the direction of the first light signal over time.
15. The optical assembly of the head mounted display of claim 1, further comprising a spacer layer disposed on one side of the light redirecting layer for protecting the three-dimensional nanostructures on the super surface.
16. The optical assembly of the head mounted display of claim 1, wherein another light redirecting layer is disposed in a second optical path between a second light emitter and a second eye of the viewer for respectively receiving light of different wavelengths in a plurality of second light signals emitted by the second light emitter and redirecting the light to the second eye of the viewer at different angles of incidence, each of the second light signals corresponding to a second pixel of the image, wherein the different angles of incidence of the first light signals are related to a visual axis when the first eye sees the first pixel, the different angles of incidence of the second light signals are related to a visual axis when the second eye sees the second pixel, and wherein each of the first pixels is viewed by the viewer together with the corresponding second pixel, forming a binocular view of the image.
17. The optical assembly of the head mounted display of claim 1, wherein an area of a subunit segment adjacent to a neighboring subunit segment includes three-dimensional nanostructures that redirect two consecutive first light signals of two pixels to their respective locations on the retina of the first eye of the viewer.
18. The optical assembly of the head mounted display of claim 1, further comprising a refractive surface disposed on one side of the optical assembly.
19. The optical assembly of the head mounted display of claim 18, wherein the refractive surface comprises a convex or concave surface.
20. The optical assembly of the head mounted display of claim 18, wherein the light redirecting layer is disposed on one side of the refractive surface.