WO2022111668A1 - Virtual-reality fusion display device - Google Patents

Virtual-reality fusion display device

Info

Publication number
WO2022111668A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
ambient light
display device
combiner
wearer
Prior art date
Application number
PCT/CN2021/133765
Other languages
French (fr)
Chinese (zh)
Inventor
朱帅帅 (Zhu Shuaishuai)
邓焯泳 (Deng Zhuoyong)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2022111668A1 publication Critical patent/WO2022111668A1/en

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B27/017 Head mounted
    • G02B27/0172 Head mounted characterised by optical features
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G02B2027/0187 Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • G02B27/10 Beam splitting or combining systems
    • G02B27/28 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 for polarising
    • G02B27/286 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 for polarising for controlling or changing the state of polarisation, e.g. transforming one polarisation state into another

Definitions

  • the present application relates to the technical field of head-mounted display devices, and in particular, to a display device that realizes virtual-real fusion.
  • HMD: Head-Mounted Display
  • VR: Virtual Reality
  • AR: Augmented Reality
  • MR: Mixed Reality
  • AR and MR HMDs can enable the wearer to observe real objects and virtual content at the same time. However, because the HMD's combiner projects the light carrying the real object and the light carrying the virtual content into the wearer's eyes simultaneously, from the wearer's perspective the virtual content cannot effectively occlude the real object. Problems such as ghosting arise between the virtual content and the real object, the effect of virtual-real fusion is not achieved, and the wearer's experience is poor.
  • the purpose of the present application is to provide a display device that realizes virtual-real fusion, so as to solve the problem that the existing head-mounted display device cannot realize the virtual-real fusion.
  • the present application provides a display device for realizing virtual-real fusion, including: a first combiner, a second combiner, a first modulation component, a second modulation component, an occlusion template, and an optical machine. The first modulation component is located on the light-emitting side of the first combiner; the second modulation component is located on the light-emitting side of the first modulation component and on the light-incident side of the second combiner.
  • the first combiner is configured to receive first ambient light and transmit the first ambient light to the first modulation component, where the first ambient light carries real object information.
  • the first modulation component is configured to modulate the polarization state of the first ambient light to form a second ambient light, and image the second ambient light to the occlusion template.
  • the occlusion template is used to modulate the polarization state of the second ambient light according to the occlusion relationship between the real object and the virtual object to form a third ambient light.
  • the first modulation component modulates the polarization state of the third ambient light again to form a fourth ambient light, and transmits the fourth ambient light to the second modulation component.
  • the second modulation component is used to transmit the fourth ambient light to the second combiner, and also to modulate the polarization state of the first display light to form a second display light and transmit the second display light to the second combiner. The first display light is generated by the optical machine and carries the virtual object information; the polarization state of the second display light differs from that of the fourth ambient light.
  • the second combiner is used to transmit the fourth ambient light and the second display light so that both are incident on the wearer's eyes. Based on this, when the wearer sees the virtual content and the real object, the two can occlude each other, achieving the effect of virtual-real fusion and providing the wearer with a more natural and realistic visual experience.
  • the first modulation component includes a first lens; the first lens faces the light-emitting side of the first combiner and is used for imaging the first ambient light onto the occlusion template, so that the polarization state of the light can be modulated by the occlusion template.
  • the first modulation component further includes a first polarizer, a first polarizing beam splitter and a half-wave plate.
  • the first polarizer and the first polarizing beam splitter are located in sequence on the light-emitting side of the first lens, away from the first combiner; the half-wave plate is located on one side of the first polarizing beam splitter.
  • the occlusion template is located on the back focal plane of the first lens, and the angle between the fast-axis direction of the half-wave plate and the polarization direction of the third ambient light is 45°.
  • the first polarizer is used to modulate the polarization state of the first ambient light to form the second ambient light;
  • the first polarizing beam splitter is used to reflect the second ambient light to the occlusion template, and to transmit the third ambient light;
  • the half-wave plate is used to modulate the polarization state of the third ambient light to form the fourth ambient light.
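The polarizer, beam splitter, occlusion template and half-wave plate described above can be sketched numerically with Jones vectors. This is an illustration under assumed conventions (a Jones vector is an (s, p) amplitude pair, the polarizer and polarizing beam splitter are ideal, and an occlusion-template pixel either flips S to P or leaves the polarization unchanged); it is not an implementation taken from the patent:

```python
# Jones-vector sketch of the ambient-light path through the first
# modulation component: polarizer -> PBS -> occlusion template -> PBS
# -> half-wave plate. All conventions here are assumptions.

def mat_vec(m, v):
    """Apply a 2x2 Jones matrix to an (s, p) Jones vector."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

POLARIZER_S = ((1, 0), (0, 0))   # ideal S-pass linear polarizer
SWAP_SP = ((0, 1), (1, 0))       # half-wave retardance, fast axis at 45 deg

def pbs_transmit(v):
    """The PBS transmits the P component and reflects the S component away."""
    return (0, v[1])

def first_modulation_path(ambient, pixel_flips):
    """Trace one ray; pixel_flips=True models a pixel that flips S to P,
    pixel_flips=False a pixel that leaves the polarization unchanged."""
    second = mat_vec(POLARIZER_S, ambient)            # second ambient light (S)
    third = mat_vec(SWAP_SP, second) if pixel_flips else second
    third = pbs_transmit(third)                       # only P survives the PBS
    fourth = mat_vec(SWAP_SP, third)                  # half-wave plate: P -> S
    return fourth

unpolarized = (0.7, 0.7)
print(first_modulation_path(unpolarized, pixel_flips=True))   # (0.7, 0.0): passed, S-polarized
print(first_modulation_path(unpolarized, pixel_flips=False))  # (0.0, 0.0): blocked
```

The pixel state thus decides, per image region, whether the corresponding part of the real object reaches the eye, which is the occlusion mechanism the bullets above describe.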
  • the second modulation component includes a second polarizer, a second polarizing beam splitter, and a second lens.
  • the second lens faces the light-incident side of the second combiner; the second polarizing beam splitter is located on the light-incident side of the second lens and faces the half-wave plate; the optical machine and the occlusion template are both located on the focal plane of the second lens.
  • the second polarizer is used to modulate the polarization state of the first display light to form the second display light.
  • the second polarizing beam splitter is used for reflecting the fourth ambient light to the second lens, and for transmitting the second display light to the second lens.
  • the second lens is used for collimating the fourth ambient light and the second display light.
  • the display device further includes a first zoom lens located on the light-incident side of the first combiner; the first zoom lens is used for collimating the first ambient light so that the first ambient light is transmitted in the first combiner.
  • the display device further includes a second zoom lens located on the light-emitting side of the second combiner; the second zoom lens is used for collimating the fourth ambient light and the second display light transmitted through the second combiner.
  • the display device further includes an eye tracking component; the eye tracking component is used to capture the gaze direction of the wearer's eyes.
  • the display device further includes a controller electrically connected to the first zoom lens, the second zoom lens, and the eye tracking component, respectively; the controller is configured to adjust the optical power of the first zoom lens and the second zoom lens according to the direction of the wearer's line of sight. Based on this, the controller can adjust the refractive power of the two zoom lenses to adaptively adjust the presentation of the real object and the virtual object.
  • the depth of the wearer's gaze point is obtained from the wearer's line of sight.
  • the controller is configured to control the first zoom lens and the second zoom lens so that the virtual images corresponding to the real object and the virtual object are set on the virtual image plane where the gaze point is located, to mitigate the vergence-accommodation conflict (VAC).
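As a rough illustration of this control, under a thin-lens assumption a controller could set the eye-side zoom lens so that collimated display light appears to come from the gaze depth, and cancel that power on the world side so the see-through real scene stays undistorted. The sign convention and the cancellation `p1 = -p2` are assumptions made for this sketch, not values taken from the patent:

```python
# Hedged sketch: drive the two zoom lenses so the virtual image lands
# on the virtual image plane at the wearer's gaze depth. Thin-lens
# assumption; the specific relations are illustrative only.

def zoom_lens_powers(gaze_depth_m):
    """Return (P1, P2) in diopters for a gaze point gaze_depth_m away.

    P2 makes collimated display light diverge as if from the gaze
    depth; P1 cancels P2 so the see-through real world is unchanged.
    """
    if gaze_depth_m <= 0:
        raise ValueError("gaze depth must be positive")
    p2 = -1.0 / gaze_depth_m   # virtual image at the gaze plane
    p1 = -p2                   # compensate on the ambient-light side
    return p1, p2

print(zoom_lens_powers(2.0))   # gaze point 2 m away -> (0.5, -0.5)
```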
  • the display device further includes a depth detector; the depth detector is provided on the side of the first combiner away from the second combiner, and is used to obtain depth information of the real environment around the wearer.
  • the occlusion template is further configured to refresh the third ambient light in real time, so as to adaptively change the occlusion relationship between the real object and the virtual object.
  • the optical machine is further configured to refresh the virtual object information carried by the first display light in real time, so as to adaptively change the occlusion relationship between the virtual object and the real object.
  • the controller is configured to adjust the optical power of the first zoom lens and the second zoom lens according to the following formula.
  • P1 is the optical power of the first zoom lens;
  • P2 is the optical power of the second zoom lens;
  • M is the wearer's degree of myopia;
  • V is the depth of the gaze point along the wearer's line of sight.
  • the display device can adaptively correct the wearer's myopia.
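The formula itself is not reproduced in this text, so the relation below is purely a hypothetical illustration of how the quantities P1, P2, M and V could enter such a control law: the two lens powers are chosen so the see-through path has net power -M (offsetting myopia of M diopters) while the display light appears, likewise corrected, at the gaze depth V:

```python
# Hypothetical control law (NOT the patent's formula) relating the
# zoom-lens powers to myopia M (diopters) and gaze depth V (metres).

def lens_powers(myopia_diopters, gaze_depth_m):
    """Return (P1, P2) in diopters under the assumed relation."""
    if gaze_depth_m <= 0:
        raise ValueError("gaze depth must be positive")
    p2 = -(1.0 / gaze_depth_m + myopia_diopters)  # eye-side zoom lens
    p1 = -myopia_diopters - p2                    # world-side zoom lens
    return p1, p2

p1, p2 = lens_powers(2.0, 0.5)   # 2 D myope, gaze point 0.5 m away
print(p1, p2)                    # net see-through power p1 + p2 == -2.0
```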
  • the occlusion template is further used to defocus the incident second ambient light; and/or the optical machine is used to defocus the first display light. Based on this, the virtual-real fusion effect between the real object and the virtual object can be improved, improving the wearer's visual experience.
  • the display device further includes an electrochromic sheet; the electrochromic sheet is sandwiched between the first combiner and the second combiner, and is used to block the light transmitted through the first combiner.
  • the display device further includes a shielding piece; the shielding piece is sandwiched between the first combiner and the second combiner, and is used to block light passing through the first combiner.
  • the display device is a head-mounted display device or a head-up display device.
  • the head-up display device can be applied in the smart cockpit of the car.
  • the head-up display device can cooperate with the front windshield of a car to present relevant content on the windshield.
  • the included angle between the fast axis direction of the half-wave plate and the polarization direction of the third ambient light may also be about 45°.
  • the first lens is a single lens, or the first lens is an assembly of two or more lenses.
  • the second lens is a single lens, or the second lens is an assembly of two or more lenses.
  • the first combiner is a diffractive optical waveguide, a reflective optical waveguide or a pinhole mirror.
  • the second combiner is a diffractive optical waveguide, a reflective optical waveguide or a pinhole mirror.
  • the occlusion template is LCoS or DLP.
  • the wearer can see virtual content and real objects fused together, providing the wearer with a more natural and realistic visual experience.
  • FIG. 1 is a schematic diagram of a conventional AR display device.
  • FIG. 2 is a schematic diagram of a real object and virtual content under a first positional relationship.
  • FIG. 3 is a schematic diagram of a real object and virtual content in a second positional relationship.
  • FIG. 4 is a schematic diagram of a real object and virtual content in a third positional relationship.
  • FIG. 5 is a frame diagram of a head-mounted display device according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a head-mounted display device according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a head-mounted display device according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the occlusion template, the virtual content, and the scene seen by the wearer when the wearer looks at the real object.
  • FIG. 9 is a schematic diagram of the occlusion template, the virtual content, and the scene seen by the wearer when the wearer stares at the virtual object.
  • FIG. 10 is a schematic diagram of a scene when a wearer observes a real object.
  • FIG. 11 is a schematic diagram of a scene when a wearer observes real objects and virtual content.
  • FIG. 12 is a schematic diagram of a scene when a wearer observes real objects and virtual content through the head-mounted display device of the embodiment.
  • an AR head-mounted display device 10 which generally includes an optical machine 12 , a combiner 14 , an in-coupling grating 16 and an out-coupling grating 18 .
  • Light reflected from a real object 20 existing in the real world may pass through the outcoupling grating 18 and the combiner 14 in sequence to be incident on the wearer's eye 30 .
  • the optical machine 12 is used to output light carrying the virtual content 22; the light carrying the virtual content 22 can be coupled into the combiner 14 through the in-coupling grating 16, and propagates in the combiner 14 by total internal reflection toward the out-coupling grating 18.
  • the light is coupled out by the outcoupling grating 18 and exits to the wearer's eyes 30, correspondingly enabling the wearer to see virtual content 22 that does not exist in the real world. Based on this, while observing the real object 20 through the head-mounted display device 10 , the wearer can also see the virtual content 22 , so as to provide a different visual experience from electronic devices such as mobile phones and televisions.
  • the virtual content 22 can be processed by modeling, rendering, etc., to adaptively present the virtual content 22.
  • the virtual content 22 is partially occluded by the real object 20; or, there is no occlusion relationship between the virtual content 22 and the real object 20, and so on.
  • the virtual content 22 may include virtual objects and corresponding virtual three-dimensional scenes; or, the virtual content 22 may also only include virtual objects or virtual three-dimensional scenes, which is not limited.
  • the virtual content 22 may be a whale floating in the air, a hot air balloon, a building sign, and the like.
  • the real object 20 is an object existing in the real world
  • the light carrying the real object 20 is directly incident on the wearer's eyes 30 through the head mounted display device 10 . That is, the light carrying the virtual content 22 and the light carrying the real object 20 are relatively independent, and the virtual content 22 generated by the optical machine 12 cannot block the real object 20 .
  • a cube is generally used as an example of the real object 20 and a cylinder is used as an example of the virtual content 22 for description.
  • FIG. 2 is a schematic diagram of a real object and virtual content under a first positional relationship. As shown in FIG. 2, based on the head-mounted display device 10, under the first positional relationship there is no overlap between the real object 20 and the virtual content 22, so the wearer can relatively clearly see the real object 20 and the virtual content 22 in their different directions.
  • FIG. 3 is a schematic diagram of a real object and virtual content in a second positional relationship.
  • when the virtual content 22 is located behind the real object 20, although the two overlap, since the virtual content 22 is generated by the optical machine 12, the virtual content 22 can be rendered or otherwise processed so that the part of the virtual content 22 located behind the real object 20 is not displayed.
  • in the wearer's eyes, the virtual content 22 can thus be seen relatively clearly behind the real object 20.
  • FIG. 4 is a schematic diagram of a real object and virtual content in a third positional relationship.
  • the virtual content 22 should block a part of the real object 20. That is, the wearer should not be able to see, through the virtual content 22, the part of the real object 20 behind it.
  • the head-mounted display device 10 can only perform processing such as rendering on the virtual content 22 (the cylinder), and cannot perform such processing on the real object 20 (the cube); accordingly, the virtual content 22 in front cannot properly occlude the real object 20.
  • problems such as ghosting may exist between the virtual content 22 and the real object 20, the sense of incongruity is strong, and the wearer's experience is also poor.
  • the embodiment of the present application provides an optical see-through display device for virtual and real fusion.
  • the wearer sees the virtual content and the real object
  • the virtual content and the real object can occlude each other, achieving the effect of virtual-real fusion and providing the wearer with a more natural and realistic visual experience.
  • the optical see-through display device is mainly illustrated by a head-mounted display device, but is not limited thereto.
  • the optical see-through display device may also be a head up display device (Head Up Display, HUD).
  • HUD: Head-Up Display
  • the head-up display device can be applied in the smart cockpit of the car.
  • the head-up display device can cooperate with the front windshield of a car to present relevant content on the windshield.
  • ambient light and display light are defined in various embodiments of the present application.
  • the ambient light refers to the light reflected by the real object, and the ambient light carries real object information; the ambient light includes, for example, a first ambient light, a second ambient light, and the like.
  • the display light is light carrying virtual content, which can be generated by the optical machine 12; the display light includes, for example, a first display light, a second display light, and the like.
  • a head-mounted display device 100 for realizing virtual-real fusion provided by an embodiment of the present application.
  • the head-mounted display device 100 includes a first combiner 110 and a second combiner 120, and both the first combiner 110 and the second combiner 120 can transmit light by total internal reflection.
  • the first combiner 110 is mainly used to transmit ambient light
  • the second combiner 120 is mainly used to transmit ambient light and display light. It should be understood that, as illustrated in FIG. 5 , in each embodiment, the optical path of ambient light transmission is extended by the cooperation of the first combiner 110 and the second combiner 120 .
  • through the relevant structures, the ambient light can be polarized, blocked, etc. relatively conveniently, so that the visual information of the real object 20 carried by the ambient light can be changed, thereby adaptively adjusting the mutual occlusion relationship between the real object 20 and the virtual content 22.
  • from the perspective of the wearer, the real object 20 and the virtual content 22 can occlude each other, and the visual presentation of the head-mounted device is better.
  • the first combiner and the second combiner are spaced apart, but this is not limiting. In some other embodiments, the first combiner and the second combiner can also be arranged in stacked layers.
  • the head-mounted display device 100 may further include: a first modulation component 130 , a second modulation component 140 , a blocking template 152 and an optomechanical 154 .
  • the first modulation component 130, the second modulation component 140, the occlusion template 152 and the optical machine 154 may all be disposed on the same side of the first combiner 110, close to the second combiner 120.
  • the side of the first combiner 110 away from the second combiner 120 has fewer related structures, so the thickness of the head-mounted display device 100 can be better controlled, improving the degree of integration of the head-mounted display device 100.
  • the head-mounted display device 100 can also have a small volume, which is convenient for the wearer to wear or carry.
  • the first modulation component 130 is located on the light-emitting side of the first combiner 110 , and can transmit and modulate the first ambient light emitted by the first combiner 110 to form the second ambient light.
  • the shielding template 152 can modulate and reflect the second ambient light transmitted through the first modulation component 130 to form the third ambient light.
  • the first modulation component 130 can transmit and modulate the third ambient light modulated by the occlusion template 152 again to form the fourth ambient light.
  • the fourth ambient light can be transmitted to the second modulation component 140 for subsequent processing.
  • the shielding template 152 may be disposed on one side of the first modulation component 130 and away from the second combiner 120 .
  • the first modulation component 130 may perform imaging, polarization and other processing on the first ambient light, and image (relay) the image of the real object 20 corresponding to the formed second ambient light onto the occlusion template 152; That is, the occlusion template 152 is optically conjugated to the real object 20 . Based on this, the second ambient light can be modulated and reflected through the occlusion template 152 to adjust the visual information of the real object 20 carried by the second ambient light.
  • the second modulation component 140 is disposed on one side of the first modulation component 130 and away from the shielding template 152 . Taking the second combiner 120 as a reference, the second modulation component 140 is specifically located on the light incident side of the second combiner 120 and away from the first combiner 110 . Wherein, the second modulation component 140 can at least transmit and collimate the fourth ambient light, so as to transmit the fourth ambient light to the second combiner 120 .
  • the optical machine 154 may be disposed on the side of the second modulation component 140 away from the second combiner 120. Based on this, the optical machine 154 may emit the first display light carrying the virtual content 22. It should be understood that the second modulation component 140 can also receive the first display light emitted by the optical machine 154, and can polarize, collimate, etc. the first display light to form the second display light, which is transmitted to the second combiner 120.
  • after the fourth ambient light and the second display light are combined through the second modulation component 140, they can be incident on the second combiner 120 and transmitted within it to reach the wearer's eyes 30, so that a visual effect of mutual occlusion is achieved between the real object 20 and the virtual content 22.
  • the first combiner 110 may include a first in-coupling grating 112 , a first substrate 114 and a first out-coupling grating 116 .
  • the first in-coupling grating 112 and the first out-coupling grating 116 are disposed at intervals, and are both disposed on the surface of the first substrate 114 .
  • the first ambient light is coupled into the first substrate 114 after being diffracted by the first coupling grating 112 .
  • a certain diffraction order of the first ambient light may undergo a phenomenon of total reflection so as to be transmitted toward the direction of the first coupling-out grating 116 .
  • the first ambient light transmitted through the first substrate 114 is diffracted again by the first coupling-out grating 116 to be coupled out from the first combiner 110 and enter the first modulation component 130 .
  • the light incident side of the first combiner 110 may be the side corresponding to the first coupling grating 112 , that is, the side where light enters the first combiner 110 as shown in FIGS. 6 and 7 .
  • the light-emitting side of the first combiner 110 may be the side corresponding to the first outcoupling grating 116 , that is, the side where the light exits the first combiner 110 as shown in FIGS. 6 and 7 .
  • the first combiner 110 may be, for example, a diffractive optical waveguide, a reflective optical waveguide, or a pin-hole mirror (Pin-mirror), which is not limited thereto.
  • the first coupling grating 112 and the first coupling out grating 116 are both reflective diffraction gratings, but not limited thereto.
  • at least one of the first coupling-in grating 112 and the first coupling-out grating 116 may also be a transmissive diffraction grating. It should be understood that, based on the type of the first in-coupling grating 112 and the first out-coupling grating 116 , the light-incident side and the light-exiting side of the first combiner 110 may be the same or different.
  • the first coupling grating 112 is a reflective diffraction grating
  • the light incident side of the first combiner 110 is the side away from the first coupling grating 112 .
  • the first coupling grating 112 is a transmissive diffraction grating
  • the light incident side of the first combiner 110 is the side where the first coupling grating 112 is disposed.
  • the first modulation component 130 may include a first lens 132; the first lens 132 faces the first out-coupling grating 116 of the first combiner 110, and the occlusion template 152 is located on the back focal plane (also known as the second focal plane) of the first lens 132. Based on this, after the first ambient light is coupled out from the first combiner 110, the first lens 132 can image the image of the real object 20 corresponding to the first ambient light onto the occlusion template 152.
  • the occlusion template 152 needs to modulate and reflect the first ambient light; that is, between the first modulation component 130 and the second modulation component 140 , the occlusion template 152 needs to reflect the light carrying real object information.
  • the first modulation component 130 may further include a first polarizer (Polarizer) 134 and a first polarization beam splitter (Polarization Beam Splitter, PBS) 136. The first polarizer 134 and the first polarizing beam splitter 136 are located in sequence on the light-exit side of the first lens 132, away from the first combiner 110; the occlusion template 152 is provided on the side of the first polarizing beam splitter 136 away from the second modulation component 140.
  • the first polarizer 134 can modulate the first ambient light into S-polarized light, that is, the second ambient light. Based on this, the first polarizing beam splitter 136 can reflect the second ambient light to the occlusion template 152.
  • because the occlusion template 152 is located on the back focal plane of the first lens 132 in the optical path, the occlusion template 152 is optically conjugate to the real object 20.
  • the second ambient light with a specific polarization state can be imaged onto the occlusion template 152 , and the formed image corresponds to the real object 20 .
  • the occlusion template 152 may be, for example, a liquid crystal on silicon (LCoS) device or a digital light processing (DLP) device, or another device capable of modulating ambient light, without limitation. Based on this, the occlusion template 152 can perform pixel-level polarization phase modulation on the second ambient light, so as to change the visual information of the real object 20 carried by the second ambient light.
  • the controller of the head-mounted display device can obtain the depth, position and attitude information of the real object 20 and of the virtual content 22 output by the optical machine, and then, by comparing the information of the virtual content 22 and the real object 20, obtain the occlusion relationship between the real object 20 and the virtual content 22, as described in detail below.
  • through the occlusion template 152, a part of the image corresponding to the real object 20 can be occluded, namely the part of the real object 20 that is blocked by the virtual content 22. Therefore, when the subsequent fourth ambient light is incident on the wearer's eyes 30, the real object 20 seen by the wearer based on the fourth ambient light differs from the real object 20 the wearer would see after removing the head-mounted display device 100. That is, a part of the real object 20 that the wearer sees based on the fourth ambient light may be missing, and this part is the part of the real object 20 blocked by the virtual content 22.
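The per-pixel decision just described can be sketched as a depth comparison: wherever the virtual content is nearer to the wearer than the real object, the corresponding occlusion-template pixel blocks the ambient light. The data layout and state encoding below are assumptions made for illustration, not structures from the patent:

```python
# Sketch: derive the occlusion template's per-pixel state by comparing
# the real-scene depth map (e.g. from the depth detector) with the
# depth of the rendered virtual content.

BLOCK, PASS = 0, 1   # assumed pixel states: block or pass ambient light

def occlusion_mask(real_depth, virtual_depth):
    """real_depth / virtual_depth: 2-D lists of depths in metres;
    None in virtual_depth means no virtual content at that pixel."""
    mask = []
    for real_row, virt_row in zip(real_depth, virtual_depth):
        row = []
        for r, v in zip(real_row, virt_row):
            # block the real object only where virtual content is in front
            row.append(BLOCK if v is not None and v < r else PASS)
        mask.append(row)
    return mask

real = [[2.0, 2.0], [2.0, 2.0]]
virt = [[1.0, None], [3.0, 1.5]]
print(occlusion_mask(real, virt))   # [[0, 1], [1, 0]]
```

Refreshing this mask each frame corresponds to the real-time refresh of the occlusion relationship described elsewhere in the application.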
  • taking the case where the occlusion template 152 is an LCoS as an example, the LCoS can modulate the incident second ambient light and then reflect the modulated third ambient light.
  • the LCoS can perform binary phase modulation on the second ambient light; that is, each pixel on the LCoS has two states: one that changes the polarization state of the incident light, and one that does not.
• The states of the pixels on the LCoS are defined as a first state and a second state. Under the modulation of a pixel in the first state, the corresponding part of the second ambient light is reflected as light in the P polarization state; based on the optical characteristics of the first polarizing beam splitter 136, light in the P polarization state can pass through the first polarizing beam splitter 136 for subsequent processing.
• Under the modulation of a pixel in the second state, the corresponding part of the second ambient light is reflected as light in the S polarization state. Based on the optical characteristics of the first polarizing beam splitter 136, light in the S polarization state cannot pass through the first polarizing beam splitter 136; that is, the part of the image of the real object 20 corresponding to this light is blocked and thus cannot be incident on the wearer's eyes 30.
  • the pixels on the LCoS can be switched between the first state and the second state to realize modulation of the second ambient light according to requirements such as usage scenarios and picture presentation effects.
  • the state of the pixels on the LCoS changes in real time to refresh the fusion effect between the real object 20 and the virtual content 22 in real time.
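The per-pixel decision described above can be sketched as a binary mask computed from depth information. The sketch below is a hypothetical illustration, not code from the patent: the function name, the depth-map representation (callables returning a depth in metres, or None where nothing is present) and the nearer-wins comparison are all assumptions.

```python
PASS, BLOCK = 0, 1  # first state (reflects P, transmitted) / second state (reflects S, rejected)

def occlusion_mask(virtual_depth, real_depth, width, height):
    """Per-pixel LCoS state map. A pixel is set to BLOCK where the
    virtual content lies in front of the real object, so the
    corresponding part of the real-object image is intercepted."""
    mask = [[PASS] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            vd, rd = virtual_depth(x, y), real_depth(x, y)
            if vd is not None and rd is not None and vd < rd:
                mask[y][x] = BLOCK  # virtual content occludes the real object here
    return mask
```

Recomputing the mask every frame corresponds to switching pixels between the two states in real time, as described above.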
  • the first modulation component 130 may further include a half-wave plate 138 .
• The half-wave plate 138 is located on the light-emitting side of the first polarizing beam splitter 136, away from the occlusion template 152.
• The half-wave plate 138 can modulate the polarization state of the third ambient light to the S polarization state, thereby forming the fourth ambient light, which is transmitted to the second combiner 120 so as to cooperate with the second modulation component 140.
• The included angle between the fast-axis direction of the half-wave plate 138 and the polarization direction of the incident third ambient light is 45° (or approximately 45°). Based on this, after the incident third ambient light in the P polarization state exits the half-wave plate 138, its P polarization state is modulated into the S polarization state, thereby forming the fourth ambient light.
• The fourth ambient light in the S polarization state facilitates subsequent transmission through the second modulation component 140.
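The 45° half-wave-plate behaviour can be checked with standard Jones calculus. The sketch below is an independent illustration (overall phase omitted; "P" is taken as horizontal and "S" as vertical by convention), not code from the patent:

```python
import math

def hwp_jones(theta):
    """Jones matrix of an ideal half-wave plate with its fast axis at
    angle theta (radians) to the horizontal; overall phase omitted."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[c, s], [s, -c]]

def apply(m, v):
    """Multiply a 2x2 Jones matrix by a Jones vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

p_light = [1.0, 0.0]                               # P (horizontal) polarization
out = apply(hwp_jones(math.radians(45)), p_light)  # ≈ [0, 1]: S (vertical) polarization
```

With the fast axis at 45°, the matrix swaps the horizontal and vertical field components, which is exactly the P-to-S conversion the half-wave plate 138 performs.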
  • the first lens 132 is illustrated as a single lens. In some other embodiments, the first lens 132 may also be an assembly of two or more lenses, which is not limited.
  • the second modulation component 140 may include a second polarizer 142 , a second polarizing beam splitter 144 and a second lens 146 arranged in sequence.
• The second polarizer 142 is disposed between the optical machine 154 and the second polarizing beam splitter 144, and can modulate the first display light emitted by the optical machine 154 into light in the P polarization state, that is, form the second display light.
  • the second polarizing beam splitter 144 faces the half-wave plate 138 , which can reflect the fourth ambient light in the S-polarized state to the second lens 146 .
  • the second polarizing beam splitter 144 can also transmit the second display light in the P-polarized state to the second lens 146, and based on this, the fourth ambient light and the second display light can be combined.
  • the second lens 146 faces the second polarizing beam splitter 144 and is located between the second polarizing beam splitter 144 and the second combiner 120 .
  • the second lens 146 is facing the light incident side of the second combiner 120, and can perform collimation processing on the combined fourth ambient light and the second display light.
• The occlusion template 152 and the optical machine 154 are both located on the focal plane of the second lens 146. Therefore, after being collimated by the second lens 146, the fourth ambient light and the second display light can be transmitted within the second combiner 120 and incident on the wearer's eyes 30.
• The optical machine 154 may adaptively refrain from displaying another part of the virtual content 22, namely the part of the virtual content 22 that is blocked by the real object 20. Based on this, similar to fitting pieces of a puzzle, from the wearer's perspective the real object 20 and the virtual content 22 can merge with each other exactly, with no or little overlap between the two, achieving a good visual effect of virtual-real fusion.
• The optical machine 154 may be, for example, a device based on liquid crystal on silicon (LCoS), digital light processing (DLP), micro organic light-emitting diode (Micro-OLED), micro light-emitting diode (Micro-LED) or a laser beam scanner (LBS); this is not limited.
  • the second lens 146 is illustrated as a single lens. In some other embodiments, the second lens 146 may also be an assembly of two or more lenses, which is not limited.
  • the second combiner 120 may include a second in-coupling grating 122 , a second substrate 124 and a second out-coupling grating 126 .
• The second in-coupling grating 122 and the second out-coupling grating 126 are spaced apart, and both are disposed on the surface of the second substrate 124. Similar to the first combiner 110, the combined fourth ambient light and second display light are coupled into the second substrate 124 after being diffracted by the second in-coupling grating 122.
• Within the second substrate 124, a diffraction order of the fourth ambient light and a diffraction order of the second display light both undergo total internal reflection and are transmitted toward the second out-coupling grating 126.
• The light transmitted through the second substrate 124 can be diffracted again by the second out-coupling grating 126 so as to be coupled out from the second combiner 120.
  • the fourth ambient light and the second display light coupled out from the second combiner 120 may be incident on the wearer's eye 30 . Based on this, the wearer can see the real object 20 and the virtual content 22 in which the virtual and the real are merged, so as to improve the visual experience.
• The light-incident side of the second combiner 120 may be the side corresponding to the second in-coupling grating 122, that is, the side where light enters the second combiner 120 as shown in FIGS. 6 and 7.
  • the light-emitting side of the second combiner 120 may be the side corresponding to the second outcoupling grating 126 ; that is, the side where light exits the second combiner 120 as shown in FIGS. 6 and 7 .
  • the second combiner 120 may be, for example, a diffractive optical waveguide, a reflective optical waveguide, or a pinhole mirror, etc., which is not limited.
• In the illustrated embodiments, the second in-coupling grating 122 and the second out-coupling grating 126 are both reflective diffraction gratings, but this is not limited. In some other embodiments, at least one of the second in-coupling grating 122 and the second out-coupling grating 126 may also be a transmissive diffraction grating.
  • the light-incident side and the light-exiting side of the second combiner 120 may be the same or different.
• When the second in-coupling grating 122 is a reflective diffraction grating, the light-incident side of the second combiner 120 is the side facing away from the second in-coupling grating 122.
• When the second in-coupling grating 122 is a transmissive diffraction grating, the light-incident side of the second combiner 120 is the side on which the second in-coupling grating 122 is disposed.
• The head-mounted display device 100 may further include an electrochromic sheet 162 sandwiched between the first combiner 110 and the second combiner 120.
  • the head mounted display device 100 needs to perform occlusion processing on the light carrying real object information. That is, when the head mounted display device 100 works normally, the first ambient light will not directly enter the wearer's eyes 30 , and the wearer will not directly see the real object 20 .
• When energized, the electrochromic sheet 162 can block the part of the first ambient light passing through the first combiner 110, so as to prevent that part of the first ambient light from being directly incident on the wearer's eyes 30 and to avoid problems such as ghosting between the real object 20 and the virtual content 22 and loss of the occlusion effect on the real object 20.
• When the electrochromic sheet 162 is not energized, it is in a transparent state, based on which the wearer can directly see the real object 20 through the second combiner 120, the electrochromic sheet 162 and the first combiner 110.
• In some other embodiments, the head-mounted display device 100 may use a dark-colored blocking sheet instead of the electrochromic sheet 162, which can likewise block the part of the first ambient light passing through the first combiner 110.
  • the head mounted display device 100 may further include a depth detector 164 .
• The depth detector 164 may be disposed on the side of the first combiner 110 facing away from the second combiner 120. It should be understood that the depth detector 164 is used to obtain depth information of the real environment around the wearer.
  • the depth detector 164 may be, for example, a device based on a binocular camera, structured light, time-of-flight (ToF), or the like.
• SLAM: Simultaneous Localization And Mapping.
  • the head mounted display device 100 may further include a first zoom lens 166 .
• The first zoom lens 166 is disposed on the side of the first combiner 110 facing away from the second combiner 120.
• The first zoom lens 166 is used for collimating the first ambient light so that the first ambient light can be transmitted in the first combiner 110. It should be understood that the collimated first ambient light can be coupled into the first substrate 114 through the first in-coupling grating 112 and transmitted toward the first out-coupling grating 116.
  • the head-mounted display device 100 may further include a second zoom lens 168 .
• The second zoom lens 168 is disposed on the side of the second combiner 120 facing away from the first combiner 110; that is, the two combiners (110, 120) are sandwiched between the first zoom lens 166 and the second zoom lens 168. It should be understood that after the combined fourth ambient light and second display light are coupled out through the second out-coupling grating 126, the second zoom lens 168 can collimate them so that the collimated fourth ambient light and second display light are incident on the wearer's eyes 30.
  • the first zoom lens 166 is exemplified as a convex lens
  • the second zoom lens 168 is exemplified as a concave lens.
  • Both the first zoom lens 166 and the second zoom lens 168 can be, for example, a liquid crystal zoom lens, a liquid zoom lens, an Alvarez lens, or other devices that can realize real-time zooming.
  • both the first zoom lens 166 and the second zoom lens 168 can be replaced with fixed focal length lenses.
  • the first zoom lens 166 is replaced with a convex lens with a fixed focal length
  • the second zoom lens 168 is replaced with a concave lens with a fixed focal length, which is not limited.
  • the head mounted display device 100 may further include a controller 170 and an eye tracking component 172 .
  • the controller 170 is electrically connected to the first zoom lens 166, the second zoom lens 168 and the eye tracking component 172, respectively.
• The eye tracking component can capture the gaze direction of the wearer's eyes 30. From the intersection of the two eyes' lines of sight, the depth of the wearer's gaze point can be obtained. Based on that depth, the controller 170 may adjust the optical power of the first zoom lens 166 and the second zoom lens 168 to adaptively adjust the presentation effects of the real object 20 and the virtual content 22.
  • the number of controllers 170 is one, but not limited thereto. In some other embodiments, the number of the controllers 170 may also be two, so as to control the first zoom lens 166 and the second zoom lens 168 respectively.
  • the eye tracking component 172 may include a near-infrared transmitter 172a and a near-infrared receiver 172b.
  • the near-infrared transmitter 172a may emit near-infrared light
  • the near-infrared receiver 172b may receive the near-infrared light reflected from the wearer's eye 30 .
• Based on the received near-infrared light, the line-of-sight direction of each of the wearer's eyes can be obtained, and the depth of the wearer's gaze point can be obtained from the intersection of the two lines of sight.
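The vergence-based depth estimate can be sketched with a simple 2-D top-down model: with the eyes separated by the interpupillary distance and each eye's inward gaze angle known, the gaze point sits where the two lines of sight cross. The formula follows from tan(aL) + tan(aR) = ipd / z; the symmetric geometry and parameter names are assumptions for illustration, not the patent's algorithm.

```python
import math

def vergence_depth(ipd, a_left, a_right):
    """Depth z (same units as ipd) of the gaze point, given the
    interpupillary distance and each eye's inward gaze angle in
    radians measured from straight ahead."""
    return ipd / (math.tan(a_left) + math.tan(a_right))

# A gaze point 1 m straight ahead with a 64 mm interpupillary distance:
angle = vergence_angle = math.atan(0.032 / 1.0)  # each eye rotates inward by this angle
depth = vergence_depth(0.064, angle, angle)      # ≈ 1.0 m
```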
  • the SLAM system can reconstruct a three-dimensional model of the real environment.
  • the real environment can be understood as a part of the real world, which is generally updated in real time according to the position and posture of the wearer.
  • the virtual three-dimensional scene includes virtual objects, and information such as the position, posture, size, and color of the virtual objects in the virtual three-dimensional scene.
  • the mutual occlusion relationship between the virtual object and the real object 20 can be obtained by comparing the position, posture, depth and size of the virtual object and the real object 20 . For example, by comparing the position and depth of the virtual object and the real object, it can be determined that the virtual object is located behind the real object, and accordingly, the virtual object will be partially occluded by the real object. Another example: by comparing information such as the position and depth of the virtual object and the real object, it can be determined that the virtual object is located in front of the real object, and accordingly, the real object will be partially occluded by the virtual object.
• As another example, by comparing information such as the position, posture, size and depth of the virtual object and the real object, it can be determined that the virtual object and the real object are at roughly the same depth but in different orientations; it can then be determined that the virtual object and the real object are set apart from each other and do not occlude each other.
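The comparisons in the examples above can be condensed into a small classifier. This is a hypothetical sketch: the dict layout, the image-plane bounding-box overlap test and the depth tolerance are illustrative assumptions, not the patent's method.

```python
def occlusion_relation(virtual, real, depth_tol=0.05):
    """Classify the occlusion relationship between a virtual object and
    a real object. Each object is a dict with 'depth' (metres from the
    wearer along the line of sight) and 'bounds' (x0, y0, x1, y1), its
    footprint in the image plane."""
    def overlaps(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

    if not overlaps(virtual["bounds"], real["bounds"]):
        return "disjoint"                  # set apart, no mutual occlusion
    if abs(virtual["depth"] - real["depth"]) <= depth_tol:
        return "adjacent"                  # roughly the same depth
    if virtual["depth"] < real["depth"]:
        return "virtual-occludes-real"     # virtual object is in front
    return "real-occludes-virtual"         # real object is in front
```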
• The occlusion template 152 can modulate the second ambient light so that, in the subsequent light path, the part of the image corresponding to the real object 20 that is blocked by the virtual object is intercepted; after modulation by the occlusion template 152, only the portion of the image corresponding to the real object 20 that is not occluded by the virtual object is output.
  • images corresponding to the left and right eyes of the wearer are respectively generated.
• Virtual cameras can be placed at positions corresponding to the wearer's eyes; the optical axis of each virtual camera is consistent with the line of sight of the corresponding eye 30, and the position and size of the virtual camera's entrance pupil match the pupil of the wearer's eye 30.
  • the image obtained by the virtual camera shooting the virtual three-dimensional scene is the original virtual content.
  • the positions of the eyes of the wearer may be determined according to the position and posture of the wearer, and the direction of the line of sight of the wearer may be determined according to the eye tracking component 172 .
  • the part of the virtual object occluded by the real object 20 can be obtained, and the virtual content 22 in the above embodiments can be obtained by removing this part from the original virtual content.
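Removing the occluded part from the original virtual content can be sketched as a per-pixel depth test. Everything here (array layout, None for empty pixels) is an assumed illustration, not the patent's rendering pipeline:

```python
def cull_occluded_virtual(virtual_image, virtual_depth, real_depth):
    """Blank out virtual-content pixels where the real surface is
    nearer, leaving only the part of the virtual content that the
    optical machine should actually display. All inputs are equally
    sized 2-D lists; depths are in metres, None where empty."""
    height, width = len(virtual_image), len(virtual_image[0])
    out = [row[:] for row in virtual_image]
    for y in range(height):
        for x in range(width):
            vd, rd = virtual_depth[y][x], real_depth[y][x]
            if vd is not None and rd is not None and rd < vd:
                out[y][x] = None  # occluded by the real object: not displayed
    return out
```

This is the complement of the occlusion template's mask: together they make the real object and the virtual content fit like puzzle pieces.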
• The virtual content 22 may be emitted by the optical machine 154 as the first display light.
  • the fourth ambient light can be transmitted to the second combiner 120 through the second modulation component 140.
• The second display light can also be transmitted to the second combiner 120 and finally be incident on the wearer's eyes 30. Based on this, the wearer can see the real object 20 and the virtual content 22 in a virtual-real fused scene.
• When the wearer looks at the real object 20 in the real world, the eyeballs turn toward the object, and the human brain can judge the depth of the object from the vergence angle of the two eyes; this depth cue is called the vergence distance.
• The human eye adjusts the diopter of its crystalline lens through contraction of the ciliary muscle to image the target clearly, so the state of ciliary-muscle contraction also gives the brain a depth signal, called the accommodation distance.
• The head-mounted display device of each embodiment of the present application can also perform defocusing on the light carrying real-object information and on the light carrying virtual-object information, to improve the visual effect of virtual-real fusion.
• FIG. 8 is a schematic diagram of the occlusion template, the virtual content and the scene seen by the wearer when the wearer looks at the real object. Referring to FIG. 8, since the wearer's gaze point is located on the real object 20, the virtual object can be defocused and blurred to make the effect of virtual-real fusion more realistic. It should be understood that the degree of blurring of the virtual object can be determined according to the distance between the virtual object and the wearer's gaze point.
  • the occlusion template 152 can also be subjected to the same blurring processing, that is, defocus blurring processing is performed on the black cylinder in FIG. 8 .
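One common way to grade the defocus blur is the depth-of-field analogy: retinal blur grows with the dioptric distance between an object and the gaze point, scaled by pupil size. The model below, including the parameter values, is an illustrative assumption rather than the patent's rendering algorithm:

```python
def defocus_blur_radius(object_depth, gaze_depth, pupil_diameter=0.004, gain=1.0):
    """Blur-kernel radius (arbitrary units) for a virtual object at
    object_depth (m) while the wearer fixates at gaze_depth (m).
    Blur is proportional to the dioptric offset between the two,
    scaled by an assumed pupil diameter and tuning gain."""
    dioptric_offset = abs(1.0 / object_depth - 1.0 / gaze_depth)
    return gain * pupil_diameter * dioptric_offset

# An object at the gaze depth is rendered sharp; blur grows as the
# object moves away from the gaze point in diopters.
```

The same radius could be applied to both the rendered virtual object and the corresponding region of the occlusion template, so their blurs stay matched.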
• FIG. 9 is a schematic diagram of the occlusion template, the virtual content, and the scene seen by the wearer when the wearer looks at the virtual object. Referring to FIG. 9, since the wearer's gaze point is located on the virtual content 22, neither the occlusion template 152 nor the virtual content 22 is defocused. It should be understood that, since the first ambient light is derived from the real object 20, when the wearer looks at the virtual content 22 the image corresponding to the real object 20 will automatically be defocused and blurred, and no special processing is required.
  • defocusing of the virtual content 22 may be achieved by a rendering algorithm.
  • way of defocusing can also be understood by analogy with the depth of field in the field of photography.
• While realizing virtual-real fusion, the head-mounted display device 100 can also adaptively correct the wearer's possible nearsightedness or farsightedness. Taking myopia as an example, assume that the wearer's degree of myopia is M (in diopters), the corresponding desired virtual image position is V (in diopters), the optical power of the first zoom lens 166 is P1, and the optical power of the second zoom lens 168 is P2; then these parameters can satisfy the following relationship:
• The wearer's degree of myopia can be input by the wearer himself. The depth of the wearer's gaze point (i.e. the vergence depth) is obtained by the eye tracking component and is taken as the virtual image position V. Therefore, the virtual image position V can be adjusted by adjusting the values of P1 and P2 according to the above relationship, so as to achieve clear imaging for different wearers.
• The head-mounted display device 100 can obtain the depth of the wearer's gaze point in real time through the eye tracking component 172. Based on this depth, the first zoom lens 166 and the second zoom lens 168 can adaptively change their optical powers, taking over, to a certain extent, the focusing function of the wearer's crystalline lens so as to achieve clear imaging.
  • the wearer can better feel the relationship between himself and the real object 20, such as the orientation; for example, the wearer can roughly estimate how many steps are required to walk to reach the position of the cylinder.
  • the principle of VR, AR, MR and other types of head-mounted display devices is roughly to project the image of the optical machine 154 or the display screen to a certain virtual image position.
  • the virtual images corresponding to the left eye and the right eye have a certain parallax.
  • the human eye looks at an object on the virtual image surface, the eyeball will also turn and face the object, so the brain will obtain the depth of the object according to the vergence angle of the eyes.
  • the dotted cylinder and dotted cube are the vergence depths felt by the brain when looking at the corresponding object. Taking a cylinder as an example, the wearer obtains the vergence depth of the cylinder as L1 according to the vergence angle of the eyes.
  • the human eye will always focus on the virtual image surface for clear imaging, so the depth of the human eye to the virtual image surface is the focusing depth.
  • the solid-line cylinder and solid-line cube are respectively the focusing depth perceived by the brain, and an example of the focusing depth is L2.
  • the vergence depth L1 and the focus depth L2 are not equal.
• Based on the cooperation of the controller 170, the first zoom lens 166, the second zoom lens 168, the eye tracking component 172 and the like, the head-mounted display device 100 can alleviate this mismatch between vergence depth and focusing depth.
  • the depth of the wearer's gaze point can be obtained through the eye tracking component 172; that is, it can be known whether the wearer is looking at the real object 20 or the virtual content 22.
  • a virtual image can be set on the corresponding virtual image plane to improve the VAC problem.
• When the wearer is gazing at a farther object (for example, the real object 20), the angle at which the wearer's eyes converge is small. Based on the depth of the wearer's gaze point calculated by the eye tracking component 172, the optical power of the first zoom lens 166 and the second zoom lens 168 can be adjusted to set the virtual image at the first virtual image plane S101. Similarly, when the wearer is gazing at the closer virtual content 22 (i.e., the cylinder in FIG. 12), the angle at which the wearer's eyes converge is larger.
• In this case, the optical power of the first zoom lens 166 and the second zoom lens 168 may be adjusted to set the virtual image at the second virtual image plane S102. Compared with the first virtual image plane S101, the second virtual image plane S102 is closer to the wearer. Therefore, from the wearer's perspective, while gazing at either the real object 20 or the virtual content 22, the corresponding vergence depth and focus depth are consistent, so the wearer is less prone to the eye fatigue and dizziness caused by the VAC problem.
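The virtual-image-plane selection can be sketched as choosing, among the available planes, the one dioptrically closest to the measured vergence depth, so that focus depth tracks vergence depth. The discrete plane list and the diopter metric below are assumptions for illustration; the patent itself adjusts the zoom-lens powers continuously.

```python
def choose_virtual_image_plane(gaze_depth, planes):
    """Return the virtual-image distance (m) whose dioptric value is
    closest to the wearer's vergence depth, reducing the VAC.
    `planes` lists the available virtual-image distances in metres."""
    return min(planes, key=lambda p: abs(1.0 / p - 1.0 / gaze_depth))

# e.g. with a near plane at 0.5 m and a far plane at 2.0 m:
# gazing far picks the far plane, gazing near picks the near plane.
```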

Abstract

A virtual-reality fusion display device (100), comprising a first combiner (110), a second combiner (120), a first modulation assembly (130), a second modulation assembly (140), a blocking pattern plate (152) and an optical engine (154). The first combiner (110) receives first ambient light and transmits the first ambient light to the first modulation assembly (130); the first modulation assembly (130) modulates the polarization state of the first ambient light to form second ambient light and images the second ambient light to the blocking pattern plate (152); the blocking pattern plate (152) modulates the polarization state of the second ambient light according to a blocking relationship between a real object (20) and a virtual object (22), to form third ambient light; the first modulation assembly (130) modulates the polarization state of the third ambient light again to form fourth ambient light, and transmits the fourth ambient light to the second modulation assembly (140); the second modulation assembly (140) is used for transmitting the fourth ambient light and second display light to the second combiner (120); and the second combiner (120) can transmit the fourth ambient light and the second display light into eyes (30) of a wearer so as to achieve a virtual-reality fusion display effect.

Description

A display device that realizes virtual-real fusion
This application claims the priority of the Chinese patent application filed with the Chinese Patent Office on November 30, 2020, with application number 202011374209.6 and titled "Display Device for Realizing Virtual-Real Fusion", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of head-mounted display devices, and in particular to a display device that realizes virtual-real fusion.
Background
A head-mounted display device (Head Mounted Display, HMD) is a display device worn on the wearer's head, and can generally be divided into three categories: virtual reality (VR), augmented reality (AR) and mixed reality (MR). Although HMDs such as AR and MR devices enable the wearer to observe real objects and virtual content at the same time, the HMD's combiner projects the light carrying the real object and the light carrying the virtual content into the wearer's eyes simultaneously. Therefore, from the wearer's perspective, the virtual content cannot effectively occlude the real object, problems such as ghosting arise between the virtual content and the real object, the effect of virtual-real fusion cannot be achieved, and the wearer's experience is poor.
SUMMARY OF THE INVENTION
The purpose of the present application is to provide a display device that realizes virtual-real fusion, so as to solve the problem that existing head-mounted display devices cannot realize virtual-real fusion.
In order to solve the above technical problems, the present application provides a display device for realizing virtual-real fusion, including: a first combiner, a second combiner, a first modulation component, a second modulation component, an occlusion template and an optical machine. The first modulation component is located on the light-emitting side of the first combiner; the second modulation component is located on the light-emitting side of the first modulation component and on the light-incident side of the second combiner. The first combiner is configured to receive first ambient light and transmit the first ambient light to the first modulation component, where the first ambient light carries real-object information. The first modulation component is configured to modulate the polarization state of the first ambient light to form second ambient light, and to image the second ambient light onto the occlusion template. The occlusion template is used to modulate the polarization state of the second ambient light according to the occlusion relationship between the real object and the virtual object to form third ambient light. The first modulation component modulates the polarization state of the third ambient light again to form fourth ambient light, and transmits the fourth ambient light to the second modulation component.
The second modulation component is used to transmit the fourth ambient light to the second combiner, and is also used to modulate the polarization state of the first display light to form a second display light and transmit the second display light to the second combiner; wherein the first display light is generated by the optical machine and carries the virtual-object information, and the polarization state of the second display light is different from that of the fourth ambient light. The second combiner is used to transmit the fourth ambient light and the second display light so that they are incident on the wearer's eyes. Based on this, while the wearer sees the virtual content and the real object, the virtual content and the real object can occlude each other, achieving the effect of virtual-real fusion and providing the wearer with a more natural and realistic visual experience.
In some embodiments, the first modulation component includes a first lens; the first lens faces the light-emitting side of the first combiner and is used for imaging the first ambient light onto the occlusion template, so that the occlusion template can modulate the polarization state of the light.
In some embodiments, the first modulation component further includes a first polarizer, a first polarizing beam splitter and a half-wave plate. The first polarizer and the first polarizing beam splitter are located in sequence on the light-emitting side of the first lens, away from the first combiner; the half-wave plate is located on the light-emitting side of the first polarizing beam splitter; the occlusion template is located on the back focal plane of the first lens; and the angle between the fast-axis direction of the half-wave plate and the polarization direction of the third ambient light is 45°. The first polarizer is used to modulate the polarization state of the first ambient light to form the second ambient light; the polarizing beam splitter is used to reflect the second ambient light to the occlusion template and to transmit the third ambient light; and the half-wave plate is used to modulate the polarization state of the third ambient light to form the fourth ambient light.
In some embodiments, the second modulation component includes a second polarizer, a second polarizing beam splitter and a second lens. The second lens faces the light-incident side of the second combiner; the second polarizing beam splitter is located on the light-incident side of the second lens and faces the half-wave plate; the optical machine and the occlusion template are both located on the focal plane of the second lens. The second polarizer is used to modulate the polarization state of the first display light to form the second display light. The second polarizing beam splitter is used to reflect the fourth ambient light to the second lens and to transmit the second display light to the second lens. The second lens is used for collimating the fourth ambient light and the second display light.
In some embodiments, the display device further includes a first zoom lens located on the light-entry side of the first combiner; the first zoom lens collimates the first ambient light so that it can propagate within the first combiner.
In some embodiments, the display device further includes a second zoom lens located on the light-exit side of the second combiner; the second zoom lens collimates the fourth ambient light and the second display light transmitted through the second combiner.
In some embodiments, the display device further includes an eye-tracking component, which captures the gaze direction of the wearer's eyes.
In some embodiments, the display device further includes a controller electrically connected to the first zoom lens, the second zoom lens, and the eye-tracking component. The controller adjusts the optical power of the first zoom lens and the second zoom lens according to the wearer's gaze direction, so that the presentation of real objects and virtual objects can be tuned adaptively.
In some embodiments, the depth of the wearer's fixation point is derived from the wearer's gaze direction, and the controller drives the first zoom lens and the second zoom lens so that the virtual images corresponding to the real object and the virtual object lie on the virtual image plane at the fixation point, thereby mitigating the vergence-accommodation conflict (VAC).
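Deriving the fixation depth from the tracked gaze directions can be sketched as follows. This is an illustrative sketch only: the function name, the small-angle triangulation, and the assumption that each angle is the eye's inward rotation from straight ahead are not specified in the application.

```python
import math

def fixation_depth(ipd_m, left_angle_rad, right_angle_rad):
    """Estimate the fixation-point depth from binocular gaze directions.

    Illustrative assumption: each angle is that eye's inward rotation from
    straight ahead, so the two lines of sight converge on the fixation point.
    """
    # Total convergence (vergence) angle between the two lines of sight.
    vergence = left_angle_rad + right_angle_rad
    if vergence <= 0:
        return float("inf")  # parallel gaze: fixation effectively at infinity
    # Small-angle triangulation: depth ~ interpupillary distance / vergence.
    return ipd_m / vergence

# Example: a 63 mm IPD with ~1.8 deg of inward rotation per eye
# corresponds to a fixation depth of roughly 1 m.
depth = fixation_depth(0.063, math.radians(1.8), math.radians(1.8))
```

The controller could then set the zoom-lens powers so that the virtual image plane lands at this depth.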
In some embodiments, the display device further includes a depth detector arranged on the side of the first combiner away from the second combiner; the depth detector acquires depth information of the real environment around the wearer. Based on the depth information and the virtual content, the occlusion template refreshes the third ambient light in real time to adaptively change the occlusion relationship between the real object and the virtual object. Based on the depth information, the light engine likewise refreshes in real time the virtual-object information carried by the first display light, to adaptively change the occlusion relationship between the virtual object and the real object.
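The per-pixel decision behind this refresh, blocking the real scene only where a virtual object sits in front of the real surface, can be illustrated with a short sketch. The data layout (row-major depth maps, `None` meaning "no virtual content at this pixel") and the state labels are assumptions for illustration, not part of the application.

```python
def refresh_occlusion_mask(real_depth, virtual_depth):
    """Compute a per-pixel occlusion mask from real and virtual depth maps.

    A pixel is set to BLOCK (real light removed by the occlusion template)
    only where virtual content exists and is nearer than the real surface;
    otherwise it is PASS and the real scene remains visible.
    """
    mask = []
    for row_real, row_virtual in zip(real_depth, virtual_depth):
        mask_row = []
        for d_real, d_virtual in zip(row_real, row_virtual):
            occlude = d_virtual is not None and d_virtual < d_real
            mask_row.append("BLOCK" if occlude else "PASS")
        mask.append(mask_row)
    return mask

# A virtual object at 1 m in front of a real wall at 2 m occludes the wall
# there; where there is no virtual content, or the virtual object lies
# behind the wall, the real scene passes through unchanged.
real = [[2.0, 2.0], [2.0, 2.0]]
virtual = [[1.0, None], [None, 3.0]]
mask = refresh_occlusion_mask(real, virtual)
```

Re-running this comparison whenever the depth detector or the rendered virtual content updates yields the real-time refresh described above.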
In some embodiments, based on the wearer's degree of myopia, the controller adjusts the optical power of the first zoom lens and the second zoom lens according to the following formula.
Figure PCTCN2021133765-appb-000001
Here P1 is the optical power of the first zoom lens, P2 is the optical power of the second zoom lens, M is the wearer's degree of myopia, and V is the depth of the fixation point derived from the gaze direction of the wearer's eyes. On this basis, the display device can adaptively correct the wearer's myopia.
In some embodiments, based on the depth of the fixation point derived from the wearer's gaze direction, the occlusion template additionally applies defocus blur to the incident second ambient light; and/or the light engine applies defocus blur to the first display light. This improves the virtual-real blending between real objects and virtual objects and thus the wearer's visual experience.
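One common way to drive such depth-dependent blur is to scale the blur radius with the dioptric distance between a pixel's depth and the fixation depth, mimicking retinal defocus. The following sketch is an assumption for illustration; the gain and clamp values are made-up tuning parameters and this specific model is not stated in the application.

```python
def defocus_blur_radius(pixel_depth_m, fixation_depth_m, gain=1.5, max_radius=8.0):
    """Blur radius (in pixels) for content away from the fixation plane.

    Illustrative model: blur grows with the dioptric error
    |1/d - 1/d_fix| and is clamped to a maximum kernel radius.
    """
    dioptric_error = abs(1.0 / pixel_depth_m - 1.0 / fixation_depth_m)
    return min(gain * dioptric_error, max_radius)

# Content on the fixation plane stays sharp; content far off it is blurred.
sharp = defocus_blur_radius(1.0, 1.0)      # on-plane: radius 0
blurred = defocus_blur_radius(0.25, 1.0)   # 0.75 m nearer: visibly blurred
```

The occlusion template and the light engine could each apply a kernel of this radius to their respective images before display.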
In some embodiments, the display device further includes an electrochromic sheet sandwiched between the first combiner and the second combiner, which blocks light transmitted through the first combiner when energized. Alternatively, the display device includes a blocking sheet sandwiched between the first combiner and the second combiner, which blocks light transmitted through the first combiner.
In some embodiments, the display device is a head-mounted display device or a head-up display device. A head-up display device can be used in the smart cockpit of a car; for example, it can work with the front windshield to present relevant content on the windshield.
In some embodiments, the angle between the fast axis of the half-wave plate and the polarization direction of the third ambient light may also be approximately 45°.
In some embodiments, the first lens is a single lens, or an assembly of two or more lenses.
In some embodiments, the second lens is likewise a single lens, or an assembly of two or more lenses.
In some embodiments, the first combiner is a diffractive optical waveguide, a reflective optical waveguide, or a pinhole mirror; the second combiner is likewise a diffractive optical waveguide, a reflective optical waveguide, or a pinhole mirror.
In some embodiments, the occlusion template is an LCoS (liquid crystal on silicon) panel or a DLP (digital light processing) device.
Through the cooperation of the first modulation component, the occlusion template, the second modulation component, and related structures, the present application enables the wearer to see virtual content and real objects blended with correct mutual occlusion, providing a more natural and realistic visual experience.
Description of Drawings
FIG. 1 is a schematic diagram of a conventional AR display device.
FIG. 2 is a schematic diagram of a real object and virtual content in a first positional relationship.
FIG. 3 is a schematic diagram of a real object and virtual content in a second positional relationship.
FIG. 4 is a schematic diagram of a real object and virtual content in a third positional relationship.
FIG. 5 is a block diagram of a head-mounted display device according to an embodiment of the present application.
FIG. 6 is a schematic structural diagram of a head-mounted display device according to an embodiment of the present application.
FIG. 7 is a schematic structural diagram of a head-mounted display device according to an embodiment of the present application.
FIG. 8 is a schematic diagram of the occlusion template, the virtual content, and the scene seen by the wearer when the wearer fixates on a real object.
FIG. 9 is a schematic diagram of the occlusion template, the virtual content, and the scene seen by the wearer when the wearer fixates on a virtual object.
FIG. 10 is a schematic diagram of a scene in which the wearer observes a real object.
FIG. 11 is a schematic diagram of a scene in which the wearer observes real objects and virtual content.
FIG. 12 is a schematic diagram of a scene in which the wearer observes real objects and virtual content through the head-mounted display device of an embodiment.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
Referring to FIG. 1, take an AR head-mounted display device 10 as an example. It generally includes a light engine 12, a combiner 14, an in-coupling grating 16, and an out-coupling grating 18. Light reflected by a real object 20 in the real world passes through the out-coupling grating 18 and the combiner 14 in sequence and is incident on the wearer's eye 30. Meanwhile, the light engine 12 outputs light carrying virtual content 22; this light is coupled into the combiner 14 through the in-coupling grating 16 and propagates within the combiner 14 by total internal reflection toward the out-coupling grating 18. It is then coupled out by the out-coupling grating 18 and exits toward the wearer's eye 30, so that the wearer sees virtual content 22 that does not exist in the real world. On this basis, the wearer can observe the real object 20 through the head-mounted display device 10 while also seeing the virtual content 22, a visual experience different from that offered by electronic devices such as mobile phones and televisions.
It should be understood that, according to the position of the real object 20 in the three-dimensional model and the occlusion relationship between the real object 20 and the virtual content 22, the virtual content 22 can be processed by modeling, rendering, and the like so that it is presented adaptively. For example, the virtual content 22 may be partly occluded by the real object 20, or there may be no occlusion relationship between them. The virtual content 22 may include a virtual object and a corresponding virtual three-dimensional scene, or only one of the two; no limitation is imposed here. For example, the virtual content 22 may be a whale floating in the air, a hot-air balloon, a building sign, and so on. However, because the real object 20 exists in the real world, the light carrying it passes directly through the head-mounted display device 10 into the wearer's eye 30. That is, the light carrying the virtual content 22 and the light carrying the real object 20 are independent of each other, and the virtual content 22 generated by the light engine 12 cannot occlude the real object 20.
For ease of understanding, unless otherwise specified, the embodiments use a cube as an example of the real object 20 and a cylinder as an example of the virtual content 22.
FIG. 2 is a schematic diagram of a real object and virtual content in the first positional relationship. As illustrated in FIG. 2, with the head-mounted display device 10, the real object 20 and the virtual content 22 do not overlap in this relationship, so the wearer can see both relatively clearly in their respective directions.
FIG. 3 is a schematic diagram of a real object and virtual content in the second positional relationship. As illustrated in FIG. 3, with the head-mounted display device 10, when the virtual content 22 lies behind the real object 20, the two overlap; but because the virtual content 22 is generated by the light engine 12, it can be rendered so that the portion behind the real object 20 is not displayed. To the wearer, the virtual content 22 therefore appears, relatively clearly, to be behind the real object 20.
FIG. 4 is a schematic diagram of a real object and virtual content in the third positional relationship. As illustrated in FIG. 4, with the head-mounted display device 10, when the virtual content 22 lies in front of the real object 20, it should ideally occlude part of the real object 20; that is, the wearer should not see the occluded portion of the real object 20 through the virtual content 22. However, because the head-mounted display device 10 can only render the virtual content 22 (the cylinder) and cannot process the real object 20 (the cube), the virtual content 22 in front cannot properly occlude the real object 20. Ghosting and similar artifacts arise between the virtual content 22 and the real object 20, the scene looks unnatural, and the wearer's experience suffers.
To address the above problems, embodiments of the present application provide an optical see-through display device with virtual-real fusion: while the wearer sees both the virtual content and the real object, each can occlude the other, achieving a blended effect and providing a more natural, realistic visual experience.
It should be understood that the embodiments of the present application mainly use a head-mounted display device to illustrate the optical see-through display device, but are not limited thereto. In some other embodiments, the optical see-through display device may also be a head-up display (HUD) device. A head-up display device can be used in the smart cockpit of a car; for example, it can work with the front windshield to present relevant content on the windshield.
To distinguish real objects from virtual content, the embodiments of the present application define ambient light and display light. Ambient light is light reflected by a real object and carries real-object information; it includes, for example, the first ambient light, the second ambient light, and so on. Display light is light carrying virtual content, which can be generated by the light engine 12; it includes, for example, the first display light, the second display light, and so on.
Referring to FIG. 5 and FIG. 6 together, an embodiment of the present application provides a head-mounted display device 100 that realizes virtual-real fusion. The head-mounted display device 100 includes a first combiner 110 and a second combiner 120, each of which can guide light by total internal reflection. The first combiner 110 mainly transmits ambient light, while the second combiner 120 transmits both ambient light and display light. It should be understood that, as illustrated in FIG. 5, the cooperation of the first combiner 110 and the second combiner 120 lengthens the optical path over which the ambient light travels. During this propagation, the ambient light can be conveniently polarized, masked, and otherwise processed by the relevant structures, changing the visual information of the real object 20 that it carries and thereby adaptively adjusting the mutual occlusion relationship between the real object 20 and the virtual content 22. From the wearer's point of view, the real object 20 and the virtual content 22 can occlude each other, and the visual presentation of the head-mounted device is improved.
In some embodiments, the first combiner and the second combiner are spaced apart, but this is not limiting; in some other embodiments they may also be stacked.
Referring again to FIG. 5 and FIG. 6, in some embodiments the head-mounted display device 100 further includes a first modulation component 130, a second modulation component 140, an occlusion template 152, and a light engine 154, all arranged on the same side of the first combiner 110, close to the second combiner 120. Because few structures sit on the side of the first combiner 110 away from the second combiner 120, the thickness of the head-mounted display device 100 can be well controlled and its integration improved. The device can also be made compact, which is convenient for the wearer to wear or carry.
In some embodiments, the first modulation component 130 is located on the light-exit side of the first combiner 110; it transmits and modulates the first ambient light emitted by the first combiner 110 to form the second ambient light. The occlusion template 152 modulates and reflects the second ambient light delivered by the first modulation component 130 to form the third ambient light. The first modulation component 130 then transmits and modulates the third ambient light again to form the fourth ambient light, which is passed to the second modulation component 140 for subsequent processing.
In some embodiments, the occlusion template 152 is arranged on the side of the first modulation component 130 away from the second combiner 120.
In some embodiments, the first modulation component 130 images and polarizes the first ambient light and relays the image of the real object 20 carried by the resulting second ambient light onto the occlusion template 152; that is, the occlusion template 152 is optically conjugate with the real object 20. On this basis, the occlusion template 152 can modulate and reflect the second ambient light to adjust the visual information of the real object 20 that it carries.
In some embodiments, the second modulation component 140 is arranged on the side of the first modulation component 130 away from the occlusion template 152. Taking the second combiner 120 as a reference, the second modulation component 140 is located on the light-entry side of the second combiner 120, away from the first combiner 110. The second modulation component 140 at least transmits and collimates the fourth ambient light and delivers it to the second combiner 120.
In some embodiments, the light engine 154 is arranged on the side of the second modulation component 140 away from the second combiner 120 and emits the first display light carrying the virtual content 22. The second modulation component 140 also receives the first display light, polarizes and collimates it to form the second display light, and delivers the second display light to the second combiner 120. After the fourth ambient light and the second display light converge in the second modulation component 140, they enter the second combiner 120, propagate within it, and reach the wearer's eye 30, producing the visual effect of mutual occlusion between the real object 20 and the virtual content 22.
Referring to FIG. 7, in some embodiments the first combiner 110 includes a first in-coupling grating 112, a first substrate 114, and a first out-coupling grating 116. The two gratings are spaced apart, both on the surface of the first substrate 114. After being diffracted by the first in-coupling grating 112, the first ambient light is coupled into the first substrate 114, where one of its diffraction orders undergoes total internal reflection and propagates toward the first out-coupling grating 116. The first ambient light transmitted through the first substrate 114 is then diffracted again by the first out-coupling grating 116, coupling it out of the first combiner 110 and into the first modulation component 130. It should be understood that the light-entry side of the first combiner 110 corresponds to the first in-coupling grating 112 (the side where light enters the first combiner 110 in FIG. 6 and FIG. 7), and the light-exit side corresponds to the first out-coupling grating 116 (the side where light exits).
In some embodiments, the first combiner 110 may be, for example, a diffractive optical waveguide, a reflective optical waveguide, or a pinhole mirror (pin-mirror); no limitation is imposed.
In some embodiments, as illustrated in FIG. 7, the first in-coupling grating 112 and the first out-coupling grating 116 are both reflective diffraction gratings, but this is not limiting; in some other embodiments, at least one of them may be a transmissive diffraction grating. It should be understood that, depending on the grating types, the light-entry side and the light-exit side of the first combiner 110 may be the same side or different sides. For example, if the first in-coupling grating 112 is a reflective diffraction grating, the light-entry side of the first combiner 110 is the side away from that grating; if it is a transmissive diffraction grating, the light-entry side is the side on which the grating is disposed.
Referring to FIG. 7, in some embodiments the first modulation component 130 includes a first lens 132 facing the first out-coupling grating 116 of the first combiner 110, with the occlusion template 152 located on the back focal plane (also called the second focal plane) of the first lens 132. After the first ambient light is coupled out of the first combiner 110, the first lens 132 images the real object 20 carried by that light onto the occlusion template 152. Because the occlusion template 152 must modulate and reflect the first ambient light (that is, between the first modulation component 130 and the second modulation component 140, the light carrying the real-object information undergoes a reflection), the first modulation component 130 further includes, to cooperate with the occlusion template 152, a first polarizer 134 and a first polarization beam splitter (PBS) 136. These are located in sequence on the light-exit side of the first lens 132, away from the first combiner 110; the occlusion template 152 is arranged on the side of the first PBS 136 away from the second modulation component 140. The first polarizer 134 modulates the first ambient light into S-polarized light, i.e., the second ambient light, which the first PBS 136 then reflects onto the occlusion template 152.
In some embodiments, because the occlusion template 152 lies on the back focal plane of the first lens 132 in the optical path, the occlusion template 152 is conjugate with the real object 20. Through the modulation of the first ambient light by the first modulation component 130, the second ambient light of a specific polarization state is imaged onto the occlusion template 152, and the image formed corresponds to the real object 20.
In some embodiments, the occlusion template 152 may be, for example, a liquid crystal on silicon (LCoS) panel or a digital light processing (DLP) device, used to modulate the ambient light; no limitation is imposed. On this basis, the occlusion template 152 can apply pixel-level polarization phase modulation to the second ambient light to change the visual information of the real object 20 that it carries. For example, the controller of the head-mounted display device can obtain the depth, position, and pose of the real object 20 and of the virtual content 22 output by the light engine; by comparing this information for the virtual content 22 and the real object 20, it derives the occlusion relationship between them, as described in detail below.
Thus, through the modulation of the occlusion template 152, the part of the image of the real object 20 that the virtual content 22 should cover can be blocked. When the subsequent fourth ambient light reaches the wearer's eye 30, the real object 20 that the wearer sees based on it differs from what the wearer would see after removing the head-mounted display device 100: the portion hidden by the virtual content 22 is missing. As a result, when the fourth ambient light and the second display light enter the wearer's eye 30, ghosting and similar problems between the real object 20 and the virtual content 22 are largely avoided, achieving fusion between the real object 20 and the virtual content 22.
In some embodiments, taking an LCoS device as the occlusion template 152 as an example, the LCoS can modulate the incident second ambient light and then reflect the modulated light as the third ambient light.
It should be understood that the LCoS can apply binary phase modulation to the second ambient light; that is, each pixel on the LCoS has two states, one that changes the polarization state of the incident light and one that leaves it unchanged. For ease of understanding, these pixel states are defined as the first state and the second state. Under modulation by a pixel in the first state, the corresponding portion of the second ambient light is reflected in the P polarization state; given the optical characteristics of the first polarizing beam splitter 136, this P-polarized light can pass through the first polarizing beam splitter 136 for subsequent processing. Under modulation by a pixel in the second state, the corresponding portion of the second ambient light is reflected in the S polarization state; given the optical characteristics of the first polarizing beam splitter 136, this S-polarized light cannot pass through it. That is, the corresponding part of the image of the real object 20 is blocked and cannot reach the wearer's eyes 30.
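The pass/block behaviour of the two pixel states can be sketched as a simple per-pixel mask. This is an illustrative simulation only; the state names and the intensity model are assumptions, not elements of the patent:

```python
# Illustrative simulation of the binary occlusion behaviour: a pixel in the
# "first" state reflects P-polarised light, which passes the polarizing beam
# splitter; a pixel in the "second" state reflects S-polarised light, which
# is blocked. State names and the intensity model are assumptions.

def modulate(ambient, mask):
    """ambient: 2-D list of intensities; mask: 2-D list of 'first'/'second'."""
    return [[a if m == "first" else 0.0 for a, m in zip(row_a, row_m)]
            for row_a, row_m in zip(ambient, mask)]

ambient = [[1.0, 1.0], [1.0, 1.0]]
mask = [["first", "second"], ["second", "first"]]
print(modulate(ambient, mask))  # → [[1.0, 0.0], [0.0, 1.0]]
```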
In some embodiments, depending on the usage scenario, the desired presentation effect and other requirements, the pixels on the LCoS can switch between the first state and the second state to modulate the second ambient light.
In some embodiments, the states of the pixels on the LCoS change in real time, so as to refresh the fusion effect between the real object 20 and the virtual content 22 in real time.
Referring to FIG. 7, in some embodiments the first modulation component 130 may further include a half-wave plate 138. The half-wave plate 138 is located on the light-exit side of the first polarizing beam splitter 136, away from the occlusion template 152. When the third ambient light passes through the first polarizing beam splitter 136 again and reaches the half-wave plate 138, the half-wave plate 138 can convert the polarization of the third ambient light to the S state, forming the fourth ambient light, which can then cooperate with the second modulation component 140 and be transmitted to the second combiner 120.
In some embodiments, the angle between the fast-axis direction of the half-wave plate 138 and the polarization direction of the incident third ambient light is 45°, or approximately 45°. On this basis, after the incident P-polarized third ambient light exits the half-wave plate 138, its polarization is converted to the S state, forming the fourth ambient light, which can subsequently be transmitted through the second modulation component 140.
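The polarization conversion performed by the half-wave plate can be checked with standard Jones calculus. The sketch below assumes the usual Jones-matrix form of an ideal half-wave plate and represents P and S as horizontal and vertical Jones vectors; it is an illustration, not part of the disclosed device:

```python
import math

# Jones-calculus check: an ideal half-wave plate with its fast axis at 45°
# to the incident polarization maps P (horizontal) to S (vertical).

def half_wave_plate(theta):
    """Jones matrix of an ideal half-wave plate, fast axis at angle theta."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[c, s], [s, -c]]

def apply_jones(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

p_state = [1.0, 0.0]                                   # P polarization
out = apply_jones(half_wave_plate(math.pi / 4), p_state)
print(out)  # ≈ [0.0, 1.0], i.e. S polarization
```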
In some embodiments, as illustrated in FIG. 7, the first lens 132 is shown as a single lens. In other embodiments, the first lens 132 may instead be an assembly of two or more lenses; no limitation is placed on this.
Referring to FIG. 7, in some embodiments the second modulation component 140 may include a second polarizer 142, a second polarizing beam splitter 144 and a second lens 146 arranged in sequence. The second polarizer 142 is disposed between the light engine 154 and the second polarizing beam splitter 144; it can convert the first display light emitted by the light engine 154 into P-polarized light, forming the second display light. It should be understood that the second polarizing beam splitter 144 faces the half-wave plate 138 and can reflect the S-polarized fourth ambient light onto the second lens 146. In addition, the second polarizing beam splitter 144 can transmit the P-polarized second display light onto the second lens 146; on this basis, the fourth ambient light and the second display light are combined.
In some embodiments, the second lens 146 faces the second polarizing beam splitter 144 and is located between the second polarizing beam splitter 144 and the second combiner 120. The second lens 146 faces the light-entry side of the second combiner 120 and can collimate the combined fourth ambient light and second display light. It should be understood that, along the optical path, both the occlusion template 152 and the light engine 154 lie on the focal plane of the second lens 146; therefore, after being collimated by the second lens 146, the fourth ambient light and the second display light can propagate within the second combiner 120 and reach the wearer's eyes 30.
Taking mutual occlusion between the real object 20 and the virtual content 22 as an example: through modulation by the occlusion template 152, the second ambient light can be adaptively blocked in exactly the part where the real object 20 needs to be hidden by the virtual content 22. Meanwhile, when rendering the virtual content 22, the light engine 154 can adaptively omit the other part, namely the part of the virtual content 22 hidden by the real object 20. On this basis, much like the interlocking pieces of a jigsaw puzzle, from the wearer's perspective the real object 20 and the virtual content 22 fit together exactly, with no or little overlap between them, achieving a good visual effect of virtual-real fusion.
In some embodiments, the light engine 154 may be based on, for example, liquid crystal on silicon (LCoS), digital light processing (DLP), micro organic light-emitting diode (Micro-OLED), micro light-emitting diode (Micro-LED) or laser beam scanner (LBS) technology; no limitation is placed on this.
In some embodiments, as illustrated in FIG. 7 and similarly to the first lens 132, the second lens 146 is shown as a single lens. In other embodiments, the second lens 146 may instead be an assembly of two or more lenses; no limitation is placed on this.
Referring to FIG. 7, in some embodiments the second combiner 120 may include a second in-coupling grating 122, a second substrate 124 and a second out-coupling grating 126. The second in-coupling grating 122 and the second out-coupling grating 126 are spaced apart and are both disposed on the surface of the second substrate 124. As with the first combiner 110, the combined fourth ambient light and second display light are diffracted by the second in-coupling grating 122 and coupled into the second substrate 124. Within the second substrate 124, a certain diffraction order of the fourth ambient light and a certain diffraction order of the second display light undergo total internal reflection, so that both propagate toward the second out-coupling grating 126. The light transmitted through the second substrate 124 is diffracted again by the second out-coupling grating 126 and thereby coupled out of the second combiner 120. The fourth ambient light and second display light coupled out of the second combiner 120 can then reach the wearer's eyes 30. On this basis, the wearer sees the real object 20 and the virtual content 22 fused together, improving the visual experience.
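The in-coupling and total-internal-reflection behaviour described above can be illustrated with the standard grating equation. The wavelength, grating pitch and substrate index below are made-up example values, not parameters from the patent:

```python
import math

# Grating equation for first-order in-coupling into the waveguide substrate,
# followed by a check that the diffracted ray exceeds the critical angle and
# is therefore guided by total internal reflection. Example values only.

def diffraction_angle(wavelength, pitch, n_sub, order=1, incidence_deg=0.0):
    """Angle inside the substrate: n_sub*sin(theta) = sin(theta_in) + m*lambda/pitch."""
    s = (math.sin(math.radians(incidence_deg)) + order * wavelength / pitch) / n_sub
    if abs(s) > 1.0:
        return None  # evanescent: this order does not propagate
    return math.degrees(math.asin(s))

def is_guided(theta_deg, n_sub, n_out=1.0):
    """True when theta exceeds the critical angle of the substrate/air interface."""
    if theta_deg is None:
        return False
    return theta_deg > math.degrees(math.asin(n_out / n_sub))

theta = diffraction_angle(wavelength=532e-9, pitch=420e-9, n_sub=1.8)
print(round(theta, 1), is_guided(theta, 1.8))  # ≈ 44.7 True
```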
It should be understood that the light-entry side of the second combiner 120 may be the side corresponding to the second in-coupling grating 122, i.e. the side where light enters the second combiner 120 in FIG. 6 and FIG. 7; the light-exit side of the second combiner 120 may be the side corresponding to the second out-coupling grating 126, i.e. the side where light leaves the second combiner 120 in FIG. 6 and FIG. 7.
In some embodiments, similarly to the first combiner 110, the second combiner 120 may be, for example, a diffractive waveguide, a reflective waveguide or a pinhole mirror; no limitation is placed on this.
In some embodiments, as illustrated in FIG. 7 and similarly to the first in-coupling grating 112 and the first out-coupling grating 116, the second in-coupling grating 122 and the second out-coupling grating 126 are both reflective diffraction gratings, but this is not limiting. In other embodiments, at least one of the second in-coupling grating 122 and the second out-coupling grating 126 may instead be a transmissive diffraction grating. It should be understood that, depending on the types of the second in-coupling grating 122 and the second out-coupling grating 126, the light-entry side and light-exit side of the second combiner 120 may be the same or different. For example, if the second in-coupling grating 122 is a reflective diffraction grating, the light-entry side of the second combiner 120 is the side away from the second in-coupling grating 122; if the second in-coupling grating 122 is a transmissive diffraction grating, the light-entry side of the second combiner 120 is the side on which the second in-coupling grating 122 is disposed.
Referring to FIG. 7, in some embodiments the head-mounted display device 100 may further include an electrochromic sheet 162 sandwiched between the first combiner 110 and the second combiner 120.
It should be understood that the head-mounted display device 100 of the embodiments of this application needs to apply occlusion processing to the light carrying real-object information. That is, when the head-mounted display device 100 operates normally, the first ambient light does not reach the wearer's eyes 30 directly, and the wearer does not see the real object 20 directly. To meet this requirement, the electrochromic sheet 162, when energized, can block the portion of the first ambient light that passes through the first combiner 110, preventing that light from reaching the wearer's eyes 30 directly and thereby avoiding problems such as ghosting between the real object 20 and the virtual content 22 or loss of the occlusion effect on the real object 20.
In some embodiments, when the electrochromic sheet 162 is not energized, it is transparent; in this state, the wearer can look through the second combiner 120, the electrochromic sheet 162 and the first combiner 110 and see the real object 20 directly.
In some other embodiments, instead of the electrochromic sheet 162, the head-mounted display device 100 may use a dark-tinted blocking sheet, which can likewise block the portion of the first ambient light that passes through the first combiner.
Referring to FIG. 7, in some embodiments the head-mounted display device 100 may further include a depth detector 164. The depth detector 164 may be disposed on one side of the first combiner 110, away from the second combiner 120. It should be understood that the depth detector 164 is used to acquire depth information of the real environment around the wearer. The depth detector 164 may be, for example, a device based on a stereo camera pair, structured light or time-of-flight (ToF) sensing.
Based on the depth information of the real environment around the wearer and a simultaneous localization and mapping (SLAM) system, a three-dimensional model of that environment can be reconstructed. Depending on the usage scenario, the virtual content 22 can also be fused adaptively with the real object 20 to achieve a better visual presentation.
Referring to FIG. 7, in some embodiments the head-mounted display device 100 may further include a first zoom lens 166. The first zoom lens 166 is disposed on one side of the first combiner 110, away from the second combiner 120. The first zoom lens 166 collimates the first ambient light so that it can propagate within the first combiner 110. It should be understood that the collimated first ambient light can be coupled into the first substrate 114 through the first in-coupling grating 112 and propagate toward the first out-coupling grating 116.
Referring to FIG. 7, in some embodiments the head-mounted display device 100 may further include a second zoom lens 168. The second zoom lens 168 is disposed on one side of the second combiner 120, away from the first combiner 110; that is, the two combiners (110, 120) are sandwiched between the first zoom lens 166 and the second zoom lens 168. It should be understood that after the combined fourth ambient light and second display light are coupled out through the second out-coupling grating 126, the second zoom lens 168 can collimate them so that the collimated fourth ambient light and second display light reach the wearer's eyes 30.
In some embodiments, the first zoom lens 166 is shown as a convex lens and the second zoom lens 168 as a concave lens. Each of them may be, for example, a liquid crystal zoom lens, a liquid lens, an Alvarez lens or another device capable of real-time focal-length adjustment.
In some other embodiments, both the first zoom lens 166 and the second zoom lens 168 can be replaced with fixed-focal-length lenses, for example a fixed convex lens in place of the first zoom lens 166 and a fixed concave lens in place of the second zoom lens 168; no limitation is placed on this.
Referring to FIG. 7, in some embodiments the head-mounted display device 100 may further include a controller 170 and an eye-tracking component 172. The controller 170 is electrically connected to the first zoom lens 166, the second zoom lens 168 and the eye-tracking component 172. It should be understood that the eye-tracking component can capture the gaze direction of the wearer's eyes 30; from the intersection of the two eyes' lines of sight, the depth of the wearer's fixation point can be obtained. Based on this depth, the controller 170 can adjust the optical power of the first zoom lens 166 and the second zoom lens 168 to adaptively tune the presentation of the real object 20 and the virtual content 22.
In some embodiments, as shown in FIG. 7, there is a single controller 170, but this is not limiting. In other embodiments there may be two controllers 170, one controlling the first zoom lens 166 and the other the second zoom lens 168.
Referring to FIG. 7, in some embodiments the eye-tracking component 172 may include a near-infrared emitter 172a and a near-infrared receiver 172b. The near-infrared emitter 172a emits near-infrared light, and the near-infrared receiver 172b receives the near-infrared light reflected from the wearer's eyes 30. Using the pupil-corneal reflection method, the gaze direction of each of the wearer's eyes can be obtained, and from the intersection of the two gaze directions the depth of the wearer's fixation point can be derived.
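The derivation of the fixation depth from the intersection of the two gaze directions can be sketched in a simplified two-dimensional model. The function name and the geometry (point eyes, ray gazes) are illustrative assumptions, not details of the patented tracker:

```python
# Simplified 2-D model: each eye is a point, each gaze a ray, and the
# fixation depth is the z-coordinate where the two rays cross.

def fixation_depth(ipd, left_dir, right_dir):
    """ipd: interpupillary distance (m); *_dir: (x, z) gaze vectors.
    Eyes sit at (-ipd/2, 0) and (+ipd/2, 0)."""
    lx, lz = left_dir
    rx, rz = right_dir
    denom = lx * rz - lz * rx
    if denom == 0:
        return float("inf")  # parallel gaze: fixation at optical infinity
    t = ipd * rz / denom     # ray parameter of the crossing, from equating the rays
    return t * lz

# Eyes converging on a point 0.5 m straight ahead:
print(fixation_depth(0.064, left_dir=(0.032, 0.5), right_dir=(-0.032, 0.5)))  # → 0.5
```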
It should be understood that, with the depth detector 164 acquiring depth information of the real environment around the wearer, the SLAM system can reconstruct a three-dimensional model of that environment. Here, the real environment can be understood as a portion of the real world, generally updated in real time according to the wearer's position and pose.
In parallel, the mutual occlusion relationship between the virtual object and the real object 20 can be obtained from the virtual three-dimensional scene input to the controller 170. The virtual three-dimensional scene includes the virtual objects together with information such as their position, pose, size and color within the scene.
Based on the three-dimensional model of the real environment and the virtual three-dimensional scene, the mutual occlusion relationship between the virtual object and the real object 20 can be obtained by comparing their position, pose, depth, size and other information. For example, by comparing position and depth, it may be determined that the virtual object lies behind the real object, in which case the virtual object is partially hidden by the real object. Conversely, the comparison may show that the virtual object lies in front of the real object, in which case the real object is partially hidden by the virtual object. As a further example, by comparing position, pose, size and depth, it may be determined that the virtual and real objects sit at roughly the same depth but in different directions, in which case they are spaced apart and neither occludes the other.
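The depth comparison described above can be sketched as a per-pixel test between a depth map of the real environment and a rendered depth map of the virtual object; the pixels where the virtual surface is nearer are the ones the occlusion template would block. The data layout and function name are illustrative assumptions:

```python
# Per-pixel occlusion test: compare a depth map of the real environment with
# a rendered depth map of the virtual object; where the virtual surface is
# nearer, the virtual content hides the real object at that pixel.

INF = float("inf")  # "no virtual surface at this pixel"

def occlusion_mask(real_depth, virtual_depth):
    """True where the virtual object is in front of the real object."""
    return [[v < r for r, v in zip(row_r, row_v)]
            for row_r, row_v in zip(real_depth, virtual_depth)]

real = [[2.0, 2.0], [2.0, 2.0]]    # real object 2 m away everywhere
virt = [[1.0, INF], [INF, 3.0]]    # virtual object covers two pixels
print(occlusion_mask(real, virt))  # → [[True, False], [False, False]]
```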
It should be understood that, according to how the real object 20 is occluded by the virtual object, the occlusion template 152 can modulate the second ambient light so that, in the subsequent optical path, the part of the image of the real object 20 hidden by the virtual object is intercepted; equivalently, after modulation by the occlusion template 152, only the part of the image of the real object 20 not hidden by the virtual object is emitted.
From the virtual three-dimensional scene, images are generated for the wearer's left and right eyes respectively. It should be understood that the wearer generally perceives the depth of a fixated object from the disparity between the left and right eyes' viewpoints. Accordingly, in the virtual three-dimensional scene, a virtual camera can be placed at each position corresponding to one of the wearer's eyes; the optical axis of each virtual camera coincides with the gaze direction of the corresponding eye 30, and the position and size of its entrance pupil match the eye's pupil. The images obtained by these virtual cameras photographing the virtual three-dimensional scene constitute the original virtual content. The positions of the wearer's eyes can be determined from the wearer's position and pose, and the gaze directions from the eye-tracking component 172.
From the occlusion relationship between the virtual object and the real object 20, the part of the virtual object hidden by the real object 20 can be determined; removing that part from the original virtual content yields the virtual content 22 of the embodiments above. The virtual content 22 can be emitted by the light engine 154 as the first display light.
As described above, the fourth ambient light can be transmitted to the second combiner 120 through the second modulation component 140; after the first display light emitted by the light engine 154 is modulated by the second modulation component 140 into the second display light, the second display light is likewise transmitted to the second combiner 120 and finally reaches the wearer's eyes 30. On this basis, the wearer sees the real object 20 and the virtual content 22 fused together.
It should be understood that when the wearer fixates on a real object 20 in the real world, the eyeballs rotate toward it, and the brain judges the object's depth from the vergence angle of the two eyes; this depth is called the vergence distance. At the same time, the eye adjusts the refractive power of its crystalline lens by contracting the ciliary muscle so as to image the target sharply, so the state of ciliary contraction gives the brain a second depth signal, called the accommodation distance.
On this basis, to better simulate real-world viewing, through the cooperation of the first zoom lens 166, the second zoom lens 168, the eye-tracking component 172 and related structures, the head-mounted display device of the embodiments of this application can also apply defocus blur both to the light carrying real-object information and to the light carrying virtual-object information, improving the visual effect of virtual-real fusion. FIG. 8 is a schematic diagram of the occlusion template, the virtual content and the scene seen by the wearer when the wearer fixates on the real object. Referring to FIG. 8, because the virtual object and the real object 20 are not at the same vergence depth, when the wearer fixates on the real object 20 the virtual object can be defocus-blurred so that the virtual-real fusion appears more realistic. It should be understood that the degree of blur of the virtual object can be determined from its distance to the wearer's fixation point: the farther from the fixation point, the stronger the blur; the nearer, the weaker. As illustrated in FIG. 8, to keep the real object 20 and the virtual content 22 matched, the same blur can be applied to the occlusion template 152, i.e. the black cylinder in FIG. 8 is likewise defocus-blurred.
FIG. 9 is a schematic diagram of the occlusion template, the virtual content and the scene seen by the wearer when the wearer fixates on the virtual object. Referring to FIG. 9, since the wearer's fixation point lies on the virtual content 22, neither the occlusion template 152 nor the virtual content 22 is defocus-blurred. It should be understood that because the first ambient light originates from the real object 20, when the wearer fixates on the virtual content 22 the image of the real object 20 is defocused automatically, with no special processing required.
It should be understood that the defocus blur of the virtual content 22 can be implemented by a rendering algorithm. The defocus effect can also be understood by analogy with depth of field in photography.
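One common way such a rendering algorithm can grade the blur is to make it proportional to the dioptric distance between the object and the fixation point, by analogy with the circle of confusion in photography. The linear model and the constant k below are illustrative assumptions, not values from the patent:

```python
# Illustrative defocus rule: blur grows with the dioptric distance between
# the object depth and the fixation depth, and vanishes at the fixation
# depth. The linear model and constant k are assumptions for illustration.

def blur_radius(object_depth_m, fixation_depth_m, k=3.0):
    """Blur radius in pixels per the assumed linear dioptric model."""
    return k * abs(1.0 / object_depth_m - 1.0 / fixation_depth_m)

for d in (0.5, 1.0, 2.0, 4.0):
    print(d, blur_radius(d, fixation_depth_m=1.0))
# the object at the fixation depth (1.0 m) gets no blur; the others get
# more blur the farther they sit from it in diopters
```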
In some embodiments, through the cooperation of the controller 170, the first zoom lens 166, the second zoom lens 168, the eye-tracking component and related structures, the head-mounted display device 100 can, while achieving virtual-real fusion, also adaptively correct any myopia or hyperopia of the wearer. Taking myopia as an example: let the wearer's myopia be M (in diopters), the corresponding desired virtual image position be V (in diopters), the optical power of the first zoom lens 166 be P1, and the optical power of the second zoom lens 168 be P2. These parameters then satisfy the following relationship:
[Equation image PCTCN2021133765-appb-000002 in the original publication gives the relationship among M, V, P1 and P2.]
Here, the wearer's myopia can be entered by the wearer. The depth of the wearer's fixation point (i.e. the vergence depth) obtained by the eye-tracking component is taken as the virtual image position V. According to the relationship above, the virtual image position V can then be adjusted by tuning the values of P1 and P2, achieving sharp imaging for different wearers.
It should be understood that for a presbyopic wearer, the crystalline lens has weak accommodation and the ciliary and other eye muscles adjust poorly. For such a wearer, the head-mounted display device 100 of the embodiments of this application can acquire the depth of the fixation point in real time via the eye-tracking component 172. Based on that depth, the first zoom lens 166 and the second zoom lens 168 can each adaptively change their optical power, thereby substituting to some extent for the wearer's crystalline lens in performing accommodation and achieving sharp imaging.
Referring to FIG. 10, when a person observes real objects 20 in the real world (a cylinder and a cube in the example), the vergence depth and the accommodation depth agree and do not conflict. The wearer can therefore judge well the spatial relationship between himself and the real object 20; for example, the wearer can roughly estimate how many steps it would take to reach the cylinder.
However, head-mounted display devices of the VR, AR, and MR types generally work by projecting the image of the light engine 154 or of a display screen to some virtual image position. The virtual images for the left eye and the right eye carry a certain parallax; when the eyes fixate on an object on the virtual image plane, the eyeballs likewise rotate toward that object, so the brain derives the object's depth from the vergence angle of the two eyes. Referring to FIG. 10, the dashed cylinder and the dashed cube represent the vergence depth perceived by the brain when fixating on the corresponding object. Taking the cylinder as an example, the wearer derives its vergence depth L1 from the vergence angle of the two eyes. Meanwhile, to keep the image sharp, the eyes always focus on the virtual image plane, so the distance from the eyes to the virtual image plane is the focus depth. As illustrated in FIG. 11, the solid cylinder and the solid cube represent the focus depth perceived by the brain, illustrated as L2. For the cylinder it is evident that the vergence depth L1 and the focus depth L2 are not equal. When these two depth cues conflict, unlike when observing real objects in the real world, the wearer cannot judge the distance between himself and the cylinder well, and easily develops eye fatigue and dizziness. This phenomenon is known as vergence-accommodation conflict (VAC).
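To make the geometry behind the vergence depth L1 concrete, the depth at which the two gaze rays intersect can be recovered from the interpupillary distance and the vergence angle. The sketch below assumes symmetric fixation straight ahead; the function name and parameters are illustrative and not taken from the application.

```python
import math

def vergence_depth(ipd_m: float, vergence_angle_rad: float) -> float:
    """Depth at which the two gaze rays intersect.

    ipd_m: interpupillary distance in metres.
    vergence_angle_rad: full angle between the two gaze rays; under
    symmetric fixation each eye rotates inward by half of this angle.
    """
    half_angle = vergence_angle_rad / 2.0
    # Each gaze ray starts ipd/2 off-centre and converges at depth d,
    # so tan(half_angle) = (ipd / 2) / d.
    return (ipd_m / 2.0) / math.tan(half_angle)

# A 64 mm IPD with a vergence angle of 2*atan(0.032) rad fixates at 1 m.
depth_m = vergence_depth(0.064, 2.0 * math.atan(0.032))
```

As the text notes, a larger vergence angle (eyes rotated further inward) corresponds to a nearer gaze point, which is exactly how an eye-tracking component can infer the depth of the gaze point from gaze directions alone.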
Based on this, referring to FIG. 12, through the cooperation of the controller 170, the first zoom lens 166, the second zoom lens 168, and the eye-tracking component 172, the head-mounted display device 100 of the embodiments of the present application not only provides a good virtual-real fusion effect but can also acquire the depth of the wearer's gaze point through the eye-tracking component 172; that is, it can determine whether the wearer is looking at the real object 20 or at the virtual content 22. On this basis, by adjusting the optical power of the first zoom lens 166 and the second zoom lens 168, the virtual image can be placed on the corresponding virtual image plane, thereby mitigating the VAC problem.
When the wearer gazes at the more distant real object 20 (the cube in FIG. 12), the vergence angle of the wearer's eyes is small; based on the gaze-point depth computed by the eye-tracking component 172, the optical power of the first zoom lens 166 and the second zoom lens 168 can be adjusted to place the virtual image at the first virtual image plane S101. Similarly, when the wearer gazes at the nearer virtual content 22 (the cylinder in FIG. 12), the vergence angle of the wearer's eyes is large, and the power of the two zoom lenses can be adjusted to place the virtual image at the second virtual image plane S102. Compared with the first virtual image plane S101, the second virtual image plane S102 is closer to the wearer. From the wearer's perspective, while gazing at either the real object 20 or the virtual content 22, the vergence depth and the focus depth remain consistent, so the wearer is not prone to eye fatigue and dizziness, and the VAC problem is mitigated.
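The control rule described above can be sketched as code. Since the application's actual formula relating P1, P2, the myopia degree M, and the virtual image position V is published only as an image, the relation below is an assumed thin-lens reading, not the claimed formula: the eye-side lens P2 places the virtual image at the gaze depth and absorbs the wearer's refractive error, while the world-side lens P1 cancels the pair's net power so real objects keep their true depth.

```python
def lens_powers_for_gaze(gaze_depth_m: float, myopia_diopters: float = 0.0):
    """Hypothetical control rule for the two zoom lenses (illustrative only).

    Assumption: P2 forms the virtual image at the gaze depth and corrects
    the wearer's myopia (entered here as a signed diopter value, e.g. -2.0
    for a 200-degree myope); P1 = -P2 keeps the net power on ambient light
    at zero so the real scene is seen at its true depth.
    """
    p2 = -1.0 / gaze_depth_m + myopia_diopters  # diopters
    p1 = -p2                                    # cancel net power for ambient light
    return p1, p2

# Far cube versus near cylinder for an emmetropic wearer:
p1_far, p2_far = lens_powers_for_gaze(5.0)    # virtual image at plane S101
p1_near, p2_near = lens_powers_for_gaze(0.5)  # virtual image at plane S102
```

Under this assumed rule, the nearer gaze point (plane S102) demands a more strongly negative eye-side power, matching the qualitative behaviour the passage describes.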
The above are specific embodiments of the present application. It should be noted that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present application, and such improvements and refinements shall also fall within the protection scope of the present application.

Claims (14)

  1. A display device for realizing virtual-real fusion, comprising: a first combiner, a second combiner, a first modulation component, a second modulation component, an occlusion template, and a light engine; the first modulation component is located on the light-exit side of the first combiner; the second modulation component is located on the light-exit side of the first modulation component and on the light-incident side of the second combiner;
    the first combiner is configured to receive first ambient light and transmit the first ambient light to the first modulation component, the first ambient light carrying real-object information;
    the first modulation component is configured to modulate the polarization state of the first ambient light to form second ambient light, and to image the second ambient light onto the occlusion template;
    the occlusion template is configured to modulate the polarization state of the second ambient light according to the occlusion relationship between a real object and a virtual object, to form third ambient light;
    the first modulation component modulates the polarization state of the third ambient light again to form fourth ambient light, and transmits the fourth ambient light to the second modulation component;
    the second modulation component is configured to transmit the fourth ambient light to the second combiner, to modulate the polarization state of first display light to form second display light, and to transmit the second display light to the second combiner; wherein the first display light is generated by the light engine and carries the virtual-object information, and the polarization state of the second display light is different from the polarization state of the fourth ambient light;
    the second combiner is configured to transmit the fourth ambient light and the second display light so that the fourth ambient light and the second display light are incident on the wearer's eyes.
  2. The display device according to claim 1, wherein the first modulation component comprises a first lens; the first lens faces the light-exit side of the first combiner and is configured to image the first ambient light onto the occlusion template.
  3. The display device according to claim 2, wherein the first modulation component further comprises a first polarizer, a first polarizing beam splitter, and a half-wave plate;
    the first polarizer and the first polarizing beam splitter are located in sequence on the light-exit side of the first lens, away from the first combiner; the half-wave plate is located on the light-exit side of the first polarizing beam splitter; the occlusion template is located on the back focal plane of the first lens; and the angle between the fast-axis direction of the half-wave plate and the polarization direction of the third ambient light is 45°;
    the first polarizer is configured to modulate the polarization state of the first ambient light to form the second ambient light; the first polarizing beam splitter is configured to reflect the second ambient light to the occlusion template and to transmit the third ambient light; and the half-wave plate is configured to modulate the polarization state of the third ambient light to form the fourth ambient light.
  4. The display device according to claim 3, wherein the second modulation component comprises a second polarizer, a second polarizing beam splitter, and a second lens;
    the second lens faces the light-incident side of the second combiner; the second polarizing beam splitter is located on the light-incident side of the second lens and faces the half-wave plate; and the light engine and the occlusion template are both located on the focal plane of the second lens;
    the second polarizer is configured to modulate the polarization state of the first display light to form the second display light;
    the second polarizing beam splitter is configured to reflect the fourth ambient light to the second lens, and to transmit the second display light to the second lens;
    the second lens is configured to collimate the fourth ambient light and the second display light.
  5. The display device according to any one of claims 1 to 4, wherein the display device further comprises a first zoom lens located on the light-incident side of the first combiner; the first zoom lens is configured to collimate the first ambient light.
  6. The display device according to claim 5, wherein the display device further comprises a second zoom lens located on the light-exit side of the second combiner; the second zoom lens is configured to collimate the fourth ambient light and the second display light transmitted through the second combiner.
  7. The display device according to claim 6, wherein the display device further comprises an eye-tracking component; the eye-tracking component is configured to capture the gaze direction of the wearer's eyes.
  8. The display device according to claim 7, wherein the display device further comprises a controller electrically connected to the first zoom lens, the second zoom lens, and the eye-tracking component, respectively; the controller is configured to adjust the optical power of the first zoom lens and the second zoom lens according to the wearer's gaze direction.
  9. The display device according to claim 8, wherein the depth of the wearer's gaze point is obtained from the wearer's gaze direction, and the controller is configured to control the first zoom lens and the second zoom lens so that the virtual images corresponding to the real object and the virtual object are placed on the virtual image plane where the gaze point is located.
  10. The display device according to claim 8, wherein the display device further comprises a depth detector; the depth detector is disposed on one side of the first combiner, away from the second combiner; the depth detector is configured to acquire depth information of the real environment around the wearer;
    according to the depth information and the virtual content, the occlusion template is further configured to refresh the third ambient light in real time, so as to adaptively change the occlusion relationship between the real object and the virtual object;
    according to the depth information, the light engine is further configured to refresh, in real time, the virtual-object information carried by the first display light, so as to adaptively change the occlusion relationship between the virtual object and the real object.
  11. The display device according to any one of claims 8 to 10, wherein, based on the wearer's degree of myopia, the controller is configured to adjust the optical power of the first zoom lens and the second zoom lens according to the following formula:
    [Formula: published as image PCTCN2021133765-appb-100001]
    wherein P1 is the optical power of the first zoom lens, P2 is the optical power of the second zoom lens, M is the wearer's degree of myopia, and V is the depth of the gaze point obtained from the gaze direction of the wearer's eyes.
  12. The display device according to claim 11, wherein, based on the depth of the gaze point obtained from the wearer's gaze direction, the occlusion template is further configured to apply defocus blur to the incident second ambient light; and/or the light engine is configured to apply defocus blur to the first display light.
  13. The display device according to any one of claims 1 to 12, wherein the display device further comprises an electrochromic sheet; the electrochromic sheet is sandwiched between the first combiner and the second combiner and is configured to block, when energized, the light transmitted through the first combiner; or,
    the display device further comprises a shielding sheet; the shielding sheet is sandwiched between the first combiner and the second combiner and is configured to block the light transmitted through the first combiner.
  14. The display device according to any one of claims 1 to 13, wherein the display device is a head-mounted display device or a head-up display device.
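The per-pixel occlusion logic that claims 1 and 10 express in optical terms (the occlusion template rotating the polarization of ambient light only where virtual content should occlude the real scene) can be sketched computationally. The depth comparison below is an illustrative reading of the depth-detector-driven refresh in claim 10, not the claimed optical implementation; all names are hypothetical.

```python
import numpy as np

def occlusion_mask(real_depth_m: np.ndarray, virtual_depth_m: np.ndarray) -> np.ndarray:
    """Per-pixel mask that would drive the occlusion template.

    True where the virtual object is nearer than the real scene, i.e.
    where ambient light should be blocked so the virtual content occludes
    the real object. Pixels carrying no virtual content are encoded with
    infinite virtual depth and thus never block ambient light.
    """
    return virtual_depth_m < real_depth_m

# Real scene at 2 m everywhere; a virtual object at 1 m covers the centre pixel.
real = np.full((3, 3), 2.0)
virt = np.full((3, 3), np.inf)
virt[1, 1] = 1.0
mask = occlusion_mask(real, virt)  # True only at the centre pixel
```

Refreshing this mask each frame from fresh depth-detector readings is one way to realize the adaptive change of the occlusion relationship that claim 10 describes.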
PCT/CN2021/133765 2020-11-30 2021-11-27 Virtual-reality fusion display device WO2022111668A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011374209.6A CN114578554B (en) 2020-11-30 2020-11-30 Display equipment for realizing virtual-real fusion
CN202011374209.6 2020-11-30

Publications (1)

Publication Number Publication Date
WO2022111668A1 true WO2022111668A1 (en) 2022-06-02

Family

ID=81753999

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/133765 WO2022111668A1 (en) 2020-11-30 2021-11-27 Virtual-reality fusion display device

Country Status (2)

Country Link
CN (1) CN114578554B (en)
WO (1) WO2022111668A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101029968A (en) * 2007-04-06 2007-09-05 北京理工大学 Optical perspective helmet display device of addressing light-ray shielding mechanism
CN106125324A (en) * 2016-06-24 2016-11-16 北京国承万通信息科技有限公司 Light field editing device, system and method and light field display system and method
CN108072978A (en) * 2017-12-21 2018-05-25 成都理想境界科技有限公司 A kind of augmented reality wears display device
US20190004350A1 (en) * 2017-06-29 2019-01-03 Varjo Technologies Oy Display apparatus and method of displaying using polarizers and optical combiners
CN110673340A (en) * 2019-09-24 2020-01-10 歌尔科技有限公司 Augmented reality device and control method thereof
CN111587393A (en) * 2018-01-03 2020-08-25 萨贾德·阿里·可汗 Method and system for compact display for occlusion functionality

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000105348A (en) * 1998-07-27 2000-04-11 Mr System Kenkyusho:Kk Picture observation device
US7639208B1 (en) * 2004-05-21 2009-12-29 University Of Central Florida Research Foundation, Inc. Compact optical see-through head-mounted display with occlusion support
US8884984B2 (en) * 2010-10-15 2014-11-11 Microsoft Corporation Fusing virtual content into real content
CN103869467A (en) * 2012-12-17 2014-06-18 联想(北京)有限公司 Display device and wearable spectacle equipment
US9494799B2 (en) * 2014-09-24 2016-11-15 Microsoft Technology Licensing, Llc Waveguide eye tracking employing switchable diffraction gratings
US20160371884A1 (en) * 2015-06-17 2016-12-22 Microsoft Technology Licensing, Llc Complementary augmented reality
US10134198B2 (en) * 2016-04-19 2018-11-20 Adobe Systems Incorporated Image compensation for an occluding direct-view augmented reality system
CN106526861A (en) * 2016-12-16 2017-03-22 擎中科技(上海)有限公司 AR (Augmented Reality) display device
AU2018270109A1 (en) * 2017-05-18 2019-12-05 Arizona Board Of Regents On Behalf Of The University Of Arizona Multilayer high-dynamic-range head-mounted display
JP2019094021A (en) * 2017-11-27 2019-06-20 株式会社小糸製作所 Head-up display device for vehicle
CN108267856A (en) * 2017-12-21 2018-07-10 成都理想境界科技有限公司 A kind of augmented reality wears display equipment
CN208092341U (en) * 2017-12-21 2018-11-13 成都理想境界科技有限公司 A kind of optical system for wearing display equipment


Also Published As

Publication number Publication date
CN114578554B (en) 2023-08-22
CN114578554A (en) 2022-06-03

Similar Documents

Publication Publication Date Title
TWI569040B (en) Autofocus head mounted display device
CN110637249B (en) Optical device, head-mounted display, imaging system and method of imaging an object
JP7431267B2 (en) Virtual image display device and head-mounted display using the same
CN107247333B (en) Display system capable of switching display modes
US11327307B2 (en) Near-eye peripheral display device
CN111886533A (en) Inclined array based display
JP7474317B2 (en) Display systems providing concentric light fields and hybrid monocular to binocular
EP3695270A1 (en) Augmented reality display comprising eyepiece having a transparent emissive display
TW201835636A (en) Augmented reality imaging system
CN110914786A (en) Method and system for registration between an external scene and a virtual image
US11287663B2 (en) Optical transmitting module and head mounted display device
KR101852680B1 (en) The head mounted display apparatus and method for possible implementation of augmented reality or mixed reality
JP2013532297A (en) Embedded lattice structure
US11536969B2 (en) Scene camera
WO2021092314A1 (en) System and method for displaying an object with depths
WO2022111668A1 (en) Virtual-reality fusion display device
KR20210035555A (en) Augmented reality device and wearable device including the same
JP2012022278A (en) Video virtual feeling glasses
CN109963145B (en) Visual display system and method and head-mounted display device
US20220128756A1 (en) Display system for generating three-dimensional image and method therefor
CN107908006A (en) A kind of head-mounted display apparatus
US20230115411A1 (en) Smart eyeglasses
CN109963141B (en) Visual display system and method and head-mounted display device
KR20220145668A (en) Display apparatus including free-formed surface and operating method of the same
WO2023158654A1 (en) Hybrid waveguide to maximize coverage in field of view (fov)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21897168

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21897168

Country of ref document: EP

Kind code of ref document: A1