WO2018149267A1 - Display method and device based on augmented reality - Google Patents


Info

Publication number
WO2018149267A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, display, virtual, augmented reality, display mode
Prior art date
Application number
PCT/CN2018/073473
Other languages
French (fr)
Chinese (zh)
Inventor
毛颖
钟张翼
Original Assignee
深圳梦境视觉智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳梦境视觉智能科技有限公司
Publication of WO2018149267A1


Classifications

    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/017: Head mounted
    • G02B27/0172: Head mounted characterised by optical features

Definitions

  • the embodiments of the present application relate to the field of augmented reality technologies, and in particular, to a display method and device based on augmented reality.
  • The head-mounted display is a new technology developed in recent years.
  • the specific applications can be divided into virtual reality and augmented reality.
  • Human 3D vision is produced through the collaborative work of both eyes.
  • the image seen by the left eye and the image seen by the right eye have subtle differences in perspective.
  • the optic nerve of the human brain mixes the images of the left and right eyes to form a 3D vision.
  • The head-mounted display presents different images to each eye on a display placed close to the front of the eyes.
  • The left eye can only see the left display area and the right eye can only see the right display area, which likewise simulates 3D vision. If, at the same time, the head-mounted display blocks the light and images of the external environment so that the eyes can only see the display screen, the virtual reality effect is produced.
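As an illustrative sketch of the binocular principle described above (all numbers are assumed for illustration and are not taken from the embodiments), the same 3D point can be projected once per eye; the horizontal offset between the two projections is the disparity the brain fuses into depth:

```python
# Minimal sketch of stereoscopic rendering: the same 3D point is projected
# for each eye, with the eye positions offset by half the interpupillary
# distance (IPD). The resulting horizontal disparity is what produces 3D vision.

def project_point(point, eye_x, focal_len=1.0):
    """Pinhole projection of a 3D point (x, y, z) for an eye at (eye_x, 0, 0)."""
    x, y, z = point
    # Shift into the eye's coordinate frame, then perspective-divide by depth.
    return (focal_len * (x - eye_x) / z, focal_len * y / z)

IPD = 0.064  # typical interpupillary distance in metres (illustrative value)

point = (0.0, 0.0, 2.0)                  # a point 2 m straight ahead
left = project_point(point, -IPD / 2)    # left-eye image coordinates
right = project_point(point, +IPD / 2)   # right-eye image coordinates

disparity = left[0] - right[0]           # horizontal disparity between the views
# The closer the point, the larger the disparity; at infinity it tends to 0.
```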
  • the principle of augmented reality is to simulate virtual vision through a head-mounted display, superimposed on the user's normal vision.
  • the following figure is a typical implementation of an augmented reality head mounted display.
  • The augmented reality head-mounted display has two implementations, optical see-through and video see-through; the main difference lies in the optical combiner.
  • In the optical see-through implementation, the optical combiner is a half-silvered mirror: part of the ambient light passes through the half mirror, while the virtual object projected by the display is reflected off the half mirror's surface into the user's eyes, synthesizing the real and virtual worlds.
  • In the video see-through implementation, optical synthesis is achieved by a camera device and a display screen.
  • the camera device captures the real environment, and the video data and the virtual object are superimposed by computer processing and presented to the user through the display screen.
  • The system structure is basically the same as that of a virtual reality head-mounted display; it only needs to add a camera that captures the environment and a software module that composites the real and virtual worlds.
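The video see-through compositing step described above can be sketched as a simple per-pixel blend; this is a hypothetical minimal example using single-channel intensities and an assumed coverage mask, not a real renderer:

```python
# Sketch of the video see-through pipeline: a camera frame of the real
# environment is composited with a rendered virtual layer before being shown
# on the display. Pixel values are illustrative single-channel intensities.

def composite(camera_frame, virtual_layer, alpha_mask):
    """Blend: where the mask is 1 the virtual pixel wins; where 0, the camera shows."""
    return [
        [a * v + (1 - a) * c
         for c, v, a in zip(cam_row, virt_row, mask_row)]
        for cam_row, virt_row, mask_row in zip(camera_frame, virtual_layer, alpha_mask)
    ]

camera = [[0.2, 0.2], [0.2, 0.2]]    # captured real scene
virtual = [[1.0, 1.0], [1.0, 1.0]]   # rendered virtual object
mask = [[1.0, 0.0], [0.5, 0.0]]      # where the virtual object covers the view

frame = composite(camera, virtual, mask)
# frame now shows the virtual object where the mask is set,
# the real scene elsewhere, and a blend at partially covered pixels.
```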
  • The optical see-through head-mounted display provides a better user experience because the user sees the real environment directly.
  • the technical problem to be solved by the embodiment of the present application is to provide a display method and device based on augmented reality, which has a sense of presence, a large display area, and privacy.
  • The embodiment of the present application provides a display method based on augmented reality, including: emitting a first ray comprising a virtual image, the virtual image being an image received from an external device; acquiring a second ray comprising a live image of the external scene; and synthesizing the first ray comprising the virtual image with the second ray comprising the live image of the external scene.
  • The augmented reality based display method combines an external real image with a virtual image. The virtual image can provide the user with a virtual display screen or a virtual mouse and keyboard with a large display range, and can be used in conjunction with real physical screens, mice, keyboards, touch screens, buttons, and the like, offering a large field of view and privacy.
  • FIG. 1a is a schematic structural diagram of a display device based on augmented reality provided by Embodiment 1 of the present application;
  • Figure 1b is a schematic view of the see-through light guiding element shown in Figure 1a when it is placed on the head frame;
  • Figure 1c is a first relationship diagram between a side view angle and a display brightness of the display module shown in Figure 1a;
  • Figure 1d is a second relationship diagram between the side view angle and the display brightness of the display module shown in Figure 1a;
  • Figure 1e is a third relationship diagram between a side view angle and a display brightness of the display module shown in Figure 1a;
  • FIG. 2a is a schematic diagram showing a positional relationship between a display module and a user's face when the augmented reality-based display device shown in FIG. 1a is worn;
  • Figure 2b is a schematic view showing the rotation of the display module shown in Figure 1a;
  • FIG. 3 is a schematic diagram of an imaging principle of the augmented reality based display device shown in FIG. 1a;
  • FIG. 4 is a schematic view of the augmented reality based display device shown in FIG. 1a when a diopter correction lens is provided;
  • FIG. 5 is a schematic diagram showing the distance relationship between the diagonal field of view area and the farthest end of the head frame to the foremost end of the user's head of the augmented reality display device shown in FIG. 1a;
  • FIG. 6 is a schematic diagram of the augmented reality based display device shown in FIG. 1a connected to an external device;
  • FIG. 7 is a schematic structural diagram of a display device based on augmented reality provided by Embodiment 2 of the present application.
  • FIG. 8 is a schematic diagram of the augmented reality based display device shown in FIG. 7 connected to an external device;
  • FIG. 9 is still another schematic diagram of the augmented reality based display device shown in FIG. 7 connected to an external device;
  • FIG. 10 is a schematic diagram of the operation of the augmented reality based display device shown in FIG. 7;
  • FIG. 11 is a schematic diagram of a first display mode in an augmented reality based display method according to a third embodiment of the present application.
  • FIG. 12 is a schematic diagram of a second display mode in an augmented reality based display method according to a third embodiment of the present application.
  • FIG. 13 is a first application example diagram of a display method based on augmented reality provided by a third embodiment of the present application.
  • FIG. 14 is a schematic diagram of a second application example in a display method based on augmented reality provided by a third embodiment of the present application;
  • FIG. 15 is a schematic diagram of a third application example in a display method based on augmented reality provided by a third embodiment of the present application.
  • An augmented reality based display device provided by the embodiment of the present application has a total weight of less than 350 grams and includes: a head frame 11, two display modules 12, and two see-through light guiding elements 13.
  • the see-through light guiding element 13 is a partially transmissive, partially reflective optical synthesizing device.
  • the display module 12 and the see-through light guiding elements 13 are all disposed on the head frame 11.
  • The head frame 11 fixes the display module 12 and the see-through light guiding element 13.
  • the display module 12 is disposed on the upper side of the see-through light guiding element 13, and the light emitted by the display module 12 can be reflected after passing through the see-through light guiding element 13.
  • The display module 12 may also be located on the side of the see-through light guiding element 13.
  • the augmented reality-based display device further includes a main board 17 disposed on the head frame 11 and located between the two display modules 12.
  • the main board 17 is provided with a processor for processing a virtual image signal and displaying the virtual image information on the display module 12.
  • the head frame 11 is used for wearing on the head of the user, and each of the see-through light guiding elements 13 has a concave surface which is disposed toward the eyes of the user.
  • The first light reflected by the concave surface of one see-through light guiding element 13 enters the left eye of the user, and the other first light reflected by the concave surface of the other see-through light guiding element 13 enters the right eye of the user, forming the vision of a 3D virtual scene in the user's mind.
  • the first light is emitted by the display module 12, and the first light includes virtual image information of the left eye and the right eye.
  • two see-through light guiding elements 13 are disposed on the head frame 11 and are independently embedded in the head frame 11, respectively.
  • Alternatively, two regions corresponding to the left and right eyes of the user may be formed on the raw material used to fabricate the see-through light guiding element; each region has the shape and size of one of the independently disposed see-through light guiding elements 13 described above and achieves the same effect. The final result is one large see-through light guiding element with two areas corresponding to the user's left and right eyes.
  • In this case, the two see-through light guiding elements 13 are integrally formed.
  • the see-through light guiding elements provided corresponding to the left and right eye regions of the user are embedded in the head frame 11.
  • The display module 12 is detachably mounted on the head frame 11 (for example, the display module is an intelligent display terminal such as a mobile phone or tablet computer), or the display module is fixedly mounted on the head frame (for example, the display module and the head frame are of an integrated design).
  • Two display modules 12 can be mounted on the head frame 11.
  • the left eye and the right eye of the user are respectively provided with a display module 12, for example, one display module 12 is configured to emit a first light containing virtual image information of the left eye, and A display module 12 is configured to emit another first light that includes virtual image information of the right eye.
  • The two display modules 12 are respectively located above the two see-through light guiding elements 13 in one-to-one correspondence.
  • When the device is worn, the two display modules 12 are respectively located above the user's left and right eyes in one-to-one correspondence.
  • The display module 12 can also be located on the side of the see-through light guiding elements, that is, the two see-through light guiding elements are located between the two display modules; when the augmented reality based display device is worn on the user's head, the two display modules are located at the sides of the user's left and right eyes in one-to-one correspondence.
  • A single display module 12 can also be mounted on the head frame 11.
  • The single display module 12 has two display areas, one for emitting a first ray containing the left-eye virtual image information, and the other for emitting another first ray containing the right-eye virtual image information.
  • the display module includes, but is not limited to, an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), an LCOS (Liquid Crystal On Silicon), or the like.
  • the horizontal axis represents the side view angle and the vertical axis represents the display brightness.
  • When the display module 12 is an LCD, the brightness of the display module 12 varies with the viewing angle of the observer.
  • For a general-purpose LCD, the side view angle at which the display brightness falls to 50% is generally large.
  • When an LCD is applied to an augmented reality display system, a small side view angle is more suitable, with the brightness of the display module 12 concentrated in an angular area near the center. Because the augmented reality display system mainly uses the angular area near the center, the brightness of the first light and the second light projected into the user's eyes will be relatively high. Referring to Fig. 1d, for an LCD applied in an augmented reality display system, the side view angle at which the display brightness drops to 50% is generally small. Moreover, the brightness distribution of the first light and the second light emitted by such an LCD is symmetric about the 0 degree side view angle and confined within a side view angle of 60 degrees.
  • The display brightness of the first light and the second light emitted by the display module 12 is maximum at a side view angle of 0 degrees; as the user's viewing angle shifts to either side, the display brightness gradually decreases, falling to 0 at side view angles of 60 degrees and beyond.
  • The brightness distribution of the first light and the second light emitted by an LCD applied to the augmented reality display system may also be asymmetric about the 0 degree side view angle, in which case the side view angle of maximum brightness is not 0 degrees.
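The angular brightness curves described above can be modeled numerically. The Gaussian falloff and 25-degree half-width below are assumptions for illustration, not the measured characteristics of any particular LCD:

```python
import math

# Hypothetical model of display brightness versus side view angle: peak at
# 0 degrees, falling off toward the sides and reaching 0 at 60 degrees,
# as described for an LCD suited to an augmented reality display system.

def brightness(angle_deg, half_width_deg=25.0):
    """Relative brightness (0..1) at a given side view angle, Gaussian falloff."""
    if abs(angle_deg) >= 60.0:
        return 0.0  # the distribution is confined within a 60-degree side view angle
    return math.exp(-0.5 * (angle_deg / half_width_deg) ** 2)

def half_brightness_angle(half_width_deg=25.0):
    """Side view angle at which brightness drops to 50% of the peak."""
    return half_width_deg * math.sqrt(2.0 * math.log(2.0))
```

An asymmetric distribution, as in the last bullet above, would simply shift the peak away from 0 degrees.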
  • The two display modules 12 are respectively located above the two see-through light guiding elements 13 in one-to-one correspondence.
  • The display module 12 forms an angle a with the frontal plane of the user's head; the angle a ranges from 0 to 180 degrees and is preferably obtuse.
  • the projection of the display module 12 on the horizontal plane is perpendicular to the normal plane.
  • The position of the see-through light guiding element 13 can be rotated by an angle b around a rotation axis perpendicular to the horizontal plane, the angle b ranging from 0 to 180 degrees, preferably from 0 to 90 degrees.
  • The spacing between the see-through light guiding elements 13 corresponding to the left and right eyes can be adjusted by the mechanical structure on the head frame 11 to accommodate the user's interpupillary distance, ensuring comfort and imaging quality in use.
  • The farthest distance between the edges of the two see-through light guiding elements 13 is less than 150 mm; that is, the distance from the left edge of the see-through light guiding element 13 corresponding to the left eye to the right edge of the see-through light guiding element 13 corresponding to the right eye is less than 150 mm.
  • the display modules 12 are connected by a mechanical structure, and the distance between the display modules 12 can also be adjusted, or the same effect can be achieved by adjusting the position of the display content on the display module 12.
  • The head frame 11 may be an eyeglass frame structure for resting on the user's ears and nose; a nose pad 111 and temples 112 are disposed on it, and the nose pad 111 and the temples 112 fix the frame to the user's head.
  • the temple 112 is a foldable structure, wherein the nose pad 111 is correspondingly fixed on the nose bridge of the user, and the temple 112 is correspondingly fixed on the user's ear.
  • the temples 112 can also be connected by an elastic band, and the elastic band tightens the temples when worn to help the frame to be fixed at the head.
  • the nose pad 111 and the temple 112 are telescopic mechanisms that adjust the height of the nose pad 111 and the telescopic length of the temple 112, respectively.
  • the nose pad 111 and the temple 112 can also be of a detachable structure, and the nose pad 111 or the temple 112 can be replaced after disassembly.
  • the head frame 11 may include a nose pad and a stretch rubber band that is fixed to the user's head by a nose pad and a stretch rubber band; or only a stretch rubber band that is fixed to the user's head by the stretch rubber band.
  • The head frame 11 may also be a helmet-type frame structure worn on the top of the user's head and the nose.
  • Since the main function of the head frame 11 is to be worn on the user's head and to provide support for optical and electrical components such as the display module 12 and the see-through light guiding element 13, the head frame includes but is not limited to the above forms; on the premise of retaining these main functions, those skilled in the art can modify the head frame according to the needs of practical applications.
  • the display module 12 emits a first light ray 121 including left-eye virtual image information, and the first light ray 121 reflected by the concave surface 131 of the see-through light guiding element 13 enters the left eye 14 of the user; similarly, the display module emits Another first light containing the virtual image information of the right eye, another first light reflected by the concave surface of the other see-through light guiding element enters the right eye of the user, thereby forming a visual feeling of the 3D virtual scene in the user's brain,
  • Compared with a solution in which a small display screen is disposed directly in front of the user's eye, resulting in a small visual area, in the present device the first light emitted by the display modules is reflected by the two see-through light guiding elements into the user's two eyes respectively, so the visual area is large.
  • Each of the see-through light guiding elements 13 further has a convex surface disposed opposite to the concave surface; the second light containing external image information, transmitted through the convex and concave surfaces of the see-through light guiding element 13, enters the user's eyes to form a visual blend of the 3D virtual scene and the real scene.
  • One see-through light guiding element 13 further has a convex surface 132 disposed opposite the concave surface 131, and the second light 151 containing external image information transmitted through the convex surface 132 and the concave surface 131 of the see-through light guiding element 13 enters the user's left eye 14.
  • Similarly, the other see-through light guiding element further has a convex surface disposed opposite to its concave surface, and the second light containing external image information transmitted through the convex and concave surfaces of that see-through light guiding element enters the user's right eye.
  • the user can see the real scene of the outside world, thereby forming a visual experience of mixing the 3D virtual scene and the real scene.
  • a diopter correcting lens 16 is disposed between the human eye and the see-through light guiding element 13, the diopter correcting lens 16 being disposed perpendicular to the horizontal plane.
  • the plane of the diopter correction lens may also be at an angle of 30 degrees to 90 degrees from the horizontal plane.
  • different degrees of diopter correcting lenses may be arbitrarily set.
  • The display module 12 emits a first light ray 121 including left-eye virtual image information; both the first light ray 121 reflected by the concave surface 131 of the see-through light guiding element 13 and the second light ray 151 containing external image information transmitted through the convex surface 132 and the concave surface 131 of the see-through light guiding element 13 pass through the diopter correction lens 16 before entering the left eye 14 of the user.
  • The diopter correction lens 16 is a concave lens; the first light 121 and the second light 151 passing through it are diverged, so that the focus of the first light 121 and the second light 151 on the left eye 14 is shifted back.
  • The diopter correction lens 16 can also be a convex lens, which converges the first light ray 121 and the second light ray 151 passing through it to advance the focus of the first light ray 121 and the second light ray 151 on the left eye 14.
  • Similarly, the display module emits another first light containing the right-eye virtual image information; this first light reflected by the concave surface of the other see-through light guiding element, and the second light transmitted through the convex and concave surfaces of that see-through light guiding element, also pass through a diopter correction lens.
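The effect of the diopter correction lens can be illustrated with the thin lens approximation, in which optical powers in diopters simply add; the eye and lens powers below are illustrative assumptions, not values from the embodiments:

```python
# Sketch of how a diopter correction lens shifts focus: in the thin lens
# approximation, the powers of stacked optical elements add. A concave
# (negative) lens lowers the combined power, moving the focus back; a convex
# (positive) lens raises it, moving the focus forward.

def combined_power(eye_power_d, lens_power_d):
    """Total optical power in diopters of eye plus correction lens."""
    return eye_power_d + lens_power_d

def focal_length_mm(power_d):
    """Focal length in millimetres for a given power in diopters."""
    return 1000.0 / power_d

eye = 63.0                                 # illustrative eye power, diopters
with_concave = combined_power(eye, -3.0)   # diverging lens: power drops
with_convex = combined_power(eye, +2.0)    # converging lens: power rises
# Lower combined power means a longer focal length, i.e. the focus shifts back.
```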
  • With the user's eyeball as the apex, the lines from the user's eyeball to the two diagonal edges of the virtual display area of the virtual image seen through the see-through light guiding element 13 form a diagonal field of view.
  • The distance from the farthest end of the head frame to the contact position with the foremost end of the head is c; the length c can be adjusted as needed.
  • the angular extent of the diagonal field of view region is inversely proportional to the distance from the most distal end of the head frame 11 to the contact position with the foremost end of the head.
  • the distance from the farthest end of the head frame to the contact position with the foremost end of the head is less than 80 mm under the premise that the diagonal field of view area is greater than 55 degrees.
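The inverse relationship between the diagonal field of view and the distance c can be sketched with basic trigonometry; the display diagonal and distances below are illustrative assumptions, not dimensions from the embodiments:

```python
import math

# The diagonal field of view is the angle subtended at the eye by the
# diagonal of the virtual display area. For a fixed display diagonal, the
# angle shrinks as the viewing distance grows, matching the inverse
# relationship described above.

def diagonal_fov_deg(display_diagonal_mm, viewing_distance_mm):
    """Full angle subtended by a display diagonal at the given distance."""
    half_angle = math.atan((display_diagonal_mm / 2.0) / viewing_distance_mm)
    return math.degrees(2.0 * half_angle)

near = diagonal_fov_deg(80.0, 70.0)   # shorter distance: wider field of view
far = diagonal_fov_deg(80.0, 90.0)    # longer distance: narrower field of view
# near exceeds the 55-degree figure mentioned above; far does not.
```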
  • The display module 12 is connected to the main board 17 by a cable.
  • the main board 17 is also provided with a video interface, a power interface, a communication chip, and a memory.
  • the video interface is used to connect a computer, a mobile phone, or other device to receive a video signal.
  • The video interface may be HDMI, DisplayPort, Thunderbolt, USB Type-C, Micro-USB, MHL (Mobile High-Definition Link), and the like.
  • The processor is configured to process data, mainly decoding the video signal and displaying it on the display module 12.
  • the power interface is used for external power supply or battery power supply.
  • the power interface includes a USB interface or other interfaces.
  • The communication chip is configured to exchange data with the outside world through a communication protocol; specifically, it connects to the Internet through a communication protocol such as WiFi, WCDMA, or TD-LTE and then acquires data through the Internet or connects with other augmented reality based display devices, or it connects directly to other augmented reality based display devices through a communication protocol.
  • the memory is used for storing data, and is mainly used for storing display data displayed in the display module 12.
  • An earphone interface, a sound card chip, or another sound generating device can also be provided on the main board 17.
  • The earphone interface is used to connect earphones and transmit audio signals to them.
  • the sound card chip is used to parse the sound signal.
  • a speaker can also be disposed on the head frame 11 of the display device based on the augmented reality, and the sound signal parsed by the sound card chip is converted into sound.
  • When the augmented reality based display device includes only the head frame 11, the two display modules 12, the two see-through light guiding elements 13, and the main board 17 as described above, all rendering of the 3D virtual scene and generation of the images corresponding to both eyes are performed on an external device.
  • the external device includes: a computer, a mobile phone, a tablet computer, and the like.
  • the display device based on the augmented reality receives the video signal of the external device through the video interface, and displays it on the display module 12 after decoding.
  • Interaction with the user is performed by application software on an external device such as a computer, mobile phone, or tablet computer, and the user can operate the augmented reality based display device using the mouse, keyboard, touchpad, or buttons on the external device.
  • Augmented reality based display devices can project a display screen at a fixed location within the user's field of view. The user needs to adjust the size, position, and the like of the projection screen through software on the device connected to the augmented reality based display device.
  • The display device based on augmented reality provided by the embodiment of the present application reflects the first rays including the left-eye virtual image information and the right-eye virtual image information into the user's two eyes through the concave surfaces of the two see-through light guiding elements, forming a visual experience of a 3D virtual scene in the user's brain with a large visual area.
  • a plurality of sensors are disposed to perform sensing on a surrounding environment.
  • The display device based on augmented reality has a total weight of less than 350 grams and includes: a head frame 21, two display modules 22, two see-through light guiding elements 23, and a main board 24.
  • the display module 22, the see-through light guiding element 23 and the main board 24 are all disposed on the head frame 21.
  • the head frame 21 fixes the display module 22, the see-through light guiding element 23 and the main board 24.
  • the display module 22 is disposed on the upper side of the see-through light guiding element 23, and the light emitted by the display module 22 can be reflected by the see-through light guiding element 23.
  • The main board 24 is located between the two display modules 22.
  • the main board 24 is provided with a processor for processing virtual image signals and displaying the virtual image information on the display module 22.
  • The specific functions, structures, and positional relationships of the head frame 21, the two display modules 22, the two see-through light guiding elements 23, and the main board 24 are the same as those of the head frame 11, the two display modules 12, the two see-through light guiding elements 13, and the main board 17 described in Embodiment 1, and will not be repeated here.
  • a diopter correcting lens is disposed between the human eye and the see-through light guiding element 23, the diopter correcting lens being disposed perpendicular to the horizontal plane.
  • different degrees of diopter correcting lenses may be arbitrarily set.
  • The head frame 21 is further provided with a monocular camera 211, a binocular/multi-view camera 212, an eyeball tracking camera 213, a gyroscope 214, an accelerometer 215, a magnetometer 216, a depth of field sensor 217, an ambient light sensor 218, and/or a distance sensor 219.
  • The monocular camera 211, the binocular/multi-view camera 212, the eyeball tracking camera 213, the gyroscope 214, the accelerometer 215, the magnetometer 216, the depth of field sensor 217, the ambient light sensor 218, and/or the distance sensor 219 are all electrically connected to the main board 24.
  • the monocular camera 211 is a color monocular camera placed at the front of the head frame 21.
  • The monocular camera 211 faces away from the user's face, and the camera can be used to take photos.
  • Positioning of the augmented reality based display device can be assisted by using this camera to detect known positions in the environment with computer vision technology.
  • The monocular camera 211 can also be a high-resolution camera for taking photos or video; through software, the captured video can additionally be overlaid with the virtual objects seen by the user, reproducing the content the user sees through the augmented reality based display device.
  • The binocular/multi-view camera 212 may be a monochrome or color camera disposed on the front or side of the head frame 21, located on one side, both sides, or around the monocular camera 211. Further, the binocular/multi-view camera 212 may be provided with an infrared filter. With a binocular camera, depth of field information can further be obtained from the environment image; with a multi-view camera, the viewing angle can further be expanded to obtain more environment image and depth of field information.
  • The environment images and distance information captured by the binocular/multi-view camera 212 can be used to: (1) fuse with the data of the gyroscope 214, the accelerometer 215, and the magnetometer 216 to calculate the pose of the augmented reality based display device; (2) capture user gestures, palm prints, and the like for human-computer interaction.
  • each of the above-mentioned monocular camera or binocular/multi-view camera may be one of an RGB camera, a monochrome camera or an infrared camera.
  • The eyeball tracking camera 213 is disposed on one side of the see-through light guiding element 23; when the user wears the augmented reality based display device, the eyeball tracking camera 213 faces the user's face.
  • The eyeball tracking camera 213 is used to track the focus of the human eye, so that the specific part of the virtual object or virtual screen the human eye is gazing at can be tracked and treated specially; for example, specific information about an object can be automatically displayed next to the object the human eye is watching.
  • The area the human eye is gazing at can display a high-definition virtual object image, while other areas need only display low-definition images, which effectively reduces the amount of image rendering computation without affecting the user experience.
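The gaze-contingent rendering idea above can be sketched as a per-pixel quality decision; the foveal radius and coordinates are illustrative assumptions, not parameters from the embodiments:

```python
# Sketch of gaze-contingent rendering: regions near the tracked gaze point
# are rendered at high resolution, everything else at low resolution,
# reducing the rendering load without affecting the user experience.

def render_quality(pixel, gaze, fovea_radius=120.0):
    """Return 'high' inside the foveal radius around the gaze point, else 'low'."""
    dx = pixel[0] - gaze[0]
    dy = pixel[1] - gaze[1]
    return "high" if (dx * dx + dy * dy) ** 0.5 <= fovea_radius else "low"

gaze = (640, 360)   # gaze point reported by the eyeball tracking camera
# Pixels near the gaze point get full detail; the periphery gets reduced detail.
```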
  • the gyroscope 214, the accelerometer 215, and the magnetometer 216 are disposed between the two display modules 22.
  • the relative pose between the user's head and the initial position of the system can be obtained by fusing the data of the gyroscope 214, the accelerometer 215, and the magnetometer 216.
  • the raw data of these sensors can be further fused with the data of the binocular/multi-view camera 212 to obtain the position and attitude of the augmented reality based display device in a fixed environment.
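The gyroscope/accelerometer fusion described above can be sketched with a complementary filter. This is one common approach; the embodiments do not specify the fusion algorithm, and the gain and sample data below are assumptions (a magnetometer would correct yaw drift in the same spirit):

```python
# Minimal complementary-filter sketch of orientation fusion: the gyroscope
# gives smooth but drifting angular rates, while the accelerometer gives a
# noisy but drift-free tilt estimate from gravity. Blending the two keeps
# the estimate both smooth and anchored.

def fuse_pitch(prev_pitch, gyro_rate, accel_pitch, dt, k=0.98):
    """Blend integrated gyro rate with the accelerometer's gravity-derived pitch."""
    return k * (prev_pitch + gyro_rate * dt) + (1.0 - k) * accel_pitch

pitch = 0.0
for _ in range(100):                 # 1 s of samples at 100 Hz
    # The gyro reports no rotation, but the accelerometer sees a 10-degree tilt:
    pitch = fuse_pitch(pitch, gyro_rate=0.0, accel_pitch=10.0, dt=0.01)
# The estimate converges toward the accelerometer's drift-free reading.
```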
  • The depth of field sensor 217 is disposed at the front of the head frame 21 and can directly obtain depth information in the environment. Compared to the binocular/multi-view camera 212, the depth of field sensor can obtain more accurate, higher-resolution depth of field data. Similarly, these data can be used to: (1) fuse with the data of the gyroscope 214, the accelerometer 215, and the magnetometer 216 to calculate the pose of the augmented reality based display device; (2) capture user gestures and palm prints for human-computer interaction; (3) detect three-dimensional information of objects around the user.
  • the ambient light sensor 218 is disposed on the head frame 21, and can monitor the intensity of ambient light in real time.
  • the display device based on the augmented reality adjusts the brightness of the display module 22 in real time according to the change of the ambient light to ensure the consistency of the display effect under different ambient light.
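The brightness tracking just described can be sketched as a mapping from measured illuminance to panel brightness. The logarithmic mapping and the lux/nits ranges below are illustrative assumptions; the specification only states that brightness follows ambient light in real time.

```python
import math

def display_brightness(ambient_lux, min_nits=50.0, max_nits=400.0):
    """Map ambient illuminance (lux) to display brightness (nits) so the
    virtual image keeps a consistent contrast against the background.
    """
    lo, hi = 1.0, 10000.0                        # dim indoors .. direct sunlight
    lux = min(max(ambient_lux, lo), hi)          # clamp to the mapped range
    t = math.log(lux / lo) / math.log(hi / lo)   # position on a 0..1 log scale
    return min_nits + t * (max_nits - min_nits)
```

A log scale is a natural choice here because perceived brightness is roughly logarithmic in luminance, but any monotone mapping would satisfy the passage.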
  • The distance sensor 219 is disposed at a position where the augmented reality based display device contacts the user's face and detects whether the device is worn on the user's head. If the user removes the device, power can be saved by turning off the display module 22, the processor, and so on.
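The wear-detection power saving can be sketched as a small state machine driven by the distance reading. The distance threshold and the debounce count are illustrative assumptions, not values from the specification.

```python
class WearMonitor:
    """Power-management sketch: when the distance sensor reports the device
    is no longer against the user's face, the display module (and processor)
    are shut down to save power."""

    def __init__(self, worn_threshold_mm=30.0, debounce=3):
        self.worn_threshold_mm = worn_threshold_mm
        self.debounce = debounce          # consecutive misses before power-off
        self._miss_count = 0
        self.display_on = True

    def update(self, distance_mm):
        """Feed one sensor reading; returns whether the display stays on."""
        if distance_mm > self.worn_threshold_mm:
            self._miss_count += 1
        else:
            self._miss_count = 0
            self.display_on = True        # device is back on the head
        if self._miss_count >= self.debounce:
            self.display_on = False       # power down display module, etc.
        return self.display_on
```

Debouncing avoids flickering the display off during brief sensor glitches while the device is still worn.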
  • The augmented reality based display device further includes an infrared/near-infrared LED electrically connected to the main board 24, where the infrared/near-infrared LED is used to provide a light source for the binocular/multi-view camera 212. Specifically, the infrared/near-infrared LED emits infrared rays; when the infrared rays reach an object captured by the binocular/multi-view camera 212, the object reflects them, and the photosensitive element on the binocular/multi-view camera 212 receives the reflected infrared rays, converts them into electrical signals, and imaging processing follows.
  • the operations that the augmented reality-based display device can perform when performing human-computer interaction include the following:
  • Augmented reality based display devices can project a display screen at a fixed location within the user's field of view. The user can adjust the size, position, and the like of the projection screen through sensors on the augmented reality based display device.
  • A remote control with buttons, a joystick, a touchpad, and the like can be connected to the augmented reality based display device in a wired or wireless manner as a human-computer interaction interface.
  • An audio decoding and power amplifier chip can be added to the main board, together with an integrated earphone jack, earbuds, or a speaker, and a microphone, allowing the user to interact with the augmented reality based display device by voice.
  • a video interface and a processor are provided on the motherboard.
  • When the augmented reality based display device includes only the head frame 21, the two display modules 22, the two see-through light guiding elements 23, the main board 24, and the plurality of sensors described above, all of the 3D virtual scene rendering, the generation of the images corresponding to the two eyes, and the processing of the data acquired by the sensors can be performed in an external device connected to the augmented reality based display device.
  • the external device includes: a computer, a mobile phone, a tablet computer, and the like.
  • The augmented reality based display device receives the video signal of the external device through the video interface and, after decoding, displays it on the display module 22.
  • The external device receives the data acquired by the plurality of sensors on the augmented reality based display device and processes it to adjust the images displayed to the two eyes according to the data; the adjustment is reflected in the images shown on the display module 22.
  • the processor on the augmented reality based display device is only used to support the transmission and display of video signals and the transfer of sensor data.
  • a processor with strong computing power is disposed on the motherboard, and some or all of the computer vision algorithms are completed in the display device based on the augmented reality.
  • The augmented reality based display device receives the video signal of the external device through the video interface and, after decoding, displays it on the display module 22.
  • The external device receives the data acquired by a part of the sensors on the augmented reality based display device and processes it to adjust the images displayed to the two eyes according to the sensor data; the adjustment is reflected in the images shown on the display module 22.
  • the data acquired by the remaining sensors is processed on an augmented reality based display device.
  • data acquired by the monocular camera 211, the binocular/multi-view camera 212, the gyroscope 214, the accelerometer 215, the magnetometer 216, and the depth of field sensor 217 are processed in an augmented reality based display device.
  • the data acquired by the eyeball tracking camera 213, the ambient light sensor 218, and the distance sensor 219 are processed in an external device.
  • the processor on the augmented reality based display device is used to support the transmission and display of video signals, the processing of partial sensor data, and the transfer of remaining sensor data.
  • a high-performance processor and an image processor are provided on the motherboard to perform all operations in an augmented reality based display device.
  • The augmented reality based display device operates as a stand-alone system without needing to connect an external device.
  • The augmented reality based display device processes the data acquired by the sensors, adjusts the images displayed to the two eyes accordingly, and displays them on the display module 22 after rendering.
  • the processor on the augmented reality based display device is used for decoding processing and display of video signals and processing of sensor data.
  • the concave surface of the see-through light guiding element is plated with a reflective film.
  • the reflective surface of the see-through light guiding element coated with the reflective film has a reflectance of 20% to 80%.
  • The concave surface of the see-through light guiding element is plated with a polarizing reflective film, and the angle between the polarization direction of the polarizing reflective film and the polarization direction of the first light is greater than 70° and less than or equal to 90°. For example, when the polarization direction of the polarizing reflective film is perpendicular to the polarization direction of the first light, a reflectivity of approximately 100% is achieved. In addition, the second light, which contains the external image information, is unpolarized.
  • When the concave surface of the see-through light guiding element is plated with a polarizing reflective film and the second light passes through it, nearly 50% of the second light enters the user's eyes, so the user can still see the real scene of the outside world.
  • the convex surface of the see-through light guiding element is coated with an anti-reflection film.
  • The concave surface of the see-through light guiding element is provided with a pressure-sensitive reflective film. By changing the voltage applied to the pressure-sensitive reflective film, its reflectance can be adjusted between 0 and 100%. When the reflectance of the pressure-sensitive reflective film is 100%, the augmented reality based display device can realize the function of virtual reality.
  • A pressure-sensitive black sheet is provided on the other surface of the see-through light guiding element, opposite the concave surface; the light transmittance of the pressure-sensitive black sheet can be adjusted by changing the voltage applied to it.
  • The augmented reality based display device provided by the embodiment of the present application reflects the first light, which contains the left-eye and right-eye virtual image information, into the user's eyes via the concave surfaces of the two see-through light guiding elements, forming the visual experience of a 3D virtual scene in the user's brain, with a large viewing area.
  • A plurality of sensors are arranged on the augmented reality based display device. After the sensors perceive the surrounding environment, the perceived result can be reflected in the images shown in the display modules, providing a stronger sense of presence and a better user experience.
  • This embodiment provides an augmented reality based display method, including: combining a first light containing a virtual image with a second light containing a real-scene image of the external scene.
  • the real-life image of the acquired external scene includes a real-life image of an environment in which the user wearing the augmented reality-based display device is located.
  • For example, when the user is in a classroom, the acquired real-scene image includes images, formed by reflected light, of the students, desks, chairs, learning tools, and the like in the classroom; when the user is in an office, the acquired real-scene image includes images of objects such as desks, computers, keyboards, and mice.
  • Acquiring the virtual image includes acquiring the virtual image displayed by the display module 12 after processing by the processor.
  • The virtual image is formed from display data transmitted by a device connected to the augmented reality based display device; the virtual image includes a virtual display screen, a virtual mouse, a virtual keyboard, and the like.
  • the virtual display screen is used to display data transmitted by an external device, network data acquired through the Internet, or data stored in a local storage.
  • The two display modules 12 emit a first light containing the display data of the virtual image, which is combined with the acquired second light containing the external scene image information. The two kinds of light are combined by the see-through light guiding elements 13 on the augmented reality based display device and fused in the user's eyes; through processing by the human brain, the content of the display data of the virtual image is presented in three dimensions in front of the user's eyes. It can be understood that the augmented reality based display device projects the display data of the virtual image into the real-scene image within the user's field of view.
  • The first display mode is a display mode in which neither the relative angle nor the relative position between the virtual image and the real image is fixed.
  • The second display mode is a display mode in which both the relative angle and the relative position between the virtual image and the real image are fixed.
  • The third display mode is a display mode in which the relative angle between the virtual image and the real image is fixed while the relative position is not fixed.
  • For example, the user uses the augmented reality based display device to project a virtual screen and a virtual keyboard onto a table, facing the user and placed on the desktop in real space. When the user's head moves or rotates, the projected virtual screen and keyboard do not change their position in front of the user's eyes, while their position in real space moves or rotates accordingly. This display mode is called the "first display mode".
  • The processor combines the second light containing the real-scene image of the external scene with the first light containing the virtual image and displays the result in the first display mode.
  • The monocular camera 211 can use computer vision techniques to detect markers with known positions in the environment, helping the augmented reality based display device to localize itself.
  • the depth of field sensor 217 obtains depth of field information in the environment.
  • The augmented reality based display device may also use the binocular/multi-view camera 212 to obtain depth information for the acquired images in addition to the environment images themselves. The device then processes the data obtained by the monocular camera 211, the depth of field sensor 217, or the binocular/multi-view camera 212, and the processor uses computer vision techniques to build a 3D model of the surrounding environment and recognize the real environment in real time.
  • On this basis, the augmented reality based display device can analyze which nearby spaces are better suited for projecting display content such as the virtual screen and virtual keyboard.
  • The augmented reality based display device can also combine the data obtained by the gyroscope 214, the accelerometer 215, and the magnetometer 216 with the images and depth data obtained by the monocular camera 211, the depth of field sensor 217, or the binocular/multi-view camera 212 to calculate the position and attitude of the device in real space, that is, the relative position and angular relationship T between the coordinate systems F_H and F_I.
  • If the position and attitude of the projected digital content in the device coordinate system F_H are known, its position and angle in real space (F_I) can be obtained through T. Conversely, if the position and attitude of the projected content in real space are known, its position and attitude in the device coordinate system F_H can be calculated through the relationship T, and projected content such as a virtual screen is placed there.
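The relationship T between the device frame F_H and the real-space frame F_I can be illustrated with homogeneous transforms. The 4x4-matrix representation and the direction convention chosen for T below are assumptions for the sketch; the specification only defines T as the relative position and angle relationship between the two coordinate systems.

```python
import numpy as np

def make_T(R, t):
    """Homogeneous transform from a 3x3 rotation R and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def device_to_world(T, p_device):
    """Map a point from the device frame F_H to real space F_I."""
    return (T @ np.append(p_device, 1.0))[:3]

def world_to_device(T, p_world):
    """Map a point from real space F_I back into the device frame F_H."""
    return (np.linalg.inv(T) @ np.append(p_world, 1.0))[:3]

# A virtual screen anchored at a fixed point in real space: as the device
# pose T changes with head motion, the anchor's device-frame coordinates
# are recomputed each frame -- which is the "second display mode" anchoring.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
T = make_T(R, np.array([1.0, 2.0, 0.0]))       # current head pose in F_I
anchor_world = np.array([1.0, 3.0, 0.0])       # fixed anchor on the desktop
anchor_device = world_to_device(T, anchor_world)
```

In the first display mode the content's F_H coordinates would instead be held constant, so no per-frame transform through T is needed.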
  • the augmented reality based display device can implement the "second display mode".
  • For example, the user uses the augmented reality based display device to project a virtual screen and a virtual keyboard onto a table, facing the user and placed on the desktop in real space. In the second display mode, when the user's head moves or rotates, the position of the projected virtual keyboard and screen in real space does not change, giving the user the illusion that the screen and keyboard are real and placed on the desktop.
  • The augmented reality based display device can also realize the "third display mode" by using the gyroscope, accelerometer, and magnetometer to obtain the relative angle between the user's head and the environment. In this mode, the relative angle is fixed, but the relative position can move.
  • For example, the user uses the augmented reality based display device to project a virtual screen and a virtual keyboard onto a table, facing the user and placed on the desktop in real space. In the third display mode, when the user's head rotates, the relative angle of the projected virtual keyboard and screen in real space does not change; when the user moves, their relative position in real space changes, following the user's movement.
  • The relationships of the first display mode, the second display mode, and the third display mode to the real environment and the user's head are as follows: in the first display mode, the virtual image is fixed relative to the user's head and free relative to the real environment; in the second display mode, it is fixed in both angle and position relative to the real environment; in the third display mode, it is fixed in angle relative to the real environment but follows the user's position.
  • The "first display mode", "second display mode", or "third display mode" may be used for different virtual images, as determined by the system software or by the user.
  • the "first display mode”, the "second display mode” or the “third mode” is implemented by a two-dimensional code set in a live view image or other manually set auxiliary mark.
  • The two-dimensional code placed in the real scene is scanned and recognized by the monocular camera 211, the depth of field sensor 217, or the binocular/multi-view camera 212; the two-dimensional code contains information for turning on the first display mode, the second display mode, or the third display mode. After the information in the two-dimensional code is recognized, display proceeds in the mode corresponding to that information. For example, if the scanned two-dimensional code contains information for turning on the first display mode, display is performed in the first display mode; if it contains information for the second or third display mode, display is performed in the second or third display mode.
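The mode switching driven by scanned codes can be sketched as a simple payload-to-mode dispatch. The payload token strings below are a hypothetical convention invented for the example; the specification only says the code contains mode-enabling information.

```python
# Hypothetical QR payload convention (assumed, not from the specification):
MODE_TOKENS = {
    "AR_MODE_1": "first display mode",    # not fixed in angle or position
    "AR_MODE_2": "second display mode",   # fixed in angle and position
    "AR_MODE_3": "third display mode",    # fixed in angle only
}

def mode_from_qr(payload):
    """Return the display mode requested by a scanned QR payload, or None
    if the code carries no mode-switching information."""
    return MODE_TOKENS.get(payload.strip())
```

A code unrelated to mode switching (e.g. an ordinary URL) simply yields no mode change.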
  • Similarly, a manually set marker in the real scene can be scanned and recognized by the monocular camera 211, the depth of field sensor 217, or the binocular/multi-view camera 212; the artificial marker contains information for turning on the first, second, or third display mode. For example, if the marker contains information for turning on the first display mode, display is performed in the first display mode; if it contains information for the second or third display mode, display is performed in the second or third display mode.
  • The two-dimensional code or other artificial marker on a two-dimensional plane in the real scene can also be used by the augmented reality based display device to assist positioning when displaying in the second display mode: the shape and size of the two-dimensional code or artificial marker as captured by the monocular camera 211, the depth of field sensor 217, or the binocular/multi-view camera 212 are compared with its actual size and shape, and the relative position and angle between the marker and the camera are calculated. Since the position of the marker in the environment is fixed, the relative position and angle relationship T between the device and the environment can be calculated from this, thereby implementing the second display mode.
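The size comparison described above can be illustrated with the standard pinhole camera model: a marker of known physical width appears smaller in the image the farther away it is. The pinhole formula is a common simplification chosen for this sketch; the patent does not fix a particular camera model, and a full solution would recover the complete 6-DoF pose (e.g. via a perspective-n-point solver).

```python
import math

def marker_distance(real_width_m, pixel_width, focal_length_px):
    """Distance from camera to a planar marker under the pinhole model:
    distance = real_width * focal_length / apparent_width."""
    return real_width_m * focal_length_px / pixel_width

def marker_bearing(center_x_px, image_width_px, focal_length_px):
    """Horizontal angle (rad) of the marker center off the optical axis."""
    return math.atan2(center_x_px - image_width_px / 2.0, focal_length_px)

# A 10 cm marker that spans 100 px in a camera with a 1000 px focal length
# is 1 m away:
d = marker_distance(0.1, 100, 1000)
```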
  • The augmented reality based display device can track the motion of the user's gestures through the monocular camera 211, the depth of field sensor 217, or the binocular/multi-view camera 212, analyze the user's intention, and operate on the virtually displayed content.
  • For example, the monocular camera 211, the depth of field sensor 217, or the binocular/multi-view camera 212 tracks the position of the user's finger. When a click action is recognized, the operation command corresponding to the click is executed; when a drag gesture is recognized, the virtual screen as a whole, or an object in the virtual screen, is dragged; when a zoom gesture is recognized, the corresponding instruction is executed and the virtual screen, or an object in it, is scaled.
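The click, drag, and zoom operations above amount to a mapping from recognized gestures to commands on the virtual content. This is a minimal dispatch sketch; the gesture names and command vocabulary are illustrative assumptions, not part of the specification.

```python
def dispatch_gesture(gesture, target):
    """Map a recognized fingertip gesture to an operation on the virtual
    content (virtual screen, virtual keyboard, or an object in the screen)."""
    commands = {
        "click": f"execute command bound to {target}",
        "drag":  f"move {target} to follow the fingertip",
        "pinch": f"scale {target} by the pinch ratio",
    }
    return commands.get(gesture, f"ignore unrecognized gesture on {target}")
```

A real system would sit behind a gesture recognizer that emits these events from the tracked fingertip trajectory.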
  • The user wears the augmented reality based display device according to Embodiment 1 or 2 and connects it to a notebook computer, which transmits the video signal to the device through a cable. The cable can also carry data and audio signals and power the head-mounted display. A single cable (such as USB Type-C) or multiple cables (separate cables for video, audio, and data) may be used.
  • the augmented reality based display device acquires a video signal transmitted through the notebook computer and then displays on the display module 12 of the augmented reality based display device.
  • the virtual image includes: one or several virtual display screens and display content displayed on the display screen, the display content being the content transmitted by the notebook computer.
  • The two display modules 12 emit light containing the above virtual image, which is combined with the acquired light containing the real-scene image information of the external scene. The two kinds of light are combined by the see-through light guiding elements 13 on the augmented reality based display device and fused within the user's eyes; after processing by the human brain, the content of the virtual image is presented in three dimensions in front of the user's eyes.
  • The augmented reality based display device projects the virtual display screen and the content displayed on it into the real-scene image of the external scene within the user's field of view. For example, if there are a physical mouse and keyboard in the real scene within the user's field of vision, they can be connected to the notebook computer, so that the user can input information in the most familiar manner, thereby improving work efficiency.
  • The user wears the augmented reality based display device according to Embodiment 1 or 2 and connects it to a mobile phone or other mobile terminal, which transmits the video signal to the device through a cable. The cable can also carry data and audio signals and power the head-mounted display. A single cable (such as USB Type-C) or multiple cables (separate cables for video, audio, and data) may be used.
  • the augmented reality based display device acquires a video signal of a virtual image transmitted by the mobile phone or other mobile terminal, and then displays on the display module 12 of the augmented reality based display device.
  • the virtual image includes: one or several virtual display screens and display content displayed on the display screen, the display content being content transmitted by a mobile phone or other mobile terminal.
  • The two display modules 12 emit light containing the above virtual image, which is combined with the acquired light containing the real-scene image information of the external scene. The two kinds of light are combined by the see-through light guiding elements 13 on the augmented reality based display device and fused within the user's eyes; after processing by the human brain, the content of the virtual image is presented in three dimensions in front of the user's eyes.
  • The augmented reality based display device projects the virtual display screen and the content displayed on it into the real-scene image of the external scene within the user's field of view.
  • A portable mouse and keyboard are connected to the mobile phone or other mobile terminal via Bluetooth or another communication method. The real scene within the user's field of view then contains a physical mouse and keyboard; that is, the user wearing the augmented reality based display device can see the physical mouse and keyboard and input information in the most familiar manner, improving work efficiency.
  • The user wears the augmented reality based display device according to Embodiment 1 or 2 and connects it to a mobile phone or other mobile terminal, which transmits the video signal to the device through a cable. The augmented reality based display device acquires the video signal of the virtual image transmitted by the mobile phone or other mobile terminal and then displays it on the display module 12.
  • the virtual image includes: one or several virtual display screens, display content displayed on the display screen, and a virtual mouse and keyboard.
  • The position of the user's finger can be tracked by the monocular camera 211, the depth of field sensor 217, or the binocular/multi-view camera 212; after the finger's action of clicking the virtual mouse or keyboard is recognized, the operation instruction corresponding to the clicked key is executed, thereby achieving information input.
  • When there are multiple virtual display screens, their arrangement may be adjusted.
  • The position of the user's finger can be tracked by the monocular camera 211, the depth of field sensor 217, or the binocular/multi-view camera 212; after the finger's action of moving a display screen is recognized, the corresponding operation instruction is executed to move that display screen.
  • When the content projected by the augmented reality based display device carries sound, the sound can be played through external earphones or through a speaker.
  • The eyeball tracking camera 213 can also track the focus of the user's eyes and apply special processing to the specific part of the virtual object or virtual screen at which the user is gazing; for example, annotations and detailed information about the observed object are automatically displayed in the local area the user is observing.
  • Different from the prior art, the augmented reality based display method combines the external real-scene image with a virtual image. The virtual image can provide the user with a virtual display screen with a large display range or a virtual mouse and keyboard, and it can be used together with real physical screens, mice, keyboards, touch screens, buttons, and the like, offering a large field of view and privacy.


Abstract

A display method based on augmented reality, comprising: emitting first rays (121) comprising a virtual image, the virtual image being a received image transmitted from an external device; obtaining second rays (151) comprising a live view image of an external scene; and synthesizing the first rays (121) comprising the virtual image with the second rays (151) comprising the live view image of the external scene. Different from the prior art, this display method combines the live view image of the external scene with the virtual image. The virtual image can provide the user with a virtual display screen with a large display range or a virtual mouse and keyboard, and it can be used in cooperation with a real mouse, keyboard, touch screen, and the like. The method offers a relatively large viewing range and privacy.

Description

Display method and device based on augmented reality
This application claims priority to Chinese Patent Application No. 2017100791995, filed with the Chinese Patent Office on February 14, 2017 and entitled "Display Method and Device Based on Augmented Reality", the entire contents of which are incorporated herein by reference.
Technical field

The embodiments of the present application relate to the field of augmented reality technologies, and in particular to a display method and device based on augmented reality.
Background

A head mounted display is a new technology developed in recent years; its applications can be mainly divided into virtual reality and augmented reality.

Virtual reality:

Human 3D vision is achieved through the collaborative work of both eyes. When a person looks at an actual 3D object, the image seen by the left eye and the image seen by the right eye have subtle differences in perspective. The optic nerve of the human brain fuses the images from the two eyes to form 3D vision. A head-mounted display shows different images on a screen close to the eyes, so that the left eye sees only the left display area and the right eye sees only the right display area, which likewise simulates 3D vision. If, during this process, the head-mounted display also blocks light and images from the outside so that the eyes can only see the display screen, a virtual reality effect is produced.
Augmented reality:

The principle of augmented reality is to simulate virtual vision with a head-mounted display and superimpose it on the user's normal vision.

These two kinds of visual information complement each other, making many work and entertainment activities more convenient for the user. A typical implementation of an augmented reality head-mounted display is shown in the figure below. At present, augmented reality head-mounted displays have two implementations, optical see-through and video see-through; the main difference lies in the optical combiner.
Optical see-through:

The optical combiner of this type of head-mounted display is a half-transmissive, half-reflective mirror. Light from the real environment partially passes through the mirror, while virtual objects are projected by a display onto the mirror and reflected from its surface into the user's eyes, thereby combining the real and virtual worlds.
Video see-through:

The optical combination in this type of head-mounted display is achieved by a camera and a display screen. The camera captures the real environment; its video data is superimposed with virtual objects by computer processing and presented to the user on the display screen. The system structure is basically the same as that of a virtual reality head-mounted display; only a camera for capturing the environment and a software module for combining the real and virtual worlds need to be added.

Compared with these two implementations, the optical see-through head-mounted display provides a stronger sense of presence and a better user experience, because the user sees the real environment directly.

At present, the main devices for mobile office and entertainment include notebook computers, tablet computers, and mobile phones. Whether for office work or entertainment, these existing devices, limited by size and weight, cannot provide the large display offered by desktop monitors, televisions, or cinemas, which reduces office efficiency and the entertainment experience. Meanwhile, if a device with a larger screen is used in public places such as airports and stations, privacy during work or entertainment cannot be guaranteed.
发明内容Summary of the invention
本申请实施例主要解决的技术问题是提供一种基于增强现实的显示方法及设备,具有临场感、显示区域大且具有私密性。The technical problem to be solved by the embodiment of the present application is to provide a display method and device based on augmented reality, which has a sense of presence, a large display area, and privacy.
为解决上述技术问题,本申请实施例提供一种基于增强现实的显示方法,包括:发出包含虚拟图像的第一光线,所述虚拟图像为接收的外接设备传输过来的图像;获取包含外部场景的实景图像的第二光线;将所述包含虚拟图像的第一光线与包含外部场景的实景图像的第二光线进行合成。To solve the above technical problem, the embodiment of the present application provides a display method based on augmented reality, including: emitting a first ray including a virtual image, the virtual image being an image transmitted by a receiving external device; and acquiring an external scene a second ray of the live image; synthesizing the first ray comprising the virtual image with a second ray comprising a live view of the external scene.
Different from the prior art, the augmented-reality-based display method provided by this embodiment combines an external real-scene image with a virtual image. The virtual image can provide the user with a virtual display screen or a virtual mouse and keyboard covering a large display range, and it can be used together with a real physical screen, mouse, keyboard, touch screen, buttons, and so on, offering a large field of view as well as privacy.
Description of the Drawings
One or more embodiments are illustrated by way of example in the corresponding figures of the accompanying drawings. These illustrative descriptions do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.
FIG. 1a is a schematic structural diagram of an augmented-reality-based display device according to Embodiment 1 of the present application;

FIG. 1b is a schematic diagram of the see-through light guiding elements of FIG. 1a mounted on the head-worn frame;

FIG. 1c is a first graph of the relationship between side viewing angle and display brightness for the display module of FIG. 1a;

FIG. 1d is a second graph of the relationship between side viewing angle and display brightness for the display module of FIG. 1a;

FIG. 1e is a third graph of the relationship between side viewing angle and display brightness for the display module of FIG. 1a;

FIG. 2a is a schematic diagram of the positional relationship between the display modules and the user's face when the augmented-reality-based display device of FIG. 1a is worn;

FIG. 2b is a schematic diagram of rotating the display module of FIG. 1a;

FIG. 3 is a schematic diagram of the imaging principle of the augmented-reality-based display device of FIG. 1a;

FIG. 4 is a schematic diagram of the augmented-reality-based display device of FIG. 1a provided with diopter correction lenses;

FIG. 5 is a schematic diagram of the relationship between the diagonal field-of-view region of the augmented-reality-based display device of FIG. 1a and the distance from the farthest end of the head-worn frame to the foremost end of the user's head;

FIG. 6 is a schematic diagram of the augmented-reality-based display device of FIG. 1a operating while connected to an external device;

FIG. 7 is a schematic structural diagram of an augmented-reality-based display device according to Embodiment 2 of the present application;

FIG. 8 is a schematic diagram of the augmented-reality-based display device of FIG. 7 operating while connected to an external device;

FIG. 9 is another schematic diagram of the augmented-reality-based display device of FIG. 7 operating while connected to an external device;

FIG. 10 is a schematic diagram of the augmented-reality-based display device of FIG. 7 in operation;

FIG. 11 is a schematic diagram of a first display mode in an augmented-reality-based display method according to a third embodiment of the present application;

FIG. 12 is a schematic diagram of a second display mode in the augmented-reality-based display method according to the third embodiment of the present application;

FIG. 13 is a diagram of a first application example of the augmented-reality-based display method according to the third embodiment of the present application;

FIG. 14 is a schematic diagram of a second application example of the augmented-reality-based display method according to the third embodiment of the present application;

FIG. 15 is a schematic diagram of a third application example of the augmented-reality-based display method according to the third embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
In addition, the technical features involved in the various embodiments of the present application described below may be combined with each other as long as they do not conflict.
Embodiment 1
Referring to FIG. 1a, an embodiment of the present application provides an augmented-reality-based display device with a total weight of less than 350 grams, including: a head-worn frame 11, two display modules 12, and two see-through light guiding elements 13. Each see-through light guiding element 13 is a partially transmissive, partially reflective optical combining device.
The display modules 12 and the see-through light guiding elements 13 are all disposed on the head-worn frame 11, which holds them in place. Each display module 12 is disposed on the upper side of a see-through light guiding element 13, and light emitted by the display module 12 is reflected after reaching the see-through light guiding element 13. Optionally, the display module 12 may instead be located to the side of the see-through light guiding element 13.
The augmented-reality-based display device further includes a main board 17, disposed on the head-worn frame 11 between the two display modules 12. A processor is provided on the main board 17 and is configured to process the virtual image signal and display the virtual image information on the display modules 12.
In the embodiment of the present application, the head-worn frame 11 is worn on the user's head, and each see-through light guiding element 13 has a concave surface facing the user's eyes. A first light ray reflected by the concave surface of one see-through light guiding element 13 enters the user's left eye, and another first light ray reflected by the concave surface of the other see-through light guiding element 13 enters the user's right eye, forming the perception of a 3D virtual scene in the user's mind. The first light rays are emitted by the display modules 12 and contain the virtual image information for the left and right eyes.
Referring to FIG. 1b, the two see-through light guiding elements 13 are disposed on the head-worn frame 11, each independently embedded in it. Optionally, two regions corresponding to the user's left and right eyes may be formed on a single piece of the raw material used to fabricate the see-through light guiding elements, each region having the same shape and size as one of the independently mounted see-through light guiding elements 13 described above; the final effect is one large see-through light guiding element carrying two regions corresponding to the user's left and right eyes. This can be understood as machining, from the raw material of one large see-through light guiding element, two regions identical in shape and size to the independently mounted see-through light guiding elements 13; that is, the two see-through light guiding elements 13 are integrally formed. The see-through light guiding element provided with the regions corresponding to the user's left and right eyes is embedded in the head-worn frame 11.
It should be noted that the display module 12 may be detachably mounted on the head-worn frame 11, for example when the display module is a smart display terminal such as a mobile phone or tablet; alternatively, the display module may be fixedly mounted on the head-worn frame, for example in an integrated design of display module and head-worn frame.
Two display modules 12 may be mounted on the head-worn frame 11, one for each of the user's left and right eyes; for example, one display module 12 emits a first light ray containing left-eye virtual image information, and the other display module 12 emits another first light ray containing right-eye virtual image information. The two display modules 12 may be located above the two see-through light guiding elements 13 in one-to-one correspondence, so that when the augmented-reality-based display device is worn on the user's head, the two display modules 12 are located above the user's left and right eyes, respectively. The display modules 12 may also be located to the sides of the see-through light guiding elements, that is, with the two see-through light guiding elements between the two display modules, so that when the device is worn, the two display modules are located to the sides of the user's left and right eyes, respectively.
A single display module 12 may also be mounted on the head-worn frame 11, with two display areas: one display area emits the first light ray containing left-eye virtual image information, and the other emits another first light ray containing right-eye virtual image information.
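The single-module arrangement above amounts to addressing two regions of one frame buffer, one per eye. A minimal sketch, assuming a side-by-side buffer layout (the layout itself is an illustrative assumption, not specified in the text):

```python
import numpy as np

def split_display_areas(frame_buffer):
    """Split one display module's buffer into left-eye and right-eye areas.

    frame_buffer: shape (H, W, C); the left half is assumed to carry the
    left-eye virtual image, the right half the right-eye virtual image.
    """
    h, w, _ = frame_buffer.shape
    left_area = frame_buffer[:, : w // 2]
    right_area = frame_buffer[:, w // 2 :]
    return left_area, right_area

# Toy buffer: left-eye content at 0.2, right-eye content at 0.8.
buf = np.zeros((4, 8, 3))
buf[:, :4] = 0.2
buf[:, 4:] = 0.8
left, right = split_display_areas(buf)
```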
The display module includes, but is not limited to, LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and LCOS (Liquid Crystal On Silicon) displays.
Referring to FIG. 1c, the horizontal axis represents the side viewing angle and the vertical axis represents the display brightness. When the display module 12 is an LCD, its brightness varies with the viewer's angle. For an ordinary LCD, the side viewing angle θ at which the display brightness falls to 50% is generally large.
When an LCD is used in an augmented reality display system, a small side viewing angle is preferable, so that the brightness of the display module 12 is concentrated in the angular region near the center. Because an augmented reality display system mainly uses the angular region near the center, the first and second light rays projected into the user's eyes are then relatively bright. Referring to FIG. 1d, for an LCD used in an augmented reality display system, the side viewing angle θ at which the brightness of the first and second light rays falls to 50% is generally small. Moreover, the brightness distribution of the first and second light rays emitted by such an LCD is symmetric about the 0-degree side viewing angle, and the side viewing angle is less than 60 degrees. That is, when the user's line of sight is perpendicular to the display module 12, the display brightness of the first and second light rays is at its maximum; as the user's line of sight shifts to either side, the display brightness gradually decreases, falling to 0 when the side viewing angle reaches 60 degrees.
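The curve described for FIG. 1d — maximum brightness at 0 degrees, a symmetric falloff, and zero brightness at a 60-degree side viewing angle — can be modelled with a simple symmetric profile. The cosine-power falloff below is an assumed illustration; the text specifies only the qualitative shape, not the actual curve.

```python
import math

CUTOFF_DEG = 60.0  # brightness is zero at and beyond this side viewing angle

def relative_brightness(angle_deg, exponent=4):
    """Relative display brightness (1.0 = maximum, at 0 degrees).

    Symmetric about 0 degrees and zero outside +/-CUTOFF_DEG, matching the
    qualitative curve of FIG. 1d; the cosine exponent is an assumption.
    """
    a = abs(angle_deg)
    if a >= CUTOFF_DEG:
        return 0.0
    # Rescale so the cosine argument reaches 90 degrees at the cutoff angle.
    return math.cos(math.radians(a * 90.0 / CUTOFF_DEG)) ** exponent
```

The asymmetric variant of FIG. 1e could be modelled the same way with the peak shifted away from 0 degrees.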
Optionally, referring to FIG. 1e, the brightness distribution of the first and second light rays emitted by an LCD used in an augmented reality display system need not be symmetric about the 0-degree side viewing angle, and the side viewing angle of maximum brightness need not be 0 degrees.
Referring to FIG. 2a, the two display modules 12 are located above the two see-through light guiding elements 13 in one-to-one correspondence. When the user wears the augmented-reality-based display device, each display module 12 forms an angle a with the frontal plane of the user's head, where the angle a is between 0 and 180 degrees, preferably obtuse. Meanwhile, the projection of the display module 12 on the horizontal plane is perpendicular to the frontal plane.
Referring to FIG. 2b, in some examples, the see-through light guiding element 13 can be rotated by an angle b about an axis perpendicular to the horizontal plane, where the angle b is between 0 and 180 degrees, preferably between 0 and 90 degrees. Meanwhile, the spacing between the see-through light guiding elements 13 corresponding to the left and right eyes can be adjusted through a mechanical structure on the head-worn frame 11 to suit the interpupillary distances of different users, ensuring comfort and imaging quality during use. The farthest distance between the edges of the two see-through light guiding elements 13 is less than 150 mm; that is, the distance from the left edge of the see-through light guiding element 13 for the left eye to the right edge of the see-through light guiding element 13 for the right eye is less than 150 mm. Correspondingly, the display modules 12 are connected by a mechanical structure, and the distance between them can also be adjusted; alternatively, the same effect can be achieved by adjusting the position of the displayed content on the display modules 12.
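The last alternative above — matching a user's interpupillary distance by moving the displayed content rather than the hardware — can be sketched as a horizontal translation of each eye's image. This is a hypothetical illustration; the pixel pitch value and the sign convention are assumptions not given in the text.

```python
import numpy as np

def shift_for_ipd(image, delta_mm, pixel_pitch_mm=0.05):
    """Shift an eye's image horizontally to adjust for interpupillary distance.

    image: (H, W, C) array; delta_mm: lateral shift of the displayed content
    in millimetres (positive = shift toward higher column indices, an assumed
    convention); pixel_pitch_mm: assumed physical width of one display pixel.
    Vacated columns are filled with zeros (black).
    """
    shift_px = int(round(delta_mm / pixel_pitch_mm))
    shifted = np.zeros_like(image)
    if shift_px > 0:
        shifted[:, shift_px:] = image[:, :-shift_px]
    elif shift_px < 0:
        shifted[:, :shift_px] = image[:, -shift_px:]
    else:
        shifted = image.copy()
    return shifted

img = np.arange(12, dtype=float).reshape(2, 6, 1)
out = shift_for_ipd(img, delta_mm=0.1)  # 0.1 mm -> 2 pixels at 0.05 mm pitch
```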
The head-worn frame 11 may be an eyeglass-style frame structure designed to rest on the user's ears and the bridge of the nose, provided with nose pads 111 and temples 112 by which it is secured to the user's head. The temples 112 are foldable structures, with the nose pads 111 resting on the bridge of the user's nose and the temples 112 resting on the user's ears. Further, the temples 112 may be connected to each other by an elastic band; when the device is worn, the band tightens around the temples and helps secure the frame to the head.
Optionally, the nose pads 111 and temples 112 are telescopic mechanisms, so that the height of the nose pads 111 and the extended length of the temples 112 can be adjusted separately. Likewise, the nose pads 111 and temples 112 may be detachable structures, so that either can be replaced after removal.
Optionally, the head-worn frame 11 may include nose pads and an elastic strap, by which it is secured to the user's head; or it may include only an elastic strap, by which it is secured to the user's head. Optionally, the head-worn frame 11 may also be a helmet-style frame structure worn over the top of the user's head and the bridge of the nose. In the embodiments of the present application, since the main functions of the head-worn frame 11 are to be worn on the user's head and to support optical and electrical components such as the display modules 12 and the see-through light guiding elements 13, the head-worn frame is not limited to the above forms; provided these main functions are fulfilled, those skilled in the art can modify the head-worn frame as needed for practical applications.
Referring to FIG. 3, a display module 12 emits a first light ray 121 containing left-eye virtual image information; the first light ray 121, reflected by the concave surface 131 of a see-through light guiding element 13, enters the user's left eye 14. Similarly, a display module emits another first light ray containing right-eye virtual image information, which is reflected by the concave surface of the other see-through light guiding element and enters the user's right eye, forming the perception of a 3D virtual scene in the user's brain. In addition, unlike Google Glass, which places a small display screen directly in front of the user's right eye and thus has a small visual area, in the embodiments of the present application the two see-through light guiding elements reflect more of the first light rays emitted by the display modules into the user's two eyes, giving a larger visual area.
In the embodiments of the present application, when the augmented-reality-based display device implements the augmented reality function, each see-through light guiding element 13 also has a convex surface opposite the concave surface; second light rays containing external image information, transmitted through the convex and concave surfaces of the see-through light guiding elements 13, enter the user's eyes to form the perception of a mixed 3D virtual scene and real scene. Referring again to FIG. 1a, one see-through light guiding element 13 also has a convex surface 132 opposite the concave surface 131; a second light ray 151 containing external image information, transmitted through the convex surface 132 and concave surface 131 of the see-through light guiding element 13, enters the user's left eye 14. Similarly, the other see-through light guiding element also has a convex surface opposite its concave surface, and a second light ray containing external image information, transmitted through its convex and concave surfaces, enters the user's right eye. The user can thus see the real external scene, forming the perception of a mixed 3D virtual scene and real scene.
Referring to FIG. 4, optionally, a diopter correction lens 16 is disposed between the human eye and the see-through light guiding element 13, the diopter correction lens 16 being set perpendicular to the horizontal plane. Optionally, the plane of the diopter correction lens may instead form an angle of 30 to 90 degrees with the horizontal plane. Optionally, diopter correction lenses of any different powers may be fitted. The display module 12 emits the first light ray 121 containing left-eye virtual image information; the first light ray 121 reflected by the concave surface 131 of the see-through light guiding element 13, and the second light ray 151 containing external image information transmitted through the convex surface 132 and concave surface 131 of the see-through light guiding element 13, pass through the diopter correction lens 16 before entering the user's left eye 14. If the diopter correction lens 16 is a concave lens, it diverges the first light ray 121 and second light ray 151 passing through it, moving their focal point in the left eye 14 backward. Alternatively, the diopter correction lens 16 may be a convex lens, which converges the first light ray 121 and second light ray 151 passing through it, moving their focal point in the left eye 14 forward.
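The behaviour of the diopter correction lens 16 can be illustrated with thin-lens arithmetic: a lens of power P diopters has focal length f = 1/P metres; a concave (negative-power) lens diverges rays and moves the focal point backward, while a convex (positive-power) lens converges them and moves it forward, and thin lenses in contact add their powers. The specific powers below are illustrative, not values from this application.

```python
def focal_length_m(power_diopters):
    """Thin-lens focal length in metres from power in diopters (P = 1/f)."""
    if power_diopters == 0:
        raise ValueError("a zero-power lens has no finite focal length")
    return 1.0 / power_diopters

def combined_power(*powers):
    """Thin lenses in contact: their powers add (thin-lens approximation)."""
    return sum(powers)

# A -2.00 D concave lens for a myopic user: negative focal length, diverging.
f = focal_length_m(-2.0)          # -0.5 m
# Stacking it with a +0.5 D element leaves a net power of -1.5 D.
net = combined_power(-2.0, 0.5)
```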
Similarly, the display module emits another first light ray containing right-eye virtual image information; the other first light ray reflected by the concave surface of the other see-through light guiding element, and the second light ray containing external image information transmitted through the convex and concave surfaces of that see-through light guiding element, also pass through a diopter correction lens before entering the user's right eye.
Referring to FIG. 5, after the augmented-reality-based display device is worn on the user's head, the user's eyeball forms the apex of a diagonal field-of-view region bounded by the two edges of the virtual display region of the virtual image seen through the see-through light guiding element 13. The distance from the farthest end of the head-worn frame to its point of contact with the foremost part of the head is c, and this distance can be adjusted as needed. The angular size of the diagonal field-of-view region is inversely proportional to the distance from the farthest end of the head-worn frame 11 to its point of contact with the foremost part of the head. Preferably, while keeping the diagonal field-of-view region larger than 55 degrees, the distance from the farthest end of the head-worn frame to its point of contact with the foremost part of the head is less than 80 mm.
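The stated inverse relationship between the diagonal field of view and the distance follows from basic trigonometry: a flat region of diagonal size s viewed from distance d subtends a diagonal FOV of 2·atan(s / 2d), which shrinks as d grows. In the sketch below, the 55-degree and 80 mm figures come from the text, while the 90 mm display-diagonal value is an assumption for illustration.

```python
import math

def diagonal_fov_deg(diagonal_mm, distance_mm):
    """Diagonal field of view, in degrees, of a flat region of the given
    diagonal size seen from the given distance (apex at the eye)."""
    return math.degrees(2 * math.atan(diagonal_mm / (2 * distance_mm)))

# FOV decreases monotonically as the viewing distance grows, as stated:
near = diagonal_fov_deg(90.0, 80.0)    # at the preferred < 80 mm bound
far = diagonal_fov_deg(90.0, 120.0)    # same region, farther away
```

With these assumed numbers, `near` comes out above the 55-degree threshold, consistent with the preference stated above.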
Referring to FIG. 6, the two display modules 12 are connected to the main board 17 by cables.
The main board 17 is also provided with a video interface, a power interface, a communication chip, and a memory.
The video interface is used to connect to a computer, mobile phone, or other device to receive video signals. The video interface may be HDMI, DisplayPort, Thunderbolt, USB Type-C, Micro USB, MHL (Mobile High-Definition Link), or a similar interface.
The processor is configured to process data, primarily to decode the transmitted video signal and display it on the display modules 12.
The power interface is used for an external power supply or battery power. The power interface includes a USB interface or another interface.
The communication chip is configured to exchange data with the outside world through communication protocols, specifically by connecting to the Internet via WiFi, WCDMA, TD-LTE, or other communication protocols and then acquiring data or connecting to other augmented-reality-based display devices over the Internet; or by connecting directly to other augmented-reality-based display devices through a communication protocol.
The memory is used to store data, mainly the display data shown on the display modules 12.
The main board 17 may also be provided with an earphone jack, a sound card chip, or another sound-generating device. The earphone jack is used to connect earphones and transmit audio signals to them. The sound card chip is used to decode sound signals. A speaker may also be provided on the head-worn frame 11 of the augmented-reality-based display device to convert the sound signals decoded by the sound card chip into audible sound.
When the augmented-reality-based display device includes only the head-worn frame 11, the two display modules 12, the two see-through light guiding elements 13, and the main board 17 as described above, all 3D virtual scene rendering and the generation of the images for both eyes are performed in an external device connected to the augmented-reality-based display device. The external device includes a computer, mobile phone, tablet, or the like.
Specifically, the augmented-reality-based display device receives the video signal of the external device through the video interface, decodes it, and displays it on the display modules 12. Meanwhile, interaction with the user is carried out through application software on the external device, such as a computer, mobile phone, or tablet; the user can interact with the augmented-reality-based display device using the mouse, keyboard, touch pad, or buttons of the external device.
Application examples of this basic configuration include, but are not limited to, a large-screen portable display. The augmented-reality-based display device can project a display screen at a fixed position within the user's field of view. The user adjusts the size, position, and other properties of the projected screen through software on the device connected to the augmented-reality-based display device.
With the augmented-reality-based display device provided by the embodiments of the present application, the concave surfaces of the two see-through light guiding elements reflect more of the first light rays containing the left-eye and right-eye virtual image information into the user's two eyes, forming the perception of a 3D virtual scene in the user's brain with a large visual area.
Embodiment 2
Referring to FIG. 7, on the basis of the augmented-reality-based display device provided in Embodiment 1, a plurality of sensors are added to sense the surrounding environment.
This embodiment provides an augmented-reality-based display device with a total weight of less than 350 grams, including: a head-worn frame 21, two display modules 22, two see-through light guiding elements 23, and a main board 24.
The display modules 22, the see-through light guiding elements 23, and the main board 24 are all disposed on the head-worn frame 21, which holds them in place. Each display module 22 is disposed on the upper side of a see-through light guiding element 23, and light emitted by the display module 22 is reflected after reaching the see-through light guiding element 23. The main board 24 is located between the two display modules 22 and is provided with a processor, which processes the virtual image signal and displays the virtual image information on the display modules 22.
The head-worn frame 21, the two display modules 22, the two see-through light guiding elements 23, and the main board 24 are identical in function, structure, and positional relationship to the head-worn frame 11, the two display modules 12, the two see-through light guiding elements 13, and the main board 17 described in Embodiment 1, and are not described again here.
Likewise, a diopter correction lens is disposed between the human eye and the see-through light guiding element 23, set perpendicular to the horizontal plane. Optionally, diopter correction lenses of any different powers may be fitted.
The head-worn frame 21 is further provided with a monocular camera 211, a binocular/multi-view camera 212, an eye-tracking camera 213, a gyroscope 214, an accelerometer 215, a magnetometer 216, a depth-of-field sensor 217, an ambient light sensor 218, and/or a distance sensor 219.
The monocular camera 211, binocular/multi-view camera 212, eye-tracking camera 213, gyroscope 214, accelerometer 215, magnetometer 216, depth-of-field sensor 217, ambient light sensor 218, and/or distance sensor 219 are all electrically connected to the main board 24.
Specifically, the monocular camera 211 is a color monocular camera placed at the front of the head frame 21. When the user wears the augmented-reality display device, the monocular camera 211 faces away from the user's face and can be used to take photographs. Further, computer vision techniques may be applied to images from this camera to detect markers at known positions in the environment, helping the augmented-reality display device localize itself.
The monocular camera 211 may also be a high-resolution camera for taking photos or recording video. Recorded video may further be overlaid, in software, with the virtual objects seen by the user, reproducing what the user sees through the augmented-reality display device.
The binocular/multi-view camera 212 may be a monochrome or color camera disposed at the front or side of the head frame 21, on one side of, on both sides of, or around the monocular camera 211. Further, the binocular/multi-view camera 212 may be fitted with an infrared filter. With a binocular camera, depth information can be obtained in addition to the environment image; with a multi-view camera, the field of view can be extended further, yielding more environment image and depth information. The environment images and distance information captured by the binocular/multi-view camera 212 can be used to: (1) fuse with the data of the gyroscope 214, the accelerometer 215, and the magnetometer 216 to compute the pose of the augmented-reality display device; (2) capture user gestures, palm prints, and the like for human-computer interaction.
Optionally, each lens of the above monocular camera or binocular/multi-view camera may be an RGB camera, a monochrome camera, or an infrared camera.
The eye tracking camera 213 is disposed on one side of the see-through light guiding element 23; when the user wears the augmented-reality display device, the eye tracking camera 213 faces toward the user's face. The eye tracking camera 213 tracks the focus of the human eye, following and applying special processing to the virtual object, or the specific part of the virtual screen, that the eye is looking at. For example, specific information about an object may be displayed automatically next to the object the eye is fixated on. In addition, a high-resolution virtual object image may be rendered for the region the eye is fixated on while only low-resolution images are rendered elsewhere; this effectively reduces the image rendering workload without degrading the user experience.
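The gaze-contingent rendering just described (full resolution near the fixation point, reduced resolution in the periphery) can be sketched as a per-region resolution choice. A minimal illustration, assuming a hypothetical pixel grid and foveal radius; the patent does not specify these parameters:

```python
def render_resolution(px, py, gaze_x, gaze_y, fovea_radius=200):
    """Return a rendering scale factor for the pixel block at (px, py):
    1.0 (full resolution) inside the foveal region around the gaze
    point reported by the eye tracking camera, 0.25 (quarter
    resolution) outside it. The radius and scale values are illustrative."""
    dist2 = (px - gaze_x) ** 2 + (py - gaze_y) ** 2
    return 1.0 if dist2 <= fovea_radius ** 2 else 0.25

# Blocks near the gaze point render at full resolution;
# peripheral blocks render at reduced resolution.
print(render_resolution(960, 540, 1000, 520))  # near gaze -> 1.0
print(render_resolution(100, 100, 1000, 520))  # periphery -> 0.25
```

In a real renderer the same test would select between mipmap levels or render-target resolutions rather than returning a bare scale factor.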
The gyroscope 214, the accelerometer 215, and the magnetometer 216 are disposed between the two display modules 22. By fusing their data, the relative attitude between the user's head and the system's initial position can be obtained. The raw data of these sensors can further be fused with the data of the binocular/multi-view camera 212 to obtain the position and attitude of the augmented-reality display device in a fixed environment.
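A common way to fuse a gyroscope with a gravity-referenced sensor such as the accelerometer is a complementary filter: the gyroscope rate is accurate over short intervals but drifts, while the accelerometer-derived angle is noisy but drift-free. A single-axis sketch (the patent does not name a specific fusion algorithm, so this is one representative choice):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step of a complementary filter: integrate the gyroscope
    rate (good short-term) and blend in the accelerometer-derived
    angle (good long-term) with weight 1 - alpha."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# With a stationary device (zero rotation rate), the estimate converges
# toward the accelerometer angle over repeated steps, cancelling drift.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```

The magnetometer plays the analogous drift-correcting role for the yaw axis, which gravity cannot observe.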
The depth sensor 217 is disposed at the front of the head frame 21 and can directly obtain depth information of the environment. Compared with the binocular/multi-view camera 212, the depth sensor obtains more accurate, higher-resolution depth data. Similarly, these data can be used to: (1) fuse with the data of the gyroscope 214, the accelerometer 215, and the magnetometer 216 to compute the pose of the augmented-reality display device; (2) capture user gestures, palm prints, and the like for human-computer interaction; (3) detect three-dimensional information of objects around the user.
The ambient light sensor 218 is disposed on the head frame 21 and monitors the intensity of ambient light in real time. The augmented-reality display device adjusts the brightness of the display modules 22 in real time according to changes in ambient light, so that the display remains consistent under different lighting conditions.
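One simple realization of this brightness adjustment is a clamped linear mapping from measured illuminance to panel brightness. All thresholds below are hypothetical; the patent only states that brightness tracks ambient light:

```python
def display_brightness(ambient_lux, min_nits=50, max_nits=500,
                       lux_low=10, lux_high=1000):
    """Map ambient illuminance (lux) to display brightness (nits):
    clamp to min_nits below lux_low, to max_nits above lux_high,
    and interpolate linearly in between. Values are illustrative."""
    if ambient_lux <= lux_low:
        return min_nits
    if ambient_lux >= lux_high:
        return max_nits
    t = (ambient_lux - lux_low) / (lux_high - lux_low)
    return min_nits + t * (max_nits - min_nits)

print(display_brightness(5))     # dark room -> minimum brightness
print(display_brightness(2000))  # bright daylight -> maximum brightness
```

A production implementation would typically also low-pass filter the sensor reading so that momentary shadows do not make the display flicker.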
The distance sensor 219 is disposed where the augmented-reality display device contacts the user's face and detects whether the device is being worn on the user's head. If the user takes the device off, power can be saved by turning off the display modules 22, the processor, and so on.
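A wear-detection power policy usually debounces the proximity reading so a momentary sensor glitch does not blank the display. A minimal sketch under that assumption (the debounce threshold is not specified in the patent):

```python
def update_power_state(is_worn_samples, threshold=3):
    """Decide the display power state from a sequence of boolean
    proximity-sensor samples, requiring `threshold` consecutive
    'not worn' readings at the end before powering the display down."""
    consecutive_off = 0
    for worn in is_worn_samples:
        consecutive_off = 0 if worn else consecutive_off + 1
    return "display_off" if consecutive_off >= threshold else "display_on"
```

The same state would gate the processor's sleep mode in addition to the display modules.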
Optionally, the augmented-reality display device further includes an infrared/near-infrared LED, electrically connected to the main board 24, which serves as a light source for the binocular/multi-view camera 212. Specifically, the infrared/near-infrared LED emits infrared light; when this light reaches an object imaged by the binocular/multi-view camera 212, the object reflects it back, and the photosensitive element of the camera 212 receives the reflected infrared light and converts it into an electrical signal, after which imaging processing is performed.
When performing human-computer interaction, the augmented-reality display device supports the following operations:
(1) The device can project a display screen at a fixed position within the user's field of view, and the user can adjust the size, position, and other properties of the projected screen through the sensors on the device.
(2) Gesture and palm-print recognition can be performed through the various sensors for human-computer interaction.
(3) Eye tracking can be used to infer the user's intent and apply corresponding processing to the virtual object, or the specific part of the virtual screen, that the eye is observing.
(4) Physical or touch buttons, joysticks, and the like can be added to the frame for human-computer interaction.
(5) A remote control with buttons, a joystick, a touchpad, and the like can be connected to the device, by wire or wirelessly, as a human-computer interface.
(6) By adding an audio decoding and power amplifier chip to the main board and integrating an earphone jack, earphones, or a speaker together with a microphone, the user can interact with the device by voice.
Referring to Figure 8, a video interface and a processor are provided on the main board.
When the augmented-reality display device includes the head frame 21, the two display modules 22, the two see-through light guiding elements 23, the main board 24, and the sensors described above, all 3D virtual scene rendering, generation of the images for both eyes, and processing of the data acquired by the sensors can be performed in an external device connected to the display device. The external device may be a computer, a mobile phone, a tablet computer, or the like.
Specifically, the augmented-reality display device receives the video signal of the external device through the video interface, decodes it, and shows it on the display modules 22. The external device receives the data acquired by the sensors on the display device, processes it, adjusts the images shown to both eyes accordingly, and reflects the result in the images on the display modules 22. The processor on the display device then serves only to support the transmission and display of video signals and the relaying of sensor data.
Referring to Figure 9, a processor with stronger computing power is provided on the main board, so that some or all of the computer vision algorithms are performed within the augmented-reality display device itself.
Specifically, the augmented-reality display device receives the video signal of the external device through the video interface, decodes it, and shows it on the display modules 22. The external device receives the data acquired by some of the sensors on the display device, processes it, adjusts the images shown to both eyes accordingly, and reflects the result in the images on the display modules 22. The data acquired by the remaining sensors is processed on the display device itself. For example, data acquired by the monocular camera 211, the binocular/multi-view camera 212, the gyroscope 214, the accelerometer 215, the magnetometer 216, and the depth sensor 217 is processed in the display device, while data acquired by the eye tracking camera 213, the ambient light sensor 218, and the distance sensor 219 is processed in the external device. The processor on the display device supports the transmission and display of video signals, the processing of part of the sensor data, and the relaying of the remaining sensor data.
Referring to Figure 10, a high-performance processor and a graphics processor are provided on the main board, and all computation is performed within the augmented-reality display device. In this mode, the augmented-reality display operates as a stand-alone system without connecting any external device.
Specifically, the augmented-reality display device processes the data acquired by the sensors, adjusts the images shown to both eyes, and renders them on the display modules 22. The processor on the display device handles the decoding and display of video signals as well as the processing of sensor data.
In practical augmented-reality applications of the display devices described in Embodiments 1 and 2, to increase the reflectivity of the concave surface of the see-through light guiding element to the first light emitted by the display module, the concave surface may, for example, be coated with a reflective film; preferably, the reflectivity of the coated concave surface is 20%-80%. As another example, if the first light is linearly polarized, the concave surface of the see-through light guiding element may be coated with a polarizing reflective film whose polarization direction forms an angle greater than 70° and at most 90° with the polarization direction of the first light. For instance, when the polarization direction of the polarizing reflective film is perpendicular to that of the first light, a reflectivity of nearly 100% is achieved. In addition, since the second light carrying the external image information is unpolarized, when the concave surface is coated with a polarizing reflective film, nearly 50% of the second light still passes through the film into the user's eyes, so the user can still see the real external scene. To better admit the second light carrying the external image information into the user's eyes, the convex surface of the see-through light guiding element is coated with an anti-reflection film.
In practical applications of the display devices described in Embodiments 1 and 2, to make the reflectivity of the concave surface of the see-through light guiding element to the first light emitted by the display module controllable, the concave surface is provided with a pressure-sensitive reflective film. By changing the voltage applied to the pressure-sensitive reflective film, its reflectivity can be adjusted between 0 and 100%; when the reflectivity reaches 100%, the augmented-reality display device can function as a virtual-reality device.
To make the transmittance, to the second light carrying the external image information, of the surface of the see-through light guiding element opposite the concave surface controllable, that surface is provided with a pressure-sensitive black film. By changing the voltage applied to the pressure-sensitive black film, its transmittance can be adjusted.
The augmented-reality display device provided by the embodiments of the present application uses the concave surfaces of the two see-through light guiding elements to reflect more of the first light, carrying the left-eye and right-eye virtual image information respectively, into the user's two eyes, forming the visual perception of a 3D virtual scene in the user's brain over a large visual area. At the same time, multiple sensors are arranged on the device; after the sensors perceive the surrounding environment, the perception results can be reflected in the images shown on the display modules, giving a stronger sense of presence and a better user experience.
Embodiment 3
This embodiment provides an augmented-reality display method, including:
emitting a first light containing a virtual image, the virtual image being an image transmitted from a connected external device;
acquiring a second light containing a real-scene image of the external scene; and
combining the first light containing the virtual image with the second light containing the real-scene image of the external scene.
The acquired real-scene image of the external scene includes a real-scene image of the environment in which the wearer of the augmented-reality display device is located. For example, if the user is in a classroom, the acquired real-scene image includes the images, formed by reflected light, of the students, desks and chairs, learning tools, and other objects in the classroom; if the user is in an office, the acquired real-scene image includes the images, formed by reflected light, of desks, computers, keyboards, mice, and other objects.
Acquiring the virtual image includes acquiring the virtual image shown on the display modules 12 after processing by the processor. The virtual image is a virtual image of the display data transmitted by a device connected to the augmented-reality display device, and includes a virtual display screen, a virtual mouse, a virtual keyboard, and the like. The virtual display screen is used to show data transmitted by the external device, network data obtained through the Internet, or data stored in local storage.
The two display modules 12 emit the first light containing the display data of the virtual image. Combined with the acquired second light, which carries the external scene image information and passes through the see-through light guiding elements 13, the two kinds of light are merged by the see-through light guiding elements 13 of the augmented-reality display device and fuse in the user's eyes; after processing by the user's brain, the content of the display data of the virtual image is presented three-dimensionally before the user's eyes. This can be understood as the augmented-reality display device projecting the display data of the virtual image into the real-scene image within the user's field of view.
Displaying the acquired real-scene image of the external scene combined with the virtual image involves a first display mode, a second display mode, or a third display mode. In the first display mode, neither the relative angle nor the relative position between the virtual image and the real-scene image is fixed. In the second display mode, both the relative angle and the relative position between the virtual image and the real-scene image are fixed. In the third display mode, the relative angle between the virtual image and the real-scene image is fixed while the relative position is not. Specifically, referring to Figure 11, with the user facing a desk, the augmented-reality display device projects a virtual screen and a virtual keyboard facing the user, placed on the desktop in real space. When the user's head moves or rotates, the positions of the projected virtual screen and keyboard relative to the user's eyes do not change, while their positions in real space rotate or move accordingly. This display mode is called the "first display mode". The processor combines the second light containing the real-scene image of the external scene with the first light containing the virtual image and displays the result in the first display mode.
In the application of the augmented-reality display device of Embodiment 2, the monocular camera 211 may be used with computer vision techniques to detect markers at known positions in the environment, helping the device localize itself, and the depth sensor 217 may obtain depth information of the environment. Alternatively, the device may use the binocular/multi-view camera 212 to obtain, in addition to the environment images, depth information on the acquired images. Next, by processing the data obtained by the monocular camera 211, the depth sensor 217, or the binocular/multi-view camera 212, the processor uses computer vision techniques to build a 3D model of the surroundings, recognizing the different objects in the real environment in real time and determining their positions and attitudes in space. In this way, the device can determine which spaces near the user are well suited to projecting a virtual screen, a virtual keyboard, and other display content. In addition, the device may combine the data acquired by the gyroscope 214, the accelerometer 215, and the magnetometer 216 with the images and depth data obtained by the monocular camera 211, the depth sensor 217, or the binocular/multi-view camera 212 to compute the position and attitude of the device in real space, that is, the relative position and angle relationship T between the coordinate systems F_H (of the device) and F_I (of the environment). Since the positions and attitudes in F_H of the virtual images projected by the device, such as a virtual display screen or a virtual keyboard, are known, the position and angle of the projected digital content in real space (F_I) can be obtained through T. Conversely, if the projected content is to appear at a certain position and angle in real space (F_I), its position and attitude in the device coordinate system F_H can be computed through the relationship T, and the virtual screen or other projected content placed there.
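The relationship T between the device frame F_H and the environment frame F_I is a rigid transform, and mapping content between the two frames is a matrix multiply (with the inverse of T for the reverse direction). A planar, hypothetical sketch of that bookkeeping (the actual system works with full 3D poses):

```python
import math

def make_T(theta, tx, ty):
    """2D homogeneous rigid transform (rotation theta, translation
    (tx, ty)) taking points from the headset frame F_H to the
    environment frame F_I. A planar simplification for illustration."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0, 1]]

def apply(T, p):
    """Apply the transform to a 2D point."""
    x, y = p
    return (T[0][0]*x + T[0][1]*y + T[0][2],
            T[1][0]*x + T[1][1]*y + T[1][2])

def invert(T):
    """Invert a rigid 2D transform: rotation part transposed,
    translation part mapped to -R^T t."""
    (r00, r01, tx), (r10, r11, ty), _ = T
    return [[r00, r10, -(r00*tx + r10*ty)],
            [r01, r11, -(r01*tx + r11*ty)],
            [0, 0, 1]]

# A virtual screen known at (1, 0) in the headset frame F_H maps
# through T into the environment frame F_I; to pin content at a fixed
# point of F_I instead, transform it back into F_H with the inverse.
T = make_T(math.pi / 2, 2.0, 3.0)
p_env = apply(T, (1.0, 0.0))     # where the content sits in F_I
p_dev = apply(invert(T), p_env)  # recovered position in F_H
```

The second display mode below amounts to holding `p_env` constant and recomputing `p_dev` every frame as T changes with head motion.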
In this way, the augmented-reality display device can implement the "second display mode". Referring to Figure 12, with the user facing a desk, the device projects a virtual screen and a virtual keyboard facing the user, placed on the desktop in real space. When the user's head moves or rotates, in the second display mode the positions of the projected virtual keyboard and screen in real space do not change, giving the user the illusion that the screen and keyboard really exist and are resting on the desktop.
Using the gyroscope, accelerometer, and magnetometer, the augmented-reality display device obtains the relative angle between the user's head and the environment and can thereby implement the "third display mode". In this display mode, the relative angle between the virtual object and the environment is fixed, but the relative position can move.
For example, with the user facing a desk, the device projects a virtual screen and a virtual keyboard facing the user, placed on the desktop in real space. In the third display mode, when the user's head moves or rotates, the relative angle of the projected virtual keyboard and screen with respect to real space does not change; when the user moves, their relative position in real space changes, following the user's movement.
In summary, the relationships between the first, second, and third display modes and the real environment and the user's head are shown in the following table:
Display mode          Position vs. environment   Angle vs. environment   Position vs. head   Angle vs. head
First display mode    Not fixed                  Not fixed               Fixed               Fixed
Second display mode   Fixed                      Fixed                   Not fixed           Not fixed
Third display mode    Not fixed                  Fixed                   Fixed               Not fixed
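The three rows of the table above reduce to three pose-update rules applied each frame. A planar, illustrative sketch (function and parameter names are not from the patent):

```python
def content_pose_in_env(mode, head_pos, head_angle,
                        offset_pos, anchor_pos, anchor_angle):
    """Return (position, angle) of a virtual object in the environment
    frame under the three display modes. `offset_pos` is the object's
    fixed offset from the head; `anchor_pos`/`anchor_angle` is its
    fixed placement in the environment. Planar simplification."""
    if mode == "first":
        # Fixed relative to the head: follows head position and angle.
        return ((head_pos[0] + offset_pos[0],
                 head_pos[1] + offset_pos[1]), head_angle)
    if mode == "second":
        # Fixed relative to the environment: ignores head motion.
        return (anchor_pos, anchor_angle)
    if mode == "third":
        # Angle pinned to the environment, position follows the head.
        return ((head_pos[0] + offset_pos[0],
                 head_pos[1] + offset_pos[1]), anchor_angle)
    raise ValueError(f"unknown display mode: {mode}")
```

Re-running this function with the current head pose each frame reproduces the behaviors of Figures 11 and 12.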
Note that the "first display mode", "second display mode", and "third display mode" may be mixed across different virtual images; the choice may be made by the system software or set by the user.
The "first display mode", "second display mode", or "third display mode" can be triggered by a two-dimensional code, or another manually placed auxiliary marker, set up in the real scene.
Specifically, the monocular camera 211, the depth sensor 217, or the binocular/multi-view camera 212 scans and recognizes a two-dimensional code placed in the real scene; the code carries information to enable the first display mode, the second display mode, or the third display mode. After the information in the code is recognized, display proceeds in the corresponding mode. For example, if the scanned code carries information to enable the first display mode, display proceeds in the first display mode; if it carries information to enable the second or third display mode, display proceeds in the second or third display mode accordingly.
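Once the code is decoded, selecting the mode is a lookup on the decoded payload. The payload strings below are hypothetical; the patent only says the code carries "enable first/second/third display mode" information:

```python
# Hypothetical mapping from decoded QR payloads to display modes.
DISPLAY_MODES = {"mode=1": "first", "mode=2": "second", "mode=3": "third"}

def mode_from_qr(payload, default="first"):
    """Select the display mode from a scanned QR-code payload,
    falling back to a default when the payload is unrecognized."""
    return DISPLAY_MODES.get(payload.strip().lower(), default)
```

The same dispatch applies to other manually placed markers, with the marker's identity taking the place of the payload string.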
Likewise, the monocular camera 211, the depth sensor 217, or the binocular/multi-view camera 212 can scan and recognize a manually placed marker in the real scene, the marker carrying information to enable the first, second, or third display mode. For example, if the information in the marker enables the first display mode, display proceeds in the first display mode; if it enables the second or third display mode, display proceeds in the second or third display mode accordingly.
Two-dimensional codes, or other manually placed markers on a two-dimensional plane in the real scene, can also assist the localization of the augmented-reality display device when displaying in the second display mode: the shape and size of the code or marker as captured by the monocular camera 211, the depth sensor 217, or the binocular/multi-view camera 212 are compared with its actual size and shape on the two-dimensional plane, from which the relative position and angle between the marker and the camera are computed. Since the position of the marker in the environment is fixed, the relative position and angle relationship T between the device and the environment can then be computed, thereby implementing the second display mode.
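The distance part of this size-and-shape comparison follows from the pinhole camera model: a marker of known physical width W that appears w pixels wide under focal length f (in pixels) lies at distance f·W/w. A minimal sketch of that relation (recovering the full angle additionally requires the marker's corner positions, e.g. via a perspective-n-point solve):

```python
def marker_distance(real_width_m, focal_px, apparent_width_px):
    """Pinhole-model distance to a planar marker of known physical
    width, given its apparent width in the image:
    distance = f * W / w."""
    return focal_px * real_width_m / apparent_width_px

# A 10 cm marker seen 100 px wide by an 800 px focal-length camera
# lies 0.8 m away; the smaller it appears, the farther it is.
d = marker_distance(0.10, 800, 100)
```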
Further, the virtual image can be manipulated. The augmented-reality display device can track the motion of the user's hand gestures through the monocular camera 211, the depth sensor 217, or the binocular/multi-view camera 212, analyze the user's intent, and operate on the virtually displayed content.
For example, by tracking the position of the user's finger with the monocular camera 211, the depth sensor 217, or the binocular/multi-view camera 212, the operation instruction corresponding to a finger tap is executed once the tap action is recognized.
For example, upon recognizing a grabbing gesture of the user via the monocular camera 211, the depth sensor 217, or the binocular/multi-view camera 212, the virtual screen as a whole, or an object in the virtual screen, is dragged.
For example, when a zoom-in or zoom-out gesture is recognized by the monocular camera 211, the depth sensor 217, or the binocular/multi-camera 212, the command corresponding to that gesture is executed to scale the virtual screen or an object within it.
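The three gesture operations above (tap, grab-drag, pinch-zoom) amount to a dispatch from recognized gestures to operations on the virtual display. The sketch below is illustrative only: the `VirtualScreen` class, gesture names, and coordinate conventions are assumptions, not an API defined by the patent.

```python
# Hypothetical gesture-to-operation dispatch for a virtual screen.

class VirtualScreen:
    def __init__(self):
        self.pos = (0.0, 0.0)   # position of the virtual screen in the view
        self.scale = 1.0        # zoom factor
        self.clicks = []        # recorded tap points

    def tap(self, point):
        self.clicks.append(point)          # finger tap -> click command

    def drag(self, dx, dy):
        x, y = self.pos                    # grab gesture -> move the screen
        self.pos = (x + dx, y + dy)

    def zoom(self, factor):
        self.scale *= factor               # pinch gesture -> scale the screen

def handle_gesture(screen: VirtualScreen, gesture: str, *args):
    """Map a recognized gesture name to the corresponding screen operation."""
    dispatch = {"tap": screen.tap, "grab": screen.drag, "pinch": screen.zoom}
    dispatch[gesture](*args)

s = VirtualScreen()
handle_gesture(s, "grab", 0.2, -0.1)
handle_gesture(s, "pinch", 2.0)
print(s.pos, s.scale)  # prints (0.2, -0.1) 2.0
```

The actual recognition of the gesture itself (from camera or depth-sensor frames) is a separate computer-vision step that the patent leaves unspecified.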
Specifically, referring to FIG. 13, the user wears the augmented-reality-based display device of Embodiment 1 or 2 and connects it to a laptop, which transmits a video signal to the device over a cable. The cable may also carry data and audio signals and supply power to the head-mounted display. It may be a single cable (e.g. USB Type-C) or several (separate cables for video, audio, and data). The device acquires the video signal transmitted by the laptop and displays it on the display modules 12 of the device. The virtual image comprises one or more virtual display screens and the content shown on them, the content being what the laptop transmits. The two display modules 12 emit light containing this virtual image, which combines with the acquired light carrying the real-scene image of the external scene passing through the see-through light-guide elements 13; after the two kinds of light are merged by the see-through light-guide elements 13 of the augmented-reality-based information processing device, they fuse in the user's eyes and, after processing by the brain, present the content of the virtual image in three dimensions before the user. In other words, the augmented-reality-based information processing device projects the virtual display screens and their content into the real-scene image of the external scene within the user's field of view. If a physical mouse and keyboard appear in that real scene, they can be connected to the laptop, letting the user input information in the most familiar way and improving work efficiency.
Referring to FIG. 14, the user wears the augmented-reality-based display device of Embodiment 1 or 2 and connects it to a mobile phone or other mobile terminal, which transmits a video signal to the device over a cable. The cable may also carry data and audio signals and supply power to the head-mounted display. It may be a single cable (e.g. USB Type-C) or several (separate cables for video, audio, and data). The device acquires the video signal of the virtual image transmitted by the mobile phone or other mobile terminal and displays it on the display modules 12 of the device. The virtual image comprises one or more virtual display screens and the content shown on them, the content being what the mobile phone or other mobile terminal transmits. The two display modules 12 emit light containing this virtual image, which combines with the acquired light carrying the real-scene image of the external scene passing through the see-through light-guide elements 13; after the two kinds of light are merged by the see-through light-guide elements 13 of the augmented-reality-based information processing device, they fuse in the user's eyes and, after processing by the brain, present the content of the virtual image in three dimensions before the user. In other words, the augmented-reality-based information processing device projects the virtual display screens and their content into the real-scene image of the external scene within the user's field of view. A portable mouse and keyboard are connected to the mobile phone or other mobile terminal via Bluetooth or another means of communication. The physical mouse and keyboard appear in the real-scene image of the external scene within the user's field of view, that is, a user wearing the augmented-reality-based display device can see them, letting the user input information in the most familiar way and improving work efficiency.
Optionally, referring to FIG. 15, the user wears the augmented-reality-based display device of Embodiment 1 or 2 and connects it to a mobile phone or other mobile terminal, which transmits a video signal to the device over a cable. The device acquires the video signal of the virtual image transmitted by the mobile phone or other mobile terminal and displays it on the display modules 12 of the device. The virtual image comprises one or more virtual display screens, the content shown on them, and a virtual mouse and keyboard. By tracking the position of the user's finger with the monocular camera 211, the depth sensor 217, or the binocular/multi-camera 212, once a finger tap on the virtual mouse or keyboard is recognized, the operation command of the key corresponding to the tap is executed, enabling information input.
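Mapping the tracked fingertip position to a key of the virtual keyboard is a hit-testing step. The sketch below assumes a flat virtual keyboard plane with a uniform key grid; the key size, layout, and coordinate units are hypothetical, since the patent does not specify the keyboard geometry.

```python
# Hypothetical hit test: which virtual key is under the fingertip?

KEY_W, KEY_H = 0.05, 0.05            # assumed key size on the virtual plane (metres)
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]  # assumed layout

def key_at(x: float, y: float):
    """Return the key under fingertip position (x, y), or None if no key is hit."""
    row = int(y // KEY_H)
    col = int(x // KEY_W)
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None

print(key_at(0.0, 0.0))    # prints q (first key of the first row)
print(key_at(0.12, 0.07))  # prints d (third key of the second row)
```

On recognizing the tap gesture at that position, the device would then send the resulting key code to the connected mobile terminal as ordinary keyboard input.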
Optionally, when multiple virtual display screens are projected, their arrangement can be adjusted. By tracking the position of the user's finger with the monocular camera 211, the depth sensor 217, or the binocular/multi-camera 212, once a screen-moving gesture of the finger is recognized, the corresponding operation command is executed to move the display screen.
When the content projected by the augmented-reality-based display device includes audio, the sound can be played through external earphones or through a speaker.
Optionally, the eye-tracking camera 213 can also track the focus of the user's eyes, so that the virtual object, or the specific part of the virtual screen, at which the user is gazing is tracked and given special processing; for example, annotations and detailed information about the observed object can be displayed automatically in the local region on which the user's eyes are focused.
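The gaze-driven annotation step reduces to finding the virtual object nearest the tracked gaze point and attaching its annotation. In the sketch below the object names, positions, and the focus-radius threshold are illustrative assumptions, not values from the patent.

```python
# Hypothetical gaze hit test: annotate the virtual object closest to the
# gaze point reported by the eye-tracking camera.
import math

objects = {
    "virtual_screen_1": ((0.0, 0.0), "Main display"),
    "virtual_screen_2": ((1.0, 0.5), "Secondary display"),
}

def annotate_at_gaze(gaze, radius=0.3):
    """Return the annotation of the object nearest the gaze point,
    or None if nothing lies within the focus radius."""
    best, best_d = None, radius
    for name, (pos, note) in objects.items():
        d = math.dist(gaze, pos)   # Euclidean distance (Python 3.8+)
        if d < best_d:
            best, best_d = note, d
    return best

print(annotate_at_gaze((0.05, -0.02)))  # prints Main display
print(annotate_at_gaze((5.0, 5.0)))     # prints None (nothing in focus)
```

A real implementation would intersect the 3D gaze ray with the virtual scene rather than compare 2D points, but the nearest-object selection logic is the same.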
Unlike the prior art, the augmented-reality-based display method provided by this embodiment combines the external real-scene image with a virtual image. The virtual image can provide the user with a large virtual display screen or a virtual mouse and keyboard, and can be used together with physical screens, mice, keyboards, touch screens, buttons, and the like, offering a wide field of view as well as privacy.
The above description is only an embodiment of the present application and does not thereby limit the patent scope of the present application. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (14)

  1. An augmented-reality-based display method, comprising:
    emitting a first light ray containing a virtual image, the virtual image being an image received from a connected external device;
    acquiring a second light ray containing a real-scene image of an external scene; and
    combining the first light ray containing the virtual image with the second light ray containing the real-scene image of the external scene.
  2. The method according to claim 1, wherein the acquired real-scene image is an image of the environment in which the user is located, and the virtual image comprises a virtual display screen, a virtual mouse, or a virtual keyboard.
  3. The method according to claim 1 or 2, wherein the virtual image combined with the real-scene image is displayed in a first display mode, a second display mode, or a third display mode; the first display mode being a mode in which neither the relative angle nor the relative position between the virtual image and the real-scene image is fixed; the second display mode being a mode in which both the relative angle and the relative position between the virtual image and the real-scene image are fixed; and the third display mode being a mode in which the relative angle between the virtual image and the real-scene image is fixed while the relative position is not.
  4. The method according to claim 3, wherein the first display mode, the second display mode, or the third display mode is implemented by means of a two-dimensional code or another manually set auxiliary marker placed in the real scene.
  5. The method according to claim 1 or 2, further comprising: tracking the motion of hand gestures to operate on the virtual image.
  6. The method according to claim 1 or 2, further comprising: tracking and specially processing the specific part of the virtual image at which the user's eyes are gazing.
  7. An augmented-reality-based display device, comprising:
    a display module configured to display a virtual image and emit a first light ray containing the virtual image, the virtual image being an image received from a connected external device; and
    a see-through light-guide element configured to acquire a second light ray containing a real-scene image of an external scene;
    the see-through light-guide element being further configured to combine the first light ray containing the virtual image with the second light ray containing the real-scene image of the external scene.
  8. The device according to claim 7, wherein the real-scene image acquired by the see-through light-guide element is an image of the environment in which the user is located, and the virtual image displayed by the display module comprises a virtual display screen, a virtual mouse, or a virtual keyboard.
  9. The device according to claim 7 or 8, further comprising a processor and a gyroscope, an accelerometer, a magnetometer, a monocular camera, a depth sensor, or a binocular/multi-camera, wherein the processor is configured to combine the first light ray containing the virtual image with the second light ray containing the real-scene image of the external scene and display the result in a first display mode; or the processor displays in a second display mode by using data from the gyroscope, accelerometer, magnetometer, monocular camera, depth sensor, or binocular/multi-camera; or the processor displays in a third display mode by using data from the gyroscope, accelerometer, or magnetometer; the first display mode being a mode in which neither the relative angle nor the relative position between the virtual image and the real-scene image is fixed; the second display mode being a mode in which both the relative angle and the relative position between the virtual image and the real-scene image are fixed; and the third display mode being a mode in which the relative angle between the virtual image and the real-scene image is fixed while the relative position is not.
  10. The device according to claim 9, wherein the first display mode, the second display mode, or the third display mode is implemented by means of a two-dimensional code or another manually set auxiliary marker placed in the real scene.
  11. The device according to claim 9, wherein each camera of the monocular camera or the binocular/multi-camera may be one of an RGB camera, a monochrome camera, or an infrared camera.
  12. The device according to claim 7 or 8, wherein the monocular camera, the depth sensor, or the binocular/multi-camera is further configured to track the motion of hand gestures and recognize gesture actions.
  13. The device according to claim 7 or 8, further comprising an eye-tracking camera configured to track and specially process the specific part of the virtual image at which the user's eyes are gazing.
  14. The device according to claim 7 or 8, wherein, when the augmented-reality-based display device is connected to an external device, the user may interact with the augmented-reality-based display device through a mouse, a keyboard, a touchpad, or buttons on the external device.
PCT/CN2018/073473 2017-02-14 2018-01-19 Display method and device based on augmented reality WO2018149267A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710079199.5A CN108427194A (en) 2017-02-14 2017-02-14 A kind of display methods and equipment based on augmented reality
CN201710079199.5 2017-02-14

Publications (1)

Publication Number Publication Date
WO2018149267A1 true WO2018149267A1 (en) 2018-08-23

Family

ID=63155134

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/073473 WO2018149267A1 (en) 2017-02-14 2018-01-19 Display method and device based on augmented reality

Country Status (2)

Country Link
CN (1) CN108427194A (en)
WO (1) WO2018149267A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110174939A (en) * 2019-04-19 2019-08-27 深圳远为文化有限公司 Reality system is merged in panoramic scanning
CN110362231B (en) * 2019-07-12 2022-05-20 腾讯科技(深圳)有限公司 Head-up touch device, image display method and device
CN111736692B (en) * 2020-06-01 2023-01-31 Oppo广东移动通信有限公司 Display method, display device, storage medium and head-mounted device
CN111664742B (en) * 2020-06-08 2023-01-06 中国人民解放军陆军特种作战学院 Intelligent target system based on air imaging
CN111664741B (en) * 2020-06-08 2023-01-06 中国人民解放军陆军特种作战学院 Interaction method of intelligent target system for shooting training

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103019377A (en) * 2012-12-04 2013-04-03 天津大学 Head-mounted visual display equipment-based input method and device
US20130215235A1 (en) * 2011-04-29 2013-08-22 Austin Russell Three-dimensional imager and projection device
CN104915979A (en) * 2014-03-10 2015-09-16 苏州天魂网络科技有限公司 System capable of realizing immersive virtual reality across mobile platforms
CN105892631A (en) * 2015-11-16 2016-08-24 乐视致新电子科技(天津)有限公司 Method and device for simplifying operation of virtual reality application
CN105955453A (en) * 2016-04-15 2016-09-21 北京小鸟看看科技有限公司 Information input method in 3D immersion environment

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
WO2013171731A1 (en) * 2012-05-16 2013-11-21 Imagine Mobile Augmented Reality Ltd A system worn by a moving user for fully augmenting reality by anchoring virtual objects
US9367960B2 (en) * 2013-05-22 2016-06-14 Microsoft Technology Licensing, Llc Body-locked placement of augmented reality objects
CN105446580B (en) * 2014-08-13 2019-02-05 联想(北京)有限公司 A kind of control method and portable electronic device
CN105607730A (en) * 2014-11-03 2016-05-25 航天信息股份有限公司 Eyeball tracking based enhanced display method and apparatus
CN104407700A (en) * 2014-11-27 2015-03-11 曦煌科技(北京)有限公司 Mobile head-wearing type virtual reality and augmented reality device
WO2016169221A1 (en) * 2015-04-20 2016-10-27 我先有限公司 Virtual reality device and operating mode
CN105809144B (en) * 2016-03-24 2019-03-08 重庆邮电大学 A kind of gesture recognition system and method using movement cutting
CN105866955A (en) * 2016-06-16 2016-08-17 深圳市世尊科技有限公司 Smart glasses


Also Published As

Publication number Publication date
CN108427194A (en) 2018-08-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18754559

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18754559

Country of ref document: EP

Kind code of ref document: A1