WO2021237952A1 - Augmented reality display system and method - Google Patents

Augmented reality display system and method

Info

Publication number
WO2021237952A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
processing module
image
user
angle
Prior art date
Application number
PCT/CN2020/109366
Other languages
French (fr)
Chinese (zh)
Inventor
张元�
钟正杰
Original Assignee
上海鸿臣互动传媒有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202020950665.XU external-priority patent/CN212012916U/en
Priority claimed from CN202010477543.8A external-priority patent/CN111491159A/en
Application filed by 上海鸿臣互动传媒有限公司 filed Critical 上海鸿臣互动传媒有限公司
Publication of WO2021237952A1 publication Critical patent/WO2021237952A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays

Definitions

  • the invention relates to the field of augmented reality, and relates to an augmented reality display system and method.
  • Augmented reality technology is a way of identifying and locating scenes and objects in the real world, and placing virtual three-dimensional objects in the real scene in real time.
  • the goal of this technology is to integrate and interact with the virtual world and the real world.
  • Augmented reality mainly relies on two key technologies: one is the real-time rendering and display of three-dimensional models, and the other is the perception of the shape and position of real objects.
  • the mobile phone's three-degree-of-freedom gyroscope is used for positioning, but the user cannot move closer to or away from the virtual three-dimensional object;
  • Positioning is performed by calculating the relative position between the front camera of the mobile phone and the preset picture.
  • the user needs a preset picture to realize six-degree-of-freedom positioning.
  • the technical stability of picture-based augmented reality is not high: the front camera cannot see the picture directly, and only sees a deformed picture through the front lens.
  • the picture must be within the visible range of the front camera to display the 3D model, which makes tracking and positioning very unstable, and the user's moving range is determined by the location of the picture.
  • the present invention provides an augmented reality display system, which is characterized in that it includes:
  • a head-mounted display frame, which has a circular ring shape and is used for wearing by the user;
  • a groove, the opening of the groove is inclined upward, and one side of the groove is connected with the headset frame;
  • a lens, which is arranged below the groove and connected to the other side of the groove, and is made of a semi-reflective and semi-transparent material;
  • a portable terminal, which has a first side provided with a display unit and a second side provided with an image acquisition unit, the first side and the second side being arranged opposite each other; the portable terminal also comprises a processing unit for processing real-time images collected by the image acquisition unit and displaying them on the display unit;
  • the size of the portable terminal is adapted to the size of the groove, and when the portable terminal is put into the groove, the first side of the portable terminal faces the lens;
  • the processing unit specifically includes:
  • an inertial measurement module, used to collect and output real-time motion data;
  • a pose processing module connected to the inertial measurement module, and configured to determine the current pose of the portable terminal according to the real-time image collected by the image acquisition unit and the real-time motion data at the corresponding time;
  • an image processing module connected to the pose processing module, for generating a virtual visual range and a virtual screen according to the current pose of the portable terminal, and reflecting the virtual screen through the display unit and the lens for the user to view.
  • the image acquisition unit obtains the real-time image by collecting a feature point area including a plurality of feature points;
  • the processing unit also includes:
  • a feature point processing module, which is respectively connected to the image acquisition unit and the pose processing module, and is used to obtain the feature points in the real-time image, analyze the location of the feature point area, and output the feature points to the pose processing module according to the analysis result;
  • the pose processing module uses the position of the feature point area as a reference, and determines the current pose of the portable terminal according to the real-time motion data at the corresponding time.
  • the virtual visual range includes virtual angle information
  • the image processing module includes:
  • a first processing component connected to the pose processing module, and configured to construct a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, determine the spatial rotation angle of the image acquisition unit and the included angle between the image acquisition unit and the user, and generate the virtual angle information according to the spatial rotation angle and the included angle, including it in the virtual visual range for output.
  • the virtual visual range includes virtual location information
  • the image processing module includes:
  • a second processing component connected to the pose processing module, configured to construct a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, select the center of the user's brow as a preset reference point, and generate the virtual position information according to the offset between the portable terminal and the preset reference point and the interpupillary distance of the user, including it in the virtual visual range for output.
  • the virtual visual range includes virtual field angle information
  • the image processing module includes:
  • a third processing component connected to the pose processing module, for generating a preset virtual screen, and generating the virtual field-of-view angle information according to the position difference between the curved edge of the preset virtual screen and the user, including it in the virtual visual range for output.
  • An augmented reality display method applied to the display system according to any one of the above, characterized in that a head-mounted display frame, a groove, a lens and a portable terminal are provided in the display device;
  • the display method includes:
  • Step S1: the image acquisition unit acquires a real-time image of the plane directly above the portable terminal, and the inertial measurement module collects real-time motion data;
  • Step S2: the pose processing module determines the current pose of the portable terminal according to the real-time image and the real-time motion data;
  • Step S3: the image processing module generates a virtual visual range and a virtual screen according to the current pose, and sends the virtual screen to the display unit for display;
  • Step S4: the virtual screen displayed on the display unit is reflected by the lens to the user for viewing.
  • the step S1 includes:
  • Step S11 the image acquisition unit obtains the real-time image by collecting a feature point region including a plurality of feature points;
  • Step S12 the image acquisition unit acquires the feature points in the real-time image and analyzes whether the location of the feature point area meets the viewing angle requirement:
  • If yes, go to step S2;
  • Step S13: the image acquisition unit generates a prompt instruction indicating that the feature points are too few, feeds it back to the user, and then returns to step S11.
  • the image processing module determines the current pose through a vSLAM algorithm.
  • the virtual viewing range includes virtual angle information
  • the step S3 includes a first process of generating the virtual angle information
  • the first process includes:
  • Step S31A the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, and determines the spatial rotation angle of the image acquisition unit in the spatial rectangular coordinate system;
  • Step S32A acquiring the angle between the image acquisition unit and the user
  • Step S33A generating the virtual angle information according to the spatial rotation angle and the included angle.
  • the virtual angle information is expressed by the following formula:
  • θ′ = (θ_X, θ_Y + α, θ_Z)
  • where θ′ expresses the virtual angle information, θ_X represents the pitch angle in the spatial rotation angle, θ_Y represents the yaw angle, α represents the included angle, and θ_Z represents the roll angle.
  • the virtual visual range includes virtual location information
  • the step S3 includes a second process of generating the virtual location information
  • the second process includes:
  • Step S31B: the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, selects the center of the user's brow as a preset reference point, and generates first position information according to the offset between the image acquisition unit and the preset reference point;
  • Step S32B: the image processing module adjusts the first position information according to the interpupillary distance of the user, generates second position information, and outputs the second position information as the virtual position information.
  • the first position information is expressed by the following formula:
  • P′ = (−B_X, −B_Y, −B_Z)
  • where P′ expresses the first position information, and B_X, B_Y and B_Z represent the projections of the offset on the X, Y and Z axes respectively.
  • the second position information includes left-eye position information and right-eye position information
  • the second position information is expressed by the following formulas:
  • P″₁ = (−B_X, −B_Y − I/2, −B_Z)
  • P″₂ = (−B_X, −B_Y + I/2, −B_Z)
  • where P″₁ expresses the left-eye position information, P″₂ expresses the right-eye position information, B_X, B_Y and B_Z represent the projections of the offset on the X, Y and Z axes respectively, and I represents the interpupillary distance of the user.
  • the virtual visual range includes virtual field of view angle information
  • the step S3 includes a third process of generating the virtual field of view information
  • the third process includes:
  • Step S31C: the image processing module generates a preset virtual screen and displays it on the lens;
  • Step S32C: the image processing module calculates the position difference between the curved edge of the preset virtual screen and the user;
  • Step S33C: the image processing module determines the virtual field-of-view information according to the position difference.
  • FIG. 1 is a schematic diagram of the structure in a preferred embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a portable terminal before being placed in a preferred embodiment of the present invention
  • FIG. 3 is a schematic diagram of a portable terminal in a preferred embodiment of the present invention after being placed;
  • Figure 4 is a schematic diagram of the overall flow in a preferred embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of step S1 in a preferred embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of the first process in a preferred embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of the second process in a preferred embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of the third process in a preferred embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of the third process in a preferred embodiment of the present invention.
  • An augmented reality display system as shown in Figure 1 to Figure 3, includes:
  • a headset frame 1 which has a circular ring shape and is used for users to wear;
  • a groove 2, the opening of the groove 2 is inclined upward, and one side of the groove 2 is connected with the headset frame 1;
  • a lens 3, the lens 3 is arranged under the groove 2 and connected to the other side of the groove 2, and the lens 3 is made of semi-reflective and semi-transparent material;
  • a portable terminal 4, which has a first side provided with a display unit and a second side provided with an image acquisition unit 41, the first side and the second side being arranged opposite each other; the portable terminal 4 further includes a processing unit, used to process the real-time images collected by the image acquisition unit 41 and display them on the display unit;
  • the size of the portable terminal 4 is adapted to the size of the groove 2.
  • the first side of the portable terminal 4 faces the lens 3;
  • the processing unit specifically includes:
  • an inertial measurement module, used to collect and output real-time motion data;
  • the pose processing module connected to the inertial measurement module, is used to determine the current pose of the portable terminal 4 according to the real-time image collected by the image acquisition unit 41 and the real-time motion data at the corresponding time;
  • the image processing module connected to the pose processing module, is used to generate a virtual visual range and a virtual screen according to the current pose of the portable terminal 4, and reflect the virtual screen to the user through the display unit and the lens 3 for viewing.
  • the display device in the prior art often sets the processing unit to perform data interaction in the display device, and uses the three-degree-of-freedom gyroscope of the mobile phone or preset pictures for positioning, which results in the user being unable to move closer to or away from the virtual three-dimensional object, makes tracking and positioning very unstable, and limits the user's range of movement to the location of the picture.
  • This technical solution provides an augmented reality display system.
  • the portable terminal 4 collects real-time images through the image acquisition unit 41 and real-time motion data through the inertial measurement module; the current pose is then determined by the pose processing module, and the image processing module generates the virtual visual range and the virtual screen according to the current pose.
  • the virtual screen is reflected to the user through the display unit and the lens 3; the user observes the virtual picture and the real environment through the lens 3, realizing the superposition and fusion of the virtual picture and the real environment and achieving the purpose of augmented reality.
  • a mobile phone can be selected as the portable terminal 4, so as to realize the rapid acquisition of real-time images and real-time motion data, and the generation of virtual visual ranges and virtual images.
  • the image processing unit needs to obtain the current pose of the portable terminal 4, construct a spatial rectangular coordinate system, and determine the virtual visual range based on the user's real visual range, so as to generate an appropriate virtual screen, which is displayed on the lens 3.
  • the first side of the portable terminal 4 faces the lens 3, and the area in the groove 2 that fits the first side of the portable terminal 4 can be either a hollow design or made of light-transmitting material, so that the virtual screen is displayed on the lens 3 for easy viewing by the user.
  • a hook 21 and a matching fixing device, such as a fixing rope, can be provided on one side of the groove to assist in fixing the portable terminal 4, so as to avoid changes in the position and angle of the portable terminal 4 that may be caused by the user's posture changes during use.
  • the image collection unit 41 obtains a real-time image by collecting a feature point area including a plurality of feature points;
  • the processing unit also includes:
  • the feature point processing module is connected to the image acquisition unit 41 and the pose processing module respectively, and is used to obtain the feature points in the real-time image, analyze the location of the feature point area, and output the feature points to the pose processing module according to the analysis result;
  • the pose processing module uses the position of the feature point area as a reference, and determines the current pose of the portable terminal 4 according to the real-time motion data at the corresponding time.
  • the image acquisition unit 41 acquires real-time images and outputs the real-time images to the image processing unit.
  • the image processing unit extracts feature points in the image, analyzes the regions corresponding to the feature points, and generates an instruction according to the analysis result to enable the image acquisition unit 41 to collect more feature points with spatial recognition value, so as to finally determine the corresponding area of the virtual screen.
  • the virtual visual range includes virtual angle information
  • the image processing module includes:
  • a first processing component, connected to the pose processing module, is used to construct a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, determine the spatial rotation angle of the image acquisition unit 41 and the included angle between the image acquisition unit 41 and the user, and generate the virtual angle information according to the spatial rotation angle and the included angle, including it in the virtual visual range for output.
  • the virtual visual range includes virtual location information
  • the image processing module includes:
  • a second processing component, connected to the pose processing module, is used to construct a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, select the center of the user's brow as the preset reference point, and generate the virtual position information according to the offset between the portable terminal 4 and the preset reference point and the interpupillary distance of the user, including it in the virtual visual range for output.
  • the virtual visual range includes virtual field of view information
  • the image processing module includes:
  • a third processing component, connected to the pose processing module, is used to generate a preset virtual screen, generate the virtual field-of-view information according to the position difference between the curved edge of the preset virtual screen and the user, and include it in the virtual visual range for output.
  • An augmented reality display method is applied to any one of the above-mentioned display systems. As shown in FIG. 4, a head-mounted display frame 1, a groove 2, a lens 3, and a portable terminal 4 are arranged in the display device;
  • the display methods include:
  • Step S1: the image acquisition unit 41 acquires a real-time image of the plane directly above the portable terminal 4, and the inertial measurement module collects real-time motion data;
  • Step S2: the pose processing module determines the current pose of the portable terminal 4 according to the real-time image and the real-time motion data;
  • Step S3: the image processing module generates a virtual visual range and a virtual picture according to the current pose, and sends the virtual picture to the display unit for display;
  • Step S4: the virtual picture displayed on the display unit is reflected by the lens 3 to the user for viewing.
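The overall flow of steps S1 to S4 can be sketched as a simple processing loop. All class and method names below are hypothetical placeholders for the modules named in this description, not an actual implementation:

```python
# Illustrative sketch of the S1-S4 display loop. Every class and method
# name here is a hypothetical stand-in for the patent's modules.

class DisplayPipeline:
    """One iteration: capture (S1), pose (S2), render (S3), show (S4)."""

    def __init__(self, image_unit, imu, pose_module, image_module, display):
        self.image_unit = image_unit
        self.imu = imu
        self.pose_module = pose_module
        self.image_module = image_module
        self.display = display

    def step(self):
        frame = self.image_unit.capture()             # S1: image of the plane above
        motion = self.imu.read()                      # S1: real-time motion data
        pose = self.pose_module.solve(frame, motion)  # S2: current pose
        visual_range, screen = self.image_module.render(pose)  # S3: range + picture
        self.display.show(screen)                     # S4: shown, then reflected by the lens
        return visual_range, screen
```

In practice each module would run on the portable terminal itself, as the description requires, so no external processing device is needed.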
  • an augmented reality display method is provided.
  • the user wears the head-mounted display frame 1, the user's eyes are facing the lens 3 of the display device, the image acquisition unit 41 of the portable terminal 4 faces upward, and the real-time image above the portable terminal 4 is collected.
  • the inertial measurement module measures the real-time motion data of the portable terminal 4, and the current pose of the portable terminal 4 is finally determined.
  • the actual visual range and the virtual visual range of the user are determined according to the current pose of the portable terminal 4.
  • the real visual range here refers to the visual range of the user’s human eyes to observe the real environment ahead through the lens 3
  • the virtual visual range refers to the sight range of the simulated virtual human eye in the virtual space, used by the image processing module in the process of generating the virtual screen; the virtual visual range is used to generate a suitable virtual screen.
  • the virtual visual range is determined according to the actual visual range.
  • the virtual visual range here includes virtual position information, virtual angle information, and virtual field of view angle information.
  • since the virtual position information, virtual angle information, and virtual field-of-view angle information in the virtual visual range correspond to the real position information, real angle information, and real field-of-view angle information in the real visual range, it can be ensured that when the virtual screen is reflected on the lens 3, it is reflected into the user's eyes together with the real environment, so that the virtual picture observed by the user better matches the ideal effect.
  • step S1 includes:
  • Step S11 the image acquisition unit 41 obtains a real-time image by collecting a feature point region including a plurality of feature points;
  • Step S12 the image acquisition unit 41 acquires the feature points in the real-time image and analyzes whether the location of the feature point area meets the viewing angle requirement:
  • If yes, go to step S2;
  • Step S13: the image acquisition unit 41 generates a prompt instruction indicating that there are too few feature points, feeds it back to the user, and then returns to step S11.
  • the preset program in the portable terminal 4 guides the user to walk around in the space; the image acquisition unit 41 starts, collects the upward-facing image, extracts the feature points in the real-time image, and analyzes whether parameters such as the area covered by the feature points and the robustness of the feature points meet the viewing angle requirements. If yes, go to step S2; if not, a guidance instruction is generated to guide the user to change the current position, travel direction and line-of-sight direction, so as to fill in the areas of the collected image with fewer feature points, and the specific position of the virtual picture in the real scene is finally determined.
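The sufficiency check in step S12 is not specified in detail; one illustrative heuristic is to require both a minimum number of feature points and reasonably even coverage of the image. The thresholds and the grid-coverage criterion below are assumptions, not values from the patent:

```python
def feature_points_sufficient(points, width, height,
                              min_points=50, grid=4, min_cells=8):
    """Heuristic check: enough points, spread over enough grid cells.

    `points` is a list of (x, y) pixel coordinates in an image of the
    given width and height. All thresholds are illustrative assumptions.
    """
    if len(points) < min_points:
        return False
    # Bucket each point into a grid x grid cell and count distinct cells,
    # so a tight cluster of many points still fails the coverage test.
    cells = set()
    for x, y in points:
        cx = min(int(x * grid / width), grid - 1)
        cy = min(int(y * grid / height), grid - 1)
        cells.add((cx, cy))
    return len(cells) >= min_cells
```

When the check fails, the system would issue the guidance instruction of step S13, prompting the user to move so that sparser regions of the scene enter the camera's view.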
  • feature points can also be preset through an external feature point device.
  • the feature point device includes a launch unit and a diffusion unit.
  • the diffusion unit is arranged above the launch unit.
  • a transparent disc can be selected as the diffusion unit, and multiple feature points can be added to the transparent disc.
  • a laser emitter can be selected as the launch unit; it projects the feature points of the diffusion unit into the space above the display device, and the image processing module then performs feature point analysis according to the upper image collected by the image acquisition unit 41.
  • in step S2, the image processing module determines the current pose through the vSLAM algorithm.
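A full vSLAM implementation is outside the scope of a short example, but the underlying fusion idea of step S2, correcting drift-prone integrated IMU motion with a drift-free visual measurement, can be sketched as a single-axis complementary filter. This is an illustrative simplification, not the patent's algorithm:

```python
def fuse_orientation(gyro_rate, dt, vision_angle, prev_angle, alpha=0.98):
    """One complementary-filter step for a single rotation axis.

    Integrates the gyroscope rate over dt (fast but drifting), then
    blends in the vision-derived angle (slow but drift-free).
    `alpha` weights the IMU prediction; all values are illustrative.
    """
    predicted = prev_angle + gyro_rate * dt          # IMU dead reckoning
    return alpha * predicted + (1 - alpha) * vision_angle  # visual correction
```

Repeated over time, the small visual correction pulls the integrated estimate toward the true angle, which is the same role the camera images play relative to the inertial measurement module in the vSLAM pose solution.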
  • the virtual viewing range includes virtual angle information
  • Step S3 includes the first process of generating virtual angle information
  • the first process includes:
  • Step S31A: the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, and determines the spatial rotation angle of the image acquisition unit 41 in the spatial rectangular coordinate system;
  • Step S32A: the included angle between the image acquisition unit 41 and the user is acquired;
  • Step S33A: the virtual angle information is generated according to the spatial rotation angle and the included angle.
  • the virtual angle information is expressed by the following formula:
  • θ′ = (θ_X, θ_Y + α, θ_Z) (1)
  • where θ′ expresses the virtual angle information, θ_X represents the pitch angle in the spatial rotation angle, θ_Y represents the yaw angle, α represents the included angle, and θ_Z represents the roll angle.
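Reading the legend above literally, the virtual angle information combines the camera's Euler angles with the included angle α. Attaching α to the yaw component, as in the sketch below, is an assumption based on the stated deviation between the left-right head rotation and the yaw angle; the patent's formula (1) should be consulted for the exact form:

```python
def virtual_angle(theta_x, theta_y, theta_z, alpha):
    """Combine the camera's Euler angles (pitch, yaw, roll) with the
    camera-to-eye included angle alpha.

    Applying alpha to the yaw term is an illustrative assumption,
    not the patent's exact formula.
    """
    return (theta_x, theta_y + alpha, theta_z)
```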
  • the virtual visual range includes virtual location information
  • Step S3 includes a second process of generating virtual location information
  • the second process includes:
  • Step S31B: the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, selects the center of the user's brow as a preset reference point, and generates the first position information according to the offset between the image acquisition unit 41 and the preset reference point;
  • Step S32B: the image processing module adjusts the first position information according to the user's interpupillary distance, generates the second position information, and outputs the second position information as the virtual position information.
  • when the user rotates the head up and down, the roll angle of the image acquisition unit 41 changes accordingly, the angle of the up-and-down rotation corresponding to the roll angle;
  • when the user rotates the head back and forth, the pitch angle of the image acquisition unit 41 changes accordingly, the angle of the front-and-back rotation corresponding to the pitch angle; and when the user rotates the head left and right, the image acquisition unit 41 performs a circular rotation at the current position, and there is a deviation between the angle of the left-right rotation and the yaw angle.
  • the space coordinate system is constructed with the image acquisition unit 41 at this time as the origin, and the space rotation angle of the image acquisition unit 41 in the space coordinate system is determined.
  • the Euler angles are expressed as (θ_X, θ_Y, θ_Z).
  • the included angle α between the image acquisition unit 41 and the user's line of sight on the y-axis is then obtained, and the virtual angle information θ′ in the virtual visual range is determined by formula (1), ensuring that the angle information of the virtual human eye in the virtual space overlaps with the angle information of the user's eye in the real environment and that the virtual visual range corresponds to the user's real visual range, so as to realize the superposition and fusion of the virtual screen and the real scene.
  • the first position information is expressed by the following formula:
  • P′ = (−B_X, −B_Y, −B_Z)
  • where P′ expresses the first position information, and B_X, B_Y and B_Z represent the projections of the offset on the X, Y and Z axes respectively.
  • the second position information includes left-eye position information and right-eye position information
  • the second position information is expressed by the following formulas:
  • P″₁ = (−B_X, −B_Y − I/2, −B_Z)
  • P″₂ = (−B_X, −B_Y + I/2, −B_Z)
  • where P″₁ expresses the left-eye position information, P″₂ expresses the right-eye position information, B_X, B_Y and B_Z represent the projections of the offset on the X, Y and Z axes respectively, and I represents the interpupillary distance of the user.
  • the vSLAM algorithm is often used to fuse the image collected by the image acquisition unit 41 with the data collected by the sensors to calculate the six-degrees-of-freedom information of the device, and the position information initially determined in the virtual visual range is the current position of the image acquisition unit 41. Due to the large deviation between the position of the user's eyes in the real environment and the position information of the image acquisition unit 41, obvious screen misalignment may occur when the user wears the display device to observe the virtual screen.
  • a second process of determining the virtual position information is therefore set in step S3; considering that the distance between the lens 3 and the center of the eyebrows and the distances between the center of the eyebrows and the left and right eyes are basically fixed, the center of the brow is selected as the reference point.
  • In step S31B, the position information of the image acquisition unit 41 in the spatial rectangular coordinate system is first determined as (0, 0, 0), and the relative position between the image acquisition unit 41 and the center of the brow is then acquired, thereby determining the first position information, that is, the position information P′ = (−B_X, −B_Y, −B_Z) of the center of the eyebrows in the spatial rectangular coordinate system, which can be regarded as moving the generated virtual human eye position from the image acquisition unit 41 to the center of the eyebrows. In step S32B, the interpupillary distance I of the real user's eyes is obtained and the first position information is adjusted to generate the second position information, that is, the position information of the left and right eyes in the spatial rectangular coordinate system, P″₁ = (−B_X, −B_Y − I/2, −B_Z) and P″₂ = (−B_X, −B_Y + I/2, −B_Z), ensuring that the position information of the virtual human eye in the virtual space overlaps with the position information of the user's eyes in the real environment and that the virtual visual range corresponds to the user's real visual range, so as to realize the superposition and fusion of the virtual screen and the real scene.
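The two-step position transform of steps S31B and S32B follows directly from the coordinates given in the description (offset projections B_X, B_Y, B_Z and interpupillary distance I):

```python
def eye_positions(bx, by, bz, ipd):
    """Map the camera-to-brow offset (B_X, B_Y, B_Z) to virtual eye
    positions, per the coordinates given in the description.

    First position (brow centre): (-B_X, -B_Y, -B_Z).
    Second position splits the brow centre by half the interpupillary
    distance I along the Y axis, one half per eye.
    """
    brow = (-bx, -by, -bz)                 # first position information
    left = (-bx, -by - ipd / 2, -bz)       # left-eye position information
    right = (-bx, -by + ipd / 2, -bz)      # right-eye position information
    return brow, left, right
```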
The virtual visual range further includes virtual field of view information, and step S3 includes a third process of generating the virtual field of view information. The third process includes:

Step S31C: the image processing module generates a preset virtual picture and displays it on the lens 3;

Step S32C: the image processing module calculates the position difference between the curved edge of the preset virtual picture and the user;

Step S33C: the image processing module determines the virtual field of view information according to the position difference.

In other words, in the process of determining the virtual field of view information, the portable terminal 4 forms a preset virtual picture on the lens 3 through the display unit, calculates the distance from each curved edge of the preset virtual picture to the corresponding eye of the user, and finally determines the virtual field of view information.

Abstract

The present invention relates to the field of augmented reality, and provides an augmented reality display system and method. The system comprises a head-mounted display frame, a groove, a lens, and a portable terminal. When the portable terminal is placed in the groove, a first face of the portable terminal faces the lens. A processing unit specifically comprises: an inertial measurement module; a pose processing module; and an image processing module, connected to the pose processing module and used for generating a virtual visual range and a virtual picture according to the current pose of the portable terminal and reflecting the virtual picture to the user for viewing by means of the display unit and the lens. The beneficial effects of this technical solution are that positioning can be performed quickly, a virtual picture can be generated, and spatial correction can be performed.

Description

Augmented Reality Display System and Method

Technical Field

The present invention relates to the field of augmented reality, and in particular to an augmented reality display system and method.

Background Art

Augmented reality technology identifies and locates scenes and objects in the real world and places virtual three-dimensional objects into the real scene in real time. The goal of this technology is to fuse the virtual world with the real world and allow the two to interact. Augmented reality mainly relies on two key technologies: one is the real-time rendering and display of three-dimensional models, and the other is the perception of the shape and position of real objects.
Current augmented reality devices usually perceive position in one of two ways:

(1) Positioning with the mobile phone's three-degree-of-freedom gyroscope; however, the user cannot move closer to or farther away from the virtual three-dimensional object;

(2) Positioning by calculating the relative position between the mobile phone's front camera and a preset picture; however, the picture is required to achieve six-degree-of-freedom positioning, and picture-based augmented reality is not very stable. The front camera cannot see the picture directly; it sees a distorted view of the picture through the front lens, and the three-dimensional model is displayed only while the picture remains within the camera's visible range. This makes tracking and positioning unstable, and the user's range of movement is limited by the location of the picture.

It can be seen that there is still a lack of a display device and display method that can quickly perform positioning, generate a virtual picture, and perform spatial correction.
Summary of the Invention

In view of the above defects of the prior art, the present invention provides an augmented reality display system, comprising:

a head-mounted display frame, which is ring-shaped and is worn by the user;

a groove, the opening of which faces obliquely upward, one side of the groove being connected to the head-mounted display frame;

a lens, arranged below the groove and connected to the other side of the groove, the lens being made of a semi-reflective, semi-transparent material;

a portable terminal having a first face provided with a display unit and a second face provided with an image acquisition unit, the first face and the second face facing away from each other; the portable terminal further comprises a processing unit for processing the real-time images collected by the image acquisition unit and displaying them through the display unit;

the size of the portable terminal is adapted to the size of the groove, and when the portable terminal is placed in the groove, the first face of the portable terminal faces the lens.

The processing unit specifically comprises:

an inertial measurement module for collecting and outputting real-time motion data;

a pose processing module, connected to the inertial measurement module, for determining the current pose of the portable terminal according to the real-time images collected by the image acquisition unit and the real-time motion data at the corresponding moments;

an image processing module, connected to the pose processing module, for generating a virtual visual range and a virtual picture according to the current pose of the portable terminal, the virtual picture being reflected to the user for viewing through the display unit and the lens.
Preferably, the image acquisition unit obtains the real-time image by capturing a feature point region that includes a plurality of feature points;

the processing unit further comprises:

a feature point processing module, connected to the image acquisition unit and the pose processing module respectively, for obtaining the feature points in the real-time image, analyzing the position of the feature point region, and outputting the feature points to the pose processing module according to the analysis result;

the pose processing module uses the position of the feature point region as a reference and determines the current pose of the portable terminal according to the real-time motion data at the corresponding moment.
Preferably, the virtual visual range includes virtual angle information;

the image processing module comprises:

a first processing component, connected to the pose processing module, for constructing, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit as the origin, determining the spatial rotation angle of the image acquisition unit and the included angle between the image acquisition unit and the user, generating the virtual angle information according to the spatial rotation angle and the included angle, and outputting it as part of the virtual visual range.
Preferably, the virtual visual range includes virtual position information;

the image processing module comprises:

a second processing component, connected to the pose processing module, for constructing, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit as the origin, selecting the center of the user's brow as a preset reference point, generating the virtual position information according to the offset between the portable terminal and the preset reference point and the user's pupil distance, and outputting it as part of the virtual visual range.
Preferably, the virtual visual range includes virtual field of view information;

the image processing module comprises:

a third processing component, connected to the pose processing module, for generating a preset virtual picture, generating the virtual field of view information according to the position difference between the curved edge of the preset virtual picture and the user, and outputting it as part of the virtual visual range.
An augmented reality display method, applied to the display system described in any of the above, wherein a head-mounted display frame, a groove, a lens and a portable terminal are provided in the display device;

the display method comprises:

Step S1: the image acquisition unit captures a real-time image of the plane directly above the portable terminal, and

the inertial measurement module collects real-time motion data;

Step S2: the pose processing module obtains the current pose of the portable terminal by processing the real-time image and the real-time motion data;

Step S3: the image processing module generates a virtual visual range and a virtual picture according to the current pose and sends the virtual picture to the display unit for display;

Step S4: the virtual picture displayed on the display unit is reflected by the lens to the user for viewing.
Preferably, step S1 includes:

Step S11: the image acquisition unit obtains the real-time image by capturing a feature point region that includes a plurality of feature points;

Step S12: the image acquisition unit obtains the feature points in the real-time image and analyzes whether the position of the feature point region meets the viewing angle requirement:

if so, go to step S2;

if not, go to step S13;

Step S13: the image acquisition unit generates a prompt instruction indicating that there are too few feature points and feeds it back to the user, then returns to step S11.
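A minimal sketch of the S11-S13 loop. The patent analyzes whether the feature point region satisfies the viewing angle requirement; here a plain count threshold stands in for that analysis, and MIN_FEATURE_POINTS, the frame source and the extraction function are all illustrative placeholders.

```python
# Keep capturing until the extracted feature points satisfy the
# requirement, prompting the user each time there are too few.

MIN_FEATURE_POINTS = 20  # assumed threshold, not from the patent

def acquire_features(frames, extract_points, notify):
    for frame in frames:                       # step S11: capture a frame
        points = extract_points(frame)
        if len(points) >= MIN_FEATURE_POINTS:  # step S12: requirement met?
            return points                      # yes -> proceed to step S2
        notify("too few feature points; please move the device")  # step S13
    return None

# Toy run: the first frame is too sparse, the second one passes.
frames = [["pt"] * 5, ["pt"] * 25]
result = acquire_features(frames, lambda f: f, print)
```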
Preferably, in step S2 the pose processing module determines the current pose by means of a vSLAM algorithm.
Preferably, the virtual visual range includes virtual angle information;

step S3 includes a first process of generating the virtual angle information;

the first process includes:

Step S31A: the image processing module constructs, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit as the origin, and determines the spatial rotation angle of the image acquisition unit in this coordinate system;

Step S32A: the included angle between the image acquisition unit and the user is obtained;

Step S33A: the virtual angle information is generated according to the spatial rotation angle and the included angle.
Preferably, the virtual angle information is expressed by the following formula:

θ = (θ_X, θ_Y - α, θ_Z)

where:

θ denotes the virtual angle information;

θ_X denotes the pitch angle of the spatial rotation angle;

θ_Y denotes the yaw angle of the spatial rotation angle;

α denotes the included angle;

θ_Z denotes the roll angle of the spatial rotation angle.
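The formula can be applied directly: pitch and roll are kept, and the camera-to-user included angle α is subtracted from the yaw. The angle values in the example call below are illustrative (in degrees), not taken from the patent.

```python
# Direct application of theta = (theta_X, theta_Y - alpha, theta_Z).

def virtual_angle(pitch, yaw, roll, alpha):
    """Return the virtual angle tuple (pitch, yaw - alpha, roll)."""
    return (pitch, yaw - alpha, roll)

# Example: the camera yaws 45 deg while facing away from the user
# (alpha = 180 deg), so the virtual yaw becomes -135 deg.
theta = virtual_angle(10.0, 45.0, 0.0, 180.0)
```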
Preferably, the virtual visual range includes virtual position information;

step S3 includes a second process of generating the virtual position information;

the second process includes:

Step S31B: the image processing module constructs, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit as the origin, selects the center of the user's brow as a preset reference point, and generates first position information according to the offset between the image acquisition unit and the preset reference point;

Step S32B: the image processing module adjusts the first position information according to the user's pupil distance, generates second position information, and outputs the second position information as the virtual position information.
Preferably, the first position information is expressed by the following formula:

χ′ = (-B_X, -B_Y, -B_Z)

where:

χ′ denotes the first position information;

B_X denotes the projection of the offset on the X axis;

B_Y denotes the projection of the offset on the Y axis;

B_Z denotes the projection of the offset on the Z axis.
Preferably, the second position information includes left-eye position information and right-eye position information;

the second position information is expressed by the following formulas:

χ″_1 = (-B_X, -B_Y - I/2, -B_Z)

χ″_2 = (-B_X, -B_Y + I/2, -B_Z)

where:

χ″_1 denotes the left-eye position information;

χ″_2 denotes the right-eye position information;

B_X denotes the projection of the offset on the X axis;

B_Y denotes the projection of the offset on the Y axis;

I denotes the user's pupil distance;

B_Z denotes the projection of the offset on the Z axis.
Preferably, the virtual visual range includes virtual field of view information;

step S3 includes a third process of generating the virtual field of view information;

the third process includes:

Step S31C: the image processing module generates a preset virtual picture and displays it on the lens;

Step S32C: the image processing module calculates the position difference between the curved edge of the preset virtual picture and the user;

Step S33C: the image processing module determines the virtual field of view information according to the position difference.
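A hedged sketch of step S33C. The patent derives the virtual field of view from the position difference between the curved edge of the preset virtual picture and the user's eye; one common way to turn such a half-extent/distance pair into an angle, assumed here rather than stated in the patent, is fov = 2 * atan(half_extent / distance).

```python
# Convert the edge-to-eye geometry into a field of view angle.
import math

def field_of_view_deg(half_extent, eye_distance):
    """Full field of view (degrees) subtended by a picture whose edge
    lies half_extent from its center, viewed from eye_distance away."""
    return math.degrees(2.0 * math.atan(half_extent / eye_distance))

fov = field_of_view_deg(0.5, 0.5)  # illustrative: 0.5 m half-width at 0.5 m
```

With a half-extent equal to the viewing distance the result is a 90-degree field of view, which gives a quick sanity check on the geometry.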
The beneficial effects of the above technical solution are that positioning can be performed quickly, a virtual picture can be generated, and spatial correction can be performed.

Brief Description of the Drawings
Fig. 1 is a structural schematic diagram of a preferred embodiment of the present invention;

Fig. 2 is a schematic diagram of a preferred embodiment of the present invention before the portable terminal is placed;

Fig. 3 is a schematic diagram of a preferred embodiment of the present invention after the portable terminal is placed;

Fig. 4 is a schematic diagram of the overall flow in a preferred embodiment of the present invention;

Fig. 5 is a schematic flowchart of step S1 in a preferred embodiment of the present invention;

Fig. 6 is a schematic flowchart of the first process in a preferred embodiment of the present invention;

Fig. 7 is a schematic flowchart of the second process in a preferred embodiment of the present invention;

Fig. 8 is a schematic flowchart of the third process in a preferred embodiment of the present invention;

Fig. 9 is a structural schematic diagram of the third process in a preferred embodiment of the present invention.
Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present invention.

It should be noted that, in the absence of conflict, the embodiments of the present invention and the features in the embodiments can be combined with each other.
An augmented reality display system, as shown in Figs. 1 to 3, includes:

a head-mounted display frame 1, which is ring-shaped and is worn by the user;

a groove 2, the opening of which faces obliquely upward, one side of the groove 2 being connected to the head-mounted display frame 1;

a lens 3, arranged below the groove 2 and connected to the other side of the groove 2, the lens 3 being made of a semi-reflective, semi-transparent material;

a portable terminal 4 having a first face provided with a display unit and a second face provided with an image acquisition unit 41, the first face and the second face facing away from each other; the portable terminal 4 further includes a processing unit for processing the real-time images collected by the image acquisition unit 41 and displaying them through the display unit;

the size of the portable terminal 4 is adapted to the size of the groove 2, and when the portable terminal 4 is placed in the groove 2, the first face of the portable terminal 4 faces the lens 3.

The processing unit specifically includes:

an inertial measurement module for collecting and outputting real-time motion data;

a pose processing module, connected to the inertial measurement module, for determining the current pose of the portable terminal 4 according to the real-time images collected by the image acquisition unit 41 and the real-time motion data at the corresponding moments;

an image processing module, connected to the pose processing module, for generating a virtual visual range and a virtual picture according to the current pose of the portable terminal 4 and reflecting the virtual picture to the user for viewing through the display unit and the lens 3.
Specifically, display devices in the prior art often arrange the processing unit to exchange data inside the display device and rely on the mobile phone's three-degree-of-freedom gyroscope or preset pictures for positioning. As a result, the user cannot move closer to or farther away from the virtual three-dimensional object, tracking and positioning are unstable, and the user's range of movement is limited by the location of the picture.

This technical solution provides an augmented reality display system in which no data is exchanged between the portable terminal 4 and the head-mounted display frame 1, the groove 2 or the lens 3. The portable terminal 4 collects real-time images through the image acquisition unit 41 and real-time motion data through the inertial measurement module, then determines the current pose through the pose processing module and generates the virtual visual range and the virtual picture according to the current pose through the image processing module, and finally reflects the virtual picture to the user through the display unit and the lens 3. The user observes the virtual picture and the real environment through the lens 3, so that the virtual picture and the real environment are superimposed and fused, achieving the purpose of augmented reality. A mobile phone may be chosen as the portable terminal 4 so as to quickly collect real-time images and real-time motion data and generate the virtual visual range and the virtual picture.

Further, in the process of generating the virtual visual range, the image processing module needs to obtain the current pose of the portable terminal 4 and construct a spatial rectangular coordinate system, determining the virtual visual range from the user's real visual range so as to generate an appropriate virtual picture and display it on the lens 3.

Further, when the portable terminal 4 is placed in the groove 2, the first face of the portable terminal 4 faces the lens 3, and the area of the groove 2 that fits against the first face of the portable terminal 4 may use a hollowed-out design or be made of a light-transmitting material, so that the virtual picture is displayed on the lens 3 for the user to view.

Further, a hook 21 with a matching fixing device, such as a fixing strap, may be provided on one side of the groove to help secure the portable terminal 4, so as to prevent changes in the position and angle of the portable terminal 4 caused by posture changes during use.
In a preferred embodiment of the present invention, the image acquisition unit 41 obtains the real-time image by capturing a feature point region that includes a plurality of feature points;

the processing unit further includes:

a feature point processing module, connected to the image acquisition unit 41 and the pose processing module respectively, for obtaining the feature points in the real-time image, analyzing the position of the feature point region, and outputting the feature points to the pose processing module according to the analysis result;

the pose processing module uses the position of the feature point region as a reference and determines the current pose of the portable terminal 4 according to the real-time motion data at the corresponding moment.

Specifically, the image acquisition unit 41 collects real-time images and outputs them to the feature point processing module, which extracts the feature points in the images and analyzes the region they correspond to. According to the analysis result, instructions are generated so that the image acquisition unit 41 collects more spatially distinctive feature points, thereby finally determining the corresponding region of the virtual picture.
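The patent states only that the feature point region serves as an absolute reference and that the IMU supplies real-time motion data, fused vSLAM-style, to yield the current pose. A full vSLAM pipeline is out of scope here, so the sketch below shows one way such a blend can work, a simple complementary filter on yaw; the filter, its weight and the angle values are illustrative assumptions, not the patent's method.

```python
# Blend fast gyro-integrated yaw (drifts over time) with the slower
# but absolute image-derived reference from the feature point region.

def fuse_yaw(gyro_yaw, image_yaw, weight=0.98):
    """Complementary filter: trust the gyro short-term, the image long-term."""
    return weight * gyro_yaw + (1.0 - weight) * image_yaw

fused = fuse_yaw(30.0, 28.0)  # gyro drifted to 30 deg, image says 28 deg
```

Each image fix pulls the integrated gyro estimate a small step back toward the absolute reference, which bounds drift without discarding the IMU's high update rate.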
In a preferred embodiment of the present invention, the virtual visual range includes virtual angle information;

the image processing module includes:

a first processing component, connected to the pose processing module, for constructing, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit 41 as the origin, determining the spatial rotation angle of the image acquisition unit 41 and the included angle between the image acquisition unit 41 and the user, generating the virtual angle information according to the spatial rotation angle and the included angle, and outputting it as part of the virtual visual range.

In a preferred embodiment of the present invention, the virtual visual range includes virtual position information;

the image processing module includes:

a second processing component, connected to the pose processing module, for constructing, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit 41 as the origin, selecting the center of the user's brow as a preset reference point, generating the virtual position information according to the offset between the portable terminal 4 and the preset reference point and the user's pupil distance, and outputting it as part of the virtual visual range.

In a preferred embodiment of the present invention, the virtual visual range includes virtual field of view information;

the image processing module includes:

a third processing component, connected to the pose processing module, for generating a preset virtual picture, generating the virtual field of view information according to the position difference between the curved edge of the preset virtual picture and the user, and outputting it as part of the virtual visual range.
An augmented reality display method, applied to any of the above display systems, as shown in Fig. 4, wherein a head-mounted display frame 1, a groove 2, a lens 3 and a portable terminal 4 are provided in the display device;

the display method includes:

Step S1: the image acquisition unit 41 captures a real-time image of the plane directly above the portable terminal 4, and

the inertial measurement module collects real-time motion data;

Step S2: the pose processing module obtains the current pose of the portable terminal 4 by processing the real-time image and the real-time motion data;

Step S3: the image processing module generates a virtual visual range and a virtual picture according to the current pose and sends the virtual picture to the display unit for display;

Step S4: the virtual picture displayed on the display unit is reflected by the lens 3 to the user for viewing.
Specifically, to achieve the fusion of the virtual picture with the real picture, an augmented reality display method is provided. In steps S1 and S2, the user wears the head-mounted display frame 1 with the eyes facing the lens 3 of the display device; the image acquisition unit 41 of the portable terminal 4 faces upward and collects real-time images of the space above the portable terminal 4, the inertial measurement module measures the real-time motion data of the portable terminal 4, and the current pose of the portable terminal 4 is finally determined.

Augmented reality needs to present the virtual picture on the lens 3 so that it is superimposed on and fused with the real image the user sees through the lens 3; the image processing module must therefore ensure that the generated virtual picture is accurately incident on the user's eyes. Thus, in step S3, the user's real visual range and the virtual visual range are determined according to the current pose of the portable terminal 4. Here, the real visual range refers to the range of sight within which the user's eyes observe the real environment ahead through the lens 3, while the virtual visual range refers to the range of sight of the virtual human eye simulated in the virtual space by the image processing module while generating the virtual picture. A suitable virtual picture can then be generated using the virtual visual range; when the virtual picture is generated, the virtual visual range is determined according to the real visual range.

The virtual visual range here includes virtual position information, virtual angle information and virtual field of view information. When these correspond one-to-one with the real position information, real angle information and real field of view information of the real visual range, the virtual human eye in the virtual space coincides with the user's eyes in the real environment, and the virtual visual range corresponds to the user's real visual range. After the virtual picture is reflected onto the lens 3, it reaches the user's eyes together with the real environment, so that the virtual picture the user observes better matches the ideal effect.
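The component-by-component correspondence described above can be summarized in a small container: the virtual visual range that the image processing module builds must match the user's real visual range in position, angle and field of view. The field names and the numeric values below are illustrative, not from the patent.

```python
# Group the three components of a visual range and check that the
# virtual range matches the real one component by component.
from dataclasses import dataclass

@dataclass
class VisualRange:
    position: tuple   # eye position in the spatial rectangular coordinates
    angle: tuple      # (pitch, yaw, roll)
    fov_deg: float    # field of view angle

real = VisualRange((-0.02, -0.032, -0.08), (10.0, -135.0, 0.0), 90.0)
virtual = VisualRange(real.position, real.angle, real.fov_deg)
```

Dataclass equality compares field by field, so `virtual == real` expresses exactly the one-to-one correspondence the method requires.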
In a preferred embodiment of the present invention, as shown in FIG. 5, step S1 includes:
Step S11: the image acquisition unit 41 obtains a real-time image by capturing a feature point region containing a plurality of feature points;
Step S12: the image acquisition unit 41 extracts the feature points in the real-time image and analyzes whether the position of the feature point region meets the viewing-angle requirement:
If so, go to step S2;
If not, go to step S13;
Step S13: the image acquisition unit 41 generates a prompt indicating that there are too few feature points, feeds it back to the user, and then returns to step S11.
Specifically, after the user puts on the headset frame 1, a preset program in the portable terminal 4 guides the user to walk around the space. The image acquisition unit 41 starts and captures images of the area above, extracts the feature points in the real-time image, and analyzes whether parameters such as the area covered by the feature points and the robustness of the feature points meet the viewing-angle requirement. If so, the method proceeds to step S2; if not, a guidance instruction is generated to direct the user to change position, heading, and gaze direction so as to fill in regions of the captured image that contain few feature points, finally determining the exact position of the virtual picture in the real scene.
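The loop in steps S11-S13 amounts to a sufficiency test on the extracted feature points: proceed when they are numerous and well spread, otherwise prompt the user and re-capture. A minimal sketch of such a test follows; the thresholds `MIN_FEATURES` and `MIN_COVERAGE` and the bounding-box coverage metric are illustrative assumptions, not values specified in the patent.

```python
MIN_FEATURES = 50    # assumed minimum number of feature points
MIN_COVERAGE = 0.6   # assumed minimum fraction of the image they must span

def coverage(points, width, height):
    """Fraction of the image area spanned by the points' bounding box."""
    if not points:
        return 0.0
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys)) / (width * height)

def meets_view_requirement(points, width, height):
    """True when the feature point region satisfies the viewing-angle check."""
    return (len(points) >= MIN_FEATURES
            and coverage(points, width, height) >= MIN_COVERAGE)

# A dense grid of points passes (go to S2); a sparse cluster fails (go to S13).
dense = [(x, y) for x in range(0, 640, 40) for y in range(0, 480, 40)]
sparse = [(10, 10), (20, 20), (15, 12)]
print(meets_view_requirement(dense, 640, 480))   # True
print(meets_view_requirement(sparse, 640, 480))  # False
```

In the failing case the method would emit the guidance instruction of step S13 rather than proceed to pose estimation.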
Further, to control the specific position of the virtual picture, feature points can also be preset by an external feature point device. The feature point device comprises an emitting unit and a diffusion unit arranged above the emitting unit. The diffusion unit may be a transparent disc to which a plurality of feature points are added, and the emitting unit may be a laser emitter that projects the feature points of the diffusion unit onto the space above the display device. The image processing module then performs feature point analysis on the overhead image captured by the image acquisition unit 41.
In a preferred embodiment of the present invention, in step S2 the image processing module determines the current pose through a vSLAM algorithm.
In a preferred embodiment of the present invention, the virtual visual range includes virtual angle information;
Step S3 includes a first process of generating the virtual angle information;
The first process, as shown in FIG. 6, includes:
Step S31A: the image processing module constructs, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit 41 as its origin, and determines the spatial rotation angle of the image acquisition unit 41 in that coordinate system;
Step S32A: obtaining the angle between the image acquisition unit 41 and the user;
Step S33A: generating the virtual angle information from the spatial rotation angle and the included angle.
In a preferred embodiment of the present invention, the virtual angle information is expressed by the following formula:
θ = (θ_X, θ_Y - α, θ_Z)      (1)
where:
θ denotes the virtual angle information;
θ_X denotes the pitch angle of the spatial rotation angle;
θ_Y denotes the yaw angle of the spatial rotation angle;
α denotes the included angle;
θ_Z denotes the roll angle of the spatial rotation angle.
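Formula (1) keeps the camera's pitch and roll unchanged and corrects only the yaw by the camera-to-sight angle α. A minimal sketch, with angles in degrees and purely illustrative values:

```python
def virtual_angle(theta_x, theta_y, theta_z, alpha):
    """Formula (1): theta = (theta_X, theta_Y - alpha, theta_Z)."""
    return (theta_x, theta_y - alpha, theta_z)

# Pitch 10 deg, yaw 30 deg, roll 0 deg, camera-to-sight angle 5 deg:
print(virtual_angle(10.0, 30.0, 0.0, 5.0))  # (10.0, 25.0, 0.0)
```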
In a preferred embodiment of the present invention, the virtual visual range includes virtual position information;
Step S3 includes a second process of generating the virtual position information;
The second process, as shown in FIG. 7, includes:
Step S31B: the image processing module constructs, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit 41 as its origin, selects the center of the user's brow as a preset reference point, and generates first position information from the offset between the image acquisition unit 41 and the preset reference point;
Step S32B: the image processing module adjusts the first position information according to the user's interpupillary distance, generates second position information, and outputs the second position information as the virtual position information.
Specifically, when the user tilts the head up and down, the roll angle of the image acquisition unit 41 changes accordingly, the up-down rotation angle corresponding to the roll angle; when the user tilts the head back and forth, the pitch angle of the image acquisition unit 41 changes accordingly, the back-and-forth rotation angle corresponding to the pitch angle; and when the user turns the head left and right, the image acquisition unit 41 rotates circularly about its current position, so there is a deviation between the left-right rotation angle and the yaw angle.
Therefore, a spatial coordinate system is constructed with the image acquisition unit 41 at that moment as its origin, and the spatial rotation angle of the image acquisition unit 41 in that coordinate system is determined, expressed here as the Euler angles (θ_X, θ_Y, θ_Z). The angle α, about the y-axis, between the image acquisition unit 41 and the user's line of sight is then obtained, and the virtual angle information θ of the virtual visual range is computed by formula (1). Based on this virtual angle information, the angle of the virtual eye in virtual space coincides with the angle of the user's eye in the real environment, and the virtual visual range corresponds to the user's real visual range, thereby achieving the superposition and fusion of the virtual picture and the real scene.
In a preferred embodiment of the present invention, the first position information is expressed by the following formula:
χ′ = (-B_X, -B_Y, -B_Z)     (2)
where:
χ′ denotes the first position information;
B_X denotes the projection of the offset on the X axis;
B_Y denotes the projection of the offset on the Y axis;
B_Z denotes the projection of the offset on the Z axis.
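Formula (2) negates each component of the camera-to-brow offset, moving the virtual eye from the camera origin to the brow center. A minimal sketch with illustrative offset values (in meters):

```python
def first_position(offset):
    """Formula (2): chi' = (-B_X, -B_Y, -B_Z) for an offset (B_X, B_Y, B_Z)."""
    bx, by, bz = offset
    return (-bx, -by, -bz)

print(first_position((0.02, 0.01, 0.05)))  # (-0.02, -0.01, -0.05)
```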
In a preferred embodiment of the present invention, the second position information includes left-eye position information and right-eye position information;
The second position information is expressed by the following formulas:
χ″_1 = (-B_X, -B_Y - I/2, -B_Z)
χ″_2 = (-B_X, -B_Y + I/2, -B_Z)      (3)
where:
χ″_1 denotes the left-eye position information;
χ″_2 denotes the right-eye position information;
B_X denotes the projection of the offset on the X axis;
B_Y denotes the projection of the offset on the Y axis;
I denotes the interpupillary distance of the user;
B_Z denotes the projection of the offset on the Z axis.
Specifically, in the prior art a vSLAM algorithm is commonly used to fuse the images captured by the image acquisition unit 41 with the data collected by the sensors to compute the six-degree-of-freedom information of the device, and the position information thus determined for the virtual visual range is the current position of the image acquisition unit 41. Since the position of the user's eyes in the real environment deviates considerably from that of the image acquisition unit 41, obvious picture misalignment appears when the user wears the display device to observe the virtual picture.
Therefore, a second process of determining the virtual position information is provided in step S3. Considering that, while the user wears the headset frame 1, the distance between the lens 3 and the center of the brow and the distances between the brow center and the left and right eyes remain essentially constant, the brow center is selected as the reference point. In step S31B, the position of the image acquisition unit 41 in the spatial rectangular coordinate system is first determined as (0, 0, 0); the relative position between the image acquisition unit 41 and the brow center is then obtained, determining the first position information, i.e. the position of the brow center in that coordinate system, (-B_X, -B_Y, -B_Z). At this point the generated virtual eye can be regarded as having moved from the image acquisition unit 41 to the brow center. In step S32B the real user's interpupillary distance I is obtained and the first position information is adjusted to generate the second position information, i.e. the positions of the left and right eyes in the coordinate system, (-B_X, -B_Y - I/2, -B_Z) and (-B_X, -B_Y + I/2, -B_Z). This ensures that the position of the virtual eye in virtual space coincides with the position of the user's eye in the real environment and that the virtual visual range corresponds to the user's real visual range, thereby achieving the superposition and fusion of the virtual picture and the real scene.
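The two-step adjustment of steps S31B-S32B can be sketched as follows; the camera-to-brow offset and the interpupillary distance are illustrative values, with the Y axis taken as the left-right axis as in the formulas above:

```python
def eye_positions(offset, ipd):
    """Shift the negated brow offset by +/- half the interpupillary
    distance along the Y axis to obtain per-eye positions."""
    bx, by, bz = offset
    left = (-bx, -by - ipd / 2, -bz)    # chi''_1
    right = (-bx, -by + ipd / 2, -bz)   # chi''_2
    return left, right

# Assumed 2 cm forward / 0 cm lateral / 5 cm vertical offset,
# and a 64 mm interpupillary distance:
left, right = eye_positions((0.02, 0.0, 0.05), ipd=0.064)
print(left)   # (-0.02, -0.032, -0.05)
print(right)  # (-0.02, 0.032, -0.05)
```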
In a preferred embodiment of the present invention, the virtual visual range includes virtual field-of-view information;
Step S3 includes a third process of generating the virtual field-of-view information;
The third process, as shown in FIG. 8, includes:
Step S31C: the image processing module generates a preset virtual picture and displays it on the lens 3;
Step S32C: the image processing module calculates the position difference between the curved edge of the preset virtual picture and the user;
Step S33C: the image processing module determines the virtual field-of-view information from the position difference.
Specifically, as shown in FIG. 9, in determining the virtual field-of-view information, the portable terminal 4 forms a preset virtual picture on the lens 3 through the display unit, calculates the distances from the curved edges of the preset virtual picture to the corresponding eye of the user, and finally determines the virtual field-of-view information.
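One plausible reading of the edge-to-eye computation is the usual pinhole relation: for a picture of width w at distance d from the eye, the full horizontal field of view is 2·atan(w / 2d). The flat-screen simplification (the patent describes a curved edge) and the dimensions below are assumptions for illustration:

```python
import math

def field_of_view(picture_width, eye_to_picture):
    """Full horizontal FOV, in degrees, for a picture centered on the viewing axis."""
    return math.degrees(2 * math.atan((picture_width / 2) / eye_to_picture))

# Assumed 12 cm wide virtual picture viewed from 8 cm away:
print(round(field_of_view(0.12, 0.08), 1))  # 73.7
```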
The above are merely preferred embodiments of the present invention and do not thereby limit its implementations or scope of protection. Those skilled in the art should appreciate that all solutions obtained by equivalent substitution or obvious variation based on the description and drawings of the present invention fall within the scope of protection of the present invention.

Claims (14)

  1. An augmented reality display system, characterized by comprising:
    a headset frame, the headset frame being annular and worn by the user;
    a groove, the opening of the groove facing obliquely upward, one side of the groove being connected to the headset frame;
    a lens, the lens being arranged below the groove and connected to the other side of the groove, the lens being made of a semi-reflective, semi-transparent material;
    a portable terminal, the portable terminal having a first face provided with a display unit and a second face provided with an image acquisition unit, the first face and the second face facing away from each other, the portable terminal further comprising a processing unit for processing the real-time images captured by the image acquisition unit and displaying them through the display unit;
    the size of the portable terminal being adapted to the size of the groove, the first face of the portable terminal facing the lens when the portable terminal is placed in the groove;
    the processing unit specifically comprising:
    an inertial measurement module for collecting and outputting real-time motion data;
    a pose processing module, connected to the inertial measurement module, for determining the current pose of the portable terminal according to the real-time image captured by the image acquisition unit and the real-time motion data at the corresponding time;
    an image processing module, connected to the pose processing module, for generating a virtual visual range and a virtual picture according to the current pose of the portable terminal and reflecting the virtual picture, via the display unit and the lens, to the user for viewing.
  2. The augmented reality display system according to claim 1, wherein the real-time image includes feature points;
    the processing unit further comprises:
    a feature point processing module, connected to the image acquisition unit and to the pose processing module, for obtaining the feature points in the real-time image, analyzing the area corresponding to the feature points, and outputting the feature points to the pose processing module according to the analysis result;
    the pose processing module determining the current pose of the portable terminal according to the area corresponding to the feature points and the real-time motion data at the corresponding time.
  3. The augmented reality display system according to claim 1, wherein the virtual visual range includes virtual angle information;
    the image processing module comprises:
    a first processing component, connected to the pose processing module, for constructing, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit as its origin, determining the spatial rotation angle of the image acquisition unit and the angle between the spatial rotation angle and the user, and generating the virtual angle information from the spatial rotation angle and the included angle;
    an image generation component, connected to the first processing component, for generating the virtual picture according to the virtual angle information and outputting the virtual picture to the display unit, the virtual picture being reflected via the display unit and the lens to the user for viewing.
  4. The augmented reality display system according to claim 1, wherein the virtual visual range includes virtual position information;
    the image processing module comprises:
    a second processing component, connected to the pose processing module, for constructing, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit as its origin, selecting the center of the user's brow as a preset reference point, and generating the virtual position information from the offset between the portable terminal and the preset reference point and the user's interpupillary distance;
    an image generation component, connected to the second processing component, for generating the virtual picture according to the virtual position information and outputting the virtual picture to the display unit, the virtual picture being reflected via the display unit and the lens to the user for viewing.
  5. The augmented reality display system according to claim 1, wherein the virtual visual range includes virtual field-of-view information;
    the image processing module comprises:
    a third processing component, connected to the pose processing module, for generating a preset virtual picture and generating the virtual field-of-view information from the position difference between the curved edge of the preset virtual picture and the user;
    an image generation component, connected to the third processing component, for generating the virtual picture according to the virtual field-of-view information and outputting the virtual picture to the display unit, the virtual picture being reflected via the display unit and the lens to the user for viewing.
  6. An augmented reality display method, applied to the display system according to any one of claims 1 to 5, characterized in that a headset frame, a groove, a lens, and a portable terminal are provided in the display device;
    the display method comprising:
    step S1, the image acquisition unit captures a real-time image of the plane directly above the portable terminal, and
    the inertial measurement module collects real-time motion data;
    step S2, the pose processing module obtains the current pose of the portable terminal by processing the real-time image and the real-time motion data;
    step S3, the image processing module generates a virtual visual range and a virtual picture according to the current pose and sends the virtual picture to the display unit for display;
    step S4, the virtual picture displayed on the display unit is reflected by the lens to the user for viewing.
  7. The display method according to claim 6, wherein step S1 includes:
    step S11, the image acquisition unit obtains the real-time image by capturing a feature point region containing a plurality of feature points;
    step S12, the image acquisition unit extracts the feature points in the real-time image and analyzes whether the area corresponding to the feature points meets the viewing-angle requirement:
    if so, go to step S2;
    if not, go to step S13;
    step S13, the image acquisition unit generates a prompt indicating that the feature points are too few, feeds it back to the user, and then returns to step S11.
  8. The display method according to claim 6, wherein in step S2 the image processing module determines the current pose through a vSLAM algorithm.
  9. The display method according to claim 6, wherein the virtual visual range includes virtual angle information;
    step S3 includes a first process of generating the virtual angle information;
    the first process including:
    step S31A, the image processing module constructs, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit as its origin, and determines the spatial rotation angle of the image acquisition unit in that coordinate system;
    step S32A, obtaining the angle between the image acquisition unit and the user;
    step S33A, generating the virtual angle information from the spatial rotation angle and the included angle.
  10. The display method according to claim 9, wherein the virtual angle information is expressed by the following formula:
    θ = (θ_X, θ_Y - α, θ_Z)
    where:
    θ denotes the virtual angle information;
    θ_X denotes the pitch angle of the spatial rotation angle;
    θ_Y denotes the yaw angle of the spatial rotation angle;
    α denotes the included angle;
    θ_Z denotes the roll angle of the spatial rotation angle.
  11. The display method according to claim 6, wherein the virtual visual range includes virtual position information;
    step S3 includes a second process of generating the virtual position information;
    the second process including:
    step S31B, the image processing module constructs, according to the current pose, a spatial rectangular coordinate system with the image acquisition unit as its origin, selects the center of the user's brow as a preset reference point, and generates first position information from the offset between the image acquisition unit and the preset reference point;
    step S32B, the image processing module adjusts the first position information according to the user's interpupillary distance, generates second position information, and outputs the second position information as the virtual position information.
  12. The display method according to claim 11, wherein the first position information is expressed by the following formula:
    χ′ = (-B_X, -B_Y, -B_Z)
    where:
    χ′ denotes the first position information;
    B_X denotes the projection of the offset on the X axis;
    B_Y denotes the projection of the offset on the Y axis;
    B_Z denotes the projection of the offset on the Z axis.
  13. The display method according to claim 11, wherein the second position information includes left-eye position information and right-eye position information;
    the second position information being expressed by the following formulas:
    χ″_1 = (-B_X, -B_Y - I/2, -B_Z)
    χ″_2 = (-B_X, -B_Y + I/2, -B_Z)
    where:
    χ″_1 denotes the left-eye position information;
    χ″_2 denotes the right-eye position information;
    B_X denotes the projection of the offset on the X axis;
    B_Y denotes the projection of the offset on the Y axis;
    I denotes the interpupillary distance of the user;
    B_Z denotes the projection of the offset on the Z axis.
  14. The display method according to claim 6, wherein the virtual visual range includes virtual field-of-view information;
    step S3 includes a third process of generating the virtual field-of-view information;
    the third process including:
    step S31C, the image processing module generates a preset virtual picture and displays it on the lens;
    step S32C, the image processing module calculates the position difference between the curved edge of the preset virtual picture and the user;
    step S33C, the image processing module determines the virtual field-of-view information from the position difference.
PCT/CN2020/109366 2020-05-29 2020-08-14 Augmented reality display system and method WO2021237952A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202020950665.X 2020-05-29
CN202020950665.XU CN212012916U (en) 2020-05-29 2020-05-29 Augmented reality's display device
CN202010477543.8 2020-05-29
CN202010477543.8A CN111491159A (en) 2020-05-29 2020-05-29 Augmented reality display system and method

Publications (1)

Publication Number Publication Date
WO2021237952A1 true WO2021237952A1 (en) 2021-12-02

Family

ID=78722957

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/109366 WO2021237952A1 (en) 2020-05-29 2020-08-14 Augmented reality display system and method

Country Status (1)

Country Link
WO (1) WO2021237952A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140287806A1 (en) * 2012-10-31 2014-09-25 Dhanushan Balachandreswaran Dynamic environment and location based augmented reality (ar) systems
CN107037587A (en) * 2016-02-02 2017-08-11 迪士尼企业公司 Compact augmented reality/virtual reality display
CN108022302A (en) * 2017-12-01 2018-05-11 深圳市天界幻境科技有限公司 A kind of sterically defined AR 3 d display devices of Inside-Out
CN111491159A (en) * 2020-05-29 2020-08-04 上海鸿臣互动传媒有限公司 Augmented reality display system and method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114743419A (en) * 2022-03-04 2022-07-12 广州容溢教育科技有限公司 VR-based multi-user virtual experiment teaching system
CN114743419B (en) * 2022-03-04 2024-03-29 国育产教融合教育科技(海南)有限公司 VR-based multi-person virtual experiment teaching system

Similar Documents

Publication Publication Date Title
JP6860488B2 (en) Mixed reality system
US9779512B2 (en) Automatic generation of virtual materials from real-world materials
JP6195893B2 (en) Shape recognition device, shape recognition program, and shape recognition method
US20160189426A1 (en) Virtual representations of real-world objects
US9961335B2 (en) Pickup of objects in three-dimensional display
JP5844880B2 (en) Head mounted display, calibration method and calibration program, and recording medium
JP6177872B2 (en) I / O device, I / O program, and I / O method
US20140152558A1 (en) Direct hologram manipulation using imu
JP6250024B2 (en) Calibration apparatus, calibration program, and calibration method
WO2014128751A1 (en) Head mount display apparatus, head mount display program, and head mount display method
CN111491159A (en) Augmented reality display system and method
JP2015060071A (en) Image display device, image display method, and image display program
JP6250025B2 (en) I / O device, I / O program, and I / O method
JP6446465B2 (en) I / O device, I / O program, and I / O method
WO2021237952A1 (en) Augmented reality display system and method
JP2017191546A (en) Medical use head-mounted display, program of medical use head-mounted display, and control method of medical use head-mounted display
JP2016057634A (en) Head-mounted display, calibration method, calibration program, and recording medium
JP6479835B2 (en) I / O device, I / O program, and I / O method
JP6608208B2 (en) Image display device
JP2017215597A (en) Information display method and information display device
JP6479836B2 (en) I / O device, I / O program, and I / O method
JP2017111724A (en) Head-mounted display for piping
JP2017111721A (en) Head-mounted display for clean room, control method of head-mounted display for clean room, and control program for head-mounted display for clean room

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20937651

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20937651

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/06/2023)