CN111491159A - Augmented reality display system and method - Google Patents

Augmented reality display system and method

Info

Publication number
CN111491159A
CN111491159A (application CN202010477543.8A)
Authority
CN
China
Prior art keywords
virtual
image
processing module
user
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010477543.8A
Other languages
Chinese (zh)
Inventor
张元�
钟正杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yujian Guanchi Technology Co.,Ltd.
Original Assignee
Shanghai Hongchen Interactive Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Hongchen Interactive Media Co ltd filed Critical Shanghai Hongchen Interactive Media Co ltd
Priority to CN202010477543.8A
Publication of CN111491159A
Priority to PCT/CN2020/109366 (WO2021237952A1)
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/167 Synchronising or controlling image signals
    • H04N 13/30 Image reproducers
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H04N 13/366 Image reproducers using viewer tracking
    • H04N 13/371 Image reproducers using viewer tracking for tracking viewers with different interocular distances; for tracking rotational head movements around the vertical axis
    • H04N 13/373 Image reproducers using viewer tracking for tracking forward-backward translational head movements, i.e. longitudinal movements
    • H04N 13/376 Image reproducers using viewer tracking for tracking left-right translational head movements, i.e. lateral movements

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the field of augmented reality, in particular to an augmented reality display system and method. The system comprises: a head display frame; a groove; a lens; and a portable terminal. When the portable terminal is placed in the groove, the first face of the portable terminal faces the lens. The processing unit specifically comprises: an inertial measurement module; a pose processing module; and an image processing module, connected with the pose processing module, for generating a virtual visual range and a virtual picture according to the current pose of the portable terminal and reflecting the virtual picture through the display unit and the lens to the user for viewing. The beneficial effect of the above technical scheme is that position location, virtual picture generation and spatial correction can all be performed quickly.

Description

Augmented reality display system and method
Technical Field
The invention relates to the field of augmented reality, in particular to a display system and a display method for augmented reality.
Background
Augmented reality technology identifies and locates scenes and objects in the real world and places virtual three-dimensional objects into the real scene in real time. The goal of the technology is to merge the virtual world with the real world and let the two interact. Augmented reality relies mainly on two key technologies: rendering and displaying three-dimensional models in real time, and sensing the shape and position of real objects.
However, current augmented reality devices generally sense position in one of two ways:
(1) positioning with the three-degree-of-freedom gyroscope of the mobile phone, in which case the user cannot approach or move away from the virtual three-dimensional object by moving;
(2) positioning by calculating the relative position between the front camera of the mobile phone and a preset picture. However, six-degree-of-freedom positioning is then only possible by means of the picture, and the stability of picture-based augmented reality is low: the front camera cannot see the picture directly and only sees a deformed picture through the front lens; the three-dimensional model can be displayed only while the picture stays within the visible range of the front camera; tracking and positioning are unstable; and the user's range of movement is limited by the position of the picture.
Therefore, a display device and a display method capable of quickly locating position, generating a virtual picture and performing spatial correction are still lacking.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, the present invention provides an augmented reality display system, comprising:
the head display frame is in a circular ring shape and is worn by a user;
the opening of the groove is inclined upwards, and one side of the groove is connected with the head display frame;
the lens is arranged below the groove and connected with the other side of the groove, and is made of a semi-reflective, semi-transmissive material;
the portable terminal is provided with a first surface provided with a display unit and a second surface provided with an image acquisition unit, wherein the first surface and the second surface are arranged oppositely;
the size of the portable terminal is adapted to the size of the groove, and the first face of the portable terminal faces the lens when the portable terminal is placed in the groove;
the processing unit specifically comprises:
the inertia measurement module is used for acquiring and outputting real-time motion data;
the pose processing module is connected with the inertial measurement module and used for determining the current pose of the portable terminal according to the real-time image acquired by the image acquisition unit and the real-time motion data at the corresponding moment;
and the image processing module is connected with the pose processing module and used for generating a virtual visual range and a virtual picture according to the current pose of the portable terminal and reflecting the virtual picture to the user for viewing through the display unit and the lens.
Preferably, the image acquisition unit acquires the real-time image by acquiring a feature point region including a plurality of feature points;
the processing unit further comprises:
the feature point processing module is respectively connected with the image acquisition unit and the pose processing module and is used for acquiring the feature points in the real-time image, analyzing the positions of the feature point areas and outputting the feature points to the pose processing module according to an analysis result;
and the pose processing module takes the position of the characteristic point area as a reference and determines the current pose of the portable terminal according to the real-time motion data at the corresponding moment.
Preferably, the virtual visual range includes virtual angle information;
the image processing module comprises:
and the first processing component is connected with the pose processing module and used for establishing a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, determining the spatial rotation angle of the image acquisition unit and the included angle between the image acquisition unit and the user, generating the virtual angle information according to the spatial rotation angle and the included angle, and outputting the virtual angle information in the virtual visual range.
Preferably, the virtual visual range includes virtual position information;
the image processing module comprises:
and the second processing part is connected with the pose processing module and used for constructing a spatial rectangular coordinate system with the image acquisition unit as an origin according to the current pose, selecting the eyebrow center of the user as a preset reference point, generating the virtual position information according to the offset between the portable terminal and the preset reference point and the pupil distance of the user, and outputting the virtual position information in the virtual visual range.
Preferably, the virtual visual range includes virtual field angle information;
the image processing module comprises:
and the third processing part is connected with the pose processing module and used for generating a preset virtual picture, generating the virtual field angle information according to the position difference value between the curved surface edge of the preset virtual picture and the user and outputting the virtual field angle information in the virtual visual range.
An augmented reality display method is applied to the above display system, wherein a head display frame, a groove, a lens and a portable terminal are arranged in the display device;
the display method comprises the following steps:
step S1, the image acquisition unit acquires a real-time image of the plane located directly above the portable terminal, and the inertial measurement module acquires real-time motion data;
step S2, the pose processing module processes the real-time image and the real-time motion data to obtain the current pose of the portable terminal;
step S3, the image processing module generates a virtual visual range and a virtual picture according to the current pose, and sends the virtual picture to the display unit for display;
in step S4, the virtual frame displayed on the display unit is reflected to the user through the lens for viewing.
Preferably, step S1 includes:
step S11, the image acquisition unit acquires the real-time image by acquiring a characteristic point area comprising a plurality of characteristic points;
step S12, the image acquisition unit acquires the feature points in the real-time image and analyzes whether the positions of the feature point regions meet the requirement of viewing angles:
if yes, go to step S2;
if not, go to step S13;
in step S13, the image capturing unit generates a prompt instruction indicating that the number of feature points is too small, and feeds back the prompt instruction to the user, and then returns to step S11.
Preferably, the image processing module in step S2 determines the current pose through a vSLAM algorithm.
Preferably, the virtual visual range includes virtual angle information;
the step S3 includes a first process of generating the virtual angle information;
the first process includes:
step S31A, the image processing module constructs a space rectangular coordinate system with the image acquisition unit as an origin according to the current pose, and determines a space rotation angle of the image acquisition unit under the space rectangular coordinate system;
step S32A, acquiring an included angle between the image acquisition unit and the user;
step S33A, generating the virtual angle information according to the space rotation angle and the included angle.
Preferably, the virtual angle information is expressed by the following formula:
θ = (θ_X, θ_Y − α, θ_Z)
wherein:
θ is used to express the virtual angle information;
θ_X is used to represent the pitch angle in the spatial rotation angle;
θ_Y is used to represent the yaw angle in the spatial rotation angle;
α is used to represent the included angle;
θ_Z is used to represent the roll angle in the spatial rotation angle.
Preferably, the virtual visual range includes virtual position information;
the step S3 includes a second process of generating the virtual location information;
the second process includes:
step S31B, the image processing module constructs a space rectangular coordinate system with the image acquisition unit as an origin according to the current pose, selects the eyebrow center of the user as a preset reference point, and generates first position information according to the offset between the image acquisition unit and the preset reference point;
step S32B, the image processing module adjusts the first position information according to the pupil distance of the user, generates second position information, and outputs the second position information as the virtual position information.
Preferably, the first position information is expressed by the following formula:
χ′ = (−B_X, −B_Y, −B_Z)
wherein:
χ′ is used to express the first position information;
B_X is used to represent the projection of the offset on the X axis;
B_Y is used to represent the projection of the offset on the Y axis;
B_Z is used to represent the projection of the offset on the Z axis.
Preferably, the second position information includes left eye position information and right eye position information;
the second position information is expressed by the following formulas:
χ″_1 = (−B_X, −B_Y − I/2, −B_Z)
χ″_2 = (−B_X, −B_Y + I/2, −B_Z)
wherein:
χ″_1 is used to express the left eye position information;
χ″_2 is used to express the right eye position information;
B_X, B_Y and B_Z are used to represent the projections of the offset on the X, Y and Z axes;
I is used to represent the interpupillary distance of the user.
Preferably, the virtual visual range includes virtual field angle information;
the step S3 includes a third process of generating the virtual field angle information;
the third process includes:
step S31C, the image processing module generates a preset virtual image and displays the preset virtual image on the lens;
step S32C, the image processing module calculates a position difference between the curved edge of the preset virtual frame and the user;
step S33C, the image processing module determines the virtual field angle information according to the position difference.
The beneficial effects of the above technical scheme are that position location, virtual picture generation and spatial correction can all be performed quickly.
Drawings
FIG. 1 is a schematic structural view of a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of a portable terminal according to a preferred embodiment of the present invention before placement;
FIG. 3 is a diagram illustrating a portable terminal according to a preferred embodiment of the present invention after being placed;
FIG. 4 is a schematic general flow chart of a preferred embodiment of the present invention;
FIG. 5 is a flowchart of step S1 in a preferred embodiment of the present invention;
FIG. 6 is a schematic flow chart of a first process in a preferred embodiment of the present invention;
FIG. 7 is a flow chart of a second process in a preferred embodiment of the present invention;
FIG. 8 is a schematic flow chart of a third process in a preferred embodiment of the present invention;
FIG. 9 is a schematic diagram of a third process in a preferred embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
An augmented reality display system, as shown in fig. 1-3, comprising:
the head display frame 1 is annular and is used for being worn by a user;
a groove 2, the opening of the groove 2 is inclined upwards, and one side of the groove 2 is connected with the head display frame 1;
the lens 3 is arranged below the groove 2 and connected with the other side of the groove 2, and the lens 3 is made of a semi-reflective, semi-transmissive material;
the portable terminal 4 is provided with a first surface provided with a display unit and a second surface provided with an image acquisition unit 41, wherein the first surface and the second surface are arranged in a back-to-back manner, and the portable terminal 4 further comprises a processing unit which is used for processing the real-time image acquired by the image acquisition unit 41 and displaying the real-time image through the display unit;
the size of the portable terminal 4 is adapted to the size of the recess 2, and when the portable terminal 4 is placed in the recess 2, the first side of the portable terminal 4 is directed towards the lens 3;
the processing unit specifically comprises:
the inertia measurement module is used for acquiring and outputting real-time motion data;
the pose processing module is connected with the inertial measurement module and used for determining the current pose of the portable terminal 4 according to the real-time image acquired by the image acquisition unit 41 and the real-time motion data at the corresponding moment;
and the image processing module is connected with the pose processing module and used for generating a virtual visual range and a virtual picture according to the current pose of the portable terminal 4 and reflecting the virtual picture to a user for viewing through the display unit and the lens 3.
Specifically, in prior-art display devices the processing unit is often built into the display device itself for data interaction, and positioning relies on the three-degree-of-freedom gyroscope of the mobile phone or on a preset picture; as a result, the user cannot approach or move away from the virtual three-dimensional object by moving, tracking and positioning are unstable, and the user's range of movement is limited by the position of the picture.
The technical scheme provides an augmented reality display system in which no data interaction takes place between the portable terminal 4 and the head display frame 1, the groove 2 and the lens 3. The portable terminal 4 collects real-time images through the image acquisition unit 41 and real-time motion data through the inertial measurement module; the pose processing module and the image processing module then determine the current pose and generate a virtual visual range and a virtual picture according to it; finally, the virtual picture is reflected to the user through the display unit and the lens 3. The user observes the virtual picture and the real environment together through the lens 3, so that the virtual picture is superposed on and fused with the real environment, achieving the purpose of augmented reality. A mobile phone can be used as the portable terminal 4, so that real-time images and real-time motion data can be collected quickly and the virtual visual range and virtual picture generated.
Further, in the process of generating the virtual visual range, the image processing module needs to acquire the current pose of the portable terminal 4, construct a spatial rectangular coordinate system, and match the virtual visual range to the real visual range of the user, so as to generate an appropriate virtual picture and display it on the lens 3.
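To make this data flow concrete, the following is a minimal per-frame sketch in Python; all module names and interfaces are illustrative assumptions, since the patent does not prescribe any particular software API.

def render_frame(camera, imu, pose_module, image_module, display):
    # Acquire inputs: real-time image from the image acquisition unit 41
    # (the upward-facing camera) and real-time motion data from the IMU.
    frame = camera.capture()
    motion = imu.read()
    # Pose processing module: fuse the image and motion data into the
    # current pose of the portable terminal 4.
    pose = pose_module.update(frame, motion)
    # Image processing module: derive the virtual visual range (angle,
    # position and field-angle information) and render the virtual picture.
    view = image_module.virtual_visual_range(pose)
    picture = image_module.render(view)
    # The picture shown by the display unit is reflected to the user's
    # eyes by the semi-reflective lens 3.
    display.show(picture)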
Further, when the portable terminal 4 is placed into the groove 2, the first face of the portable terminal 4 faces the lens 3. The region of the groove 2 that fits against the first face of the portable terminal 4 may optionally use a hollowed-out design, or may be made of a light-transmitting material, so that the virtual picture is shown on the lens 3 and can be conveniently viewed by the user.
Further, a hook 21 may be provided on one side of the groove to hold an auxiliary fastener such as a fixing strap for securing the portable terminal 4, so as to prevent the position and angle of the portable terminal 4 from changing as the user's posture varies during use.
In a preferred embodiment of the present invention, the image acquisition unit 41 acquires a real-time image by acquiring a feature point region including a plurality of feature points;
the processing unit further comprises:
the feature point processing module is respectively connected with the image acquisition unit 41 and the pose processing module, and is used for acquiring feature points in the real-time image, analyzing the positions of feature point areas, and outputting the feature points to the pose processing module according to an analysis result;
the pose processing module takes the position of the feature point area as a reference, and determines the current pose of the portable terminal 4 according to the real-time motion data at the corresponding moment.
Specifically, the image acquisition unit 41 acquires a real-time image and outputs it to the processing unit, which extracts the feature points in the image, analyzes the regions corresponding to the feature points, and generates an instruction according to the analysis result, so that the image acquisition unit 41 acquires more feature points with spatial distinctiveness and the corresponding region of the virtual picture is finally determined.
In a preferred embodiment of the present invention, the virtual visual range includes virtual angle information;
the image processing module comprises:
and the first processing part is connected with the pose processing module and used for constructing a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, determining the spatial rotation angle of the image acquisition unit 41 and the included angle between the image acquisition unit 41 and the user, generating virtual angle information according to the spatial rotation angle and the included angle, and outputting the virtual angle information included in the virtual visual range.
In a preferred embodiment of the present invention, the virtual visual range includes virtual location information;
the image processing module comprises:
and the second processing part is connected with the pose processing module and used for constructing a spatial rectangular coordinate system with the image acquisition unit 41 as an origin according to the current pose, selecting the eyebrow center of the user as a preset reference point, generating virtual position information according to the offset between the portable terminal 4 and the preset reference point and the pupil distance of the user, and outputting the virtual position information in a virtual visual range.
In a preferred embodiment of the present invention, the virtual visual range includes virtual field angle information;
the image processing module comprises:
and the third processing part is connected with the pose processing module and used for generating a preset virtual picture, generating the virtual field angle information according to the position difference value between the curved surface edge of the preset virtual picture and the user, and outputting the virtual field angle information in the virtual visual range.
An augmented reality display method is applied to the display system as described in any one of the above. As shown in fig. 4, a head display frame 1, a groove 2, a lens 3 and a portable terminal 4 are arranged in the display device;
the display method comprises the following steps:
in step S1, the image acquisition unit 41 acquires a real-time image of the plane located directly above the portable terminal 4, and the inertial measurement module acquires real-time motion data;
step S2, the pose processing module processes the real-time image and the real-time motion data to obtain the current pose of the portable terminal 4;
step S3, the image processing module generates a virtual visual range and a virtual picture according to the current pose, and sends the virtual picture to the display unit for displaying;
in step S4, the virtual frame displayed on the display unit is reflected to the user through the lens 3 for viewing.
Specifically, to realize the fusion of a virtual picture and a real picture, a display method for augmented reality is provided. In steps S1-S2, the user wears the head display frame 1 with the eyes looking at the lens 3 of the display device; the image acquisition unit 41 of the portable terminal 4 faces upward and acquires a real-time image of the space above the portable terminal 4; the inertial measurement module measures real-time motion data of the portable terminal 4; and finally the current pose of the portable terminal 4 is determined.
In augmented reality, a virtual picture needs to be displayed on the lens 3 so that it is superposed on and fused with the real image the user sees through the lens 3; the image processing module must therefore ensure that the finally generated virtual picture is accurately incident on the user's eyes. Thus, in step S3, the real visual range and the virtual visual range of the user are determined according to the current pose of the portable terminal 4. The real visual range is the range within which the user's eyes observe the real environment ahead through the lens 3; the virtual visual range is the range corresponding to the virtual human eyes that the image processing module simulates in virtual space while generating the virtual picture. The virtual visual range is determined from the real visual range, and an appropriate virtual picture can then be generated from it.
The virtual visual range here includes virtual position information, virtual angle information and virtual field angle information. When these correspond one-to-one with the real position information, real angle information and real field angle information of the real visual range, the virtual human eyes in virtual space coincide with the user's eyes in the real environment and the virtual visual range corresponds to the user's real visual range; the virtual picture, after being reflected by the lens 3, then reaches the user's eyes together with the real environment, and the picture the user observes matches the intended effect.
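As a compact illustration, the three components of the virtual visual range could be carried in a structure such as the following sketch (Python; the field names are assumptions for illustration only):

from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class VirtualVisualRange:
    angle: Vec3          # virtual angle information (pitch, yaw - alpha, roll)
    left_eye: Vec3       # virtual position information, left eye
    right_eye: Vec3      # virtual position information, right eye
    field_angle: float   # virtual field angle information, in degrees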
In a preferred embodiment of the present invention, as shown in fig. 5, step S1 includes:
step S11, the image acquisition unit 41 acquires a real-time image by acquiring a feature point region including a plurality of feature points;
step S12, the image capturing unit 41 acquires the feature points in the real-time image and analyzes whether the positions of the feature point regions satisfy the viewing angle requirement:
if yes, go to step S2;
if not, go to step S13;
in step S13, the image capturing unit 41 generates a prompt instruction indicating that the number of feature points is too small, and feeds back the instruction to the user, and then returns to step S11.
Specifically, after the user wears the head display frame 1, a preset program in the portable terminal 4 guides the user to move through the space. The image acquisition unit 41 starts, acquires the image above it, extracts the feature points in the real-time image, and analyzes whether parameters such as the regions covered by the feature points and the robustness of the feature points meet the viewing-angle requirement. If they do, the method proceeds to step S2; if not, a guiding instruction is generated to direct the user to change the current position, moving direction and line of sight so as to fill in regions of the acquired image that contain few feature points, and the specific position of the virtual picture in the real scene is finally determined.
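A minimal sketch of such a coverage check is given below (Python with OpenCV; the detector choice, grid size and thresholds are illustrative assumptions, not values taken from the patent):

import cv2

def coverage_ok(gray, grid=(4, 4), min_total=100, min_cells=12):
    # Detect feature points in the upward-facing real-time image.
    keypoints = cv2.ORB_create(nfeatures=500).detect(gray, None)
    if len(keypoints) < min_total:
        return False  # too few feature points: prompt the user (step S13)
    # Require the points to spread across a coarse grid of image cells,
    # approximating the analysis of the feature point region's position.
    h, w = gray.shape
    occupied = {(int(kp.pt[0] * grid[0] / w), int(kp.pt[1] * grid[1] / h))
                for kp in keypoints}
    return len(occupied) >= min_cells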
Furthermore, in order to control the specific position of the virtual picture, the feature points can also be preset by an external feature point device. The feature point device comprises an emitting unit and a diffusion unit arranged above it: the diffusion unit may be a transparent disc to which a plurality of feature points are added, and the emitting unit may be a laser emitting device that projects the feature points of the diffusion unit onto the space above the display device. The image processing module then analyzes the feature points in the upward image acquired by the image acquisition unit 41.
In a preferred embodiment of the present invention, the image processing module determines the current pose through a vSLAM algorithm in step S2.
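The patent names only vSLAM; in a typical implementation the high-rate inertial data would be used to propagate the pose between lower-rate visual updates. The following is a sketch of that propagation step under this assumption (Python/NumPy; the fusion scheme itself is not specified by the patent):

import numpy as np

def propagate_quaternion(q, omega, dt):
    # First-order integration of the orientation quaternion q = (w, x, y, z)
    # with body angular rate omega = (wx, wy, wz) in rad/s over dt seconds.
    wx, wy, wz = omega
    big_omega = np.array([[0.0, -wx, -wy, -wz],
                          [wx,  0.0,  wz, -wy],
                          [wy, -wz,  0.0,  wx],
                          [wz,  wy, -wx,  0.0]])
    q = q + 0.5 * dt * (big_omega @ q)
    return q / np.linalg.norm(q)  # renormalize to keep a unit quaternion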
In a preferred embodiment of the present invention, the virtual visual range includes virtual angle information;
step S3 includes a first process of generating virtual angle information;
the first process, as shown in fig. 6, includes:
step S31A, the image processing module constructs a space rectangular coordinate system with the image acquisition unit 41 as an origin according to the current pose, and determines the space rotation angle of the image acquisition unit 41 under the space rectangular coordinate system;
step S32A, acquiring an included angle between the image acquisition unit 41 and the user;
step S33A, virtual angle information is generated according to the space rotation angle and the included angle.
In a preferred embodiment of the present invention, the virtual angle information is expressed by the following formula:
θ = (θ_X, θ_Y − α, θ_Z)    (1)
wherein:
θ is used to express the virtual angle information;
θ_X is used to represent the pitch angle in the spatial rotation angle;
θ_Y is used to represent the yaw angle in the spatial rotation angle;
α is used to represent the included angle;
θ_Z is used to represent the roll angle in the spatial rotation angle.
In a preferred embodiment of the present invention, the virtual visual range includes virtual location information;
step S3 includes a second process of generating virtual location information;
in the second process, as shown in fig. 7, the method includes:
step S31B, the image processing module constructs a space rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, selects the eyebrow center of the user as a preset reference point, and generates first position information according to the offset between the image acquisition unit 41 and the preset reference point;
in step S32B, the image processing module adjusts the first position information according to the pupil distance of the user, generates second position information, and outputs the second position information as virtual position information.
Specifically, when the user turns the head up and down, the roll angle of the image acquisition unit 41 changes, and the angle of the up-and-down turning corresponds to the roll angle; when the user tilts the head back and forth, the pitch angle of the image acquisition unit 41 changes, and the tilting angle corresponds to the pitch angle; when the user turns the head left and right, the image acquisition unit 41 rotates in a circle about its current position, so there is a deviation between the turning angle and the yaw angle.
Therefore, a spatial rectangular coordinate system is constructed with the image acquisition unit 41 at this moment as the origin, and the spatial rotation angle of the image acquisition unit 41 in that coordinate system is determined, expressed with Euler angles as (θ_X, θ_Y, θ_Z). The included angle α between the image acquisition unit 41 and the user's line of sight about the Y axis is then obtained, and the virtual angle information θ in the virtual visual range is calculated by formula (1). With this virtual angle information, the angle of the virtual human eyes in virtual space coincides with that of the user's eyes in the real environment, the virtual visual range corresponds to the user's real visual range, and the virtual picture is superposed on and fused with the real scene.
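In code, formula (1) amounts to a single correction of the yaw component; a minimal sketch follows (Python, with illustrative names):

def virtual_angle(theta_x, theta_y, theta_z, alpha):
    # Formula (1): subtract the included angle alpha between the image
    # acquisition unit's orientation and the user's line of sight from
    # the yaw angle; pitch and roll are used unchanged.
    return (theta_x, theta_y - alpha, theta_z)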
In a preferred embodiment of the present invention, the first position information is expressed by the following formula:
χ′ = (−B_X, −B_Y, −B_Z)    (2)
wherein:
χ′ is used to express the first position information;
B_X is used to represent the projection of the offset on the X axis;
B_Y is used to represent the projection of the offset on the Y axis;
B_Z is used to represent the projection of the offset on the Z axis.
In a preferred embodiment of the present invention, the second position information includes left eye position information and right eye position information;
the second position information is expressed by the following formulas:
χ″_1 = (−B_X, −B_Y − I/2, −B_Z)    (3)
χ″_2 = (−B_X, −B_Y + I/2, −B_Z)
wherein:
χ″_1 is used to express the left eye position information;
χ″_2 is used to express the right eye position information;
B_X, B_Y and B_Z are used to represent the projections of the offset on the X, Y and Z axes;
I is used to represent the interpupillary distance of the user.
Specifically, in the prior art a vSLAM algorithm is often used to fuse the images acquired by the image acquisition unit 41 with the data acquired by the sensors in order to calculate the six-degree-of-freedom information of the device; the position information determined for the virtual visual range in that way is the current position of the image acquisition unit 41, not that of the user's eyes.
Thus, a second process of determining the virtual position information is provided in step S3. Considering that, while the user wears the head display frame 1, the distance between the lens 3 and the eyebrow center and the distances between the eyebrow center and the left and right eyes are essentially constant, the eyebrow center is selected as the reference point. In step S31B, the position of the image acquisition unit 41 in the spatial rectangular coordinate system is first taken as (0, 0, 0); the offset between the image acquisition unit 41 and the eyebrow center is then acquired, which determines the first position information, i.e. the position of the eyebrow center in the spatial rectangular coordinate system, (−B_X, −B_Y, −B_Z). At this point the generated virtual human eye can be regarded as having moved from the image acquisition unit 41 to the eyebrow center. In step S32B, the interpupillary distance I of the real user's eyes is acquired and the first position information is adjusted to generate the second position information, i.e. the positions of the left and right eyes in the spatial rectangular coordinate system, (−B_X, −B_Y − I/2, −B_Z) and (−B_X, −B_Y + I/2, −B_Z). This ensures that the position of the virtual human eyes in virtual space coincides with that of the user's eyes in the real environment and that the virtual visual range corresponds to the user's real visual range, so that the virtual picture is superposed on and fused with the real scene.
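Formulas (2) and (3) then reduce to the following sketch (Python; the variable names are illustrative assumptions):

def eye_positions(offset, ipd):
    # offset = (B_X, B_Y, B_Z): offset from the image acquisition unit 41
    # (the origin) to the eyebrow center; ipd = I, the interpupillary distance.
    bx, by, bz = offset
    brow = (-bx, -by, -bz)                   # first position information
    left_eye = (-bx, -by - ipd / 2.0, -bz)   # second position info, left eye
    right_eye = (-bx, -by + ipd / 2.0, -bz)  # second position info, right eye
    return brow, left_eye, right_eye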
In a preferred embodiment of the present invention, the virtual visual range includes virtual field angle information;
step S3 includes a third process of generating virtual field angle information;
in the third process, as shown in fig. 8, the method includes:
step S31C, the image processing module generates a preset virtual image and displays the preset virtual image on the lens 3;
step S32C, the image processing module calculates the position difference between the curved surface edge of the preset virtual picture and the user;
in step S33C, the image processing module determines virtual field angle information from the position difference value.
Specifically, as shown in fig. 9, in the process of determining the virtual field angle information, the portable terminal 4 forms a preset virtual picture on the lens 3 through the display unit, respectively calculates the distance from the curved edge of the preset virtual picture to the corresponding human eye of the user, and finally determines the virtual field angle information.
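One plausible reading of steps S31C-S33C is to take the angle subtended at the eye by opposite curved edges of the preset virtual picture; the sketch below follows that assumption (Python), since the patent does not give an explicit formula:

import math

def field_angle_deg(edge_a, edge_b, eye):
    # Angle at the eye between two edge points of the preset virtual picture.
    def direction(p):
        v = [p[i] - eye[i] for i in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]
    a, b = direction(edge_a), direction(edge_b)
    cos_angle = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(cos_angle))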
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (14)

1. An augmented reality display system, comprising:
the head display frame is in a circular ring shape and is worn by a user;
the opening of the groove is inclined upwards, and one side of the groove is connected with the head display frame;
the lens is arranged below the groove and connected with the other side of the groove, and is made of a semi-reflective, semi-transmissive material;
the portable terminal is provided with a first surface provided with a display unit and a second surface provided with an image acquisition unit, wherein the first surface and the second surface are arranged oppositely;
the size of the portable terminal is adapted to the size of the groove, and the first face of the portable terminal faces the lens when the portable terminal is placed in the groove;
the processing unit specifically comprises:
the inertia measurement module is used for acquiring and outputting real-time motion data;
the pose processing module is connected with the inertial measurement module and used for determining the current pose of the portable terminal according to the real-time image acquired by the image acquisition unit and the real-time motion data at the corresponding moment;
and the image processing module is connected with the pose processing module and used for generating a virtual visual range and a virtual picture according to the current pose of the portable terminal and reflecting the virtual picture to the user for viewing through the display unit and the lens.
2. An augmented reality display system as claimed in claim 1, wherein the real-time image comprises feature points;
the processing unit further comprises:
the feature point processing module is respectively connected with the image acquisition unit and the pose processing module and is used for acquiring the feature points in the real-time image, analyzing the areas corresponding to the feature points and outputting the feature points to the pose processing module according to the analysis result;
and the pose processing module determines the current pose of the portable terminal according to the real-time motion data of the area corresponding to the characteristic point and the corresponding moment.
3. The augmented reality display system of claim 1, wherein the virtual visual scope includes virtual angle information;
the image processing module comprises:
the first processing component is connected with the pose processing module and used for constructing a spatial rectangular coordinate system taking the image acquisition unit as the origin according to the current pose, determining the spatial rotation angle of the image acquisition unit and the included angle between the image acquisition unit and the user, and generating the virtual angle information according to the spatial rotation angle and the included angle;
and the image generating component is connected with the first processing component and used for generating the virtual image according to the virtual angle information, outputting the virtual image to the display unit and reflecting the virtual image to the user for viewing through the display unit and the lens.
4. An augmented reality display system as claimed in claim 1, wherein the virtual visual scope comprises virtual location information;
the image processing module comprises:
the second processing part is connected with the pose processing module and used for constructing a spatial rectangular coordinate system taking the image acquisition unit as the origin according to the current pose, selecting the eyebrow center of the user as a preset reference point, and generating the virtual position information according to the offset between the portable terminal and the preset reference point and the pupil distance of the user;
and the image generating component is connected with the second processing component and used for generating the virtual image according to the virtual position information, outputting the virtual image to the display unit and reflecting the virtual image to the user for viewing through the display unit and the lens.
5. The augmented reality display system of claim 1, wherein the virtual field of view comprises virtual field angle information;
the image processing module comprises:
the third processing component is connected with the pose processing module and used for generating a preset virtual picture and generating the virtual field angle information according to a position difference value between the curved surface edge of the preset virtual picture and the user;
and the image generating part is connected with the third processing part and is used for generating the virtual image according to the virtual field angle information, outputting the virtual image to the display unit and reflecting the virtual image to the user for viewing through the display unit and the lens.
6. An augmented reality display method applied to the display system according to any one of claims 1 to 5, wherein a head display frame, a groove, a lens and a portable terminal are provided in the display device;
the display method comprises the following steps:
step S1, the image acquisition unit acquires a real-time image of the plane located directly above the portable terminal, and the inertial measurement module acquires real-time motion data;
step S2, the pose processing module processes the real-time image and the real-time motion data to obtain the current pose of the portable terminal;
step S3, the image processing module generates a virtual visual range and a virtual picture according to the current pose, and sends the virtual picture to the display unit for display;
in step S4, the virtual frame displayed on the display unit is reflected to the user through the lens for viewing.
7. The method according to claim 6, wherein said step S1 includes:
step S11, the image acquisition unit acquires the real-time image by acquiring a characteristic point area comprising a plurality of characteristic points;
step S12, the image acquisition unit extracts feature points in the real-time image and analyzes whether the regions corresponding to the feature points meet the viewing angle requirements:
if yes, go to step S2;
if not, go to step S13;
in step S13, the image capturing unit generates a prompt instruction indicating that the number of feature points is too small, and feeds back the prompt instruction to the user, and then returns to step S11.
8. A display method according to claim 6, wherein the image processing module determines the current pose in step S2 by using a vSLAM algorithm.
9. A display method according to claim 6, wherein the virtual visual range includes virtual angle information;
the step S3 includes a first process of generating the virtual angle information;
the first process includes:
step S31A, the image processing module constructs a space rectangular coordinate system with the image acquisition unit as the origin according to the current pose, and determines the space rotation angle of the image acquisition unit under the space rectangular coordinate system;
step S32A, acquiring an included angle between the image acquisition unit and the user;
step S33A, generating the virtual angle information according to the space rotation angle and the included angle.
10. The display method according to claim 9, wherein the virtual angle information is expressed by the following formula:
θ = (θ_X, θ_Y − α, θ_Z)
wherein:
θ is used to express the virtual angle information;
θ_X is used to represent the pitch angle in the spatial rotation angle;
θ_Y is used to represent the yaw angle in the spatial rotation angle;
α is used to represent the included angle;
θ_Z is used to represent the roll angle in the spatial rotation angle.
11. A display method according to claim 6, wherein the virtual visual range includes virtual position information;
the step S3 includes a second process of generating the virtual location information;
the second process includes:
step S31B, the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, selects the eyebrow center of the user as a preset reference point, and generates first position information according to the offset between the image acquisition unit and the preset reference point;
step S32B, the image processing module adjusts the first position information according to the pupil distance of the user, generates second position information, and outputs the second position information as the virtual position information.
12. The display method according to claim 11, wherein the first position information is expressed by the following formula:
χ′ = (−B_X, −B_Y, −B_Z)
wherein:
χ′ is used to express the first position information;
B_X is used to represent the projection of the offset on the X axis;
B_Y is used to represent the projection of the offset on the Y axis;
B_Z is used to represent the projection of the offset on the Z axis.
13. A display method according to claim 11, wherein the second position information includes left eye position information and right eye position information;
the second position information is expressed by the following formulas:
χ″_1 = (−B_X, −B_Y − I/2, −B_Z)
χ″_2 = (−B_X, −B_Y + I/2, −B_Z)
wherein:
χ″_1 is used to express the left eye position information;
χ″_2 is used to express the right eye position information;
B_X, B_Y and B_Z are used to represent the projections of the offset on the X, Y and Z axes;
I is used to represent the interpupillary distance of the user.
14. The display method according to claim 6, wherein the virtual visual field includes virtual field angle information;
the step S3 includes a third process of generating the virtual field angle information;
the third process includes:
step S31C, the image processing module generates a preset virtual image and displays the preset virtual image on the lens;
step S32C, the image processing module calculates a position difference between the curved edge of the preset virtual frame and the user;
step S33C, the image processing module determines the virtual field angle information according to the position difference.
CN202010477543.8A 2020-05-29 2020-05-29 Augmented reality display system and method Pending CN111491159A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010477543.8A CN111491159A (en) 2020-05-29 2020-05-29 Augmented reality display system and method
PCT/CN2020/109366 WO2021237952A1 (en) 2020-05-29 2020-08-14 Augmented reality display system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010477543.8A CN111491159A (en) 2020-05-29 2020-05-29 Augmented reality display system and method

Publications (1)

Publication Number Publication Date
CN111491159A true CN111491159A (en) 2020-08-04

Family

ID=71813746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010477543.8A Pending CN111491159A (en) 2020-05-29 2020-05-29 Augmented reality display system and method

Country Status (1)

Country Link
CN (1) CN111491159A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021237952A1 (en) * 2020-05-29 2021-12-02 上海鸿臣互动传媒有限公司 Augmented reality display system and method
CN114578954A (en) * 2020-11-28 2022-06-03 上海光之里科技有限公司 Augmented reality display device and display control method thereof
CN113177434A (en) * 2021-03-30 2021-07-27 青岛小鸟看看科技有限公司 Virtual reality system fixation rendering method and system based on monocular tracking
US11715176B2 (en) 2021-03-30 2023-08-01 Qingdao Pico Technology Co, Ltd. Foveated rendering method and system of virtual reality system based on monocular eyeball tracking

Similar Documents

Publication Publication Date Title
JP6393367B2 (en) Tracking display system, tracking display program, tracking display method, wearable device using them, tracking display program for wearable device, and operation method of wearable device
JP5646263B2 (en) Image processing program, image processing apparatus, image processing system, and image processing method
JP5739674B2 (en) Information processing program, information processing apparatus, information processing system, and information processing method
JP6177872B2 (en) I / O device, I / O program, and I / O method
CN111491159A (en) Augmented reality display system and method
JP6250024B2 (en) Calibration apparatus, calibration program, and calibration method
WO2014128751A1 (en) Head mount display apparatus, head mount display program, and head mount display method
JP6294054B2 (en) Video display device, video presentation method, and program
JP4580678B2 (en) Gaze point display device
JP6250025B2 (en) I / O device, I / O program, and I / O method
CN110968182A (en) Positioning tracking method and device and wearable equipment thereof
JP6446465B2 (en) I / O device, I / O program, and I / O method
WO2021237952A1 (en) Augmented reality display system and method
CN212012916U (en) Augmented reality's display device
JP6479835B2 (en) I / O device, I / O program, and I / O method
JP6608208B2 (en) Image display device
JP6479836B2 (en) I / O device, I / O program, and I / O method
CN116797643A (en) Method for acquiring user fixation area in VR, storage medium and electronic device
CN114578954A (en) Augmented reality display device and display control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211220

Address after: Room 211, No. 7, Lane 658, Jinzhong Road, Changning District, Shanghai 200050

Applicant after: Shanghai guanchi spotlight Digital Technology Co.,Ltd.

Address before: 201815 room j1468, 3 / F, building 8, 55 Huiyuan Road, Jiading District, Shanghai

Applicant before: SHANGHAI HONGCHEN INTERACTIVE MEDIA Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20240429

Address after: Room 108, Building A, No. 48, Lane 1088, Yuyuan Road, Changning District, Shanghai, 200050

Applicant after: Shanghai Yujian Guanchi Technology Co.,Ltd.

Country or region after: China

Address before: Room 211, No. 7, Lane 658, Jinzhong Road, Changning District, Shanghai 200050

Applicant before: Shanghai guanchi spotlight Digital Technology Co.,Ltd.

Country or region before: China