Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, the present invention provides an augmented reality display system, comprising:
a head display frame, which is annular and is worn by the user;
a groove, the opening of the groove being inclined upward, and one side of the groove being connected with the head display frame;
a lens, arranged below the groove and connected with the other side of the groove, the lens being made of a semi-reflective, semi-transmissive material;
a portable terminal, provided with a first face on which a display unit is arranged and a second face on which an image acquisition unit is arranged, the first face and the second face facing away from each other, the portable terminal further comprising a processing unit for processing the real-time image acquired by the image acquisition unit and displaying it through the display unit;
the size of the portable terminal being adapted to the size of the groove, and the first face of the portable terminal facing the lens when the portable terminal is placed in the groove;
wherein the processing unit specifically comprises:
an inertial measurement module for acquiring and outputting real-time motion data;
a pose processing module connected to the inertial measurement module, for determining the current pose of the portable terminal according to the real-time image acquired by the image acquisition unit and the real-time motion data at the corresponding moment;
and an image processing module connected to the pose processing module, for generating a virtual visual range and a virtual picture according to the current pose of the portable terminal, the virtual picture being reflected through the display unit and the lens to the user for viewing.
Preferably, the image acquisition unit acquires the real-time image by acquiring a feature point region including a plurality of feature points;
the processing unit further comprises:
a feature point processing module connected to the image acquisition unit and the pose processing module respectively, for acquiring the feature points in the real-time image, analyzing the positions of the feature point regions, and outputting the feature points to the pose processing module according to the analysis result;
and the pose processing module takes the position of the feature point region as a reference and determines the current pose of the portable terminal according to the real-time motion data at the corresponding moment.
Preferably, the virtual visual range includes virtual angle information;
the image processing module comprises:
and a first processing component connected to the pose processing module, for constructing a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, determining the spatial rotation angle of the image acquisition unit and the included angle between the image acquisition unit and the user's line of sight, generating the virtual angle information according to the spatial rotation angle and the included angle, and outputting the virtual angle information in the virtual visual range.
Preferably, the virtual visual range includes virtual position information;
the image processing module comprises:
and a second processing component connected to the pose processing module, for constructing a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, selecting the eyebrow center of the user as a preset reference point, generating the virtual position information according to the offset between the portable terminal and the preset reference point and the interpupillary distance of the user, and outputting the virtual position information in the virtual visual range.
Preferably, the virtual visual range includes virtual field angle information;
the image processing module comprises:
and a third processing component connected to the pose processing module, for generating a preset virtual picture, generating the virtual field angle information according to the position difference between the curved edge of the preset virtual picture and the user, and outputting the virtual field angle information in the virtual visual range.
An augmented reality display method is applied to the above display system, characterized in that a head display frame, a groove, a lens and a portable terminal are arranged in the display device;
the display method comprises the following steps:
step S1, the image acquisition unit acquires a real-time image of the plane located directly above the portable terminal, and the inertial measurement module acquires real-time motion data;
step S2, the pose processing module processes the real-time image and the real-time motion data to obtain the current pose of the portable terminal;
step S3, the image processing module generates a virtual visual range and a virtual picture according to the current pose, and sends the virtual picture to the display unit for display;
in step S4, the virtual picture displayed on the display unit is reflected to the user through the lens for viewing.
Preferably, step S1 includes:
step S11, the image acquisition unit acquires the real-time image by acquiring a feature point region comprising a plurality of feature points;
step S12, the image acquisition unit acquires the feature points in the real-time image and analyzes whether the positions of the feature point regions meet the viewing angle requirement:
if yes, go to step S2;
if not, go to step S13;
step S13, the image acquisition unit generates a prompt instruction indicating that the number of feature points is too small, feeds it back to the user, and then returns to step S11.
Preferably, in step S2, the pose processing module determines the current pose through a vSLAM algorithm.
Preferably, the virtual visual range includes virtual angle information;
the step S3 includes a first process of generating the virtual angle information;
the first process includes:
step S31A, the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, and determines the spatial rotation angle of the image acquisition unit in the spatial rectangular coordinate system;
step S32A, the included angle between the image acquisition unit and the user's line of sight is acquired;
step S33A, the virtual angle information is generated according to the spatial rotation angle and the included angle.
Preferably, the virtual angle information is expressed by the following formula:
θ = (θ_X, θ_Y - α, θ_Z)

wherein:
θ represents the virtual angle information;
θ_X represents the pitch angle in the spatial rotation angle;
θ_Y represents the yaw angle in the spatial rotation angle;
α represents the included angle;
θ_Z represents the roll angle in the spatial rotation angle.
Preferably, the virtual visual range includes virtual position information;
the step S3 includes a second process of generating the virtual location information;
the second process includes:
step S31B, the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, selects the eyebrow center of the user as a preset reference point, and generates first position information according to the offset between the image acquisition unit and the preset reference point;
step S32B, the image processing module adjusts the first position information according to the interpupillary distance of the user, generates second position information, and outputs the second position information as the virtual position information.
Preferably, the first position information is expressed by the following formula:
χ′ = (-B_X, -B_Y, -B_Z)

wherein:
χ′ represents the first position information;
B_X represents the projection of said offset on the X-axis;
B_Y represents the projection of said offset on the Y-axis;
B_Z represents the projection of said offset on the Z-axis.
Preferably, the second position information includes left eye position information and right eye position information;
the second position information is expressed by the following formulas:

χ″_1 = (-B_X, -B_Y - I/2, -B_Z)
χ″_2 = (-B_X, -B_Y + I/2, -B_Z)

wherein:
χ″_1 represents the left eye position information;
χ″_2 represents the right eye position information;
B_X represents the projection of said offset on the X-axis;
B_Y represents the projection of said offset on the Y-axis;
I represents the interpupillary distance of the user;
B_Z represents the projection of said offset on the Z-axis.
Preferably, the virtual visual range includes virtual field angle information;
the step S3 includes a third process of generating the virtual field angle information;
the third process includes:
step S31C, the image processing module generates a preset virtual picture and displays the preset virtual picture on the lens;
step S32C, the image processing module calculates the position difference between the curved edge of the preset virtual picture and the user;
step S33C, the image processing module determines the virtual field angle information according to the position difference.
The beneficial effects of the above technical solution are that positioning can be performed quickly, the virtual picture can be generated, and spatial correction can be carried out.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
An augmented reality display system, as shown in fig. 1-3, comprising:
a head display frame 1, which is annular and is worn by the user;
a groove 2, the opening of the groove 2 being inclined upward, and one side of the groove 2 being connected with the head display frame 1;
a lens 3, arranged below the groove 2 and connected with the other side of the groove 2, the lens 3 being made of a semi-reflective, semi-transmissive material;
a portable terminal 4, provided with a first face on which a display unit is arranged and a second face on which an image acquisition unit 41 is arranged, the first face and the second face facing away from each other; the portable terminal 4 further comprises a processing unit for processing the real-time image acquired by the image acquisition unit 41 and displaying it through the display unit;
the size of the portable terminal 4 is adapted to the size of the groove 2, and when the portable terminal 4 is placed in the groove 2, the first face of the portable terminal 4 faces the lens 3;
the processing unit specifically comprises:
an inertial measurement module for acquiring and outputting real-time motion data;
a pose processing module connected to the inertial measurement module, for determining the current pose of the portable terminal 4 according to the real-time image acquired by the image acquisition unit 41 and the real-time motion data at the corresponding moment;
and an image processing module connected to the pose processing module, for generating a virtual visual range and a virtual picture according to the current pose of the portable terminal 4, the virtual picture being reflected through the display unit and the lens 3 to the user for viewing.
Specifically, in prior-art display devices, the processing unit is often built into the display device for data interaction, and positioning relies on the three-degree-of-freedom gyroscope of a mobile phone or on a preset picture. As a result, the user cannot move closer to or farther from a virtual three-dimensional object, tracking and positioning are unstable, and the user's range of movement is limited by the position of the picture.
The present technical solution provides an augmented reality display system in which no data interaction takes place between the portable terminal 4 and the head display frame 1, the groove 2 or the lens 3. The portable terminal 4 collects real-time images through the image acquisition unit 41 and real-time motion data through the inertial measurement module; the current pose is then determined by the pose processing module, and the image processing module generates a virtual visual range and a virtual picture according to the current pose. Finally, the virtual picture is reflected to the user through the display unit and the lens 3, and the user observes the virtual picture and the real environment together through the lens 3, so that superposition and fusion of the virtual picture and the real environment are achieved and the purpose of augmented reality is realized. A mobile phone can be used as the portable terminal 4, so that real-time images and real-time motion data can be collected quickly and the virtual visual range and the virtual picture can be generated.
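To make this data flow concrete, a minimal Python sketch follows. Every function passed in is a hypothetical stand-in for one of the modules named above; none of these names is an interface defined by this disclosure.

```python
# Hedged sketch of one frame of the pipeline described above.
def ar_frame(acquire_image, read_imu, estimate_pose, generate_view, display):
    real_time_image = acquire_image()        # image acquisition unit 41
    real_time_motion = read_imu()            # inertial measurement module
    current_pose = estimate_pose(real_time_image, real_time_motion)
    virtual_range, virtual_picture = generate_view(current_pose)
    display(virtual_picture)                 # display unit -> lens 3
    return virtual_range, virtual_picture
```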
Further, in the process of generating the virtual visual range, the image processing module needs to acquire the current pose of the portable terminal 4, construct a spatial rectangular coordinate system, and determine the virtual visual range in accordance with the real visual range of the user, so as to generate an appropriate virtual picture and display it on the lens 3.
Further, when the portable terminal 4 is placed into the groove 2, the first face of the portable terminal 4 is oriented toward the lens 3. The region of the groove 2 that fits against the first face of the portable terminal 4 may optionally use a hollowed-out design or be made of a light-transmitting material, so that the virtual picture is displayed on the lens 3 and can be conveniently viewed by the user.
Further, a hook 21 may be provided on one side of the groove 2 so that an auxiliary fixing member, such as a fixing strap, can secure the portable terminal 4, preventing the position and angle of the portable terminal 4 from changing due to posture variations while the user moves.
In a preferred embodiment of the present invention, the image acquisition unit 41 acquires a real-time image by acquiring a feature point region including a plurality of feature points;
the processing unit further comprises:
a feature point processing module connected to the image acquisition unit 41 and the pose processing module respectively, for acquiring the feature points in the real-time image, analyzing the positions of the feature point regions, and outputting the feature points to the pose processing module according to the analysis result;
the pose processing module takes the position of the feature point region as a reference and determines the current pose of the portable terminal 4 according to the real-time motion data at the corresponding moment.
Specifically, the image acquisition unit 41 acquires a real-time image and outputs it to the feature point processing module, which extracts the feature points in the image, analyzes the regions corresponding to the feature points, and generates an instruction according to the analysis result, so that the image acquisition unit 41 acquires more feature points with spatial identification; the corresponding region of the virtual picture is finally determined.
In a preferred embodiment of the present invention, the virtual visual range includes virtual angle information;
the image processing module comprises:
and a first processing component connected to the pose processing module, for constructing a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, determining the spatial rotation angle of the image acquisition unit 41 and the included angle between the image acquisition unit 41 and the user's line of sight, generating the virtual angle information according to the spatial rotation angle and the included angle, and outputting the virtual angle information included in the virtual visual range.
In a preferred embodiment of the present invention, the virtual visual range includes virtual location information;
the image processing module comprises:
and a second processing component connected to the pose processing module, for constructing a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, selecting the eyebrow center of the user as a preset reference point, generating the virtual position information according to the offset between the portable terminal 4 and the preset reference point and the interpupillary distance of the user, and outputting the virtual position information in the virtual visual range.
In a preferred embodiment of the present invention, the virtual visual range includes virtual field angle information;
the image processing module comprises:
and a third processing component connected to the pose processing module, for generating a preset virtual picture, generating the virtual field angle information according to the position difference between the curved edge of the preset virtual picture and the user, and outputting the virtual field angle information in the virtual visual range.
An augmented reality display method is applied to the display system described in any one of the above. As shown in fig. 4, a head display frame 1, a groove 2, a lens 3 and a portable terminal 4 are arranged in the display device;
the display method comprises the following steps:
in step S1, the image acquisition unit 41 acquires a real-time image of the plane located directly above the portable terminal 4, and the inertial measurement module acquires real-time motion data;
step S2, the pose processing module processes the real-time image and the real-time motion data to obtain the current pose of the portable terminal 4;
step S3, the image processing module generates a virtual visual range and a virtual picture according to the current pose, and sends the virtual picture to the display unit for displaying;
in step S4, the virtual picture displayed on the display unit is reflected to the user through the lens 3 for viewing.
Specifically, to realize the fusion of a virtual picture and a real picture, a display method for augmented reality is provided. In steps S1-S2, the user wears the head display frame 1 with the eyes looking at the lens 3 of the display device, and the image acquisition unit 41 of the portable terminal 4 faces upward. A real-time image of the space above the portable terminal 4 is acquired, the inertial measurement module measures real-time motion data of the portable terminal 4, and the current pose of the portable terminal 4 is finally determined.
In augmented reality, a virtual picture needs to be displayed on the lens 3 so that it is superposed and fused with the real image the user sees through the lens 3; the image processing module must therefore ensure that the finally generated virtual picture is accurately incident on the user's eyes. Thus, in step S3, the real visual range and the virtual visual range of the user are determined in accordance with the current pose of the portable terminal 4. The real visual range refers to the visual range within which the user's eyes observe the real environment in front through the lens 3, and the virtual visual range refers to the visual range of the virtual human eyes that the image processing module simulates in virtual space when generating the virtual picture. When the virtual picture is generated, the virtual visual range is determined from the real visual range, so that an appropriate virtual picture can be produced.
The virtual visual range here includes virtual position information, virtual angle information and virtual field angle information. When the virtual position information, the virtual angle information and the virtual field angle information in the virtual visual range correspond one-to-one to the real position information, the real angle information and the real field angle information in the real visual range, the virtual human eyes in the virtual space are ensured to coincide with the user's eyes in the real environment, and the virtual visual range corresponds to the real visual range of the user. The virtual picture, after being reflected onto the lens 3, then reaches the user's eyes together with the real environment, and the virtual picture observed by the user better matches the ideal effect.
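As a minimal, illustrative sketch (not part of the patent text), the three components of the virtual visual range can be grouped into one structure; all field names below are assumptions:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class VirtualVisualRange:
    """The three components named in the text, under assumed names."""
    left_eye_position: Tuple[float, float, float]   # virtual position info
    right_eye_position: Tuple[float, float, float]
    angles: Tuple[float, float, float]              # (pitch, yaw, roll)
    field_angle: float                              # virtual field angle
```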
In a preferred embodiment of the present invention, as shown in fig. 5, step S1 includes:
step S11, the image acquisition unit 41 acquires a real-time image by acquiring a feature point region including a plurality of feature points;
step S12, the image acquisition unit 41 acquires the feature points in the real-time image and analyzes whether the positions of the feature point regions meet the viewing angle requirement:
if yes, go to step S2;
if not, go to step S13;
step S13, the image acquisition unit 41 generates a prompt instruction indicating that the number of feature points is too small, feeds it back to the user, and then returns to step S11.
Specifically, after the user wears the head display frame 1, a preset program in the portable terminal 4 guides the user to move in the space. The image acquisition unit 41 starts and acquires the image above it, the feature points in the real-time image are extracted, and parameters such as the regions covered by the feature points and the robustness of the feature points are analyzed to check whether the viewing angle requirement is met. If so, the process proceeds to step S2; if not, a guide instruction is generated to guide the user to change the current position, moving direction and sight line direction, so as to fill the regions of the acquired image that contain fewer feature points. The specific position of the virtual picture in the real scene is finally determined.
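The patent names no feature detector and no concrete thresholds; the following is a hedged sketch of steps S11-S13, assuming OpenCV's ORB detector and illustrative values for the point-count and coverage requirements:

```python
import cv2

MIN_FEATURE_POINTS = 50   # assumed threshold; the patent gives no number
MIN_COVERED_CELLS = 9     # assumed spread requirement over a 4x4 grid

def viewing_angle_ok(frame_bgr) -> bool:
    """Steps S11-S12: check that the feature point region meets the
    viewing angle requirement (enough points, spread across the image)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = cv2.ORB_create().detect(gray, None)
    if len(keypoints) < MIN_FEATURE_POINTS:
        return False
    h, w = gray.shape
    # Which cells of a coarse 4x4 grid contain at least one feature point.
    cells = {(min(3, int(kp.pt[0] * 4 / w)), min(3, int(kp.pt[1] * 4 / h)))
             for kp in keypoints}
    return len(cells) >= MIN_COVERED_CELLS

def acquire_real_time_image(capture):
    """Step S13 loop: prompt the user until the check passes, then
    hand the frame on to step S2."""
    while True:
        grabbed, frame = capture.read()
        if grabbed and viewing_angle_ok(frame):
            return frame
        print("Too few feature points; please move or change your view.")
```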
Furthermore, in order to control the specific position information of the virtual picture, the feature points can also be preset by an external feature point device. The feature point device comprises an emitting unit and a diffusion unit arranged above the emitting unit. The diffusion unit may be a transparent disc to which a plurality of feature points are added, and the emitting unit may be a laser emitting device, so that the feature points of the diffusion unit are projected above the space where the display device is located; the image processing module then performs feature point analysis on the upper image collected by the image acquisition unit 41.
In a preferred embodiment of the present invention, in step S2 the pose processing module determines the current pose through a vSLAM (visual simultaneous localization and mapping) algorithm.
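The patent does not specify a vSLAM implementation. Assuming, as is common, that the estimated pose arrives as a 3x3 rotation matrix plus a translation, the spatial rotation angle used below can be recovered as Euler angles; the axis order is an assumption, not fixed by the text:

```python
from scipy.spatial.transform import Rotation

def spatial_rotation_angle(rotation_matrix):
    """Return (theta_X, theta_Y, theta_Z) in degrees from a 3x3
    rotation matrix, using the 'xyz' Euler order as an assumed
    convention."""
    return Rotation.from_matrix(rotation_matrix).as_euler("xyz", degrees=True)
```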
In a preferred embodiment of the present invention, the virtual visual range includes virtual angle information;
step S3 includes a first process of generating virtual angle information;
the first process, as shown in fig. 6, includes:
step S31A, the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, and determines the spatial rotation angle of the image acquisition unit 41 in the spatial rectangular coordinate system;
step S32A, the included angle between the image acquisition unit 41 and the user's line of sight is acquired;
step S33A, the virtual angle information is generated according to the spatial rotation angle and the included angle.
In a preferred embodiment of the present invention, the virtual angle information is expressed by the following formula:
θ = (θ_X, θ_Y - α, θ_Z)    (1)

wherein:
θ represents the virtual angle information;
θ_X represents the pitch angle in the spatial rotation angle;
θ_Y represents the yaw angle in the spatial rotation angle;
α represents the included angle;
θ_Z represents the roll angle in the spatial rotation angle.
In a preferred embodiment of the present invention, the virtual visual range includes virtual location information;
step S3 includes a second process of generating virtual location information;
The second process, as shown in fig. 7, includes:
step S31B, the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, selects the eyebrow center of the user as a preset reference point, and generates first position information according to the offset between the image acquisition unit 41 and the preset reference point;
step S32B, the image processing module adjusts the first position information according to the interpupillary distance of the user, generates second position information, and outputs the second position information as the virtual position information.
Specifically, when the user tilts the head up and down, the roll angle of the image acquisition unit 41 changes, and the tilting angle corresponds to the roll angle; when the user tilts the head back and forth, the pitch angle of the image acquisition unit 41 changes, and the tilting angle corresponds to the pitch angle; when the user turns the head left and right, the image acquisition unit 41 rotates circularly at its current position, and there is a deviation between the turning angle and the yaw angle.
Therefore, a spatial rectangular coordinate system is constructed with the image acquisition unit 41 at this moment as the origin, and the spatial rotation angle of the image acquisition unit 41 in this coordinate system is determined, expressed with Euler angles as (θ_X, θ_Y, θ_Z). The included angle α on the Y-axis between the image acquisition unit 41 and the user's line of sight is then acquired, and the virtual angle information θ in the virtual visual range is determined through formula (1). According to the virtual angle information, the angle information of the virtual human eyes in the virtual space is ensured to coincide with the angle information of the user's eyes in the real environment, and the virtual visual range corresponds to the real visual range of the user, thereby achieving superposition and fusion of the virtual picture and the real scene.
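For reference, formula (1) is a one-line computation; the sketch below transcribes it directly, with degree-valued inputs assumed:

```python
def virtual_angle_info(theta_x, theta_y, theta_z, alpha):
    """Formula (1): theta = (theta_X, theta_Y - alpha, theta_Z).
    Inputs are the pitch, yaw and roll of the image acquisition unit
    and the included angle alpha, all assumed in degrees."""
    return (theta_x, theta_y - alpha, theta_z)
```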
In a preferred embodiment of the present invention, the first position information is expressed by the following formula:
χ′ = (-B_X, -B_Y, -B_Z)    (2)

wherein:
χ′ represents the first position information;
B_X represents the projection of said offset on the X-axis;
B_Y represents the projection of said offset on the Y-axis;
B_Z represents the projection of said offset on the Z-axis.
In a preferred embodiment of the present invention, the second position information includes left-eye position information and right-eye position information;
the second position information is expressed by the following formulas:

χ″_1 = (-B_X, -B_Y - I/2, -B_Z)    (3)
χ″_2 = (-B_X, -B_Y + I/2, -B_Z)

wherein:
χ″_1 represents the left eye position information;
χ″_2 represents the right eye position information;
B_X represents the projection of said offset on the X-axis;
B_Y represents the projection of said offset on the Y-axis;
I represents the interpupillary distance of the user;
B_Z represents the projection of said offset on the Z-axis.
Specifically, in the prior art, a vSLAM algorithm is often used to fuse the image acquired by the image acquisition unit 41 with the data acquired by a sensor so as to calculate the six-degree-of-freedom information of the device; the position information determined in this way for the virtual visual range is the current position of the image acquisition unit 41, not the position of the user's eyes.
Thus, a second process of determining the virtual position information is provided in step S3. Considering that, when the user wears the head display frame 1, the distance between the lens 3 and the eyebrow center and the distances between the eyebrow center and the left and right eyes are substantially constant, the eyebrow center is selected as the reference point. In step S31B, the position of the image acquisition unit 41 in the spatial rectangular coordinate system is first determined to be (0, 0, 0), and the offset between the image acquisition unit 41 and the eyebrow center is then acquired, thereby determining the first position information, that is, the position of the eyebrow center in the spatial rectangular coordinate system, (-B_X, -B_Y, -B_Z). At this point the position of the generated virtual human eyes can be considered to have moved from the image acquisition unit 41 to the eyebrow center. In step S32B the interpupillary distance I of the real user's eyes is acquired, and the first position information is adjusted to generate the second position information, that is, the positions of the left and right eyes in the spatial rectangular coordinate system, (-B_X, -B_Y - I/2, -B_Z) and (-B_X, -B_Y + I/2, -B_Z). This ensures that the position information of the virtual human eyes in the virtual space coincides with the position information of the user's eyes in the real environment, and the virtual visual range corresponds to the real visual range of the user, thereby achieving superposition and fusion of the virtual picture and the real scene.
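Formulas (2) and (3) translate directly into code; the sketch below assumes consistent length units for the offset projections and the interpupillary distance:

```python
def first_position_info(b_x, b_y, b_z):
    """Formula (2): the eyebrow center in the coordinate system whose
    origin is the image acquisition unit 41."""
    return (-b_x, -b_y, -b_z)

def second_position_info(b_x, b_y, b_z, interpupillary_distance):
    """Formula (3): shift the eyebrow-center position by half the
    interpupillary distance I along the Y-axis for each eye."""
    x, y, z = first_position_info(b_x, b_y, b_z)
    left_eye = (x, y - interpupillary_distance / 2.0, z)    # chi''_1
    right_eye = (x, y + interpupillary_distance / 2.0, z)   # chi''_2
    return left_eye, right_eye
```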
In a preferred embodiment of the present invention, the virtual visual range includes virtual field angle information;
step S3 includes a third process of generating virtual field angle information;
The third process, as shown in fig. 8, includes:
step S31C, the image processing module generates a preset virtual picture and displays the preset virtual picture on the lens 3;
step S32C, the image processing module calculates the position difference between the curved edge of the preset virtual picture and the user;
step S33C, the image processing module determines the virtual field angle information according to the position difference.
Specifically, as shown in fig. 9, in the process of determining the virtual field angle information, the portable terminal 4 forms a preset virtual picture on the lens 3 through the display unit, the distances from the curved edge of the preset virtual picture to the corresponding eyes of the user are respectively calculated, and the virtual field angle information is finally determined.
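The text only states that the field angle is determined from the position difference between the picture edge and the eye; one plausible realization, offered purely as an assumption, is the angle the picture subtends at the eye:

```python
import math

def virtual_field_angle(edge_offset, eye_to_picture_distance):
    """Assumed realization of steps S31C-S33C: the field angle of a
    symmetric picture whose curved edge lies edge_offset from the
    optical axis, viewed from eye_to_picture_distance away.
    Returns degrees."""
    return math.degrees(2.0 * math.atan2(edge_offset, eye_to_picture_distance))
```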
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.