CN112419508A - Method for realizing mixed reality based on large-range space accurate positioning - Google Patents

Method for realizing mixed reality based on large-range space accurate positioning

Info

Publication number
CN112419508A
Authority
CN
China
Prior art keywords
user
mixed reality
real space
space
engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011319591.0A
Other languages
Chinese (zh)
Other versions
CN112419508B (en)
Inventor
张燕翔
訾宇彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China (USTC)
Priority to CN202011319591.0A
Publication of CN112419508A
Application granted
Publication of CN112419508B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for realizing mixed reality based on large-range space accurate positioning, which comprises the following steps: establishing multimedia annotations corresponding to the objects that appear in the mixed reality; building a virtual space with a Unity3D engine or a real3D engine to simulate the real space and handle the occlusion relations; and using a large-range space accurate positioning method together with the gyroscope angle data of the intelligent mobile terminal to keep the parameters of the real camera consistent with those of the virtual camera, thereby calibrating the picture and rendering and presenting the multimedia annotations. The method can display mixed reality in accordance with the perspective principle and the occlusion relations of the real physical world, and enables a user to view mixed-reality annotations on an intelligent mobile terminal in a large-scale space environment.

Description

Method for realizing mixed reality based on large-range space accurate positioning
Technical Field
The invention relates to the field of mixed reality, in particular to a method for realizing mixed reality based on large-range space accurate positioning.
Background
In the field of mixed reality, the most commonly used technologies are HoloLens, SLAM and Magic Leap. HoloLens and Magic Leap track the environment with depth cameras, whose scanning range is relatively limited (for HoloLens, roughly 0.8-3.1 meters); once the user walks a long distance from the starting point, the spatial mapping of the original starting position is lost. SLAM reconstructs the actual environment from spatial data acquired via reflections from the ground and walls; for a large space this incurs heavy computation, making SLAM difficult to run for long periods under limited resources and more prone to perceptual aliasing. All three technologies are therefore better suited to small spaces.
Few systems realize mixed reality in a large-scale space; most existing solutions provide augmented reality instead. For example, the markerless edge-tracking AR (augmented reality) system developed by Jihyun et al. in 2008 is an augmented reality museum guide running on a UMPC, an intelligent mobile terminal integrating a camera, an ultrasonic receiver and a gyro sensor. In that system, ultrasonic localization is used to achieve marker-free AR. Its technical core combines an edge-based tracking method with a feature-point-based tracking method: edges are extracted from the camera image with the Canny operator, and correspondences are then found between those edges and the edges obtained by projecting a 3D graphical model onto the image plane with the initial camera projection matrix. The Street Museum app introduced by the Museum of London in 2010 lets users see an augmented reality scene combining the virtual and the real on a mobile device through GPS positioning. However, none of this work realizes mixed reality that conforms to the perspective principle and follows the occlusion relations of the real physical world, nor are the problems of signal occlusion and positioning accuracy well solved. Researchers have also used UWB positioning to fuse virtual and real content on a two-dimensional plane from coordinate information and applied it to stage performance, but at present there is no method for accurately presenting mixed reality in a large-scale three-dimensional space.
Disclosure of Invention
In view of the problems in the prior art, the invention aims to provide a method for realizing mixed reality based on large-range space accurate positioning, which solves the limitation that the existing coordinate-based virtual-real fusion is achieved only on a two-dimensional plane and cannot be achieved in a large-scale three-dimensional space.
The purpose of the invention is realized by the following technical scheme:
the embodiment of the invention provides a method for realizing mixed reality based on large-range space accurate positioning, which comprises the following steps:
step 1) positioning a user through an accurate positioning system arranged in a large-scale space for mixed reality, and sending the obtained user position information in the large-scale space to a data processing server;
step 2) the intelligent mobile terminal of the user sends data of the focal length of the camera and the included angle of the gyroscope to the data processing server;
step 3) the data processing server synchronizes the collected user position information, gyroscope included angle data and camera focal length data with a virtual camera in a 3D engine running in the data processing server, a real space low-precision model which is preset and well rendered and is modeled according to a ratio of 1:1 of a real space of the large-scale space and a corresponding multimedia annotation are placed in the 3D engine, and the real space low-precision model is subjected to transparentization processing;
and 4) after the camera of the intelligent mobile terminal of the user is synchronized with the virtual camera of the 3D engine, the rendered multimedia annotation and the transparentized real space low-precision model are sent to the intelligent mobile terminal of the user for displaying, so that the multimedia annotation corresponding to the shot object appears in a real picture shot by the camera displayed on the screen of the intelligent mobile terminal of the user, namely, the screen of the intelligent mobile terminal of the user is displayed in a mixed reality mode.
According to the technical scheme provided by the invention, the method for realizing mixed reality based on large-range space accurate positioning provided by the embodiment of the invention has the following beneficial effects:
Because a UWB positioning system accurately positions the user within the large-range space, and a matching real-space low-precision model is built, mixed reality can be displayed in accordance with the perspective principle and the occlusion relations of the real physical world, and the user can view mixed-reality annotations on an intelligent mobile terminal in a large-scale space environment.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a method for implementing mixed reality based on large-scale spatial accurate positioning according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the UWB positioning system in the method provided by an embodiment of the invention;
Fig. 3 is a schematic diagram of the hardware devices used in the method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the specific content of the invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention. Details not described in the embodiments belong to the prior art known to those skilled in the art.
Referring to fig. 1 and fig. 3, an embodiment of the present invention provides a method for implementing mixed reality based on large-range space accurate positioning, i.e. accurate positioning in a large-range space whose length or width is roughly 10-300 meters. The method is applicable to mixed-reality-based education in various large indoor and outdoor environments (for example, science education for primary and secondary school students in science venues or museums), and includes:
Step 1) positioning the user with an accurate positioning system arranged in the large-range space used for mixed reality, and sending the obtained position information of the user within the large-range space to a data processing server;
Step 2) the user's intelligent mobile terminal sends its camera focal length data and gyroscope angle data to the data processing server;
Step 3) the data processing server synchronizes the collected user position information, gyroscope angle data and camera focal length data to a virtual camera in a 3D engine running on the data processing server; a pre-arranged and pre-rendered real-space low-precision model, built at a 1:1 scale from the real space of the large-range area, and the corresponding multimedia annotations are placed in the 3D engine, and the real-space low-precision model is made transparent;
Step 4) after the camera of the user's intelligent mobile terminal is synchronized with the virtual camera of the 3D engine, the rendered multimedia annotations and the transparent real-space low-precision model are sent to the user's intelligent mobile terminal for display, so that the multimedia annotation corresponding to a photographed object appears in the real picture captured by the camera and shown on the terminal's screen, i.e. the screen of the user's intelligent mobile terminal presents a mixed-reality display (a minimal code sketch of this synchronization is given immediately after these steps).
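As a purely illustrative sketch (not part of the claimed method), the synchronization in steps 3) and 4) can be expressed as a small Unity C# script that drives the virtual camera from the three data streams; the names VirtualCameraSync, Apply and sensorHeightMm are assumptions made for this example, and the focal-length-to-field-of-view conversion uses the standard pinhole relation fov = 2*atan(sensorHeight / (2*focalLength)):

// Minimal Unity C# sketch (assumption: the latest user position, gyroscope Euler
// angles and camera focal length are delivered to Apply() once per frame by the
// data processing pipeline; all names here are illustrative).
using UnityEngine;

public class VirtualCameraSync : MonoBehaviour
{
    public Camera virtualCamera;          // the virtual camera inside the 3D engine
    public float sensorHeightMm = 4.8f;   // assumed sensor height of the phone camera

    public void Apply(Vector3 userPosition, Vector3 gyroEulerAngles, float focalLengthMm)
    {
        // 1) UWB position of the user drives the virtual camera position.
        virtualCamera.transform.position = userPosition;

        // 2) Gyroscope angles align the virtual view direction with the real camera.
        virtualCamera.transform.rotation = Quaternion.Euler(gyroEulerAngles);

        // 3) Match the real camera's field of view so virtual annotations keep the
        //    same apparent size as the real scene (pinhole camera model).
        float fovDeg = 2f * Mathf.Atan(sensorHeightMm / (2f * focalLengthMm)) * Mathf.Rad2Deg;
        virtualCamera.fieldOfView = fovDeg;
    }
}

With this in place, the virtual camera tracks the user's real camera in position, orientation and zoom, which is the condition under which the rendered annotations line up with the real picture on the terminal's screen.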
In step 1 of the method, the accurate positioning system is either a UWB positioning system or an ultrasonic positioning system.
Referring to fig. 2, the accurate positioning system preferably adopts a UWB positioning system to achieve accurate positioning over a large area. The UWB positioning system uses four base stations 1 (or a multiple of four) and one or more positioning tags 2; each positioning tag 2 is fixed on the user's body or on the user's intelligent mobile terminal and, in cooperation with the base stations 1, its real-time position data are transmitted to the server.
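By way of illustration only, the server-side collection of tag positions can be sketched as a small UDP listener written in C#; the packet layout ("tagId,x,y,z" in meters) and the port number 9000 are assumptions made for this example, since the real wire format is defined by the UWB vendor's SDK rather than by this method:

// Minimal C# sketch of the data processing server's UWB input (assumed protocol:
// one ASCII datagram "tagId,x,y,z" per position fix; port 9000 is arbitrary).
using System;
using System.Collections.Concurrent;
using System.Globalization;
using System.Net;
using System.Net.Sockets;
using System.Text;

public static class UwbPositionListener
{
    // Latest known position of each tag, shared with the 3D-engine side.
    public static readonly ConcurrentDictionary<string, (float x, float y, float z)> Positions =
        new ConcurrentDictionary<string, (float x, float y, float z)>();

    public static void Run(int port = 9000)
    {
        using (var udp = new UdpClient(port))
        {
            var any = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] datagram = udp.Receive(ref any);
                string[] parts = Encoding.ASCII.GetString(datagram).Split(',');
                if (parts.Length != 4) continue;   // ignore malformed packets
                float x = float.Parse(parts[1], CultureInfo.InvariantCulture);
                float y = float.Parse(parts[2], CultureInfo.InvariantCulture);
                float z = float.Parse(parts[3], CultureInfo.InvariantCulture);
                Positions[parts[0]] = (x, y, z);   // keep only the newest fix per tag
            }
        }
    }
}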
In step 3 of the method, the 3D engine running in the data processing server is a Unity3D engine or a real3D engine.
In step 4 of the method, the real-space low-precision model, which is pre-fitted, laid out and rendered and is modeled at a 1:1 scale from the real space of the large-range area, and the corresponding multimedia annotations are placed in the 3D engine, and the real-space low-precision model is made transparent as follows:
Step 41) perform low-precision 1:1 modeling of the real space of the large-range area to obtain the real-space low-precision model, and place it in the 3D engine;
Step 42) create the multimedia annotations and fit and arrange them with the real-space low-precision model in the 3D engine;
Step 43) render the real-space low-precision model and the multimedia annotations separately, and make the real-space low-precision model transparent (one way to realize this in the Unity3D engine is sketched below).
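One common way to realize step 43) in the Unity3D engine, given only as an illustrative sketch, is to swap every material of the real-space low-precision model for a depth-only "occluder" material, so the model still hides annotations that lie behind walls but draws no visible pixels itself; the material asset (for example a shader using ColorMask 0 with depth writing enabled) and the script name MakeModelOccluderOnly are assumptions of this example:

// Minimal Unity C# sketch: attach to the root of the real-space low-precision model.
// Assumption: occluderMaterial is a prepared depth-only material (e.g. ShaderLab
// "ColorMask 0" plus "ZWrite On"), assigned in the Inspector.
using UnityEngine;

public class MakeModelOccluderOnly : MonoBehaviour
{
    public Material occluderMaterial;   // depth-only material (assumed asset)

    void Start()
    {
        foreach (Renderer r in GetComponentsInChildren<Renderer>())
        {
            // Give every sub-mesh of the model the same occlusion material.
            var mats = new Material[r.sharedMaterials.Length];
            for (int i = 0; i < mats.Length; i++) mats[i] = occluderMaterial;
            r.sharedMaterials = mats;
        }
    }
}

The effect is that of a fully transparent model that still participates in depth testing, which matches the behaviour the method asks of the transparentized real-space model.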
In the above method, the corresponding multimedia annotations include at least one of video, audio, images and 3D models.
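Purely as an illustration of how such annotations might be organized inside the 3D engine, the following C# sketch defines a simple annotation record; the class name, enum and fields are assumptions made for the example and are not prescribed by the method:

// Minimal Unity C# sketch of a multimedia annotation record (all names are
// illustrative; the method only requires that each annotation carry one or more
// of text, image, audio, video and 3D-model content).
using UnityEngine;
using UnityEngine.Video;

public enum AnnotationKind { Text, Image, Audio, Video, Model3D }

[System.Serializable]
public class MultimediaAnnotation
{
    public string exhibitId;          // which real-world object the annotation describes
    public AnnotationKind kind;       // which kind of media this annotation carries
    public Vector3 anchorPosition;    // placement within the 1:1 real-space model
    [TextArea] public string text;    // used when kind == Text
    public Texture2D image;           // used when kind == Image
    public AudioClip audio;           // used when kind == Audio
    public VideoClip video;           // used when kind == Video
    public GameObject modelPrefab;    // used when kind == Model3D
}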
The method realizes mixed reality through accurate positioning in a large-range space (using a wide-area positioning technique such as UWB or ultrasound) and builds a virtual space with a 3D engine (a Unity3D engine or a real3D engine) to simulate the real space, which handles the occlusion relations, the calibration of the picture, and the rendering and presentation of the multimedia annotations, allowing the user to view mixed-reality annotations on an intelligent mobile terminal. The method is simple to implement and convenient to operate; it can be applied widely in large spaces, achieves virtual-real fusion within the space, offers a strong sense of space, interactivity and immersion, provides high positioning accuracy and good usability, and is easy to popularize. Compared with other MR devices it is inexpensive. It is particularly suitable for mixed-reality-based education in large indoor and outdoor environments such as science venues and museums, where it can enhance teaching effects, raise interest in learning and engage children's multiple senses so that they experience the blending of the virtual and the real in a real environment; it can also be used in other commercial exhibition settings.
The embodiments of the present invention are described in further detail below.
Referring to fig. 1, an embodiment of the present invention provides a method for implementing mixed reality based on large-scale space accurate positioning, including the following steps:
Step 1) a UWB positioning system is arranged in the large-range space that requires mixed reality; as shown in fig. 2, the UWB positioning system uses 4 base stations (or a multiple of 4) and one or more tags, and can accurately position tagged users over the whole large area;
Step 2) the tag is fixed on the user's intelligent mobile terminal (which may be a smartphone or another hand-held smart electronic device such as a tablet computer or a dedicated terminal) or on the user's body; the UWB positioning system acquires the user's position information and sends it in real time to the data processing server responsible for processing the data;
Step 3) the user's intelligent mobile terminal transmits the camera focal length data and the gyroscope angle data to the data processing server; the camera focal length data controls the size of the virtual picture, and the gyroscope angle data assists in aligning the virtual picture with the real picture;
Step 4) low-precision 1:1 modeling of the real space of the large-range area is performed to obtain the real-space low-precision model, which is placed in a 3D engine (such as a Unity3D engine or a real3D engine); the occlusion problem is handled through this real-space low-precision model;
Step 5) multimedia annotations are created in forms including at least one of video, audio, images and 3D models, and are fitted and arranged with the real-space low-precision model in the 3D engine, so that the virtual content and the occlusion-handling real-space low-precision model can be fitted and rendered together in the 3D engine (see the placement sketch after step 8);
Step 6) the real-space low-precision model and the multimedia annotations are rendered separately, and the real-space low-precision model is made transparent, so that the user sees only the rendered multimedia annotations on the intelligent mobile terminal;
Step 7) the collected user position information, gyroscope angle data and camera focal length data are synchronized to the virtual camera in the 3D engine;
Step 8) after the camera of the user's intelligent mobile terminal is synchronized with the virtual camera of the 3D engine, the rendered multimedia annotations and the transparent real-space low-precision model are sent to the user's intelligent mobile terminal for display, so that the multimedia annotations appear in the real picture captured by the camera and shown on the terminal's screen, i.e. the screen of the user's intelligent mobile terminal presents a virtual-real fused display.
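The placement sketch referred to in step 5) is given below as one possible illustration; the prefab, the anchor transform and the script name ExhibitAnnotation are assumptions for the example, and the billboarding simply keeps the annotation facing the synchronized virtual camera (and therefore the user):

// Minimal Unity C# sketch: place one multimedia annotation at an anchor taken from
// the 1:1 real-space model and keep it facing the virtual camera.
// Assumptions: annotationPrefab (e.g. a quad or world-space canvas carrying the
// media) and anchor are authored in advance; they are not defined by the method.
using UnityEngine;

public class ExhibitAnnotation : MonoBehaviour
{
    public GameObject annotationPrefab;  // object carrying the multimedia content
    public Transform anchor;             // point beside the exhibit in the 1:1 model
    public Camera virtualCamera;         // the synchronized virtual camera

    private GameObject instance;

    void Start()
    {
        instance = Instantiate(annotationPrefab, anchor.position, Quaternion.identity);
    }

    void LateUpdate()
    {
        // Billboard: keep the annotation's front face toward the user. This assumes
        // the visible face looks along the object's local -Z (as with Unity's Quad
        // and world-space UI); flip the sign if the prefab is oriented the other way.
        Vector3 toCamera = virtualCamera.transform.position - instance.transform.position;
        toCamera.y = 0f;   // keep the annotation upright
        if (toCamera.sqrMagnitude > 1e-6f)
            instance.transform.rotation = Quaternion.LookRotation(-toCamera);
    }
}

Because the real-space low-precision model still writes depth, an annotation placed behind a wall in the model is hidden from viewpoints on the other side of that wall, which is how the method avoids annotations showing through walls.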
Examples
In this embodiment, in a museum, the length, width and height of the museum and the volumes and positions of the exhibits are measured, a 1:1 low-precision model of the museum's real space is built in modeling software, and the model is synchronized into the Unity software.
Specifically, in the system shown in fig. 2 and fig. 3, the UWB positioning system of fig. 2 uses four base stations 1 (or a multiple of four) and one or more positioning tags 2; each positioning tag 2 is fixed to the user or to the user's intelligent mobile terminal 3 and, in cooperation with the base stations 1, the tag's real-time position data are transmitted to the server 4.
Fig. 3 illustrates the hardware configuration for virtual-real mixing, in which the camera 8 of the user's mobile terminal is kept consistent with the virtual camera 6. Among the components, the user's mobile terminal 3 is typically a mobile phone but may be another hand-held electronic device; it uploads the actual camera focal length and the gyroscope angle data to the data processing server 4, which synchronizes the position information, the gyroscope angle data of the hand-held mobile terminal 3 and the focal length of the actual camera 8 to the virtual camera 6 in the Unity3D engine or real3D engine and transmits the rendered MR annotation image back to the screen of the hand-held mobile terminal 3. The occlusion-relation processing model 5 is a low-precision 1:1 model used to handle the occlusion problem. The virtual video clip 7 consists of multimedia annotations designed in forms including text, audio, video, images and 3D models; the virtual content and the occlusion-handling low-precision model are fitted and rendered in the Unity3D engine or real3D engine.
Specifically, corresponding multimedia annotations are designed for the exhibits in the museum; the annotation forms include text, images, video, audio, 3D models, etc. The placement positions and occlusion relations of the multimedia annotations are determined from the museum's real-space low-precision model, and the annotations are placed in the Unity3D engine or real3D engine for processing;
The museum's real-space low-precision model and the multimedia annotations are rendered separately in the Unity3D or real3D engine, so that only the multimedia annotations are displayed while the occlusion relations are handled correctly; to this end the real-space low-precision model is given a suitable transparentization treatment;
Virtual camera position data are accessed in the Unity3D engine or real3D engine;
UWB (ultra-wideband) positioning base stations are deployed in the museum, and a tag is fixed to the user's mobile phone;
The user's mobile phone reads the gyroscope angle and camera focal length data (this can be done by a mixed-reality APP downloaded to the phone in advance) and transmits them to the data processing server; the data processing server synchronizes the user position information and the gyroscope angle and camera focal length data of the intelligent mobile terminal to the virtual camera of the Unity3D engine or real3D engine, so that the position and orientation of the virtual camera remain consistent with the user's real phone camera;
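On the handset side, the mixed-reality APP mentioned above could gather and upload the sensor data roughly as in the following sketch; reading the gyroscope through Unity's Input.gyro is a real API, whereas the server address, the port 9001, the comma-separated message format and the fixed focal-length value are assumptions made for the example (in practice the focal length would come from the device's camera API and the message format from the project's own protocol):

// Minimal Unity C# sketch of the phone-side sender (all network details and the
// focal-length constant are illustrative assumptions).
using System.Net.Sockets;
using System.Text;
using UnityEngine;

public class SensorUploader : MonoBehaviour
{
    public string serverAddress = "192.168.1.10";  // data processing server (example value)
    public int serverPort = 9001;                   // example port
    public float focalLengthMm = 26f;               // assumed focal length of the phone camera

    private UdpClient udp;

    void Start()
    {
        Input.gyro.enabled = true;                  // switch on the device gyroscope
        udp = new UdpClient();
    }

    void Update()
    {
        // Gyroscope attitude as Euler angles (degrees), plus the focal length.
        Vector3 euler = Input.gyro.attitude.eulerAngles;
        string msg = euler.x + "," + euler.y + "," + euler.z + "," + focalLengthMm;
        byte[] payload = Encoding.ASCII.GetBytes(msg);
        udp.Send(payload, payload.Length, serverAddress, serverPort);
    }

    void OnDestroy()
    {
        if (udp != null) udp.Close();
    }
}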
When the user points the phone at an exhibit in the museum, the picture actually captured in the real space is seen on the phone screen mixed with the virtual multimedia annotations; because the occlusion relations have been handled, each annotation stays beside its exhibit, and the user does not see annotations through walls or from other rooms;
Mixed-reality multimedia annotation of the museum exhibits is thus realized; the three-dimensional multimedia annotations can be viewed from multiple angles as the user moves, with a good mixing effect.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A method for realizing mixed reality based on large-range space accurate positioning, characterized by comprising the following steps:
Step 1) positioning the user with an accurate positioning system arranged in the large-range space used for mixed reality, and sending the obtained position information of the user within the large-range space to a data processing server;
Step 2) the user's intelligent mobile terminal sends its camera focal length data and gyroscope angle data to the data processing server;
Step 3) the data processing server synchronizes the collected user position information, gyroscope angle data and camera focal length data to a virtual camera in a 3D engine running on the data processing server; a pre-arranged and pre-rendered real-space low-precision model, built at a 1:1 scale from the real space of the large-range area, and the corresponding multimedia annotations are placed in the 3D engine, and the real-space low-precision model is made transparent;
Step 4) after the camera of the user's intelligent mobile terminal is synchronized with the virtual camera of the 3D engine, the rendered multimedia annotations and the transparent real-space low-precision model are sent to the user's intelligent mobile terminal for display, so that the multimedia annotation corresponding to a photographed object appears in the real picture captured by the camera and shown on the terminal's screen, i.e. the screen of the user's intelligent mobile terminal presents a mixed-reality display.
2. The method for realizing mixed reality based on large-range space accurate positioning according to claim 1, wherein the accurate positioning system in step 1 is either a UWB positioning system or an ultrasonic positioning system.
3. The method according to claim 2, wherein the UWB positioning system employs four base stations, or an integer multiple of four base stations, and one or more positioning tags, each positioning tag being fixed to the user or to the user's intelligent mobile terminal.
4. The method for realizing mixed reality based on large-range space accurate positioning according to claim 1, wherein in step 3 the 3D engine running on the data processing server is a Unity3D engine or a real3D engine.
5. The method for realizing mixed reality based on large-range space accurate positioning according to any one of claims 1 to 3, wherein in step 4 the real-space low-precision model, which is pre-fitted, laid out and rendered and is modeled at a 1:1 scale from the real space of the large-range area, and the corresponding multimedia annotations are placed in the 3D engine, and the real-space low-precision model is made transparent as follows:
Step 41) perform low-precision 1:1 modeling of the real space of the large-range area to obtain the real-space low-precision model, and place it in the 3D engine;
Step 42) create the multimedia annotations and fit and arrange them with the real-space low-precision model in the Unity software;
Step 43) render the real-space low-precision model and the multimedia annotations separately, and make the real-space low-precision model transparent.
6. The method for realizing mixed reality based on large-range space accurate positioning according to claim 5, wherein the corresponding multimedia annotations include at least one of video, audio, images and 3D models.
CN202011319591.0A 2020-11-23 2020-11-23 Method for realizing mixed reality based on large-scale space accurate positioning Active CN112419508B (en)

Priority Applications (1)

Application Number: CN202011319591.0A; Priority Date: 2020-11-23; Filing Date: 2020-11-23; Title: Method for realizing mixed reality based on large-scale space accurate positioning (granted as CN112419508B)

Applications Claiming Priority (1)

Application Number: CN202011319591.0A; Priority Date: 2020-11-23; Filing Date: 2020-11-23; Title: Method for realizing mixed reality based on large-scale space accurate positioning (granted as CN112419508B)

Publications (2)

Publication Number: CN112419508A (en); Publication Date: 2021-02-26
Publication Number: CN112419508B (en); Publication Date: 2024-03-29

Family

ID=74778737

Family Applications (1)

Application Number: CN202011319591.0A; Title: Method for realizing mixed reality based on large-scale space accurate positioning; Status: Active (granted as CN112419508B)

Country Status (1)

Country Link
CN (1) CN112419508B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190073832A1 (en) * 2016-07-09 2019-03-07 Doubleme, Inc. Mixed-Reality Space Map Creation and Mapping Format Compatibility-Enhancing Method for a Three-Dimensional Mixed-Reality Space and Experience Construction Sharing System
CN111179436A (en) * 2019-12-26 2020-05-19 浙江省文化实业发展有限公司 Mixed reality interaction system based on high-precision positioning technology

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190073832A1 (en) * 2016-07-09 2019-03-07 Doubleme, Inc. Mixed-Reality Space Map Creation and Mapping Format Compatibility-Enhancing Method for a Three-Dimensional Mixed-Reality Space and Experience Construction Sharing System
CN111179436A (en) * 2019-12-26 2020-05-19 浙江省文化实业发展有限公司 Mixed reality interaction system based on high-precision positioning technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
江林: "A preliminary exploration of the application of augmented reality technology in mobile learning" (增强现实技术在移动学习中的应用初探), 数字技术与应用 (Digital Technology & Application), no. 12
陈宝权; 秦学英: "Virtual-real fusion and the blending of human and machine intelligence in mixed reality" (混合现实中的虚实融合与人机智能交融), 中国科学:信息科学 (Scientia Sinica Informationis), no. 12

Also Published As

Publication Number: CN112419508B (en); Publication Date: 2024-03-29

Similar Documents

Publication Publication Date Title
Huang et al. A 3D GIS-based interactive registration mechanism for outdoor augmented reality system
CN104376118B (en) The outdoor moving augmented reality method of accurate interest point annotation based on panorama sketch
WO2017156949A1 (en) Transparent display method and transparent display apparatus
Andersen et al. Virtual annotations of the surgical field through an augmented reality transparent display
Honkamaa et al. Interactive outdoor mobile augmentation using markerless tracking and GPS
CN108540542A (en) A kind of mobile augmented reality system and the method for display
JP2005135355A (en) Data authoring processing apparatus
US10733777B2 (en) Annotation generation for an image network
CN106846237A (en) A kind of enhancing implementation method based on Unity3D
CN108564662A (en) The method and device that augmented reality digital culture content is shown is carried out under a kind of remote scene
CN104881114A (en) Angle rotation real-time matching method based on try wearing of 3D (three dimensional) glasses
CN102647512A (en) All-round display method of spatial information
CN113253842A (en) Scene editing method and related device and equipment
Selvam et al. Augmented reality for information retrieval aimed at museum exhibitions using smartphones
CN108615260A (en) The method and device that shows of augmented reality digital culture content is carried out under a kind of exception actual environment
CN108955723B (en) Method for calibrating augmented reality municipal pipe network
CN116109684B (en) Online video monitoring two-dimensional and three-dimensional data mapping method and device for variable electric field station
CN112675541A (en) AR information sharing method and device, electronic equipment and storage medium
CN112419508B (en) Method for realizing mixed reality based on large-scale space accurate positioning
CN105183142A (en) Digital information reproduction method by means of space position nailing
Min et al. Interactive registration for Augmented Reality GIS
Johri et al. Marker-less augmented reality system for home interior and designing
Siegl et al. An augmented reality human–computer interface for object localization in a cognitive vision system
Zhang et al. Mixed reality annotations system for museum space based on the UWB positioning and mobile device
Han et al. The application of augmented reality technology on museum exhibition—a museum display project in Mawangdui Han dynasty tombs

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant