CN111179436A - Mixed reality interaction system based on high-precision positioning technology


Info

Publication number
CN111179436A
Authority
CN
China
Prior art keywords: model, precision positioning, mixed reality, data, user
Prior art date: 2019-12-26
Legal status: Pending
Application number
CN201911368661.9A
Other languages
Chinese (zh)
Inventor
张大川
Current Assignee
Zhejiang Culture Industrial Development Co ltd
Original Assignee
Zhejiang Culture Industrial Development Co ltd
Priority date: 2019-12-26
Filing date: 2019-12-26
Publication date: 2020-05-19
Application filed by Zhejiang Culture Industrial Development Co ltd
Priority to CN201911368661.9A
Publication of CN111179436A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a mixed reality interaction system based on high-precision positioning technology, comprising: a high-precision positioning system for acquiring the user's current spatial position; a 3D path module for providing a positional reference for 3D model materials and for judging boundaries and occlusion between the 3D models and actual walls and buildings; an application program module for managing, computing and rendering the 3D models and realizing a range of interactive effects between the user and each 3D model; an interaction device through which the user sends operation instructions to the virtual image; and a shooting and display module for combining the pictures shot by the camera with the 3D model materials and presenting the result to the user. In this scheme, the user's spatial position, orientation and inclination angle are determined by the high-precision positioning system, the picture collected by the camera is made to coincide with the 3D path or 3D structure model built into the application program, and the entry threshold of mixed reality is thereby lowered.

Description

Mixed reality interaction system based on high-precision positioning technology
Technical Field
The invention relates to the technical field of mixed reality, and in particular to a mixed reality interaction system based on high-precision positioning technology.
Background
Mixed Reality (MR) technology is an extension of Virtual Reality (VR) and Augmented Reality (AR) technology. Its main characteristic is that the user sees a picture combining virtual content with real content: compared with the purely virtual space provided by virtual reality, mixed reality is grounded in a real scene, and compared with the spatial limitations of augmented reality, mixed reality offers a higher degree of freedom. Mixed reality technology therefore has broad prospects in the fields of scientific research, education and office work, as well as leisure, entertainment and creative work.
The current development of mixed reality technology is still limited mainly by the performance of portable devices and by the methods used to achieve mixed reality.
The performance of portable devices (e.g., mobile phones, tablet computers, wearable devices) is limited by processor speed, by the capacity and volume of storage, and by the device's overall heat output and heat dissipation. These bottlenecks will gradually be resolved as hardware technology develops.
The current mainstream method of realizing mixed reality is typified by Microsoft's mixed reality device HoloLens. It works by instantly scanning the surrounding walls, simulating the surrounding environment in software, then computing and loading the virtual content, and finally projecting the virtual content onto the retinas of the left and right eyes. Its advantages are excellent immersion and a near-perfect picture; its disadvantages are a high price, very high software and hardware requirements, short battery life, and noticeable weight for a wearable device.
Disclosure of Invention
The invention mainly solves the technical problems of the prior art, such as high price, high software and hardware requirements and poor portability, by providing a mixed reality interaction system based on high-precision positioning technology that is easy to operate and realize.
The invention mainly solves the technical problems through the following technical scheme: a mixed reality interaction system based on high-precision positioning technology comprises:
the high-precision positioning system is used for acquiring the current spatial position of the user;
the 3D path module is used for providing a positional reference for 3D model materials and judging the boundaries and occlusion between the 3D models and actual walls and buildings;
the application program module is used for managing, calculating and rendering the 3D models and realizing a series of interactive effects between a user and each 3D model;
the interaction device is used for enabling a user to send an operation instruction to the virtual image;
and the shooting and display module is used for combining the pictures shot by the camera with the 3D model materials and presenting the combined pictures to a user.
Preferably, the data on the user's current spatial position obtained by the high-precision positioning system includes any one or several of the following:
(1) positioning data (such as GPS positioning, Beidou positioning and the like) directly provided by a navigation satellite and a ground base station;
(2) positioning data obtained by secondary positioning calculation according to a navigation satellite and/or a ground base station and matched with auxiliary wireless signals (including but not limited to Bluetooth, WiFi, RFID, ZigBee, infrared and the like);
(3) positioning data obtained from signal emitting devices self-erected in a specific area;
(4) positioning data obtained by signal receiving devices self-erected in a specific area working with a transmitting device held by the user.
Preferably, the spatial position data obtained by the high-precision positioning system also includes horizontal attitude data and orientation data provided by the mobile device, wearable device or third-party device; such data is obtained through a gyroscope. The high-precision positioning system comprises a signal receiving or transmitting chip and a signal conversion program in the mobile device, wearable device or third-party device.
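As a concrete illustration only (the patent does not prescribe an implementation), the position and orientation data described above could be gathered on a commodity mobile device through the browser's Geolocation and DeviceOrientation APIs. The `UserPose` shape and its field names below are assumptions made for this sketch.

```typescript
// Minimal sketch: collect a satellite/base-station position fix and
// gyroscope-derived orientation into one pose record in a mobile browser.
// The UserPose shape is an illustrative assumption, not the patent's format.
interface UserPose {
  latitude: number;  // degrees, from the positioning source
  longitude: number; // degrees
  heading: number;   // degrees, compass orientation (alpha)
  tilt: number;      // degrees, front-back inclination angle (beta)
}

const pose: Partial<UserPose> = {};

// Position: the device's positioning chip (GPS/BeiDou, possibly assisted
// by network signals) is exposed through the Geolocation API.
navigator.geolocation.watchPosition(
  (fix) => {
    pose.latitude = fix.coords.latitude;
    pose.longitude = fix.coords.longitude;
  },
  (err) => console.error('positioning failed', err),
  { enableHighAccuracy: true }
);

// Orientation and inclination: supplied by the gyroscope/magnetometer
// through DeviceOrientation events.
window.addEventListener('deviceorientation', (e: DeviceOrientationEvent) => {
  if (e.alpha !== null) pose.heading = e.alpha;
  if (e.beta !== null) pose.tilt = e.beta;
});
```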
Preferably, the 3D model and data controlled by the 3D path module include:
(1) a 1:1-scale 3D scene frame model made from the actual scene, wherein the opacity (alpha value) of the frame model is 0 (suitable for scenes with a complex structure);
(2) a 1:1-scale 3D ground path model made from the actual scene, wherein the opacity (alpha value) of the ground path model is 0 and the ground path carries the limiting height values of the walls (suitable for scenes with a simple structure);
(3) longitude and latitude data measured from the actual scene (suitable for ultra-large-scale scenes).
The 3D path or 3D structure model may be implemented by either field survey modeling or live action scan modeling.
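For example, in a three.js-based front end (an assumed engine choice; the patent names none), a fully transparent model with alpha 0 can be realized as a depth-only material: it writes to the depth buffer so virtual content behind a real wall is hidden, while drawing no color so the live camera picture remains visible.

```typescript
import * as THREE from 'three';

const scene = new THREE.Scene();

// Depth-only "invisible wall": writes depth but no color, so virtual
// models behind it are occluded while the camera image shows through.
function makeOccluderMaterial(): THREE.MeshBasicMaterial {
  const mat = new THREE.MeshBasicMaterial();
  mat.colorWrite = false; // draw nothing visible (effective alpha of 0)
  mat.depthWrite = true;  // still occlude content behind the wall
  return mat;
}

// Example: one 1:1 wall segment from the pre-measured scene frame model.
// The dimensions are placeholders for the surveyed geometry.
const wall = new THREE.Mesh(
  new THREE.BoxGeometry(4.0, 2.8, 0.2),
  makeOccluderMaterial()
);
wall.renderOrder = -1; // render occluders before the visible virtual models
scene.add(wall);
```

Writing depth without color is what lets the real wall in the camera picture appear to hide virtual models standing behind it.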
Preferably, the application program module includes:
(1) a standalone application program with all content integrated;
(2) a network application based on an ordinary network, divided into a front-end program, a back-end program and a storage database;
(3) a network application based on a high-speed network, divided into a front-end program, a cloud program and a storage database.
Preferably, the application program module further includes:
the physics engine module, used for rendering the scene light source according to the local time of the outdoor scene (see the sketch after this list); it is integrated in the front-end device (general mode) or the back-end server (cloud mode);
and the model library module, used for storing the models; it is located on the front-end device or the back-end server.
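As a sketch of the time-driven lighting mentioned for the physics engine module, a directional light's elevation and intensity can follow the local clock. The sinusoidal sun model below is a crude assumption made for illustration, not the patent's actual lighting computation.

```typescript
import * as THREE from 'three';

// Crude time-of-day sun model: elevation peaks at local noon and the
// light switches off at night. Purely illustrative.
function sunLightForLocalTime(date: Date): THREE.DirectionalLight {
  const hours = date.getHours() + date.getMinutes() / 60;
  // Map 06:00..18:00 onto an elevation arc of 0..pi/2..0 radians.
  const elevation = Math.sin(((hours - 6) / 12) * Math.PI) * (Math.PI / 2);
  const intensity = Math.max(0, Math.sin(elevation));
  const light = new THREE.DirectionalLight(0xffffff, intensity);
  light.position
    .set(Math.cos(elevation), Math.max(0, Math.sin(elevation)), 0)
    .multiplyScalar(100);
  return light;
}

// Usage: refresh the light whenever the outdoor scene's clock advances.
const sun = sunLightForLocalTime(new Date());
```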
The required virtual models are built into the application program; each virtual model can be placed at the position corresponding to the 3D path or 3D structure model, and models requiring interaction are configured with corresponding interaction options and animations.
Preferably, when the model library module is located in the back-end server and rendering is completed by the front-end device, the data transmitted from the back-end server to the front-end device comprises model data, model position data, model action data and model sound-effect data; when rendering is completed by the back-end server, the data transmitted to the front-end device comprises rendered streaming-media audio/video data.
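The two transmission modes can be pictured as two payload shapes. The TypeScript interfaces below are an assumption about how such payloads might be typed; all field names are invented for the sketch.

```typescript
// Mode 1: front-end rendering. The back-end server ships model assets
// plus placement/animation state and the device renders locally.
interface ModelPayload {
  modelData: ArrayBuffer;             // mesh and texture assets
  position: [number, number, number]; // placement on the 3D path
  action: string;                     // current animation clip name
  soundEffect?: ArrayBuffer;          // optional audio clip
}

// Mode 2: cloud rendering. The back-end server renders and streams the
// finished audio/video; the device only decodes and displays it.
interface StreamPayload {
  mediaStreamUrl: string;             // endpoint of the rendered A/V stream
  codec: 'h264' | 'h265' | 'av1';     // stream encoding
}

type BackendPayload = ModelPayload | StreamPayload;
```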
Preferably, the interaction device comprises any one or several of the following:
(1) a mobile device, which sends instructions by tapping or sliding on the touch screen;
(2) a wearable somatosensory device, which sends instructions through limb movements;
(3) a handle-type device, which sends instructions through handle-button operation;
(4) a handle-type somatosensory device, which sends instructions through handle-button operation and limb movements.
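However the instruction originates (touch, body movement or handle buttons), it can be normalized into a single instruction stream for the virtual image. The sketch below assumes a web front end; the `Instruction` union and handler names are invented for illustration.

```typescript
// Normalize heterogeneous input devices into one instruction stream.
type Instruction =
  | { kind: 'select'; x: number; y: number } // touch tap
  | { kind: 'drag'; dx: number; dy: number } // touch slide
  | { kind: 'gesture'; name: string }        // somatosensory limb action
  | { kind: 'button'; id: number };          // handle key press

type InstructionHandler = (cmd: Instruction) => void;

// Touch-screen source: taps become 'select', slides become 'drag'.
function attachTouchInput(el: HTMLElement, handle: InstructionHandler): void {
  el.addEventListener('pointerdown', (e) =>
    handle({ kind: 'select', x: e.clientX, y: e.clientY })
  );
  el.addEventListener('pointermove', (e) => {
    if (e.buttons > 0) {
      handle({ kind: 'drag', dx: e.movementX, dy: e.movementY });
    }
  });
}
```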
Preferably, the shooting and display module includes any one or several of the following:
(1) a mobile device with a single camera and an ordinary display screen;
(2) a mobile device with a 3D camera group and a naked-eye 3D display screen;
(3) a head-mounted augmented reality (AR) or mixed reality (MR) device;
(4) a 3D camera device with a head-mounted virtual reality (VR) device.
Preferably, when the shooting and display module uses an ordinary mobile device, the displayed picture comprises at least a live-action picture layer, a virtual picture layer and an interactive operation interface layer; when it uses a 3D mobile device, the displayed picture comprises at least a live-action picture layer, a virtual picture layer and an interactive operation interface layer, and two sets of pictures are rendered separately for the left and right eyes to achieve stereoscopic imaging; when it uses a head-mounted device, the displayed picture comprises at least a virtual picture layer and an interactive operation interface layer, and two sets of pictures are likewise rendered separately for the left and right eyes.
The positioning chip and gyroscope provided by the mobile terminal device or a third-party device are used to determine the user's spatial position, orientation and inclination angle, so that the picture collected by the camera coincides with the 3D path or 3D structure model built into the application program. In the case of a head-mounted device, the picture seen in front of the eyes is made to coincide with the built-in 3D path or 3D structure model. Note that the smaller the distance between the positioning chip and the camera, the better the final composite picture fits; if a third-party chip is used for high-precision positioning, the positioning device should be fixed near the camera.
The picture collected by the camera, the picture of the virtual models and the interactive operation interface are superposed and displayed on the mobile terminal from bottom to top. The end user can then interact with the mixed reality image via the touch screen or another interaction device.
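As one possible realization of this bottom-to-top layering on a web-based mobile front end (an assumed platform), the camera feed can occupy a video element, with a transparent WebGL canvas for the virtual picture layer above it and ordinary DOM controls as the interactive interface on top. Element names are illustrative.

```typescript
// Bottom-to-top: live camera video, transparent WebGL canvas for the
// virtual models, DOM elements for the interactive operation interface.
async function buildLayers(root: HTMLElement): Promise<void> {
  // Layer 1 (bottom): rear camera feed.
  const video = document.createElement('video');
  video.autoplay = true;
  video.playsInline = true;
  video.srcObject = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' },
  });
  video.style.cssText = 'position:absolute;inset:0;z-index:0;';

  // Layer 2 (middle): virtual picture layer on an alpha-enabled canvas,
  // so the camera picture shows through wherever nothing is drawn.
  const canvas = document.createElement('canvas');
  canvas.getContext('webgl', { alpha: true });
  canvas.style.cssText = 'position:absolute;inset:0;z-index:1;';

  // Layer 3 (top): interactive operation interface.
  const ui = document.createElement('div');
  ui.style.cssText = 'position:absolute;inset:0;z-index:2;';

  root.append(video, canvas, ui);
}
```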
Compared with the prior art, this scheme does not build a 3D path or 3D structure model of the external environment from the pictures shot by the camera; instead, the models are called directly from the front-end device or back-end server according to the user's position and orientation, which saves a large amount of computing resources. The 3D path or 3D structure models of the environment and buildings held on the front-end device or back-end server are built in advance by measurement or live-action scanning. The image data shot by the camera in real time does not enter the application program module; it is only mixed into the displayed picture.
The invention solves the problem of the excessively high entry threshold for mixed reality: with an ordinary mobile terminal and the corresponding application program, a user can enjoy the most basic mixed reality functions, and with high-end equipment the user can also experience high-end mixed reality functions.
Drawings
FIG. 1 is a schematic diagram of an embodiment of the present invention;
In the figure: 1. high-precision positioning system; 2. 3D path module; 3. application program module; 4. interaction device; 5. shooting and display module.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings.
Embodiment: the mixed reality interaction system based on high-precision positioning technology in this embodiment, as shown in FIG. 1, comprises:
the high-precision positioning system 1 is used for acquiring the current spatial position of a user;
the 3D path module 2 is used for providing a positional reference for 3D model materials and judging the boundaries and occlusion between the 3D models and actual walls and buildings;
the application program module 3 is used for managing, calculating and rendering the 3D models and realizing a series of interactive effects between a user and each 3D model;
the interaction device 4 is used for enabling a user to send an operation instruction to the virtual image;
and the shooting and display module 5 is used for combining the pictures shot by the camera with the 3D model materials and presenting the combined pictures to the user.
The data on the user's current spatial position obtained by the high-precision positioning system includes any one or several of the following:
(1) positioning data (such as GPS positioning, Beidou positioning and the like) directly provided by a navigation satellite and a ground base station;
(2) positioning data obtained by secondary positioning calculation according to a navigation satellite and/or a ground base station and matched with auxiliary wireless signals (including but not limited to Bluetooth, WiFi, RFID, ZigBee, infrared and the like);
(3) positioning data obtained from signal emitting devices self-erected in a specific area;
(4) positioning data obtained by signal receiving devices self-erected in a specific area working with a transmitting device held by the user.
The spatial position data obtained by the high-precision positioning system also includes horizontal attitude data and orientation data provided by the mobile device, wearable device or third-party device; such data is obtained through a gyroscope. The high-precision positioning system comprises a signal receiving or transmitting chip and a signal conversion program in the mobile device, wearable device or third-party device.
The 3D model and data controlled by the 3D path module include:
(1) a 1:1-scale 3D scene frame model made from the actual scene, wherein the opacity (alpha value) of the frame model is 0 (suitable for scenes with a complex structure);
(2) a 1:1-scale 3D ground path model made from the actual scene, wherein the opacity (alpha value) of the ground path model is 0 and the ground path carries the limiting height values of the walls (suitable for scenes with a simple structure);
(3) longitude and latitude data measured from the actual scene (suitable for ultra-large-scale scenes).
The 3D path or 3D structure model may be implemented by either field survey modeling or live action scan modeling.
The application program module comprises:
(1) a standalone application program with all content integrated;
(2) a network application based on an ordinary network, divided into a front-end program, a back-end program and a storage database;
(3) a network application based on a high-speed network, divided into a front-end program, a cloud program and a storage database.
The application module further comprises:
the physics engine module, used for rendering the scene light source according to the local time of the outdoor scene; it is integrated in the front-end device (general mode) or the back-end server (cloud mode);
and the model library module, used for storing the models; it is located on the front-end device or the back-end server.
The required virtual models are built into the application program; each virtual model is placed at the position corresponding to the 3D path or 3D structure model, and models requiring interaction are configured with corresponding interaction options and animations.
When the model library module is located in the back-end server and rendering is completed by the front-end device, the data transmitted from the back-end server to the front-end device comprises model data, model position data, model action data and model sound-effect data; when rendering is completed by the back-end server, the data transmitted to the front-end device comprises rendered streaming-media audio/video data.
The interaction device comprises any one or more of the following:
(1) a mobile device, which sends instructions by tapping or sliding on the touch screen;
(2) a wearable somatosensory device, which sends instructions through limb movements;
(3) a handle-type device, which sends instructions through handle-button operation;
(4) a handle-type somatosensory device, which sends instructions through handle-button operation and limb movements.
The shooting and display module comprises any one or several of the following:
(1) a mobile device with a single camera and an ordinary display screen;
(2) a mobile device with a 3D camera group and a naked-eye 3D display screen;
(3) a head-mounted augmented reality (AR) or mixed reality (MR) device;
(4) a 3D camera device with a head-mounted virtual reality (VR) device.
When the shooting and display module uses an ordinary mobile device, the displayed picture comprises at least a live-action picture layer, a virtual picture layer and an interactive operation interface layer; when it uses a 3D mobile device, the displayed picture comprises at least a live-action picture layer, a virtual picture layer and an interactive operation interface layer, and two sets of pictures are rendered separately for the left and right eyes to achieve stereoscopic imaging; when it uses a head-mounted device, the displayed picture comprises at least a virtual picture layer and an interactive operation interface layer, and two sets of pictures are likewise rendered separately for the left and right eyes.
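For the left-eye/right-eye rendering described above, a stereo camera helper can derive the two eye views from a single camera and render each into half of the viewport. The sketch uses three.js (again an assumed engine choice) and its StereoCamera utility.

```typescript
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer({ alpha: true });
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, 0.5, 0.1, 1000);
const stereo = new THREE.StereoCamera();

// Render the same scene twice per frame, once for each eye.
function renderStereoFrame(width: number, height: number): void {
  camera.updateMatrixWorld();
  stereo.update(camera); // derive the left/right eye cameras

  renderer.setScissorTest(true);

  renderer.setViewport(0, 0, width / 2, height); // left eye, left half
  renderer.setScissor(0, 0, width / 2, height);
  renderer.render(scene, stereo.cameraL);

  renderer.setViewport(width / 2, 0, width / 2, height); // right eye
  renderer.setScissor(width / 2, 0, width / 2, height);
  renderer.render(scene, stereo.cameraR);

  renderer.setScissorTest(false);
}
```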
This scheme first requires that the mixed reality area be fully covered by a high-precision positioning signal that can be converted into high-precision positioning data. The sources of the positioning data include but are not limited to:
(1) high-precision positioning data provided directly by navigation satellites and ground base stations;
(2) high-precision positioning data obtained by secondary positioning calculation from navigation satellites and ground base stations together with auxiliary signals such as Bluetooth and WiFi;
(3) high-precision positioning data obtained from signal transmitting devices self-erected in a specific area;
(4) high-precision positioning data obtained by signal receiving devices self-erected in a specific area working with a transmitting device held by the user.
The scheme also requires an equal-proportion (1:1) 3D path or 3D structure model of the mixed reality area, which can be produced by on-site survey modeling or by live-action scan modeling. The higher the accuracy of the model, the better the final fit between the virtual image and the real scene. The alpha value of the model defaults to 0, i.e., fully transparent. For an ultra-large-scale mixed reality scene, longitude and latitude data can be used directly in place of a traditional 3D model.
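When longitude and latitude data stands in for a modeled scene, each coordinate must be mapped into the engine's local frame. A lightweight option (assumed here; the patent does not specify a projection) is an equirectangular approximation around a reference origin, which is accurate enough over city-scale distances.

```typescript
// Equirectangular approximation: map (lat, lon) in degrees to metres
// east/north of a reference origin. Adequate for city-scale MR scenes.
const EARTH_RADIUS_M = 6_371_000;

function latLonToLocal(
  lat: number, lon: number,       // point to convert
  refLat: number, refLon: number  // scene origin
): { east: number; north: number } {
  const toRad = Math.PI / 180;
  const east =
    (lon - refLon) * toRad * EARTH_RADIUS_M * Math.cos(refLat * toRad);
  const north = (lat - refLat) * toRad * EARTH_RADIUS_M;
  return { east, north };
}

// Example: place a virtual model surveyed at a given lat/lon relative to
// the scene origin (the coordinates here are placeholders).
const offset = latLonToLocal(30.2592, 120.2190, 30.2585, 120.2180);
```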
The required virtual models are built into the application program; each virtual model is placed at the position corresponding to the 3D path or 3D structure model, and models requiring interaction are configured with corresponding interaction options and animations.
In the physics engine used by the application program, the transparent walls need to act as occluders, so that a virtual model being loaded is not shown prematurely from behind a wall, which would harm the user experience.
The positioning chip and gyroscope provided by the mobile terminal device or a third-party device are used to determine the user's spatial position, orientation and inclination angle, so that the picture collected by the camera coincides with the 3D path or 3D structure model built into the application program. In the case of a head-mounted device, the picture seen in front of the eyes is made to coincide with the built-in 3D path or 3D structure model. Note that the smaller the distance between the positioning chip and the camera, the better the final composite picture fits; if a third-party chip is used for high-precision positioning, the positioning device should be fixed near the camera.
The picture collected by the camera, the picture of the virtual models and the interactive operation interface are superposed and displayed on the mobile terminal from bottom to top. The end user can then interact with the mixed reality image via the touch screen or another interaction device.
This scheme solves the problem of the excessively high entry threshold for mixed reality users: a user only needs to download and install an application on a mobile phone to enjoy the most basic mixed reality functions, and with high-end equipment can equally experience high-end mixed reality functions.
This scheme can provide technical support for fields such as indoor entertainment, outdoor entertainment, experimental teaching, science and technology exhibitions and smart cities.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.
Although terms such as mixed reality, positioning and interaction are used frequently herein, the possibility of using other terms is not excluded. These terms are used merely to describe and explain the nature of the invention more conveniently; they are not to be construed as imposing any additional limitation contrary to the spirit of the invention.

Claims (10)

1. A mixed reality interaction system based on high-precision positioning technology is characterized by comprising:
the high-precision positioning system is used for acquiring the current spatial position of the user;
the 3D path module is used for providing a positional reference for 3D model materials and judging the boundaries and occlusion between the 3D models and actual walls and buildings;
the application program module is used for managing, calculating and rendering the 3D models and realizing a series of interactive effects between a user and each 3D model;
the interaction device is used for enabling a user to send an operation instruction to the virtual image;
and the shooting and display module is used for combining the pictures shot by the camera with the 3D model materials and presenting the combined pictures to a user.
2. The mixed reality interaction system based on the high-precision positioning technology as claimed in claim 1, wherein the data on the user's current spatial position obtained by the high-precision positioning system includes any one or several of the following:
(1) positioning data directly provided by a navigation satellite and a ground base station;
(2) positioning data obtained by secondary positioning calculation from navigation satellites and/or ground base stations together with auxiliary wireless signals;
(3) positioning data obtained from signal emitting devices self-erected in a specific area;
(4) positioning data obtained by signal receiving devices self-erected in a specific area working with a transmitting device held by the user.
3. The mixed reality interaction system based on the high-precision positioning technology as claimed in claim 2, wherein the data on the user's current spatial position obtained by the high-precision positioning system further includes horizontal attitude data and orientation data provided by the mobile device, wearable device or a third-party device.
4. The mixed reality interaction system based on high-precision positioning technology as claimed in claim 1, wherein the 3D model and data controlled by the 3D path module comprise:
(1) a 1:1-scale 3D scene frame model made from the actual scene, wherein the opacity of the frame model is 0;
(2) a 1:1-scale 3D ground path model made from the actual scene, wherein the opacity of the ground path model is 0 and the ground path carries the limiting height values of the walls;
(3) longitude and latitude data measured from the actual scene.
5. The mixed reality interaction system based on high-precision positioning technology as claimed in claim 1, wherein the application program module comprises:
(1) a standalone application program with all content integrated;
(2) a network application based on an ordinary network, divided into a front-end program, a back-end program and a storage database;
(3) a network application based on a high-speed network, divided into a front-end program, a cloud program and a storage database.
6. The mixed reality interaction system based on high-precision positioning technology as claimed in claim 5, wherein the application program module further comprises:
the physics engine module, used for rendering the scene light source according to the local time of the outdoor scene, integrated in the front-end device or the back-end server;
and the model library module, used for storing the models; it is located on the front-end device or the back-end server.
7. The mixed reality interaction system based on the high-precision positioning technology as claimed in claim 6, wherein when the model library module is located in the back-end server and rendering is completed by the front-end device, the data transmitted from the back-end server to the front-end device comprises model data, model position data, model action data and model sound-effect data; and when rendering is completed by the back-end server, the data transmitted to the front-end device comprises rendered streaming-media audio/video data.
8. The mixed reality interaction system based on the high-precision positioning technology as claimed in claim 1, wherein the interaction device comprises any one or more of the following:
(1) a mobile device, which sends instructions by tapping or sliding on the touch screen;
(2) a wearable somatosensory device, which sends instructions through limb movements;
(3) a handle-type device, which sends instructions through handle-button operation;
(4) a handle-type somatosensory device, which sends instructions through handle-button operation and limb movements.
9. The mixed reality interaction system based on the high-precision positioning technology as claimed in claim 1, wherein the shooting and display module comprises any one or more of the following:
(1) a mobile device with a single camera and an ordinary display screen;
(2) a mobile device with a 3D camera group and a naked-eye 3D display screen;
(3) a head-mounted augmented reality or mixed reality device;
(4) a 3D camera device with a head-mounted virtual reality device.
10. The system as claimed in claim 9, wherein when the shooting and display module uses an ordinary mobile device, the displayed picture comprises at least a live-action picture layer, a virtual picture layer and an interactive operation interface layer; when it uses a 3D mobile device, the displayed picture comprises at least a live-action picture layer, a virtual picture layer and an interactive operation interface layer, and two sets of pictures are rendered separately for the left and right eyes to achieve stereoscopic imaging; when it uses a head-mounted device, the displayed picture comprises at least a virtual picture layer and an interactive operation interface layer, and two sets of pictures are likewise rendered separately for the left and right eyes.
Application CN201911368661.9A, priority date 2019-12-26, filing date 2019-12-26: Mixed reality interaction system based on high-precision positioning technology (status: Pending; published as CN111179436A)

Priority Applications (1)

Application Number: CN201911368661.9A; Priority Date: 2019-12-26; Filing Date: 2019-12-26; Title: Mixed reality interaction system based on high-precision positioning technology

Applications Claiming Priority (1)

Application Number: CN201911368661.9A; Priority Date: 2019-12-26; Filing Date: 2019-12-26; Title: Mixed reality interaction system based on high-precision positioning technology

Publications (1)

Publication Number: CN111179436A; Publication Date: 2020-05-19

Family

ID=70656418

Family Applications (1)

Application Number: CN201911368661.9A; Title: Mixed reality interaction system based on high-precision positioning technology; Priority Date: 2019-12-26; Filing Date: 2019-12-26; Status: Pending

Country Status (1)

Country Link
CN (1) CN111179436A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103162682A (en) * 2011-12-08 2013-06-19 中国科学院合肥物质科学研究院 Indoor path navigation method based on mixed reality
GB2519744A (en) * 2013-10-04 2015-05-06 Linknode Ltd Augmented reality systems and methods
WO2018058601A1 (en) * 2016-09-30 2018-04-05 深圳达闼科技控股有限公司 Method and system for fusing virtuality and reality, and virtual reality device
CN106780754A (en) * 2016-11-30 2017-05-31 福建北极光虚拟视觉展示科技有限公司 A kind of mixed reality method and system
CN110400375A (en) * 2019-07-31 2019-11-01 陶峰 Mixed reality interactive system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111679743A (en) * 2020-08-11 2020-09-18 南京瑞巨数码科技有限公司 Method for realizing posture interaction naked eye three-dimensional mixed virtual reality system
CN114519935A (en) * 2020-11-20 2022-05-20 华为技术有限公司 Road recognition method and device
CN112419508A (en) * 2020-11-23 2021-02-26 中国科学技术大学 Method for realizing mixed reality based on large-range space accurate positioning
CN112419508B (en) * 2020-11-23 2024-03-29 中国科学技术大学 Method for realizing mixed reality based on large-scale space accurate positioning
CN112558761A (en) * 2020-12-08 2021-03-26 南京航空航天大学 Remote virtual reality interaction system and method for mobile terminal
CN114937118A (en) * 2022-06-09 2022-08-23 北京新唐思创教育科技有限公司 Model conversion method, apparatus, device and medium
CN114937118B (en) * 2022-06-09 2023-03-21 北京新唐思创教育科技有限公司 Model conversion method, apparatus, device and medium

Similar Documents

Publication Publication Date Title
CN111179436A (en) Mixed reality interaction system based on high-precision positioning technology
US20240112430A1 (en) Techniques for capturing and displaying partial motion in virtual or augmented reality scenes
Panou et al. An architecture for mobile outdoors augmented reality for cultural heritage
US10535116B2 (en) Shared virtual reality
Arth et al. The history of mobile augmented reality
US9804257B2 (en) Methods and systems for an immersive virtual reality system using multiple active markers
US10495726B2 (en) Methods and systems for an immersive virtual reality system using multiple active markers
US11557083B2 (en) Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method
CN109584295A (en) The method, apparatus and system of automatic marking are carried out to target object in image
Gupta et al. Indoor mapping for smart cities—An affordable approach: Using Kinect Sensor and ZED stereo camera
US9595294B2 (en) Methods, systems and apparatuses for multi-directional still pictures and/or multi-directional motion pictures
Piekarski Interactive 3d modelling in outdoor augmented reality worlds
WO2018113759A1 (en) Detection system and detection method based on positioning system and ar/mr
Attila et al. Beyond reality: The possibilities of augmented reality in cultural and heritage tourism
KR20210004862A (en) Method, system and non-transitory computer-readable recording medium for supporting user experience sharing
Siang et al. Interactive holographic application using augmented reality EduCard and 3D holographic pyramid for interactive and immersive learning
CN103959241A (en) Mechanism for facilitating context-aware model-based image composition and rendering at computing devices
CN110160529A (en) A kind of guide system of AR augmented reality
Sudarshan Augmented reality in mobile devices
KR101104827B1 (en) 3 dimensional spherical image display apparatus for displaying earth environment image
CN112423142B (en) Image processing method, device, electronic equipment and computer readable medium
Kot et al. Application of augmented reality in mobile robot teleoperation
KR101902131B1 (en) System for producing simulation panoramic indoor images
US11494997B1 (en) Augmented reality system with display of object with real world dimensions
KR20120048888A (en) 3d advertising method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination