CN111047708A - Complex equipment high-risk project training system based on mixed reality - Google Patents
- Publication number: CN111047708A
- Application number: CN201911174115.1A
- Authority
- CN
- China
- Prior art keywords
- virtual
- mixed reality
- server
- dimensional model
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The invention discloses a mixed-reality-based training system for high-risk projects on complex equipment. The system comprises the complex equipment, a server, a mixed reality device, a controller and a wireless router; the server and the mixed reality device are connected over a network through the wireless router. The complex equipment consists of a real-installation training part and a console. The server uses a three-dimensional modeling tool to construct a virtual three-dimensional model identical to the real-installation training part, computes the model's spatial positioning and motion information, and communicates with the mixed reality device over the wireless network. The invention holographically virtualizes the part of traditional real-installation training that is expensive while keeping the low-cost part physical. It offers good operability and strong immersion, is simple and portable, is not easily disturbed by the external environment, and allows high-risk training on large equipment to be conducted indoors, reducing training cost and improving the user experience and the training effect.
Description
Technical Field
The invention belongs to the technical field of human-computer interaction, and particularly relates to a complex equipment high-risk project training system based on mixed reality.
Background
Mixed reality (MR) technology provides a new means of immersive display of, and interaction with, virtual information. It is a further development of augmented reality (AR) that places additional emphasis on the realism and real-time quality of the fusion between the real and virtual worlds, and between the physical entities and the virtual information in the user's visual environment. The technology presents virtual scene information within a real scene and builds an interactive feedback loop among the real world, the virtual world and the user, thereby enhancing the realism of the user experience.
For operation training on large equipment such as cranes and loader trucks, training directly on the real installation carries risk: unpredictable accidents during training may damage the equipment or even injure operators, and because such training needs a large physical space it must be conducted outdoors, where it is easily disturbed by environmental factors. Virtual simulation training is therefore widely applied in these fields. However, traditional purely virtual simulation and semi-physical simulation both have limitations. Although the virtual environment of a purely virtual simulation (such as VR) is relatively realistic, it feels strongly disconnected from reality, the operating experience is poor, and the expected training effect is difficult to achieve. In semi-physical simulation, the user's viewing angle is severely restricted to a fixed partial view. As for interaction, VR simulation usually maps interaction functions onto a handheld controller paired with the display device, which differs noticeably from operating the real installation and thus degrades the training effect; semi-physical simulation is more realistic to operate, but its interaction mode is monotonous, the training is dull, and no intelligent prompting or guidance can be given during training. A training system is therefore needed that is simple to operate, offers diverse interaction modes, and delivers a highly immersive training experience, so as to improve the efficiency and effect of high-risk project training on large, complex equipment.
Disclosure of Invention
To solve the above problems, the invention aims to provide a mixed-reality-based training system for high-risk projects on complex equipment. A virtual three-dimensional simulation model of the complex equipment is built and projected in mixed reality glasses, while the console is a physical object. By recognizing markers on the console and its mechanical support platform, the virtual model is precisely aligned with the physical console to achieve virtual-real fusion, and the full-scale virtual model can be viewed from different angles and directions, improving the trainee's sense of operation and immersion. The interaction modes are diverse and the operation is simpler, which improves operating efficiency and the training effect.
In order to achieve the above object, the present invention adopts the following technical solutions.
A mixed-reality-based complex equipment high-risk project training system comprises the complex equipment, a server, a mixed reality device, a controller and a wireless router; the server is connected to the mixed reality device over a network through the wireless router; the complex equipment consists of a real-installation training part and a console;
the server uses a three-dimensional modeling tool to construct a virtual three-dimensional model identical to the real-installation training part, computes the model's spatial positioning and motion information, and communicates with the mixed reality device over the wireless network;
the mixed reality device receives the virtual three-dimensional model together with its spatial positioning and motion information and displays the model as a projection with which the wearing user interacts;
the wearing user interacts with the virtual three-dimensional model by operating the console, by gestures, or by voice;
the console is fixed on the training field and carries a marker picture; the positional relation between the virtual three-dimensional model and the marker picture is configured in the server, so that when the mixed reality device reads the marker picture on the console, the displayed position of the virtual three-dimensional model is matched to the actual position of the console, achieving virtual-real fusion;
the console transmits the wearing user's operation signals to the server through its driver; after the server solves the operation, new spatial positioning and motion information is generated and fed back to the mixed reality device for projected display.
Further, the mixed reality device comprises a memory, a processor, an image collector and a sound collector;
the image collector collects the wearing user's gesture instructions and synchronizes the collected gesture data to the server in real time;
the sound collector collects the wearing user's voice instructions and synchronizes the collected voice data to the server in real time;
the processor parses the interaction information transmitted by the server into instruction information corresponding to motions of the virtual three-dimensional model and drives the model to perform the corresponding motion according to that instruction information;
the memory records and stores the model's motion state information during training.
Further, constructing the virtual three-dimensional model identical to the real-installation training part with a three-dimensional modeling tool, and solving its spatial positioning and motion information, specifically comprises:
first, constructing each component of the real-installation training part with the Rhino and 3ds Max modeling tools;
second, setting parent-child nesting relations among the components of the real-installation training part by analyzing the motion dependencies among them, thereby constructing the motion scene tree of the whole virtual real-installation model;
finally, rendering surface textures of the model with a Unity Shader in a three-dimensional simulation rendering engine built on Unity 3D, obtaining the virtual three-dimensional model.
Further, the components of the real-installation training part are constructed with Rhino and 3ds Max using mesh modeling, patch modeling and/or NURBS modeling; parts that look rigid and carry little detail are mesh-modeled, while patch modeling and/or NURBS modeling are used for smooth, organic surface areas.
Furthermore, when constructing the virtual three-dimensional model, an indoor SLAM calibration method is used to fix the model's spatial coordinate position absolutely and to determine a unified world coordinate system referenced to the virtual three-dimensional model.
Further, the server communicates with a plurality of mixed reality devices, each device communicating one-to-one with the server; all the mixed reality devices share the same cognitive level.
Furthermore, message consistency detection within the host domain is introduced into the one-to-one server-device communication mode; that is, a real-time information synchronization mechanism is added among the mixed reality devices.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention virtualizes the real-installation training part, i.e. the main body of the complex equipment, and projects it as holographic images through mixed reality smart glasses; the console part of the complex equipment is manufactured separately after the original and, combined with tracking registration based on image recognition, virtual-real fusion between the virtual model and the physical console is achieved.
(2) The invention lets multiple users, each wearing a mixed reality device, observe the virtual three-dimensional model from their own direction and angle; it supports multi-user cooperation for virtual collaborative training through interaction modes such as the console, gestures and voice; and it provides playback and analysis of the training process.
(3) The invention holographically virtualizes the part that is most expensive in traditional real-installation training (the large equipment) and fuses it with the low-cost physical console. It offers good operability and strong immersion, is simple and portable, is not easily disturbed by the external environment, allows high-risk project training on large equipment to be conducted indoors, reduces training cost, and improves the user experience and the training effect.
Drawings
The invention is described in further detail below with reference to the figures and specific embodiments.
FIG. 1 is a flow chart of an embodiment of the present invention;
fig. 2 is a network connection diagram of a server and a plurality of mixed reality devices according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a virtual-real fusion of a user training process according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an exemplary scenario of operation of a user training process in an embodiment of the present invention;
FIG. 5 is a control command message transmission diagram according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a process of storing motion information of a virtual three-dimensional model according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a process of reproducing motion information of a virtual three-dimensional model according to an embodiment of the present invention.
Detailed Description
The embodiments and effects of the present invention will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a mixed-reality-based complex equipment high-risk project training system includes the complex equipment, a server, a mixed reality device, a controller and a wireless router; the server is connected to the mixed reality device over a network through the wireless router; the complex equipment consists of a real-installation training part and a console.
The server uses a three-dimensional modeling tool to construct a virtual three-dimensional model identical to the real-installation training part, computes the model's spatial positioning and motion information, and communicates with the mixed reality device over the wireless network. The mixed reality device receives the virtual three-dimensional model together with its spatial positioning and motion information and displays it as a projection with which the wearing user interacts. The wearing user interacts with the virtual three-dimensional model by operating the console, by gestures, or by voice. The console is fixed on the training field and carries a marker picture; the positional relation between the virtual three-dimensional model and the marker picture is configured in the server, so that when the mixed reality device reads the marker picture on the console, the displayed position of the model is matched to the actual position of the console, achieving virtual-real fusion. The specific process is as follows: the Vuforia SDK in Unity 3D performs image-recognition-based tracking and registration of the virtual model; the recognition picture is uploaded to the Vuforia server, a Unity Package and a License Key containing the picture recognition information are downloaded, the relative positions of the virtual model and the picture are configured in Unity, and the picture is attached to the corresponding positions on the mechanical support and the console. When the camera of the mixed reality device recognizes the picture, the virtual three-dimensional model is generated and forms a virtual-real fusion effect with the console. A spatial schematic of the system's virtual-real fusion is shown in fig. 3, and the actual training scene in fig. 4.
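The marker-based registration described above amounts to composing the detected marker pose with a fixed, pre-authored offset of the model relative to the picture. The following is a minimal sketch of that composition in Python; the offset value and function names are illustrative assumptions, not part of the patent.

```python
import numpy as np

def make_pose(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# Fixed offset of the virtual model relative to the marker picture, authored
# once when configuring positions in Unity (hypothetical value: 30 cm above).
MODEL_IN_MARKER = make_pose(np.eye(3), [0.0, 0.30, 0.0])

def model_world_pose(marker_in_world):
    """When the headset camera recognizes the marker, its pose in world
    coordinates is composed with the stored offset to place the model."""
    return marker_in_world @ MODEL_IN_MARKER

# Example: marker detected flat on the console, 1 m in front of the user.
marker_pose = make_pose(np.eye(3), [0.0, 0.0, 1.0])
pose = model_world_pose(marker_pose)
print(pose[:3, 3])  # model anchored 30 cm above the marker position
```

Because the offset is fixed at configuration time, the model stays locked to the physical console as the wearer moves, which is what produces the virtual-real fusion effect.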
The console transmits the wearing user's operation signals to the server through its driver; after the server solves the operation, new spatial positioning and motion information is generated and fed back to the mixed reality device for projected display.
The mixed reality device comprises a memory, a processor, an image collector and a sound collector. The image collector collects the wearing user's gesture instructions and synchronizes the collected gesture data to the server in real time; the sound collector collects the wearing user's voice instructions and synchronizes the collected voice data to the server in real time.
The processor parses the interaction information transmitted by the server into instruction information corresponding to motions of the virtual three-dimensional model and drives the model to perform the corresponding motion; the memory records and stores the model's motion state information during training.
Illustratively, the virtual three-dimensional model building process of the invention specifically comprises the following steps:
first, constructing each component of the real-installation training part with the Rhino and 3ds Max modeling tools;
second, setting parent-child nesting relations among the components of the real-installation training part by analyzing the motion dependencies among them, thereby constructing the motion scene tree of the whole virtual real-installation model;
finally, rendering surface textures of the model with a Unity Shader in a three-dimensional simulation rendering engine built on Unity 3D, obtaining the virtual three-dimensional model.
During modeling, the invention uses the Rhino and 3ds Max three-dimensional modeling tools to construct the virtual real-installation model. The chosen modeling methods are mesh modeling, patch modeling and NURBS modeling: mesh modeling suits components that look rigid and carry little detail, while patch modeling and NURBS modeling suit complex models with smooth, organic surfaces; combining the three methods yields the best modeling result. After the moving parts are modeled, the model is imported into the Unity 3D rendering engine and its surface textures are rendered with shaders in Unity 3D. To ensure the fidelity of the rendered virtual scene, the Unity 3D engine is used to give the model components lifelike physical simulation properties such as gravity, collision, flexibility, inertia and weather. Parent-child nesting relations among the model components are set by analyzing the motion dependencies among them, finally completing the motion scene tree of the whole virtual real-installation model.
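The parent-child nesting described above means that a child component's world pose is its parent's pose composed with its own local offset, so moving a parent carries its subtree along. A minimal sketch of such a scene tree follows; the crane-like component names are hypothetical examples, not taken from the patent.

```python
import numpy as np

class SceneNode:
    """One component of the virtual real-installation model; children move
    with their parent, mirroring the parent-child nesting relations."""
    def __init__(self, name, local=None):
        self.name = name
        self.local = np.eye(4) if local is None else local
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def world_transforms(self, parent_world=None):
        """Yield (name, world transform) for this node and its whole subtree."""
        world = self.local if parent_world is None else parent_world @ self.local
        yield self.name, world
        for child in self.children:
            yield from child.world_transforms(world)

def translate(x, y, z):
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

# Hypothetical hierarchy: rotating or moving the base carries the boom along,
# and the boom carries the hook — the motion dependency becomes tree structure.
base = SceneNode("base")
boom = base.add(SceneNode("boom", translate(0, 2, 0)))
hook = boom.add(SceneNode("hook", translate(0, 0, 3)))

poses = dict(base.world_transforms())
print(poses["hook"][:3, 3])  # [0. 2. 3.] — offsets composed down the tree
```

This is the same parenting model that Unity's Transform hierarchy provides natively; in the engine the tree is built by assigning parent Transforms rather than by hand-composing matrices.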
Specifically, the server of the invention communicates with a plurality of mixed reality devices, each pair of mixed reality smart glasses communicating one-to-one with the server; all the mixed reality smart glasses share the same cognitive level.
Illustratively, the server, several pairs of mixed reality smart glasses and a PC are connected through the wireless router, as shown in fig. 2. The network connection follows the TCP/IP protocol: when the system runs, the server automatically opens a network listener, monitors the network in real time, and waits for mixed reality smart glasses clients to connect. A user wearing the mixed reality smart glasses enters the virtual training scene; the user's operations on the model are transmitted to the server over the wireless network, and after the server's computation the result is transmitted back to the mixed reality device client.
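The listen-compute-reply loop above can be sketched with standard TCP sockets. This is a minimal single-request sketch, not the patent's implementation: the JSON message shape and the doubling "computation" are invented placeholders for the server-side solving of an operation.

```python
import json
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Open a listener, accept one headset client, read an operation
    message, and reply with the computed result (placeholder logic)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0: let the OS pick a free port
    srv.listen()
    actual_port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()
        with conn:
            op = json.loads(conn.recv(4096).decode())
            # Stand-in for solving the operation into new motion information.
            reply = {"part": op["part"], "angle": op["delta"] * 2}
            conn.sendall(json.dumps(reply).encode())
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return actual_port

# Client side: the headset sends an operation and receives the result.
port = serve_once()
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(json.dumps({"part": "boom", "delta": 5}).encode())
result = json.loads(cli.recv(4096).decode())
cli.close()
print(result)  # {'part': 'boom', 'angle': 10}
```

A production system would keep the connection open and stream operations continuously rather than serving a single request, but the accept/recv/compute/sendall cycle is the same.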
The system supports multi-view observation of the three-dimensional virtual model's pose. When the server interacts with several mixed reality devices simultaneously, it fixes the spatial coordinate position of the virtual three-dimensional model absolutely using an indoor SLAM calibration method and determines a unified world coordinate system referenced to the model. Multiple users wearing mixed reality devices can therefore roam the three-dimensional scene at the same time and observe the model from all sides. Each mixed reality device streams the model image it observes to the server in real time, and other personnel can watch the model from multiple viewpoints via connected projection equipment.
For multi-user cooperative observation and interaction, the method introduces message consistency detection within the host domain into the one-to-one server-smart glasses communication mode; that is, a real-time information synchronization mechanism is added among the mixed reality smart glasses. The interaction process is as follows: the user interacts with the virtual three-dimensional model (the virtual real-installation model) through any one or more of the console, gestures and voice; the control flow is shown in fig. 5.
The operation driver paired with the console is connected to a USB port of the server; the console's operation driver parses the control stream input from the operating handle into a stream of operation instructions for the virtual three-dimensional model and passes it to the server, which holds code defining the model's specific response to each instruction. For gesture and voice operation, the mixed reality display device focuses the viewpoint on the virtual model component to be operated; when the user issues a specific gesture or voice command, the component information within the viewpoint together with the gesture or voice information is transmitted to the server. The server holds a set of programs that parse the instruction information corresponding to the current gesture or voice operation and apply it to the virtual three-dimensional model to control its motion.
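The server-side dispatch described above is essentially a lookup from an interaction event plus the component under the user's viewpoint to a model instruction. The table below is a hypothetical sketch; the patent does not disclose the actual gesture or voice vocabulary, so every token here is an invented example.

```python
# Hypothetical mapping from recognized gesture / voice tokens to model
# instructions; the real server-side tables are not disclosed in the patent.
COMMAND_TABLE = {
    ("gesture", "pinch_up"):   ("raise", 1.0),
    ("gesture", "pinch_down"): ("lower", 1.0),
    ("voice", "stop"):         ("halt", 0.0),
}

def resolve(kind, token, target_part):
    """Turn one interaction event plus the component currently under the
    user's viewpoint into an instruction applied to the virtual model."""
    action, amount = COMMAND_TABLE[(kind, token)]
    return {"part": target_part, "action": action, "amount": amount}

cmd = resolve("voice", "stop", "winch")
print(cmd)  # {'part': 'winch', 'action': 'halt', 'amount': 0.0}
```

Keeping the mapping in one table means console, gesture and voice inputs all converge on the same instruction stream, which is what lets the server treat the three interaction modes uniformly.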
For multi-user, multi-device collaborative operation training, the invention introduces message consistency detection within the host domain into the one-to-one server-client communication mode, adding a real-time message synchronization mechanism among the devices on top of the conventional socket connection: whenever any shared message data tagged SyncVar in the server or a client is modified, the data is shared to all devices in the host domain over the wireless network. Before responding to the received data, each device discloses the received information within the host domain and checks whether the data received by all devices is consistent; if so, rendering proceeds according to the message data; otherwise, all devices withhold their response until their copies of the shared message data become consistent.
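The broadcast-then-check protocol above can be sketched as follows. This is a simplified model under stated assumptions: a `SyncVar`-style write broadcasts to every device's inbox, a dropped delivery is simulated with a `drop` parameter, and rendering is gated on all inboxes matching. Class and device names are illustrative.

```python
def consistent(received_by_device):
    """All devices disclose the message they received; rendering proceeds
    only when every copy is identical (the in-domain consistency check)."""
    values = list(received_by_device.values())
    return all(v == values[0] for v in values)

class SyncVar:
    """Sketch of a shared variable: writing it broadcasts the new value to
    every device in the host domain."""
    def __init__(self, devices):
        self.inbox = {d: None for d in devices}

    def write(self, value, drop=()):
        for d in self.inbox:
            # `drop` simulates a device that missed the broadcast.
            self.inbox[d] = None if d in drop else value

shared = SyncVar(["glasses_a", "glasses_b", "glasses_c"])
shared.write({"boom_angle": 12.5})
assert consistent(shared.inbox)          # all copies match: safe to render
shared.write({"boom_angle": 13.0}, drop={"glasses_b"})
print(consistent(shared.inbox))          # False — defer response until resync
```

The practical effect is that no headset renders a model state the others have not yet received, which keeps the shared virtual scene identical across all trainees.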
Information storage and playback during training proceed as follows.
and (3) information storage in the training process: the motion information of the virtual three-dimensional model in the training process is stored in a database in the form of an instruction set, each operation of the motion information corresponds to a specific instruction, and the connection of the specific instruction sets on the time axis sequence forms the complete training process. As shown in fig. 6, when training starts, the motion instruction information of the model is stored in the memory, and if the instruction is interrupted in the training process, the instruction stored in the memory is cleared; and when the whole training process is finished, all the instructions in the memory are stored in the database.
Training process playback: as shown in fig. 7, to review a training session, the session is selected in the system and the system loads the training scenario information, which mainly means locking the corresponding instruction set in the database and loading it into the server's memory; playback then reads the instructions from memory one by one and reproduces the model's motion.
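The buffer-then-commit storage and the instruction-by-instruction playback described in the two paragraphs above can be sketched together. This is a minimal model under assumed names (`record`, `interrupt`, `finish`, `replay`); the patent does not specify the database schema or instruction format, so tuples stand in for instructions.

```python
class TrainingRecorder:
    """Sketch of storing a session as an instruction set and replaying it."""
    def __init__(self):
        self.memory = []        # instructions buffered during training
        self.database = {}      # completed sessions, keyed by session id

    def record(self, instruction):
        self.memory.append(instruction)

    def interrupt(self):
        """An interrupted session discards its buffered instructions."""
        self.memory.clear()

    def finish(self, session_id):
        """A completed session commits the whole buffer to the database."""
        self.database[session_id] = list(self.memory)
        self.memory.clear()

    def replay(self, session_id):
        """Lock the stored instruction set and yield its instructions one
        by one to re-drive the model's motion."""
        for instruction in self.database[session_id]:
            yield instruction

rec = TrainingRecorder()
rec.record(("raise_boom", 10))
rec.record(("rotate_base", 45))
rec.finish("session-1")
moves = list(rec.replay("session-1"))
print(moves)  # [('raise_boom', 10), ('rotate_base', 45)]
```

Storing instructions rather than rendered frames keeps sessions small and lets playback re-drive the same scene tree that live training uses.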
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall be covered by its protection scope. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (7)
1. A mixed-reality-based complex equipment high-risk project training system, characterized by comprising the complex equipment, a server, a mixed reality device, a controller and a wireless router; the server is connected to the mixed reality device over a network through the wireless router; the complex equipment consists of a real-installation training part and a console;
the server uses a three-dimensional modeling tool to construct a virtual three-dimensional model identical to the real-installation training part, computes the model's spatial positioning and motion information, and communicates with the mixed reality device over the wireless network;
the mixed reality device receives the virtual three-dimensional model together with its spatial positioning and motion information and displays the model as a projection with which the wearing user interacts;
the wearing user interacts with the virtual three-dimensional model by operating the console, by gestures, or by voice;
the console is fixed on the training field and carries a marker picture; the positional relation between the virtual three-dimensional model and the marker picture is configured in the server, so that when the mixed reality device reads the marker picture on the console, the displayed position of the virtual three-dimensional model is matched to the actual position of the console, achieving virtual-real fusion;
the console transmits the wearing user's operation signals to the server through its driver; after the server solves the operation, new spatial positioning and motion information is generated and fed back to the mixed reality device for projected display.
2. The mixed-reality-based complex equipment high-risk project training system of claim 1, wherein the mixed reality device comprises a memory, a processor, an image collector and a sound collector;
the image collector collects the wearing user's gesture instructions and synchronizes the collected gesture data to the server in real time;
the sound collector collects the wearing user's voice instructions and synchronizes the collected voice data to the server in real time;
the processor parses the interaction information transmitted by the server into instruction information corresponding to motions of the virtual three-dimensional model and drives the model to perform the corresponding motion;
the memory records and stores the model's motion state information during training.
3. The mixed reality-based complex equipment high-risk project training system according to claim 1, wherein constructing a virtual three-dimensional model identical to the real-installed training part with a three-dimensional modeling tool and calculating its spatial positioning and motion information specifically comprises:
firstly, constructing each component of the real-installed training part as a 3DS Max/Rhino model;
secondly, setting parent-child nesting relations among the components by analyzing the motion dependency relations among them, thereby constructing a motion scene tree for the whole three-dimensional virtual filling model;
finally, using a three-dimensional simulation rendering engine built around Unity 3D, applying texture rendering to the model surfaces with Unity Shaders to obtain the virtual three-dimensional model.
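The motion scene tree in the second step encodes motion dependency as transform inheritance: moving a parent component carries all nested children with it. A minimal sketch of such a tree; the component hierarchy (base, arm, nozzle) and offsets are illustrative assumptions:

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

class SceneNode:
    """One component in the motion scene tree; children inherit the
    parent's transform, so parent motion carries its sub-parts along."""
    def __init__(self, name, local=None):
        self.name = name
        self.local = np.eye(4) if local is None else local
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def world_poses(self, parent=None):
        """Yield (name, world transform) for this node and its subtree."""
        world = (np.eye(4) if parent is None else parent) @ self.local
        yield self.name, world
        for child in self.children:
            yield from child.world_poses(world)

# Illustrative parent-child nesting for a filling mechanism (assumed names).
base = SceneNode("base", translation(1.0, 0.0, 0.0))
arm = base.add(SceneNode("arm", translation(0.0, 2.0, 0.0)))
nozzle = arm.add(SceneNode("nozzle", translation(0.0, 0.0, 0.5)))

poses = dict(base.world_poses())
print(poses["nozzle"][:3, 3])  # offsets accumulate down the tree
```

This mirrors how game engines such as Unity resolve nested transforms: each node stores only its local offset relative to its parent, and world poses are computed by walking the tree.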
4. The mixed reality-based complex equipment high-risk project training system of claim 3, wherein the components of the real-installed training part are built as 3DS Max/Rhino models using mesh modeling, patch modeling and/or NURBS modeling; parts that appear rigid and carry little surface detail are mesh-modeled, while smooth, organic surface areas use patch modeling and/or NURBS modeling.
5. The mixed reality-based complex equipment high-risk project training system of claim 1, wherein, when the virtual three-dimensional model is constructed, an indoor SLAM calibration method is used to make the model's spatial coordinate position absolute, and a unified world coordinate system is established with the virtual three-dimensional model as the reference.
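With a unified world frame anchored to the virtual model, each device's SLAM pose is just a transform between its local frame and that shared frame; points observed locally by different devices map back to the same world coordinate. A minimal sketch with assumed poses (the patent does not give numeric calibration data):

```python
import numpy as np

def pose(rotation, translation):
    """4x4 homogeneous transform from rotation and translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# SLAM calibration gives each device its pose in the unified world frame
# anchored to the virtual model (illustrative values).
device_a_in_world = pose(np.eye(3), [2.0, 0.0, 1.0])
device_b_in_world = pose(np.eye(3), [-1.0, 0.0, 3.0])

# A point on the model, in homogeneous world coordinates.
point_world = np.array([0.5, 1.0, 2.0, 1.0])

# What each device sees in its own local frame.
point_in_a = np.linalg.inv(device_a_in_world) @ point_world
point_in_b = np.linalg.inv(device_b_in_world) @ point_world

# Mapping each local observation back through that device's pose recovers
# the same world coordinate -- all wearers see the model in one place.
back_a = device_a_in_world @ point_in_a
back_b = device_b_in_world @ point_in_b
print(np.allclose(back_a, back_b))
```

This round trip is what gives multiple wearers the "same cognitive hierarchy" of claim 6: every device renders the model against one absolute frame rather than its own drifting origin.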
6. The mixed reality-based complex equipment high-risk project training system of claim 5, wherein the server communicates with a plurality of mixed reality devices, each device in one-to-one communication with the server; all of the mixed reality devices share the same cognitive hierarchy.
7. The mixed reality-based complex equipment high-risk project training system of claim 6, wherein message consistency detection within the host domain is introduced into the one-to-one server-device communication mode, that is, a real-time information synchronization mechanism is added among the plurality of mixed reality devices.
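One common way to realize the consistency detection of claim 7 is sequence-numbered state broadcast: the server stamps every update, and devices discard anything older than what they last applied, so devices receiving messages out of order still converge on one state. A toy sketch under that assumption (the patent does not specify the mechanism's internals):

```python
class Server:
    """Authoritative training state; each broadcast update carries a
    monotonically increasing sequence number."""
    def __init__(self):
        self.seq = 0

    def broadcast(self, update):
        self.seq += 1
        return {"seq": self.seq, "update": update}

class MixedRealityDevice:
    """Applies only updates newer than the last one seen, dropping
    stale or duplicate messages."""
    def __init__(self):
        self.last_seq = 0
        self.state = {}

    def receive(self, msg):
        if msg["seq"] <= self.last_seq:
            return  # stale or duplicate: inconsistent with current state, drop
        self.last_seq = msg["seq"]
        self.state.update(msg["update"])

server = Server()
m1 = server.broadcast({"valve": "closed"})
m2 = server.broadcast({"valve": "open"})

a, b = MixedRealityDevice(), MixedRealityDevice()
a.receive(m1); a.receive(m2)   # delivered in order
b.receive(m2); b.receive(m1)   # delivered out of order: m1 dropped as stale
print(a.state, b.state)        # both converge on the latest state
```

A production system would also let a device request updates it missed entirely; this sketch only shows the stale-message detection that keeps all devices' views consistent.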
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911174115.1A CN111047708B (en) | 2019-11-26 | 2019-11-26 | Complex equipment high-risk project training system based on mixed reality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111047708A true CN111047708A (en) | 2020-04-21 |
CN111047708B CN111047708B (en) | 2022-12-13 |
Family
ID=70233425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911174115.1A Active CN111047708B (en) | 2019-11-26 | 2019-11-26 | Complex equipment high-risk project training system based on mixed reality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111047708B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102184342A (en) * | 2011-06-15 | 2011-09-14 | 青岛科技大学 | Virtual-real fused hand function rehabilitation training system and method |
CN109782907A (en) * | 2018-12-28 | 2019-05-21 | 西安交通大学 | A kind of virtual filling coorinated training system based on polyhybird real world devices |
Non-Patent Citations (1)
Title |
---|
CAI, QIHANG et al.: "Design of a distributed virtual collaborative operation training system for missile equipment", 《传感器与微系统》 (Transducer and Microsystem Technologies) * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111680736A (en) * | 2020-06-03 | 2020-09-18 | 长春博立电子科技有限公司 | Artificial intelligence behavior analysis model training system and method based on virtual reality |
WO2022036473A1 (en) * | 2020-08-17 | 2022-02-24 | 南京翱翔智能制造科技有限公司 | Dynamic 3d reconstruction-based hybrid reality collaborative scene sharing method |
CN111899590A (en) * | 2020-08-25 | 2020-11-06 | 成都合纵连横数字科技有限公司 | Mixed reality observation method for simulation operation training process |
CN111899590B (en) * | 2020-08-25 | 2022-03-11 | 成都合纵连横数字科技有限公司 | Mixed reality observation method for simulation operation training process |
CN113566829A (en) * | 2021-07-19 | 2021-10-29 | 上海极赫信息技术有限公司 | High-precision positioning technology-based mixed reality navigation method and system and MR (magnetic resonance) equipment |
Also Published As
Publication number | Publication date |
---|---|
CN111047708B (en) | 2022-12-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111047708B (en) | Complex equipment high-risk project training system based on mixed reality | |
KR101918262B1 (en) | Method and system for providing mixed reality service | |
CN102509330B (en) | Application of virtual three-dimensional system of transformer substation on the basis of electric power geographic information system (GIS) | |
CN109859538A (en) | A kind of key equipment training system and method based on mixed reality | |
CN106249607A (en) | Virtual Intelligent household analogue system and method | |
CN109531566B (en) | Robot live-line work control method based on virtual reality system | |
CN102880464B (en) | A kind of three-dimensional game engine system | |
CN109782907A (en) | A kind of virtual filling coorinated training system based on polyhybird real world devices | |
CN106600688A (en) | Virtual reality system based on three-dimensional modeling technology | |
CN104750931A (en) | Intelligent device control arrangement system applied to interior design | |
CN207397530U (en) | For the immersion multi-person synergy training device of the virtual implementing helmet formula of Substation Training | |
CN108765576B (en) | OsgEarth-based VIVE virtual earth roaming browsing method | |
CN110321000B (en) | Virtual simulation system for complex tasks of intelligent system | |
CN114341943A (en) | Simple environment solver using plane extraction | |
CN103019702B (en) | A kind of visualization of 3 d display and control editing system and method | |
CN110977981A (en) | Robot virtual reality synchronization system and synchronization method | |
CN103093502A (en) | Three-dimensional model information obtaining method based on rotary three views | |
CN111292409A (en) | Building design method based on VR technology | |
CN111467789A | Mixed reality interaction system based on HoloLens | |
CN114169546A (en) | MR remote cooperative assembly system and method based on deep learning | |
Fu et al. | Real-time multimodal human–avatar interaction | |
CN109829205A (en) | A kind of scene creation method, apparatus and computer readable storage medium | |
CN109857258B (en) | Virtual remote control method, device and system | |
CN206877277U (en) | A kind of virtual man-machine teaching system based on mixed reality technology | |
Jenzeri et al. | Development of a mixed reality system based on IoT and augmented reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||