CN115527008A - Safety simulation experience training system based on mixed reality technology - Google Patents

Safety simulation experience training system based on mixed reality technology

Info

Publication number
CN115527008A
CN115527008A (application CN202110706962.9A)
Authority
CN
China
Prior art keywords
model
algorithm
mapping
mixed reality
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110706962.9A
Other languages
Chinese (zh)
Inventor
矫恒超
王春
李磊
袁纪武
李智临
张奕奕
范亚苹
刘刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Petroleum and Chemical Corp
Sinopec Qingdao Safety Engineering Institute
Original Assignee
China Petroleum and Chemical Corp
Sinopec Qingdao Safety Engineering Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Petroleum and Chemical Corp, Sinopec Qingdao Safety Engineering Institute filed Critical China Petroleum and Chemical Corp
Priority to CN202110706962.9A priority Critical patent/CN115527008A/en
Publication of CN115527008A publication Critical patent/CN115527008A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G06Q50/2057 Career enhancement or continuing education service
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 Re-meshing
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Tourism & Hospitality (AREA)
  • Software Systems (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The safety simulation experience training system based on mixed reality technology comprises a software system and a hardware system. The software system comprises a three-dimensional virtual reality module built around an interactive tank-farm fire emergency disposal scene, which uses three-dimensional simulation and mixed reality technology to set up a training database for virtual operation by the user. The hardware system comprises tank-farm fire emergency disposal equipment based on mixed reality technology, used for emergency disposal of accidents within the three-dimensional operation scene. The simulation experience training system lets trainees carry out field operations according to the emergency disposal workflow, so that accident-handling training is conducted with a near-real sense of presence, improving both training effect and efficiency.

Description

Safety simulation experience training system based on mixed reality technology
Technical Field
The invention relates to a safety simulation experience training system based on a mixed reality technology, and belongs to the field of emergency disposal training and examination of petrochemical enterprise accident scenes.
Background
The rapid development of the petrochemical industry faces severe tests, and the problems exposed in petrochemical engineering construction, production safety, direct field operation, accident emergency handling, and other links are increasingly serious. Because the petrochemical industry is complex and workers' technical levels are uneven, accidents in petrochemical production, storage, and operation may not be handled promptly and effectively, letting them escalate rapidly and bringing huge losses to enterprises and society. Effective training of petrochemical personnel in accident emergency handling, equipment maintenance, emergency drills, rescue, and the like is an effective measure for improving enterprises' safety capability and reducing the likelihood and consequences of accidents. Traditional accident emergency handling training is usually carried out in a multimedia classroom: the training modes and means are limited, the content is dated, and teaching centers on accident cases and basic knowledge. Such training is hard to connect with the actual operation site, so theory and practice are split, and workers' hands-on experience of on-site accident handling cannot be effectively improved. Emergency training at a real accident site comes closest to the daily operation scene and strengthens the training effect, but it is difficult to stage every type of accident in an existing site, and the training cost is high, the difficulty great, and the number of sessions limited, so that approach is hard to apply widely.
Disclosure of Invention
The safety simulation experience training system based on mixed reality technology provided by this patent comprises a three-dimensional virtual reality module built around an interactive tank-farm fire emergency disposal scene, which uses three-dimensional simulation and mixed reality technology to set up a training database for virtual operation by the user, and tank-farm fire emergency disposal equipment based on mixed reality technology for accident emergency operation within the three-dimensional operation scene. Through the simulation experience training system, trainees carry out field operations according to the emergency disposal workflow and train accident handling with a near-real sense of presence, improving training effect and efficiency.
During development with Unity3D, blasting effects (such as accident explosions) sharply increase the polygon count, reducing the loading rate and degrading overall system performance. Mixed reality head-mounted displays carry their own onboard computing units, so their computing resources are limited and their model-optimization requirements are stricter than those of an ordinary host machine; a conventionally built high-precision equipment model and its dynamic effects cannot run on current headsets. A polygon face-reduction algorithm is therefore proposed that preserves model quality while reducing model complexity by cutting the number of polygon faces at run time; the simplified models effectively avoid the software loading-rate problem. The algorithm preserves the correct display of the model's geometric features and yields a vivid, real-time, highly robust modeling technique.
Edge-collapse operations, applied repeatedly, reduce the complexity of a three-dimensional model. The difficulty in this optimization method is choosing which edges to collapse so that the visual change to the model is minimal, while also keeping the time and computation spent on each optimal collapse selection to a minimum. A method is therefore needed that reduces the number of polygon faces during the Unity3D run stage while guaranteeing the quality of the resulting low-poly model. To solve these problems, an algorithm for optimal selection of the collapse edge is proposed, with the following formula:
$$\mathrm{cost}(u,v)=\lVert u-v\rVert\times\max_{f\in T_u}\ \min_{n\in T_{uv}}\frac{1-\mathbf{n}_f\cdot\mathbf{n}_n}{2}$$

in the formula: $T_u$ is the set of triangles that contain vertex u; $T_{uv}$ is the set of triangles that contain both vertex u and vertex v; $\mathbf{n}_f$ and $\mathbf{n}_n$ are the unit normals of triangles f and n.
To verify the optimization effect, the number of model faces before and after simplification was compared; the face reduction can preferably bring the simplified model down to one tenth of the original face count while leaving the three-dimensional display largely unaffected.
Common fire and explosion effects in petrochemical accident scenes generate large numbers of polygon patches. The model face-reduction optimization algorithm can optimize the patch count created by excessive explosion effects, and achieves better-than-expected results on character models. When the frame rate drops below the stutter threshold, or a model lies far from the current viewpoint, the model optimization algorithm reduces the high-precision model and substitutes the reduced version for it, improving the smoothness of system operation while ensuring the visual experience is not excessively affected.
To display a local portion of the three-dimensional model under the operating view angle, the object is assumed to sit at the center of a spatial sphere and the camera viewpoint on its surface. The observer views model details from any angle through two interaction means, rotation and zooming, which are defined as the user's operation behaviors. Rotation updates the spatial position of the camera viewpoint on the sphere; zooming changes the size of the sphere and thus, ultimately, the distance between the camera viewpoint and the model. As shown in fig. 2, assume the center of the spatial sphere is O and the center of the object model is P, where P coincides with O in spatial relationship; V is the camera viewpoint and the viewing-angle center of the display end, the display plane is tangent to the sphere, and R is the distance between the camera viewpoint V and the sphere center.
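A minimal sketch of the viewpoint update described above, assuming spherical coordinates around the model center (function and parameter names are illustrative, not from the patent):

```python
import math

def camera_position(center, radius, azimuth, elevation):
    """Viewpoint V on a sphere of the given radius around the model center O.

    Rotation interaction updates (azimuth, elevation); zoom interaction
    updates radius. Angles are in radians.
    """
    cx, cy, cz = center
    x = cx + radius * math.cos(elevation) * math.cos(azimuth)
    y = cy + radius * math.sin(elevation)
    z = cz + radius * math.cos(elevation) * math.sin(azimuth)
    return (x, y, z)
```

Doubling the radius models a zoom-out: the viewpoint moves straight away from the model center along the same viewing ray.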
After the operator's viewing angle is determined, an appropriate number of blocks are selected, merged, and rendered locally. A block generally consists of one or more continuous local surfaces of the whole model; because these surfaces lie close together, the block's overall normal vector has a consistent direction. The block's normal direction is one of the decisive conditions defining its degree of match with the observation angle: a block is observed most clearly when its overall normal points opposite to the viewing direction. The selection algorithm computes, for a known observation angle, the matching degree of every block, sorts the blocks by matching degree, and selects the first s for display. s depends on the hardware environment of the output device: the higher the hardware configuration, the larger the number s of blocks displayed.
The view-angle-based rendering and display algorithm takes the average normal vector of all triangular patches contained in a block as the block's normal direction. Each patch's normal is obtained from the information of its 3 vertices; each vertex carries, besides its spatial coordinates, a normal representing the direction of the point, initialized during modeling as the average normal of its connected patches. The average normal vector of all patches contained in the block model is taken as the block's overall normal, i.e., the direction of the single block, denoted Normal and computed as follows:
$$\mathrm{Normal}=\frac{1}{3S}\sum_{i=1}^{S}\sum_{j=1}^{3}S(i,j)$$

In the above formula: S is the total number of patches contained in the block, and S(i, j) is the normal vector of the j-th vertex of the i-th patch.
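The block-normal averaging and the top-s view-matching selection described above can be sketched in plain Python (the data layout, lists of patches holding three vertex normals each, is an assumption):

```python
import math

def block_normal(patch_vertex_normals):
    """Average the 3 vertex normals of every patch in the block, per
    Normal = (1/(3S)) * sum over patches i and vertices j of S(i, j),
    then unit-normalize the result."""
    sx = sy = sz = 0.0
    for patch in patch_vertex_normals:        # each patch: 3 vertex normals
        for nx, ny, nz in patch:
            sx += nx; sy += ny; sz += nz
    norm = math.sqrt(sx*sx + sy*sy + sz*sz) or 1.0
    return (sx / norm, sy / norm, sz / norm)

def select_blocks(blocks, view_dir, s):
    """Rank blocks by how directly they face the viewer (overall normal
    opposite to the view direction scores highest) and keep the first s."""
    def match(block):
        nx, ny, nz = block_normal(block)
        vx, vy, vz = view_dir
        return -(nx*vx + ny*vy + nz*vz)       # opposite direction -> large score
    return sorted(blocks, key=match, reverse=True)[:s]
```

With a view direction of (0, 0, -1), a block whose patches all face +z outranks one facing -z, matching the "normal opposite to viewing direction" criterion.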
The local rendering display algorithm solves the delay and stutter that can result from the long rendering times of loading large numbers of models, while guaranteeing that the displayed model loses no detail.
Using discrete level-of-detail (LOD) techniques greatly reduces the pressure of loading all model data at once. The same model is simplified at multiple levels in advance and divided into different resolution levels, and the optimal level is then selected for rendering according to the display requirement. The final display strategy takes the local display technique as primary and the discrete LOD technique as auxiliary.
In the LOD-based model simplification used here, the model is optimized once and only one simplified level of model information is stored. Because the original high-precision model carries more data and the optimized level is low, an ordinary computing and display unit can display the optimized whole model in real time without delay; the degree of model optimization is matched to the configuration of the computing and display unit so that it can render and display the whole model without delay.
There are two conditions for displaying the simplified model: the operator shrinks the model object past a preset threshold, or manually clicks to observe the whole model; the threshold triggers automatically, while click-to-observe is triggered manually. When the model is shrunk to a certain degree, the operator mainly observes the overall effect, local detail is ignored, the human eye cannot resolve it, and the system need not display the local high-precision effect. After LOD optimization, local details are no longer displayed and the simplified model is shown as a whole; the human eye cannot tell the two observation effects apart in detail, achieving the goals of reduced computation and improved efficiency. When the object is not zoomed out and the operator observes the whole model, local display is manually abandoned and the simplified object model is displayed in full, so the operator sees the overall information from a closer angle, though the displayed model is less fine because of the larger simplification level.
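The two trigger conditions above can be sketched as a single predicate (names and the threshold value are illustrative, not from the patent):

```python
def use_simplified_model(zoom_scale, threshold, whole_model_clicked):
    """Return True when the reduced-level LOD model should replace the
    high-precision local display: either the operator has shrunk the model
    past the preset threshold (automatic trigger) or has clicked to view
    the whole model (manual trigger)."""
    return zoom_scale <= threshold or whole_model_clicked
```

The render loop would check this each frame and swap the displayed mesh only when the result changes, avoiding needless reloads.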
When the operator rotates the model, the matching blocks are recalculated and each block's loading state in memory is checked: blocks already being loaded are skipped, while blocks not yet loading are read into memory and displayed.
The experimental model was simplified by 60%. When the simplified model is used for whole-model display, the naked eye can hardly distinguish its clarity from that of the high-precision local details, because the model has been reduced to a certain degree. When the simplified model is pulled closer to inspect details, its sharpness is lower than the local details of the high-precision model. As Table 1 shows, even though the frames-per-second (FPS) value under simplified whole-model display is lower than under local high-precision display, neither case produces delay or stutter.
TABLE 1 FPS under local and Global exhibition strategies
Spatial positioning based on location services uses various positioning technologies to capture the spatial position of a positioning device, and provides information, data, and services to the device through the network.
A spatial positioning technology based on the triangular relation is used. Assume that the three base stations have known coordinates $(x_1,y_1,z_1)$, $(x_2,y_2,z_2)$ and $(x_3,y_3,z_3)$, the device has the unknown coordinates $(x_0,y_0,z_0)$, and the distances $d_1$, $d_2$, $d_3$ from the device to the base stations are measured. The coordinates of the measured object are then solved from the geometric relations:

$$d_1=\sqrt{(x_0-x_1)^2+(y_0-y_1)^2+(z_0-z_1)^2}$$

$$d_2=\sqrt{(x_0-x_2)^2+(y_0-y_2)^2+(z_0-z_2)^2}$$

$$d_3=\sqrt{(x_0-x_3)^2+(y_0-y_3)^2+(z_0-z_3)^2}$$
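One standard way to solve the three sphere equations is to build a local frame from the base stations. A pure-Python sketch, assuming the three measured distances are available (this is the classic trilateration construction, not code from the patent):

```python
import math

def _sub(a, b): return [a[k] - b[k] for k in range(3)]
def _add(a, b): return [a[k] + b[k] for k in range(3)]
def _scale(a, s): return [a[k] * s for k in range(3)]
def _dot(a, b): return sum(a[k] * b[k] for k in range(3))
def _norm(a): return math.sqrt(_dot(a, a))
def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Solve the three sphere equations for the device position.

    Returns the two mirror-image candidates; extra information (e.g. which
    side of the base-station plane the device is on) selects the real one.
    """
    ex = _scale(_sub(p2, p1), 1.0 / _norm(_sub(p2, p1)))   # local x axis
    i = _dot(ex, _sub(p3, p1))
    ey = _sub(_sub(p3, p1), _scale(ex, i))                 # local y axis
    ey = _scale(ey, 1.0 / _norm(ey))
    ez = _cross(ex, ey)                                    # local z axis
    d = _norm(_sub(p2, p1))
    j = _dot(ey, _sub(p3, p1))
    x = (r1*r1 - r2*r2 + d*d) / (2*d)
    y = (r1*r1 - r3*r3 + i*i + j*j - 2*i*x) / (2*j)
    z = math.sqrt(max(r1*r1 - x*x - y*y, 0.0))
    base = _add(p1, _add(_scale(ex, x), _scale(ey, y)))
    return _add(base, _scale(ez, z)), _sub(base, _scale(ez, z))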
the method needs to make a known marker (for example, a paper sheet with a fixed characteristic shape or a two-dimensional code is printed) in advance, then place the marker at a position in the real world space, and determine the mapping marker position; then, performing image recognition and posture evaluation on the marker by using a camera sensor, and marking the spatial position of the marker; then, a certain point of the marker is used as a mapping coordinate origin, so that a mapping coordinate system is established; and finally, establishing a mapping relation between the screen coordinate system and the mapping coordinate system through mapping transformation, so that the displayed image can be mapped in a real world space which is subjected to marker marking and registration based on the mapping relation, and the superposition interaction of the virtual scene in the real space is realized.
In actual coding, all parameters of the mapping relation are represented by matrices, and multiplying a coordinate by the mapping matrix gives the linear transformation of the mapping relation, as in the following formula:

$$s\begin{bmatrix}x_c\\ y_c\\ 1\end{bmatrix}=C\,T_m\begin{bmatrix}x_m\\ y_m\\ z_m\\ 1\end{bmatrix}$$

Matrix C is the camera intrinsic matrix and matrix $T_m$ the camera extrinsic matrix. The intrinsic matrix must be obtained by calibrating the camera in advance, while the extrinsic matrix is solved from the screen coordinates $(x_c, y_c)$, the previously defined mapping coordinate system, and the intrinsic matrix.
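A sketch of the mapping transform, a screen point obtained from the intrinsic matrix C applied after the extrinsic matrix T_m; the matrix values in the usage note are illustrative, not calibration results from the patent:

```python
def mat_vec(m, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(m[r][c] * v[c] for c in range(len(v))) for r in range(len(m))]

def project(camera_matrix, extrinsic, point_world):
    """Map a 3-D point in the mapping coordinate system to screen pixels:
    s * [xc, yc, 1]^T = C * T_m * [x, y, z, 1]^T."""
    xw = list(point_world) + [1.0]          # homogeneous mapping-frame point
    pc = mat_vec(extrinsic, xw)             # 3x4 extrinsic -> camera frame
    uvw = mat_vec(camera_matrix, pc)        # 3x3 intrinsic -> image plane
    return (uvw[0] / uvw[2], uvw[1] / uvw[2])
```

For example, with focal length 800 px, principal point (320, 240), and an identity extrinsic, the point (0, 0, 2) on the camera axis projects to the principal point.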
The basic principle is similar to the marker-based spatial mapping registration algorithm, but any object with enough feature points can serve as the planar reference; no marker must be made in advance, removing that limitation on AR. The algorithm extracts and records the feature points of an object in real space; when the sensor scans the surrounding scene, the scene's feature points are extracted and compared with those of the recorded object. If the number of captured feature points matching the recorded ones exceeds the set threshold, the object is recognized, $T_m$ is computed from the coordinates of the recognized object's feature points, and the virtual projection is finally registered into real space through the spatial mapping matrix.
The whole matching process based on the 3D-printed miniature device model uses an approximate nearest-neighbor matching algorithm combining feature extraction with K-MEANS; the matched key points are then related by homography mapping, and the recognized target is finally judged by scoring against the inlier point set.
Image features are like an image's fingerprint: they uniquely mark it and distinguish it from the characteristic parts of other images, and the quality of feature extraction directly determines the algorithm's recognition performance. Repeatable detectability is one of the important properties of image features: for the same image, the features remain unaffected and identical no matter how external elements change. The features are detected by image processing and computation and extracted from the image, and are collectively called the image's feature description or feature vector.

At present, image feature detection and feature-point description mostly use down-sampling in a Gaussian scale space. To address that method's edge blurring, detail loss, and low detection precision, a feature detection algorithm that builds the scale space by nonlinear diffusion filtering is used. The algorithm effectively resolves blurred image-edge information, avoids losing target-edge information, obtains better local precision during feature extraction, and improves the distinguishability of the features.
(1) First, the input image is preprocessed with Gaussian filtering, and the contrast parameter K is computed by taking the value at the 70% point of the gradient histogram of the smoothed image. K enters the two conductivity formulas:

$$g_1=\exp\left(-\frac{|\nabla L_\sigma|^2}{K^2}\right)$$

$$g_2=\frac{1}{1+\frac{|\nabla L_\sigma|^2}{K^2}}$$

where $\nabla L_\sigma$ is the gradient of the Gaussian-smoothed image. The parameter K serves as the contrast factor controlling the diffusion level; its value is inversely proportional to the amount of edge information retained.
(2) In the scale space, the scale level increases logarithmically. There are O groups (octaves) of images in total, each group containing S layers (sub-levels), and all sub-level images in a group have the same resolution as the original image. The discrete levels correspond to the scale parameters through:

$$\sigma_i(o,s)=\sigma_0\,2^{\,o+s/S},\quad o\in[0,O-1],\ s\in[0,S-1],\ i\in[0,N]$$

where o is the group number, s the layer number, $\sigma_0$ the initial scale value, and N = O × S the total number of images;
(3) Since the nonlinear diffusion filter is developed on the basis of heat-conduction theory, the model's unit is time; the pixel-based scale units are therefore converted into time units $t_i$, called the evolution time:

$$t_i=\frac{\sigma_i^2}{2},\quad i=0,\dots,N$$
(4) The nonlinear scale space is constructed iteratively from the set of evolution times, per the formula:

$$L^{i+1}=\left(I-(t_{i+1}-t_i)\sum_{l=1}^{m}A_l\left(L^i\right)\right)^{-1}L^i$$

where $A_l$ is the image's conduction matrix in dimension l.
(5) Feature points are detected by finding local maxima through the Hessian matrix, per the formula:

$$L_{\mathrm{Hessian}}=\sigma^2\left(L_{xx}L_{yy}-L_{xy}^2\right)$$

where $L_{xx}$, $L_{yy}$ and $L_{xy}$ are second-order derivatives of the filtered image.
(6) Around each feature point of scale $\sigma_i$, a square window of $24\sigma_i \times 24\sigma_i$ is taken and divided into 4 × 4 = 16 sub-regions of size $9\sigma_i \times 9\sigma_i$, with an overlap of width $2\sigma_i$ between every two adjacent sub-regions. Gaussian weighting is then applied to the 16 sub-regions, and each sub-region's feature description variable is obtained from:

$$d_v=\left(\sum L_x,\ \sum L_y,\ \sum|L_x|,\ \sum|L_y|\right)$$

On this basis, each sub-region vector $d_v$ is then weighted through a Gaussian window and normalized, yielding a 64-dimensional feature-point description vector.
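Steps (2) and (3) above, the scale ladder and its conversion to evolution times, can be sketched as follows (the default $\sigma_0$ value is an assumption, not specified in the patent):

```python
def scale_levels(num_octaves, num_sublevels, sigma0=1.6):
    """Build the nonlinear scale-space ladder sigma_i(o, s) = sigma0 * 2^(o + s/S),
    then convert each scale to an evolution time t_i = sigma_i^2 / 2 so the
    diffusion can be run in time units."""
    sigmas, times = [], []
    for o in range(num_octaves):
        for s in range(num_sublevels):
            sigma = sigma0 * 2.0 ** (o + s / num_sublevels)
            sigmas.append(sigma)
            times.append(0.5 * sigma * sigma)
    return sigmas, times
```

Each successive octave doubles the scale, so the ladder grows logarithmically exactly as step (2) requires.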
K-MEANS serves as the main cluster-analysis algorithm. K-MEANS clustering aims to partition n features into k clusters so that each feature belongs to the cluster whose mean (the cluster center) is nearest to it, thereby clustering the different feature points.
The key points of the algorithm comprise:
(1) Selection of the k value: the k value is crucial to the final result, but it must be fixed in advance. Choosing a suitable k requires prior knowledge; estimating it blindly is difficult and may give poor results.
(2) Presence of outliers: during iteration, K-MEANS uses the mean of all points in a cluster as the new center point, so outliers in a cluster cause a serious deviation of the mean. In that case the K-Medoids clustering algorithm (k-medoids clustering) is used instead.
(3) Initial-value sensitivity: the K-MEANS result is sensitive to the initial values, so on top of the basic algorithm, several sets of initial nodes are generated to construct several clusterings, and the best construction is selected by computing and comparing them.
Finally, the relative distance of each feature point in the feature space expresses its cluster membership, computed as the Euclidean distance: in Euclidean space, the Euclidean distance between points $x=(x_1,\dots,x_n)$ and $y=(y_1,\dots,y_n)$ is:

$$d(x,y)=\sqrt{\sum_{i=1}^{n}(x_i-y_i)^2}$$
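The Euclidean distance and the K-MEANS assignment step it supports can be sketched as follows (a minimal illustration, not the patent's full clustering pipeline):

```python
import math

def euclidean(x, y):
    """d(x, y) = sqrt(sum_i (x_i - y_i)^2)"""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def assign_clusters(points, centers):
    """K-MEANS assignment step: each feature joins the cluster whose
    center is nearest in Euclidean distance."""
    return [min(range(len(centers)), key=lambda k: euclidean(p, centers[k]))
            for p in points]
```

A full K-MEANS run alternates this assignment step with recomputing each center as its cluster's mean (or, per key point (2), its medoid when outliers are present).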
on the basis of solving the problem of feature point clustering analysis by using a K-MEANS algorithm, a consistency algorithm based on random sampling is used for verifying the collection result. The algorithm is able to iteratively compute the correct mathematical model from a set of data containing "outliers", which generally refer to noise in the data, i.e., data that does not meet the model requirements. The algorithm is an estimation algorithm with a non-unique result, an estimation model can be generated under a certain probability, and the probability of generating a correct estimation model is higher as the iteration number is higher. The specific implementation steps can be divided into the following steps:
(1) Firstly, selecting a minimum data set which can estimate a model;
(2) Estimating a data model at a current number of iterations using the data set;
(3) Substitute all data into the model for the current iteration and count the inliers (points whose accumulated error lies within the set threshold);
(4) Compare the inlier count of the model at the current iteration with that of the best previously estimated model, recording the maximum inlier count and the corresponding model parameters;
(5) Repeat steps 2-4 until the iterations are exhausted or the current model is accurate enough (the inlier count exceeds a set number within a certain error threshold).
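The steps above can be sketched with a simple 2-D line model standing in for the homography estimation (the model choice, thresholds, and iteration count are illustrative assumptions):

```python
import random

def ransac_line(points, iters=200, inlier_tol=0.5, seed=0):
    """RANSAC sketch for a 2-D line y = a*x + b; the minimal data set is
    two points. Keeps the model with the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)        # 1. minimal sample
        if x1 == x2:
            continue                                       # degenerate sample
        a = (y2 - y1) / (x2 - x1)                          # 2. estimate model
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) <= inlier_tol   # 3. count inliers
                      for x, y in points)
        if inliers > best_inliers:                         # 4. keep the best
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers                        # 5. after all iters
```

On four collinear points plus one gross outlier, the recovered line passes through the four and rejects the outlier, which is the behavior the matching pipeline relies on when scoring by the inlier point set.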
The method sorts the terrain and landform data of the petrochemical enterprise, processes the live-action data, constructs the three-dimensional model in 3DMAX, and generates a basic model database in combination with multi-source data matching. The three-dimensional model is imported into Unity3D for rendering optimization; network communication construction, script programming, animation-state editing, and similar operations are carried out per the workflow to produce a preliminary system prototype; and finally, through testing, modification, packaging, release, and installation and commissioning on the hardware devices, the tank-farm fire emergency disposal system based on mixed reality technology is generated.
The invention achieves the following beneficial effects:
the safety simulation experience training system provided by the invention consists of two parts, namely software and hardware. The method can restore the real operation scene, enables the training staff to have the feeling of being personally on the scene, and the immersive training can excite the learning interest and enthusiasm of the training staff and save the training cost of the field operation. In addition, the system also provides accident emergency treatment basic knowledge training and accident emergency treatment multimedia three-dimensional course training, and training personnel can freely select a training form and carry out targeted training by combining self requirements. Through the training and examination of accident emergency treatment, the accident recognition and emergency treatment capability of training personnel can be greatly improved.
Drawings
Fig. 1 is a flow chart of the present application.
FIG. 2 is a schematic view of the operating perspective of the present application showing a spatial model.
FIG. 3 is a schematic diagram of spatial mapping based on markers in the present application.
FIG. 4 shows the results of the K-MEANS algorithm in this application.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
The safety simulation experience training system based on the mixed reality technology comprises a software system and a hardware system, wherein the software system comprises a three-dimensional virtual reality module based on a tank area fire emergency disposal interactive scene and is used for setting a training database for virtual operation of a user by adopting a three-dimensional simulation technology and the mixed reality technology;
the hardware system comprises tank field fire emergency disposal equipment based on a mixed reality technology and is used for accident emergency disposal operation in a three-dimensional operation scene.
The three-dimensional virtual reality module in the software system is established as follows:
s1: generating a basic model database according to the real shooting data, the terrain data and the image data;
s2: establishing software for three-dimensional modeling according to the generated basic model database;
s3: importing the established three-dimensional model into Unity3D software, and generating a scene framework under dynamic light configuration and environment rendering;
s4: compiling a flow script for the scene frame, and preliminarily generating a system prototype by assisting network communication and hardware SDK access;
s5: and testing, modifying, packaging and releasing the system prototype to form a three-dimensional virtual reality module for tank field fire simulation training.
During development with Unity3D, blasting effects (such as accident explosions) increase the number of polygons, which lowers the loading rate and affects overall system performance. Because the augmented reality head-mounted display carries its own independent computing unit, its computing resources are limited, so its model optimization requirements are stricter than those of a normal host device: a normally constructed high-precision equipment model and dynamic effects cannot be used on existing head-mounted displays. A polygon face-reduction algorithm is therefore proposed that reduces model complexity by cutting the number of polygon faces at run time while preserving model quality; simplifying the model effectively avoids the loading-rate problem. The algorithm preserves the correct display of the model's geometric features and achieves a vivid, real-time, highly robust modeling technique.
The complexity of a three-dimensional model is reduced by repeated edge collapse operations. The difficulty of this optimization method lies in correctly selecting the edges to collapse so that the visual change of the model is minimal, while keeping the time and computational cost of each optimal collapse selection as low as possible. A method is therefore needed that reduces the number of polygon faces during the run stage of the Unity3D software while guaranteeing the quality of the generated low-poly model. To solve these problems, an algorithm for optimal selection of the collapse edge is proposed here, with the following formula:
cost(u,v) = ||u − v|| × max_{f∈Tu} { min_{n∈Tuv} { (1 − f.normal·n.normal) ÷ 2 } }
in the formula: tu-a set of triangles that contain vertex u; tuv-a set of triangles that contains both vertex u and vertex v.
To verify the optimization effect, the number of model faces before and after simplification is compared; at best, the face reduction brings the simplified model down to one tenth of the original face count without greatly affecting the three-dimensional display effect.
Common fire and explosion effects in petrochemical accident scenes generate a large number of polygon patches. The model face-reduction optimization algorithm can optimize the excessive patches formed by explosions, and achieves better-than-expected results on character models. When the frame rate falls below the stutter threshold or a model is far from the current viewpoint, the model optimization algorithm can reduce the high-precision model and substitute the reduced version for the original, improving the smoothness of system operation while ensuring the visual experience is not noticeably affected.
To realize local display of the three-dimensional model under the operating view angle, the object is assumed to lie at the center of a spatial sphere and the camera viewpoint on its surface. The observer can inspect model details from any angle using two interactions, rotation and zooming, which are defined as the user's operation behaviors. Rotation updates the spatial position of the camera viewpoint on the sphere; zooming changes the size of the sphere, thereby updating the distance between the camera viewpoint and the model. As shown in fig. 2, assume the center of the spatial sphere is O and the center of the object model is P, with P and O spatially coincident; V is the camera viewpoint and the view-angle center of the display end, the display plane is tangent to the sphere, and R is the distance between the camera viewpoint V and the sphere center P.
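The rotation and zoom interactions can be sketched as a simple orbit camera. The angle parameterisation and the clamp limits below are illustrative assumptions, not values from the patent:

```python
import math

def orbit_viewpoint(center, radius, yaw_deg, pitch_deg):
    """Position of the camera viewpoint V on a sphere of radius R around the
    model centre P, given two rotation angles (a common orbit-camera
    parameterisation)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = center[0] + radius * math.cos(pitch) * math.sin(yaw)
    y = center[1] + radius * math.sin(pitch)
    z = center[2] + radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

def zoom(radius, factor, r_min=0.5, r_max=100.0):
    """Scaling interaction: change the sphere radius R within preset limits."""
    return min(max(radius * factor, r_min), r_max)
```

Rotation only moves V over the sphere surface, while zooming only rescales R, matching the two interaction behaviors defined in the text.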
After the operator's view angle is determined, an appropriate number of blocks is selected for block-model combination and local rendering display. A block model generally consists of one or more continuous local surfaces of the whole model; because these local surfaces are relatively close together, the block's overall normal vector has directional similarity. The block's normal direction is one of the decisive conditions for defining the matching degree with the observation view angle: a block is observed most clearly when its overall normal is opposite to the viewing direction. The selection algorithm computes, for a known observation view angle, the matching degree of each block, sorts the blocks by matching degree, and selects the first s blocks for display. s depends on the hardware environment of the output device; the higher the hardware configuration, the larger the number s of displayed blocks.
The view-angle-based rendering and display algorithm takes the average normal vector of all triangular patches contained in a block as the block's normal direction. Each patch of the model obtains its normal from the information of its 3 vertices; besides its spatial coordinates, each vertex also carries a normal representing the direction of the point, initialized during modeling as the average normal of its connected patches. The average normal vector of all patches contained in the block model is taken as the block's overall normal, i.e. the direction of the single block, denoted Normal, with the following calculation formula:
Normal = ( Σ_{i=1}^{S} Σ_{j=1}^{3} S(i,j) ) ÷ (3S)
in the above formula: s is the total number of patches contained in the partition, and S (i, j) is the normal vector of the jth vertex of the ith patch.
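The block-normal computation and the top-s selection described above can be sketched as follows; scoring a block by the negated dot product between its normal and the viewing direction (largest when they are opposite) is an assumption consistent with, but not quoted from, the text:

```python
import numpy as np

def block_normal(patch_vertex_normals):
    """Overall Normal of a block: the average of the normals S(i, j) of all
    3 vertices of all S patches it contains, normalised to unit length."""
    n = np.mean(np.asarray(patch_vertex_normals, dtype=float).reshape(-1, 3), axis=0)
    return n / np.linalg.norm(n)

def select_blocks(block_normals, view_dir, s):
    """Rank blocks by how directly they face the viewer (normal opposite to
    the viewing direction scores highest) and keep the first s indices."""
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    scores = [-float(np.dot(n, view_dir)) for n in block_normals]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:s]
```

A larger s (on better hardware, per the text) simply keeps more of the sorted list.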
The local rendering display algorithm solves the delay and stutter that can arise when loading a large number of models makes rendering take too long, while guaranteeing that the displayed model loses no detail.
Using the discrete level-of-detail (LOD) technique greatly reduces the pressure of loading all model data at once. The same model is simplified in advance at multiple levels and divided into different resolution levels; the optimal level is then selected for rendering according to the display requirements. The final display strategy takes the local display technique as primary and the discrete LOD technique as auxiliary.
In LOD-based model simplification, the model is optimized once and only one simplified level of model information is stored. Because the original high-precision model is large and the optimized level is low, a general computing and display unit can render the optimized whole model in real time without delay; the degree of model optimization is matched to the configuration of the computing and display unit.
There are two conditions for using the simplified model display: the operator zooms the model out past a preset threshold, or manually clicks to observe the whole model; the threshold is triggered automatically, while click-to-observe is triggered manually. When the model is shrunk to a certain degree, the operator mainly observes the overall effect and local details are ignored: the human eye cannot distinguish them, so the system need not display the local high-precision effect. After optimization with the LOD technique, local details are no longer displayed and the simplified model is shown as a whole; the human eye cannot tell the two observation effects apart in detail, thus reducing computation and improving efficiency. When the object is not zoomed out and the operator observes the whole model, local display is abandoned manually and the simplified object model is displayed as a whole, so the operator sees the overall information from a closer angle, although the displayed model quality is less fine due to the larger simplification level.
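The two trigger conditions above amount to a small display-mode switch. A minimal sketch, with illustrative names and the convention that a smaller zoom level means the model is shrunk further:

```python
def choose_display_mode(zoom_level, zoom_threshold, manual_overview=False):
    """Display-strategy switch for the two triggers in the text: zooming out
    past a preset threshold (automatic) or the operator clicking to observe
    the whole model (manual)."""
    if manual_overview or zoom_level <= zoom_threshold:
        return "simplified_whole_model"   # discrete-LOD simplified mesh
    return "local_high_precision"         # view-dependent block rendering
```

Keeping the switch in one place makes it easy to tune the threshold per device configuration, as the text suggests.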
When the operator rotates the model, the matched blocks are recalculated and their loading state in memory is checked: if a matched block is already loaded, it is ignored; if not, the new block is read into memory and displayed.
The simplification degree of the experimental model is 60%. When the simplified model is used for overall display at sufficient reduction, the naked eye can hardly distinguish its clarity from the high-precision local details; when the simplified model is pulled closer to inspect details, its clarity is lower than that of the local details of the high-precision model. As Table 1 shows, even though the frames-per-second (FPS) value under the simplified model display is lower than under the local high-precision display, neither case exhibits delay or stutter.
TABLE 1 FPS under the local and global display strategies
[Table 1, listing the FPS values under the local and global display strategies, appears as an image in the original document.]
Spatial positioning based on location services uses various positioning technologies to capture the spatial location of a positioning device and provides information, data and services to the device through the network.
A spatial positioning technique based on the triangular relation is used. Assume the three base stations have known coordinates (x_1, y_1, z_1), (x_2, y_2, z_2) and (x_3, y_3, z_3), that the device coordinates (x_0, y_0, z_0) are to be solved, and that the measured distances from the device to the three base stations are d_1, d_2 and d_3. The coordinates of the measured object can then be solved from the geometric relations:
(x_1 − x_0)² + (y_1 − y_0)² + (z_1 − z_0)² = d_1²
(x_2 − x_0)² + (y_2 − y_0)² + (z_2 − z_0)² = d_2²
(x_3 − x_0)² + (y_3 − y_0)² + (z_3 − z_0)² = d_3²
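The sphere equations can be solved numerically by subtracting the first equation from the others, which linearises the system. A minimal sketch: note that with only three anchors the 3D problem generally has two mirror solutions, so this illustration uses four anchors to make the linearised system fully determined (an assumption beyond the patent's three-station setup):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Solve (x - xi)^2 + (y - yi)^2 + (z - zi)^2 = di^2 for the device
    position.  Subtracting the first sphere equation from each of the others
    gives the linear system 2 (pi - p1) . x = (|pi|^2 - |p1|^2) - (di^2 - d1^2),
    solved here by least squares."""
    p = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)) - (d[1:] ** 2 - d[0] ** 2)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With noisy distance measurements the least-squares form degrades gracefully instead of failing outright.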
referring to fig. 3, the method needs to make a known marker (for example, a paper sheet printed with a fixed characteristic shape or a two-dimensional code) in advance, then place the marker at a position in the real world space, and determine the mapping marker position; then, image recognition and posture evaluation are carried out on the marker by using a camera sensor, and the spatial position of the marker is marked; then, a certain point of the marker is used as a mapping coordinate origin, so that a mapping coordinate system is established; and finally, establishing a mapping relation between the screen coordinate system and the mapping coordinate system through mapping transformation, so that the displayed image can be mapped in the real world space which is fixedly registered by the marker on the basis of the mapping relation, and the superposition interaction of the virtual scene in the real space is realized.
In actual encoding, all parameters of the mapping relationship are represented by a matrix; left-multiplying a coordinate by the mapping matrix yields the linear mapping relationship, as shown in the following formula.
s · [x_c, y_c, 1]^T = C · T_m · [X_m, Y_m, Z_m, 1]^T
Matrix C is the camera intrinsic matrix and matrix T_m is the camera extrinsic matrix. The intrinsic matrix must be obtained by calibrating the camera in advance, while the extrinsic matrix is solved from the screen coordinates (x_c, y_c), the previously defined mapping coordinate system and the intrinsic matrix.
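The mapping can be demonstrated with a standard pinhole projection, a sketch under the assumption of a 3×3 intrinsic matrix C and a 3×4 extrinsic matrix T_m (the usual shapes; the patent formula itself is reproduced only as an image):

```python
import numpy as np

def project(C, T_m, world_pt):
    """Map a 3D point in the mapping coordinate system to screen coordinates:
    left-multiply the homogeneous point by T_m (3x4) and the intrinsic matrix
    C (3x3), then divide by the homogeneous scale s."""
    X = np.append(np.asarray(world_pt, dtype=float), 1.0)  # homogeneous point
    x = C @ T_m @ X
    return x[:2] / x[2]  # (x_c, y_c)
```

With an identity extrinsic matrix this reduces to perspective division by depth, which is a quick sanity check for a freshly calibrated C.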
The plane-reference-based spatial mapping registration algorithm is similar in principle to the marker-based algorithm, but any object with enough feature points can serve as the plane reference; no marker needs to be made in advance, freeing AR from the limitation of markers. The algorithm extracts and marks the feature points of an object in real space; when the sensor scans the surrounding scene, it extracts the scene's feature points and compares them with those of the marked object. If the number of captured feature points matching the marked feature points exceeds a set threshold, the object is marked, T_m is calculated from the coordinates of the marked object's feature points, and the virtual projection is finally registered in real space through the spatial mapping matrix.
The whole matching process based on the 3D-printed miniature device model adopts an approximate nearest-neighbor matching algorithm combining feature extraction with K-MEANS; homography mapping is then applied to the matched key points, and the recognized target is finally judged by scoring against the inlier point set.
Image features are like image fingerprints: they can be uniquely marked and distinguish an image from the characteristic parts of other images, and the quality of feature extraction directly determines the recognition effect of the algorithm. Repeatable detectability is one of the important characteristics of image features: for the same image, its features remain the same no matter how external elements change. The features are detected by image-processing computation and extracted from the image, and are collectively called the feature description or feature vector of the image.
At present, image feature detection and feature point description mostly adopt Gaussian scale-space down-sampling. To overcome that method's problems of edge blurring, detail loss and low detection precision, a feature detection algorithm that constructs the scale space by nonlinear diffusion filtering is used. The algorithm effectively resolves blurring of image edge information, avoids loss of target edges, obtains better local precision during feature extraction, and improves the distinguishability of features.
(1) First, the input image is preprocessed with Gaussian filtering, and the parameter K is computed by selecting the value at the 70% point of the image gradient histogram, as given by the following two formulas:
|∇L_σ| = √(L_x² + L_y²)
g(|∇L_σ|) = 1 ÷ (1 + |∇L_σ|² / K²)
the parameter K is used as a contrast factor for controlling the diffusion level, and the value of K is inversely proportional to the amount of the reserved edge information.
(2) In the scale space, the scale level increases logarithmically. The images form O groups (octaves) in total, each group containing S layers (sub-levels), and all sub-layer images within a group have the same resolution as the original image. The correspondence with the scale parameter is established by the following formula:
σ_i(o, s) = σ_0 · 2^(o + s/S),  o ∈ [0, …, O−1],  s ∈ [0, …, S−1],  i ∈ [0, …, N]
where o denotes the octave index, s the sub-level index, σ_0 the initial value of the scale parameter, and N the total number of images;
(3) Since the nonlinear diffusion filter is developed from the heat-conduction theory, the model is parameterized in time units; these are converted into image-pixel units by the following formula, where t_i is called the evolution time.
t_i = σ_i² ÷ 2,  i ∈ {0, …, N}
(4) Constructing a nonlinear scale space in an iterative mode according to a group of evolution time, wherein the formula is as follows:
L^(i+1) = ( I − (t_(i+1) − t_i) · Σ_{l=1}^{m} A_l(L^i) )^(−1) · L^i
(5) And finding a local maximum value through a Hessian matrix to obtain characteristic point detection, wherein the formula is as follows:
L_Hessian = σ² ( L_xx · L_yy − L_xy² )
(6) A 24σ_i × 24σ_i square window is taken around each feature point of scale σ_i and divided into 4 × 4 = 16 sub-regions, each of size 9σ_i × 9σ_i, with every two adjacent sub-regions overlapping by a width of 2σ_i. Gaussian weighting is then applied to the 16 sub-regions, and a sub-region feature description variable is obtained according to the following formula:
d_v = ( ΣL_x, ΣL_y, Σ|L_x|, Σ|L_y| )
on this basis, the vector d_v of each sub-region is then weighted by a Gaussian window and normalized to obtain a 64-dimensional feature point description vector.
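The scale-space setup of steps (1)–(3) can be sketched in Python. The percentile-based contrast factor and the σ_0·2^(o + s/S) level construction follow the standard KAZE-style scheme (assumed here, since the patent's formulas are reproduced only as images), and using a plain finite-difference gradient without the preliminary Gaussian smoothing is a simplification:

```python
import numpy as np

def contrast_factor(image, percentile=70.0):
    """Contrast factor K: the value at the given percentile of the
    image-gradient-magnitude histogram (the text uses the 70% point)."""
    gy, gx = np.gradient(np.asarray(image, dtype=float))
    mag = np.hypot(gx, gy)
    return float(np.percentile(mag[mag > 0], percentile))

def scale_levels(sigma0, num_octaves, num_sublevels):
    """Scale parameters sigma_i = sigma0 * 2^(o + s/S) and the evolution
    times t_i = sigma_i^2 / 2 that map diffusion time to pixel units."""
    sigmas, times = [], []
    for o in range(num_octaves):
        for s in range(num_sublevels):
            sigma = sigma0 * 2.0 ** (o + s / num_sublevels)
            sigmas.append(sigma)
            times.append(0.5 * sigma * sigma)
    return sigmas, times
```

A larger K makes the conductivity tolerate stronger gradients, i.e. fewer edges are treated as diffusion barriers, which matches the text's remark that K is inversely related to the amount of retained edge information.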
K-MEANS serves as the main cluster-analysis algorithm. The goal of K-MEANS clustering is to partition n features into k clusters so that each feature belongs to the cluster whose mean (i.e. cluster center) is nearest to it, thereby clustering the different feature points.
The key points of the algorithm comprise:
(1) Selection of the k value: the k value is crucial to the final result but must be set in advance; choosing a suitable k requires prior knowledge, and estimating it blindly is difficult and may yield poor results.
(2) Presence of outliers: during iteration, K-MEANS uses the mean of all points as the new center point, so outliers in a cluster cause severe mean deviation. In that case the K-Medoids clustering algorithm (k-median clustering) is used instead.
(3) Initial-value sensitivity: the result of the K-MEANS algorithm is sensitive to the initial values, so on top of the algorithm, several sets of initial nodes are used to construct multiple classification rules, and the optimal rule is selected by computing and comparing them.
Referring to fig. 4, the relative distance of each feature point in the feature space is finally used to represent the cluster attribute, and the calculation of the relative distance uses the calculation method of euclidean distance here: in euclidean space, point x = (x) 1 ,...,x n ) And y = (y) 1 ,...,y n ) The Euclidean distance relationship between the two is as follows:
d(x, y) = √( Σ_{i=1}^{n} (x_i − y_i)² )
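A minimal K-MEANS sketch using the Euclidean distance above; a production version would add the k-medoids variant and the multi-start initialisation discussed in the text (the function name and defaults are illustrative):

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal K-MEANS: assign each feature vector to its nearest centre
    (Euclidean distance) and move each centre to the mean of its cluster."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # distance of every point to every centre
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([pts[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers
```

Fixing the random seed makes the initial-centre draw reproducible, which is one simple way to compare the multiple initialisations mentioned in point (3).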
on the basis of solving the feature-point cluster analysis with the K-MEANS algorithm, a random-sample-consensus algorithm is used to verify the clustering result. The algorithm iteratively computes a correct mathematical model from a data set containing "outliers", which generally means noise in the data, i.e. data that does not fit the model. It is an estimation algorithm with a non-unique result: an estimated model is produced with some probability, and the more iterations are run, the higher the probability of producing a correct model. The implementation can be divided into the following steps:
(1) Firstly, selecting a minimum data set which can estimate a model;
(2) Estimating a data model at a current iteration number using the data set;
(3) Substituting all data into a data model under the current iteration times, and calculating to obtain the number of local interior points (the accumulated error is in a set threshold range);
(4) Comparing the number of the local points of the model under the current iteration times with the number of the best model obtained by previous estimation, and recording the maximum local point number and the corresponding model parameters;
(5) Repeating steps (2)–(4) until the iterations are exhausted or the current model is accurate enough (the number of inliers exceeds a certain count within a given error threshold).
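The steps above can be sketched with a 2D line as the model; in the actual matching pipeline the model would be the homography over matched key points, so the line here is only a stand-in, and the iteration count and threshold are illustrative:

```python
import numpy as np

def ransac_line(points, iters=200, thresh=0.1, seed=0):
    """Random-sample-consensus sketch of steps (1)-(5): repeatedly fit a line
    to a minimal 2-point sample, count the inliers whose distance to the line
    is within the threshold, and keep the model with the largest inlier set."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)  # (1) minimal data set
        p, q = pts[i], pts[j]
        d = q - p
        n = np.array([-d[1], d[0]])                         # line normal
        norm = np.linalg.norm(n)
        if norm == 0:
            continue                                        # degenerate sample
        n /= norm
        dist = np.abs((pts - p) @ n)                        # (3) point-line distances
        inliers = int((dist < thresh).sum())
        if inliers > best_inliers:                          # (4) keep the best model
            best_inliers, best_model = inliers, (p, q)
    return best_model, best_inliers
```

The final inlier count is exactly the "Inlier point set" score used to accept or reject the recognized target.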
The landform data of petrochemical enterprises are arranged and the live-action data processed, a three-dimensional model is constructed with 3DMAX, and a basic model database is generated by combining multi-source data matching. The three-dimensional model is imported into Unity3D for rendering optimization; network communication construction, script programming and animation state editing are performed according to the process flow to generate the system prototype, which is tested, modified, packaged, released, and installed and debugged on the hardware equipment to finally produce the tank fire emergency disposal system based on the mixed reality technology. The technical route is shown in figure 1.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. The safety simulation experience training system based on the mixed reality technology is characterized by comprising a software system and a hardware system, wherein the software system comprises a three-dimensional virtual reality module based on a tank area fire emergency disposal interactive scene and is used for setting a training database for virtual operation of a user by adopting a three-dimensional simulation technology and the mixed reality technology;
the hardware system comprises tank field fire emergency disposal equipment based on a mixed reality technology and is used for accident emergency disposal operation in a three-dimensional operation scene.
2. The mixed reality technology-based safety simulation experience training system as claimed in claim 1, wherein the three-dimensional virtual reality module in the software system is established as follows:
s1: generating a basic model database according to the real shooting data, the terrain data and the image data;
s2: establishing software for three-dimensional modeling according to the generated basic model database;
s3: importing the established three-dimensional model into Unity3D software, and generating a scene framework under dynamic light configuration and environment rendering;
s4: compiling a flow script for the scene frame, and preliminarily generating a system prototype by assisting network communication and hardware SDK access;
s5: and testing, modifying, packaging and releasing the system prototype to form a three-dimensional virtual reality module for tank field fire simulation training.
3. The mixed reality technology-based safety simulation experience training system as claimed in claim 2, wherein in the step S3, a polygon face-reduction algorithm is used to reduce the complexity of the model by cutting the number of polygon faces during the run stage of the Unity3D software, while guaranteeing the model quality and the quality of the generated low-poly model, using the following formula:
cost(u,v) = ||u − v|| × max_{f∈Tu} { min_{n∈Tuv} { (1 − f.normal·n.normal) ÷ 2 } }
in the formula: tu-a set of triangles that contain vertex u; tuv-a set of triangles that contains both vertex u and vertex v.
4. The mixed reality technology-based safety simulation experience training system as claimed in claim 3, wherein the marker-based spatial mapping registration algorithm specifically comprises the following steps:
(1) A known marker needs to be made in advance, then the marker is placed at a position in a real world space, and the position of a mapping marker is determined;
(2) Then, image recognition and posture evaluation are carried out on the marker by using a camera sensor, and the spatial position of the marker is marked;
(3) Then, a certain point of the marker is used as a mapping coordinate origin, so that a mapping coordinate system is established;
(4) And finally, establishing a mapping relation between the screen coordinate system and the mapping coordinate system through mapping transformation, so that the displayed image can be mapped in the real world space which is fixedly registered by the marker on the basis of the mapping relation, and the superposition interaction of the virtual scene in the real space is realized.
In actual encoding, all parameters of the mapping relationship are represented by a matrix, and a linear transformation of the mapping relationship can be obtained by left-multiplying the coordinates by the mapping matrix, as shown in the following formula:
s · [x_c, y_c, 1]^T = C · T_m · [X_m, Y_m, Z_m, 1]^T
matrix C is the camera intrinsic matrix and matrix T_m the camera extrinsic matrix, wherein the intrinsic matrix needs to be obtained by calibrating the camera in advance, and the extrinsic matrix is obtained from the screen coordinates (x_c, y_c), the previously defined mapping coordinate system and the intrinsic matrix.
5. The system of claim 4, wherein the plane-reference-based spatial mapping registration algorithm can use any object with sufficient feature points as the plane reference without making markers in advance, freeing AR from the limitation of markers; the algorithm extracts and marks the feature points of an object in real space, and when the sensor scans the surrounding scene it extracts the scene's feature points and compares them with those of the marked object; if the number of captured feature points matching the marked feature points exceeds a set threshold, the object is marked, T_m is calculated from the coordinates of the marked object's feature points, and the virtual projection is finally registered in real space through the spatial mapping matrix.
6. The mixed reality technology-based safety simulation experience training system as claimed in claim 4 or 5, wherein the whole matching process based on the 3D-printed miniature device model adopts an approximate nearest-neighbor matching algorithm combining feature extraction with K-MEANS, then performs homography mapping on the matched key points, and finally judges the recognized target by scoring against the inlier point set;
the approximate nearest neighbor matching algorithm comprises a computer vision-based feature extraction algorithm, a data mining-based clustering algorithm and a random sampling consistency-based verification algorithm.
7. The mixed reality technology-based safety simulation experience training system as claimed in claim 6, wherein the computer vision-based feature extraction algorithm uses nonlinear diffusion filtering to construct a scale-space feature detection algorithm, comprising the following specific steps:
(1) First, the input image is preprocessed with Gaussian filtering, and the parameter K is computed by selecting the value at the 70% point of the image gradient histogram:
|∇L_σ| = √(L_x² + L_y²)
g(|∇L_σ|) = 1 ÷ (1 + |∇L_σ|² / K²)
the parameter K is used as a contrast factor for controlling the diffusion level, and the value of the K is inversely proportional to the quantity of the reserved edge information;
(2) In the scale space, the scale level increases logarithmically; the images form O groups (octaves) in total, each group containing S layers (sub-levels), and all sub-layer images within a group have the same resolution as the original image; the correspondence with the scale parameter is established by the following formula:
σ_i(o, s) = σ_0 · 2^(o + s/S),  o ∈ [0, …, O−1],  s ∈ [0, …, S−1],  i ∈ [0, …, N]
wherein o denotes the octave index, s the sub-level index, σ_0 the initial value of the scale parameter, and N the total number of images;
(3) Since the nonlinear diffusion filter is developed from the heat-conduction theory, the model is parameterized in time units, which are converted into image-pixel units by the following formula, where t_i is called the evolution time:
t_i = σ_i² ÷ 2,  i ∈ {0, …, N}
(4) According to a group of evolution time, a nonlinear scale space is constructed in an iterative mode, and the following formula is adopted:
L^(i+1) = ( I − (t_(i+1) − t_i) · Σ_{l=1}^{m} A_l(L^i) )^(−1) · L^i
(5) And finding a local maximum value through a Hessian matrix to obtain characteristic point detection, wherein the formula is as follows:
L_Hessian = σ² ( L_xx · L_yy − L_xy² )
(6) A 24σ_i × 24σ_i square window is taken around each feature point of scale σ_i and divided into 4 × 4 = 16 sub-regions, each of size 9σ_i × 9σ_i, with every two adjacent sub-regions overlapping by a width of 2σ_i; Gaussian weighting is then applied to the 16 sub-regions, and a sub-region feature description variable is obtained according to the following formula:
d_v = ( ΣL_x, ΣL_y, Σ|L_x|, Σ|L_y| )
on this basis, the vector d_v of each sub-region is then weighted by a Gaussian window and normalized to obtain a 64-dimensional feature point description vector.
8. The mixed reality technology-based safety simulation experience training system as claimed in claim 7, wherein a clustering algorithm based on data mining uses K-MEANS as an algorithm of main clustering analysis, and the purpose of K-MEANS clustering is to divide n features into K clusters, so that each feature belongs to a cluster corresponding to its nearest mean value, thereby achieving the purpose of clustering different feature points;
the key points of the algorithm comprise:
(1) Selection of the k value: the k value is crucial to the final result but must be preset; choosing a proper k value requires prior knowledge, and estimating it blindly is difficult and may lead to poor results;
(2) Presence of outliers: during iteration, K-MEANS uses the mean of all points in a cluster as the new center point; if abnormal points exist in a cluster, the mean deviates severely, in which case the K-Medoids clustering algorithm (k-medoids clustering) is used instead;
(3) Initial value sensitivity: the result of the K-MEANS algorithm is sensitive to the initial values, so on the basis of the algorithm, multiple classification rules are constructed by initializing several sets of initial nodes, and the optimal construction rule is selected by calculation and comparison;
finally, the relative distance of each feature point in the feature space is used to represent its cluster attribute; the relative distance is calculated with the Euclidean distance: in Euclidean space, for points x = (x_1, ..., x_n) and y = (y_1, ..., y_n), the Euclidean distance is given by:
d(x, y) = √((x_1 − y_1)² + (x_2 − y_2)² + ... + (x_n − y_n)²) = √(Σ_{i=1..n} (x_i − y_i)²)
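A minimal NumPy sketch of the K-MEANS loop described above, using the Euclidean distance of the preceding formula (illustrative only; the multi-initialization comparison and the K-Medoids fallback mentioned in the claim are omitted):

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain K-MEANS: assign each feature to the cluster with the
    nearest mean under the Euclidean distance, then recompute means.
    Assumes no cluster becomes empty during iteration."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Euclidean distance from every point to every center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):   # converged
            break
        centers = new
    return labels, centers
```

Because the result depends on the random initial centers, a real system would rerun with several seeds and keep the best partition, which is exactly the initial-value-sensitivity mitigation the claim describes.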
9. the mixed reality technology-based safety simulation experience training system as claimed in claim 8, wherein, on the basis of the K-MEANS algorithm used to solve the feature point cluster analysis problem, the clustering result is verified with a random-sampling-based consensus algorithm; the algorithm iteratively estimates a correct mathematical model from a set of data containing "outliers", where "outliers" generally refer to noise in the data, that is, data that does not fit the model; the implementation can be divided into the following steps:
(1) Firstly, selecting a minimum data set which can estimate a model;
(2) Estimating a data model at a current iteration number using the data set;
(3) Substituting all data into the data model at the current iteration number, and counting the number of inliers (points whose accumulated error is within a set threshold);
(4) Comparing the number of inliers of the current model with that of the best previously estimated model, and recording the maximum inlier count and the corresponding model parameters;
(5) Repeating steps (2)-(4) until the iterations are finished or the current model is accurate enough (the number of inliers exceeds a set count within a certain error threshold).
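The five steps above can be sketched for a simple 2-D line model y = a·x + b (an illustrative assumption; the claim does not fix the model type):

```python
import numpy as np

def ransac_line(points, iters=200, threshold=0.5, seed=0):
    """Random-sample consensus sketch for a 2-D line y = a*x + b:
    (1) sample the minimal set (2 points), (2) fit the model,
    (3) count inliers within the error threshold, (4) keep the model
    with the most inliers, (5) repeat for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, -1
    x, y = points[:, 0], points[:, 1]
    for _ in range(iters):
        i, j = rng.choice(len(points), 2, replace=False)
        if x[i] == x[j]:                 # skip degenerate (vertical) samples
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = int(np.sum(np.abs(y - (a * x + b)) < threshold))
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

A model estimated from two clean points collects all clean points as inliers, while a model contaminated by an outlier collects far fewer, so the recorded best model converges to the correct one.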
10. The mixed reality technology-based safety simulation experience training system as claimed in claim 1, wherein the tank farm fire emergency treatment equipment comprises a miniature model of the accident scene, a virtual reality operating handle, and a positioner; the immersive head-mounted display provides 6 cameras for collecting environment information and user interaction information, serving as the data input collection and three-dimensional display end of the mixed reality algorithm; it runs the Windows 10 operating system and comprises a custom-designed central processing unit and holographic processing unit serving as the holographic rendering and computing unit.
CN202110706962.9A 2021-06-24 2021-06-24 Safety simulation experience training system based on mixed reality technology Pending CN115527008A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110706962.9A CN115527008A (en) 2021-06-24 2021-06-24 Safety simulation experience training system based on mixed reality technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110706962.9A CN115527008A (en) 2021-06-24 2021-06-24 Safety simulation experience training system based on mixed reality technology

Publications (1)

Publication Number Publication Date
CN115527008A true CN115527008A (en) 2022-12-27

Family

ID=84694112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110706962.9A Pending CN115527008A (en) 2021-06-24 2021-06-24 Safety simulation experience training system based on mixed reality technology

Country Status (1)

Country Link
CN (1) CN115527008A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091723A (en) * 2022-12-29 2023-05-09 上海网罗电子科技有限公司 Fire emergency rescue live-action three-dimensional modeling method and system based on unmanned aerial vehicle
CN116091723B (en) * 2022-12-29 2024-01-05 上海网罗电子科技有限公司 Fire emergency rescue live-action three-dimensional modeling method and system based on unmanned aerial vehicle

Similar Documents

Publication Publication Date Title
CN108764048B (en) Face key point detection method and device
Zhang et al. Image engineering
Xu et al. Reconstruction of scaffolds from a photogrammetric point cloud of construction sites using a novel 3D local feature descriptor
Taneja et al. Geometric change detection in urban environments using images
US20160249041A1 (en) Method for 3d scene structure modeling and camera registration from single image
Poullis et al. Photorealistic large-scale urban city model reconstruction
CN112489099B (en) Point cloud registration method and device, storage medium and electronic equipment
CN112053447A (en) Augmented reality three-dimensional registration method and device
CN116229007B (en) Four-dimensional digital image construction method, device, equipment and medium using BIM modeling
CN111695431A (en) Face recognition method, face recognition device, terminal equipment and storage medium
CN109766896B (en) Similarity measurement method, device, equipment and storage medium
CN112084916A (en) Automatic generation and diagnosis method for urban three-dimensional skyline contour line based on shielding rate
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
WO2021188104A1 (en) Object pose estimation and defect detection
CN114676763A (en) Construction progress information processing method
Ikeno et al. An enhanced 3D model and generative adversarial network for automated generation of horizontal building mask images and cloudless aerial photographs
Gibson et al. Interactive reconstruction of virtual environments from photographs, with application to scene-of-crime analysis
CN112802208B (en) Three-dimensional visualization method and device in terminal building
CN114612393A (en) Monocular vision-based reflective part pose estimation method
CN115527008A (en) Safety simulation experience training system based on mixed reality technology
CN114273826A (en) Automatic identification method for welding position of large-sized workpiece to be welded
CN113793251A (en) Pose determination method and device, electronic equipment and readable storage medium
Pyka et al. LiDAR-based method for analysing landmark visibility to pedestrians in cities: case study in Kraków, Poland
CN113160401A (en) Object-oriented visual SLAM lightweight semantic map creation method
CN111179271A (en) Object angle information labeling method based on retrieval matching and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination