CN116416402A - Data display method and system based on MR (mixed reality) collaborative digital sand table - Google Patents

Data display method and system based on MR (mixed reality) collaborative digital sand table

Info

Publication number
CN116416402A
Authority
CN
China
Prior art keywords
information
target user
sand table
current environment
target
Prior art date
Legal status
Pending
Application number
CN202310665907.9A
Other languages
Chinese (zh)
Inventor
王宇翔
马海波
王帅
王亚娜
王虹
李晓明
Current Assignee
Aerospace Hongtu Information Technology Co Ltd
Original Assignee
Aerospace Hongtu Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Aerospace Hongtu Information Technology Co Ltd filed Critical Aerospace Hongtu Information Technology Co Ltd
Priority to CN202310665907.9A
Publication of CN116416402A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2004 Aligning objects, relative positioning of parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2016 Rotation, translation, scaling

Abstract

The invention provides a data display method and system based on an MR collaborative digital sand table, which relate to the technical field of data processing and comprise the following steps: acquiring terrain base information of an area to be processed and target information provided by a GIS system; constructing a three-dimensional terrain model of the area to be processed based on the terrain base information and the target information; superimposing and aligning the three-dimensional terrain model with the current environment in the MR collaborative digital sand table based on a spatial analysis algorithm and feature point coordinates of the current environment; and processing the three-dimensional terrain model in the MR collaborative digital sand table based on the position information of a first target user in the current environment acquired by an MR depth camera, the operation information of the first target user and the Unity3D technology, and displaying the processing procedure on an MR display device worn by the first target user, thereby solving the technical problem of the poor display effect of existing three-dimensional model methods.

Description

Data display method and system based on MR (mixed reality) collaborative digital sand table
Technical Field
The invention relates to the technical field of data processing, and in particular to a data display method and system based on an MR (mixed reality) collaborative digital sand table.
Background
As the concept of the digital twin has taken hold, many scholars in China have carried out research on three-dimensional spatial analysis for industrial applications such as smart cities, smart parks and smart factories. The theory is relatively mature, and three-dimensional spatial analysis technology has been applied in many fields. For example, when applied to urban planning it enables slope analysis, sunlight analysis and road-network planning analysis in a three-dimensional scene, improving the scientific rigor and efficiency of urban planning and management. Canal channels have been modeled with visualization technology and their data analyzed with the theoretical methods of three-dimensional spatial analysis, and the demand for high-fidelity three-dimensional scenes keeps growing. MR device architectures and methods have likewise been applied to three-dimensional geographic information: following the development of holographic 3DGIS, system architectures have been designed that realize interaction between the user and a virtual three-dimensional digital city and between the virtual city and the physical world, mapping the holographic digital city into the physical world. Against this background, it is particularly important that three-dimensional GIS rendering take a further step toward realistic terrain, terrain textures and surface features in terms of picture fineness, interaction smoothness and scene restoration; that typical spatial analysis algorithms such as profile analysis, visibility analysis and illumination analysis be reproduced with MR technology; that environment perception, holographic display and programmable interaction interfaces be developed on MR devices; and that analysis results be fed back in real time through the spatial interaction of the participants' bodies, achieving audio-visual synchronization and real-time immersion in the content.
However, the rendering capability of three-dimensional GIS falls short. On the one hand it is limited by the platform: as three-dimensional GIS moves from a local environment to a web environment, the computing power it can schedule is limited. On the other hand, its three-dimensional capability is insufficient: viewed across the whole three-dimensional industry, three-dimensional GIS is a relatively niche market, and the most advanced three-dimensional technology lives where demand is strongest, namely the game and entertainment industry. The GIS industry therefore does not master the most advanced three-dimensional rendering technology, and the display effect of existing three-dimensional model methods is poor.
No effective solution to the above problems has yet been proposed.
Disclosure of Invention
Therefore, the object of the present invention is to provide a data display method and system based on an MR (mixed reality) collaborative digital sand table, so as to solve the technical problem of the poor display effect of existing three-dimensional model methods.
In a first aspect, an embodiment of the present invention provides a data display method based on an MR collaborative digital sand table, including: obtaining terrain base information of an area to be processed and target information provided by a GIS system, wherein the target information includes: geographic information, earth surface pictures and tiff data; constructing a three-dimensional terrain model of the area to be processed based on the terrain base information and the target information; superimposing and aligning the three-dimensional terrain model with the current environment in an MR collaborative digital sand table based on a spatial analysis algorithm and feature point coordinates of the current environment; and processing the three-dimensional terrain model in the MR collaborative digital sand table based on the position information of a first target user in the current environment acquired by an MR depth camera, the operation information of the first target user and the Unity3D technology, and displaying the processing procedure on an MR display device worn by the first target user, wherein the operation information includes at least: a rotation operation, a scaling operation, a movement operation and a spatial analysis operation.
Further, constructing the three-dimensional terrain model of the area to be processed based on the terrain base information and the target information includes: constructing a white model of the area to be processed based on the terrain base information; and rendering and optimizing the white model based on the target information to obtain the three-dimensional terrain model.
Further, processing the three-dimensional terrain model in the MR collaborative digital sand table based on the position information of the first target user in the current environment acquired by the MR depth camera, the operation information of the first target user and the Unity3D technology includes: determining anchor point information of the first target user in the current environment based on the position information; determining the current three-dimensional terrain model in the MR collaborative digital sand table based on the anchor point information; and processing the current three-dimensional terrain model in the MR collaborative digital sand table based on the operation information of the first target user and the Unity3D technology.
Further, if the current environment includes a second target user, the method further includes: sharing anchor point information of the first target user in the current environment with the MR display device worn by the second target user, so that the MR display device worn by the second target user also displays the processing procedure.
In a second aspect, an embodiment of the present invention further provides a data display system based on an MR collaborative digital sand table, including: a GIS platform configured to store target information of an area to be processed, wherein the target information includes: geographic information, earth surface pictures and tiff data; and a server configured to obtain terrain base information of the area to be processed and the target information provided by the GIS platform; the server is further configured to construct a three-dimensional terrain model of the area to be processed based on the terrain base information and the target information; the server is further configured to superimpose and align the three-dimensional terrain model with the current environment in the MR collaborative digital sand table based on a spatial analysis algorithm and the feature point coordinates of the current environment; the server is further configured to process the three-dimensional terrain model in the MR collaborative digital sand table based on the position information of a first target user in the current environment acquired by an MR depth camera, the operation information of the first target user and the Unity3D technology, wherein the operation information includes at least: a rotation operation, a scaling operation, a movement operation and a spatial analysis operation; and an MR display device worn by the first target user and configured to display the processing procedure.
Further, the server is configured to: construct a white model of the area to be processed based on the terrain base information; and render and optimize the white model based on the target information to obtain the three-dimensional terrain model.
Further, the server is configured to: determine anchor point information of the first target user in the current environment based on the position information; determine the current three-dimensional terrain model in the MR collaborative digital sand table based on the anchor point information; and process the current three-dimensional terrain model in the MR collaborative digital sand table based on the operation information of the first target user and the Unity3D technology.
Further, if the current environment includes a second target user, the server is configured to share anchor point information of the first target user in the current environment with the MR display device worn by the second target user, and the MR display device worn by the second target user is configured to display the processing procedure.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory and a processor, where the memory is configured to store a program for supporting the processor to execute the method described in the first aspect, and the processor is configured to execute the program stored in the memory.
In a fourth aspect, embodiments of the present invention further provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method described in the first aspect.
In the embodiment of the invention, terrain base information of an area to be processed and target information provided by a GIS system are obtained, wherein the target information includes: geographic information, earth surface pictures and tiff data; a three-dimensional terrain model of the area to be processed is constructed based on the terrain base information and the target information; the three-dimensional terrain model is superimposed on and aligned with the current environment in an MR collaborative digital sand table based on a spatial analysis algorithm and feature point coordinates of the current environment; and the three-dimensional terrain model in the MR collaborative digital sand table is processed based on the position information of a first target user in the current environment acquired by an MR depth camera, the operation information of the first target user and the Unity3D technology, and the processing procedure is displayed on an MR display device worn by the first target user, the operation information including at least a rotation operation, a scaling operation, a movement operation and a spatial analysis operation. The purpose of providing better rendering and display effects for the three-dimensional model is thereby achieved, the technical problem of the poor display effect of existing three-dimensional model methods is solved, and the technical effect of providing users with a better three-dimensional model processing experience is attained.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of a data display method based on an MR collaborative digital sand table according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a data display system based on an MR-collaborative digital sand table according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Embodiment one:
In accordance with an embodiment of the present invention, an embodiment of a data display method based on an MR collaborative digital sand table is provided. It should be noted that the steps shown in the flowchart of the figures may be executed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one shown here.
Fig. 1 is a flowchart of a data display method based on an MR collaborative digital sand table according to an embodiment of the invention. As shown in fig. 1, the method includes the following steps:
Step S102: obtaining terrain base information of an area to be processed and target information provided by a GIS system, wherein the target information includes: geographic information, earth surface pictures and tiff data;
Step S104: constructing a three-dimensional terrain model of the area to be processed based on the terrain base information and the target information;
Step S106: superimposing and aligning the three-dimensional terrain model with the current environment in an MR collaborative digital sand table based on a spatial analysis algorithm and feature point coordinates of the current environment;
Step S108: processing the three-dimensional terrain model in the MR collaborative digital sand table based on the position information of a first target user in the current environment acquired by an MR depth camera, the operation information of the first target user and the Unity3D technology, and displaying the processing procedure on an MR display device worn by the first target user, wherein the operation information includes at least: a rotation operation, a scaling operation, a movement operation and a spatial analysis operation.
In the embodiment of the invention, terrain base information of an area to be processed and target information provided by a GIS system are obtained, wherein the target information includes: geographic information, earth surface pictures and tiff data; a three-dimensional terrain model of the area to be processed is constructed based on the terrain base information and the target information; the three-dimensional terrain model is superimposed on and aligned with the current environment in an MR collaborative digital sand table based on a spatial analysis algorithm and feature point coordinates of the current environment; and the three-dimensional terrain model in the MR collaborative digital sand table is processed based on the position information of a first target user in the current environment acquired by an MR depth camera, the operation information of the first target user and the Unity3D technology, and the processing procedure is displayed on an MR display device worn by the first target user, the operation information including at least a rotation operation, a scaling operation, a movement operation and a spatial analysis operation. The purpose of providing better rendering and display effects for the three-dimensional model is thereby achieved, the technical problem of the poor display effect of existing three-dimensional model methods is solved, and the technical effect of providing users with a better three-dimensional model processing experience is attained.
In the embodiment of the present invention, step S104 includes the following steps:
constructing a white model of the area to be processed based on the terrain basic information;
and rendering and optimizing the white model based on the target information to obtain the three-dimensional terrain model.
In the embodiment of the invention, a technician uses modeling software such as 3ds Max to build UV maps, wireframes and a white (untextured) city model of the area to be processed from official terrain base information, and then renders materials, lighting effects, textures and the like to construct the three-dimensional terrain model of the area to be processed.
The geographic information, earth surface pictures and converted tiff data provided by the GIS service are retrieved, and the geographic information data and pictures are converted into city-model textures with which the white model is rendered. The HDRP (High Definition Render Pipeline) rendering technology is used to adjust and fine-tune the rendering of earth surface illumination effects, surrounding environment effects, climate effects and the like, while the various GIS objects are displayed at their correct spatial positions so that their real, natural forms are represented.
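A minimal Unity sketch of this step is given below, assuming the DEM extracted from the tiff data has already been decoded into a normalized square height array and the rendered earth surface picture is available as a Texture2D; the class and member names are illustrative and are not defined by the patent.

    using UnityEngine;

    // Builds a simple terrain "white model" from DEM heights and drapes a GIS surface
    // texture over it. Assumes heights are normalized to 0..1 and sized resolution x resolution.
    public class SandTableTerrainBuilder : MonoBehaviour
    {
        public int resolution = 513;                                  // heightmap resolution (2^n + 1)
        public Vector3 terrainSize = new Vector3(1000f, 200f, 1000f); // metres (x, max height, z)
        public Texture2D surfaceTexture;                              // rendered earth surface picture from the GIS service

        public Terrain Build(float[,] heights)
        {
            var data = new TerrainData();
            data.heightmapResolution = resolution;
            data.size = terrainSize;
            data.SetHeights(0, 0, heights);        // white-model geometry from the DEM

            // Drape the GIS surface picture over the terrain as a single layer.
            var layer = new TerrainLayer();
            layer.diffuseTexture = surfaceTexture;
            layer.tileSize = new Vector2(terrainSize.x, terrainSize.z);
            data.terrainLayers = new[] { layer };

            var go = Terrain.CreateTerrainGameObject(data);
            go.transform.SetParent(transform, false);
            return go.GetComponent<Terrain>();
        }
    }

In an HDRP project, the illumination, climate and surrounding-environment adjustments described above would additionally be configured on the scene's lights and volume profile rather than in this script.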
In the embodiment of the present invention, step S108 includes the steps of:
determining anchor point information of the first target user in the current environment based on the position information;
determining a current three-dimensional terrain model in the MR cooperative digital sand table based on the anchor point information;
and processing the current three-dimensional terrain model in the MR collaborative digital sand table based on the operation information of the first target user and the Unity3D technology.
In the embodiment of the invention, the DEM spatial analysis of the MR collaborative digital sand table takes the digital elevation model as the analysis object: typical spatial analysis algorithms such as profile analysis, visibility analysis and illumination analysis are reproduced in MR, the environment sensing, holographic display and programmable interaction interfaces of the MR development environment are used, and the analysis results are fed back in real time through the spatial interaction of the participant's body. The image data obtained by the camera are then processed by the three-dimensional sensing technology of the MR device: each acquired frame is analyzed and recognized, feature points are extracted and matched, and the coordinates of certain feature points in space are calculated.
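As a concrete illustration of one of these analyses, the profile (cross-section) analysis over the terrain model can be sketched as follows; the sample count and method names are illustrative assumptions rather than the implementation prescribed by the patent.

    using UnityEngine;

    // Samples terrain heights along the straight line between two picked points,
    // producing the elevation profile that can then be drawn as a holographic curve.
    public static class ProfileAnalysis
    {
        public static float[] Sample(Terrain terrain, Vector3 start, Vector3 end, int samples = 100)
        {
            var profile = new float[samples];
            for (int i = 0; i < samples; i++)
            {
                Vector3 p = Vector3.Lerp(start, end, i / (float)(samples - 1));
                profile[i] = terrain.SampleHeight(p);   // DEM surface height under point p
            }
            return profile;
        }
    }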
Then, the position and movement of the first target user in the three-dimensional scene are acquired through the depth camera of the MR device; that is, the depth information of the scene is computed back from the changes in the camera images.
Finally, with Unity3D as the development environment, the visual display and spatial analysis functions of the DEM model are implemented through the holographic display and interaction interfaces provided by the MR device: the original DEM data are called, and the rotation, scaling, movement, spatial analysis and other functions of the digital elevation model are realized by combining the Unity3D development environment with the MR holographic display and interaction interfaces.
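The manipulation functions might be wired up roughly as in the following sketch, assuming the MR SDK delivers gesture deltas through the hypothetical callbacks OnRotateGesture, OnScaleGesture and OnMoveGesture; these callback names are not defined by the patent.

    using UnityEngine;

    // Applies the rotation, scaling and movement operations of step S108 to the
    // sand table model; gesture deltas are assumed to come from the MR SDK.
    public class SandTableManipulator : MonoBehaviour
    {
        public Transform sandTableModel;                       // root of the three-dimensional terrain model
        public Vector2 scaleLimits = new Vector2(0.1f, 10f);   // assumed zoom limits

        public void OnRotateGesture(float degreesAroundUp)
        {
            sandTableModel.Rotate(Vector3.up, degreesAroundUp, Space.World);
        }

        public void OnScaleGesture(float factor)
        {
            float s = Mathf.Clamp(sandTableModel.localScale.x * factor,
                                  scaleLimits.x, scaleLimits.y);
            sandTableModel.localScale = Vector3.one * s;
        }

        public void OnMoveGesture(Vector3 worldDelta)
        {
            sandTableModel.position += worldDelta;
        }
    }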
The MR sand table is essentially a modern high-tech means that uses computer technology to generate a vivid three-dimensional image model: with the help of a head-mounted display, the three-dimensional or four-dimensional graphic model on the computer is superimposed on and aligned with the real scene, and the simulated stereoscopic image is precisely projected onto the corresponding position of the physical model and its spatial location, imaged in registration with the physical space. Visitors interact with the projected three-dimensional model through natural gestures and browse and query it from multiple directions and at multiple levels, so as to quickly obtain concise, precise, graceful and vivid dynamic information.
In an embodiment of the present invention, the method further includes the following steps:
and sharing anchor point information of the first target user in the current environment to the MR display equipment worn by the second target user so that the MR display equipment worn by the second target user displays the processing process.
On this basis, the MR collaborative sand table system can also support local or remote multi-user collaboration in a shared space: by sharing an accurately registered SLAM coordinate system among the client devices, a seamless multi-user collaborative interactive experience is realized in a shared mixed-reality space. While the MR device is working, the position of the virtual information in the fused scene is determined as the spatial position and attitude of the user change, so that the virtual scene and the real scene remain synchronized in real time. Since the position of the virtual information in the mixed space is preset, the key factor in the fusion process is the spatial position and attitude of the user. The system therefore acquires pose data of the user's head in real time to calculate the pose of the virtual objects to be displayed, realizing virtual-real fusion; the depth sensor built into the mixed reality device, combined with a calibrated camera, improves the remote tracking and registration effect. At the same time, pattern recognition technology converts image information into digital information: predefined markers, objects or reference points in the video images are recognized using the computing power of the computer, and changes in the observer's position in the scene are obtained from measured values including camera parameters, field of view, sensor offsets, object positioning, spatial scanning and video detection. By detecting the observer's offset and rotation along the x, y and z axes of the scene, a coordinate transformation matrix is calculated and new, accurate position parameters are obtained.
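For illustration only, such a coordinate transformation can be assembled from the detected translation and rotation using Unity's built-in mathematics types; nothing below is an algorithm prescribed by the patent.

    using UnityEngine;

    // Builds the observer's coordinate transformation matrix from the detected
    // offsets (x, y, z) and rotations about the x, y and z axes (in degrees).
    public static class PoseMath
    {
        public static Matrix4x4 ObserverTransform(Vector3 offset, Vector3 eulerDegrees)
        {
            Quaternion rotation = Quaternion.Euler(eulerDegrees);
            return Matrix4x4.TRS(offset, rotation, Vector3.one);   // translation and rotation, unit scale
        }

        // Maps a point expressed in the observer's local frame into the scene (world) frame.
        public static Vector3 ToWorld(Matrix4x4 observerTransform, Vector3 localPoint)
        {
            return observerTransform.MultiplyPoint3x4(localPoint);
        }
    }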
On the premise of the same network environment, communication connections are established between the devices and the server based on WiFi 6/5G communication technology. During multi-user mixed reality operation, in order to keep the positions of spatial virtual objects consistent, once the position of a spatial anchor has been determined it must be sent to all other mixed reality clients so that each client synchronizes the positions of the holograms in real space. With the spatial anchor technique, after a user has scanned the surrounding environment through the headset, a point in space is selected, manually or programmatically, as the anchor of the shared experience. The data representing this point can then be serialized and transmitted to the other devices sharing the experience; each device deserializes the anchor data and attempts to find that point in its own space, i.e. performs a registration in three dimensions. For the anchor transfer to work properly, each device must have scanned enough of the environment that the different clients can identify the point represented by the same spatial anchor; the virtual object positions in the space can then be unified so that everyone sees the same object at the same position and angle, i.e. the mapping that locks the virtual object to the physical space. (For example, suppose the anchors corresponding to user 1 and user 2 are anchor A and anchor B. If the digital sand table scene corresponding to anchor A needs to be shared with user 2, sharing the spatial anchor means sending the spatial parameters corresponding to anchor A to user 2, so that anchor A is shared with user 2's MR device and that device displays the scene information corresponding to anchor A. After user 1 enters interaction information through user 1's MR device, the scene information corresponding to anchor A displayed on both user 1's and user 2's MR devices changes accordingly.)
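A bare-bones sketch of serializing and applying such shared-anchor data is shown below; the AnchorMessage type and its JSON encoding are assumptions made for illustration and are not the transfer format of any particular MR SDK.

    using UnityEngine;

    // Minimal anchor-sharing payload: the anchor's pose in the shared SLAM coordinate system.
    [System.Serializable]
    public class AnchorMessage
    {
        public string anchorId;
        public Vector3 position;
        public Quaternion rotation;

        public static byte[] Serialize(AnchorMessage m) =>
            System.Text.Encoding.UTF8.GetBytes(JsonUtility.ToJson(m));

        public static AnchorMessage Deserialize(byte[] bytes) =>
            JsonUtility.FromJson<AnchorMessage>(System.Text.Encoding.UTF8.GetString(bytes));
    }

    // On a receiving client: place the shared sand table scene at the transmitted anchor pose.
    public class AnchorReceiver : MonoBehaviour
    {
        public Transform sandTableRoot;

        public void Apply(AnchorMessage m)
        {
            sandTableRoot.SetPositionAndRotation(m.position, m.rotation);
        }
    }

In a production system the payload would carry the MR SDK's own serialized spatial-anchor data, which embeds environment features, rather than a bare pose, so that each receiving device can re-locate the point against its own scan of the room.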
To establish the collaborative network, a basic password message is constructed and broadcast over UDP to a designated port in the local area network, and a loopback message (client IP, port) is then received on the designated port. When the server and the client are in the same local area network, the client connects to the main server and networking is automatic. When an ID is created on the main server, a new application server is not actually created; instead, a relatively small number of application servers is maintained, and an IP address is returned to the client. When the client invokes an application on the main server, the main server looks up the application server running that application and returns its IP to the client. Scene information and interaction information are transmitted in this way, realizing the multi-user collaboration service and multi-user collaborative interaction.
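The LAN discovery step might look roughly like the following in plain .NET; the port number and the reply format are placeholders, not values defined by the patent.

    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    // LAN discovery: the client broadcasts a password message on a designated port
    // and the main server replies with the address the client should connect to.
    public static class LanDiscovery
    {
        const int DiscoveryPort = 47000;                 // placeholder port

        public static IPEndPoint FindServer(string password)
        {
            using var udp = new UdpClient { EnableBroadcast = true };
            byte[] msg = Encoding.UTF8.GetBytes(password);
            udp.Send(msg, msg.Length, new IPEndPoint(IPAddress.Broadcast, DiscoveryPort));

            var remote = new IPEndPoint(IPAddress.Any, 0);
            byte[] reply = udp.Receive(ref remote);      // blocks until the loopback message arrives
            string[] parts = Encoding.UTF8.GetString(reply).Split(':');   // assumed "ip:port" reply
            return new IPEndPoint(IPAddress.Parse(parts[0]), int.Parse(parts[1]));
        }
    }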
For content presentation, a wireless streaming mode over WiFi is adopted. Relying on edge computing capability and on the premise of distribution, algorithm data and video content are bound together and delivered to the end side, with the edge node accessing the central node to pull the data. When the user operates again, the corresponding interaction is processed at the edge node and the result is delivered back to the end side, so that from the edge node to the playback terminal the operation is transmitted point to point in real time. The architecture comprises an edge service framework, a network protocol, an end-side interaction engine, an edge scheduling system and an application development tool chain. The edge service framework, the network protocol and the end-side interaction engine respectively provide the framework services of the edge node, the protocol handling of network communication, and the interaction and rendering engine of the terminal. The edge scheduling system decides, according to the condition of the user terminal, the computing power of the edge node and similar factors, whether a user's rendering should be processed on the terminal or handed to the edge node. On the edge service side, a basic framework is built to carry the existing rendering service, and a game engine can be deployed in the future to provide other cloud business services. Because a single edge service node has to serve many terminal devices, user session management for the push/pull streaming service is important, and low-latency push/pull processing and high-performance rendering are the key points to break through. Since many of the defined scenes are based on real-time computation and strong interaction, much like games, the uplink data mainly consist of operation instructions and text while the downlink data mainly consist of streaming media data and algorithm data; considering latency, a transport protocol built on UDP is preferred, and considering network penetration, a TCP-based scheme serves as the fallback strategy. On the end side the emphasis is on a low-latency live player, the client implementation of the network protocol and the handling of the user's uplink instructions, finally realizing multi-user collaborative interaction.
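The scheduling decision can be pictured as a simple rule such as the one below; the inputs and thresholds are assumed placeholders, since the patent does not specify the scheduling policy.

    // Illustrative scheduling rule: offload rendering to the edge node when the user's
    // terminal is weak and the edge node still has spare capacity; otherwise keep
    // rendering on the terminal. Scores are assumed to be normalized to 0..1.
    public static class EdgeScheduler
    {
        public enum RenderTarget { Terminal, EdgeNode }

        public static RenderTarget Decide(float terminalGpuScore, float edgeLoad)
        {
            const float weakTerminal = 0.4f;   // placeholder threshold
            const float busyEdge = 0.8f;       // placeholder threshold
            bool offload = terminalGpuScore < weakTerminal && edgeLoad < busyEdge;
            return offload ? RenderTarget.EdgeNode : RenderTarget.Terminal;
        }
    }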
According to the embodiments of the invention, the weak point of three-dimensional rendering capability in existing three-dimensional GIS engines is made up, supplementing rendering capabilities that current three-dimensional GIS lacks, such as particle systems, cloud and fog systems, hair systems, carving systems, skeleton systems and vegetation systems.
secondly, the GIS is combined with the game engine Unity3D technology, and the Unity3D has remarkable advantages for the visual analysis of the DEM due to the strong three-dimensional scene processing rendering capability. A Ray with 3 parameters, namely a Ray starting point, a Ray direction and a Ray acting rendering layer, is provided in Unity3D, and a Ray can be simulated to simulate the sight of a person. In the Update function, the effect of human eye observation can be simulated by changing the ray direction in real time, and whether two points are in a visual effect or not is displayed by rendering the ray through the rendering capability of the Unity 3D. Meanwhile, the Unity3D provides a point light source, a parallel light source, a spotlight, regional light and the like for a developer, and can simulate sunlight when in illumination analysis, and the parallel light source is utilized for illumination analysis, so that the shadow effect generated by parallel illumination on a model is analyzed, and the 1 of the real environment effect is realized: 1 reduction.
GIS is one of the earliest technologies used for urban management. Its core value is to provide a geography-based data management framework and, on that basis, unified management and integration of all kinds of data resources. Combined with a game engine, GIS becomes more accommodating at the data layer: to meet the needs of urban development, the data required by various industries (remote sensing images, vector data, IoT, oblique photography, BIM) and the like can be placed in a common spatio-temporal framework to form urban data assets.
In addition, combined with MR mixed reality technology, the technical products of GIS map data management and the game engine can be replicated holographically, and MR technology is applied to spatial analysis and visualization, digital surface simulation and the like. The mixed reality device truly realizes 'cross-boundary fusion' of the virtual DEM model and its spatial analysis, so that the mapping from the physical world to the virtual world is achieved through the digital twin and this cross-boundary integration is brought to practical deployment.
The three-dimensional space of the city is constructed in a unified manner. On the one hand, three-dimensional data of multiple elements such as buildings, traffic, water systems, vegetation, pipelines, sites, geology and urban components are integrated and fused, and intuitive visual expression is carried out in an integrated three-dimensional model display environment, realizing full-space visualization with multi-scale urban expression. On the other hand, by combining human-computer interaction means such as MR spatial anchor positioning and air-tap, the real urban terrain and the overall view of the city can be restored as a 1:1 holographic image, and the spatial display achieves an effect similar to naked-eye 3D through mirror reflection.
Embodiment two:
the embodiment of the invention also provides a data display system based on the MR cooperative digital sand table, which is used for executing the data display method based on the MR cooperative digital sand table provided by the embodiment of the invention, and the following is a specific introduction of the data display system based on the MR cooperative digital sand table provided by the embodiment of the invention.
Fig. 2 is a schematic diagram of the data display system based on the MR collaborative digital sand table. As shown in fig. 2, the data display system based on the MR collaborative digital sand table includes:
the GIS platform 10 is configured to store target information of a region to be processed, where the target information includes: geographic information, earth surface pictures and tiff data;
the server 20 is used for acquiring the topography basic information of the area to be processed and the target information provided by the GIS system;
the server 20 is further configured to construct a three-dimensional terrain model of the area to be processed based on the terrain base information and the target information;
the server 20 is further configured to superimpose and overlap the three-dimensional terrain model and the current environment in the MR collaborative digital sand table based on a spatial analysis algorithm and feature point coordinates of the current environment;
the server 20 is further configured to process the three-dimensional terrain model in the MR cooperative digital sand table based on the position information of the first target user in the current environment acquired by the MR depth camera, the operation information of the first target user, and the Unity3D technology, where the operation information at least includes: rotation operation, scaling operation, movement operation, and spatial analysis operation;
the MR display device 30 worn by the first target user is used for showing the processing procedure.
In the embodiment of the invention, terrain base information of an area to be processed and target information provided by a GIS system are obtained, wherein the target information includes: geographic information, earth surface pictures and tiff data; a three-dimensional terrain model of the area to be processed is constructed based on the terrain base information and the target information; the three-dimensional terrain model is superimposed on and aligned with the current environment in an MR collaborative digital sand table based on a spatial analysis algorithm and feature point coordinates of the current environment; and the three-dimensional terrain model in the MR collaborative digital sand table is processed based on the position information of a first target user in the current environment acquired by an MR depth camera, the operation information of the first target user and the Unity3D technology, and the processing procedure is displayed on an MR display device worn by the first target user, the operation information including at least a rotation operation, a scaling operation, a movement operation and a spatial analysis operation. The purpose of providing better rendering and display effects for the three-dimensional model is thereby achieved, the technical problem of the poor display effect of existing three-dimensional model methods is solved, and the technical effect of providing users with a better three-dimensional model processing experience is attained.
Embodiment III:
an embodiment of the present invention further provides an electronic device, including a memory and a processor, where the memory is configured to store a program that supports the processor to execute the method described in the first embodiment, and the processor is configured to execute the program stored in the memory.
Referring to fig. 3, an embodiment of the present invention further provides an electronic device 100, including: a processor 50, a memory 51, a bus 52 and a communication interface 53, the processor 50, the communication interface 53 and the memory 51 being connected by the bus 52; the processor 50 is arranged to execute executable modules, such as computer programs, stored in the memory 51.
The memory 51 may include a high-speed random access memory (RAM, Random Access Memory), and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is achieved via at least one communication interface 53 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network and the like may be used.
Bus 52 may be an ISA bus, a PCI bus, an EISA bus or the like. The bus may be divided into an address bus, a data bus, a control bus and so on. For ease of illustration, only one bi-directional arrow is shown in FIG. 3, but this does not mean that there is only one bus or one type of bus.
The memory 51 is configured to store a program, and the processor 50 executes the program after receiving an execution instruction; the method executed by the apparatus defined by the flow disclosed in any of the foregoing embodiments of the present invention may be applied to the processor 50 or implemented by the processor 50.
The processor 50 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuitry in hardware in the processor 50 or by instructions in the form of software. The processor 50 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a digital signal processor (Digital Signal Processing, DSP for short), application specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), off-the-shelf programmable gate array (Field-Programmable Gate Array, FPGA for short), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The disclosed methods, steps, and logic blocks in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory 51 and the processor 50 reads the information in the memory 51 and in combination with its hardware performs the steps of the above method.
Embodiment four:
the embodiment of the invention also provides a computer readable storage medium, and a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the steps of the method in the first embodiment are executed.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing examples, it should be understood by those skilled in the art that any person familiar with the technical field may still, within the technical scope disclosed by the present invention, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A data display method based on an MR collaborative digital sand table, characterized by comprising the following steps:
obtaining terrain basic information of an area to be processed and target information provided by a GIS system, wherein the target information comprises: geographic information, earth surface pictures and tiff data;
constructing a three-dimensional terrain model of the area to be processed based on the terrain base information and the target information;
superimposing and aligning the three-dimensional terrain model with a current environment in an MR collaborative digital sand table based on a spatial analysis algorithm and feature point coordinates of the current environment;
processing the three-dimensional terrain model in the MR collaborative digital sand table based on position information of a first target user in the current environment acquired by an MR depth camera, operation information of the first target user and the Unity3D technology, and displaying a processing procedure on an MR display device worn by the first target user, wherein the operation information at least comprises: a rotation operation, a scaling operation, a movement operation, and a spatial analysis operation.
2. The method of claim 1, wherein constructing a three-dimensional terrain model of the area to be processed based on the terrain base information and the target information comprises:
constructing a white model of the area to be processed based on the terrain basic information;
and rendering and optimizing the white model based on the target information to obtain the three-dimensional terrain model.
3. The method of claim 2, wherein processing the three-dimensional terrain model in the MR collaborative digital sand table based on the position information of the first target user in the current environment acquired by the MR depth camera, the operation information of the first target user and the Unity3D technology comprises:
determining anchor point information of the first target user in the current environment based on the position information;
determining a current three-dimensional terrain model in the MR cooperative digital sand table based on the anchor point information;
and processing the current three-dimensional terrain model in the MR collaborative digital sand table based on the operation information of the first target user and the Unity3D technology.
4. A method according to claim 3, wherein if a second target user is included in the current environment, the method further comprises:
and sharing anchor point information of the first target user in the current environment to the MR display equipment worn by the second target user so that the MR display equipment worn by the second target user displays the processing process.
5. A data display system based on an MR collaborative digital sand table, comprising:
the GIS platform is used for storing target information of a region to be processed, wherein the target information comprises: geographic information, earth surface pictures and tiff data;
the server is used for acquiring the terrain basic information of the area to be processed and the target information provided by the GIS system;
the server is further used for constructing a three-dimensional terrain model of the area to be processed based on the terrain basic information and the target information;
the server is also used for superposing and overlapping the three-dimensional terrain model and the current environment in the MR cooperative digital sand table based on a spatial analysis algorithm and the characteristic point coordinates of the current environment;
the server is further used for processing the three-dimensional terrain model in the MR collaborative digital sand table based on the position information of the first target user in the current environment, the operation information of the first target user and the Unity3D technology acquired by the MR depth camera, wherein the operation information at least comprises: rotation operation, scaling operation, movement operation, and spatial analysis operation;
the MR display device worn by the first target user is used for displaying the processing procedure.
6. The system of claim 5, wherein the server is configured to:
constructing a white model of the area to be processed based on the terrain basic information;
and rendering and optimizing the white model based on the target information to obtain the three-dimensional terrain model.
7. The system of claim 5, wherein the server is configured to:
determining anchor point information of the first target user in the current environment based on the position information;
determining a current three-dimensional terrain model in the MR cooperative digital sand table based on the anchor point information;
and processing the current three-dimensional terrain model in the MR collaborative digital sand table based on the operation information of the first target user and the Unity3D technology.
8. The system of claim 7, wherein if a second target user is included in the current environment,
the server is used for sharing anchor point information of the first target user in the current environment with the MR display device worn by the second target user;
and the MR display device worn by the second target user is used for displaying the processing procedure.
9. An electronic device, comprising a memory and a processor, wherein the memory is configured to store a program that supports the processor in performing the method of any one of claims 1 to 4, and the processor is configured to execute the program stored in the memory.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1 to 4.
CN202310665907.9A 2023-06-07 2023-06-07 Data display method and system based on MR (mixed reality) collaborative digital sand table Pending CN116416402A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310665907.9A 2023-06-07 2023-06-07 Data display method and system based on MR (mixed reality) collaborative digital sand table

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310665907.9A 2023-06-07 2023-06-07 Data display method and system based on MR (mixed reality) collaborative digital sand table

Publications (1)

Publication Number Publication Date
CN116416402A true CN116416402A (en) 2023-07-11

Family

ID=87056340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310665907.9A Pending CN116416402A (en) 2023-06-07 2023-06-07 Data display method and system based on MR (magnetic resonance) collaborative digital sand table

Country Status (1)

Country Link
CN (1) CN116416402A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103460256A (en) * 2011-03-29 2013-12-18 高通股份有限公司 Anchoring virtual images to real world surfaces in augmented reality systems
CN107479705A (en) * 2017-08-14 2017-12-15 中国电子科技集团公司第二十八研究所 A kind of command post's work compound goods electronic sand map system based on HoloLens
WO2022056941A1 (en) * 2020-09-17 2022-03-24 中国人民解放军陆军军医大学 Mixed reality high-simulation battlefield first aid training platform and training method using same
CN112509151A (en) * 2020-12-11 2021-03-16 华中师范大学 Method for generating sense of reality of virtual object in teaching scene

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
徐臻: "Research on MR-based scene modeling and interaction technology for teleoperation of patrol exploration", China Master's Theses Full-text Database, Information Science and Technology, no. 3, pages 138-745 *
魏一雄 et al.: "Research on third-party perspective technology for head-mounted augmented/mixed reality devices", Computer Engineering, vol. 47, no. 6, pages 284-291 *

Similar Documents

Publication Publication Date Title
CN106296783B (en) A kind of space representation method of combination space overall situation 3D view and panoramic pictures
US11087553B2 (en) Interactive mixed reality platform utilizing geotagged social media
CN108648269B (en) Method and system for singulating three-dimensional building models
CN111415416B (en) Method and system for fusing monitoring real-time video and scene three-dimensional model
CN109448099B (en) Picture rendering method and device, storage medium and electronic device
Ghadirian et al. Integration of augmented reality and GIS: A new approach to realistic landscape visualisation
CN106548516B (en) Three-dimensional roaming method and device
US20200128178A1 (en) A real-time generation method for 360-degree vr panoramic graphic image and video
CN104484327A (en) Project environment display method
CN107835436A (en) A kind of real-time virtual reality fusion live broadcast system and method based on WebGL
CN105719343A (en) Method for constructing virtual streetscape map
CN102834849A (en) Image drawing device for drawing stereoscopic image, image drawing method, and image drawing program
CN101477701A (en) Built-in real tri-dimension rendering process oriented to AutoCAD and 3DS MAX
CN109741431B (en) Two-dimensional and three-dimensional integrated electronic map frame
Jian et al. Augmented virtual environment: fusion of real-time video and 3D models in the digital earth system
CN101477702B (en) Built-in real tri-dimension driving method for computer display card
CN114863014A (en) Fusion display method and device for three-dimensional model
CN102521876B (en) A kind of method and system realizing 3D user interface stereoeffect
CN101521828B (en) Implanted type true three-dimensional rendering method oriented to ESRI three-dimensional GIS module
CN114255315A (en) Rendering method, device and equipment
CN116109684B (en) Online video monitoring two-dimensional and three-dimensional data mapping method and device for variable electric field station
CN101511034A (en) Truly three-dimensional stereo display method facing Skyline
Wang et al. Research and design of digital museum based on virtual reality
CN116416402A (en) Data display method and system based on MR (magnetic resonance) collaborative digital sand table
CN109427095B (en) Method and system for displaying mixed reality scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination