CN115619990A - Three-dimensional situation display method and system based on virtual reality technology - Google Patents

Three-dimensional situation display method and system based on virtual reality technology

Info

Publication number
CN115619990A
CN115619990A (application CN202211342598.3A)
Authority
CN
China
Prior art keywords
terrain
data
btg
file
creating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211342598.3A
Other languages
Chinese (zh)
Inventor
韩春雷
易凯
张绍泽
王丽华
任磊
王枭雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 20 Research Institute
Original Assignee
CETC 20 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 20 Research Institute filed Critical CETC 20 Research Institute
Priority to CN202211342598.3A priority Critical patent/CN115619990A/en
Publication of CN115619990A publication Critical patent/CN115619990A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Remote Sensing (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a three-dimensional situation display method and system based on virtual reality technology, comprising: a VR device in communication with a rendering machine; a server in communication connection with the rendering machine; and a rendering machine configured with a computer program that, when invoked by a processor, performs terrain generation from a given BTG file by executing the following steps: creating an empty object as a parent node based on the longitude and latitude corresponding to the terrain data, and creating empty objects as child nodes based on the BTG file; calculating the map coordinates of the vertices contained in each triangle corresponding to the material data; traversing all vertices to determine the map tile coordinate range corresponding to the given BTG file; and completing the creation of the terrain mesh based on the obtained map tile coordinate range. The embodiments of the application adopt a high-precision, high-texture-detail terrain generation method to address the weak realism of models in a three-dimensional scene.

Description

Three-dimensional situation display method and system based on virtual reality technology
Technical Field
The application relates to the technical field of virtual reality, in particular to a three-dimensional situation display method and system based on a virtual reality technology.
Background
Most existing situation display systems use text annotation, two-dimensional graphics, or three-dimensional graphics. Human-computer interaction is not intuitive enough, and the amount of information conveyed is limited: real, complex environments, targets, and real-time situation information are projected and miniaturized onto a two-dimensional plane, so important information may be lost, situation display is neither intuitive nor accurate, and command and control personnel find it difficult to comprehensively grasp situations and important events.
For example, in the field of combat simulation research, the situation display and presentation of the virtual battlefield is an important component of a combat simulation system. A battlefield two-dimensional or three-dimensional situation display terminal provides a visual control interface that helps users and operators intuitively understand the battlefield situation.
However, the display range of existing three-dimensional situation displays is limited, the sky, the sea, and the land cannot be presented simultaneously within the visual area, and user immersion is weak.
Disclosure of Invention
The embodiments of the application provide a three-dimensional situation display method and system based on virtual reality technology. A high-precision, high-texture-detail terrain generation method addresses the weak realism of models in a three-dimensional scene, and the rendering result of the virtual environment is output to the two display screens of a VR device or helmet, addressing the weak immersion a user experiences when watching a computer display.
The embodiment of the application provides a three-dimensional situation display system based on virtual reality technology, including:
a VR device in communication connection with the rendering machine, for presenting the picture information rendered by the rendering machine;
an external scenario server in communication connection with the rendering machine, which sends the position and state information of each designated platform in the target environment to the rendering machine;
a rendering machine for performing rendering based on data received from the server and sending the rendered picture information to the VR device, configured with a computer program that, when invoked by a processor, performs terrain generation based on a given BTG file, wherein the BTG file includes associated material data and terrain data, by performing the following steps:
creating an empty object as a parent node based on the longitude and latitude corresponding to the terrain data, and creating empty objects as child nodes based on the BTG file;
calculating the map coordinates of the vertices contained in each triangle corresponding to the material data;
traversing all vertices and determining the map tile coordinate range corresponding to the given BTG file;
and completing the creation of the terrain mesh based on the obtained map tile coordinate range.
Optionally, creating empty objects as child nodes based on the BTG file includes:
creating an empty object as a child node named after the BTG file;
and creating an empty object as a child node for each material name in the material data.
Optionally, all vertices are traversed, and the map tile coordinates corresponding to the given BTG file satisfy:
x_p = (λ + 180)/360 × 2^zoom

y_p = [1 − ln(tan(φ·π/180) + sec(φ·π/180))/π]/2 × 2^zoom

wherein λ is the longitude, with a value range of −180° to 180°; φ is the latitude; zoom is the zoom level; x_p and y_p are the tile-plane coordinates computed from the longitude and latitude of the point p; and the tile coordinates corresponding to the longitude and latitude of the point p are x = ⌊x_p⌋ and y = ⌊y_p⌋, where ⌊·⌋ denotes the integer part.
Optionally, based on the obtained map tile coordinate range, completing the creation of the terrain mesh includes:
normalizing the calculated tile coordinates and taking the normalized tile coordinates as the texture coordinates of the corresponding vertices;
creating mesh data according to the texture coordinates and the terrain data corresponding to the material data;
and adding a MeshFilter component to the object corresponding to the material data and taking the created mesh data as the component's Mesh, completing the creation of the terrain mesh.
Optionally, the rendering machine is further configured to perform the following steps:
numbering the elevation data and texture data of the terrain by longitude and latitude on the basis of the obtained terrain mesh;
creating objects that appear repeatedly in the scene using prefabs.
Optionally, the rendering machine is further configured to perform the following steps:
providing user interaction and acquiring feedback information of the VR equipment;
scene interaction with the user is completed based on collision detection, wherein the collision detection comprises collision detection between hierarchical bounding boxes and collision detection between rays and colliders.
Optionally, the VR device includes VR glasses or a VR helmet, a VR handle, and a locator; the VR glasses or VR helmet are used for VR situation display, the VR handle for human-computer interaction, and the locator for positioning the VR glasses and the VR handle.
The embodiment of the application further provides a three-dimensional situation display method based on the virtual reality technology, which includes:
providing a VR device for presenting picture information rendered by a rendering machine;
providing a server for sending the position and state information of each designated platform in the target environment to the rendering machine;
configuring a computer program within the rendering machine that, when invoked by a processor, performs terrain generation based on a given BTG file, wherein the BTG file includes associated material data and terrain data, by executing the following steps:
creating an empty object as a parent node based on the longitude and latitude corresponding to the terrain data, and creating empty objects as child nodes based on the BTG file;
calculating the map coordinates of the vertices contained in each triangle corresponding to the material data;
traversing all vertices and determining the map tile coordinate range corresponding to the given BTG file;
and completing the creation of the terrain mesh based on the obtained map tile coordinate range.
The embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the foregoing three-dimensional situation display method based on virtual reality technology are implemented.
By adopting a high-precision, high-texture-detail terrain generation method, the embodiments of the application address the weak realism of models in a three-dimensional scene; by outputting the rendering result of the virtual environment to the two display screens of a VR device or helmet, they address the weak immersion a user experiences when watching a computer display.
The above description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be more clearly understood and implemented in accordance with the content of the description, and in order that the above and other objects, features, and advantages of the present application may be more comprehensible, the detailed description is given below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is an architecture example of a three-dimensional situation display system according to an embodiment of the present application;
FIG. 2 is an exemplary system software architecture of a three-dimensional situational display system according to an embodiment of the present application;
FIG. 3 is an example VR handle for an embodiment of the present application;
fig. 4 is a workflow example of a three-dimensional situation display system according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiment of the present application provides a three-dimensional situation display system based on virtual reality technology, as shown in fig. 1, including:
and the VR equipment is in communication connection with the rendering machine and is used for presenting the picture information rendered by the rendering machine. In some embodiments, the VR device includes VR glasses or a VR helmet for VR situational display, a VR handle for human-computer interaction, and a positioner for positioning the VR glasses and the VR handle.
The server is an external scenario server in communication connection with the rendering machine; it sends the position and state information of each designated platform in the target environment to the rendering machine. The rendering machine may be connected to the server through a switch.
The rendering machine in the embodiment of the application is configured with system software for three-dimensional situation display, implemented with the Unity3D rendering engine in the C# language. As shown in fig. 2, the software components may include: an interactive interface generation module, a three-dimensional model generation module, a terrain generation module, a special effect generation module, a network communication module, a situation update module, a human-computer interaction response module, and a VR dynamic rendering module.
The three-dimensional model generation module is mainly used for generating the background and platform three-dimensional models and comprises two parts: three-dimensional model resource loading and three-dimensional model instantiation.
Three-dimensional model resource loading: the three-dimensional geometric models, materials, and textures of the background and platforms are loaded. Model names are created as needed, and the corresponding model resources are loaded from the path where they are located (model resources may be stored locally or in a database; this system stores them locally on the computer), providing basic data for instantiating the background and platform three-dimensional models.
Three-dimensional model instantiation: the loaded three-dimensional geometric models, materials, textures, and other resources are combined to generate the background and platform three-dimensional models.
The terrain generation module is mainly used for generating the three-dimensional land and the three-dimensional dynamic ocean.
Three-dimensional land generation: high-precision terrain elevation data and high-resolution texture data are used when creating the terrain, so that terrain with high precision and fine texture detail is constructed; at the same time, a high-precision illumination model is used, computing lighting from the terrain normals to improve the realism of the terrain. Building models achieve high realism by using fine model data and real texture pictures.
More specifically, in Unity3D the terrain consists mainly of terrain meshes and their corresponding materials. The created material is assigned to the corresponding terrain mesh, and Unity renders and displays it to visualize the three-dimensional terrain.
In a specific implementation, in order to create three-dimensional terrain with high realism, this embodiment creates a high-precision terrain mesh from FlightGear terrain data and creates highly realistic terrain materials from FlightGear terrain textures and Google Maps texture data; other refined map sources may also be used, Google Maps being only an example and not a limitation.
A mesh can be created dynamically in Unity3D from the three-dimensional coordinates of points, the normal coordinates, the texture coordinates, and the vertex indices that make up triangles. A BTG file (the terrain data storage file in FlightGear) contains material information in addition to the information required for Unity3D mesh creation, so the information contained in the BTG file can be fully used to create the terrain mesh. In some embodiments, the rendering machine performs rendering based on data received from the server and sends the rendered picture information to the VR device; it is configured with a computer program that, when invoked by a processor, performs terrain generation based on a given BTG file, wherein the BTG file includes associated material data and terrain data. To address the limited precision and realism of the material and terrain data in FlightGear, Google Maps texture data is introduced for terrain mesh creation. Specifically, the processor performs terrain generation by executing the following steps (a code sketch follows the steps):
The latitude and longitude range of the terrain and the zoom level of the Google Maps texture are specified.
The corresponding BTG file is loaded and decoded to obtain the three-dimensional coordinates of points, the normal coordinates, the texture coordinates, the vertex indices forming triangles, and the material information.
An empty object is created as the parent node based on the longitude and latitude corresponding to the terrain data, and empty objects are created as child nodes based on the BTG file; for example, the parent node may be named after the longitude and latitude corresponding to the BTG file. In some embodiments, creating empty objects as child nodes based on the BTG file includes: creating an empty object as a child node named after the BTG file, and creating an empty object as a child node for each material name in the material data.
If the material is not an airport runway, the map coordinates of the vertices contained in each triangle corresponding to the material data are calculated.
All vertices are traversed and the map tile coordinate range corresponding to the given BTG file is determined.
The creation of the terrain mesh is completed based on the obtained map tile coordinate range.
If the material is an airport runway, a Mesh is created directly from the three-dimensional coordinates, normal coordinates, and texture coordinates of the vertices corresponding to the material and the vertex indices forming triangles; a MeshFilter component is added to the material object, and the created Mesh is taken as the component's Mesh, completing the creation of the terrain mesh.
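The following is a minimal C# sketch of these mesh-creation steps in Unity3D. The BtgFile and BtgMaterial container types are hypothetical stand-ins for the decoded BTG data, not types from FlightGear or Unity:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical containers for decoded BTG data; the field names are
// assumptions, not part of the BTG file format itself.
public class BtgMaterial
{
    public string Name;          // e.g. "Town", "Runway"
    public Vector3[] Vertices;   // three-dimensional coordinates of points
    public Vector3[] Normals;    // normal coordinates
    public Vector2[] TexCoords;  // texture coordinates from the BTG file
    public int[] Triangles;      // vertex indices forming triangles
}

public class BtgFile
{
    public string Name;
    public List<BtgMaterial> Materials = new List<BtgMaterial>();
}

public static class TerrainBuilder
{
    // Builds the node hierarchy described above: an empty parent per
    // longitude/latitude block, an empty child per BTG file, and an
    // empty child per material, each carrying its terrain mesh.
    public static GameObject Build(BtgFile btg, double lon, double lat)
    {
        var parent = new GameObject($"tile_{lon}_{lat}");   // parent node
        var fileNode = new GameObject(btg.Name);            // child named after the BTG file
        fileNode.transform.SetParent(parent.transform, false);

        foreach (BtgMaterial mat in btg.Materials)
        {
            var matNode = new GameObject(mat.Name);         // child per material name
            matNode.transform.SetParent(fileNode.transform, false);

            // For a runway material these coordinates are used directly;
            // for other materials the uv channel is replaced by normalized
            // tile coordinates computed as shown later.
            var mesh = new Mesh
            {
                vertices  = mat.Vertices,
                normals   = mat.Normals,
                uv        = mat.TexCoords,
                triangles = mat.Triangles,
            };
            matNode.AddComponent<MeshFilter>().mesh = mesh;
            matNode.AddComponent<MeshRenderer>();           // material assigned elsewhere
        }
        return parent;
    }
}
```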
By adopting a high-precision, high-texture-detail terrain generation method, the embodiments of the application address the weak realism of models in a three-dimensional scene; by outputting the rendering result of the virtual environment to the two display screens of a VR device or helmet, they address the weak immersion a user experiences when watching a computer display.
In some embodiments, for materials that are not airport runways, all vertices are traversed, and the map tile coordinates corresponding to the given BTG file satisfy:
x_p = (λ + 180)/360 × 2^zoom

y_p = [1 − ln(tan(φ·π/180) + sec(φ·π/180))/π]/2 × 2^zoom

wherein λ is the longitude, with a value range of −180° to 180°; φ is the latitude; zoom is the zoom level; x_p and y_p are the tile-plane coordinates computed from the longitude and latitude of the point p; and the tile coordinates corresponding to the longitude and latitude of the point p are x = ⌊x_p⌋ and y = ⌊y_p⌋, where ⌊·⌋ denotes the integer part.
For materials other than airport runways, in some embodiments, completing terrain mesh creation based on the obtained map tile coordinate range comprises the following steps (see the sketch after the list):
normalizing the calculated tile coordinates and taking the normalized tile coordinates as the texture coordinates of the corresponding vertices;
creating mesh data from the texture coordinates and the terrain data corresponding to the material data, where the terrain data may include the three-dimensional coordinates of the vertices corresponding to the material, the normal coordinates, and the vertex indices forming triangles;
and adding a MeshFilter component to the object corresponding to the material data and taking the created mesh data as the component's Mesh, completing the creation of the terrain mesh.
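A sketch of the normalization step, assuming the inclusive tile range [xMin, xMax] × [yMin, yMax] determined above; the v-axis flip is one plausible convention and depends on how the tile images are stitched into a single texture:

```csharp
using UnityEngine;

public static class TileUv
{
    // Normalizes a vertex's fractional tile coordinates (xp, yp) into [0, 1]
    // over the tile range covered by the BTG file, so the stitched Google
    // Maps texture maps onto the whole terrain mesh.
    public static Vector2 Normalize(double xp, double yp,
                                    int xMin, int xMax, int yMin, int yMax)
    {
        float u = (float)((xp - xMin) / (xMax - xMin + 1));
        float v = 1f - (float)((yp - yMin) / (yMax - yMin + 1));  // assumed v-flip
        return new Vector2(u, v);
    }
}
```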
Large-scale data fast loading
Because high-precision terrain, model, and texture data are used when constructing a high-realism scene, the amount of resource data grows rapidly. To accelerate data loading and reduce user waiting time, the embodiment of the application adopts a fast loading technique for large-scale data. In some embodiments, the rendering machine is further configured to perform the following steps: numbering the elevation data and texture data of the terrain by longitude and latitude based on the obtained terrain mesh, and creating objects that appear repeatedly in the scene using prefabs.
Specifically, large-scale data loading is accelerated mainly in three ways. First, the elevation data and texture data of the terrain are numbered by longitude and latitude, which shortens the search time when loading the corresponding data. Second, models that appear repeatedly in the scene, such as buildings and trees, are created from prefabs; a prefab is a reusable object, so a large amount of data is not loaded repeatedly and loading time is shortened. Third, multithreading is used to load data in parallel, reducing loading time severalfold. Together these measures effectively speed up large-scale data loading and reduce user waiting time.
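A condensed C# sketch of these measures: longitude/latitude-keyed tile files, parallel reads on worker threads, and prefab reuse. The file naming scheme and .bin extension are illustrative assumptions:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;
using UnityEngine;

public class FastLoader : MonoBehaviour
{
    public GameObject treePrefab;   // reusable prefab for repeated scenery

    // Elevation/texture tiles are assumed to be named by longitude and
    // latitude, e.g. "e110n34.bin", so lookup needs no searching.
    static string TileKey(int lon, int lat) => $"e{lon}n{lat}";

    readonly ConcurrentDictionary<string, byte[]> cache =
        new ConcurrentDictionary<string, byte[]>();

    // Reads raw tile data in parallel on worker threads; Unity API calls
    // (texture/mesh creation) must still happen on the main thread afterwards.
    public async Task PreloadAsync(IEnumerable<(int lon, int lat)> tiles, string dir)
    {
        var jobs = new List<Task>();
        foreach (var (lon, lat) in tiles)
        {
            string key = TileKey(lon, lat);
            string path = System.IO.Path.Combine(dir, key + ".bin");
            jobs.Add(Task.Run(() => cache[key] = System.IO.File.ReadAllBytes(path)));
        }
        await Task.WhenAll(jobs);
    }

    // Repeated objects (trees, buildings) come from one prefab instead of
    // re-loading their model data each time.
    public void PlaceTree(Vector3 position) =>
        Instantiate(treePrefab, position, Quaternion.identity);
}
```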
Three-dimensional dynamic ocean generation: a dynamic three-dimensional ocean is generated using GPU parallel computing power according to parameters such as wind speed, wind direction, and wave period. In addition, the color, reflection, and other properties of the ocean can be adjusted.
The special effect generation module is mainly used for generating special effects such as dynamic effects and motion trails.
Dynamic special effect generation: using the particle system technology in Unity3D, special effects are simulated with particles. Real pictures of smoke, fog, fire, clouds, and the like are used as the particle system's textures, and random motion is simulated by combining them with random noise images, vividly reproducing smoke, fog, fire, explosion, and similar effects and improving the realism of the VR three-dimensional scene.
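A minimal sketch of such an effect with Unity's particle system; the smoke material is assumed to carry one of the real smoke pictures as its texture:

```csharp
using UnityEngine;

public static class SmokeEffect
{
    // Creates a smoke-like effect: a particle system whose renderer uses a
    // real smoke picture, with the noise module randomizing particle motion.
    public static ParticleSystem Create(Material smokeMaterial, Vector3 position)
    {
        var go = new GameObject("smoke");
        go.transform.position = position;

        var ps = go.AddComponent<ParticleSystem>();
        var main = ps.main;
        main.startLifetime = 3f;
        main.startSpeed = 1f;
        main.startSize = 2f;

        var noise = ps.noise;          // random noise perturbs particle motion
        noise.enabled = true;
        noise.strength = 0.5f;

        ps.GetComponent<ParticleSystemRenderer>().material = smokeMaterial;
        return ps;
    }
}
```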
Motion trail generation: using the line rendering technology in Unity3D, the missile positions updated in real time in the situation information received from the scenario server are connected and rendered as line segments, generating the missile's motion trail so that the user can conveniently observe its motion.
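A sketch of the trail with Unity's LineRenderer, appending each missile position as it arrives from the situation update:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class MissileTrail : MonoBehaviour
{
    LineRenderer line;
    readonly List<Vector3> points = new List<Vector3>();

    void Awake()
    {
        line = gameObject.AddComponent<LineRenderer>();
        line.widthMultiplier = 0.5f;
        line.positionCount = 0;
    }

    // Called each time a new missile position arrives from the scenario server.
    public void Append(Vector3 missilePosition)
    {
        points.Add(missilePosition);
        line.positionCount = points.Count;              // extend the line by one segment
        line.SetPosition(points.Count - 1, missilePosition);
    }
}
```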
The network communication module is mainly used for communicating with the scenario server.
Receiving Socket creation: the IP address and port number for communication with the scenario server are set, and a receiving Socket is created, providing a channel for receiving situation information from the scenario server.
Network data reception: situation information from the scenario server is received and cached through the receiving Socket, providing a data basis for subsequent processing of the situation information.
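A sketch of the receiving Socket; UDP and the background receive thread are assumptions, since the transport is not fixed by the description:

```csharp
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Threading;

public class SituationReceiver
{
    // The scenario server's IP address and port are configuration values.
    readonly UdpClient socket;
    public readonly ConcurrentQueue<byte[]> Buffer = new ConcurrentQueue<byte[]>();

    public SituationReceiver(int listenPort)
    {
        socket = new UdpClient(listenPort);
        new Thread(ReceiveLoop) { IsBackground = true }.Start();
    }

    // Receives situation packets and caches them for the situation update module.
    void ReceiveLoop()
    {
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] data = socket.Receive(ref remote);
            Buffer.Enqueue(data);
        }
    }
}
```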
The situation update module parses the situation information from the scenario server and updates the platform states.
Situation information parsing: the data received and cached by the network communication module is parsed according to the data interface protocol agreed with the scenario server, obtaining the type, position, attitude, and other information of the platforms in the situation information and providing a data basis for updating the platform states.
Platform state update: according to the parsed situation information, the creation, deletion, position, attitude, and other information of the platforms is updated to realize the situation update.
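A sketch of the parse-and-update path; the PlatformState record is a hypothetical stand-in for whatever the data interface protocol actually defines:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical record parsed from one situation packet; the real field
// layout is defined by the protocol agreed with the scenario server.
public struct PlatformState
{
    public int Id;
    public string Type;        // e.g. "fighter", "ship"
    public Vector3 Position;
    public Quaternion Attitude;
    public bool Deleted;
}

public class SituationUpdater : MonoBehaviour
{
    readonly Dictionary<int, GameObject> platforms = new Dictionary<int, GameObject>();

    public void Apply(PlatformState s, GameObject prefabForType)
    {
        if (s.Deleted)
        {
            if (platforms.TryGetValue(s.Id, out var dead)) { Destroy(dead); platforms.Remove(s.Id); }
            return;
        }
        if (!platforms.TryGetValue(s.Id, out var go))      // create platform on first sight
        {
            go = Instantiate(prefabForType);
            platforms[s.Id] = go;
        }
        go.transform.SetPositionAndRotation(s.Position, s.Attitude);  // update pose
    }
}
```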
The human-computer interaction response module realizes interaction between the user and the scene and responds to the user's operations on the VR handle. In some embodiments, the rendering machine is further configured to perform the following steps:
providing user interaction and acquiring feedback information of the VR equipment;
scene interaction with the user is completed based on collision detection, wherein the collision detection comprises collision detection between hierarchical bounding boxes and collision detection between rays and colliders.
Specifically, through the user-scene interaction technology, interaction between the user and the scene is realized by collision detection, based on the state information fed back to the system by the VR glasses or helmet and the handle. Two kinds of collision detection are mainly used: collision detection between hierarchical bounding boxes, and collision detection between rays and bounding boxes. For close-range interaction, such as walking on the ground and colliding with objects, collision detection between hierarchical bounding boxes is mainly used. For interaction with distant objects, UIs, or small objects, such as hit determination, menu selection, and object picking, ray and bounding box collision detection is mainly used.
A hierarchical bounding box is a box of slightly larger volume and similar shape that encloses an object. It is first tested whether two bounding boxes intersect; only when they do is the overlapping region examined in detail to judge whether the objects actually collide. This algorithm reduces the number of pairs participating in the detailed test and improves efficiency. Hierarchical bounding box methods can be divided into axis-aligned bounding boxes (AABB), bounding spheres, oriented bounding boxes (OBB), and discrete orientation polytopes (k-DOPs). The present invention uses AABBs and bounding spheres as the hierarchical bounding boxes for collision detection between bounding boxes.
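The axis-by-axis overlap test at the core of the AABB scheme is a few lines of C#; Unity's built-in Bounds.Intersects performs the same check:

```csharp
using UnityEngine;

public static class AabbTest
{
    // Two axis-aligned bounding boxes overlap iff their intervals overlap
    // on all three coordinate axes; only overlapping pairs proceed to the
    // finer-grained test.
    public static bool Intersects(Bounds a, Bounds b)
    {
        return a.min.x <= b.max.x && a.max.x >= b.min.x
            && a.min.y <= b.max.y && a.max.y >= b.min.y
            && a.min.z <= b.max.z && a.max.z >= b.min.z;
    }
}
```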
Ray and collider collision detection: in three-dimensional vector space, a ray is used to represent a direction and is defined by two vectors, one indicating its origin and the other its direction. The modulus of the direction vector is 1. The equation of the ray is

P = P₀ + t·α (1)

where P denotes a point on the ray, P₀ denotes the origin of the ray, and α denotes the direction of the ray; P₀ and α are three-dimensional vectors, and t ∈ [0, ∞). According to the above formula, when t = 0, P is the starting point; for other values of t, P is another point on the ray. Since α represents only the direction of the ray, t is the distance from the point on the ray to the ray origin.
Let the vector P₁ be a point on the plane and the vector N the normal of the plane; these two vectors suffice to determine the plane. For example, given a point vector (0, 0, 0) and a normal vector (0, 1, 0), a plane is uniquely defined. The equation of any plane is

N·P₁ = d (2)

where d represents the distance from the origin of the coordinate system to the plane. Assuming the ray intersects the plane at a point, that point P must satisfy equations (1) and (2) simultaneously; solving the system of equations gives

t = (N·P₁ − N·P₀)/(N·α) (3)

Substituting equation (3) into equation (1) gives the position of the collision point.
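Equations (1) to (3) translate directly into a C# ray-plane intersection routine:

```csharp
using UnityEngine;

public static class RayPlane
{
    // Implements equations (1)-(3): ray P = P0 + t·α against plane N·P1 = d.
    // Returns false when the ray is parallel to the plane or the hit lies behind P0.
    public static bool Intersect(Vector3 p0, Vector3 alpha, Vector3 n, Vector3 p1, out Vector3 hit)
    {
        hit = default;
        float denom = Vector3.Dot(n, alpha);
        if (Mathf.Approximately(denom, 0f)) return false;   // parallel: no intersection

        float t = (Vector3.Dot(n, p1) - Vector3.Dot(n, p0)) / denom;  // equation (3)
        if (t < 0f) return false;                            // t ∈ [0, ∞)

        hit = p0 + t * alpha;                                // substitute into (1)
        return true;
    }
}
```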
Key operation monitoring: pressing, releasing, touching, and other operations on the VR handle keys are monitored through a listening technique, capturing the user's operations on the VR handle.
Operation response processing: according to the user's operation, preset corresponding functions such as positional translation, instantaneous translation, and platform selection are triggered.
A schematic diagram of the VR handle is shown in FIG. 3; it includes a menu button 1, a touch pad 2, a power key 3, a handle indicator 4, and a trigger key 5.
The right handle is mainly used for menu control and roaming mode selection. Pressing the right handle's menu button pops up the system control menu, which contains roaming mode options and an exit button. Menu items are confirmed through ray interaction and pressing the trigger key. When the roaming mode is continuous movement, the right handle's touch pad controls left-right and up-down movement; when the roaming mode is instantaneous movement, pressing the touch pad displays the movement target point, and the user moves directly to the target point on release.
The left handle is mainly used for platform selection and platform information display control. Swiping the left handle's touch pad left or right selects the platform, and swiping up or down controls whether the platform information is displayed.
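A sketch of key monitoring through Unity's XR input API; mapping the menu button, trigger, and touch pad to these CommonUsages is an assumption that varies by device:

```csharp
using UnityEngine;
using UnityEngine.XR;

public class HandleInput : MonoBehaviour
{
    // Polls the right-hand controller every frame; edge detection
    // ("pressed this frame") would compare against the previous state.
    void Update()
    {
        InputDevice right = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);

        if (right.TryGetFeatureValue(CommonUsages.menuButton, out bool menu) && menu)
            Debug.Log("menu button: pop up the system control menu");

        if (right.TryGetFeatureValue(CommonUsages.triggerButton, out bool trig) && trig)
            Debug.Log("trigger: confirm the menu item under the ray");

        if (right.TryGetFeatureValue(CommonUsages.primary2DAxis, out Vector2 pad))
        {
            // In continuous-movement roaming mode the touch pad axes drive
            // left-right / up-down movement of the user rig.
            transform.Translate(new Vector3(pad.x, pad.y, 0f) * Time.deltaTime);
        }
    }
}
```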
The VR dynamic rendering module renders the dynamic scene into two images with a certain parallax.
Dynamic rendering: models within the field of view are processed with occlusion culling, rasterization, shading, and so on according to the current viewing angles of the user's two eyes, and two image streams are rendered.
VR rendering result output: the two rendered image streams are output in parallel to the two screens of the VR glasses, so that the user sees two images with a certain parallax, producing an immersive three-dimensional effect.
An exemplary workflow is shown in fig. 4, comprising the steps of:
the system starts to operate;
generating a user interaction interface;
VR dynamically renders and displays the user interaction interface;
responding to user operation and determining whether to continue running; if yes, continue; otherwise, end the program;
generating a background three-dimensional model, a terrain model and a special effect model;
receiving situation information from the server, including platform type, position, attitude, and other information;
generating the corresponding platform three-dimensional models according to the situation information and updating the models' position and attitude information;
rendering the scene from the user's current viewing angle to generate two images with a certain parallax and outputting them to the two screens of the VR glasses, realizing VR dynamic rendering display;
responding to changes in the user's viewing direction and to VR handle operations;
repeatedly generating the background three-dimensional model, terrain model, and special effect models until the user chooses to exit;
ending the program's operation.
With the method provided by the embodiments of the application, the three-dimensional situation display range is wide: sky, ocean, and land are all covered within the visual area, and the models are highly realistic. The method makes full use of the immersion and interactivity of virtual reality; the three-dimensional situation display is more intuitive and vivid, the user's sense of presence is stronger, military training costs are reduced, and training and exercise effects are improved. The rendering results of the virtual environment, with a certain parallax, are output to the two display screens of the VR glasses or helmet, addressing the weak immersion a user experiences when watching a computer display; and collision detection is used to judge the user's interaction with the virtual environment and the UI according to the data fed back by the VR glasses or helmet and the handle, solving the problem of user interaction with the virtual environment.
The embodiment of the application further provides a three-dimensional situation display method based on the virtual reality technology, which comprises the following steps:
providing a VR device for presenting picture information rendered by a rendering machine;
providing a server for sending the position and state information of each designated platform in the target environment to the rendering machine;
configuring a computer program within the rendering machine that, when invoked by a processor, performs terrain generation based on a given BTG file, wherein the BTG file includes associated material data and terrain data, by executing the following steps:
creating an empty object as a parent node based on the longitude and latitude corresponding to the terrain data, and creating empty objects as child nodes based on the BTG file;
calculating the map coordinates of the vertices contained in each triangle corresponding to the material data;
traversing all vertices and determining the map tile coordinate range corresponding to the given BTG file;
and completing the creation of the terrain mesh based on the obtained map tile coordinate range.
The embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the foregoing three-dimensional situation display method based on virtual reality technology are implemented.
By adopting a high-precision, high-texture-detail terrain generation method, the embodiments of the application address the weak realism of models in a three-dimensional scene; by outputting the rendering result of the virtual environment to the two display screens of a VR device or helmet, they address the weak immersion a user experiences when watching a computer display.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description, and do not represent the advantages and disadvantages of the embodiments.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, and an optical disk), and includes several instructions for enabling a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the method described in the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A three-dimensional situation display system based on virtual reality technology, comprising:
a VR device in communication connection with the rendering machine, for presenting the picture information rendered by the rendering machine;
an external scenario server in communication connection with the rendering machine, which sends the position and state information of each designated platform in the target environment to the rendering machine;
a rendering machine for performing rendering based on data received from the server and sending the rendered picture information to the VR device, configured with a computer program that, when invoked by a processor, performs terrain generation based on a given BTG file, wherein the BTG file includes associated material data and terrain data, by performing the following steps:
creating an empty object as a parent node based on the longitude and latitude corresponding to the terrain data, and creating empty objects as child nodes based on the BTG file;
calculating the map coordinates of the vertices contained in each triangle corresponding to the material data;
traversing all vertices and determining the map tile coordinate range corresponding to the given BTG file;
and completing the creation of the terrain mesh based on the obtained map tile coordinate range.
2. The virtual reality technology-based three-dimensional situation display system according to claim 1, wherein creating empty objects as child nodes based on the BTG file comprises:
creating an empty object as a child node named after the BTG file;
and creating an empty object as a child node for each material name in the material data.
3. The virtual reality technology-based three-dimensional situation display system as claimed in claim 1, wherein all vertices are traversed, and the map tile coordinates corresponding to the given BTG file satisfy:
x_p = (λ + 180)/360 × 2^zoom

y_p = [1 − ln(tan(φ·π/180) + sec(φ·π/180))/π]/2 × 2^zoom

wherein λ is the longitude, with a value range of −180° to 180°; φ is the latitude; zoom is the zoom level; x_p and y_p are the tile-plane coordinates computed from the longitude and latitude of the point p; and the tile coordinates corresponding to the longitude and latitude of the point p are x = ⌊x_p⌋ and y = ⌊y_p⌋, where ⌊·⌋ denotes the integer part.
4. A three-dimensional situation display system based on virtual reality technology according to claim 3, wherein based on the obtained map tile coordinate range, completing terrain mesh creation comprises:
normalizing the calculated tile coordinates and taking the normalized tile coordinates as the texture coordinates of the corresponding vertices;
creating mesh data according to the texture coordinates and the terrain data corresponding to the material data;
and adding a MeshFilter component to the object corresponding to the material data and taking the created mesh data as the component's Mesh, completing the creation of the terrain mesh.
5. The virtual reality technology-based three-dimensional situation display system of claim 4, wherein the rendering machine is further configured to perform the steps of:
numbering the elevation data and texture data of the terrain by longitude and latitude on the basis of the obtained terrain mesh;
and creating objects that appear repeatedly in the scene using prefabs.
6. The virtual reality technology-based three-dimensional situation display system of claim 1, wherein the rendering machine is further configured to perform the steps of:
providing user interaction and acquiring feedback information of the VR equipment;
scene interaction with the user is completed based on collision detection, wherein the collision detection comprises collision detection between hierarchical bounding boxes and collision detection between rays and colliders.
7. The virtual reality technology-based three-dimensional situation display system of claim 1, wherein the VR device comprises VR glasses or a VR helmet for VR situation display, a VR handle for human-machine interaction, and a positioner for positioning the VR glasses and the VR handle.
8. A three-dimensional situation display method based on a virtual reality technology is characterized by comprising the following steps:
providing a VR device for presenting picture information rendered by a rendering machine;
providing a server for sending the position and state information of each designated platform in the target environment to the rendering machine;
configuring a computer program in the rendering machine, wherein the computer program, when called by a processor, executes the following steps to execute terrain generation based on a given BTG file, wherein the BTG file comprises associated material data and terrain data:
creating an empty object as a parent node based on the longitude and latitude corresponding to the terrain data, and creating empty objects as child nodes based on the BTG file;
calculating the map coordinates of the vertices contained in each triangle corresponding to the material data;
traversing all vertices and determining the map tile coordinate range corresponding to the given BTG file;
and completing the creation of the terrain mesh based on the obtained map tile coordinate range.
9. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the method for displaying a three-dimensional situation based on virtual reality technology according to claim 8.
CN202211342598.3A 2022-10-31 2022-10-31 Three-dimensional situation display method and system based on virtual reality technology Pending CN115619990A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211342598.3A CN115619990A (en) 2022-10-31 2022-10-31 Three-dimensional situation display method and system based on virtual reality technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211342598.3A CN115619990A (en) 2022-10-31 2022-10-31 Three-dimensional situation display method and system based on virtual reality technology

Publications (1)

Publication Number Publication Date
CN115619990A true CN115619990A (en) 2023-01-17

Family

ID=84877011

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211342598.3A Pending CN115619990A (en) 2022-10-31 2022-10-31 Three-dimensional situation display method and system based on virtual reality technology

Country Status (1)

Country Link
CN (1) CN115619990A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117634135A (en) * 2023-10-16 2024-03-01 广州汽车集团股份有限公司 Virtual traffic scene establishment method, system, device and equipment
CN117453220A (en) * 2023-12-26 2024-01-26 青岛民航凯亚系统集成有限公司 Airport passenger self-service system based on Unity3D and construction method
CN117453220B (en) * 2023-12-26 2024-04-09 青岛民航凯亚系统集成有限公司 Airport passenger self-service system based on Unity3D and construction method

Similar Documents

Publication Publication Date Title
CN115619990A (en) Three-dimensional situation display method and system based on virtual reality technology
CN111192354A (en) Three-dimensional simulation method and system based on virtual reality
CN110765620B (en) Aircraft visual simulation method, system, server and storage medium
Piekarski Interactive 3d modelling in outdoor augmented reality worlds
CN110163942B (en) Image data processing method and device
CN109859538A (en) A kind of key equipment training system and method based on mixed reality
CN111125347A (en) Knowledge graph 3D visualization method based on unity3D
CN112419499B (en) Immersive situation scene simulation system
CN112529022B (en) Training sample generation method and device
CN108765576B (en) OsgEarth-based VIVE virtual earth roaming browsing method
Piekarski et al. Augmented reality working planes: A foundation for action and construction at a distance
CN110568923A (en) unity 3D-based virtual reality interaction method, device, equipment and storage medium
CN115335894A (en) System and method for virtual and augmented reality
US11449196B2 (en) Menu processing method, device and storage medium in virtual scene
CN109099902A (en) A kind of virtual reality panoramic navigation system based on Unity 3D
CN109375866B (en) Screen touch click response method and system for realizing same
CN115082648B (en) Marker model binding-based AR scene arrangement method and system
US11756267B2 (en) Method and apparatus for generating guidance among viewpoints in a scene
Dias et al. MIXDesign, tangible mixed reality for architectural design
Zhang et al. A demonstration system for the generation and interaction of battlefield situation based on hololens
CN111047716B (en) Three-dimensional scene situation plotting method, computer storage medium and electronic equipment
Garcia et al. Modifying a game interface to take advantage of advanced I/O devices
Ramsbottom A virtual reality interface for previsualization
WO2024131405A1 (en) Object movement control method and apparatus, device, and medium
CN117389338B (en) Multi-view interaction method and device of unmanned aerial vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination