CN114972665A - Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation - Google Patents


Info

Publication number
CN114972665A
Authority
CN
China
Prior art keywords
landmark
modeling
building
buildings
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210541926.6A
Other languages
Chinese (zh)
Inventor
刘艳
刘全德
王广科
田政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University
Original Assignee
Dalian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University
Priority to CN202210541926.6A
Publication of CN114972665A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 — Geographic models
    • G06T17/10 — Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 — Road transport of goods or passengers
    • Y02T10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T10/40 — Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional visual virtual scene modeling method for unmanned aerial vehicle (UAV) virtual simulation, belonging to the technical field of visual simulation. Buildings are divided into landmark buildings and non-landmark buildings according to a characteristic-attribute similarity normalization principle. The landmark buildings are modeled in fine detail with three-dimensional animation software, and their vector grids are extracted from remote sensing images; the non-landmark buildings are then modeled over a large area with three-dimensional visualization modeling software, and the landmark and non-landmark building models are fused in the Unreal Engine. The invention fuses multiple modeling elements, greatly improves modeling speed while preserving modeling quality, realizes a three-dimensional mapping of the real scene from remote sensing imagery, and yields models with better immersion and interactivity.

Description

Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation
Technical Field
The invention belongs to the technical field of visual simulation, and particularly relates to a three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation.
Background
Visual simulation is an immersive, interactive technology based on computer graphics that supports the visual display of information. Because flight visual simulation can combine a UAV's three-dimensional virtual model, simulation process, and simulation data into a single visual output, it has attracted wide interest from researchers at home and abroad. However, the mainstream flight scene simulation packages (Creator, X-Plane, and FlightGear) suffer from low efficiency in three-dimensional virtual scene modeling and from poor immersion, interactivity, and realism during simulation.
Early UAV visual simulation systems were mostly based on two-dimensional map data or digital charts, making it difficult for researchers to detect features hidden in the data or to understand a UAV's flight situation intuitively. Since the start of the 21st century, the development of 3D technology has produced a generation of game engines; 3D game engines provide a complete solution for game and visualization development and have strongly promoted digital-twin three-dimensional visual scene modeling. The best-known 3D game engines today include OGRE, Unity 3D, and Unreal Engine 4. One research group developed a port visual simulation demonstration system based on OGRE; it mainly simulates the operation of port equipment and the influence of different climates on the port system, and made good progress in visual immersion. Another platform, built on the Unity 3D game engine to reproduce aircraft-carrier and aircraft operation scenes, has a vivid display and good immersion, but its scene modeling is complex and inefficient.
Disclosure of Invention
To overcome the defects of traditional UAV scene simulation methods, namely poor immersion and interactivity, complex scene construction, and low modeling efficiency, the invention provides a three-dimensional visual virtual scene modeling method for UAV virtual simulation. The method models the scene by fusing multiple elements, greatly improving modeling speed while preserving modeling quality; it realizes a three-dimensional mapping of the real scene from remote sensing imagery and produces models with better immersion and interactivity.
The technical scheme adopted by the invention to solve the technical problem is as follows: a three-dimensional visual virtual scene modeling method in UAV virtual simulation comprising the following steps: dividing buildings into landmark buildings and non-landmark buildings according to a characteristic-attribute similarity normalization principle; performing fine modeling of the landmark buildings with three-dimensional animation software and extracting vector grids of the landmark buildings from a remote sensing image; then performing large-area modeling of the non-landmark buildings with three-dimensional visualization modeling software; and fusing the landmark building models and the non-landmark building models in the Unreal Engine.
As a further embodiment of the invention, the fine modeling of landmark buildings with three-dimensional animation software comprises: importing CAD data of a landmark building into 3ds Max; after snap, extrude, chamfer, and inset operations, outlining the edges with the rectangle tool; and adding different materials and texture elements to different areas of the model according to the height and structure of the landmark building's top surface.
As a further embodiment of the invention, during the fine modeling of a landmark building, the invisible inner surfaces in the splicing areas of connected buildings are removed, and the number of Boolean operations on horizontal and vertical structures is minimized.
As a further embodiment of the invention, the large-area modeling of non-landmark buildings with three-dimensional visualization modeling software comprises: performing large-area vector-data modeling with City Engine; and, taking the building bottom as the reference, training a FAME-Net network on an aerial remote sensing image building data set to extract the vector data of the non-landmark buildings.
As a further embodiment of the invention, during the large-area modeling of non-landmark buildings, each building is split into a plurality of structural components, large-area CGA rules are constructed according to the building's structure, height, and color, and the structural components of the building are texture-mapped with a mapping function.
As a further embodiment of the invention, fusing the landmark building models and the non-landmark building models in the Unreal Engine comprises: importing DEM digital elevation data into Unreal Engine 4 (UE4) for terrain design, taking the GDEMV2 elevation data set as the original data source, and interpolating and denoising the original data; importing the terrain data into Global Mapper for three-dimensional expansion to obtain a terrain elevation map file in hfz format; setting the resolution and data range in World Machine according to the width and height of the elevation map to obtain a UE4-compatible elevation map file in RAW16 format; importing the RAW16 height map into UE4 and selecting the material corresponding to the remote sensing image to create three-dimensional terrain; and importing the constructed landmark building models and non-landmark building models into UE4 at the same scale and placing them on the created terrain.
The beneficial effects of the invention include:
1. A multi-element fusion modeling method is designed that greatly improves modeling speed while preserving modeling quality;
2. Three-dimensional mapping of the real scene is realized through remote sensing imagery, and the resulting model has better immersion and interactivity.
Drawings
FIG. 1 is a flow chart of a modeling method of the present invention;
FIG. 2 is a schematic diagram of a refined modeling according to embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of vector grid data for large-scale modeling according to embodiment 1 of the present invention;
FIG. 4 is a partial CGA code diagram according to embodiment 1 of the present invention;
FIG. 5 is a schematic diagram of campus three-dimensional virtual scene modeling according to embodiment 1 of the present invention;
FIG. 6 is a three-dimensional scene diagram of a campus according to embodiment 1 of the present invention;
fig. 7 is a virtual scene immersion and interactive test chart in embodiment 2 of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "vertical", "horizontal", "inside", "outside", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or component must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used merely to distinguish one element from another, and are not to be construed as indicating or implying relative importance.
Furthermore, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
Establishing a virtual world that maps the physical world is the foundation of UAV visual simulation, so this embodiment builds a twin virtual model of a real flight scene. Buildings are the most important construction elements of the virtual scene; modeling every object in 3ds Max gives realistic results, but the modeling speed is too slow. This embodiment therefore proposes a multi-element-fusion three-dimensional visual virtual scene modeling method: buildings are divided into landmark buildings (such as the library and stadium) and non-landmark buildings (such as dormitory and teaching buildings) according to the building characteristic-attribute similarity normalization principle. Landmark buildings are modeled in fine detail in 3ds Max, building vector grids are extracted from the remote sensing image, non-landmark buildings are then modeled over a large area in City Engine, and the 3ds Max and City Engine models are fused in UE4; the modeling flow is shown in figure 1.
1. Refined modeling
Firstly, the important buildings in the scene are modeled in fine detail. Taking the N-campus library as an example, its CAD data is imported into 3ds Max and, after snap, extrude, chamfer, and inset operations, the wall edges are outlined with the rectangle tool. Secondly, according to the height and structure of the building's top surface, different materials and texture elements are added to different areas of the model, increasing the texture and fidelity of the building's main body, as shown in fig. 2.
To reduce the redundancy and complexity of the model and improve its run-time speed, this embodiment proposes a surface-count optimization method. During modeling, the invisible inner surfaces in the splicing areas of connected buildings are removed, reducing the generation of invalid surfaces and avoiding the redundancy caused by mass-copying structured models. For horizontal and vertical structures, the number of Boolean operations is minimized, reducing model complexity. To cut the polygon count as far as possible, the refinement thresholds in the model modifiers are tuned, improving run-time speed while preserving the realism of the buildings.
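The inner-surface removal described above can be sketched as a face-deduplication pass over axis-aligned building blocks. The box-and-face representation below is a simplified illustration under assumed geometry, not the actual 3ds Max procedure:

```python
# Sketch of the surface-count optimization: when two axis-aligned blocks of a
# connected building share a wall, the two coincident interior faces are
# invisible from outside and can be dropped before rendering.

def box_faces(box):
    """Enumerate the 6 faces of an axis-aligned box ((x0,y0,z0),(x1,y1,z1)).
    A face is keyed by its fixed axis, plane coordinate, and 2-D extent."""
    lo, hi = box
    faces = []
    for axis in range(3):
        rect = tuple(v for i, v in enumerate(zip(lo, hi)) if i != axis)
        faces.append((axis, lo[axis], rect))  # face on the "negative" side
        faces.append((axis, hi[axis], rect))  # face on the "positive" side
    return faces

def visible_faces(boxes):
    """Drop faces occurring twice (shared interior walls of touching boxes)."""
    counts = {}
    for box in boxes:
        for face in box_faces(box):
            counts[face] = counts.get(face, 0) + 1
    return [f for f, n in counts.items() if n == 1]

# Two unit cubes sharing the wall at x = 1: 12 raw faces, 10 visible.
blocks = [((0, 0, 0), (1, 1, 1)), ((1, 0, 0), (2, 1, 1))]
print(len(visible_faces(blocks)))  # 10
```

The same idea generalizes to arbitrary meshes by deduplicating coincident polygons with opposite orientation.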
2. Large scale modeling
Large-area modeling mainly covers the non-landmark buildings, trees, and roads. To improve modeling speed, City Engine is used for rule-based large-area vector-data modeling. The traditional oblique-photography approach extracts vector data using the building roof as the reference, so building tilt, displacement, and missing data cause modeling deviation. As shown in fig. 3, this embodiment instead takes the building bottom as the reference and trains a FAME-Net network on an aerial remote sensing image building data set, avoiding the extraction deviation of the traditional method and extracting the vector data of the buildings.
CGA (Computer Generated Architecture) rules are the core of the large-area modeling method; they prioritize modeling speed and efficiency and omit some of a building's detail. Modeling rules are therefore written according to each building's structure type, floor height, and roof color, so that buildings of the corresponding type can be generated rapidly in large batches. The level of building detail depends on the constraints imposed by the rules: the more rules there are, the more detail the model depicts.
In this embodiment, taking the N-campus scene as an example, establishing the CGA rules requires the relationship between a building's number of floors and its height, so the heights and floor counts of several buildings in the study area were measured and tabulated in Table 2.
TABLE 2 building height and number of floors
(Table 2 is provided as an image in the original document.)
Fitting the data in Table 2 gives the following relationship between building height H (in meters) and number of floors N:
H = 3.46N + 0.69 (6)
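The linear fit in equation (6) can be reproduced with an ordinary least-squares fit. The (floors, height) samples below are hypothetical stand-ins for the measured values of Table 2, which the original provides only as an image:

```python
# Least-squares fit of building height H against floor count N, recovering
# coefficients of the form H = a*N + b as in equation (6).
import numpy as np

def fit_height_model(floors, heights):
    """Degree-1 polynomial fit; returns (slope a, intercept b)."""
    a, b = np.polyfit(floors, heights, deg=1)
    return a, b

# Hypothetical measurements generated from the published relation (6).
floors = np.array([3, 5, 6, 8, 10, 12], dtype=float)
heights = 3.46 * floors + 0.69

a, b = fit_height_model(floors, heights)
print(f"{a:.2f} {b:.2f}")  # 3.46 0.69
```

On real measurements the residuals of the fit would also indicate how well a single per-floor height describes the study area.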
Specifically, each building is split into small structural components, large-area rules are constructed according to the building's structure, floor height, and color, and the building's doors, windows, roof, and outer walls are texture-mapped with a mapping function; part of the CGA code is shown in FIG. 4.
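A CGA rule of this kind essentially splits a building mass into repeated floor components and maps a texture to each. A minimal Python analogue, with component names and texture file names hypothetical, using equation (6) to recover the floor count from a building's height:

```python
# Rule-style decomposition of a building mass: invert H = 3.46*N + 0.69 to
# estimate the floor count, then emit one structural component per floor plus
# a roof component, each carrying a texture assignment.

def split_into_floors(total_height, floor_height=3.46, base=0.69):
    """Return the list of components a floor-split rule would generate."""
    n_floors = max(1, round((total_height - base) / floor_height))
    components = [{"type": "floor", "index": i, "texture": "facade.jpg"}
                  for i in range(n_floors)]
    components.append({"type": "roof", "texture": "roof.jpg"})
    return components

parts = split_into_floors(21.45)  # 3.46 * 6 + 0.69 = 21.45, i.e. 6 floors
print(len(parts))  # 7
```

In City Engine the equivalent split is expressed declaratively in CGA; the rule count controls how much facade detail (doors, windows) is generated per component.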
In addition to the building models, the flowers, grass, trees, street lamps, and road sections are built with existing rules, generating the three-dimensional virtual scene of the N campus shown in fig. 5.
3. Model multivariate fusion
To improve the immersion and interactivity of the simulation, DEM (Digital Elevation Model) data is imported into UE4 for rugged-terrain design. This embodiment takes the GDEMV2 elevation data set as the original data source. Massive DEM data would slow the subsequent virtual scene, and the amount of data needed to describe flat terrain differs from that for complex areas, so the original DEM data must be processed: this embodiment interpolates and denoises the original data, improving data utilization. The terrain data is then imported into Global Mapper for three-dimensional expansion to obtain a terrain file in hfz format. Next, according to the width and height of the elevation map, the resolution and data range are set in World Machine to obtain a UE4-compatible height map file in RAW16 format. To fuse the scene elements in UE4, the RAW16 height map is imported and the material corresponding to the remote sensing image is selected to create realistic, undulating three-dimensional terrain. The buildings built in 3ds Max and City Engine are imported into UE4 at the same scale and placed on the created terrain, and collision settings are added between the different objects to handle dynamic interaction between models, converting the scene from two-dimensional static to three-dimensional dynamic, as shown in FIG. 6.
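The DEM preprocessing and RAW16 export chain above can be sketched as follows, assuming the DEM is available as a floating-point elevation array in metres. The mean filter stands in for the unspecified noise-reduction step, and the quantization mirrors what a 16-bit RAW heightmap for UE4 requires:

```python
# DEM denoising plus 16-bit quantization, as a simplified stand-in for the
# GDEMV2 -> Global Mapper -> World Machine -> UE4 RAW16 pipeline.
import numpy as np

def denoise_dem(dem, k=3):
    """k x k mean filter with edge padding; a simple noise-reduction stand-in."""
    pad = k // 2
    padded = np.pad(dem, pad, mode="edge")
    rows, cols = dem.shape
    out = np.zeros((rows, cols), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + rows, dx:dx + cols]
    return out / (k * k)

def to_raw16(dem):
    """Rescale elevations to the full uint16 range, as in a RAW16 heightmap."""
    lo, hi = float(dem.min()), float(dem.max())
    if hi == lo:                       # flat terrain: constant height map
        return np.zeros(dem.shape, dtype=np.uint16)
    scaled = (dem - lo) / (hi - lo) * 65535.0
    return scaled.astype(np.uint16)

dem = np.linspace(0.0, 120.0, 16).reshape(4, 4)   # toy 4 x 4 elevation ramp
raw = to_raw16(denoise_dem(dem))
print(raw.dtype, raw.min(), raw.max())  # uint16 0 65535
```

The raw bytes of such an array (written with a fixed endianness and a square resolution matching UE4's landscape sizes) are what the RAW16 import step consumes.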
Example 2
Virtual scene immersion and interaction performance testing:
For the immersion and interactivity test of the virtual scene, a digital model of the UAV flies autonomously in the N-campus gate scene constructed in fig. 6, as shown in fig. 7. Because the three-dimensional virtual scene is a twin mapping of the real scene, the mountains and terrain are consistent with the real environment, and elements such as trees and red flags move with the wind, so the whole scene is highly immersive. When the UAV is at the circled position in fig. 7, the small frame in the lower-left corner shows the surrounding environment perceived in real time by the fisheye camera. At that moment the UAV perceives the flag obstacle and, because of the collision settings in the scene, performs an obstacle-avoidance maneuver. The UAV thus interacts well with the environment, providing technical support for performance tests such as three-dimensional surveying and obstacle avoidance.
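The collision settings that trigger the avoidance behaviour reduce, at their simplest, to intersection tests between object bounds. A minimal axis-aligned bounding-box (AABB) check of the kind a physics engine performs internally, with hypothetical coordinates:

```python
# AABB intersection test: two boxes overlap iff their extents overlap on
# every axis. This is the cheapest broad-phase collision check.

def aabb_overlap(a, b):
    """a, b: ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    return all(a_lo[i] <= b_hi[i] and b_lo[i] <= a_hi[i] for i in range(3))

drone = ((0.0, 0.0, 5.0), (1.0, 1.0, 6.0))
flag = ((0.5, 0.5, 0.0), (0.7, 0.7, 8.0))   # hypothetical flag-pole bounds
print(aabb_overlap(drone, flag))  # True
```

A positive broad-phase result like this is what would trigger the avoidance manoeuvre in the scene; engines such as UE4 then refine it with narrower-phase tests against the actual collision meshes.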
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or modifications derived from them fall within the scope of the invention.

Claims (6)

1. A three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation, characterized by comprising the following steps: dividing buildings into landmark buildings and non-landmark buildings according to a characteristic-attribute similarity normalization principle; performing fine modeling of the landmark buildings with three-dimensional animation software and extracting vector grids of the landmark buildings from a remote sensing image; then performing large-area modeling of the non-landmark buildings with three-dimensional visualization modeling software; and fusing the landmark building models and the non-landmark building models in the Unreal Engine.
2. The method of claim 1, wherein the fine modeling of the landmark buildings with three-dimensional animation software comprises: importing CAD data of a landmark building into 3ds Max; after snap, extrude, chamfer, and inset operations, outlining the edges with the rectangle tool; and adding different materials and texture elements to different areas of the model according to the height and structure of the landmark building's top surface.
3. The method of claim 2, wherein, during the fine modeling of the landmark buildings, invisible inner surfaces in the splicing areas of connected buildings are removed, and the number of Boolean operations on horizontal and vertical structures is minimized.
4. The method of claim 1, wherein the large-area modeling of the non-landmark buildings with three-dimensional visualization modeling software comprises: performing large-area vector-data modeling with City Engine; and, taking the building bottom as the reference, training a FAME-Net network on an aerial remote sensing image building data set to extract the vector data of the non-landmark buildings.
5. The method of claim 4, wherein, during the large-area modeling of the non-landmark buildings, each building is split into a plurality of structural components, large-area CGA rules are constructed according to the building's structure, height, and color, and the structural components of the building are texture-mapped with a mapping function.
6. The method of claim 1, wherein fusing the landmark building models and the non-landmark building models in the Unreal Engine comprises: importing DEM digital elevation data into Unreal Engine 4 (UE4) for terrain design, taking the GDEMV2 elevation data set as the original data source, and interpolating and denoising the original data; importing the terrain data into Global Mapper for three-dimensional expansion to obtain a terrain elevation map file in hfz format; setting the resolution and data range in World Machine according to the width and height of the elevation map to obtain a UE4-compatible elevation map file in RAW16 format; importing the RAW16 height map into UE4 and selecting the material corresponding to the remote sensing image to create three-dimensional terrain; and importing the constructed landmark building models and non-landmark building models into UE4 at the same scale and placing them on the created terrain.
CN202210541926.6A 2022-05-18 2022-05-18 Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation Pending CN114972665A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210541926.6A CN114972665A (en) 2022-05-18 2022-05-18 Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation


Publications (1)

Publication Number Publication Date
CN114972665A true CN114972665A (en) 2022-08-30

Family

ID=82983183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210541926.6A Pending CN114972665A (en) 2022-05-18 2022-05-18 Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation

Country Status (1)

Country Link
CN (1) CN114972665A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115906537A * 2023-01-09 2023-04-04 Nanjing University of Aeronautics and Astronautics Unmanned aerial vehicle photoelectric load simulation system based on 3D vision
CN117950552A * 2024-03-07 2024-04-30 Yangtze Delta Region Academy of Beijing Institute of Technology (Jiaxing) Unmanned aerial vehicle simulation data playback, labeling and collection method
CN117950552B * 2024-03-07 2024-08-06 Yangtze Delta Region Academy of Beijing Institute of Technology (Jiaxing) Unmanned aerial vehicle simulation data playback, labeling and collection method

Similar Documents

Publication Publication Date Title
CN111008422B (en) Building live-action map making method and system
EP3951719A1 (en) Blended urban design scene simulation method and system
CN108648269B (en) Method and system for singulating three-dimensional building models
CN108919944B (en) Virtual roaming method for realizing data lossless interaction at display terminal based on digital city model
CN114972665A (en) Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation
CN103077552B (en) A kind of three-dimensional display method based on multi-view point video
CN102289845B (en) Three-dimensional model drawing method and device
CN114219902B (en) Method and device for rendering volume drawing of meteorological data and computer equipment
CN108269304B (en) Scene fusion visualization method under multiple geographic information platforms
CN106157359A (en) A kind of method for designing of virtual scene experiencing system
CN111915726B (en) Construction method of three-dimensional scene of overhead transmission line
CN109242966B (en) 3D panoramic model modeling method based on laser point cloud data
CN103606190A (en) Method for automatically converting single face front photo into three-dimensional (3D) face model
CN110660125B (en) Three-dimensional modeling device for power distribution network system
CN105205861A (en) Tree three-dimensional visualization model realization method based on Sphere-Board
CN110852952B (en) Large-scale terrain real-time drawing method based on GPU
CN108959434A (en) A kind of scene fusion visualization method under more geographical information platforms
CN114998503B (en) White mold automatic texture construction method based on live-action three-dimension
CN113750516A (en) Method, system and equipment for realizing three-dimensional GIS data loading in game engine
CN115760667A (en) 3D WebGIS video fusion method under weak constraint condition
CN116681854A (en) Virtual city generation method and device based on target detection and building reconstruction
CN116543116A (en) Method, system, equipment and terminal for three-dimensional virtual visual modeling of outcrop in field
CN110610536A (en) Method for displaying real scene for VR equipment
CN115131511A (en) Method for creating terrain based on oblique photography technology + Dynamo
Wang Construction of the Three-dimensional Virtual Campus Scenes’ Problems and Solutions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination