CN114972665A - Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation - Google Patents

Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation

Info

Publication number
CN114972665A
CN114972665A (application CN202210541926.6A)
Authority
CN
China
Prior art keywords
modeling
landmark
building
buildings
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210541926.6A
Other languages
Chinese (zh)
Inventor
刘艳
刘全德
王广科
田政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University
Original Assignee
Dalian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University filed Critical Dalian University
Priority to CN202210541926.6A
Publication of CN114972665A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05: Geographic models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional visual virtual scene modeling method for unmanned aerial vehicle (UAV) virtual simulation, belonging to the technical field of visual simulation. Buildings are divided into landmark buildings and non-landmark buildings according to a feature-attribute similarity normalization principle; the landmark buildings are modeled in fine detail with three-dimensional animation software, and their vector grids are extracted from remote sensing images. The non-landmark buildings are then modeled over a large area with three-dimensional visualization modeling software, and the landmark and non-landmark building models are fused into an integrated whole in Unreal Engine software. The method combines multiple modeling sources, greatly increasing modeling speed while preserving modeling quality; it maps the real scene into three dimensions from remote sensing images, and the resulting model has good immersion and interaction performance.

Description

A three-dimensional visualization virtual scene modeling method for UAV virtual simulation

Technical Field

The invention belongs to the technical field of visual simulation and in particular relates to a three-dimensional visualization virtual scene modeling method for UAV virtual simulation.

Background Art

Visual simulation is an immersive interactive technology based on computer graphics that supports the visual display of information. Because flight visual simulation can combine a UAV's three-dimensional virtual model, the simulation process, and the simulation data, and output and display them visually at the same time, it has attracted wide interest from researchers at home and abroad. At present, the mainstream flight visual simulation packages Creator, X-Plane, and FlightGear all suffer, to varying degrees, from low efficiency in three-dimensional virtual scene modeling and from poor immersion, interactivity, and realism in the simulation process.

Most early UAV visual simulation systems were based on two-dimensional map data or digital graphics, which made it difficult for researchers to detect the relevant features hidden in the data and to understand the flight status of the UAV intuitively. Since the beginning of the 21st century, the development of 3D technology has given rise to a number of game engines, and 3D game engines have provided a complete set of solutions for game and visual development, effectively promoting the development of digital twin three-dimensional visualization scene modeling technology. Well-known 3D game engines in the industry currently include OGRE, Unity 3D, and Unreal Engine 4. Shangguan Youbai (上官右柏) developed a port visualization simulation demonstration system based on OGRE; the system mainly simulates the operating conditions of port equipment and the impact of different weather conditions on the port system, and made good progress in visual immersion. Li Qing (李勍) built a flight visual simulation platform based on the Unity 3D game engine to reproduce the operating scenes of aircraft carriers and aircraft; the platform has realistic display and good immersion, but its scene modeling is complicated and its efficiency is low.

Summary of the Invention

To overcome the defects of traditional UAV visual simulation methods, namely unsatisfactory immersion and interactivity, complicated scene construction, and low modeling efficiency, the present invention provides a three-dimensional visualization virtual scene modeling method for UAV virtual simulation. The method uses multi-source fusion modeling, which greatly increases modeling speed while ensuring modeling quality, realizes three-dimensional mapping of the real scene from remote sensing images, and yields models with good immersion and interaction performance.

The technical solution adopted by the present invention to solve this technical problem is a three-dimensional visualization virtual scene modeling method for UAV virtual simulation, comprising: dividing buildings into landmark buildings and non-landmark buildings according to a feature-attribute similarity normalization principle; performing refined modeling of the landmark buildings with three-dimensional animation software and extracting the vector grids of the landmark buildings from remote sensing images; then performing large-scale modeling of the non-landmark buildings with three-dimensional visualization modeling software; and fusing the landmark building models and non-landmark building models into an integrated whole in Unreal Engine software.

In a further embodiment of the present invention, the refined modeling of landmark buildings with three-dimensional animation software comprises: importing the CAD data of a landmark building into 3ds Max; after snapping, extruding, chamfering, and inserting, outlining the edges with the rectangle tool; and, according to the height and structure of the building's top surface, adding different material and texture elements in different areas of the model.

In a further embodiment of the present invention, during the refined modeling of landmark buildings, invisible inner surfaces in the splicing areas of connected buildings are removed, and the number of Boolean operations is minimized for horizontal and vertical structures.

In a further embodiment of the present invention, the large-scale modeling of non-landmark buildings with three-dimensional visualization modeling software comprises: performing large-scale vector data modeling with CityEngine, taking the building bottom as the reference, training the FAME-Net network on an aerial remote sensing image building dataset, and extracting the vector data of the non-landmark buildings.

In a further embodiment of the present invention, during the large-scale modeling of non-landmark buildings, a building is split into multiple structural components, large-scale CGA rules are constructed according to building structure, height, and color, and a mapping function is used to texture-map the structural components of the building.

In a further embodiment of the present invention, fusing the landmark building models and non-landmark building models into an integrated whole in Unreal Engine software comprises: importing DEM digital elevation data into Unreal Engine 4 (UE4) for terrain design, taking the GDEMV2 elevation dataset as the original data source and interpolating and denoising the original data; importing the terrain data into Global Mapper for three-dimensional expansion to obtain a terrain elevation map file in hfz format; setting the resolution and data range in World Machine according to the width and height of the elevation map to obtain a UE4-compatible heightmap file in RAW16 format; importing the RAW16 heightmap into UE4 and selecting the material corresponding to the remote sensing image to create the three-dimensional terrain; and importing the constructed landmark and non-landmark building models into UE4 at the same scale and placing them on the created three-dimensional terrain.

The beneficial effects of the present invention include:

1. A multi-source fusion modeling method is designed, which greatly increases modeling speed while ensuring modeling quality.

2. Three-dimensional mapping of the real scene is realized from remote sensing images, and the resulting model has good immersion and interaction performance.

Description of Drawings

Fig. 1 is a flow chart of the modeling method of the present invention;

Fig. 2 is a schematic diagram of refined modeling in Embodiment 1 of the present invention;

Fig. 3 is a schematic diagram of the vector grid data for large-scale modeling in Embodiment 1 of the present invention;

Fig. 4 shows part of the CGA code in Embodiment 1 of the present invention;

Fig. 5 is a schematic diagram of the three-dimensional virtual scene modeling of the campus in Embodiment 1 of the present invention;

Fig. 6 is the three-dimensional campus scene in Embodiment 1 of the present invention;

Fig. 7 shows the immersion and interaction test of the virtual scene in Embodiment 2 of the present invention.

Detailed Description

The technical solutions of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

In the description of the present invention, it should be noted that orientations or positional relationships indicated by terms such as "vertical", "horizontal", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate and simplify the description of the present invention and do not indicate or imply that the device or component referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the present invention. Furthermore, the terms "first", "second", and "third" are used only to distinguish components and should not be construed as indicating or implying relative importance.

In addition, the technical features involved in the different embodiments of the present invention described below can be combined with each other as long as they do not conflict.

Embodiment 1

Establishing a virtual world that can map the physical world is the cornerstone of UAV visual simulation; to that end, this embodiment builds a twin virtual model of a real flight scene. The most critical elements of the virtual scene are the buildings. Building the simulated objects entirely in 3ds Max produces fairly realistic results, but the modeling is too slow. This embodiment therefore proposes a multi-source fusion method for three-dimensional visualization virtual scene modeling: according to the feature-attribute similarity normalization principle, buildings are divided into landmark buildings (e.g. the library, the stadium) and non-landmark buildings (e.g. dormitory and teaching buildings). Landmark buildings are modeled in fine detail in 3ds Max, building vector grids are extracted from remote sensing images, non-landmark buildings are modeled over a large area with CityEngine, and the 3ds Max and CityEngine models are fused into an integrated whole in UE4. The modeling workflow is shown in Fig. 1.

1. Refined Modeling

First, the important buildings in the scene are modeled in fine detail. Taking the library of campus N as an example, its CAD data are imported into 3ds Max; after snapping, extruding, chamfering, and inserting, the wall edges are outlined with the rectangle tool. Then, according to the height and structure of the building's top surface, different material and texture elements are added in different areas of the model to increase the texture and realism of the building body, as shown in Fig. 2.

To reduce model redundancy and complexity and increase the model's running speed, this embodiment proposes a method for optimizing the number of surfaces. During modeling, the invisible inner surfaces in the splicing areas of connected buildings are removed, reducing the generation of invalid surfaces and avoiding the redundancy caused by copying structured models many times. For horizontal and vertical structures, the number of Boolean operations is minimized to reduce model complexity. To minimize the number of models, the fine threshold in the model modifier is optimized, increasing the model's running speed while preserving the realism of the buildings.
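The surface-count optimization can be illustrated with a small, hedged sketch: for two adjoining building meshes, faces whose centroids lie inside the neighbouring (watertight) volume are never visible and can be dropped. The trimesh-based approach and the file names below are illustrative assumptions, not the 3ds Max workflow actually used in this embodiment.

```python
import trimesh

def remove_hidden_interior_faces(mesh_a: trimesh.Trimesh,
                                 mesh_b: trimesh.Trimesh) -> trimesh.Trimesh:
    """Drop faces of mesh_a whose centroids lie inside the watertight mesh_b."""
    centroids = mesh_a.triangles_center      # (n_faces, 3) face centroids
    hidden = mesh_b.contains(centroids)      # True where a centroid is inside mesh_b
    mesh_a.update_faces(~hidden)             # keep only the potentially visible faces
    mesh_a.remove_unreferenced_vertices()
    return mesh_a

if __name__ == "__main__":
    # Hypothetical exported wings of one connected building complex.
    wing_a = trimesh.load("library_wing_a.obj", force="mesh")
    wing_b = trimesh.load("library_wing_b.obj", force="mesh")
    cleaned = remove_hidden_interior_faces(wing_a, wing_b)
    cleaned.export("library_wing_a_cleaned.obj")
```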

2. Large-Scale Modeling

Large-scale modeling mainly covers non-landmark buildings, trees, and roads. To increase modeling speed, this embodiment uses the rule-based method of CityEngine for large-scale vector data modeling. Traditional oblique-photography vector data extraction takes the building roof as its reference, and the resulting building tilt, displacement, and omissions lead to modeling deviations. As shown in Fig. 3, this embodiment instead takes the building bottom as the reference and trains the FAME-Net network on an aerial remote sensing image building dataset, avoiding the extraction bias of the traditional method and extracting the vector data of the buildings.
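As a hedged illustration of this extraction step, the sketch below trains a footprint-segmentation network on an aerial building dataset and uses the predicted building-bottom mask as the basis for later vectorization. FAME-Net itself is not reproduced here; a torchvision FCN backbone stands in for it, and the dataset class, batch size, and learning rate are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision.models.segmentation import fcn_resnet50

class AerialFootprintDataset(Dataset):
    """Placeholder dataset of (3, H, W) image tensors and (1, H, W) float footprint masks."""
    def __init__(self, samples):
        self.samples = samples                    # list of (image, mask) tensor pairs
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, idx):
        return self.samples[idx]

def train_footprint_model(dataset, epochs=10, lr=1e-4):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = fcn_resnet50(num_classes=1).to(device)    # stand-in for FAME-Net
    loader = DataLoader(dataset, batch_size=4, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()                  # binary footprint vs. background
    model.train()
    for _ in range(epochs):
        for image, mask in loader:
            image, mask = image.to(device), mask.to(device)
            logits = model(image)["out"]              # (B, 1, H, W) footprint logits
            loss = loss_fn(logits, mask)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```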

CGA (Computer Generated Architecture) rules are the core of the large-scale modeling method; they focus on modeling speed and efficiency, and some building details can be ignored. Modeling rules are therefore written according to the structure type, floor height, and roof color of the buildings, so that buildings of the corresponding types can be generated quickly in large batches. The level of building detail depends on how constraining the rules are: the more rules, the more complete the model detail.

In this embodiment, taking the campus N scene as an example, the relationship between the number of floors and the height of a building must be found in order to establish the CGA rules; the heights and floor counts of a number of buildings in the study area were measured and compiled into Table 2.

Table 2. Building height and number of floors

[Table 2 is provided as an image in the original publication and is not reproduced here.]

Fitting the data in Table 2 gives the relationship between building height and number of floors, as shown in the following formula:

H = 3.46N + 0.69    (6)
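A minimal sketch of how such a relation can be obtained is an ordinary least-squares fit of measured height against floor count. Because Table 2 is only available as an image, the sample values below are placeholders chosen to be consistent with the published fit in Eq. (6); they are not the measured campus data.

```python
import numpy as np

# Placeholder measurements (floor count N, height H in metres); Table 2 itself is an image.
floors = np.array([3, 5, 6, 8, 10, 12], dtype=float)
heights = np.array([11.0, 18.1, 21.4, 28.3, 35.2, 42.1])

slope, intercept = np.polyfit(floors, heights, deg=1)    # least-squares line H = a*N + b
print(f"H = {slope:.2f} * N + {intercept:.2f}")          # close to H = 3.46N + 0.69

def estimate_height(n_floors: float) -> float:
    """Estimate building height (m) from floor count using the fitted relation."""
    return slope * n_floors + intercept
```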

Each building is split into its individual structural components, large-scale rules are constructed according to building structure, floor height, and color, and a mapping function is used to texture-map the doors, windows, roof, and exterior walls of the building. Part of the CGA code is shown in Fig. 4.
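The CGA rules of Fig. 4 are not reproduced in this text, so the sketch below illustrates the rule idea in Python instead: a footprint is extruded to a height derived from Eq. (6), split into floors and structural components, and each component is assigned a texture by a simple mapping. All rule values and texture file names are assumptions for illustration.

```python
FLOOR_HEIGHT = 3.46      # metres per floor, from the fitted relation in Eq. (6)
BASE_OFFSET = 0.69

TEXTURES = {             # hypothetical component -> texture file mapping
    "wall": "textures/wall_grey.jpg",
    "window": "textures/window_glass.jpg",
    "door": "textures/door_metal.jpg",
    "roof": "textures/roof_red.jpg",
}

def generate_building(footprint, n_floors):
    """Return a simple rule-based description of one non-landmark building mass."""
    height = FLOOR_HEIGHT * n_floors + BASE_OFFSET
    floors = []
    for level in range(n_floors):
        # Ground floor gets a door component; upper floors only walls and windows.
        names = ("wall", "window", "door") if level == 0 else ("wall", "window")
        floors.append({
            "level": level,
            "z_range": (level * FLOOR_HEIGHT, (level + 1) * FLOOR_HEIGHT),
            "components": {name: TEXTURES[name] for name in names},
        })
    return {"footprint": footprint, "height": height,
            "floors": floors, "roof_texture": TEXTURES["roof"]}

# Example: a rectangular footprint (metres) for a six-storey teaching building.
building = generate_building([(0, 0), (40, 0), (40, 15), (0, 15)], n_floors=6)
```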

In addition to the building models, the vegetation, street lights, and roads are built using existing rules; the generated three-dimensional virtual scene of campus N is shown in Fig. 5.

3. Multi-Source Model Fusion

To improve the immersion and interactivity of the simulation, DEM (Digital Elevation Model) data are imported into UE4 to design the uneven terrain. This embodiment takes the GDEMV2 elevation dataset as the original data source. However, the massive DEM data would slow down the subsequent virtual scene, and the amount of data describing flat and complex terrain varies, so the original DEM data must be processed. This embodiment therefore interpolates and denoises the original data to improve data utilization. The terrain data are then imported into Global Mapper for three-dimensional expansion to obtain a terrain file in hfz format. Next, according to the width and height of the elevation map, the resolution and data range are set in World Machine to obtain a UE4-compatible heightmap file in RAW16 format. For multi-source fusion of the scene in UE4, the RAW16 heightmap is imported and the material corresponding to the remote sensing image is selected to create realistic, uneven three-dimensional terrain; the buildings constructed in 3ds Max and CityEngine are imported into UE4 at the same scale and placed on the created terrain. To handle dynamic interaction between models, collision settings are added between the different objects, converting the scene from two-dimensional and static to three-dimensional and dynamic, as shown in Fig. 6.
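A hedged sketch of the DEM pre-processing described above is given below: voids in the GDEMV2 tile are filled by interpolation and isolated noise is suppressed before the data are handed to Global Mapper and World Machine. The GDAL/SciPy toolchain, file names, and filter settings are assumptions for illustration, not the exact pipeline of this embodiment.

```python
import numpy as np
from osgeo import gdal
from scipy import ndimage
from scipy.interpolate import griddata

def preprocess_dem(src_path: str, dst_path: str, nodata: float = -9999.0) -> None:
    """Fill no-data voids by interpolation and apply simple noise reduction."""
    src = gdal.Open(src_path)
    dem = src.GetRasterBand(1).ReadAsArray().astype(np.float64)

    # 1) Interpolate voids from the surrounding valid elevations.
    valid = dem != nodata
    if not valid.all():
        rows, cols = np.indices(dem.shape)
        dem[~valid] = griddata(
            (rows[valid], cols[valid]), dem[valid],
            (rows[~valid], cols[~valid]), method="nearest")

    # 2) Noise reduction: a median filter removes isolated spikes while keeping
    #    terrain edges reasonably sharp.
    dem = ndimage.median_filter(dem, size=3)

    # 3) Write the cleaned raster; Global Mapper can then export it (e.g. to .hfz)
    #    and World Machine resamples it to a UE4-compatible RAW16 heightmap.
    driver = gdal.GetDriverByName("GTiff")
    dst = driver.Create(dst_path, src.RasterXSize, src.RasterYSize, 1, gdal.GDT_Float32)
    dst.SetGeoTransform(src.GetGeoTransform())
    dst.SetProjection(src.GetProjection())
    dst.GetRasterBand(1).WriteArray(dem.astype(np.float32))
    dst.FlushCache()

preprocess_dem("gdemv2_tile.tif", "gdemv2_tile_clean.tif")   # hypothetical file names
```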

Embodiment 2

Virtual scene immersion and interaction performance test:

To test the immersion and interactivity of the virtual scene, the UAV digital model flies autonomously in the main-gate scene of campus N constructed in Fig. 6, as shown in Fig. 7. Because the three-dimensional virtual scene is a twin mapping of the real scene, the mountains and terrain are consistent with the real environment, and elements such as trees and red flags in the scene move with the wind, so the whole scene is well immersive. When the UAV is at the circled position in Fig. 7, the small frame in the lower left corner shows the surrounding environment perceived in real time by the fisheye camera. At this moment the UAV perceives the flag as an obstacle and, because of the collision settings in the scene, performs an obstacle-avoidance maneuver. The scene therefore interacts well with the environment and can provide technical support for performance tests such as three-dimensional surveying and obstacle avoidance.

Obviously, the above embodiments are only examples given for clarity of description and do not limit the implementation. For a person of ordinary skill in the art, other changes or modifications in different forms can be made on the basis of the above description. It is neither necessary nor possible to list all implementations here, and the obvious changes or modifications derived therefrom remain within the protection scope of the present invention.

Claims (6)

1. A three-dimensional visualization virtual scene modeling method for unmanned aerial vehicle virtual simulation, characterized by comprising: dividing buildings into landmark buildings and non-landmark buildings according to a feature-attribute similarity normalization principle; performing refined modeling of the landmark buildings with three-dimensional animation software and extracting a vector grid of the landmark buildings from a remote sensing image; then performing large-scale modeling of the non-landmark buildings with three-dimensional visualization modeling software; and performing integrated multi-source fusion of the landmark building models and the non-landmark building models in Unreal Engine software.
2. The method of claim 1, wherein the refined modeling of landmark buildings with three-dimensional animation software comprises: importing CAD data of a landmark building into 3ds Max; after snapping, extruding, chamfering, and inserting, outlining the edges with a rectangle tool; and, according to the height and structure of the top surface of the landmark building, adding different material and texture elements in different areas of the model.
3. The method of claim 2, wherein, during the refined modeling of landmark buildings, invisible inner surfaces in splicing areas of connected buildings are removed, and the number of Boolean operations is minimized for horizontal and vertical structures.
4. The method of claim 1, wherein the large-scale modeling of the non-landmark buildings with the three-dimensional visualization modeling software comprises: performing large-scale vector data modeling with CityEngine, training a FAME-Net network on an aerial remote sensing image building dataset with the building bottom as the reference, and extracting the vector data of the non-landmark buildings.
5. The method of claim 4, wherein, in the large-scale modeling of the non-landmark buildings, a building is split into a plurality of structural components, large-scale CGA rule construction is performed according to the structure, height, and color of the building, and the structural components of the building are texture-mapped with a mapping function.
6. The method of claim 1, wherein the integrated multi-source fusion of the landmark building models and the non-landmark building models in Unreal Engine software comprises: importing DEM digital elevation data into Unreal Engine 4 (UE4) for terrain data design, taking a GDEMV2 elevation dataset as an original data source and interpolating and denoising the original data; importing the terrain data into Global Mapper for three-dimensional expansion to obtain a terrain elevation map file in hfz format; setting the resolution and data range in World Machine according to the width and height of the elevation map to obtain a UE4-compatible elevation map file in RAW16 format; importing the RAW16 heightmap into UE4 and selecting a material corresponding to the remote sensing image to create a three-dimensional terrain; and importing the constructed landmark building models and non-landmark building models into UE4 at the same scale and placing them on the created three-dimensional terrain.
CN202210541926.6A 2022-05-18 2022-05-18 Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation Pending CN114972665A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210541926.6A CN114972665A (en) 2022-05-18 2022-05-18 Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210541926.6A CN114972665A (en) 2022-05-18 2022-05-18 Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation

Publications (1)

Publication Number Publication Date
CN114972665A (en) 2022-08-30

Family

ID=82983183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210541926.6A Pending CN114972665A (en) 2022-05-18 2022-05-18 Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation

Country Status (1)

Country Link
CN (1) CN114972665A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7193633B1 (en) * 2000-04-27 2007-03-20 Adobe Systems Incorporated Method and apparatus for image assisted modeling of three-dimensional scenes
EP1455308A1 (en) * 2003-03-07 2004-09-08 France Telecom Method for managing the displaying of at least one three-dimensional scene
US20050012742A1 (en) * 2003-03-07 2005-01-20 Jerome Royan Process for managing the representation of at least one 3D model of a scene
KR20090062729A (en) * 2007-12-13 2009-06-17 버츄얼빌더스 주식회사 XM-based 3D Building Elevation and Interior Automatic Modeling and Navigation System and Its Method
CN101719286A (en) * 2009-12-09 2010-06-02 北京大学 Multiple viewpoints three-dimensional scene reconstructing method fusing single viewpoint scenario analysis and system thereof
CN108986207A (en) * 2018-06-29 2018-12-11 广东星舆科技有限公司 A kind of road based on true road surface data and emulation modelling method is built along the line
CN109410327A (en) * 2018-10-09 2019-03-01 鼎宸建设管理有限公司 A kind of three-dimension tidal current method based on BIM and GIS
CN112308954A (en) * 2019-11-26 2021-02-02 海南发控智慧环境建设集团有限公司 Building model informatization and real scene virtual simulation method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Man; Deng Junquan: "Exploration and application of a new form of three-dimensional campus landscape sand table model: taking part of the building landscape of Guangzhou College of Technology and Business as an example", Modern Information Technology, no. 13, 10 July 2020 (2020-07-10) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115906537A (en) * 2023-01-09 2023-04-04 南京航空航天大学 Unmanned aerial vehicle photoelectric load simulation system based on 3D visual
CN117950552A (en) * 2024-03-07 2024-04-30 北京理工大学长三角研究院(嘉兴) A playback, annotation and collection method for unmanned aerial vehicle simulation data
CN117950552B (en) * 2024-03-07 2024-08-06 北京理工大学长三角研究院(嘉兴) Unmanned aerial vehicle simulation data playback, labeling and collection method

Similar Documents

Publication Publication Date Title
CN111008422B (en) A method and system for making a real-world map of a building
CN108919944B (en) Virtual roaming method for realizing data lossless interaction at display terminal based on digital city model
EP3951719A1 (en) Blended urban design scene simulation method and system
CN113808261B (en) Panorama-based self-supervised learning scene point cloud completion data set generation method
CN110097635B (en) Establishment method of road 3D roaming simulation driving system based on BIM and VR
CN108648269A (en) The monomerization approach and system of three-dimensional building object model
CN114677467B (en) Terrain image rendering method, apparatus, device, and computer-readable storage medium
CN108269304B (en) A scene fusion visualization method under the multi-geographic information platform
CN102289845B (en) Three-dimensional model drawing method and device
CN103530907B (en) Complicated three-dimensional model drawing method based on images
CN114972665A (en) Three-dimensional visual virtual scene modeling method in unmanned aerial vehicle virtual simulation
CN108803876A (en) Hydraulic engineering displaying exchange method based on augmented reality and system
CN102308320A (en) Generating three-dimensional models from images
CN111563948B (en) Virtual terrain rendering method for dynamically processing and caching resources based on GPU
CN112904827B (en) Unmanned virtual simulation test system for multiple ICUs
CN117150755A (en) An autonomous driving scene simulation method and system based on neural point rendering
CN103093504A (en) Three-dimensional image generating method
CN101770655B (en) Method for simplifying large-scale virtual dynamic group
CN105184843B (en) A 3D animation production method based on OpenSceneGraph
CN116310111A (en) Indoor scene three-dimensional reconstruction method based on pseudo-plane constraint
CN113750516A (en) Method, system and equipment for realizing three-dimensional GIS data loading in game engine
CN114627237A (en) Real-scene three-dimensional model-based front video image generation method
CN112419482B (en) Three-dimensional reconstruction method for group pose of mine hydraulic support with depth point cloud fusion
CN117496084A (en) Large lake scene modeling method, system, computer equipment and storage medium
CN114998503B (en) A method for automatic texture construction of white model based on real scene three-dimensional

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination