CN115409958A - Plane construction method based on Unreal Engine, electronic device and storage medium

Info

Publication number
CN115409958A
Authority
CN
China
Prior art keywords
engine
vertex
coordinate system
vector
plane
Legal status
Pending
Application number
CN202210837695.3A
Other languages
Chinese (zh)
Inventor
崔婵婕
任宇鹏
任增辉
黄积晟
李乾坤
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202210837695.3A
Publication of CN115409958A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data

Abstract

The application discloses a plane construction method based on the Unreal Engine, an electronic device and a storage medium. The method comprises the following steps: obtaining vector data derived from a remote sensing image, and reading out the vector lines and vector planes in the vector data; generating a planar object in the Unreal Engine; adding all contour points of a vector line to the planar object to obtain first original vertices in the planar object, performing width expansion on the first original vertices to obtain newly added vertices, and obtaining a planar mesh body corresponding to the vector line based on the first original vertices and the newly added vertices; adding all contour points of a vector plane to the planar object to obtain second original vertices in the planar object, and obtaining a planar mesh body corresponding to the vector plane based on the second original vertices; and assigning materials to the planar mesh bodies corresponding to the vector lines or vector planes, and generating planes corresponding to the vector data in the Unreal Engine. By this scheme, the convenience of constructing planes in the Unreal Engine can be improved.

Description

Plane construction method based on Unreal Engine, electronic device and storage medium
Technical Field
The present application relates to the field of virtual reality technologies, and in particular to a plane construction method based on the Unreal Engine, an electronic device, and a storage medium.
Background
With the continuous development of virtual reality technology, digital twin applications have received growing attention. Digital twin technology maps the real world onto a virtual world, constructing in the virtual world a virtual model corresponding to the real world, so that problems existing in the real world can be analyzed on the basis of the virtual model. In the prior art, however, constructing a plane in the Unreal Engine requires several pieces of modeling software working together, and the process is cumbersome and inconvenient. In view of this, improving the convenience of constructing planes in the Unreal Engine has become an urgent problem to be solved.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a plane construction method based on the Unreal Engine, an electronic device and a storage medium, which can improve the convenience of constructing planes in the Unreal Engine.
In order to solve the above technical problem, a first aspect of the present application provides a plane construction method based on the Unreal Engine, including: obtaining vector data derived from a remote sensing image, and reading out the vector lines and vector planes in the vector data; generating a planar object in the Unreal Engine; in response to the planar object corresponding to a vector line, adding all contour points of the vector line to the planar object to obtain first original vertices in the planar object, performing width expansion on the first original vertices to obtain newly added vertices, and obtaining a planar mesh body corresponding to the vector line based on the first original vertices and the newly added vertices; in response to the planar object corresponding to a vector plane, adding all contour points of the vector plane to the planar object to obtain second original vertices in the planar object, and obtaining a planar mesh body corresponding to the vector plane based on the second original vertices; and assigning materials to the planar mesh bodies corresponding to the vector lines or the vector planes, and generating planes corresponding to the vector data in the Unreal Engine.
In order to solve the above technical problem, a second aspect of the present application provides an electronic device, including: a memory and a processor coupled to each other, wherein the memory stores program data, and the processor calls the program data to execute the method of the first aspect.
To solve the above technical problem, a third aspect of the present application provides a computer-readable storage medium having stored thereon program data, which when executed by a processor, implements the method of the first aspect.
According to the above scheme, vector data derived from a remote sensing image is obtained, and the vector lines and vector planes in the vector data are read out. A planar object is generated in the Unreal Engine. For a vector line, its contour points are added to the planar object as first original vertices and width expansion is performed on them, so that the first original vertices and the newly added vertices can jointly form a plane, and a planar mesh body corresponding to the vector line is generated in the Unreal Engine from these vertices. For a vector plane, its contour points are added to the planar object as second original vertices, and a planar mesh body corresponding to the vector plane is generated in the Unreal Engine from the second original vertices. Materials are then assigned to the planar mesh bodies, so that planes corresponding to the real world can be conveniently generated in the Unreal Engine, improving the convenience of plane construction in the Unreal Engine.
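The patent describes the width-expansion step only at the level of vertices and mesh bodies. As one concrete reading, the minimal sketch below builds such a mesh with Unreal's UProceduralMeshComponent; the component choice, function name and variable names are assumptions, not the patent's code.

```cpp
// Hedged sketch: widening a vector line into a planar mesh body. Assumes the
// contour has at least two points; UProceduralMeshComponent is one possible
// mesh API, not necessarily the one used by the patent.
#include "ProceduralMeshComponent.h"

void BuildWidenedLineMesh(UProceduralMeshComponent* Mesh,
                          const TArray<FVector>& Contour, // first original vertices
                          float HalfWidth)
{
    TArray<FVector> Vertices;
    TArray<int32> Triangles;

    for (int32 i = 0; i < Contour.Num(); ++i)
    {
        // Direction of the line at this contour point.
        const FVector Dir = (i + 1 < Contour.Num())
            ? (Contour[i + 1] - Contour[i]).GetSafeNormal()
            : (Contour[i] - Contour[i - 1]).GetSafeNormal();
        // Horizontal side vector used for the width expansion.
        const FVector Side = FVector::CrossProduct(Dir, FVector::UpVector);

        Vertices.Add(Contour[i]);                    // first original vertex
        Vertices.Add(Contour[i] + Side * HalfWidth); // newly added vertex
    }

    // Two triangles per quad between consecutive original/new vertex pairs.
    for (int32 i = 0; i + 3 < Vertices.Num(); i += 2)
    {
        Triangles.Add(i);     Triangles.Add(i + 2); Triangles.Add(i + 1);
        Triangles.Add(i + 1); Triangles.Add(i + 2); Triangles.Add(i + 3);
    }

    // Normals, UVs, colors and tangents are left empty in this sketch; a
    // material can then be assigned to the section as described above.
    Mesh->CreateMeshSection(0, Vertices, Triangles,
                            TArray<FVector>(), TArray<FVector2D>(),
                            TArray<FColor>(), TArray<FProcMeshTangent>(),
                            /*bCreateCollision=*/false);
}
```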
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort. Wherein:
FIG. 1 is a schematic flowchart of an embodiment of the Unreal Engine-based model construction method of the present application;
FIG. 2 is a schematic flowchart of an embodiment of the method for building a coordinate system in the Unreal Engine of the present application;
FIG. 3 is a schematic flowchart of an embodiment of the Unreal Engine-based terrain construction method of the present application;
FIG. 4 is a schematic diagram of an application scenario of an embodiment of the Unreal Engine-based terrain construction method of the present application;
FIG. 5 is a schematic flowchart of an embodiment of the Unreal Engine-based mapping method of the present application;
FIG. 6 is a schematic flowchart of an embodiment of the Unreal Engine-based plane construction method of the present application;
FIG. 7 is a schematic flowchart of an embodiment of the Unreal Engine-based solid construction method of the present application;
FIG. 8 is a schematic diagram of an application scenario of an embodiment of the Unreal Engine-based solid construction method of the present application;
FIG. 9 is a schematic flowchart of an embodiment of the Unreal Engine-based icon construction method of the present application;
FIG. 10 is a schematic flowchart of an embodiment of the Unreal Engine-based heat map construction method of the present application;
FIG. 11 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 12 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely an association relationship describing an associated object, and means that there may be three relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two.
The Unreal Engine referred to herein is a game development engine for constructing virtual worlds. The execution subject of each method provided herein is a server or a processing terminal running the Unreal Engine.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an embodiment of the Unreal Engine-based model construction method of the present application, the method including:
S101: a coordinate system is constructed in the Unreal Engine.
Specifically, a coordinate system is built in the Unreal Engine so that positions in the Unreal Engine can be associated with the real world through the coordinate system.
In one application mode, a coordinate system is constructed in the Unreal Engine and an engine origin is determined. The coordinate system comprises a geographic coordinate system, a projection coordinate system and an engine global coordinate system, wherein the geographic coordinate system and the projection coordinate system are related to a global positioning system, and the engine global coordinate system is used for positioning any object in the Unreal Engine. The engine origin corresponds to origin coordinates in the projection coordinate system, and an engine global coordinate in the engine global coordinate system is obtained by subtracting the origin coordinates from the corresponding projection coordinates in the projection coordinate system.
In another application mode, a geographic global object is created in the Unreal Engine and a coordinate system is constructed. The coordinate system comprises a geographic coordinate system, a projection coordinate system, an engine global coordinate system and engine local coordinate systems, and the geographic global object carries the engine origin and the conversion relations between any two of these coordinate systems. The geographic coordinate system and the projection coordinate system are related to a global positioning system, the engine global coordinate system is used for positioning any object in the Unreal Engine, the engine origin corresponds to origin coordinates in the projection coordinate system, an engine global coordinate is obtained by subtracting the origin coordinates from the corresponding projection coordinates, and an engine local coordinate in an engine local coordinate system is determined from the offset of a vertex on the current object relative to the engine global coordinates of that object.
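As one way to picture the geographic global object described above, the sketch below gives a minimal plain-C++ shape for it; in a real project it would be an AActor subclass (the later embodiments call it AGeoActor), and all member names here are assumptions rather than the patent's code.

```cpp
// Sketch only: the geographic global object owns the engine origin and the
// conversion entry points, so objects inheriting from it can reach both.
struct GeoOrigin
{
    double Longitude = 0.0, Latitude = 0.0, Altitude = 0.0; // origin, geographic
    double ProjX = 0.0, ProjY = 0.0, ProjZ = 0.0;           // origin, projected
};

class GeoGlobalObject
{
public:
    GeoOrigin Origin; // engine origin bound to the geographic global object

    // Conversion relations between any two of the four coordinate systems
    // (declared here; the arithmetic is sketched with the six functions in
    // the later embodiments).
    bool Geo_Projection_convert(double& X, double& Y, double& Z, bool Geo2Pro);
    bool Projection_EngineGlobal_convert(double& X, double& Y, double& Z, bool Pro2Engine);
    bool Global_Local_convert(double& X, double& Y, double& Z, bool Global2Local);
};
```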
S102: DEM images, raster data and vector data obtained on the basis of remote sensing images are acquired and imported into the Unreal Engine, and the terrain corresponding to the DEM image, the map corresponding to the raster data, and the planes and solids corresponding to the vector data are generated.
Specifically, a remote sensing image is a grid structure organized in units of pixels. Raster data can be obtained from the remote sensing image, and vector data can be obtained by vector extraction from the grid structure of the remote sensing image. A Digital Elevation Model (DEM) extracted algorithmically from the remote sensing image is obtained, and a DEM image is derived from the Digital Elevation Model. The raster data comprises the image information of the remote sensing image, the DEM image comprises the height information in the remote sensing image, and the vector data comprises the vector lines and vector planes extracted from the remote sensing image.
In one application mode, a DEM image obtained on the basis of a remote sensing image is acquired; the remote sensing image is read and its image information extracted to obtain raster data; and the grid structure of the remote sensing image is vector-extracted to obtain vector data. The DEM image is imported into the Unreal Engine to obtain the terrain corresponding to the DEM image, the raster data is imported to obtain the map corresponding to the raster data, and the vector data is imported to obtain the planes and solids corresponding to the vector data, wherein a plane corresponds to a vector line or a vector plane and a solid corresponds to a vector plane: a plane can be generated in the Unreal Engine from a vector line after widening it, a plane can be generated directly from a vector plane, and a solid can be generated from a vector plane after extruding it upward.
In another application mode: a DEM image obtained on the basis of the remote sensing image is acquired and imported into the Unreal Engine, and the terrain corresponding to the DEM image is generated in the Unreal Engine; raster data obtained on the basis of the remote sensing image is acquired and imported into the Unreal Engine, and the map corresponding to the raster data is generated; vector data obtained on the basis of the remote sensing image is acquired, the vector lines or vector planes in the vector data are imported into the Unreal Engine, and the planes corresponding to the vector data are generated; and the vector planes in the vector data are imported into the Unreal Engine, and the solids corresponding to the vector data are generated.
Specifically, the remote sensing image is read using a spatial data conversion library, so that the Unreal Engine is compatible with remote sensing images and can take them as the source data for constructing the virtual model, with the remote sensing image serving as the real-world reference of the virtual model. Generating in the Unreal Engine the terrain corresponding to the DEM image, the map corresponding to the raster data, and the planes and solids corresponding to the vector data reduces the difficulty of constructing a virtual model in the Unreal Engine.
In one application scenario, the spatial data conversion library is the open-source Geospatial Data Abstraction Library (GDAL), released under an X/MIT-style license, which makes the Unreal Engine compatible with remote sensing images organized as pixel grids.
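As an illustration of how GDAL enters this pipeline, the sketch below opens a raster (a remote sensing image or DEM) and queries its size; the function name and path are placeholders, while the GDAL calls themselves are standard API.

```cpp
// Minimal GDAL raster access; the caller is responsible for GDALClose().
#include "gdal_priv.h"

GDALDataset* OpenRemoteSensingImage(const char* Path)
{
    GDALAllRegister(); // register all raster drivers (once per process)

    GDALDataset* Dataset =
        static_cast<GDALDataset*>(GDALOpen(Path, GA_ReadOnly));
    if (Dataset != nullptr)
    {
        const int WidthPx  = Dataset->GetRasterXSize(); // pixels per row
        const int HeightPx = Dataset->GetRasterYSize(); // number of rows
        const int Bands    = Dataset->GetRasterCount(); // spectral bands
        (void)WidthPx; (void)HeightPx; (void)Bands;     // used by later steps
    }
    return Dataset;
}
```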
S103: a virtual model corresponding to the remote sensing image is generated in the Unreal Engine based on the positions of the terrain, the map, the planes and the solids in the coordinate system.
Specifically, the coordinates of the terrain, the map, the planes and the solids in the coordinate system of the Unreal Engine are determined, and the virtual terrain, virtual map, virtual planes and virtual solids are generated at the corresponding coordinates in the Unreal Engine, producing the virtual model corresponding to the remote sensing image.
In one application mode, the coordinate system comprises a geographic coordinate system, a projection coordinate system and an engine global coordinate system, and the virtual terrain, virtual map, virtual planes and virtual solids are generated in the Unreal Engine according to the engine global coordinates of the terrain, the map, the planes and the solids, so that the virtual model corresponding to the remote sensing image is constructed in the Unreal Engine and the fidelity of the virtual model is improved.
In another application mode, the coordinate system comprises a geographic coordinate system, a projection coordinate system, an engine global coordinate system and engine local coordinate systems. Based on the engine global coordinates of the terrain, the map, the planes and the solids, and the engine local coordinates of their vertices, the virtual terrain, virtual map, virtual planes and virtual solids are generated in the Unreal Engine, so that the virtual model corresponding to the remote sensing image is constructed with improved fidelity and positioning accuracy.
According to the above scheme, a coordinate system is built in the Unreal Engine so that the real world and the virtual world correspond accurately. DEM images, raster data and vector data obtained on the basis of remote sensing images are acquired, with the remote sensing images serving as the source data for building the virtual model. The DEM image is imported into the Unreal Engine to obtain the corresponding terrain, the raster data is imported to obtain the corresponding map, and the vector data is imported to obtain the corresponding planes and solids, and the virtual model corresponding to the remote sensing image is generated based on the positions of the terrain, the map, the planes and the solids in the coordinate system. In this way the Unreal Engine is compatible with remote sensing images and uses them as source data for generating the virtual model, and the remote sensing image can serve as the real-world reference of the virtual model, reducing the difficulty of constructing virtual models in the Unreal Engine.
In some embodiments, please refer to FIG. 2, which is a schematic flowchart of an embodiment of the method for building a coordinate system in the Unreal Engine according to the present application. The process of building the coordinate system in the Unreal Engine specifically includes:
S201: a geographic coordinate system and a projection coordinate system are imported into the Unreal Engine, wherein the geographic coordinate system and the projection coordinate system are associated with a global positioning system.
Specifically, a geographic coordinate system and a projection coordinate system are introduced into the Unreal Engine, both associated with a global positioning system. The geographic coordinate system is a spherical coordinate system that determines a position from longitude, latitude and altitude; the projection coordinate system is obtained from the geographic coordinate system by projecting the sphere onto a cylinder and unrolling it into a plane.
In one application mode, the geographic coordinate system and the projection coordinate system are imported into the Unreal Engine using the spatial data conversion library, and the geographic coordinate system and its corresponding projection coordinate system are determined in the Unreal Engine.
In another application mode, a georeferencing plug-in in the Unreal Engine imports a geographic coordinate system and a projection coordinate system matched to the planet type defined in the Unreal Engine, where the planet type is either a flat planet or a round planet. A flat planet is approximated by a plane and modeled according to a projection, with all coordinates defined in the projection coordinate system using a translation offset. A round planet can be modeled with a geocentric coordinate system when the origin is at the center of the planet, or the origin can be placed at any point on the planet's surface and modeled with the geographic coordinate system or the projection coordinate system.
In one application scenario, the geographic coordinate system includes the WGS84 coordinate system, and the projection coordinate system includes the Mercator projection coordinate system corresponding to the WGS84 coordinate system.
Specifically, the WGS84 coordinate system is the coordinate system established for the GPS global positioning system. A Mercator projection is obtained by imagining a cylinder whose axis coincides with the earth's axis, tangent to or cutting the earth, projecting the graticule onto the cylindrical surface under the conformal (equal-angle) condition, and unrolling the cylinder into a plane. The Universal Transverse Mercator (UTM) projection divides the earth's surface between 84 degrees north latitude and 80 degrees south latitude into longitudinal zones of 6 degrees of longitude each, giving the UTM coordinate system. Setting the geographic coordinate system and the projection coordinate system to such common coordinate systems improves the compatibility of the coordinate system in the Unreal Engine and reduces the difficulty of position matching between the Unreal Engine and the real world.
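A sketch of this geographic-to-projection conversion with GDAL/OGR follows; EPSG:32650 (UTM zone 50N) is an arbitrary example zone, and the axis-order call assumes GDAL 3 or later.

```cpp
// WGS84 (lon/lat) -> UTM projected coordinates via OGR.
#include "ogr_spatialref.h"

bool Wgs84ToUtm(double Lon, double Lat, double& X, double& Y)
{
    OGRSpatialReference Geographic, Projected;
    Geographic.importFromEPSG(4326);  // WGS84 geographic coordinate system
    Projected.importFromEPSG(32650);  // UTM zone 50N, an example zone
    // GDAL 3 defaults EPSG:4326 to lat/lon order; force classic lon/lat.
    Geographic.SetAxisMappingStrategy(OAMS_TRADITIONAL_GIS_ORDER);

    OGRCoordinateTransformation* Transform =
        OGRCreateCoordinateTransformation(&Geographic, &Projected);
    if (Transform == nullptr)
        return false;

    X = Lon;
    Y = Lat;
    const bool bOk = Transform->Transform(1, &X, &Y) != 0;
    OGRCoordinateTransformation::DestroyCT(Transform);
    return bOk;
}
```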
S202: a geographical global object is created in the illusion engine, an engine origin of the illusion engine is obtained based on the position of the geographical global object in a geographical coordinate system, and the engine origin is converted into a projection coordinate system from the geographical coordinate system to obtain an origin coordinate of the engine origin.
Specifically, a geographic global object is created in the Unreal Engine, the engine origin in the Unreal Engine is defined by the geographic global object, and the geographic coordinates of the geographic global object in the geographic coordinate system are converted into the projection coordinate system to obtain the origin coordinates of the engine origin in the projection coordinate system.
In one application scenario, a geographic global object, AGeoActor, is created in the Unreal Engine and used to define the engine origin. The geographic coordinates of the geographic global object in the geographic coordinate system are determined and converted into the projection coordinate system using the spatial data conversion library, and the resulting projection coordinates of the geographic global object are taken as the origin coordinates of the engine origin in the projection coordinate system.
S203: an engine global coordinate system and engine local coordinate systems are established based on the origin coordinates, wherein the engine global coordinate system takes the origin coordinates as its coordinate origin, and an engine local coordinate system takes the engine global coordinates of an object in the Unreal Engine as its coordinate origin.
Specifically, the engine global coordinate system is established with the origin coordinates as its coordinate origin, and an engine local coordinate system is established for each object with the engine global coordinates of that object as its coordinate origin, so that the engine global coordinate system is used for positioning any object in the Unreal Engine and an engine local coordinate system is used for positioning the vertices of any object in the Unreal Engine.
In one application mode, the engine global coordinate system is established with the origin coordinates as its coordinate origin, and the engine local coordinate system of the current object is established with the position of the current object in the engine global coordinate system as its coordinate origin.
In another application mode, the engine global coordinate system is established with the origin coordinates as its coordinate origin, and the engine local coordinate system of the current object is established with the position, in the engine global coordinate system, of a designated vertex on a preset part of the current object as its coordinate origin.
Specifically, the engine global coordinates of any object are the offset of that object relative to the engine origin, that is, relative to the origin coordinates rather than to the geographic coordinate system. For example, with the origin at projection coordinates (500000, 3300000), an object at (500123, 3300456) has engine global coordinates of only (123, 456). The closer an object in the Unreal Engine is to the engine origin, the smaller the numerical values of its engine global coordinates, and when the Unreal Engine contains a plurality of objects, the values for at least some of them remain small, so that calling the engine global coordinates of an object, or calculating with them, effectively reduces the processing burden of the Unreal Engine and improves calculation efficiency.
S204: the conversion relation between any two of the geographic coordinate system, the projection coordinate system, the engine global coordinate system and the engine local coordinate systems is determined, giving the coordinate system in the Unreal Engine.
Specifically, a conversion relation exists between any two of the coordinate systems, and the conversion relations between any two of the geographic coordinate system, the projection coordinate system, the engine global coordinate system and the engine local coordinate systems are determined to fix the final coordinate system.
Optionally, the engine origin and the conversion relations between any two of the coordinate systems are bound to the geographic global object, so that any object inheriting from the geographic global object can obtain the engine origin and the conversion relations.
In one application mode, any two coordinate systems are taken from the set of coordinate systems, the conversion relation between them is determined from the way each coordinate system is constructed, and the engine origin and the conversion relations between any two coordinate systems are bound to the geographic global object, so that any object inheriting from the geographic global object can obtain the engine origin and the conversion relations.
In another application mode, the engine global coordinate system is taken as the core of the coordinate system. The conversion relations between the engine global coordinate system and each of the geographic coordinate system, the projection coordinate system and the engine local coordinate systems are determined, the conversion relation between the geographic coordinate system and the projection coordinate system is determined using the spatial data conversion library, and the conversion relations between the geographic coordinate system and an engine local coordinate system, and between the projection coordinate system and an engine local coordinate system, are obtained by composing the relations already determined.
In this embodiment, a coordinate system comprising a geographic coordinate system, a projection coordinate system, an engine global coordinate system and engine local coordinate systems is constructed in the Unreal Engine, where the geographic coordinate system and the projection coordinate system are related to a global positioning system, reducing the difficulty of position matching between the Unreal Engine and the real world. A geographic global object is created in the Unreal Engine, its coordinates in the projection coordinate system are taken as the origin coordinates, and the engine global and engine local coordinate systems are created on that basis: the engine global coordinate system takes the origin coordinates as its coordinate origin, and an engine local coordinate system takes the engine global coordinates of an object as its coordinate origin. The engine global coordinate system is thus used for positioning any object in the Unreal Engine and an engine local coordinate system for positioning the vertices of any object; because the numerical values of the engine global coordinates of at least some objects, and of the engine local coordinates of their vertices, are small, the processing burden of the Unreal Engine is effectively reduced and calculation efficiency is improved. Finally, the conversion relation between any two of the coordinate systems is determined, improving the convenience of coordinate conversion for any object in the Unreal Engine.
In some implementation scenarios, the step S202 specifically includes: creating a geographic global object in the Unreal Engine, the geographic global object corresponding to a geographic coordinate in the geographic coordinate system; taking the position corresponding to the geographic coordinate of the geographic global object as the engine origin of the Unreal Engine; converting the geographic coordinates of the engine origin from the geographic coordinate system into the projection coordinate system using the spatial data conversion library; and taking the projection coordinates of the engine origin in the projection coordinate system as the origin coordinates.
Specifically, a geographic global object is created in the Unreal Engine to define the engine origin. The geographic global object corresponds to a geographic coordinate in the geographic coordinate system; the position of the geographic global object in the geographic coordinate system is determined from that geographic coordinate and taken as the engine origin of the Unreal Engine; the geographic coordinates of the engine origin are converted from the geographic coordinate system into the projection coordinate system using the spatial data conversion library; and the projection coordinates of the engine origin in the projection coordinate system are taken as the origin coordinates. The geographic global object can thus be used to define the engine origin, and once the engine origin is determined, its projection coordinates are obtained as the origin coordinates, making it convenient to construct the engine global coordinate system.
Further, after converting the geographic coordinates of the engine origin into the projection coordinate system and taking the projection coordinates of the engine origin as the origin coordinates, the method further comprises: determining the geographic position attribute of the engine origin in the geographic coordinate system, and determining the engine position attribute of the engine origin in the projection coordinate system. The geographic position attribute comprises the longitude, latitude and altitude of the engine origin in the geographic coordinate system; the engine position attribute comprises the X, Y and Z coordinates matched respectively to that longitude, latitude and altitude; and both the geographic position attribute and the engine position attribute are expressed as floating-point numbers.
Specifically, the longitude, latitude and altitude of the engine origin are determined in the geographic coordinate system, giving the geographic position attribute of the engine origin, and the X coordinate matched to the longitude, the Y coordinate matched to the latitude and the Z coordinate matched to the altitude are determined in the projection coordinate system, giving the engine position attribute of the engine origin. Both attributes are represented as floating-point numbers, improving the accuracy of the geographic position attribute and the engine position attribute.
Further, the geographic position attribute defines longitude, latitude and altitude as floating-point values and indicates the position of the geographic global object in the real world, while the engine position attribute indicates the position of the geographic global object in the Unreal Engine as floating-point X, Y and Z coordinates.
In a specific application scenario, the geographic position attribute defines longitude, latitude and altitude as floating-point values and is represented in the Unreal Engine by GeoCoordinate, and the engine position attribute consists of floating-point X, Y and Z coordinates and is represented in the Unreal Engine by EngineCoordinate.
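A plausible shape for these two attributes is sketched below; the patent fixes only the field meanings and the floating-point representation, so the exact layout is an assumption.

```cpp
// GeoCoordinate: position of the geographic global object in the real world.
struct GeoCoordinate
{
    double Longitude = 0.0; // degrees, east positive
    double Latitude  = 0.0; // degrees, north positive
    double Altitude  = 0.0; // meters
};

// EngineCoordinate: the matching position in the projection coordinate system.
struct EngineCoordinate
{
    double X = 0.0; // abscissa, matched to longitude
    double Y = 0.0; // ordinate, matched to latitude
    double Z = 0.0; // vertical coordinate, matched to altitude
};
```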
In one application, the step S203 specifically includes: establishing the engine global coordinate system with the origin coordinates as its coordinate origin; and establishing, for each object in the Unreal Engine, an engine local coordinate system with the engine global coordinates of that object as its coordinate origin. The engine global coordinates of the current object in the engine global coordinate system comprise the offset of the current object relative to the origin coordinates; the engine local coordinates of a vertex on the current object in the corresponding engine local coordinate system comprise the offset of that vertex relative to the engine global coordinates of the current object.
Specifically, the engine global coordinate system is established in the Unreal Engine with the origin coordinates as its coordinate origin, so that the engine global coordinate system can locate the position of any object in the Unreal Engine, and the engine global coordinates of the current object represent the offset of the current object relative to the origin coordinates.
Further, an engine local coordinate system is established for each object in the Unreal Engine with the engine global coordinates of that object as its coordinate origin, so that the engine local coordinate system can locate the vertices of any object in the Unreal Engine, and the engine local coordinates of a vertex on the current object represent the offset of that vertex relative to the engine global coordinates of the current object.
It can be understood that the closer an object in the Unreal Engine is to the engine origin, the smaller the numerical values of its engine global coordinates, and when the Unreal Engine contains a plurality of objects, the values for at least some of them are small, so that calling the engine global coordinates of an object, or calculating with them, effectively reduces the processing burden of the Unreal Engine and improves calculation efficiency. Likewise, a vertex on any object can be positioned from the engine local coordinate system, and since the engine local coordinates represent the offset of a vertex relative to the engine global coordinates of its object, their numerical values are also small, so that calling or calculating with engine local coordinates brings the same reduction in processing burden and gain in calculation efficiency.
In one application, the step S204 specifically includes: generating a conversion function between any two of the geographic coordinate system, the projection coordinate system, the engine global coordinate system and the engine local coordinate systems; and binding the geographic position attribute, the engine origin and the conversion functions between any two coordinate systems to the geographic global object, to obtain the coordinate system associated with the geographic global object in the Unreal Engine.
Specifically, a conversion function is generated between any two of the coordinate systems, and the geographic position attribute, the engine origin and the conversion functions are bound to the geographic global object, so that any other object inheriting from the geographic global object can obtain the geographic position attribute, the engine origin and the conversion functions, and a newly generated object can acquire and use the corresponding position attributes and conversion functions.
Further, generating a conversion function between any two of the coordinate systems includes: taking any two of the geographic coordinate system, the projection coordinate system, the engine global coordinate system and the engine local coordinate systems as a first coordinate system and a second coordinate system; determining the coordinate types and the conversion type corresponding to the first coordinate system and the second coordinate system; and generating the conversion function between the first coordinate system and the second coordinate system based on the coordinate types and the conversion type.
Specifically, any two coordinate systems are taken as the first coordinate system and the second coordinate system, and the coordinate types corresponding to them and the conversion type between them are determined, the geographic coordinate system, the projection coordinate system, the engine global coordinate system and the engine local coordinate systems each having a distinct coordinate type. The conversion type takes a true or false value: when it is true, the first coordinate system is converted into the second coordinate system, and when it is false, the second coordinate system is converted into the first coordinate system. Based on the coordinate types and the conversion type, the conversion function between the first coordinate system and the second coordinate system is generated, so that the coordinate system as a whole includes a conversion function between any two of its coordinate systems.
Further, when the coordinate systems are used, conversion between coordinate systems is realized by calling the conversion functions, improving the efficiency of conversion between any two coordinate systems.
In a specific application scenario: a first conversion function corresponds to the geographic coordinate system and the projection coordinate system, and calls the spatial data conversion library to convert between the geographic coordinate system and the projection coordinate system. A second conversion function corresponds to the projection coordinate system and the engine global coordinate system: it calls up the origin coordinates, subtracts the origin coordinates from a projection coordinate to obtain the engine global coordinate, and adds the origin coordinates to an engine global coordinate to obtain the projection coordinate. A third conversion function corresponds to the engine global coordinate system and an engine local coordinate system: it calls up the engine global coordinates of the current object, subtracts them from the engine global coordinates of a vertex on the current object to obtain the engine local coordinates of that vertex, and adds them to the engine local coordinates of a vertex to recover the engine global coordinates of that vertex. A fourth conversion function corresponds to the engine global coordinate system and the geographic coordinate system, and calls the first and second conversion functions to convert between them. A fifth conversion function corresponds to an engine local coordinate system and the projection coordinate system, and calls the second and third conversion functions. A sixth conversion function corresponds to an engine local coordinate system and the geographic coordinate system, and calls the first, second and third conversion functions.
Specifically, the first conversion function is defined as bool Geo_Projection_convert(FVector Point, bool Geo2Pro), and the conversion between geographic coordinates and projection coordinates is implemented through the GDAL library.
Further, the unit and magnitude of an engine global coordinate are consistent with the projection coordinates, the engine global coordinate being the difference between the projection coordinates of the current point and those of the engine origin. The second conversion function is defined as bool Projection_EngineGlobal_convert(FVector Point, bool Pro2Engine), and its implementation steps are: first, the projection coordinates of the engine origin AGeoActor are calculated using the Geo_Projection_convert function; a projection coordinate is then converted to an engine global coordinate by subtracting the projection coordinates of the AGeoActor from the projection coordinate, and an engine global coordinate is converted to a projection coordinate by adding the projection coordinates of the AGeoActor to the engine global coordinate.
Further, the engine global coordinate is in fact the position of the current object in the engine, while the engine local coordinates define the vertex coordinates of the static mesh of the current object, an engine local coordinate being the difference between the current vertex and the position coordinates of the current object in the engine. The third conversion function is defined as bool Global_Local_convert(FVector Point, bool Global2Local); conversion from engine global to engine local coordinates is realized by subtracting the engine global coordinates of the object from the coordinates of the current point, and conversion from engine local to engine global coordinates by adding the engine global coordinates of the object to the coordinates of the current point.
Further, the fourth conversion function, between the engine global coordinate system and the geographic coordinate system, is defined as bool Geo_EngineGlobal_convert(FVector Point, bool Geo2Engine). Converting a geographic coordinate to an engine global coordinate proceeds by first calling the first conversion function to convert the geographic coordinate into a projection coordinate and then calling the second conversion function to convert the projection coordinate into an engine global coordinate; the reverse conversion follows the opposite steps.
Further, the fifth conversion function, between an engine local coordinate system and the projection coordinate system, is defined as bool Pro_EngineLocal_convert(FVector Point, bool Pro2Engine). Converting a projection coordinate to an engine local coordinate proceeds by first calling the second conversion function to convert the projection coordinate into an engine global coordinate and then calling the third conversion function to convert the engine global coordinate into an engine local coordinate; the reverse conversion follows the opposite steps.
Further, the sixth conversion function, between an engine local coordinate system and the geographic coordinate system, is defined as bool Geo_EngineLocal_convert(FVector Point, bool Geo2Engine). Converting a geographic coordinate to an engine local coordinate proceeds by calling the first conversion function to convert the geographic coordinate into a projection coordinate, then the second conversion function to convert the projection coordinate into an engine global coordinate, and finally the third conversion function to convert the engine global coordinate into an engine local coordinate; the reverse conversion follows the opposite steps.
It should be noted that in each conversion function, Point represents the coordinates of the point to be converted and FVector is its coordinate type: for a geographic coordinate it holds longitude, latitude and altitude, and for a projection, engine global or engine local coordinate it holds X, Y and Z. The flag (Geo2Pro and its counterparts) is the conversion type, taking the values true and false: true means the first coordinate system is converted into the second, and false means the second coordinate system is converted into the first. Through these six conversion functions, conversion between any two coordinate systems can be achieved; when a conversion between two coordinate systems is needed, the corresponding function is called, improving conversion efficiency.
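The add/subtract relations above can be condensed into a short sketch. A minimal FVector stand-in keeps it self-contained outside Unreal, and points are passed by reference so the converted value reaches the caller; both choices are assumptions, since the patent gives only the signatures and the arithmetic.

```cpp
// Self-contained stand-in for Unreal's FVector, for sketch purposes only.
struct FVector
{
    double X = 0, Y = 0, Z = 0;
    FVector operator+(const FVector& O) const { return { X + O.X, Y + O.Y, Z + O.Z }; }
    FVector operator-(const FVector& O) const { return { X - O.X, Y - O.Y, Z - O.Z }; }
};

static FVector GOriginProjected; // projection coordinates of the AGeoActor
static FVector GObjectGlobal;    // engine global position of the current object

bool Geo_Projection_convert(FVector& Point, bool Geo2Pro)
{
    // Stub: in a full build this wraps the GDAL/OGR transform shown earlier.
    (void)Point; (void)Geo2Pro;
    return true;
}

bool Projection_EngineGlobal_convert(FVector& Point, bool Pro2Engine)
{
    // engine global = projection - origin; projection = engine global + origin
    Point = Pro2Engine ? Point - GOriginProjected : Point + GOriginProjected;
    return true;
}

bool Global_Local_convert(FVector& Point, bool Global2Local)
{
    // engine local = vertex global - object's global, and back again
    Point = Global2Local ? Point - GObjectGlobal : Point + GObjectGlobal;
    return true;
}

bool Geo_EngineLocal_convert(FVector& Point, bool Geo2Engine)
{
    // Sixth function: chain first -> second -> third (reversed when false).
    if (Geo2Engine)
        return Geo_Projection_convert(Point, true)
            && Projection_EngineGlobal_convert(Point, true)
            && Global_Local_convert(Point, true);
    return Global_Local_convert(Point, false)
        && Projection_EngineGlobal_convert(Point, false)
        && Geo_Projection_convert(Point, false);
}
```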
It should be noted that, after the coordinate system is constructed, the Unreal Engine corresponds to a coordinate system and an engine origin, the coordinate system comprising a geographic coordinate system, a projection coordinate system and an engine global coordinate system; the geographic coordinate system and the projection coordinate system are related to a global positioning system, the engine global coordinate system is used for positioning any object in the Unreal Engine, the engine origin corresponds to origin coordinates in the projection coordinate system, and an engine global coordinate in the engine global coordinate system is obtained by subtracting the origin coordinates from the corresponding projection coordinate in the projection coordinate system.
In some embodiments, please refer to FIG. 3, which is a schematic flowchart of an embodiment of the Unreal Engine-based terrain construction method according to the present application. The process of obtaining a DEM image derived from a remote sensing image, importing the DEM image into the Unreal Engine and generating the terrain corresponding to the DEM image specifically includes:
S301: a DEM image obtained on the basis of the remote sensing image is acquired, and the image size of the DEM image is determined.
Specifically, a Digital Elevation Model (DEM) extracted algorithmically from the remote sensing image is obtained, a DEM image is derived from the Digital Elevation Model, and the image size of the DEM image is determined.
In one application mode, the DEM image corresponding to the remote sensing image is extracted using the spatial data conversion library, the data size of the DEM image is obtained, and the DEM image is interpolated on the basis of its data size and the unit length of the Unreal Engine to obtain the image size of the DEM image.
In another application mode, the digital elevation model is read using the spatial data conversion library to obtain the DEM image and its data size, the data range of the DEM image is computed from the data size, and the DEM image is interpolated based on its data range to obtain an image size matched to the unit length of the Unreal Engine.
S302: the DEM image is divided into at least one image block based on the image size and an image size threshold in the Unreal Engine, and the image sub-size of each image block is determined.
Specifically, a single terrain object in the Unreal Engine corresponds to an image size threshold that limits the size range of the constructed terrain. When terrain is constructed across regions, or the original extent of the terrain is large, the corresponding actual extent in the DEM image is large, and the image size of the DEM image may exceed the image size threshold, making it difficult to construct terrain matched to the actual extent in the Unreal Engine.
Further, when the image size is smaller than or equal to the image size threshold, the DEM image forms a single image block; when the image size is larger than the image size threshold, the DEM image is divided into a plurality of image blocks based on the image size and the image size threshold in the Unreal Engine, and the image sub-size of each image block is determined.
In one application mode, the DEM image corresponds to an image length and an image width, the image size threshold corresponds to an upper limit on area, the number of image blocks into which the DEM image is divided is determined from the ratio of the product of the image length and the image width to the image size threshold, and the image sub-size of each image block is obtained.
In another application mode, the DEM image corresponds to a pixel size, the image size threshold corresponds to an upper limit on pixel size, the number of image blocks into which the DEM image is divided is determined from the ratio of the pixel size of the DEM image to that upper limit, and the image sub-size of each image block is obtained.
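The pixel-size variant of the split can be sketched directly; MaxPx stands in for the engine's per-object image size threshold, whose concrete value the patent does not fix.

```cpp
// Split a DEM of WidthPx x HeightPx pixels into blocks of at most MaxPx a side.
#include <algorithm>
#include <vector>

struct ImageBlock { int X0, Y0, W, H; }; // pixel origin and sub-size of a block

std::vector<ImageBlock> SplitDem(int WidthPx, int HeightPx, int MaxPx)
{
    std::vector<ImageBlock> Blocks;
    for (int Y = 0; Y < HeightPx; Y += MaxPx)
        for (int X = 0; X < WidthPx; X += MaxPx)
            Blocks.push_back({ X, Y,
                               std::min(MaxPx, WidthPx - X),     // block width
                               std::min(MaxPx, HeightPx - Y) }); // block height
    return Blocks;
}
```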
S303: for each image block, the terrain size, terrain height and terrain coordinates corresponding to the image block are generated based on the image sub-size of the image block, and a terrain object matched to the terrain size and terrain height is generated at the terrain coordinates in the Unreal Engine.
Specifically, the terrain size of the terrain object is determined from the image sub-size of the image block, and the terrain height and terrain coordinates of the terrain object are obtained from the height information and position information of the image block, respectively.
Further, the terrain size of a terrain object in the Unreal Engine is defined by the segment pixel size, the segments and the components: each terrain object comprises a plurality of components, each component corresponds to a component pixel size related to the number of segments in the component and the segment pixel size, and all components adopt a uniform component pixel size.
In one application mode, the image block corresponds to height information and position information, the components contain the maximum number of segments, and each segment adopts the maximum segment pixel size. The number of components in the terrain object is determined from the image sub-size of the image block and the component pixel size, the terrain size of the terrain object is obtained from the number of components, and the terrain height and terrain coordinates of the terrain object are obtained from the height information and position information of the image block, respectively.
In another application mode, the image block corresponds to height information and position information, the components contain the maximum number of segments, and each segment adopts a specified segment pixel size. The number of components in the terrain object is determined from the image sub-size of the image block and the component pixel size, the terrain size of the terrain object is obtained from the number of components, and the terrain height and terrain coordinates of the terrain object are obtained from the height information and position information of the image block, respectively.
S304: and generating a terrain corresponding to the DEM image in the illusion engine based on the terrain objects generated by all the image blocks.
Specifically, a position corresponding to a terrain coordinate is determined in the illusion engine, a terrain object matched with the terrain size and the terrain height is generated at the terrain coordinate, so that the terrain object corresponds to the position of the real world until the terrain object corresponding to all the image blocks is traversed, and a terrain corresponding to the DEM image is generated in the illusion engine.
In this embodiment, a DEM image obtained based on a remote sensing image is obtained and the image size of the DEM image is determined. The DEM image is divided based on the image size and the image size threshold of the illusion engine to obtain at least one image block. For each image block, the terrain size, terrain height and terrain coordinate corresponding to the image block are generated based on the image sub-size corresponding to the image block, and a terrain object matched with the terrain size and the terrain height is generated at the terrain coordinate, so that the terrain object corresponds to a position in the real world, until the terrain objects corresponding to all the image blocks are traversed and a terrain corresponding to the DEM image is generated in the illusion engine. After the DEM image is divided into at least one image block, a terrain of any size can be generated in the illusion engine, which improves the degree of freedom of terrain generation in the illusion engine.
In some implementation scenarios, the step S301 specifically includes: reading a DEM image corresponding to the remote sensing image by using a spatial data conversion library to obtain the DEM image and the data size and affine matrix corresponding to the DEM image, and determining the spatial resolution of the DEM image based on the affine matrix; determining the data range of the DEM image based on the data size and the spatial resolution of the DEM image; and, based on the data range and the unit length of the illusion engine, interpolating pixels in the DEM image to obtain a DEM image whose image size matches the unit length of the illusion engine.
Specifically, a DEM image corresponding to the remote sensing image is read by using a spatial data conversion library to obtain the DEM image and the data size and affine matrix corresponding to the DEM image, and the spatial resolution of the DEM image is determined from the affine matrix. The data range of the DEM image is obtained based on the product of the spatial resolution stored in the affine matrix and the data size, pixels in the DEM image are interpolated based on the data range of the DEM image and the unit length of the illusion engine to obtain a DEM image whose image size matches the unit length of the illusion engine, and the image length and image width of the interpolated DEM image are used as the image size of the DEM image, so that the illusion engine can be compatible with the DEM image and construct a terrain by using the DEM image, wherein the data size is the pixel size of the DEM image, and the spatial resolution is the length corresponding to a single pixel.
In a specific application scenario, the DEM image is read by using GDAL to obtain the data stored in the DEM image, the maximum and minimum values of the data, the data size, the data format, the affine matrix and the coordinate system information. The data range of the DEM image, that is, the length and width of the DEM image in meters, is calculated from the data size and the spatial resolution stored in the affine matrix, the spatial resolution being the length corresponding to a single pixel, in meters. The DEM image is interpolated according to the data range and the unit length of the illusion engine so that the image size of the DEM image matches the unit length of the illusion engine, and the interpolated DEM data is stored as 16-bit unsigned integers. The unit length of the illusion engine is one meter here; in other application scenarios the unit length of the illusion engine can take other values, which is not specifically limited in this application.
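As an illustrative aid (not the claimed method itself), the reading step above can be sketched with the GDAL C++ API roughly as follows; the function and variable names are assumptions:
#include "gdal_priv.h"
// Illustrative sketch: read a DEM with GDAL and derive its data range in meters.
void ReadDemInfo(const char* DemPath)
{
    GDALAllRegister();
    GDALDataset* Dataset = static_cast<GDALDataset*>(GDALOpen(DemPath, GA_ReadOnly));
    if (Dataset == nullptr) return;
    const int Width = Dataset->GetRasterXSize();   // data size in pixels
    const int Height = Dataset->GetRasterYSize();
    double GeoTransform[6] = { 0.0 };              // affine matrix
    Dataset->GetGeoTransform(GeoTransform);
    const double ResX = GeoTransform[1];           // spatial resolution, meters per pixel
    const double ResY = -GeoTransform[5];          // usually negative for north-up images
    const double RangeX = Width * ResX;            // data range in meters
    const double RangeY = Height * ResY;
    // Next: resample so that one pixel corresponds to one engine unit length,
    // then store the heights as 16-bit unsigned integers (per the scenario above).
    (void)RangeX; (void)RangeY;
    GDALClose(Dataset);
}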
In some implementation scenarios, the step S302 specifically includes: respectively calculating the ratios of the image length and the image width to the image size threshold of the illusion engine and rounding up, to obtain the horizontal number and the vertical number corresponding to the image blocks; determining the number of image blocks corresponding to the DEM image as a first numerical value based on the product of the horizontal number and the vertical number; and determining the number of horizontal pixels corresponding to each image block in the horizontal direction and the number of vertical pixels corresponding to each image block in the vertical direction based on the ratios of the image size to the horizontal number and the vertical number respectively; the image sub-size of each image block includes the number of horizontal pixels and the number of vertical pixels corresponding to that image block.
Specifically, the DEM image corresponds to an image length and an image width, a ratio of the image length to an image size threshold is obtained and rounded upwards to obtain a horizontal number corresponding to the image block, the ratio of the image width to the image size threshold is obtained and rounded upwards to obtain a longitudinal number corresponding to the image block, the horizontal number is multiplied by the longitudinal number to obtain a first numerical value, and therefore the number of the image blocks corresponding to the DEM image is determined to be the first numerical value, so that the DEM image can be divided into a plurality of image blocks, and therefore a terrain with any size can be constructed in the illusion engine.
Further, the image length corresponding to the DEM image is divided by the horizontal number to obtain the number of horizontal pixels corresponding to each image block in the horizontal direction, the image width corresponding to the DEM image is divided by the vertical number to obtain the number of vertical pixels corresponding to each image block in the vertical direction, and the image sub-size of the image block is determined based on the number of horizontal pixels and the number of vertical pixels.
In a specific application scenario, an image size threshold is determined based on the maximum segment pixel size, the maximum segment number and the maximum component number in the illusion engine. The image length and the image width corresponding to the DEM image are divided by the image size threshold respectively, giving the horizontal number num_landscape_x of image blocks of the segmented DEM image in the horizontal direction and the vertical number num_landscape_y of image blocks in the vertical direction. The product of num_landscape_x and num_landscape_y is the total number of terrain objects to be created, so that a DEM image of any size can be divided into at least one image block, and the number of terrain objects to be created corresponding to the DEM image is obtained and recorded as the first numerical value; a terrain of any size can therefore be constructed in the illusion engine, breaking through the size limit of the constructed terrain in the illusion engine.
Further, the image length corresponding to the DEM image is divided by the horizontal number num_landscape_x and rounded up to obtain the horizontal pixel number pixels_per_landscape_x corresponding to the image block in the horizontal direction, and the image width corresponding to the DEM image is divided by the vertical number num_landscape_y and rounded up to obtain the vertical pixel number pixels_per_landscape_y corresponding to the image block in the vertical direction.
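For illustration only, the split arithmetic in this scenario can be sketched as follows, where DemL and DemW denote the image length and width, and the per-landscape limit 8129 is an assumed value standing in for the engine-derived image size threshold:
// Hedged sketch of the block-splitting arithmetic described above.
void SplitDemIntoBlocks(int32 DemL, int32 DemW)
{
    const int32 SizeThreshold = 8129; // assumed image size threshold
    const int32 num_landscape_x = FMath::CeilToInt(float(DemL) / SizeThreshold);
    const int32 num_landscape_y = FMath::CeilToInt(float(DemW) / SizeThreshold);
    const int32 NumLandscapes = num_landscape_x * num_landscape_y; // first numerical value
    // Per-block pixel counts (image sub-size).
    const int32 pixels_per_landscape_x = FMath::CeilToInt(float(DemL) / num_landscape_x);
    const int32 pixels_per_landscape_y = FMath::CeilToInt(float(DemW) / num_landscape_y);
    (void)NumLandscapes; (void)pixels_per_landscape_x; (void)pixels_per_landscape_y;
}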
In some implementations, the terrain size of the terrain object is defined based on a segment pixel size, a segment, and a component, each terrain object comprising a plurality of components, each component corresponding to a component pixel size, the component pixel size being related to a number of segments in the component and a segment pixel size.
Further, based on the image sub-size corresponding to the image block, generating a terrain size, a terrain height and a terrain coordinate corresponding to the image block, and generating a terrain object matched with the terrain size and the terrain height at the terrain coordinate of the illusion engine, including: determining the number of components in the terrain object based on the ratio of the image sub-size corresponding to the current image block to the component pixel size, and obtaining the terrain size corresponding to the terrain object based on the number of the components in the terrain object; generating a terrain height corresponding to the terrain object within a range of a terrain size corresponding to the terrain object based on the height information in the current image block; and determining the terrain coordinates corresponding to the terrain objects in the illusion engine based on the position information in the current image block.
Specifically, the total number of pixels divided into a single terrain object is determined based on the image sub-size corresponding to the image block, the number of components in the terrain object corresponding to the image block is determined based on the ratio of that total number of pixels to the component pixel size, and the terrain size corresponding to the terrain object is obtained by multiplying the number of components in the terrain object by the component pixel size. The height information in the image block is introduced into the terrain object to form the terrain height corresponding to the terrain object within the range of the terrain size, and the terrain coordinate corresponding to the terrain object is determined in the illusion engine by using the position information in the image block. In this way, the position information and height information corresponding to the image block can be introduced into the terrain object in the illusion engine, and the terrain size corresponding to a single terrain object can accommodate the total number of pixels divided into it, so that after all terrain objects are traversed, a terrain of any size can be generated in the illusion engine.
In an application scenario, each component corresponds to a preset number of segments, the segments correspond to a preset pixel size, and the component pixel size is determined based on the product of the preset number and the preset pixel size.
Further, determining the number of components in the terrain object based on the ratio of the image sub-size corresponding to the current image block to the component pixel size, and obtaining the terrain size corresponding to the terrain object based on the number of components in the terrain object, includes: determining a second value corresponding to the components of the terrain object in the horizontal direction and a third value corresponding to the components in the vertical direction based on the ratios of the horizontal pixel number and the vertical pixel number to the component pixel size, wherein the second value and the third value are rounded-up results; and determining the horizontal pixel size of the terrain object in the horizontal direction and the vertical pixel size of the terrain object in the vertical direction based on the component pixel size, the second value and the third value, thereby determining the corresponding terrain size of the terrain object.
Specifically, the number of horizontal pixels and the number of vertical pixels are divided by the pixel size of the component and rounded up, respectively, to determine a second value corresponding to the component of the terrain object in the horizontal direction and a third value corresponding to the component in the vertical direction, thereby ensuring that the terrain object can include enough components in both the horizontal direction and the vertical direction to accommodate the number of horizontal pixels corresponding to the terrain object in the horizontal direction and the number of vertical pixels corresponding to the terrain object in the vertical direction.
Further, the transverse pixel size of the terrain object in the transverse direction and the longitudinal pixel size of the terrain object in the longitudinal direction are determined based on the product of the component pixel size and the second numerical value and the third numerical value respectively, and finally the corresponding terrain size of the terrain object is obtained, so that the accuracy of the terrain size is ensured.
In a specific application scenario, the preset pixel size QuadsPerSection corresponding to a segment is 255, the preset number SectionsPerComponent of segments in each component is 2, the component pixel size QuadsPerComponent is 510, and the second value ComponentCountX and the third value ComponentCountY are calculated as follows:
ComponentCountX=FMath::CeilToInt(float(pixels_per_landscape_x-1)/QuadsPerComponent);
ComponentCountY=FMath::CeilToInt(float(pixels_per_landscape_y-1)/QuadsPerComponent);
further, the method of calculating the lateral pixel size FinalSizeX of the terrain object in the lateral direction, and the longitudinal pixel size FinalSizeY of the terrain object in the longitudinal direction is as follows:
FinalSizeX=ComponentCountX*QuadsPerComponent+1;
FinalSizeY=ComponentCountY*QuadsPerComponent+1;
it should be noted that the preset pixel size QuadsPerSection and the preset number SectionsPerComponent of segments in each component may be set by a user in other specific application scenarios based on the threshold limits of the illusion engine, which is not specifically limited in this application.
Further, please refer to fig. 4, where fig. 4 is a schematic view of an application scene of an embodiment of the terrain construction method based on the illusion engine of the present application. Assume that the DEM image is divided into four image blocks, two in the lateral direction and two in the longitudinal direction, with each image block corresponding to a terrain object. After determining the terrain size corresponding to the terrain object based on the component pixel size, the second value and the third value, and determining the lateral pixel size of the terrain object in the lateral direction and the longitudinal pixel size of the terrain object in the longitudinal direction, the method further includes: determining a lateral offset coordinate corresponding to the lateral pixel size and a longitudinal offset coordinate corresponding to the longitudinal pixel size based on the lateral pixel size, the longitudinal pixel size, the image size and the first numerical value; interpolating between the edge of the terrain object corresponding to the lateral offset coordinate and the lateral offset coordinate, and interpolating between the edge of the terrain object corresponding to the longitudinal offset coordinate and the longitudinal offset coordinate, to obtain an interpolation result; and generating the terrain of the terrain object in the illusion engine based on the interpolation result, the component pixel size, the second value corresponding to the components of the terrain object in the lateral direction and the third value corresponding to the components in the longitudinal direction.
Specifically, since the number of terrain objects and the number of components corresponding to the terrain objects are rounded up, and each parameter is a power of 2 or a power of 2 plus or minus 1, the sum of the sizes of the plurality of terrain objects may not be consistent with the image size. When the image size is smaller than the sum of the sizes of the plurality of terrain objects, the actual position corresponding to the image size, that is, the shadow position in fig. 4, is determined based on the image length corresponding to the image size, the lateral pixel size corresponding to the terrain object, and the lateral number corresponding to the first value; further, the lateral offset coordinate corresponding to the lateral pixel size, that is, the position of the lateral black dot in fig. 4, is determined based on the difference between the lateral pixel size corresponding to the terrain object and the actual position, and is calculated by:
bias_x_first=FinalSizeX-FMath::CeilToInt(float(DemL-FinalSizeX*(num_landscape_x-2))/2);
bias_x_last=DemL-FinalSizeX*(num_landscape_x-2)-(FinalSizeX-bias_x_first);
determining an actual position corresponding to the image size, namely a shadow position in fig. 4, based on the image width corresponding to the image size, the longitudinal pixel size corresponding to the terrain object, and the longitudinal number corresponding to the first value, and further determining a longitudinal offset coordinate corresponding to the longitudinal pixel size, namely a position of a black dot in the longitudinal direction in fig. 4, based on a difference value between the longitudinal pixel size corresponding to the terrain object and the actual position, wherein the longitudinal offset coordinate is calculated in a manner that:
bias_y_first=FinalSizeY-FMath::CeilToInt(float(DemW-FinalSizeY*(num_landscape_y-2))/2);
bias_y_last=DemW-FinalSizeY*(num_landscape_y-2)-(FinalSizeY-bias_y_first);
the horizontal pixel size is FinalSizeX, the vertical pixel size is FinalSizeY, the image length corresponding to the image size is dell, the image width is DemW, the horizontal quantity corresponding to the first value is num _ landscap _ x, and the vertical quantity corresponding to the first value is num _ landscap _ y.
Further, interpolation is performed between the lateral offset coordinate and the corresponding edge of the terrain object, that is, from the lateral offset coordinate to the dotted-line edge in fig. 4 (the white part in the lateral direction in fig. 4), and between the longitudinal offset coordinate and the corresponding edge of the terrain object, that is, from the longitudinal offset coordinate to the dotted-line edge in fig. 4 (the white part in the longitudinal direction in fig. 4), to obtain an interpolation result and ensure that each terrain object has corresponding data, wherein the interpolation is performed by padding with 0.
Further, based on the interpolation result, the component pixel size, and a second value corresponding to the component of the terrain object in the lateral direction, and a third value corresponding to the component in the longitudinal direction, a terrain corresponding to the terrain object is generated in the illusion engine.
Further, generating a terrain height corresponding to the terrain object within a range of a terrain size corresponding to the terrain object based on the height information in the current image block, comprising: setting a preset height between the edge of the terrain object corresponding to the transverse offset coordinate and the transverse offset coordinate, and setting a preset height between the edge of the terrain object corresponding to the longitudinal offset coordinate and the longitudinal offset coordinate; and indexing the height information in the current image block to an area surrounded by the horizontal offset coordinate and the vertical offset coordinate, and generating the terrain height corresponding to the terrain object within the range of the terrain size corresponding to the terrain object.
Specifically, the area between the lateral offset coordinate and the corresponding edge of the terrain object (the white part in the lateral direction in fig. 4) and the area between the longitudinal offset coordinate and the corresponding edge of the terrain object (the white part in the longitudinal direction in fig. 4) are set to a preset height, and the height information in the image block is indexed to the area surrounded by the lateral offset coordinate and the longitudinal offset coordinate, that is, the area corresponding to the shaded portion in fig. 4, so that the height in the terrain object corresponds to the height information of the image block.
In a specific application scenario, a data matrix for storing height data is generated; ix and iy represent the lateral number and the longitudinal number of the terrain object and increase from 0. In the terrain object with ix = 0, 0 is assigned to the left of bias_x_first; in the terrain object with ix = num_landscape_x - 1, 0 is assigned to the right of bias_x_last; in the terrain object with iy = 0, 0 is assigned above bias_y_first; and in the terrain object with iy = num_landscape_y - 1, 0 is assigned below bias_y_last. The values of the remaining areas are copied from the height information of the image block through an index relationship, the index relationship including a height index and a data index. The height index index_height is: index_height = col * FinalSizeX + row; the data index index_dem is: index_dem = 2 * ((iy * FinalSizeY + col - bias_y_first) * DemL + ix * FinalSizeX + row - bias_x_first); wherein col is the Y-direction index and row is the X-direction index.
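A minimal sketch of this copy is given below, assuming DemBytes is the raw interpolated DEM buffer (16-bit samples, 2 bytes each) and that the zero-padded borders follow the assignments above; all parameter names are taken from this scenario:
#include "CoreMinimal.h"
// Hedged sketch: fill the height matrix of the terrain object at grid
// position (ix, iy) from the DEM buffer, following the index relationship above.
void FillHeightMatrix(const TArray<uint8>& DemBytes, TArray<uint16>& HeightData,
                      int32 ix, int32 iy, int32 FinalSizeX, int32 FinalSizeY,
                      int32 DemL, int32 num_landscape_x, int32 num_landscape_y,
                      int32 bias_x_first, int32 bias_x_last,
                      int32 bias_y_first, int32 bias_y_last)
{
    HeightData.SetNumZeroed(FinalSizeX * FinalSizeY); // padded borders keep 0
    for (int32 col = 0; col < FinalSizeY; ++col)      // Y-direction index
    {
        for (int32 row = 0; row < FinalSizeX; ++row)  // X-direction index
        {
            // Keep the preset 0 outside the valid (shaded) region.
            const bool bValidX = !((ix == 0 && row < bias_x_first) ||
                (ix == num_landscape_x - 1 && row > bias_x_last));
            const bool bValidY = !((iy == 0 && col < bias_y_first) ||
                (iy == num_landscape_y - 1 && col > bias_y_last));
            if (!bValidX || !bValidY)
            {
                continue;
            }
            const int32 index_height = col * FinalSizeX + row;
            const int32 index_dem =
                2 * ((iy * FinalSizeY + col - bias_y_first) * DemL
                     + ix * FinalSizeX + row - bias_x_first);
            HeightData[index_height] =
                *reinterpret_cast<const uint16*>(&DemBytes[index_dem]);
        }
    }
}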
Further, determining the terrain coordinates corresponding to the terrain object in the illusion engine based on the position information in the current image block, including: determining the geographic coordinates of the current image block in a geographic coordinate system based on the position information in the current image block; converting the geographic coordinate corresponding to the current image block to a projection coordinate system by using a spatial data conversion library to obtain the projection coordinate of the current image block in the projection coordinate system; and determining the terrain projection coordinates of the terrain object in a projection coordinate system based on the projection coordinates and the terrain size corresponding to the current image block, and converting the terrain projection coordinates into an engine global coordinate system to obtain the engine global coordinates of the terrain object in the engine global coordinate system.
Specifically, the geographic coordinates of the image block in the geographic coordinate system are determined based on the position information in the image block, the geographic coordinates corresponding to the image block are converted from the geographic coordinate system to the projection coordinate system by using the spatial data conversion library, the projection coordinates of the image block in the projection coordinate system are obtained, the terrain projection coordinates of the terrain object in the projection coordinate system are determined based on the terrain size corresponding to the terrain object after interpolation at the edge, namely the terrain projection coordinates of a dotted line frame corresponding to each terrain object in fig. 4 in the projection coordinate system are determined, so that the accuracy of the projection coordinates corresponding to the terrain object is improved, the terrain projection coordinates are converted into the engine global coordinate system, the engine global coordinates of the terrain object in the engine global coordinate system are obtained, and the accuracy of the engine global coordinates corresponding to the terrain object is improved.
In a specific application scenario, the origin coordinate corresponding to the engine origin is subtracted from the projection coordinate corresponding to the upper left corner of the interpolated terrain object to obtain the engine global coordinate corresponding to the terrain object, wherein ix and iy represent the lateral number and the longitudinal number of the terrain object and increase from 0; for a terrain object with ix > 0, the x coordinate value is reduced by 1, and for a terrain object with iy > 0, the y coordinate value is reduced by 1, so that the coordinate values match the calculation of the terrain size.
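For reference, the geographic-to-projection conversion with the spatial data conversion library (GDAL/OGR) followed by the subtraction of the engine origin can be sketched as follows; the projection EPSG:3857 is only an assumed example and is not named by this application:
#include "ogr_spatialref.h"
// Hedged sketch: geographic coordinate -> projection coordinate -> engine global coordinate.
bool GeoToEngineGlobal(double Lon, double Lat,
                       double OriginX, double OriginY,
                       double& OutX, double& OutY)
{
    OGRSpatialReference Geo, Proj;
    Geo.importFromEPSG(4326);   // geographic coordinate system
    Proj.importFromEPSG(3857);  // projection coordinate system (assumed example)
    OGRCoordinateTransformation* Transform =
        OGRCreateCoordinateTransformation(&Geo, &Proj);
    if (Transform == nullptr)
    {
        return false;
    }
    // GDAL 3 uses latitude/longitude axis order for EPSG:4326 by default.
    double X = Lat;
    double Y = Lon;
    const bool bOk = Transform->Transform(1, &X, &Y) != 0;
    OCTDestroyCoordinateTransformation(Transform);
    if (!bOk)
    {
        return false;
    }
    // Engine global coordinate = projection coordinate - engine origin.
    OutX = X - OriginX;
    OutY = Y - OriginY;
    return true;
}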
Optionally, before dividing the DEM image into at least one image block and determining an image sub-size of each image block based on the image size and an image size threshold in the illusion engine, the method further includes: obtaining a coordinate setting attribute corresponding to the engine origin, and resetting the engine origin of the coordinate system based on position information in the DEM image when the coordinate setting attribute corresponds to an allowable setting; wherein the coordinate setting attribute includes an enable setting and a disable setting.
Specifically, the coordinate setting attribute of the engine origin is judged, if the setting is allowed, the engine origin of the coordinate system is reset based on the position information in the DEM image, the coordinate setting attribute is set to be a prohibited setting after the resetting, and if the setting is prohibited, the existing engine origin is used. Furthermore, when the setting is allowed, the data center of the DEM image can be preferentially used as a new engine origin, so that the value of the engine global coordinate of the terrain object corresponding to the DEM image is effectively reduced, the processing load of the illusion engine is effectively reduced, and the calculation efficiency is improved.
In a specific application scenario, the position information comprises coordinate system information and a data center in the DEM image, the data center is converted from the coordinate system information of the DEM image into longitude and latitude values in a geographic coordinate system, a geographic position attribute in a geographic global object is given, and an engine origin is reset.
In some implementation scenarios, after generating the terrain object matching the terrain size and the terrain height, the method further comprises: and setting a material matched with the slope of the terrain for the terrain corresponding to the DEM image by utilizing a terrain material tool in the illusion engine.
Specifically, an AutoLandscape terrain material tool is used in the illusion engine to generate the terrain material, and the terrain material tool automatically sets different materials according to the slope of the terrain, improving the fidelity of the terrain.
In some embodiments, please refer to fig. 5, where fig. 5 is a schematic flowchart of an embodiment of a map construction method based on an illusion engine according to the present application. The process of obtaining raster data based on a remote sensing image, importing the raster data into the illusion engine, and generating a map corresponding to the raster data in the illusion engine specifically includes:
S501: obtaining raster data obtained based on the remote sensing image, and converting the raster data into a raster data matrix, wherein the raster matrix size of the raster data matrix is related to the data size of the raster data.
Specifically, the remote sensing image is a grid structure with a pixel as a unit, grid data can be obtained based on the remote sensing image, the grid data is converted into a grid data matrix based on the data size of the grid data, and the grid matrix size of the grid data matrix is related to the data size of the grid data, so that the illusion engine can be compatible with the remote sensing image, and the remote sensing image is converted into a map with a proportional size in the illusion engine.
In an application mode, extracting image information in a remote sensing image by using a spatial data conversion library to obtain raster data, obtaining a data size corresponding to the raster data, and interpolating the raster data based on the data size of the raster data and the unit length of an illusion engine to obtain a raster data matrix.
In another application mode, the remote sensing image is read by using the spatial data conversion library to obtain raster data and a data size corresponding to the raster data, a data range of the raster data is obtained by using data size calculation, and the raster data is interpolated based on the data range of the raster data to obtain a raster data matrix, so that the raster matrix size of the raster data matrix is matched with the unit length of the illusion engine.
S502: and generating a map mesh body corresponding to the raster data based on the raster matrix size of the raster data matrix.
Specifically, a mesh body matched with the grid matrix size of the grid data matrix is created, and a map mesh body corresponding to the grid data is generated in the illusion engine.
In an application mode, the grid matrix size comprises a matrix length and a matrix width, a grid body matched with the matrix length and the matrix width of the grid data matrix is generated, the grid vertexes, triangular grid indexes, texture coordinates and normals of the grid body are determined, modeling is carried out on the basis of the grid vertexes, triangular grid indexes, texture coordinates and normals corresponding to the grid body through a programmed modeling function, and a map grid body corresponding to the grid data is generated in the illusion engine.
In another application, a map object is created in the illusion engine, wherein the map object includes grid attributes, the grid matrix size includes a matrix length and a matrix width, a grid body matched with the matrix length and the matrix width of the grid data matrix is generated, grid vertices, triangular grid indexes, texture coordinates and normals of the grid body are determined, the grid vertices, the triangular grid indexes, the texture coordinates and the normals of the grid body are stored in the grid attributes, and the grid attributes are called in the illusion engine by using a programmed modeling function to generate the map grid body corresponding to the grid data.
In an application scene, a rectangular mesh body with the same length and width as the matrix of the raster data matrix is generated in the illusion engine, the mesh vertex of the rectangular mesh body is positioned in the illusion engine based on the coordinate information corresponding to the raster data, triangulation is carried out on the rectangular mesh body to obtain a triangular mesh index, and the texture coordinate and the normal corresponding to the mesh vertex are determined based on the position of the mesh vertex. In other application scenarios, the raster data matrix may also be scaled.
S503: the raster data is converted into the texture of the map mesh, and a map corresponding to the raster data is generated in the illusion engine.
Specifically, the raster data is converted into the texture of the map grid body, so that the remote sensing image is used as the texture asset of the map grid body and a map corresponding to the raster data is generated in the illusion engine. The remote sensing image can thus serve as a map in the illusion engine to show the whole area, and can also serve as a scale and a reference object for modeling in the illusion engine, improving the modeling precision in the illusion engine.
In one application, the illusion engine generates a texture corresponding to the map mesh body based on the data characteristics of the raster data, and generates a map corresponding to the raster data by applying the texture to the map mesh body, wherein the data characteristics are related to the remote sensing image corresponding to the raster data.
In another application, a texture pointer variable is created in the illusion engine, and data features corresponding to the raster data are stored in the texture pointer variable, so that the raster data are converted into textures of a map grid body, and a map corresponding to the raster data is generated, wherein the data features are related to a remote sensing image corresponding to the raster data.
Optionally, when the image size corresponding to the remote sensing image exceeds the maximum pixel size of the illusion engine, reducing the remote sensing image to be less than or equal to the maximum pixel size of the illusion engine, so as to convert the grid data corresponding to the reduced remote sensing image into the texture of the map grid body.
In the embodiment, raster data obtained based on a remote sensing image is obtained, the raster data is converted into a raster data matrix, the size of the raster matrix of the raster data matrix is related to the data size of the raster data so as to match the size of the raster matrix with the data size of the raster data, a map mesh body corresponding to the raster data is generated based on the size of the raster matrix of the raster data matrix, the raster data is converted into the texture of the map mesh body, and a map corresponding to the raster data is generated in an illusion engine, so that the remote sensing image can serve as a map in the illusion engine to show the overall appearance of the whole area, can serve as a scale and a reference object for modeling in the illusion engine, and the modeling precision in the illusion engine is improved.
In some implementation scenarios, the step S501 specifically includes: reading the remote sensing image by using a spatial data conversion library to obtain raster data and corresponding data size, affine matrix and coordinate information of the raster data, and determining the spatial resolution of the raster data based on the affine matrix; determining a data range of the raster data based on the data size, the spatial resolution and the coordinate information of the raster data; interpolating the raster data based on the data range and the unit length of the illusion engine to obtain a raster data matrix corresponding to the raster data; wherein the grid matrix size of the grid data matrix matches the unit length of the illusion engine.
Specifically, the remote sensing image is read by using the spatial data conversion library to obtain the raster data and the data size, affine matrix and coordinate information corresponding to the raster data. The data range of the raster data is obtained based on the product of the spatial resolution stored in the affine matrix and the data size, and the position of the data range is located based on the coordinate information. The raster data is interpolated based on the data range of the raster data and the unit length of the illusion engine, and the matrix length and matrix width of the raster data matrix are determined, so that the raster matrix size of the raster data matches the unit length of the illusion engine, making it convenient for the illusion engine to be compatible with the raster data and construct a map by using the raster data, wherein the data size is the pixel size of the raster data, and the spatial resolution is the length corresponding to a single pixel.
In a specific application scenario, GDAL is utilized to read the raster data and obtain the data, data size, affine matrix and coordinate information stored in the raster data, where the spatial resolution stored in the affine matrix is the length corresponding to a single pixel, in meters; based on the spatial resolution, the data size and the coordinate information, the position of the raster data is located and the data range of the raster data is determined, that is, the length and the width of the raster data, in meters.
Further, before converting the raster data into the texture of the map mesh and generating the map corresponding to the raster data in the illusion engine, the method further includes: determining the geographic coordinates of the grid data in a geographic coordinate system based on the coordinate information and the preset corner of the affine matrix; converting the geographic coordinates corresponding to the grid data into a projection coordinate system by using a spatial data conversion library to obtain projection coordinates of the grid data in the projection coordinate system; and converting the projection coordinate corresponding to the raster data into an engine global coordinate system to obtain the engine global coordinate of the map grid body corresponding to the raster data in the engine global coordinate system.
Specifically, the illusion engine corresponds to a coordinate system, geographic coordinates of a preset corner of an affine matrix in a geographic coordinate system are determined based on coordinate information and the preset corner of the affine matrix, the geographic coordinates of the preset corner of the affine matrix in the geographic coordinate system are used as coordinates of grid data in the geographic coordinate system, the geographic coordinates are converted into a projection coordinate system, projection coordinates of the grid data in the projection coordinate system are obtained, the projection coordinates corresponding to the grid data are converted into an engine global coordinate system, and engine global coordinates of a map grid body corresponding to the grid data in the engine global coordinate system are obtained, so that the map grid body is matched with the position in the real world.
In a specific application scenario, the preset corner is the upper left corner of the affine matrix. The geographic coordinate of the upper left corner of the affine matrix in the geographic coordinate system is determined based on the coordinate information and the matrix size of the affine matrix, the geographic coordinate being expressed in longitude and latitude. The geographic coordinate of the upper left corner is converted into the projection coordinate system to obtain the projection coordinate of the upper left corner of the affine matrix, and the origin coordinate of the engine origin is subtracted from this projection coordinate to obtain the engine global coordinate of the upper left corner of the affine matrix, wherein the horizontal coordinate in the engine global coordinate corresponds to the longitude of the upper left corner, the vertical coordinate corresponds to the latitude of the upper left corner, and the height coordinate is assigned 0. In other specific application scenarios, the preset corner may also be another corner of the affine matrix, which is not specifically limited in the present application.
In some implementation scenarios, the step S502 specifically includes: generating a map object corresponding to the raster data in the illusion engine, wherein the map object comprises a position attribute, a grid attribute, a material attribute and an image address attribute, the position attribute being used for storing coordinate information, the grid attribute for storing the map grid body corresponding to the map object, the material attribute for storing the material of the map grid body, and the image address attribute for storing the address index of the remote sensing image; storing the address index corresponding to the remote sensing image into the image address attribute of the map object; generating a mesh vertex set corresponding to the map object in the illusion engine based on the grid matrix size of the grid data matrix, wherein the mesh vertex set comprises a plurality of mesh vertices; and generating a map grid body corresponding to the map object based on the mesh vertex set.
Specifically, a map object ALayerActor matched with the raster data is generated in the illusion engine. The map object inherits the base class AActor of the illusion engine and comprises a position attribute, a grid attribute, a material attribute and an image address attribute, wherein the position attribute is convenient for storing coordinate information, the grid attribute for storing the map grid body corresponding to the map object, the material attribute for storing the material of the map grid body, and the image address attribute for storing the address index of the remote sensing image. The address index corresponding to the remote sensing image is traversed and stored into the image address attribute of the map object so that the remote sensing image can be looked up.
Further, a grid body matched with the grid matrix size is generated in the illusion engine based on the grid matrix size of the grid data, the vertex of the grid body is determined to obtain a grid body vertex set, the grid vertex set comprises a plurality of grid vertices, a triangular grid index corresponding to the grid vertex set and texture coordinates and a normal corresponding to each grid vertex in the grid vertex set are determined based on the grid vertex set, and a map grid body corresponding to the grid vertex set is generated by utilizing programming modeling, so that a base frame corresponding to a map is constructed in the illusion engine, and the map is generated in the illusion engine.
In an application scenario, the mesh vertex set includes at least four mesh vertices, and a map mesh body corresponding to a map object is generated based on the mesh vertex set, including: determining a plurality of triangular mesh indexes corresponding to the mesh vertex set based on the positions of all mesh vertices in the mesh vertex set; determining texture coordinates corresponding to the mesh vertex set based on the position of any mesh vertex in the mesh vertex set relative to the positions of other mesh vertices; determining a normal corresponding to each grid vertex in all triangular grid indexes based on the positions of three grid vertices in all triangular grid indexes; in the illusion engine, a map mesh volume corresponding to the map object is generated based on all the triangular mesh indices and the normal and texture coordinates corresponding to each mesh vertex.
Specifically, the grid body constructed based on the grid data matrix is a rectangle or another polygon, and the mesh vertex set includes at least four mesh vertices. For convenience of description, take the grid body as a square, with the height corresponding to the grid data matrix being h, the width being w, and h = w; all the mesh vertices of the grid body corresponding to the grid data matrix are then (0, 0, 0), (0, h, 0), (w, h, 0) and (w, 0, 0). The mesh vertices corresponding to the grid body are numbered 0, 1, 2 and 3, and the triangular mesh indices of the generated grid body are (0, 1, 2) and (2, 3, 0), so as to generate the grid body based on the triangular mesh indices; the texture coordinates are (0, 0), (0, 1), (1, 1) and (1, 0) respectively, so as to perform texture mapping; and the normal corresponding to each vertex is calculated so as to determine the illumination direction corresponding to each vertex. The vertices, triangular mesh indices, texture coordinates and normals generated in the above steps are input, and the map mesh body is generated by using the programmed modeling function of the illusion engine.
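A minimal sketch of this modeling step follows, assuming the programmed modeling function is realized with the engine's UProceduralMeshComponent; everything other than the vertex data above is illustrative:
#include "ProceduralMeshComponent.h"
// Sketch: build the square map mesh from the vertex data described above.
void BuildMapMesh(UProceduralMeshComponent* Mesh, float w, float h)
{
    TArray<FVector> Vertices = {
        FVector(0, 0, 0), FVector(0, h, 0),
        FVector(w, h, 0), FVector(w, 0, 0)
    };
    TArray<int32> Triangles = { 0, 1, 2, 2, 3, 0 }; // indices (0, 1, 2) and (2, 3, 0)
    TArray<FVector2D> UV0 = {
        FVector2D(0, 0), FVector2D(0, 1),
        FVector2D(1, 1), FVector2D(1, 0)
    };
    // The plane lies in the XY plane, so every vertex normal points along +Z.
    TArray<FVector> Normals = {
        FVector::UpVector, FVector::UpVector,
        FVector::UpVector, FVector::UpVector
    };
    Mesh->CreateMeshSection_LinearColor(
        0, Vertices, Triangles, Normals, UV0,
        TArray<FLinearColor>(), TArray<FProcMeshTangent>(), false);
}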
In a specific application scenario, determining a normal corresponding to each mesh vertex in all triangular mesh indexes based on positions of three mesh vertices in all triangular mesh indexes includes: based on the positions corresponding to the three grid vertexes in each triangular grid index, performing cross multiplication on two vectors between any grid vertex and the other two grid vertexes in each triangular grid index, and performing normalization processing on cross multiplication results to obtain the normal corresponding to all the grid vertexes.
Specifically, the normal of each mesh vertex in the triangular mesh index is calculated by cross-multiplying the two vectors formed by the mesh vertex and the other two mesh vertices on the triangular surface where the mesh vertex is located, and the cross multiplication result is normalized to obtain an accurate normal for the mesh vertex.
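Expressed as a short sketch (function name illustrative):
// Cross-multiply the two edge vectors from vertex A to vertices B and C,
// then normalize the result to obtain the vertex normal.
FVector ComputeVertexNormal(const FVector& A, const FVector& B, const FVector& C)
{
    const FVector Edge1 = B - A;
    const FVector Edge2 = C - A;
    return FVector::CrossProduct(Edge1, Edge2).GetSafeNormal();
}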
In some implementation scenarios, the step S503 specifically includes: creating a texture pointer variable in the illusion engine, and storing data characteristics corresponding to the raster data into the texture pointer variable so as to convert the raster data into textures of a map grid body and generate a map corresponding to the raster data; wherein the data characteristics are related to the remote sensing image corresponding to the raster data.
Specifically, a texture pointer variable is created in the illusion engine, data features corresponding to the raster data are stored in the texture pointer variable, so that the raster data are converted into textures of a map grid body in the illusion engine, the textures are used as map grid body maps, and a map corresponding to the raster data is generated, wherein the data features are directly related to remote sensing images corresponding to the raster data, so that the remote sensing images can serve as a map in the illusion engine to show the whole image of the whole area, and can also serve as scales and references for modeling in the illusion engine, and the modeling accuracy in the illusion engine is improved.
Further, after converting the raster data into the texture of the map mesh and generating the map corresponding to the raster data in the illusion engine, the method further includes: and creating a material pointer variable in the illusion engine, and generating a material matched with the texture for the map grid body in the material pointer variable to obtain the material of the map grid body.
Specifically, a material pointer variable is created in the illusion engine, and a material matched with the texture is generated for the map grid body and stored in the material pointer variable, so that the material of the map grid body is obtained.
In a specific application scenario, a pointer variable TextPack of UPackage type is created in the illusion engine for storing the texture asset, and a pointer variable NewTexture of UTexture2D type is newly created in TextPack for storing the texture data. The size, channel and type information of the remote sensing image is assigned to the PlatformData variable of NewTexture, a pointer variable Mip of FTexture2DMipMap type is created, and the RGBA values of the remote sensing image are written into the BulkData of Mip. If the pixel size of the remote sensing image is larger than the maximum texture size in the illusion engine, the pixel size of the remote sensing image needs to be scaled to be smaller than or equal to the maximum texture size. Mip is assigned to the Mips of the PlatformData of NewTexture, and the texture asset TextPack is saved, obtaining the texture asset of the map mesh body.
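A hedged sketch of this texture-asset creation follows (UE4-era editor code; the package path, Width, Height and RgbaPixels are assumptions, and the exact calls vary between engine versions):
#include "Engine/Texture2D.h"
#include "UObject/Package.h"
// Sketch: create the texture asset described above from remote sensing pixels.
UTexture2D* CreateRSTexture(int32 Width, int32 Height, const uint8* RgbaPixels)
{
    UPackage* TextPack = CreatePackage(TEXT("/Game/Maps/RSTexture")); // assumed path
    UTexture2D* NewTexture = NewObject<UTexture2D>(
        TextPack, TEXT("RSTexture"), RF_Public | RF_Standalone);
    // Assign size, channel and type information to PlatformData.
    NewTexture->PlatformData = new FTexturePlatformData();
    NewTexture->PlatformData->SizeX = Width;
    NewTexture->PlatformData->SizeY = Height;
    NewTexture->PlatformData->PixelFormat = PF_B8G8R8A8;
    // Write the RGBA values of the remote sensing image into the BulkData of Mip.
    FTexture2DMipMap* Mip = new FTexture2DMipMap();
    NewTexture->PlatformData->Mips.Add(Mip);
    Mip->SizeX = Width;
    Mip->SizeY = Height;
    Mip->BulkData.Lock(LOCK_READ_WRITE);
    void* Dest = Mip->BulkData.Realloc(int64(Width) * Height * 4);
    FMemory::Memcpy(Dest, RgbaPixels, int64(Width) * Height * 4);
    Mip->BulkData.Unlock();
    NewTexture->UpdateResource(); // then save the texture asset TextPack
    return NewTexture;
}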
Further, a pointer variable Package of UPackage type is created for storing the material asset, and a pointer variable UnrealMaterial of UMaterial type is created in Package for storing the material data; a material node of UMaterialExpressionTextureSample type is generated, and after the texture asset is imported, it is assigned to the Texture variable of the node; the material node is assigned to the Expression variable of the BaseColor of UnrealMaterial; the material asset is saved; and the material asset is assigned to the map grid body via the SetMaterial function.
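And a matching sketch of the material-asset creation and assignment (editor-time code; MapMesh and the asset names are assumptions):
#include "Materials/Material.h"
#include "Materials/MaterialExpressionTextureSample.h"
#include "Components/MeshComponent.h"
// Sketch: build a material that samples the texture and assign it to the mesh.
void CreateAndAssignMaterial(UTexture2D* NewTexture, UMeshComponent* MapMesh)
{
    UPackage* Package = CreatePackage(TEXT("/Game/Maps/RSMaterial")); // assumed path
    UMaterial* UnrealMaterial = NewObject<UMaterial>(
        Package, TEXT("RSMaterial"), RF_Public | RF_Standalone);
    // Texture sampling node whose Texture variable holds the imported asset.
    UMaterialExpressionTextureSample* TextureNode =
        NewObject<UMaterialExpressionTextureSample>(UnrealMaterial);
    TextureNode->Texture = NewTexture;
    UnrealMaterial->Expressions.Add(TextureNode);
    // Wire the node into the Expression variable of BaseColor, then save.
    UnrealMaterial->BaseColor.Expression = TextureNode;
    UnrealMaterial->PostEditChange();
    // Assign the material asset to the map grid body via SetMaterial.
    MapMesh->SetMaterial(0, UnrealMaterial);
}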
In some embodiments, please refer to fig. 6, where fig. 6 is a schematic flowchart of an embodiment of a plane construction method based on an illusion engine according to the present application. The process of obtaining vector data based on a remote sensing image, importing a vector line or a vector plane in the vector data into the illusion engine, and generating a plane corresponding to the vector data in the illusion engine includes:
S601: obtaining vector data obtained based on the remote sensing image, and reading out a vector line and a vector plane in the vector data.
Specifically, the remote sensing image is a grid structure with pixel as a unit, vector data can be obtained by vector extraction of the grid structure in the remote sensing image, and vector lines and vector planes in the vector data are read out, so that the illusion engine can be compatible with the remote sensing image and obtain the vector lines and the vector planes.
In an application mode, a remote sensing image is read by using a spatial data conversion library, a grid structure of the remote sensing image is determined, vector extraction is carried out on the grid structure in the remote sensing image to obtain vector data, vector lines are extracted from the vector data based on data attributes corresponding to the vector lines in the vector data, and vector planes are extracted from the vector data based on data attributes corresponding to the vector planes in the vector data.
In another application mode, a vector file in a Shapefile or GeoJson format corresponding to the remote sensing image is read by using a spatial data conversion library, and a vector line and a vector plane in vector data are extracted from the vector file.
S602: a planar object is generated in the illusion engine.
Specifically, a planar object is generated in the illusion engine, wherein the planar object is used to correspond to a vector line or a vector plane, and the vector plane corresponding to the planar object does not include a height feature.
In one application, a planar object that inherits the geographic global object is created in the illusion engine.
In another application, a planar object that inherits the base class of the illusion engine is created in the illusion engine.
S603: and responding to the vector line corresponding to the plane object, adding all contour points in the vector line into the plane object to obtain a first original vertex in the plane object, performing width expansion on the first original vertex in the plane object to obtain a new vertex, and obtaining a plane grid body corresponding to the vector line based on the first original vertex and the new vertex.
Specifically, when the planar object corresponds to a vector line, all contour points obtained from the vector line are added to the planar object to obtain a first original vertex of the planar object, so that the contour points on the vector line are introduced into the planar object, and the contour points are used as the first original vertex of the planar object.
Furthermore, the first original vertex in the planar object is not enough to form a plane, the width of the first original vertex in the planar object is expanded, and a new vertex corresponding to the first original vertex is generated, so that the first original vertex and the new vertex can jointly form a plane.
In an application mode, a connecting line between every two adjacent first original vertexes is determined based on the positions of the first original vertexes in the plane object, and width expansion is performed in a direction perpendicular to the connecting line between the adjacent first original vertexes to obtain two newly added vertexes corresponding to each first original vertex.
In another application mode, based on the position of the first original vertex in the planar object, the connecting lines of all the first original vertices in the planar object which are sequentially connected are determined, a re-engraved connecting line parallel to the connecting lines is generated in the illusion engine, and at least one new vertex corresponding to each first original vertex is obtained.
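The first of these application modes (perpendicular width expansion) can be sketched as follows; LineWidth is an assumed expansion width and the function name is illustrative:
#include "CoreMinimal.h"
// Sketch: expand each first original vertex perpendicular to the connecting
// line between adjacent vertices, yielding two newly added vertices per vertex.
void ExpandPolyline(const TArray<FVector>& OriginalVerts, float LineWidth,
                    TArray<FVector>& OutLeft, TArray<FVector>& OutRight)
{
    for (int32 i = 0; i < OriginalVerts.Num(); ++i)
    {
        // Direction of the connecting line at this vertex.
        const int32 Prev = FMath::Max(i - 1, 0);
        const int32 Next = FMath::Min(i + 1, OriginalVerts.Num() - 1);
        const FVector Dir =
            (OriginalVerts[Next] - OriginalVerts[Prev]).GetSafeNormal();
        // Perpendicular to the connecting line in the horizontal plane.
        const FVector Perp = FVector::CrossProduct(Dir, FVector::UpVector);
        OutLeft.Add(OriginalVerts[i] + Perp * (LineWidth * 0.5f));
        OutRight.Add(OriginalVerts[i] - Perp * (LineWidth * 0.5f));
    }
}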
Further, a plane grid body corresponding to the vector line is generated in the illusion engine based on the first original vertex and the newly added vertex.
In an application mode, a mesh body matched with a plane object is generated based on the positions of a first original vertex and a newly-added vertex in an illusion engine, the mesh vertex, a triangular mesh index, texture coordinates and a normal of the mesh body are determined, modeling is carried out based on the mesh vertex, the triangular mesh index, the texture coordinates and the normal corresponding to the mesh body by utilizing a programmed modeling function, and the plane mesh body corresponding to a vector line is generated in the illusion engine.
In another application mode, the planar object comprises a mesh attribute, a mesh body matched with the planar object is generated based on the positions of the first original vertex and the newly-added vertex in the illusion engine, the mesh vertex, the triangular mesh index, the texture coordinate and the normal of the mesh body are determined, the mesh vertex, the triangular mesh index, the texture coordinate and the normal of the mesh body are stored in the mesh attribute, the mesh attribute is called by a programming modeling function in the illusion engine, and the planar mesh body corresponding to the vector line is generated.
S604: and responding to the vector plane corresponding to the planar object, adding all contour points in the vector plane into the planar object, obtaining a second original vertex in the planar object, and obtaining a planar mesh body corresponding to the vector plane based on the second original vertex.
Specifically, when the planar object corresponds to a vector plane, the vector plane does not include a height feature, all contour points obtained from the vector plane are added to the planar object to obtain a second original vertex of the planar object, so that the contour points on the vector plane are introduced into the planar object, and the contour points are used as the second original vertex of the planar object. In an application mode, a mesh body matched with the plane object is generated based on the position of the second original vertex in the illusion engine, the mesh vertex, the triangular mesh index, the texture coordinate and the normal of the mesh body are determined, modeling is carried out based on the mesh vertex, the triangular mesh index, the texture coordinate and the normal corresponding to the mesh body by utilizing a programming modeling function, and the plane mesh body corresponding to the vector plane is generated in the illusion engine.
In another application mode, the planar object comprises a mesh attribute, a mesh body matched with the planar object is generated based on the position of the second original vertex in the illusion engine, the mesh vertex, the triangular mesh index, the texture coordinate and the normal of the mesh body are determined, the mesh vertex, the triangular mesh index, the texture coordinate and the normal of the mesh body are stored in the mesh attribute, the mesh attribute is called by using a programmed modeling function in the illusion engine, and the planar mesh body corresponding to the vector plane is generated.
S605: and assigning materials for the plane grids corresponding to the vector lines or vector planes, and generating planes corresponding to the vector data in the illusion engine.
Specifically, the material corresponding to the planar mesh is related to the physical object corresponding to the vector line or vector plane in the real world, the material is specified for the planar mesh, and the plane corresponding to the vector data is generated in the illusion engine.
In an application scene, the vector line corresponds to a road in the real world; a road-related material is specified for the planar mesh body, and the plane corresponding to the vector line is generated in the illusion engine, thereby producing a virtual road in the illusion engine.

In another application scene, the vector plane corresponds to a river in the real world; a river-related material is specified for the planar mesh body, and the plane corresponding to the vector plane is generated in the illusion engine, thereby producing a virtual river in the illusion engine.
In this embodiment, vector data obtained based on a remote sensing image is acquired, and the vector lines and vector planes in the vector data are read out. A planar object is generated in the illusion engine. For a vector line, the contour points on the line are added to the planar object as its first original vertices, and the first original vertices are width-expanded so that the first original vertices and the newly added vertices can jointly form a plane; a planar mesh body corresponding to the vector line is then generated in the illusion engine based on the first original vertices and the newly added vertices. For a vector plane, the contour points on the plane are added to the planar object as its second original vertices, and a planar mesh body corresponding to the vector plane is generated in the illusion engine based on the second original vertices. A material is assigned to the planar mesh body, so that a plane corresponding to the real world can be conveniently generated in the illusion engine, improving the convenience of constructing planes in the illusion engine.
In some implementation scenarios, the step S601 specifically includes: reading the remote sensing image with a spatial data conversion library to obtain the vector data and its corresponding data attributes, where the data attributes include a plane data attribute and a line data attribute; and, based on the data attributes of the vector data, reading out from the vector data the vector lines corresponding to the line data attribute and the vector planes corresponding to the plane data attribute.

Specifically, the remote sensing image is read with the spatial data conversion library, a vector file corresponding to the vector data is obtained from it, and the data attributes corresponding to the vector data are determined, where the data attributes include a plane data attribute and a line data attribute. Each vector file can store several layers, and each layer can store several features; the data attribute of each feature is determined individually. The vectors carrying the line data attribute are read out to obtain the vector lines, and the vectors carrying the plane data attribute are read out to obtain the vector planes. A vector plane used for plane construction carries no height feature, that is, it corresponds to a plane in the real world.
In a specific application scene, the remote sensing image is read with GDAL to obtain a vector file composed of the vector data corresponding to the remote sensing image; the format of the vector file includes, but is not limited to, the Shapefile and GeoJson formats. The vector lines in the vector data are extracted from the vector file, so that the illusion engine is compatible with the remote sensing image, and vector lines obtained from the remote sensing image's vector data can be used to construct planes.
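As a rough illustration, reading such a vector file with GDAL's C++ vector API might look as follows; the geometry-type dispatch stands in for the line and plane data attributes, and error handling is pared down to the essentials.

#include "gdal_priv.h"
#include "ogrsf_frmts.h"

// Iterates the layers and features of a vector file and dispatches on
// geometry type (line vs. polygon).
void ReadVectorData(const char* Path)
{
    GDALAllRegister();
    GDALDataset* Ds = static_cast<GDALDataset*>(
        GDALOpenEx(Path, GDAL_OF_VECTOR, nullptr, nullptr, nullptr));
    if (!Ds) return;

    for (int i = 0; i < Ds->GetLayerCount(); ++i)     // a file may store several layers
    {
        OGRLayer* Layer = Ds->GetLayer(i);
        Layer->ResetReading();
        while (OGRFeature* Feature = Layer->GetNextFeature())  // each layer stores several features
        {
            if (OGRGeometry* Geom = Feature->GetGeometryRef())
            {
                switch (wkbFlatten(Geom->getGeometryType()))
                {
                case wkbLineString: /* treat as a vector line  */ break;
                case wkbPolygon:    /* treat as a vector plane */ break;
                default: break;
                }
            }
            OGRFeature::DestroyFeature(Feature);
        }
    }
    GDALClose(Ds);
}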
In some implementation scenarios, the illusion engine corresponds to a coordinate system and a geographic global object, where the coordinate system includes a geographic coordinate system, a projection coordinate system, an engine global coordinate system and an engine local coordinate system, and the geographic global object includes an engine origin and the conversion relationship between any two of these coordinate systems. The geographic coordinate system and the projection coordinate system are related to global positioning; the engine global coordinate system is used to position any object in the illusion engine; the engine origin corresponds to an origin coordinate in the projection coordinate system; an engine global coordinate in the engine global coordinate system is obtained by subtracting the origin coordinate from a projection coordinate in the projection coordinate system; and an engine local coordinate in the engine local coordinate system is determined based on the offset of a vertex on the current object relative to the current object's engine global coordinate. The first original vertex, the newly added vertex and the second original vertex include, in the illusion engine, engine local coordinates in the engine local coordinate system.

Specifically, the first original vertices and the newly added vertices correspond to mesh vertices in the triangular mesh indexes, as do the second original vertices, and these mesh vertices ultimately generate the planar mesh body; expressing them in engine local coordinates reduces the computational load when the positions of the mesh vertices are called.
In an implementation scenario, obtaining the planar mesh body corresponding to the vector line based on the first original vertex and the newly added vertex specifically includes: performing triangulation based on the engine local coordinates corresponding to the first original vertices and the newly added vertices to obtain a plurality of triangular mesh indexes corresponding to the planar object, and storing the vertices in each triangular mesh index in the plane specified direction, where each triangular mesh index includes three mesh vertices and the mesh vertices correspond to the first original vertices and the newly added vertices; determining, based on the position of each mesh vertex in the illusion engine, the normal and the texture coordinate corresponding to each mesh vertex in all triangular mesh indexes; and, in the illusion engine, generating the planar mesh body corresponding to the vector line based on all the triangular mesh indexes and the normal and texture coordinates corresponding to each mesh vertex.

Specifically, the plane that can be formed by connecting the first original vertices and the newly added vertices is determined based on their engine local coordinates, and this plane is triangulated to obtain a plurality of triangular mesh indexes corresponding to the planar object. The vertices in each triangular mesh index are stored in the plane specified direction, where the plane specified direction is clockwise toward the positive z-axis of the engine global coordinate system, i.e., clockwise when observed from top to bottom in the illusion engine. Each triangular mesh index corresponds to three mesh vertices, and the mesh vertices correspond to the first original vertices and the newly added vertices, that is, each triangular mesh index contains first original vertices and/or newly added vertices. Texture coordinates of the mesh vertices are generated based on the position of each mesh vertex to facilitate texture mapping, and the normal corresponding to each mesh vertex is calculated based on its position to facilitate determining the illumination direction at each mesh vertex. The mesh vertices, triangular mesh indexes, texture coordinates and normals generated in the above steps are input, and the planar mesh body corresponding to the vector line is generated using the procedural modeling function of the illusion engine.
In another implementation scenario, obtaining the planar mesh body corresponding to the vector plane based on the second original vertex specifically includes: performing triangulation based on the engine local coordinates corresponding to the second original vertices to obtain a plurality of triangular mesh indexes corresponding to the planar object, and storing the vertices in each triangular mesh index in the plane specified direction, where each triangular mesh index includes three mesh vertices and the mesh vertices correspond to the second original vertices; determining, based on the position of each mesh vertex in the illusion engine, the normal and the texture coordinate corresponding to each mesh vertex in all triangular mesh indexes; and, in the illusion engine, generating the planar mesh body corresponding to the vector plane based on all the triangular mesh indexes and the normal and texture coordinates corresponding to each mesh vertex.

Specifically, the plane that can be formed by connecting the second original vertices is determined based on their engine local coordinates, and this plane is triangulated to obtain a plurality of triangular mesh indexes corresponding to the planar object. The vertices in each triangular mesh index are stored in the plane specified direction, i.e., clockwise toward the positive z-axis of the engine global coordinate system, which is clockwise when observed from top to bottom in the illusion engine. Each triangular mesh index corresponds to three mesh vertices, and the mesh vertices correspond to the second original vertices, that is, each triangular mesh index contains second original vertices. Texture coordinates of the mesh vertices are generated based on the position of each mesh vertex to facilitate texture mapping, and the normal corresponding to each mesh vertex is calculated based on its position to facilitate determining the illumination direction at each mesh vertex. The mesh vertices, triangular mesh indexes, texture coordinates and normals generated in the above steps are input, and the planar mesh body corresponding to the vector plane is generated using the procedural modeling function of the illusion engine.
Further, for any of the above implementation scenarios, determining, based on the position of each mesh vertex in the illusion engine, the normal and the texture coordinate corresponding to each mesh vertex in all triangular mesh indexes includes: based on the engine local coordinates of the three mesh vertices in each triangular mesh index, taking the cross product of the two vectors from any one mesh vertex to the other two mesh vertices, and normalizing the cross product to obtain the normals corresponding to all mesh vertices; and scaling the engine local coordinates corresponding to each mesh vertex to the texture coordinate range to obtain the texture coordinates corresponding to each mesh vertex.

Specifically, the normal of each mesh vertex in a triangular mesh index is calculated as the cross product of the two vectors formed by that mesh vertex and the other two mesh vertices of the triangular face it lies on, and the cross product is normalized to obtain an accurate normal for the mesh vertex.
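A minimal sketch of this normal computation with Unreal's vector math follows; the helper name is illustrative.

// Computes one face normal by crossing two edge vectors of the triangle
// and normalizing; each of the triangle's mesh vertices receives it.
static FVector ComputeTriangleNormal(
    const FVector& A, const FVector& B, const FVector& C)
{
    const FVector Edge1 = B - A;    // vector from A to B
    const FVector Edge2 = C - A;    // vector from A to C
    return FVector::CrossProduct(Edge1, Edge2).GetSafeNormal();
}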
Furthermore, the engine local coordinates corresponding to each mesh vertex are scaled to the texture coordinate range to obtain the texture coordinates corresponding to each mesh vertex, which facilitates texturing the planar mesh body.
In a specific implementation scenario, scaling the engine local coordinates corresponding to each mesh vertex to the texture coordinate range to obtain the texture coordinates corresponding to each mesh vertex includes: scaling the horizontal and vertical coordinates of the engine local coordinate corresponding to each mesh vertex to obtain the horizontal and vertical components of the texture coordinate corresponding to that mesh vertex, thereby determining the texture coordinates of all mesh vertices.

Specifically, the horizontal and vertical coordinates in the engine local coordinates of all mesh vertices are each divided by a scaling parameter to obtain the horizontal and vertical components of the texture coordinate of each mesh vertex, so that both components are scaled into the texture coordinate range; the texture coordinates of all mesh vertices are thus obtained, which facilitates texturing the planar mesh body.
In a specific application scene, the horizontal component U of each vertex's texture coordinate (UV value) is the horizontal engine local coordinate of the mesh vertex divided by 100, and the vertical component V is the vertical engine local coordinate divided by 100. The scaling parameter in this application scene is 100; in other application scenes, the scaling parameter may be defined based on the engine local coordinates of the mesh vertices, scaling the horizontal and vertical components of the texture coordinates into the range 0-1.
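As a one-line illustration of this scaling rule (the helper name and default are assumptions):

// Maps engine local XY coordinates to UVs by dividing by the scaling
// parameter (100 in the application scene above).
static FVector2D LocalToUV(const FVector& LocalPos, float Scale = 100.0f)
{
    return FVector2D(LocalPos.X / Scale, LocalPos.Y / Scale);
}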
Optionally, generating the planar object in the illusion engine specifically includes: generating, in the illusion engine, a planar object inheriting from the geographic global object, where the planar object includes a vertex attribute, a mesh attribute and a material attribute; the vertex attribute is used to store the contour points on the vector line or vector plane, the mesh attribute is used to store the planar mesh body corresponding to the vector line or vector plane, and the material attribute is used to store the material of the planar mesh body.

Specifically, the illusion engine includes the coordinate system and the geographic global object, AGeoActor, and a planar object, APolylineActor, inheriting from the geographic global object is generated in the illusion engine, where the planar object includes a vertex attribute, a mesh attribute and a material attribute. The vertex attribute stores the contour points on the vector line or vector plane, the mesh attribute stores the planar mesh body corresponding to the vector line or vector plane, and the material attribute stores the material of the planar mesh body, providing corresponding storage space for the contour points, the planar mesh body and the material.
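A sketch of what such a declaration might look like in Unreal C++ is given below; AGeoActor is the patent's own base class, and the property types are assumptions derived from the described attributes.

#include "ProceduralMeshComponent.h"

// A minimal sketch of the planar object inheriting the geographic
// global object; property types are illustrative assumptions.
UCLASS()
class APolylineActor : public AGeoActor
{
    GENERATED_BODY()

public:
    UPROPERTY() TArray<FVector> VertexAttribute;          // contour points (geographic coordinates)
    UPROPERTY() UProceduralMeshComponent* MeshAttribute;  // planar mesh body
    UPROPERTY() UMaterialInterface* MaterialAttribute;    // material of the planar mesh body
};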
Further, adding all contour points in the vector line to the planar object to obtain a first original vertex in the planar object, including: and converting the positions corresponding to all contour points on the vector line into a geographic coordinate system, and storing the geographic coordinates corresponding to all contour points on the vector line into a vertex attribute to obtain a first original vertex in the planar object.
Specifically, positions corresponding to all contour points on the vector line are converted into a geographic coordinate system, and geographic coordinates corresponding to all contour points on the vector line are stored into the vertex attributes, so that the contour points on the vector line are used as first original vertexes in the plane object, and the geographic coordinates corresponding to the contour points are stored in the vertex attributes for calling.
It can be understood that adding all contour points in the vector plane to the planar object results in a second original vertex in the planar object, including: and converting the positions corresponding to all the contour points on the vector surface into a geographic coordinate system, and storing the geographic coordinates corresponding to all the contour points on the vector surface into a vertex attribute to obtain a second original vertex in the planar object.
Specifically, positions corresponding to all contour points on the vector surface are converted into a geographic coordinate system, and geographic coordinates corresponding to all contour points on the vector surface are stored into the vertex attributes, so that the contour points on the vector surface are used as second original vertices in the planar object, and the geographic coordinates corresponding to the contour points are stored in the vertex attributes for calling.
In an application scene, the width expansion is carried out on a first original vertex in a plane object to obtain a newly added vertex, and the method comprises the following steps: converting the geographic coordinates of all first original vertexes in the plane object into a projection coordinate system by using a spatial data conversion library, converting the projection coordinates in the projection coordinate system into an engine global coordinate system, converting the engine global coordinate in the engine global coordinate system into an engine local coordinate system, and determining the engine local coordinate of the first original vertex in the engine local coordinate system; taking an engine global coordinate of one first original vertex in the plane object in an engine global coordinate system as the position of the plane object in the illusion engine; based on engine local coordinates of all first original vertexes in the plane object in an engine local coordinate system, performing width expansion on the first original vertexes in the plane object in a direction perpendicular to a connecting line between the adjacent first original vertexes to obtain new vertexes; the expanded width between the first original vertex and the newly added vertex is related to the line width attribute of the vector line; and determining the engine local coordinates of the newly added vertexes corresponding to the first original vertexes in the engine local coordinate system based on the engine local coordinates of the first original vertexes in the engine local coordinate system.
Specifically, the geographic coordinates of the first original vertices can be obtained by calling the geographic coordinates stored in the vertex attribute. These are converted into the projection coordinate system and then into the engine global coordinate system, and the engine global coordinate of one first original vertex is taken as the position of the planar object in the illusion engine, so that the planar object matches the position of the vector line in the real world. The engine global coordinates of the first original vertices are converted into the engine local coordinate system to obtain their engine local coordinates, and each first original vertex is expanded outward, by the width given in the line width attribute, in the direction perpendicular to the connecting line between adjacent first original vertices to obtain the newly added vertices, so that the vector line is expanded into a plane suitable for representing entity targets such as roads and canals in the illusion engine. The line width attribute stored in the vector line is associated with the entity target the vector line corresponds to.
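A simplified sketch of the width expansion follows, assuming engine local coordinates in the XY plane; it offsets each vertex along the perpendicular of one adjacent segment, whereas a production version might average the perpendiculars of both segments at interior vertices.

// Offsets each polyline vertex perpendicular to an adjacent segment,
// producing one newly added vertex per first original vertex.
static void ExpandPolylineWidth(
    const TArray<FVector>& Original,   // engine local coords of first original vertices
    float LineWidth,                   // taken from the vector line's line width attribute
    TArray<FVector>& OutAdded)
{
    OutAdded.Reset();
    if (Original.Num() < 2) { return; }
    for (int32 i = 0; i < Original.Num(); ++i)
    {
        FVector Dir;
        if (i + 1 < Original.Num())
        {
            Dir = Original[i + 1] - Original[i];   // direction of the following segment
        }
        else
        {
            Dir = Original[i] - Original[i - 1];   // last vertex: use the preceding segment
        }
        Dir.Z = 0.0f;
        Dir.Normalize();
        const FVector Perp(-Dir.Y, Dir.X, 0.0f);   // 90-degree rotation in the XY plane
        OutAdded.Add(Original[i] + Perp * LineWidth);
    }
}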
Further, after the width expansion is performed on the first original vertex in the planar object to obtain a newly added vertex, the method further includes: and determining the engine local coordinates of the newly added vertexes corresponding to the first original vertexes in the engine local coordinate system based on the engine local coordinates of the first original vertexes in the engine local coordinate system.
Specifically, the engine local coordinates of the newly added vertex corresponding to each first original vertex in the engine local coordinate system are determined based on the engine local coordinates of that first original vertex. Each first original vertex and each newly added vertex are stored in the plane specified direction, which facilitates locating the newly added vertices and displaying both sets of vertices.
Optionally, before the vector data obtained based on the remote sensing image is acquired and the vector lines in the vector data are read out, the method further includes: obtaining the coordinate setting attribute corresponding to the engine origin, and, when the coordinate setting attribute is the allowed setting, determining the position information in the vector data and resetting the engine origin of the coordinate system based on the position information; the coordinate setting attribute includes an allowed setting and a prohibited setting.

Specifically, the coordinate setting attribute of the engine origin is determined. If it is the allowed setting, the engine origin of the coordinate system is reset based on the position information in the vector data, and the coordinate setting attribute is switched to the prohibited setting after the reset; if it is the prohibited setting, the existing engine origin is used. When setting is allowed, a new engine origin can thus be set preferentially from the position information of the vector data, which effectively reduces the magnitude of the engine global coordinates of the planar object corresponding to the vector data, thereby reducing the processing load of the illusion engine and improving computational efficiency.
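A minimal sketch of this one-shot origin reset (the type and member names are illustrative assumptions):

// Resets the engine origin once from the vector data's position, then
// locks further changes so later loads reuse the same origin.
enum class ECoordinateSetting { Allowed, Prohibited };

struct FGeoGlobalState
{
    ECoordinateSetting Setting = ECoordinateSetting::Allowed;
    FVector EngineOrigin = FVector::ZeroVector;   // origin in projection coordinates

    void MaybeResetOrigin(const FVector& VectorDataPosition)
    {
        if (Setting == ECoordinateSetting::Allowed)
        {
            EngineOrigin = VectorDataPosition;         // keeps engine global coords small
            Setting = ECoordinateSetting::Prohibited;  // reset only once
        }
    }
};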
In some embodiments, please refer to fig. 7, which is a schematic flowchart of an embodiment of a three-dimensional construction method based on the illusion engine according to the present application. The process of acquiring vector data obtained based on a remote sensing image, introducing the vector planes in the vector data into the illusion engine, and generating three-dimensional bodies corresponding to the vector data in the illusion engine specifically includes:

S701: vector data obtained based on the remote sensing image is acquired, the vector planes in the vector data are read out, and the height features of the vector planes are determined.

Specifically, the remote sensing image is a grid structure organized in units of pixels, and vector data can be obtained by vector extraction from this grid structure. The vector planes in the vector data are read out and their height features are determined, so that the illusion engine is compatible with the remote sensing image and obtains the vector planes.
In an application mode, a remote sensing image is read by using a spatial data conversion library, a grid structure of the remote sensing image is determined, vector extraction is carried out on the grid structure in the remote sensing image to obtain vector data, a vector plane is extracted from the vector data based on data attributes corresponding to the vector plane in the vector data, and the height characteristics of the vector plane are determined based on attribute fields corresponding to the vector plane, wherein the attribute fields are used for storing the height characteristics of an entity target corresponding to the vector plane.
In another application mode, a vector file in a Shapefile or GeoJson format corresponding to the remote sensing image is read by using a spatial data conversion library, a vector plane in vector data is extracted from the vector file, and the height characteristic of the vector plane is determined based on an attribute field corresponding to the vector plane, wherein the attribute field is used for storing the height characteristic of an entity target corresponding to the vector plane.
S702: and generating a three-dimensional object in the illusion engine, and adding all contour points in the vector plane to the three-dimensional object to obtain a plane vertex in the three-dimensional object.
Specifically, a three-dimensional object is generated in the illusion engine, all contour points obtained from a vector plane are added to the three-dimensional object, and a plane vertex of the three-dimensional object is obtained, so that the contour points on the vector plane are introduced into the three-dimensional object, and the contour points are used as the plane vertex of the three-dimensional object.
In an application mode, a three-dimensional object is created in the illusion engine, where the three-dimensional object includes a vertex attribute used to store the contour points on the vector plane; all contour points obtained from the vector plane are added to the vertex attribute of the three-dimensional object to obtain the plane vertices of the three-dimensional object.

In another application mode, a three-dimensional object is created in the illusion engine, where the three-dimensional object includes a position attribute used to store the positions of the contour points on the vector plane; the positions of all contour points obtained from the vector plane are added to the position attribute, and the plane vertices of the three-dimensional object are obtained by indexing into the positions stored in the position attribute.
S703: and obtaining a three-dimensional mesh body corresponding to the three-dimensional object based on the plane vertex, wherein the three-dimensional mesh body comprises a height vertex matched with the plane vertex, and the position of the height vertex in the illusion engine is related to the height characteristic.
Specifically, the plane vertex in the three-dimensional object is not three-dimensional, the height of the plane vertex in the three-dimensional object is expanded based on the height feature of the vector plane, and the height vertex corresponding to the plane vertex is generated, so that the plane vertex and the height vertex can jointly form a three-dimensional structure.
In an application mode, a bottom surface surrounded by a plane vertex is determined based on the position of the plane vertex in a three-dimensional object, and the plane vertex is subjected to height expansion in a direction perpendicular to the bottom surface based on height characteristics to obtain a height vertex.
Further, a stereoscopic mesh body corresponding to the stereoscopic object is generated in the illusion engine based on the plane vertex and the height vertex.
In an application mode, a mesh body matching the three-dimensional object is generated based on the positions of the plane vertices and height vertices in the illusion engine, and the mesh vertices, triangular mesh indexes, texture coordinates and normals of the mesh body are determined. Modeling is then performed on these data using the procedural modeling function, and the stereoscopic mesh body corresponding to the three-dimensional object is generated in the illusion engine.

In another application mode, the three-dimensional object includes a mesh attribute. A mesh body matching the three-dimensional object is generated based on the positions of the plane vertices and height vertices in the illusion engine; the mesh vertices, triangular mesh indexes, texture coordinates and normals of the mesh body are determined and stored in the mesh attribute; the mesh attribute is then called by the procedural modeling function in the illusion engine to generate the stereoscopic mesh body corresponding to the three-dimensional object.

S704: a material is assigned to the stereoscopic mesh body, and a three-dimensional body corresponding to the vector data is generated in the illusion engine.
Specifically, the material corresponding to the stereoscopic mesh body is related to the entity target corresponding to the vector plane in the real world, the material is specified for the stereoscopic mesh body, and the stereoscopic body corresponding to the vector data is generated in the illusion engine.
In an application scene, the vector plane corresponds to a building in the real world; a building-related material is specified for the stereoscopic mesh body, and the three-dimensional body corresponding to the vector plane is generated in the illusion engine, thereby producing a virtual building in the illusion engine.
In another application scenario, the vector plane corresponds to a real-world vegetation, the material related to the vegetation is specified for the stereoscopic mesh body, and the stereoscopic body corresponding to the vector plane is generated in the illusion engine, so that the stereoscopic body corresponding to the virtual vegetation is generated in the illusion engine.
In this embodiment, vector data obtained based on a remote sensing image is acquired, the vector planes in the vector data are read out, and their height features are determined. A three-dimensional object is generated in the illusion engine, the contour points on a vector plane are added to the three-dimensional object as its plane vertices, and a stereoscopic mesh body corresponding to the three-dimensional object is generated in the illusion engine based on the plane vertices, where the stereoscopic mesh body includes height vertices matched with the plane vertices and the positions of the height vertices in the illusion engine are related to the height features. A material is assigned to the stereoscopic mesh body, so that a three-dimensional body corresponding to the real world can be conveniently generated in the illusion engine, improving the convenience of constructing three-dimensional bodies in the illusion engine.
In some implementation scenarios, the step S701 specifically includes: reading the remote sensing image with a spatial data conversion library to obtain the vector data and its corresponding data attributes, where the data attributes include a plane data attribute and a line data attribute; and, based on the data attributes of the vector data, reading out from the vector data the vector planes corresponding to the plane data attribute and the height features of the vector planes.

Specifically, the remote sensing image is read with the spatial data conversion library, a vector file corresponding to the vector data is obtained from it, and the data attributes corresponding to the vector data are determined, where the data attributes include a plane data attribute and a line data attribute. Each vector file can store several layers, and each layer can store several features; the data attribute of each feature is determined individually. The vectors carrying the plane data attribute are read out to obtain the vector planes, and the height feature corresponding to each vector plane is read out.

In a specific application scene, the remote sensing image is read with GDAL to obtain a vector file composed of the vector data corresponding to the remote sensing image; the format of the vector file includes, but is not limited to, the Shapefile and GeoJson formats. The vector planes are extracted from the vector file, so that the illusion engine is compatible with the remote sensing image and obtains the vector planes from the vector data, whereby a three-dimensional body can be constructed based on the vector planes and the height features.
In some implementation scenarios, the illusion engine corresponds to a coordinate system and a geographic global object, where the coordinate system includes a geographic coordinate system, a projection coordinate system, an engine global coordinate system and an engine local coordinate system, and the geographic global object includes an engine origin and the conversion relationship between any two of these coordinate systems. The geographic coordinate system and the projection coordinate system are related to global positioning; the engine global coordinate system is used to position any object in the illusion engine; the engine origin corresponds to an origin coordinate in the projection coordinate system; an engine global coordinate in the engine global coordinate system is obtained by subtracting the origin coordinate from a projection coordinate in the projection coordinate system; and an engine local coordinate in the engine local coordinate system is determined based on the offset of a vertex on the current object relative to the current object's engine global coordinate. The plane vertices and height vertices include, in the illusion engine, engine local coordinates in the engine local coordinate system.

Specifically, the plane vertices and height vertices correspond to mesh vertices in the triangular mesh indexes, and these mesh vertices ultimately generate the stereoscopic mesh body; expressing the plane vertices and height vertices in the engine local coordinate system of the three-dimensional object effectively reduces the computational load when the positions of the mesh vertices are called.
In an implementation scenario, the surface surrounded by the connecting lines between the plane vertices is the bottom surface, and the step S703 specifically includes: performing triangulation based on the engine local coordinates corresponding to the plane vertices to obtain a plurality of triangular mesh indexes corresponding to the bottom surface; generating the height vertices corresponding to the plane vertices based on the plane vertices and the height features, and triangulating based on the positions of the plane vertices and height vertices in the illusion engine to obtain a plurality of triangular mesh indexes of the other surfaces distinct from the bottom surface, where each triangular mesh index includes three mesh vertices and the mesh vertices correspond to the plane vertices and height vertices; determining, based on the position of each mesh vertex in the illusion engine, the normal and the texture coordinate corresponding to each mesh vertex in all triangular mesh indexes; and, in the illusion engine, generating the stereoscopic mesh body corresponding to the three-dimensional object based on all the triangular mesh indexes and the normal and texture coordinates corresponding to each mesh vertex.

Specifically, please refer to fig. 8, which is a schematic view of an application scenario of an embodiment of the three-dimensional construction method based on the illusion engine according to the present application. For convenience of description, the bottom surface surrounded by the plane vertices is taken to be a square, with the plane vertices numbered 0-3; triangulation based on the positions of the plane vertices in the illusion engine yields a plurality of triangular mesh indexes corresponding to the bottom surface, namely (0, 3, 2) and (2, 1, 0).
Optionally, the triangular mesh indexes corresponding to the bottom surface are stored in the bottom surface specified direction so that the mesh vertices of the bottom surface can be displayed, where the bottom surface specified direction is clockwise toward the negative z-axis.

Further, based on the height feature of the vector plane and the engine local coordinates of the plane vertices in the illusion engine, the plane vertices are height-expanded to obtain the height vertices, numbered 4-7 in fig. 8. The other surfaces distinct from the bottom surface are triangulated based on the positions of the plane vertices and height vertices in the illusion engine, yielding the triangular mesh indexes of the other five surfaces, where each triangular mesh index corresponds to three mesh vertices, and the mesh vertices of the triangular mesh indexes of the different surfaces include plane vertices and/or height vertices.

Further, texture coordinates of the mesh vertices are generated based on their positions in the illusion engine to facilitate texture mapping, and the normal corresponding to each mesh vertex is calculated based on its position to facilitate determining the illumination direction at each mesh vertex. The mesh vertices, triangular mesh indexes, texture coordinates and normals generated in the above steps are input, and the stereoscopic mesh body is generated using the procedural modeling function of the illusion engine.
In an application scenario, generating the height vertices corresponding to the plane vertices based on the plane vertices and the height features, and triangulating based on the positions of the plane vertices and height vertices in the illusion engine to obtain a plurality of triangular mesh indexes of the other surfaces distinct from the bottom surface, includes: stretching the plane vertices based on their engine local coordinates and the height features to obtain the height vertices corresponding to the plane vertices, where the surface surrounded by the connecting lines between the height vertices is the top surface; triangulating the top surface based on the engine local coordinates of the height vertices to obtain a plurality of triangular mesh indexes corresponding to the top surface, and storing the mesh vertices in these indexes in the top surface specified direction, where the mesh vertices in the triangular mesh indexes corresponding to the bottom surface are stored in the bottom surface specified direction and the top surface specified direction is opposite to the bottom surface specified direction; and triangulating each side surface intersecting the top and bottom surfaces based on the positions of the top surface and the bottom surface, obtaining a plurality of triangular mesh indexes corresponding to all the side surfaces.

Specifically, taking fig. 8 as an example, the stretch height is calculated from the height feature of the vector plane and the stretch scale, and a group of new vertices with the same X and Y as the plane vertices but with Z equal to the stretch height is taken as the height vertices; the surface surrounded by the connecting lines between the height vertices is the top surface. The top surface is triangulated based on the engine local coordinates of the height vertices to obtain a plurality of triangular mesh indexes corresponding to the top surface, and the mesh vertices in these indexes are stored in the top surface specified direction so that the mesh vertices of the top surface can be displayed, where the top surface specified direction is clockwise toward the positive z-axis. The mesh vertices in the triangular mesh indexes corresponding to the bottom surface are stored in the bottom surface specified direction, which is opposite to the top surface specified direction, so that the mesh vertices of both the top and bottom surfaces display correctly when viewed from top to bottom in the illusion engine.

Further, the triangular mesh indexes of the side surfaces are generated from the mesh vertices of the top and bottom surfaces, wound clockwise toward the observer. With num denoting the number of plane vertices, ip is traversed from 1 to num, producing at each step the two triangles (num+ip-1, num+ip, ip-1) and (num+ip, ip, ip-1); when ip = num the indexes wrap around, producing (num+ip-1, num, ip-1) and (num, 0, ip-1). For the stereoscopic mesh body shown in fig. 8, the generated triangular mesh indexes are (4, 5, 0) and (5, 1, 0), (5, 6, 1) and (6, 2, 1), (6, 7, 2) and (7, 3, 2), and (7, 4, 3) and (4, 0, 3), finally yielding the triangular mesh indexes of all side surfaces, so that the stereoscopic mesh body can be textured and its material generated.
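The side-surface index generation, reconstructed from the fig. 8 example, can be sketched as follows (bottom vertices are numbered 0..Num-1 and top vertices Num..2*Num-1, with top vertex Num+i directly above bottom vertex i):

// Emits two clockwise-wound triangles per quad side of the prism,
// wrapping back to the first bottom/top vertices on the last side.
static void GenerateSideIndices(int32 Num, TArray<int32>& OutIndices)
{
    for (int32 ip = 1; ip <= Num; ++ip)
    {
        const int32 Bottom     = ip % Num;           // wraps to 0 on the last side
        const int32 BottomPrev = ip - 1;
        const int32 Top        = Num + (ip % Num);   // wraps to Num on the last side
        const int32 TopPrev    = Num + ip - 1;

        OutIndices.Append({ TopPrev, Top, BottomPrev });   // e.g. (4, 5, 0)
        OutIndices.Append({ Top, Bottom, BottomPrev });    // e.g. (5, 1, 0)
    }
}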
Optionally, determining, based on the position of each mesh vertex in the illusion engine, the normal and the texture coordinate corresponding to each mesh vertex in all triangular mesh indexes includes: based on the engine local coordinates of the three mesh vertices in each triangular mesh index, taking the cross product of the two vectors from any one mesh vertex to the other two mesh vertices, and normalizing the cross product to obtain the normals corresponding to all mesh vertices; and scaling the engine local coordinates corresponding to each mesh vertex to the texture coordinate range to obtain the texture coordinates corresponding to each mesh vertex.

Specifically, the normal of each mesh vertex in a triangular mesh index is calculated as the cross product of the two vectors formed by that mesh vertex and the other two mesh vertices of the triangular face it lies on, and the cross product is normalized to obtain an accurate normal for the mesh vertex.
Furthermore, the engine local coordinates corresponding to each mesh vertex are scaled to the texture coordinate range to obtain the texture coordinates corresponding to each mesh vertex, which facilitates texturing the stereoscopic mesh body.

In a specific implementation scenario, scaling the engine local coordinates corresponding to each mesh vertex to the texture coordinate range to obtain the texture coordinates corresponding to each mesh vertex includes: scaling the vertical coordinate of the engine local coordinate corresponding to each mesh vertex to obtain the vertical component of that vertex's texture coordinate; and setting the horizontal component of each mesh vertex's texture coordinate to a preset component, obtaining the texture coordinates corresponding to each mesh vertex.

Specifically, the vertical coordinate in the engine local coordinates of each mesh vertex is divided by the scaling parameter to obtain the vertical component of that vertex's texture coordinate, so that the vertical component is scaled into the texture coordinate range; this lets the stereoscopic mesh bodies be textured uniformly in the vertical direction, producing a mapping effect that is uniform across heights. The horizontal component of each mesh vertex's texture coordinate is set to a preset component within the texture coordinate range, yielding the texture coordinates of all mesh vertices for texturing the stereoscopic mesh body.

In a specific application scene, the vertical component V of every vertex's texture coordinate (UV value) is the vertical engine local coordinate of the mesh vertex divided by 100, and the horizontal component U is the preset component 0. The scaling parameter in this application scene is 100 and the preset component is 0; in other application scenes, the scaling parameter may be defined based on the engine local coordinates of the mesh vertices, scaling the vertical component into the range 0-1, and a user-defined preset component value within the texture coordinate range may be specified. With this UV unwrapping, all stereoscopic mesh bodies display the same mapping effect at the same height, such as a self-luminous effect.
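A one-line sketch of this side-wall UV rule (the helper name and defaults are assumptions):

// UV mapping for extruded side walls: V tracks height so textures line
// up across all solids; U is pinned to the preset component (0 here).
static FVector2D SideWallUV(const FVector& LocalPos,
                            float Scale = 100.0f,
                            float PresetU = 0.0f)
{
    return FVector2D(PresetU, LocalPos.Z / Scale);
}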
In an application scenario, the step S702 specifically includes: generating, in the illusion engine, a three-dimensional object inheriting from the geographic global object, where the three-dimensional object includes a vertex attribute, a mesh attribute, a material attribute and a feature attribute; the vertex attribute is used to store the contour points on the vector plane, the mesh attribute is used to store the stereoscopic mesh body corresponding to the three-dimensional object, the material attribute is used to store the material of the stereoscopic mesh body, and the feature attribute is used to store the height feature corresponding to the vector plane; and storing the height feature corresponding to the vector plane into the feature attribute, converting the positions corresponding to all contour points on the vector plane into the geographic coordinate system, and storing the geographic coordinates corresponding to all contour points on the vector plane into the vertex attribute to obtain the plane vertices in the three-dimensional object.

Specifically, the illusion engine includes the coordinate system and the geographic global object, AGeoActor, and a three-dimensional object, APolygonActor, inheriting from the geographic global object is generated in the illusion engine, where the three-dimensional object includes a vertex attribute, a mesh attribute, a material attribute and a feature attribute. The vertex attribute stores the contour points on the vector plane, the mesh attribute stores the stereoscopic mesh body corresponding to the three-dimensional object, and the material attribute stores the material of the stereoscopic mesh body, providing corresponding storage space for the contour points, the stereoscopic mesh body and the material; the feature attribute stores the height feature corresponding to the vector plane.
Further, storing the height features corresponding to the vector plane into the feature attributes, converting the positions corresponding to all contour points on the vector plane into a geographic coordinate system, and storing the geographic coordinates corresponding to all contour points on the vector plane into the vertex attributes, so that the contour points on the vector plane are used as plane vertices in the three-dimensional object, and the geographic coordinates corresponding to the contour points are stored in the vertex attributes for calling.
Further, after generating the three-dimensional object in the illusion engine and adding all contour points in the vector plane to the three-dimensional object to obtain the plane vertices in the three-dimensional object, the method further includes: converting the geographic coordinates of all plane vertices in the three-dimensional object into the projection coordinate system using the spatial data conversion library, converting the projection coordinates into the engine global coordinate system, converting the engine global coordinates into the engine local coordinate system, and determining the engine local coordinates of the plane vertices; and taking the engine global coordinate of one plane vertex in the three-dimensional object as the position of the three-dimensional object in the illusion engine.
Specifically, the geographic coordinates corresponding to the plane vertices can be obtained by calling the geographic coordinates stored in the vertex attributes, the geographic coordinates are converted into a projection coordinate system and then into an engine global coordinate system, and the engine global coordinate corresponding to one of the plane vertices is used as the position of the three-dimensional object in the illusion engine, so that the three-dimensional object is matched with the position of the vector plane in the real world.
Optionally, before the vector data obtained based on the remote sensing image is acquired and the vector planes in the vector data are read out, the method further includes: obtaining the coordinate setting attribute corresponding to the engine origin, and, when the coordinate setting attribute is the allowed setting, determining the position information in the vector data and resetting the engine origin of the coordinate system based on the position information; the coordinate setting attribute includes an allowed setting and a prohibited setting.
Specifically, the coordinate setting attribute of the engine origin is determined, if the setting is permitted, the engine origin of the coordinate system is reset based on the position information in the vector data, and the coordinate setting attribute is set to the prohibited setting after the resetting, and if the setting is prohibited, the existing engine origin is used. Furthermore, when the setting is allowed, a new engine origin can be set based on the position information corresponding to the vector data, so that the value of the engine global coordinate of the three-dimensional object corresponding to the vector data can be effectively reduced, the processing load of the illusion engine can be effectively reduced, and the calculation efficiency can be improved.
In some embodiments, please refer to fig. 9, where fig. 9 is a flowchart illustrating an embodiment of the method for constructing an icon based on an illusion engine according to the present application, after generating a virtual model corresponding to a remote sensing image in the illusion engine based on locations of a terrain, a map, a plane, and a solid in a coordinate system, the method further includes:
s901: and acquiring an icon information file, and acquiring at least one icon and an icon position and an icon attribute corresponding to the icon based on the icon information file.
Specifically, an icon information file is obtained, wherein the icon information file is used for creating icons in a ghost engine in batch, and the icon information file comprises at least one icon and icon positions and icon attributes corresponding to the icons.
In an application mode, an icon information file is obtained, all icons and icon positions and icon attributes corresponding to the icons are read from the icon information file, wherein the icon attributes comprise icon styles of the icons.
In another application mode, an icon information file is obtained, icons in the icon information file are used as interest points of the illusion engine, the interest points are traversed to obtain icon positions and icon attributes corresponding to the icons, and the icon attributes comprise icon sizes and icon styles of the icons.
S902: and generating a current icon grid body corresponding to the current icon based on the current icon position and the current icon attribute corresponding to the current icon.
Specifically, traversing all icons in the icon information file, and when a current icon is obtained, generating a current icon mesh corresponding to the current icon in the illusion engine based on the current icon position and the current icon attribute corresponding to the current icon, so as to obtain a plane matched with the current icon attribute of the current icon, so that target icons are generated in batches after traversing is completed, and point location calibration is performed by using the target icons.
In an application mode, the icon attributes comprise the icon style of the icon, a plane matched with the icon style is generated on the point position matched with the current icon position in the illusion engine based on the current icon position and the current icon attributes corresponding to the current icon, the triangular mesh index, the texture coordinate and the normal corresponding to the vertex are determined based on the vertex on the plane matched with the icon style, modeling is carried out based on the vertex, the triangular mesh index, the texture coordinate and the normal corresponding to the plane by utilizing a programmed modeling function, and the current icon mesh body corresponding to the current icon is generated in the illusion engine.
In another application mode, the icon attributes comprise the icon style and the icon size of the icon, a plane matched with the icon style and the icon size is generated on the basis of the current icon position and the current icon attribute corresponding to the current icon in the illusion engine, the triangular mesh index, the texture coordinate and the normal corresponding to the vertex are determined on the basis of the vertex on the plane matched with the icon style and the icon size, modeling is performed on the basis of the vertex, the triangular mesh index, the texture coordinate and the normal corresponding to the plane by using a programming modeling function, and the current icon mesh body corresponding to the current icon is generated in the illusion engine.
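A minimal sketch of building such an icon quad, with the plane lying in the actor's local YZ plane so it can later be rotated to face the camera; all names are illustrative.

// Builds a quad sized from the icon attributes: 4 corner vertices,
// 2 triangles, and UVs spanning the full icon texture.
static void BuildIconQuad(float Width, float Height,
                          TArray<FVector>& OutVerts,
                          TArray<int32>& OutIndices,
                          TArray<FVector2D>& OutUVs)
{
    const float HalfW = Width * 0.5f;
    const float HalfH = Height * 0.5f;
    OutVerts   = { FVector(0.0f, -HalfW, -HalfH), FVector(0.0f,  HalfW, -HalfH),
                   FVector(0.0f,  HalfW,  HalfH), FVector(0.0f, -HalfW,  HalfH) };
    OutIndices = { 0, 1, 2, 0, 2, 3 };
    OutUVs     = { FVector2D(0.0f, 1.0f), FVector2D(1.0f, 1.0f),
                   FVector2D(1.0f, 0.0f), FVector2D(0.0f, 0.0f) };
}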
S903: and converting the current icon attribute into the texture of the current icon mesh body, and generating a target icon corresponding to the current icon in the illusion engine.
Specifically, the attributes of the current icon are converted into the texture of the current icon grid body, the texture is used as a map of the current icon grid body, a target icon corresponding to the current icon is generated in the illusion engine until all icons in the icon information file are traversed, the corresponding icons in the icon information file are generated in batches, and the icon generation efficiency is improved.
In an application mode, a texture corresponding to a current icon grid body is generated in an illusion engine based on features corresponding to attributes of the current icon, the texture is given to the current icon grid body, and a target icon corresponding to the current icon is generated. And the characteristics corresponding to the attributes of the current icon are related to the icon style of the current icon.
In another application, a texture pointer variable is created in the illusion engine, and a feature corresponding to the current icon attribute is stored in the texture pointer variable, so that the icon attribute is converted into a texture of the current icon mesh, and a target icon corresponding to the current icon mesh is generated, wherein the feature corresponding to the current icon attribute is related to the icon style and the icon size of the current icon.
S904: and responding to all the icons in the traversal icon information file, and adjusting all the target icons to be over against the current view angle based on the positions of all the target icons and the current view angle of the illusion engine.
Specifically, when all the icons in the icon information file are traversed, all the target icons are adjusted to be over against the current view angle of the ghost engine based on the respective deviations of all the target icons from the current view angle of the ghost engine, so that the target icons can be observed and used conveniently.
In an application mode, a current visual angle corresponding to a current frame in the illusion engine is determined, positions of all target icons in the current frame are determined, angles of all the target icons are adjusted based on respective deviations of the current target icons relative to the current visual angle of the illusion engine, so that all the target icons are adjusted to be over against the current visual angle of the illusion engine, and further deviations of the target icons and the current visual angle are determined frame by frame, and therefore the target icons are kept to face the current visual angle of a user all the time.
In another application mode, a rotation function module is preset in the illusion engine, and after all icons in the icon information file have been traversed, the rotation function module is called to adjust all the target icons to directly face the current view angle of the illusion engine. The rotation function module connects different function functions based on the blueprint function of the illusion engine, so as to determine the positions of the target icons and the current view angle of the illusion engine, and then adjust the angle of each target icon based on its deviation relative to that view angle.
In this embodiment, the icon positions and icon attributes corresponding to at least one icon are obtained from the icon information file. For each current icon, a current icon mesh body is generated based on the current icon position and the current icon attribute, the current icon attribute is converted into the texture of the current icon mesh body, and the texture is mapped onto the current icon mesh body to generate the target icon corresponding to the current icon. After all the icons in the icon information file have been traversed, the target icons are generated in batches in the illusion engine so that point location calibration can be performed with them, and all the target icons are adjusted to directly face the current view angle based on their positions and the current view angle of the illusion engine, so that the target icons can be observed and used.
In some implementation scenarios, the step S901 specifically includes: reading the icon information file to obtain icon positions and icon attributes corresponding to all icons in the icon information file; the icon attributes comprise icon sizes and icon styles corresponding to the icons, the icon positions comprise point location coordinates corresponding to the icons, and the point location coordinates comprise longitudes, latitudes and altitudes.
Specifically, an icon information file is obtained, all icons in the icon information file and the icon positions and icon attributes corresponding to the icons are read out, and each icon is bound with its corresponding icon position and icon attributes. The icon attributes comprise an icon size and an icon style, the icon position comprises the point location coordinate corresponding to the icon, and the point location coordinate comprises a longitude, a latitude and an altitude, so that the point location coordinate corresponding to the icon is set in a geographic coordinate system and is finally converted into an engine global coordinate in the illusion engine.
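By way of illustration only, if the icon information file were stored as JSON (the patent does not specify the file format), reading it in engine-side C++ could be sketched as follows; the field names ("icons", "lon", "lat", "alt", "style", "size") and all function names are assumptions for illustration:

#include "Misc/FileHelper.h"
#include "Dom/JsonObject.h"
#include "Serialization/JsonReader.h"
#include "Serialization/JsonSerializer.h"

// Hypothetical record for one icon: point location coordinate plus attributes.
struct FIconRecord
{
    double Longitude = 0.0, Latitude = 0.0, Altitude = 0.0; // point location coordinate
    FString Style;                                          // icon style
    float Size = 0.f;                                       // icon size
};

bool LoadIconInfoFile(const FString& FilePath, TArray<FIconRecord>& OutIcons)
{
    FString JsonText;
    if (!FFileHelper::LoadFileToString(JsonText, *FilePath))
    {
        return false;
    }
    TSharedPtr<FJsonObject> Root;
    const TSharedRef<TJsonReader<>> Reader = TJsonReaderFactory<>::Create(JsonText);
    if (!FJsonSerializer::Deserialize(Reader, Root) || !Root.IsValid())
    {
        return false;
    }
    for (const TSharedPtr<FJsonValue>& Value : Root->GetArrayField(TEXT("icons")))
    {
        const TSharedPtr<FJsonObject> Obj = Value->AsObject();
        FIconRecord Icon;
        Icon.Longitude = Obj->GetNumberField(TEXT("lon"));
        Icon.Latitude = Obj->GetNumberField(TEXT("lat"));
        Icon.Altitude = Obj->GetNumberField(TEXT("alt"));
        Icon.Style = Obj->GetStringField(TEXT("style"));
        Icon.Size = (float)Obj->GetNumberField(TEXT("size"));
        OutIcons.Add(Icon);
    }
    return true;
}

Each icon record is then bound to its position and attributes and processed one by one as described above.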
Further, before converting the current icon attribute into the texture of the current icon mesh and generating the target icon corresponding to the current icon in the illusion engine, the method further includes: converting the point location coordinates of the current icon into a geographic coordinate system to obtain geographic coordinates of the current icon in the geographic coordinate system; converting the geographic coordinate corresponding to the current icon to a projection coordinate system by using a spatial data conversion library to obtain a projection coordinate of the current icon in the projection coordinate system; and converting the projection coordinate corresponding to the current icon into an engine global coordinate system to obtain the engine global coordinate of the current icon mesh body corresponding to the current icon in the engine global coordinate system.
Specifically, the longitude, latitude and altitude in the point location coordinate corresponding to the current icon in the icon information file are read, and the position of the point location coordinate in the geographic coordinate system is determined. The geographic coordinate corresponding to the current icon is converted into the projection coordinate system by using a spatial data conversion library to obtain the projection coordinate of the current icon in the projection coordinate system, and the origin coordinate of the engine origin is subtracted from the projection coordinate corresponding to the current icon, so that the projection coordinate is converted into an engine global coordinate and the engine global coordinate of the current icon mesh body in the engine global coordinate system is obtained. In this way, the current icon in the illusion engine corresponds to its position in the real world, which improves the position accuracy of the finally generated target icon.
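A minimal sketch of this three-step conversion, assuming the spatial data conversion library is GDAL/OGR (the patent does not name the library) and using an example UTM projection; the EPSG codes and the engine-origin parameter are illustrative assumptions:

#include "ogr_spatialref.h"

// Geographic (lon/lat/alt) -> projection -> engine global coordinate.
FVector GeographicToEngineGlobal(double Lon, double Lat, double Alt,
                                 const FVector& EngineOriginProjected)
{
    OGRSpatialReference Geographic, Projected;
    Geographic.importFromEPSG(4326);  // geographic coordinate system (WGS84)
    Projected.importFromEPSG(32650);  // assumed projection coordinate system (UTM zone 50N)

    OGRCoordinateTransformation* Transform =
        OGRCreateCoordinateTransformation(&Geographic, &Projected);

    // GDAL 3 follows the authority axis order for EPSG:4326, i.e. latitude first.
    double X = Lat, Y = Lon, Z = Alt;
    Transform->Transform(1, &X, &Y, &Z);
    OGRCoordinateTransformation::DestroyCT(Transform);

    // Engine global coordinate = projection coordinate minus the origin
    // coordinate of the engine origin, as described above.
    return FVector(X, Y, Z) - EngineOriginProjected;
}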
Further, generating a current icon mesh corresponding to the current icon based on the current icon position and the current icon attribute corresponding to the current icon, including: generating a grid vertex set corresponding to the current icon in the illusion engine based on the current icon position and the current icon attribute; wherein the mesh vertex set comprises a plurality of mesh vertices; and generating a current icon grid body corresponding to the current icon based on the grid vertex set.
Specifically, the mesh vertexes corresponding to the current icon are determined in the illusion engine based on the current icon position and the current icon attributes, and a mesh vertex set comprising a plurality of mesh vertexes is generated. Based on the mesh vertex set, the triangular mesh indexes corresponding to the mesh vertex set and the texture coordinate and normal corresponding to each mesh vertex are determined, and the current icon mesh body corresponding to the mesh vertex set is generated by programmed modeling, so that a base frame corresponding to the icon is constructed in the illusion engine and the target icon can then be generated in the illusion engine.
In an application scenario, the grid vertex set includes at least four grid vertices, and a current icon grid body corresponding to a current icon is generated based on the grid vertex set, including: determining a plurality of triangular mesh indexes corresponding to the mesh vertex set based on the positions of all mesh vertices in the mesh vertex set; determining texture coordinates corresponding to the mesh vertex set based on the position of any mesh vertex in the mesh vertex set relative to the positions of other mesh vertices; determining a normal corresponding to each grid vertex in all triangular grid indexes based on the positions of three grid vertices in all triangular grid indexes; and in the illusion engine, generating a current icon grid body corresponding to the current icon based on all the triangular grid indexes, the normal corresponding to each grid vertex in all the triangular grid indexes and the texture coordinates corresponding to the grid vertex set.
Specifically, the mesh body constructed based on the current icon attribute is a rectangle or another polygon, and the mesh vertex set comprises at least four mesh vertexes. For convenience of description, taking a square mesh body as an example, the current icon attribute comprises the icon size, in which the height of the icon is h and the width is w, with h = w. The mesh vertexes in the mesh vertex set are (0, 0, 0), (0, h, 0), (w, h, 0) and (w, 0, 0), numbered 0, 1, 2 and 3 respectively; the triangular mesh indexes of the generated mesh body are (0, 1, 2) and (2, 3, 0), so that the mesh body can be generated based on the triangular mesh indexes; the texture coordinates are (0, 0), (0, 1), (1, 1) and (1, 0), so that texture mapping can be performed; and the normal corresponding to each vertex is calculated so as to determine the illumination direction corresponding to each vertex. The vertex coordinates, triangular mesh indexes, texture coordinates and normals generated in the above steps are input into the programmed modeling function of the illusion engine to generate the current icon mesh body.
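As an illustration of this quad construction, the following is a minimal C++ sketch assuming the illusion engine's procedural mesh component is used as the programmed modeling function; the function name is an assumption, and a constant unit normal is used here for simplicity instead of the per-vertex cross-product computation described below:

#include "ProceduralMeshComponent.h"

void BuildIconQuad(UProceduralMeshComponent* Mesh, float W, float H)
{
    // Mesh vertexes numbered 0, 1, 2 and 3 as in the description above.
    TArray<FVector> Vertices = {
        FVector(0, 0, 0), FVector(0, H, 0), FVector(W, H, 0), FVector(W, 0, 0)
    };
    // Two triangular mesh indexes: (0, 1, 2) and (2, 3, 0).
    TArray<int32> Triangles = { 0, 1, 2, 2, 3, 0 };
    // Texture coordinates for the texture mapping.
    TArray<FVector2D> UVs = {
        FVector2D(0, 0), FVector2D(0, 1), FVector2D(1, 1), FVector2D(1, 0)
    };
    // Every vertex of the planar quad shares the same normal here.
    TArray<FVector> Normals;
    Normals.Init(FVector(0, 0, 1), Vertices.Num());

    Mesh->CreateMeshSection_LinearColor(0, Vertices, Triangles, Normals, UVs,
                                        TArray<FLinearColor>(),
                                        TArray<FProcMeshTangent>(),
                                        /*bCreateCollision=*/false);
}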
In a specific application scenario, determining a normal corresponding to each mesh vertex in all triangular mesh indexes based on positions of three mesh vertices in all triangular mesh indexes comprises: based on the positions corresponding to the three grid vertexes in each triangular grid index, performing cross multiplication on two vectors between any one grid vertex and the other two grid vertexes in each triangular grid index, and performing normalization processing on cross multiplication results to obtain normals corresponding to all the grid vertexes.
Specifically, the normal of each mesh vertex in a triangular mesh index is calculated by cross-multiplying the two vectors formed between the mesh vertex and the other two mesh vertexes on the triangular surface where it is located, and the cross multiplication result is normalized, so that an accurate normal is obtained for each mesh vertex.
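For concreteness, this cross-product normal computation can be sketched as follows (the function and parameter names are illustrative):

// Normal of vertex V in a triangle (V, A, B): cross-multiply the two edge
// vectors from V and normalize the result.
FVector ComputeVertexNormal(const FVector& V, const FVector& A, const FVector& B)
{
    const FVector Edge1 = A - V;
    const FVector Edge2 = B - V;
    return FVector::CrossProduct(Edge1, Edge2).GetSafeNormal();
}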
In some implementation scenarios, the step S903 specifically includes: and creating a texture pointer variable in the illusion engine, and storing the icon style corresponding to the current icon into the texture pointer variable so as to convert the icon style corresponding to the current icon into the texture of the current icon grid body and obtain the target icon corresponding to the current icon.
Specifically, a texture pointer variable is created in the illusion engine, and the icon style corresponding to the current icon is stored into the texture pointer variable, so that the icon style corresponding to the current icon is converted into the texture of the current icon mesh body in the illusion engine. The texture is used as a map of the current icon mesh body to generate the target icon corresponding to the icon style; since the icon style serves as a texture asset of the current icon mesh body, a target icon of a specified style can be accurately created in the illusion engine.
Further, after converting the current icon attribute into the texture of the current icon mesh and generating the target icon corresponding to the current icon in the illusion engine, the method further includes: and creating a material pointer variable in the illusion engine, and generating a material matched with the texture for the current icon grid body in the material pointer variable to obtain the material of the current icon grid body.
Specifically, a material pointer variable is created in the illusion engine, and a material matched with the texture is generated for the current icon mesh body and stored into the material pointer variable, so that the material of the current icon mesh body is obtained. The material comprises an upper-level material node and lower-level nodes, and the parameters corresponding to the texture are assigned, as lower-level nodes, to the inputs of the upper-level node corresponding to the material.
In a specific application scenario, the storage location of the icon picture is read, the picture data corresponding to the icon is read into an array CompressedData through FFileHelper in the illusion engine, a TSharedPtr<IImageWrapper> ImageWrapper variable is created through GetImageWrapperByExtension in the illusion engine, and the data of the array CompressedData is transferred to the ImageWrapper. A pointer variable TextPack of the UPackage type is created for storing the texture asset, a pointer variable NewTexture of the UTexture2D type is newly created in the TextPack for storing texture data, the decompressed data is acquired through the ImageWrapper, the image size, channel and type information is assigned to NewTexture, and the texture asset package TextPack is saved.
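The following is a simplified C++ sketch of this texture-loading flow. It uses the engine's image wrapper module but creates a transient texture instead of a saved UPackage asset, assumes a PNG input, and uses UE5-style accessors, so it approximates rather than reproduces the exact implementation described above:

#include "Misc/FileHelper.h"
#include "Modules/ModuleManager.h"
#include "IImageWrapper.h"
#include "IImageWrapperModule.h"
#include "Engine/Texture2D.h"

UTexture2D* LoadIconTexture(const FString& PicturePath)
{
    // Read the compressed picture data into an array.
    TArray<uint8> CompressedData;
    if (!FFileHelper::LoadFileToArray(CompressedData, *PicturePath))
    {
        return nullptr;
    }

    // Create an image wrapper (PNG assumed here) and hand it the compressed data.
    IImageWrapperModule& Module =
        FModuleManager::LoadModuleChecked<IImageWrapperModule>(FName("ImageWrapper"));
    TSharedPtr<IImageWrapper> ImageWrapper = Module.CreateImageWrapper(EImageFormat::PNG);
    if (!ImageWrapper.IsValid() ||
        !ImageWrapper->SetCompressed(CompressedData.GetData(), CompressedData.Num()))
    {
        return nullptr;
    }

    // Acquire the decompressed pixel data.
    TArray64<uint8> RawData;
    if (!ImageWrapper->GetRaw(ERGBFormat::BGRA, 8, RawData))
    {
        return nullptr;
    }

    // Assign size, channel and type information and upload the pixels.
    UTexture2D* NewTexture = UTexture2D::CreateTransient(
        (int32)ImageWrapper->GetWidth(), (int32)ImageWrapper->GetHeight(), PF_B8G8R8A8);
    void* MipData = NewTexture->GetPlatformData()->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
    FMemory::Memcpy(MipData, RawData.GetData(), RawData.Num());
    NewTexture->GetPlatformData()->Mips[0].BulkData.Unlock();
    NewTexture->UpdateResource();
    return NewTexture;
}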
Further, a material node of the UMaterial type is created and its blend mode is set to translucent; a texture node of the UMaterialExpressionTextureSample type is added to the material node, and the texture is assigned to the texture node; the RGB value of the UMaterialExpressionTextureSample node is input into the material node as the base color, and the Alpha value is input into the material node as the opacity; the material asset is then saved, and the material is assigned to the current icon mesh body.
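An editor-only sketch of this material setup might look as follows. UE4-style expression accessors are assumed; newer engine versions expose the expression list and the material inputs through the material's editor-only data, so the exact accessors vary by version:

#include "Materials/Material.h"
#include "Materials/MaterialExpressionTextureSample.h"

UMaterial* BuildIconMaterial(UObject* Outer, UTexture2D* Texture)
{
    UMaterial* Material = NewObject<UMaterial>(Outer, TEXT("IconMaterial"),
                                               RF_Public | RF_Standalone);
    Material->BlendMode = BLEND_Translucent; // semi-transparent blend mode

    // Texture sample node feeding the material node.
    UMaterialExpressionTextureSample* Sample =
        NewObject<UMaterialExpressionTextureSample>(Material);
    Sample->Texture = Texture;
    Material->Expressions.Add(Sample);

    Material->BaseColor.Expression = Sample; // RGB as the base color input
    Material->Opacity.Connect(4, Sample);    // output 4 (alpha) as the opacity input
    Material->PostEditChange();
    return Material;
}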
In some implementation scenarios, before the step S901, the method further includes: in the illusion engine, a position function used for determining the position of an object, a view angle function used for determining the current view angle of the illusion engine and a rotation function used for determining an angle difference value are connected to generate a rotation function module in the illusion engine.
Specifically, various function functions are connected into a function module by means of the blueprint function of the illusion engine. In order to realize the function of rotating with the current view angle, a position function for determining the position of an object, a view angle function for determining the current view angle of the illusion engine and a rotation function for determining the angle difference value are connected to obtain the rotation function module, so that the target icons generated in batches can be adjusted uniformly, which improves the convenience and efficiency of the adjustment.
Further, in response to traversing all icons in the icon information file, adjusting all target icons to be directly opposite to the current perspective based on the positions of all target icons and the current perspective of the illusion engine, including: responding to all icons in the traversal icon information file, calling a rotating function module, determining the positions of all target icons by using a position function, and determining the current visual angle of the illusion engine by using a visual angle function; and determining the angle difference value of the positions of all the target icons relative to the current visual angle by using a rotation function, and adjusting all the target icons to be over against the current visual angle based on the angle difference value corresponding to each image.
Specifically, when all the icons in the icon information file have been traversed, the rotation function module is called, the positions of all the target icons are determined by using the position function, the current view angle of the illusion engine is determined by using the view angle function, the angle difference value of each target icon's position relative to the current view angle is determined by using the rotation function, and all the target icons are adjusted to directly face the current view angle based on the angle difference value corresponding to each icon, so that the target icons can be conveniently observed and used.
Further, the rotation function module is called in each frame of the illusion engine to ensure that the target icons in the illusion engine always directly face the current view angle, so that point location calibration can be performed by using the target icons.
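As a sketch, this per-frame facing adjustment could be implemented in an actor's Tick as follows; the free-function form and the use of the player camera as the current view angle are illustrative assumptions:

#include "Kismet/GameplayStatics.h"
#include "Kismet/KismetMathLibrary.h"

// Called every frame for each target icon actor.
void FaceCurrentView(AActor* IconActor)
{
    const APlayerCameraManager* Camera =
        UGameplayStatics::GetPlayerCameraManager(IconActor->GetWorld(), 0);
    if (!Camera)
    {
        return;
    }
    // Angle difference between the icon position and the current view angle.
    const FRotator LookAt = UKismetMathLibrary::FindLookAtRotation(
        IconActor->GetActorLocation(), Camera->GetCameraLocation());
    IconActor->SetActorRotation(LookAt);
}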
In some implementation scenarios, referring to fig. 10, fig. 10 is a schematic flowchart of an embodiment of a method for building a thermodynamic diagram based on the illusion engine according to the present application. In response to traversing all the icons in the icon information file, after adjusting all the target icons to directly face the current view angle based on the positions of all the target icons and the current view angle of the illusion engine, the method further includes:

S11: acquiring a thermodynamic information file corresponding to the thermodynamic diagram, and acquiring a plurality of radiation points and the point positions and radiation attributes corresponding to the radiation points based on the thermodynamic information file.
Specifically, a thermodynamic information file corresponding to the thermodynamic diagram is obtained, wherein the thermodynamic information file is used for creating a target thermodynamic diagram in the illusion engine so as to determine density information in the illusion engine, and the thermodynamic information file comprises a plurality of radiation points and the point positions and radiation attributes corresponding to the radiation points.
In an application mode, a thermodynamic information file corresponding to the thermodynamic diagram is obtained, all radiation points, point positions and radiation attributes corresponding to the radiation points are read from the thermodynamic information file, and the radiation attributes comprise radiation ranges of the radiation points.
In another application mode, a thermodynamic information file corresponding to the thermodynamic diagram is obtained, the radiation points are used as interest points, the interest points are traversed to obtain point position and radiation attributes corresponding to the radiation points, the radiation attributes comprise the maximum radiation radius and radiation intensity of the radiation points, and the radiation radius with the radiation points as the center can be determined based on the maximum radiation radius and the radiation intensity.
S12: and determining the corresponding thermodynamic size of the thermodynamic diagram in the illusion engine based on the point position corresponding to all the radiation points.
Specifically, the point positions corresponding to all the radiation points are converted into the illusion engine, and the thermodynamic size corresponding to the thermodynamic diagram is determined in the illusion engine.
In an application mode, the point positions corresponding to all the radiation points are converted into the engine global coordinate system of the illusion engine, and the thermodynamic size corresponding to the thermodynamic diagram in the illusion engine is determined based on the engine global coordinates of all the radiation points.
In another application mode, one radiation point is extracted from all radiation points to serve as an appointed radiation point, the point position corresponding to the appointed radiation point is converted into an engine global coordinate system of the illusion engine, the engine global coordinate of the appointed radiation point serves as the engine global coordinate of the thermodynamic diagram in the illusion engine, the engine local coordinates of all the radiation points in an engine local coordinate system of the illusion engine are determined, and the thermodynamic size corresponding to the thermodynamic diagram in the illusion engine is determined based on the engine local coordinates of all the radiation points.
S13: and determining the radiation values of all the radiation points and the radiated points in the range of the thermal dimension based on the radiation properties corresponding to all the radiation points, wherein the radiated points are different from the radiation points.
Specifically, the range that all the radiation points can influence is determined based on the radiation attributes of all the radiation points, and the radiation values of all the radiation points and radiated points are determined within the range of the thermodynamic size. The radiated points differ from the radiation points in that they do not have radiation attributes of their own; when points are close to each other, their radiation values are superposed.
In an application mode, the radiation attribute comprises a radiation range of a radiation point, the radiation value in the radiation range linearly decreases by taking the radiation point as a center, the radiation values of all the radiation points and the radiation points are determined in a range of thermal dimension based on the radiation range corresponding to the radiation point, all the radiation values corresponding to the same radiation point or radiation point are superposed, and the final radiation value on each point is determined.
In another application mode, the radiation attributes comprise the maximum radiation radius and the radiation intensity of the radiation points, the radiation values of all the radiation points and the radiation-receiving points are determined within the range of the thermal dimension based on the product between the maximum radiation radius and the radiation intensity corresponding to the radiation points, all the radiation values corresponding to the same radiation point or radiation-receiving point are superposed, and the final radiation value at each point is determined.
S14: and generating thermal texture corresponding to the thermodynamic diagrams based on the radiation values corresponding to the radiation points and the radiation points, setting corresponding thermal materials for the thermal texture, and generating target thermodynamic diagrams corresponding to the thermodynamic diagrams in the illusion engine.
Specifically, based on the radiation values corresponding to the radiation points and the radiated points, the textures corresponding to the radiation values are determined, the thermal texture corresponding to the thermodynamic diagram is generated, and a corresponding thermal material is set for the thermal texture, so that the target thermodynamic diagram corresponding to the thermodynamic diagram is generated in the illusion engine. In this way, whether point locations exist, or the differences among a plurality of point locations, can be displayed in the illusion engine, and whether correlations exist among different point locations can be detected.
In an application mode, thermal texture corresponding to radiation values is determined in the illusion engine based on the radiation values of the radiation points and the radiation receiving points, the radiation values are graded, corresponding thermal materials are set for the thermal texture based on grading results, and therefore a target thermodynamic diagram corresponding to the thermodynamic diagram is generated in the illusion engine.
In another application mode, a texture pointer variable is created in the illusion engine, and the radiation values of the radiation points and the radiated points are stored into the texture pointer variable so as to convert the radiation values into the thermal texture of the point locations; a material pointer variable is created, the thermal material of each point location is determined based on the magnitude of the radiation value of each radiation point and radiated point, and the thermal material is stored into the material pointer variable, so that the target thermodynamic diagram corresponding to the thermodynamic diagram is generated in the illusion engine.
In this embodiment, a plurality of radiation points and the point positions and radiation attributes corresponding to the radiation points are obtained from the thermodynamic information file; the thermodynamic size of the thermodynamic diagram in the illusion engine is determined based on the point positions of all the radiation points; the radiation values of all the radiation points and radiated points are determined within the range of the thermodynamic size based on the radiation attributes of all the radiation points; and the thermal texture corresponding to the thermodynamic diagram is generated based on the radiation values corresponding to all the radiation points and radiated points, with a corresponding thermal material set for the thermal texture, so that the target thermodynamic diagram corresponding to the thermodynamic diagram is generated in the illusion engine. Density information can thereby be determined in the illusion engine, whether point locations exist or the differences among a plurality of point locations can be observed, and whether correlations exist among different point locations can be detected.
In some implementation scenarios, the step S11 specifically includes: reading a thermal information file, and storing all radiation points in the thermal information file, point positions and radiation attributes corresponding to the radiation points in a memory of the illusion engine; wherein the point location comprises longitude and latitude of the radiation point, and the radiation property comprises maximum radiation radius and radiation intensity of the radiation point.
Specifically, the thermodynamic information file is read, and all the radiation points in the file together with their corresponding point positions and radiation attributes are read out and stored in the memory of the illusion engine, so that each radiation point and its corresponding point position and radiation attributes can be called. The radiation attributes comprise the maximum radiation radius and the radiation intensity of the radiation point; the radiation intensity defines how the radiation of the radiation point varies, and the maximum radiation radius defines the maximum distance of outward radiation centered on the radiation point.
Further, the point location of each radiation point comprises longitude and latitude, which are used for locating the position of the radiation point, and the point location of the radiation point can be customized and modified, so that the finally formed target thermodynamic diagram can be arranged at any position in the illusion engine.
In an application scenario, the point location of each radiation point is set in the WGS84 coordinate system, the maximum radiation radius of each radiation point is in units of pixels, and each pixel corresponds to an actual length unit.
Further, before determining the thermodynamic size corresponding to the thermodynamic diagram in the illusion engine based on the point positions corresponding to all the radiation points, the method further includes: converting the point position of each radiation point into the geographic coordinate system to obtain the geographic coordinate of each radiation point in the geographic coordinate system; converting the geographic coordinate corresponding to each radiation point into the projection coordinate system by using the spatial data conversion library to obtain the projection coordinate of each radiation point in the projection coordinate system; and converting the projection coordinate corresponding to each radiation point into the engine global coordinate system to obtain the engine global coordinate of each radiation point in the engine global coordinate system.
Specifically, the longitude and latitude in the point position corresponding to each radiation point are read, and the location of the point position in the geographic coordinate system is determined. The geographic coordinate corresponding to each radiation point is converted into the projection coordinate system by using the spatial data conversion library to obtain the projection coordinate of the radiation point in the projection coordinate system, and the origin coordinate of the engine origin is subtracted from the projection coordinate corresponding to the radiation point, so that the projection coordinate is converted into an engine global coordinate and the engine global coordinate of each radiation point in the engine global coordinate system is obtained. In this way, a radiation point defined by a location in the geographic coordinate system can finally be converted into the illusion engine, which improves the degree of freedom with which the target thermodynamic diagram can be placed in the illusion engine.
In an implementation scenario, determining the thermodynamic size corresponding to the thermodynamic diagram in the illusion engine based on the point positions corresponding to all the radiation points includes: determining the circumscribed rectangle corresponding to all the radiation points based on the radiation attributes corresponding to the radiation points and the engine global coordinates of the radiation points in the engine global coordinate system; and taking the size of the circumscribed rectangle as the thermodynamic size of the thermodynamic diagram in the illusion engine, and determining the engine global coordinate of the thermodynamic diagram in the engine global coordinate system based on the engine global coordinate corresponding to a preset corner of the circumscribed rectangle.
Specifically, engine global coordinates corresponding to all the radiation points are traversed, an engine global coordinate extreme value including a maximum value and a minimum value in all the radiation points is determined, an influence range of a point position corresponding to the engine global coordinate extreme value is determined based on the maximum radiation radius of the radiation points, a circumscribed rectangle corresponding to all the radiation points is determined based on the influence range, the size of the circumscribed rectangle is used as the thermodynamic size of the thermodynamic diagram in the illusion engine, the engine global coordinates corresponding to preset corners of the circumscribed rectangle are determined, and the engine global coordinates corresponding to the preset corners are used as the engine global coordinates of the thermodynamic diagram in an engine global coordinate system.
In a specific application scenario, the preset corner is the upper left corner of the circumscribed rectangle, and in other application scenarios, the preset corner may also be a corner in other directions.
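A minimal sketch of this circumscribed-rectangle computation follows; the FRadiationPoint structure, the function name and the axis convention for the preset corner are illustrative assumptions:

// Hypothetical record for one radiation point in engine global coordinates.
struct FRadiationPoint
{
    FVector2D EngineGlobal; // engine global coordinate (X, Y)
    float MaxRadius = 0.f;  // maximum radiation radius
    float Intensity = 0.f;  // radiation intensity
};

void ComputeThermalRect(const TArray<FRadiationPoint>& Points,
                        FVector2D& OutPresetCorner, FVector2D& OutThermalSize)
{
    FBox2D Bounds(ForceInit);
    for (const FRadiationPoint& P : Points)
    {
        // The influence range of each point is its position +/- the maximum radius.
        Bounds += P.EngineGlobal - FVector2D(P.MaxRadius, P.MaxRadius);
        Bounds += P.EngineGlobal + FVector2D(P.MaxRadius, P.MaxRadius);
    }
    OutPresetCorner = Bounds.Min;             // preset corner of the circumscribed rectangle
    OutThermalSize = Bounds.Max - Bounds.Min; // thermodynamic size of the thermodynamic diagram
}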
In another implementation scenario, the determining the radiation values of all the radiation points and the radiated points within the range of the thermal dimensions based on the radiation properties corresponding to all the radiation points includes: generating a radiation matrix corresponding to the thermodynamic diagram based on the thermodynamic size; determining a matrix point position of each radiation point in the radiation matrix based on an engine global coordinate of each radiation point in an engine global coordinate system; and determining the radiation values of all the radiation points and all the radiated points in the radiation matrix based on the matrix point positions, the radiation intensities and the maximum radiation radiuses corresponding to the radiation points.
Specifically, an array whose size matches the thermodynamic size is initialized as the radiation matrix corresponding to the thermodynamic diagram, and each radiation point is traversed. The matrix point position of each radiation point in the radiation matrix is determined based on the engine global coordinate of the radiation point in the engine global coordinate system, i.e., each radiation point is converted into the radiation matrix. The radiation range of the current radiation point in the radiation matrix is then determined based on the maximum radiation radius and radiation intensity of the current radiation point, and the radiation values corresponding to all points within that radiation range are determined. After all the radiation points in the radiation matrix have been traversed, when the same radiation point or radiated point corresponds to a plurality of radiation values, all the radiation values corresponding to that point are superposed, and the final radiation value of each radiation point and radiated point is determined, thereby ensuring that the radiation values corresponding to all the radiation points and radiated points are determined within the range corresponding to the thermodynamic size.
In an application scenario, determining radiation values of all radiation points and all radiated points in a radiation matrix based on matrix point positions, radiation intensities and maximum radiation radiuses corresponding to the radiation points comprises: in response to traversing to the current radiation point, determining a radiation range corresponding to the current radiation point and radiation receiving points in the radiation range based on a matrix point position and a maximum radiation radius corresponding to the current radiation point, wherein the current radiation point comprises radiation points sequentially extracted from a radiation matrix; determining sub-radiation values of all radiation receiving points in a radiation range corresponding to the current radiation point based on the distance between each radiation receiving point and the current radiation point, and the radiation intensity and the maximum radiation radius corresponding to the current radiation point; and in response to traversing all the radiation points in the radiation matrix, adding the radiation original value corresponding to each radiation point and all the sub-radiation values to obtain a radiation value corresponding to each radiation point, and adding all the sub-radiation values corresponding to each radiated point to obtain a radiation value corresponding to each radiated point.
Specifically, sequentially traversing the radiation points from the radiation matrix in order, after obtaining the current radiation point, determining a radiation range corresponding to the current radiation point and radiation receiving points in the radiation range based on the matrix point location and the maximum radiation radius corresponding to the current radiation point, wherein when traversing to the current radiation point, all the points in the radiation range of the current radiation point are radiation receiving points, and therefore, the radiation receiving points may be other radiation points different from the current radiation point or radiated points.
Furthermore, traversing each radiation receiving point in the radiation range, calculating the distance from each radiation receiving point in the radiation range to the current radiation point, calculating the sub-radiation value of each radiation receiving point in the radiation range from the current radiation point according to the distance, after traversing all the radiation points in the radiation matrix, adding the radiation original value corresponding to each radiation point and all the sub-radiation values which can be received to obtain the radiation value corresponding to each radiation point, and adding all the sub-radiation values which can be received and correspond to each radiated point to obtain the radiation value corresponding to each radiated point, so as to improve the accuracy of the radiation values of each radiation point and radiated point. The above process is formulated as follows:
V = S × (1 − clamp(D/R, 0, 1))

wherein V represents the sub-radiation value contributed to the current pixel point, S represents the radiation intensity, D represents the distance from the current pixel point to the radiation point, R represents the maximum radiation radius, and clamp(D/R, 0, 1) truncates the ratio of D to R to the interval [0, 1]: 0 is taken when the ratio is less than 0, and 1 is taken when the ratio is greater than 1.
Optionally, when the radiation value corresponding to any radiation point or radiation-receiving point exceeds the radiation upper limit value, the radiation value corresponding to the point is set as the radiation upper limit value, so as to prevent the radiation value from overflowing.
In a specific application scenario, the radiation matrix corresponds to an FColor-type array whose size matches the thermodynamic size, initialized with RGB values of 0 and a transparency of 255, and the radiation upper limit value is 255.
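The accumulation described above can be sketched as follows, assuming a row-major radiation matrix of floating-point values, the sub-radiation formula given earlier, and the upper limit of 255; all names are illustrative:

// Superpose the contribution of one radiation point (center Cx, Cy; intensity S;
// maximum radiation radius R, in matrix cells) onto the radiation matrix.
void AccumulateRadiation(TArray<float>& Matrix, int32 Width, int32 Height,
                         int32 Cx, int32 Cy, float S, float R)
{
    const int32 IR = FMath::CeilToInt(R);
    for (int32 Y = FMath::Max(0, Cy - IR); Y <= FMath::Min(Height - 1, Cy + IR); ++Y)
    {
        for (int32 X = FMath::Max(0, Cx - IR); X <= FMath::Min(Width - 1, Cx + IR); ++X)
        {
            const float D = FMath::Sqrt((float)((X - Cx) * (X - Cx) + (Y - Cy) * (Y - Cy)));
            // Sub-radiation value V = S * (1 - clamp(D / R, 0, 1)).
            const float Sub = S * (1.f - FMath::Clamp(D / R, 0.f, 1.f));
            float& Cell = Matrix[Y * Width + X];
            Cell = FMath::Min(Cell + Sub, 255.f); // superpose, capped at the upper limit
        }
    }
}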
In some implementation scenarios, the step S14 specifically includes: creating texture pointer variables in the illusion engine, and storing radiation values corresponding to the radiation points and the radiation points into the texture pointer variables to obtain thermal textures corresponding to the thermodynamic diagrams; grading each radiation point based on the radiation value corresponding to each radiation point and each radiated point to obtain a plurality of radiation grading sets; each radiation grading set corresponds to a radiation value interval, and radiation values corresponding to radiation points and/or radiated points in the same radiation grading set belong to the same radiation value interval; setting material patterns matched with the radiation grading sets for the thermal textures corresponding to the radiation points and/or the radiated points in each radiation grading set to obtain thermal materials corresponding to the thermal textures; wherein, the material style is related to the radiation value interval; and creating a material pointer variable in the illusion engine, storing the thermal material into the material pointer variable, and generating a target thermodynamic diagram corresponding to the thermodynamic diagram.
Specifically, a texture pointer variable is created in the illusion engine, and radiation values corresponding to each radiation point and each radiated point are stored in the texture pointer variable, so that the radiation values are converted into thermal textures, and thermal textures corresponding to the thermal maps are obtained.
Furthermore, the radiation points are graded according to the radiation values corresponding to the radiation points and the radiated points to obtain a plurality of radiation grading sets. Each radiation grading set corresponds to a radiation value interval, and the radiation values corresponding to the radiation points and/or radiated points in the same radiation grading set belong to the same radiation value interval; that is, the same radiation grading set may include only radiation points, only radiated points, or both. The thermal texture corresponding to the radiation points and/or radiated points in each radiation grading set is given a material style matched with that radiation grading set, so that the thermal material corresponding to the thermal texture is obtained, wherein the material style is related to the radiation value interval; that is, each radiation grading set corresponds to one material style, and the radiation points and/or radiated points in the same radiation grading set are given the same material style, so that the gradient change of the material can be observed. A material pointer variable is created in the illusion engine, and the thermal material is stored into the material pointer variable to generate the target thermodynamic diagram corresponding to the thermodynamic diagram.
In a specific application scenario, a pointer variable Texture2D of the UTexture2D type is newly created for storing texture data, a pointer variable Mip of the FTexture2DMipMap type is created, the RGBA values of the radiation intensity matrix are written into the BulkData of Mip, and Mip is assigned to the Mips of the PlatformData of Texture2D.
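A simplified sketch of writing the radiation matrix into a thermal texture follows. It uses a transient UTexture2D with UE5-style accessors instead of the saved asset described above, and encoding the radiation value into the alpha channel is an assumption consistent with the FColor initialization mentioned earlier:

#include "Engine/Texture2D.h"

UTexture2D* BuildThermalTexture(const TArray<float>& Matrix, int32 Width, int32 Height)
{
    // Convert each radiation value into an FColor; RGB stays 0 and the
    // radiation value drives the alpha channel.
    TArray<FColor> Pixels;
    Pixels.Reserve(Matrix.Num());
    for (float V : Matrix)
    {
        Pixels.Add(FColor(0, 0, 0, (uint8)FMath::Clamp(FMath::RoundToInt(V), 0, 255)));
    }

    UTexture2D* Texture2D = UTexture2D::CreateTransient(Width, Height, PF_B8G8R8A8);
    FTexture2DMipMap& Mip = Texture2D->GetPlatformData()->Mips[0];
    void* BulkData = Mip.BulkData.Lock(LOCK_READ_WRITE);
    FMemory::Memcpy(BulkData, Pixels.GetData(), Pixels.Num() * sizeof(FColor));
    Mip.BulkData.Unlock();
    Texture2D->UpdateResource();
    return Texture2D;
}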
Furthermore, a material pointer variable is newly created, the material node in the material pointer variable is set to a non-luminous (unlit) attribute, and four constant nodes are then added for setting the material styles, the color of each material style being adjustable. The radiation value interval with the largest values corresponds to red and the radiation value interval with the smallest values corresponds to green, so that whether point locations exist, or the differences among a plurality of point locations, can be observed in the illusion engine, and whether correlations exist among different point locations can be detected.
Optionally, after generating a target thermodynamic diagram corresponding to the thermodynamic diagram in the illusion engine, the method further includes: generating a thermodynamic object corresponding to the thermodynamic diagram in the illusion engine; the thermal object comprises a position attribute, a grid attribute and a material attribute; generating a mesh vertex set corresponding to the thermodynamic diagram in the illusion engine based on the thermodynamic size; wherein the mesh vertex set comprises a plurality of mesh vertices; generating a thermal grid body corresponding to the thermal object based on the grid vertex set; setting thermal texture and thermal material for the thermal grid body to generate a target thermodynamic diagram corresponding to the thermal grid body; the position attribute is used for storing point positions corresponding to the radiation points, the grid attribute is used for storing thermal grid bodies corresponding to the thermal objects, and the material attribute is used for storing thermal materials.
Specifically, a thermal object ADynamicActor matched with the raster data is generated in the illusion engine, and the thermal object ADynamicActor inherits from the base class object AActor of the illusion engine, wherein the thermal object comprises a position attribute, a grid attribute and a material attribute.
Further, a mesh body matched with the thermal size is generated in the illusion engine, the vertex of the mesh body is determined to obtain a mesh body vertex set, the mesh vertex set comprises a plurality of mesh vertices, a triangular mesh index corresponding to the mesh vertex set and texture coordinates and a normal line corresponding to each mesh vertex in the mesh vertex set are determined based on the mesh vertex set, the thermal mesh body corresponding to the mesh vertex set is generated through programmed modeling, thermal textures and thermal materials are set for the thermal mesh body, a target thermodynamic diagram corresponding to the thermal mesh body is generated, and the target thermodynamic diagram can be created in the illusion engine at a position corresponding to the thermal mesh body through the creation of the thermal mesh body, so that the target thermodynamic diagram can be called independently.
It can be understood that the position attribute is used for storing the point location corresponding to each radiation point, the grid attribute is used for storing the thermal grid body corresponding to the thermal object, and the material attribute is used for storing the thermal material, so that a storage space is provided for the point location corresponding to the radiation point, the thermal grid body corresponding to the thermal object, and the thermal material.
In an application scenario, the mesh vertex set includes at least four mesh vertices, and a thermal mesh body corresponding to a thermal object is generated based on the mesh vertex set, including: determining a plurality of triangular mesh indexes corresponding to the mesh vertex set based on the positions of all mesh vertices in the mesh vertex set; determining texture coordinates corresponding to the mesh vertex set based on the position of any mesh vertex in the mesh vertex set relative to the positions of other mesh vertices; determining a normal corresponding to each grid vertex in all triangular grid indexes based on the positions of three grid vertices in all triangular grid indexes; in the illusion engine, a thermal mesh volume corresponding to the thermal object is generated based on all the triangular mesh indices and the normal and texture coordinates corresponding to each mesh vertex.
Specifically, the mesh body constructed based on the thermodynamic size is a rectangle or another polygon, and the mesh vertex set comprises at least four mesh vertexes. For convenience of description, taking a square mesh body as an example, the thermodynamic size has a height h and a width w, with h = w. The mesh vertexes in the mesh vertex set are (0, 0, 0), (0, h, 0), (w, h, 0) and (w, 0, 0), numbered 0, 1, 2 and 3 respectively; the triangular mesh indexes of the generated mesh body are (0, 1, 2) and (2, 3, 0), so that the mesh body can be generated based on the triangular mesh indexes; the texture coordinates are (0, 0), (0, 1), (1, 1) and (1, 0), so that texture mapping can be performed; and the normal corresponding to each vertex is calculated so as to determine the illumination direction corresponding to each vertex. The vertexes, triangular mesh indexes, texture coordinates and normals are input into the programmed modeling function of the illusion engine to generate the thermal mesh body.
In a specific application scenario, determining a normal corresponding to each mesh vertex in all triangular mesh indexes based on positions of three mesh vertices in all triangular mesh indexes includes: based on the positions corresponding to the three grid vertexes in each triangular grid index, performing cross multiplication on two vectors between any grid vertex and the other two grid vertexes in each triangular grid index, and performing normalization processing on cross multiplication results to obtain the normal corresponding to all the grid vertexes.
Specifically, the normal of each mesh vertex in a triangular mesh index is calculated by cross-multiplying the two vectors formed between the mesh vertex and the other two mesh vertexes on the triangular surface where it is located, and the cross multiplication result is normalized, so that an accurate normal is obtained for each mesh vertex.
It can be understood that after the target icon and the target thermodynamic diagram are obtained, the target icon and the target thermodynamic diagram are added to the virtual model so as to facilitate point location calibration and determination of density information.
Further, after adding the target icon and the target thermodynamic diagram in the virtual model, the method further comprises the following steps: starting a weather control plug-in the illusion engine, and constructing a weather system in the virtual model by using the weather control plug-in; wherein the weather system includes a plurality of weather conditions.
Specifically, a set of weather presets is created in the weather control plug-in, each preset being equivalent to a group of parameters for a specific weather condition, including weather such as sunny days, cloudy days, rainy days, snowy days and heavy fog. By setting the planet radius, the editing time and the weather preset selection in the weather control plug-in, the weather condition of the weather system at a specified time can be called up, and the parameter setting interface of the weather system is passed to the operation interface through UMG (Unreal Motion Graphics), so that the weather system can be switched at runtime, which improves the fidelity of the virtual model.
In a specific application scenario, the weather control plug-in is the SkyCreator plug-in for the illusion engine, which creates a sky dome matched with the earth's atmosphere and simulates real weather conditions by setting related attributes such as the sky dome, sky atmosphere, volumetric cloud, sky light, sun, moon, exponential height fog, star map, occlusion, wind, weather special effects, weather material special effects and contact.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an embodiment of an electronic device of the present application, where the electronic device 110 includes a memory 1101 and a processor 1102 coupled to each other, where the memory 1101 stores program data (not shown), and the processor 1102 calls the program data to implement the method in any of the embodiments described above, and for a description of relevant contents, reference is made to the detailed description of the method embodiment described above, which is not described again here.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an embodiment of a computer-readable storage medium 120 of the present application, the computer-readable storage medium 120 stores program data 1200, and when the program data 1200 is executed by a processor, the method in any of the above embodiments is implemented, and for a description of relevant contents, reference is made to the detailed description of the above method embodiments, which is not repeated here.
It should be noted that, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (12)

1. A method for constructing a plane based on an illusion engine is characterized by comprising the following steps:
obtaining vector data obtained based on a remote sensing image, and reading a vector line and a vector plane in the vector data;
generating a planar object in the illusion engine;
responding to the plane object corresponding to the vector line, adding all contour points in the vector line into the plane object to obtain a first original vertex in the plane object, performing width expansion on the first original vertex in the plane object to obtain a newly added vertex, and obtaining a plane grid body corresponding to the vector line based on the first original vertex and the newly added vertex;
responding to the planar object corresponding to the vector plane, adding all contour points in the vector plane into the planar object to obtain a second original vertex in the planar object, and obtaining a planar grid body corresponding to the vector plane based on the second original vertex;
and assigning materials for the plane grids corresponding to the vector lines or the vector planes, and generating planes corresponding to the vector data in the illusion engine.
2. The illusion-engine-based plane construction method of claim 1, wherein the obtaining of vector data obtained based on a remote sensing image and reading out of vector lines and vector planes in the vector data comprises:
reading the remote sensing image by using a spatial data conversion library to obtain the vector data and the corresponding data attribute thereof; wherein the data attributes comprise a plane data attribute and a line data attribute;
and reading out a vector line corresponding to the line data attribute and a vector plane corresponding to the plane data attribute from the vector data based on the data attribute of the vector data.
3. The illusion-engine-based plane construction method of claim 1,
the illusion engine corresponds to a coordinate system and a geographic global object, the coordinate system comprises a geographic coordinate system, a projection coordinate system, an engine global coordinate system and an engine local coordinate system, and the geographic global object comprises an engine origin and a conversion relation between any two coordinate systems in the coordinate system;
the geographic coordinate system and the projection coordinate system are related to a global positioning system, the engine global coordinate system is used for positioning any object in the illusion engine, the engine origin corresponds to an origin coordinate in the projection coordinate system, the engine global coordinate in the engine global coordinate system is obtained by subtracting the origin coordinate from the projection coordinate in the projection coordinate system, and the engine local coordinate in the engine local coordinate system is determined based on an offset value of a vertex on the current object relative to the engine global coordinate of the current object; the first original vertex, the newly added vertex and the second original vertex each comprise an engine local coordinate in the engine local coordinate system of the illusion engine.
4. The illusion-engine-based plane construction method of claim 3, wherein the obtaining of the plane mesh corresponding to the vector line based on the first original vertex and the newly added vertex comprises:
performing triangulation based on the engine local coordinates corresponding to the first original vertex and the newly added vertex to obtain a plurality of triangular mesh indexes corresponding to the planar object, and storing the vertexes in each triangular mesh index in a specified in-plane direction; wherein each triangular mesh index comprises three mesh vertexes, and the mesh vertexes correspond to the first original vertex and the newly added vertex;
determining a normal corresponding to each mesh vertex in all the triangular mesh indexes and a texture coordinate corresponding to each mesh vertex based on the position of each mesh vertex in the illusion engine;
and in the illusion engine, generating a planar grid body corresponding to the vector line based on all the triangular grid indexes and the normal line and texture coordinates corresponding to each grid vertex.
5. The illusion-engine-based plane construction method of claim 3, wherein the obtaining of the plane mesh corresponding to the vector plane based on the second original vertex comprises:
triangulating based on the engine local coordinates corresponding to the second original vertex to obtain a plurality of triangular mesh indexes corresponding to the planar object, and storing the vertexes in each triangular mesh index in a specified in-plane direction; wherein each of the triangular mesh indexes includes three mesh vertices corresponding to the second original vertex;
determining a normal corresponding to each mesh vertex in all the triangular mesh indexes and a texture coordinate corresponding to each mesh vertex based on the position of each mesh vertex in the illusion engine;
and in the illusion engine, generating a plane grid body corresponding to the vector plane based on all the triangular grid indexes and the normal line and texture coordinates corresponding to each grid vertex.
6. The illusion-engine-based plane construction method of claim 4 or 5, wherein the determining the normal corresponding to each mesh vertex in all the triangular mesh indexes and the texture coordinate corresponding to each mesh vertex based on the position of each mesh vertex in the illusion engine comprises:
based on engine local coordinates corresponding to three grid vertexes in each triangular grid index, cross-multiplying two vectors between any grid vertex and other two grid vertexes in each triangular grid index, and normalizing the cross-multiplication result to obtain normals corresponding to all the grid vertexes;
and scaling the local engine coordinate corresponding to each grid vertex to a texture coordinate range to obtain the texture coordinate corresponding to each grid vertex.
7. The illusion-engine-based plane construction method of claim 6, wherein the scaling the engine local coordinates corresponding to each of the mesh vertices to a texture coordinate range to obtain texture coordinates corresponding to each of the mesh vertices comprises:
and respectively carrying out scaling processing on the transverse coordinates and the longitudinal coordinates of the engine local coordinates corresponding to each grid vertex to obtain the transverse components and the longitudinal components of the texture coordinates corresponding to each grid vertex, and determining the texture coordinates corresponding to all the grid vertices.
8. A phantom engine based plane construction method according to claim 3, wherein said generating a plane object in said phantom engine comprises:
generating in said illusion engine a planar object that inherits said geographic global object; the plane object comprises a vertex attribute, a mesh attribute and a material attribute; the vertex attribute is used for storing the vector line or the contour point on the vector plane, the grid attribute is used for storing a planar grid body corresponding to the vector line or the vector plane, and the material attribute is used for storing the material of the planar grid body;
the adding all contour points in the vector line to the planar object to obtain a first original vertex in the planar object comprises:
converting the positions corresponding to all the contour points on the vector line into the geographic coordinate system, and storing the geographic coordinates corresponding to all the contour points on the vector line into the vertex attribute to obtain the first original vertex in the planar object;
and the adding all contour points in the vector plane to the planar object to obtain a second original vertex in the planar object comprises:
converting the positions corresponding to all the contour points on the vector plane into the geographic coordinate system, and storing the geographic coordinates corresponding to all the contour points on the vector plane into the vertex attribute to obtain the second original vertex in the planar object.
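The planar object of claim 8 could be sketched in C++ as follows; FGeoGlobalObject stands in for the geographic global object of the earlier claims and is an assumed name, not an engine type, and a real Unreal actor would additionally carry the UCLASS()/GENERATED_BODY() boilerplate omitted here.

    #include "ProceduralMeshComponent.h"
    #include "Materials/MaterialInterface.h"

    // Hypothetical planar object with the three attributes named in claim 8.
    struct FPlanarObject : FGeoGlobalObject
    {
        // Vertex attribute: contour points stored as geographic coordinates.
        TArray<FVector>           VertexAttribute;
        // Mesh attribute: the planar mesh body built from those vertices.
        UProceduralMeshComponent* MeshAttribute = nullptr;
        // Material attribute: the material assigned to the mesh body.
        UMaterialInterface*       MaterialAttribute = nullptr;
    };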
9. The illusion-engine-based plane construction method of claim 8, wherein the performing width expansion on the first original vertex in the planar object to obtain a newly added vertex comprises:
converting the geographic coordinates of all the first original vertices in the planar object into the projection coordinate system by using a spatial data conversion library, converting the projection coordinates in the projection coordinate system into the engine global coordinate system, converting the engine global coordinates in the engine global coordinate system into the engine local coordinate system, and determining the engine local coordinates of the first original vertices in the engine local coordinate system;
taking the engine global coordinate, in the engine global coordinate system, of one of the first original vertices in the planar object as the position of the planar object in the illusion engine;
performing width expansion on the first original vertices in the planar object, based on their engine local coordinates in the engine local coordinate system, in the direction perpendicular to the connecting line between adjacent first original vertices, to obtain the newly added vertices; wherein the width expanded between a first original vertex and its newly added vertex is related to the line width attribute of the vector line;
and determining the engine local coordinates of the newly added vertices corresponding to the first original vertices in the engine local coordinate system based on the engine local coordinates of the first original vertices in the engine local coordinate system.
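The width expansion of claim 9 can be pictured as offsetting each first original vertex to both sides, perpendicular to the local line direction. The sketch below assumes the expansion happens in the XY plane of the engine local coordinate system and that LineWidth comes from the vector line's width attribute; the function name is hypothetical.

    #include "CoreMinimal.h"

    // For each first original vertex, offset by half the line width on each
    // side, perpendicular to the connecting line between adjacent vertices;
    // the offset points become the newly added vertices.
    void ExpandLineWidth(const TArray<FVector>& Original, float LineWidth,
                         TArray<FVector>& Left, TArray<FVector>& Right)
    {
        for (int32 i = 0; i < Original.Num(); ++i)
        {
            const FVector& Prev = Original[FMath::Max(i - 1, 0)];
            const FVector& Next = Original[FMath::Min(i + 1, Original.Num() - 1)];
            const FVector Dir  = (Next - Prev).GetSafeNormal2D(); // local line direction in XY
            const FVector Perp = FVector(-Dir.Y, Dir.X, 0.0f);    // in-plane perpendicular
            Left.Add (Original[i] + Perp * (0.5f * LineWidth));
            Right.Add(Original[i] - Perp * (0.5f * LineWidth));
        }
    }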
10. The illusion-engine-based plane construction method of claim 3, wherein before the obtaining vector data obtained based on a remote sensing image and reading out the vector line in the vector data, the method further comprises:
obtaining a coordinate setting attribute corresponding to the engine origin, determining position information in the vector data when the coordinate setting attribute corresponds to the enable setting, and resetting the engine origin of the coordinate system based on the position information; wherein the coordinate setting attribute includes an enable setting and a disable setting.
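Claim 10 amounts to a guarded origin reset; in C++ it might look like the fragment below, where every identifier (ECoordinateSetting, ReadFirstPosition, ResetEngineOrigin, VectorData) is a placeholder for whatever the implementation actually provides.

    // Reset the engine origin to a position taken from the vector data, but
    // only when the coordinate setting attribute is set to the enable setting.
    if (CoordinateSetting == ECoordinateSetting::Enable)
    {
        const FVector Position = ReadFirstPosition(VectorData); // placeholder helper
        ResetEngineOrigin(Position);                            // placeholder helper
    }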
11. An electronic device, comprising: a memory and a processor coupled to each other, wherein the memory stores program data that the processor invokes to perform the method of any of claims 1-10.
12. A computer-readable storage medium, on which program data is stored, wherein the program data, when executed by a processor, implements the method of any one of claims 1-10.
Priority Application (1)

Application Number: CN202210837695.3A
Priority Date / Filing Date: 2022-07-15
Title: Plane construction method based on illusion engine, electronic device and storage medium
Status: Pending

Publication (1)

Publication Number: CN115409958A
Publication Date: 2022-11-29

Family ID: 84158180

Country Status (1)

CN (1) CN115409958A (en)


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination