CN116385672A - Construction method of three-dimensional terrain scene model data product - Google Patents


Info

Publication number
CN116385672A
Authority
CN
China
Prior art keywords
dimensional
file
data
texture
scene model
Prior art date
Legal status
Pending
Application number
CN202310153754.XA
Other languages
Chinese (zh)
Inventor
刘建军
高崟
刘剑炜
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by: Individual
Priority: CN202310153754.XA
Publication: CN116385672A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The application provides a method for constructing a three-dimensional terrain scene model data product, comprising the following steps: extracting three-dimensional terrain elevation information of a target area based on data such as a DEM/DSM, discrete elevation points, dense matching points and point clouds; generating a three-dimensional terrain relief model of the target area by triangular-network or polygonal-network subdivision modeling, the relief model being formed by a plurality of sequentially arranged three-dimensional patches; extracting texture data for each three-dimensional patch; and constructing a three-dimensional terrain scene model of the target area from the texture data of each three-dimensional patch. Because this construction approach represents all surface data of the target area as a large set of geometric patches, hardware requirements and computational cost are reduced while the model retains its realism, making three-dimensional real-scene modeling easier to popularize and use.

Description

Construction method of three-dimensional terrain scene model data product
Technical Field
The application relates to the technical field of modeling, in particular to a method for constructing a three-dimensional terrain scene model data product.
Background
Three-dimensional live-action modeling produces a three-dimensional model on the basis of two-dimensional geographic information; by issuing interactive operation instructions, a user can rotate, zoom and otherwise manipulate the model, and can thus observe a region in a more intuitive and realistic way.
In conventional techniques, three-dimensional live-action modeling must acquire basic data from spatio-temporal data technologies such as digital elevation models and remote sensing images, and must render a digital base scene with a professional 3D geographic information rendering engine. Rendering requires processing massive raw data such as DEMs and DOMs, so the approach places high demands on hardware, is time- and labour-intensive to process, has a high professional threshold in application, and is unsuitable for large-scale popularization and use.
Disclosure of Invention
In view of the above, an object of the present application is to provide a method for constructing a three-dimensional terrain scene model data product, so as to solve the prior-art problem that three-dimensional live-action modeling places excessively high demands on hardware.
In a first aspect, an embodiment of the present application provides a method for constructing a three-dimensional terrain scene model data product, where the method includes:
extracting three-dimensional terrain elevation information of a target region based on geographic information data; the geographic information data comprises DEM/DSM, discrete elevation points, dense matching points and point clouds;
Generating a three-dimensional terrain relief model of the target area according to a triangular network or polygonal network subdivision modeling mode; the three-dimensional topographic relief model is formed by sequentially arranging a plurality of three-dimensional patches;
respectively extracting texture data of each three-dimensional surface patch;
and constructing a three-dimensional terrain scene model of the target area according to the texture data of each three-dimensional patch.
Optionally, extracting three-dimensional terrain elevation information of the target region based on the geographic information data includes:
setting a plurality of interpolation points on the ground surface of a target area according to a preset sampling interval;
determining position data of each interpolation point in the target area; the position data comprises plane coordinates and elevation values;
three-dimensional terrain elevation information of the target region is generated based on the plane coordinates and the elevation values of all the interpolation points.
Optionally, extracting texture data of each three-dimensional patch separately includes:
determining, for each three-dimensional patch, an orthographic projection pattern of the three-dimensional patch;
respectively extracting texture data of the earth surface of a target area corresponding to each orthographic projection graph;
and carrying out three-dimensional mapping and image conversion on the texture data of the earth surface of the target area corresponding to each orthographic projection graph so as to determine the texture data of each three-dimensional patch.
Optionally, constructing a three-dimensional terrain scene model of the target area according to the texture data of each three-dimensional patch includes:
respectively establishing a mapping relation between the position data of each three-dimensional surface patch and the texture data of each three-dimensional surface patch;
determining coordinate system parameters of the digital scene model;
and constructing a three-dimensional terrain scene model of the target area based on the coordinate system parameters and the mapping relation between the position data of each three-dimensional patch and the texture data of each three-dimensional patch.
Optionally, the construction method further includes:
generating a header file of a three-dimensional terrain scene model of a target area, a position file carrying three-dimensional patch position data, a texture file carrying three-dimensional patch texture data and a mapping relation file recording the mapping relation between the position file and the texture file;
the header file, the position file, the texture file and the mapping relation file are respectively associated with the codes of the target region and then stored to obtain a three-dimensional terrain scene model file of the target region;
or, alternatively,
splitting the header file, the position file, the texture file and the mapping relation file according to the spatial position relation of the three-dimensional terrain scene model respectively to obtain the header files, the position files, the texture files and the mapping relation files of a plurality of three-dimensional digital scene sub-models, and packaging and storing the header files, the position files, the texture files and the mapping relation files of the three-dimensional digital scene sub-models respectively to obtain the three-dimensional terrain scene model files of the target region.
Optionally, the header file carries file composition information, analysis rule data, coordinate system data and metadata of a three-dimensional terrain scene model of the target region;
the position file carries the spatial position data of all three-dimensional patches in the three-dimensional terrain scene model of the target area; the three-dimensional surface patch is triangular, quadrilateral or polygonal.
Optionally, the construction method further includes:
generating a header file of a three-dimensional terrain scene model about the target region, a three-dimensional mesh file carrying mesh unit data of the three-dimensional terrain scene model, and a texture file carrying three-dimensional patch texture data;
the header file, the three-dimensional grid file and the texture file are associated with the codes of the target region and then stored to obtain a three-dimensional terrain scene model file of the target region;
or, alternatively,
splitting the header file, the three-dimensional grid file and the texture file according to the spatial position relation of the three-dimensional terrain scene model to obtain the header file, the three-dimensional grid file and the texture file of the three-dimensional digital scene sub-model, and packaging and storing the header file, the three-dimensional grid file and the texture file of the three-dimensional digital scene sub-model to obtain the three-dimensional terrain scene model file of the target area.
Optionally, the construction method further includes:
in response to the scene rendering request, repeating the following steps until all three-dimensional terrain scene models of the target area are rendered:
acquiring a coordinate range of a current window by using a rendering engine;
requesting a corresponding position file according to the coordinate range to construct a three-dimensional space skeleton of the target region in the current view;
reading a target texture file corresponding to the three-dimensional space skeleton by using a mapping relation file;
performing map rendering on the real-time three-dimensional space skeleton by utilizing the target texture file, and judging whether all rendering work of the three-dimensional terrain scene model of the target area is completed or not;
if not, the coordinate range of the current window is adjusted, and the step of re-executing is carried out to acquire the coordinate range of the current window by using the rendering engine.
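The tile-by-tile rendering loop in the optional steps above can be sketched in Python as follows; the engine object and all method names (`FakeEngine`, `window_range`, `build_skeleton`, `draw`, `advance_window`) are illustrative stand-ins, not APIs defined by the patent:

```python
class FakeEngine:
    """Minimal stand-in for a rendering engine, for illustration only."""
    def __init__(self, tiles):
        self.tiles, self.i, self.drawn = tiles, 0, []
    def window_range(self):
        # Coordinate range (here: tile id) of the current window.
        return self.tiles[self.i]
    def build_skeleton(self, position_data):
        # Build the 3D spatial skeleton from a position file's data.
        return ("skeleton", position_data)
    def draw(self, skeleton, texture):
        # Map-render the skeleton with its target texture.
        self.drawn.append((skeleton, texture))
    def advance_window(self):
        self.i += 1

def render_scene(engine, position_files, mapping, texture_store):
    # Repeat: get the current window's coordinate range, build the spatial
    # skeleton from the matching position file, look up its texture through
    # the mapping-relation file, render, and move the window on until every
    # part of the three-dimensional terrain scene model is rendered.
    rendered = set()
    while len(rendered) < len(position_files):
        tile = engine.window_range()
        skeleton = engine.build_skeleton(position_files[tile])
        engine.draw(skeleton, texture_store[mapping[tile]])
        rendered.add(tile)
        if len(rendered) < len(position_files):
            engine.advance_window()
    return rendered
```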
Optionally, the construction method further includes:
responding to the vector data superposition request, and obtaining the plane coordinates of the added object;
calculating the elevation value of the added object on the three-dimensional surface patch in the three-dimensional terrain scene model of the target area by utilizing the three-dimensional terrain scene model file of the target area and the plane coordinates of the added object;
and drawing the adding object in the three-dimensional terrain scene model of the target area based on the plane coordinates and the elevation value of the adding object.
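The elevation calculation for a draped vector object can be sketched as follows: given the triangular three-dimensional patch under the object's plane coordinates, interpolate the elevation on the patch's plane. Barycentric (plane-equation) interpolation is one plausible choice; the patent does not fix the method:

```python
def elevation_on_triangle(tri, x, y):
    """Interpolate the elevation of a plane coordinate (x, y) lying on a
    triangular 3D patch given as three (x, y, z) vertices."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    # Barycentric weights of (x, y) with respect to the three vertices.
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * z1 + w2 * z2 + w3 * z3
```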
Optionally, the construction method further includes:
responding to the raster data superposition request to acquire a target raster unit;
calculating plane coordinates and elevation values corresponding to the grid units by using a three-dimensional terrain scene model file of the target area, and drawing pixel values of the target grid units by using symbols;
and drawing the target grid unit in the three-dimensional terrain scene model of the target region according to the plane coordinates, the elevation values and the pixel values corresponding to the grid unit.
In a second aspect, an embodiment of the present application provides a construction apparatus for a three-dimensional terrain scene model data product, the construction apparatus including:
the first extraction module is used for extracting three-dimensional terrain elevation information of the target region based on the geographic information data; the geographic information data comprises DEM/DSM, discrete elevation points, dense matching points and point clouds;
the generation module is used for generating a three-dimensional terrain relief model of the target area according to a triangular network or polygonal network subdivision modeling mode; the three-dimensional topographic relief model is formed by sequentially arranging a plurality of three-dimensional patches;
the second extraction module is used for respectively extracting texture data of each three-dimensional surface patch;
and the construction module is used for constructing a three-dimensional terrain scene model of the target area according to the texture data of each three-dimensional patch.
In a third aspect, embodiments of the present application provide a computer device including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above-described method.
The method for constructing the three-dimensional terrain scene model data product comprises the steps of firstly, extracting three-dimensional terrain elevation information of a target area based on geographic information data; the geographic information data comprises DEM/DSM, discrete elevation points, dense matching points and point clouds; secondly, generating a three-dimensional terrain relief model of the target area according to a triangular network or polygonal network subdivision modeling mode; the three-dimensional topographic relief model is formed by sequentially arranging a plurality of three-dimensional patches; then, respectively extracting texture data of each three-dimensional surface patch; and finally, constructing a three-dimensional terrain scene model of the target area according to the texture data of each three-dimensional patch.
In some embodiments, the three-dimensional terrain scene model construction mode based on the three-dimensional geometric surface patch can represent all surface data of a target area by a large amount of surface data taking the geometric surface patch as a unit, so that the hardware requirement and the calculation cost are reduced under the condition that the model ensures the authenticity, and the three-dimensional real scene modeling technology is easier to popularize and use.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a method for constructing a three-dimensional terrain scene model data product according to an embodiment of the present application;
fig. 2 is a flow chart of a method for extracting three-dimensional terrain elevation information of a target area based on geographic information data according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for determining texture data of each three-dimensional patch according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for constructing a three-dimensional terrain scene model of a target area according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a device for constructing a three-dimensional terrain scene model data product according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application;
fig. 7 is a schematic flow chart of a method for constructing a three-dimensional terrain scene model data product according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
The data and software processing used by three-dimensional live-action modeling in conventional techniques is relatively complex and highly specialized, so the technique can only be implemented with professional software, which greatly hinders its popularization and use.
To address these problems, the present application provides a method for constructing a three-dimensional terrain scene model data product, together with a storage method and various application methods for the data product based on this construction method.
First, a method for constructing a three-dimensional terrain scene model data product is described, as shown in fig. 1, and the method comprises the following steps:
s101, extracting three-dimensional terrain elevation information of a target region based on geographic information data; the geographic information data comprises DEM/DSM, discrete elevation points, dense matching points and point clouds;
s102, generating a three-dimensional terrain relief model of a target area according to a triangular network or polygonal network subdivision modeling mode; the three-dimensional topographic relief model is formed by sequentially arranging a plurality of three-dimensional patches;
s103, respectively extracting texture data of each three-dimensional surface patch;
s104, constructing a three-dimensional terrain scene model of the target area according to the texture data of each three-dimensional patch.
In step S101, DEM refers to a Digital Elevation Model, and DSM refers to a Digital Surface Model; during extraction, only one of the DEM and DSM is used, but a DOM is necessary.
The three-dimensional terrain (ground/surface) elevation relief information of the target area is generally obtained by interpolation, specifically, as shown in fig. 2, step S101 may include the following steps:
s1011, setting a plurality of interpolation points on the ground surface of a target area according to a preset sampling interval;
s1012, determining position data of each interpolation point in the target area; the position data comprises plane coordinates and elevation values;
s1013, three-dimensional terrain elevation information of the target region is generated based on the plane coordinates and the elevation values of all the interpolation points.
In step S1011, the preset sampling interval refers to the distance between two adjacent interpolation points (the interpolation points are usually generated by triangulation or mesh subdivision, and the interpolation density is mainly determined by the required modeling accuracy); in other words, the sampling interval reflects the density of the interpolation points. This parameter directly affects the fineness of the finally established three-dimensional terrain scene model, so the specific interval may be determined by the user's selection. If the efficiency of the processing device is considered, reference may be made to the operating pressure of the current processing device (the execution body of the method provided by this solution): when the operating pressure is low, a smaller sampling interval may be used, whereas when the operating pressure is high, a larger sampling interval may be used (i.e. the operating pressure of the processing device is negatively correlated with the interpolation density). The magnitude of the operating pressure may be determined from the CPU occupancy and hardware parameters.
In a specific operation, several values suited to the current load can be selected from a set of candidate sampling intervals according to the operating pressure and offered to the user, the final sampling interval being fixed by the user's choice. For example, if the operating pressure is graded 1-5 from light to heavy, the sampling interval can likewise be graded into five levels A-E from dense to sparse. At pressure level 2 the interval can be dense, e.g. levels A and B may be offered for selection; at pressure level 5 the system load is heavy and an overly dense sampling interval could crash the system, so levels D and E may be offered instead.
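The graded selection just described can be sketched as follows; the five-level scale and the concrete interval values (in metres) are illustrative assumptions, not values fixed by the patent:

```python
def candidate_intervals(pressure_level, intervals=(0.5, 1.0, 2.0, 5.0, 10.0)):
    """Offer the user two adjacent sampling intervals chosen by system load.

    pressure_level: 1 (light) .. 5 (heavy). Heavier load -> coarser
    (larger) intervals are offered, so the system is not overwhelmed.
    """
    # Clamp so pressure levels 1 and 2 both map to the densest pair (A, B)
    # and level 5 maps to the sparsest pair (D, E).
    i = max(min(pressure_level - 2, len(intervals) - 2), 0)
    return intervals[i], intervals[i + 1]
```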
Then, in step S1012, the plane coordinates (x0, y0) of each interpolation point are determined, and the elevation value (h0) is extracted according to those plane coordinates from three-dimensional data containing surface elevation information for the corresponding region, such as a digital elevation model, a digital surface model, discrete elevation points, an oblique photography model or a Lidar laser point cloud, so as to construct the three-dimensional coordinates (x0, y0, h0) of the interpolation point. Once the three-dimensional coordinates of every interpolation point are determined, the three-dimensional terrain elevation of the target area is determined; in other words, the three-dimensional terrain elevation information is the set of three-dimensional coordinates of all interpolation points. In general, the position data characterizes the spatial position and morphology of the three-dimensional patches, which usually consist of a two-dimensional surface plus a normal vector, or of a true three-dimensional surface.
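The interpolation-point extraction of steps S1011-S1013 can be sketched in Python as follows; the regular-grid placement and nearest-neighbour elevation lookup are illustrative stand-ins for whichever interpolation scheme is actually chosen:

```python
def extract_elevation_points(dem, cell_size, interval):
    """Place interpolation points over the target area at a preset sampling
    interval and read an elevation h0 for each plane coordinate (x0, y0).

    dem: row-major list of rows of elevation values (a toy DEM);
    cell_size: DEM cell size; interval: preset sampling interval.
    """
    rows, cols = len(dem), len(dem[0])
    width, height = cols * cell_size, rows * cell_size
    points = []
    y = 0.0
    while y < height:
        x = 0.0
        while x < width:
            # Nearest-neighbour lookup of the elevation at (x, y).
            r = min(int(y / cell_size), rows - 1)
            c = min(int(x / cell_size), cols - 1)
            points.append((x, y, dem[r][c]))  # (x0, y0, h0)
            x += interval
        y += interval
    return points
```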
Further, in step S102, the three-dimensional terrain relief model of the target area may be generated by sampling and subdivision modeling with a triangular mesh or a polygonal mesh (a regular triangular mesh, a regular rectangular mesh or a polygonal mesh may be used). The three-dimensional terrain relief model is formed by a plurality of sequentially arranged three-dimensional patches. Specifically, a three-dimensional patch is a planar figure in three-dimensional space and may be a triangle, a quadrangle or a polygon; different three-dimensional patches neither overlap nor leave gaps between one another.
Similar to the case of interpolation points, the number and density of three-dimensional patches determine the fineness of the final three-dimensional terrain scene model, and thus, the number and/or density of three-dimensional patches may also be considered to have a negative correlation with the operating pressure.
The triangular-network subdivision model is generally known as a Triangulated Irregular Network (TIN), which approximates the terrain surface by a series of connected triangular faces that neither intersect nor overlap one another; its advantage is that it can describe the terrain surface at different levels of resolution.
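For the regular-triangular-mesh case, the subdivision into seamless, non-overlapping patches can be sketched by splitting each grid cell into two triangles (a simplified stand-in for a full TIN; vertices are indexed row-major):

```python
def grid_to_triangles(nx, ny):
    """Split an nx-by-ny grid of vertices into non-overlapping, seamless
    triangular patches, two per grid cell."""
    tris = []
    for r in range(ny - 1):
        for c in range(nx - 1):
            v = r * nx + c
            tris.append((v, v + 1, v + nx))           # upper-left triangle
            tris.append((v + 1, v + nx + 1, v + nx))  # lower-right triangle
    return tris
```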
In the present application, the texture data extracted for each three-dimensional patch may be two-dimensional texture data obtained by an existing texture-extraction method, or three-dimensional texture data obtained by the following steps. As shown in fig. 3, step S103 may specifically be divided into the following steps:
S1021, determining an orthographic projection graph of each three-dimensional patch;
s1022, respectively extracting texture data of the earth surface of the target area corresponding to each orthographic projection graph;
s1023, carrying out three-dimensional mapping and image conversion on the texture data of the earth surface of the target area corresponding to each orthographic projection graph so as to determine the texture data of each three-dimensional patch.
In step S1021, each three-dimensional patch needs to be processed separately, that is, in specific implementation, each determined three-dimensional patch needs to be processed in the manner of step S1021. The orthographic projection pattern of the three-dimensional surface patch is obtained by projecting the three-dimensional surface patch on the xy plane, and the three-dimensional surface patch may be triangular, quadrangular or polygonal in shape, so that the orthographic projection may be triangular, quadrangular or polygonal.
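Since the orthographic projection figure is obtained by projecting the patch onto the xy plane, computing it amounts to dropping the z coordinate of each vertex; a minimal sketch:

```python
def orthographic_projection(patch):
    """Project a 3D patch (list of (x, y, z) vertices) onto the xy plane,
    giving the footprint used to cut texture from ortho imagery."""
    return [(x, y) for (x, y, _z) in patch]
```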
Then, in step S1022, texture data W0 of the corresponding region is extracted, based on the extent of the orthographic projection figure, from data representing the texture of the surface coverage, such as remote sensing images, aerial images, surface-coverage data, and investigation or monitoring results for the region.
Finally, three-dimensional mapping and image conversion can be performed on the texture data W0 according to the position data of the three-dimensional patch, so as to obtain texture data W1 aligned with the three-dimensional spatial position; W1 is the final texture data of the three-dimensional patch.
Wherein, the three-dimensional mapping and image conversion of the texture data W0 may be performed according to the following formula:
x′ = a·x,  y′ = b·y,  z′ = c·z
wherein x, y and z are the texture-data coordinates before conversion, and x′, y′ and z′ are the texture-data coordinates after conversion; a, b and c are scale factors, i.e. the formula scales the x, y and z axes by factors of a, b and c respectively.
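Applied per vertex, this per-axis scaling can be sketched as:

```python
def scale_texture_coords(points, a, b, c):
    """Apply the per-axis scaling from the formula above:
    x' = a*x, y' = b*y, z' = c*z, for each (x, y, z) coordinate."""
    return [(a * x, b * y, c * z) for (x, y, z) in points]
```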
As shown in fig. 4, step S104 may be divided into the following steps:
s1041, respectively establishing a mapping relation between the position data of each three-dimensional surface patch and the texture data of each three-dimensional surface patch;
s1042, determining coordinate system parameters of a digital scene model;
s1043, constructing a three-dimensional terrain scene model of the target area based on the coordinate system parameters and the mapping relation between the position data of each three-dimensional patch and the texture data of each three-dimensional patch.
Because the position data and the texture data are both attribute parameters describing the three-dimensional patch, the two are required to be associated before storing, that is, in step S1041, a mapping relationship between the position data and the texture data of the three-dimensional patch is established, and in fact, the mapping relationship may be expressed as: the association relation between the serial number of the three-dimensional surface patch (identity information of the three-dimensional surface patch) and the position data and texture data of the three-dimensional surface patch is obtained so as to clearly express the object described by the position data and the texture data.
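The mapping relation of step S1041 — associating each patch's serial number with its position data and texture data — can be sketched as a small index structure (the class and method names are illustrative):

```python
class PatchIndex:
    """Associate each three-dimensional patch's serial number (its identity
    information) with its position data and its texture data."""
    def __init__(self):
        self.positions = {}
        self.textures = {}

    def add(self, patch_id, position, texture):
        self.positions[patch_id] = position
        self.textures[patch_id] = texture

    def lookup(self, patch_id):
        # Return the (position data, texture data) described by this patch id.
        return self.positions[patch_id], self.textures[patch_id]
```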
After that, in step S1042, the three-dimensional geographic coordinate origin of the scene and the directions of the X, Y and Z coordinate axes need to be defined; the main purpose of this is to define the metric standard. Once the coordinate origin is determined, position data in the world coordinate system can be converted into the coordinate system defined in this step. In general, the origin is constructed from a characteristic vertex of the scene, such as the north-east or south-west corner of a mesh scene, or the inflection point of a typical landmark.
Finally, a three-dimensional terrain scene model of the target area can be constructed from the position data and texture parameters of the three-dimensional patches and their mapping relation, under the coordinate system established in step S1042. This way of constructing the model converts the originally complex construction process into the construction of a large number of patches, saving system resources, so that the technique makes no excessive demands on hardware.
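Once the coordinate origin is fixed, converting world coordinates into the scene's coordinate system is a simple translation; a minimal sketch:

```python
def to_scene_frame(points, origin):
    """Re-express world (x, y, z) coordinates relative to the scene's
    coordinate origin, e.g. the south-west corner vertex of the mesh scene."""
    ox, oy, oz = origin
    return [(x - ox, y - oy, z - oz) for (x, y, z) in points]
```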
The foregoing provides a method for constructing a three-dimensional terrain scene model of a target area, and the present invention provides a storage method and an application method in addition to the construction method, and the storage method is described below. Specifically, there are two implementations of the storage method, one is three-dimensional patch storage, and one is three-dimensional grid storage, and the three-dimensional patch storage is first described below.
Specifically, there are two types of storage methods of three-dimensional patches:
first, this scheme still includes the following step:
generating a header file of a three-dimensional terrain scene model of a target area, a position file carrying three-dimensional patch position data, a texture file carrying three-dimensional patch texture data and a mapping relation file recording the mapping relation between the position file and the texture file;
the header file, the position file, the texture file and the mapping relation file are respectively associated with the codes of the target region and then stored to obtain a three-dimensional terrain scene model file of the target region;
second, the present solution further includes the following steps:
generating a header file of a three-dimensional terrain scene model of a target area, a position file carrying three-dimensional patch position data, a texture file carrying three-dimensional patch texture data and a mapping relation file recording the mapping relation between the position file and the texture file;
splitting the header file, the position file, the texture file and the mapping relation file according to the spatial position relation of the three-dimensional terrain scene model respectively to obtain the header files, the position files, the texture files and the mapping relation files of a plurality of three-dimensional digital scene sub-models, and packaging and storing the header files, the position files, the texture files and the mapping relation files of the three-dimensional digital scene sub-models respectively to obtain the three-dimensional terrain scene model files of the target region.
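The split-and-package step of the second scheme can be sketched as below; the file names and the JSON/zip packaging are illustrative assumptions, as the patent does not prescribe these formats:

```python
import io
import json
import zipfile

def package_submodel(tile_id, header, positions, textures, mapping):
    # Bundle one spatial sub-model's header, position, texture and mapping
    # files into a single package, one package per tile.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr(f"{tile_id}/header.json", json.dumps(header))
        z.writestr(f"{tile_id}/positions.bin", positions)
        z.writestr(f"{tile_id}/textures.bin", textures)
        z.writestr(f"{tile_id}/mapping.json", json.dumps(mapping))
    return buf.getvalue()
```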
In the above schemes, the header file records global basic information of the model, including the file composition (an index), parsing rules (how to read the files), the coordinate system, and metadata. Metadata is data that describes other data; here it covers, for example, the production date of the data, the data source, precision, producer, and technical method.
Comparing the two modes, the main difference is that one stores the files independently, so that one three-dimensional terrain scene model corresponds to several subfiles, while the other stores the files in combination, packaging the sorted header file, position file, texture file and mapping relation file together.
Specifically, when storing, a three-dimensional affine transformation may first be performed in the manner described above and the resulting three-dimensional texture stored, or the two-dimensional texture data may be stored directly without the transformation. Likewise, the texture data of each three-dimensional patch may be stored independently, or the texture data of several patches may be packaged together, i.e. one texture file may hold the texture data of multiple three-dimensional patches.
Specifically, two storage modes of the three-dimensional grid are respectively:
In the first,
generating a header file of a three-dimensional terrain scene model about the target region, a three-dimensional mesh file carrying mesh unit data of the three-dimensional terrain scene model, and a texture file carrying three-dimensional patch texture data;
the header file, the three-dimensional grid file and the texture file are associated with the code of the target region and then stored to obtain the three-dimensional terrain scene model file of the target region; files of different types are stored separately, for example, all header files are packaged together, all three-dimensional grid files are packaged together, and all texture files are packaged together.
In the second,
generating a header file of a three-dimensional terrain scene model about the target region, a three-dimensional mesh file carrying mesh unit data of the three-dimensional terrain scene model, and a texture file carrying three-dimensional patch texture data;
splitting the header file, the three-dimensional grid file and the texture file according to the spatial position relation of the three-dimensional terrain scene model to obtain the header file, the three-dimensional grid file and the texture file of the three-dimensional digital scene sub-model, and packaging and storing the header file, the three-dimensional grid file and the texture file of the three-dimensional digital scene sub-model to obtain the three-dimensional terrain scene model file of the target area.
It can be seen that the main difference between three-dimensional patch storage and three-dimensional grid storage is that one is storage in patch units and one is storage in grid units. Storing in units of grids does not require a mapping file.
The three-dimensional grid file is composed of the grid cell data of regular slices, and the regular slices follow a grid-division standard such as OGC WMTS. Each grid cell stores the surface's three-dimensional information as an irregular triangular network formed from vertices and topological rules, where the vertices are the vertex set of all irregular triangles and the topological rules define the topological composition relationship between each triangle and its vertices.
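A minimal sketch of one grid cell's triangular-network data as just described, a shared vertex set plus topology triples that index into it; the flat Python structures are illustrative:

```python
# Vertex set of all irregular triangles in one grid cell: (x, y, z) tuples.
vertices = [(0.0, 0.0, 10.0), (1.0, 0.0, 10.5), (1.0, 1.0, 11.2), (0.0, 1.0, 11.0)]

# Topological rules: each triple lists the vertex indices of one triangle,
# so adjacent triangles share vertices instead of duplicating them.
triangles = [(0, 1, 3), (1, 2, 3)]

def triangle_coords(tri_index):
    # Resolve one triangle's topology triple back to its coordinates.
    return [vertices[i] for i in triangles[tri_index]]
```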
Likewise, the two grid-based storage modes differ mainly in that one stores files independently and the other stores them in combination. For grid-based storage, the file structure, parsing rules, coordinate system and metadata are the same as for patch-based storage and are not repeated here.
Finally, the present application also provides several exemplary modes of application:
The first application is exchange and sharing. Its main purpose is to share the constructed three-dimensional terrain scene model file with other network ends. Specifically, the packaged three-dimensional terrain scene model files mentioned above (including the header file, position file, texture file, mapping relation file, etc.) may be shared with other network ends.
The second application is parsing and reading. As noted above, the header file carries the parsing rule data, so the stored three-dimensional terrain scene model file can be parsed using those rules to recover the three-dimensional terrain scene model.
A third application, scene rendering, specifically comprises the steps of:
in step 1001, in response to the scene rendering request, the following steps are repeatedly performed until all rendering of the three-dimensional terrain scene model of the target area is completed:
step 1002, obtaining a coordinate range of a current window by using a rendering engine;
step 1003, requesting a corresponding position file according to the coordinate range to construct a three-dimensional space skeleton of the target region in the current view;
step 1004, reading a target texture file corresponding to the three-dimensional space skeleton by using a mapping relation file;
step 1005, performing map rendering on the real-time three-dimensional space skeleton by using the target texture file, and judging whether all rendering work of the three-dimensional terrain scene model of the target area is completed or not;
if the judgment in step 1005 is no, the coordinate range of the current window is adjusted, and the process returns to the step of obtaining the coordinate range of the current window with the rendering engine.
That is, rendering proceeds window position by window position: first the current window position is determined, then a three-dimensional spatial skeleton is constructed from the three-dimensional patches observable in that window, and the skeleton is map-rendered with the corresponding texture files. The window position is then moved and step 1002 is re-executed.
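The loop of steps 1001-1005 can be sketched as a toy, self-contained routine; the dictionary-based stand-ins for the position file, mapping file and texture files are assumptions for illustration only:

```python
def render_scene(windows, position_file, mapping_file, texture_files):
    # windows: the successive window coordinate ranges (step 1002).
    # position_file: patch_id -> the window range containing that patch.
    # Returns the (patch_id, texture) pairs "drawn", window by window.
    drawn = []
    for bounds in windows:
        # Step 1003: patches whose position data fall in the current window.
        skeleton = [pid for pid, rng in position_file.items() if rng == bounds]
        for pid in skeleton:
            # Step 1004: read the target texture through the mapping file.
            tex = texture_files[mapping_file[pid]]
            # Step 1005: map-render the skeleton (recorded here, not drawn).
            drawn.append((pid, tex))
    return drawn
```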
The fourth application is scene analysis, i.e. operating on the constructed three-dimensional terrain scene model with irregular triangular networks, irregular quadrilateral networks and similar structures. Supported operations include querying the coordinates of any point, measuring the spatial distance between any two points, and measuring distance and height differences; based on the irregular triangular and quadrilateral networks, contour lines, gradient bands and the like can be computed for any scene extent, and occlusion analysis, viewshed analysis, skyline analysis and the like are supported; given a light-source position, illumination analysis, shadow analysis and the like are supported.
The fifth application is overlay. Data are superimposed on the basis of the constructed three-dimensional terrain scene model; the superimposed data are of two types, one being vector data and the other raster data.
The step of superimposing the vector data is as follows:
responding to the vector data superposition request, and obtaining the plane coordinates of the added object;
calculating the elevation value of the added object on the three-dimensional surface patch in the three-dimensional terrain scene model of the target area by utilizing the three-dimensional terrain scene model file of the target area and the plane coordinates of the added object;
and drawing the adding object in the three-dimensional terrain scene model of the target area based on the plane coordinates and the elevation value of the adding object.
The vector data may be two-dimensional points, two-dimensional lines, two-dimensional surfaces, three-dimensional volumes, and the like. Specifically, when two-dimensional vector point data are overlaid, the two-dimensional coordinates (x, y) of each point are read first, the elevation z of the triangular patch surface corresponding to those coordinates is calculated in the digital scene model, and a point symbol is rendered at (x, y, z), completing the two-dimensional vector point overlay;
when two-dimensional vector line data are overlaid, the two-dimensional coordinate string (x1, y1), (x2, y2) ... (xn, yn) is read in order, the corresponding patch surface elevations z1, z2 ... zn are calculated in the digital scene model, the line nodes are drawn in order at (x1, y1, z1), (x2, y2, z2) ... (xn, yn, zn), the connecting lines between nodes are drawn in order, and the two-dimensional line overlay is complete;
when two-dimensional vector surface data are overlaid, the polygons are read in order; for each polygon, the polygon nodes (x1, y1), (x2, y2) ... (xn, yn) are read, the patch surface elevations z1, z2 ... zn corresponding to the coordinate string are calculated in the digital scene model, the nodes are drawn in order at (x1, y1, z1), (x2, y2, z2) ... (xn, yn, zn), the start and end points are connected, and the surface is filled by drawing it with a single-color or texture symbol, completing the two-dimensional surface overlay;
when vector three-dimensional volume data are overlaid, the faces of the three-dimensional volume are selected in order; for each face, the face nodes (x1, y1, z1), (x2, y2, z2) ... (xn, yn, zn) are read and drawn in order, the connecting lines between nodes are drawn in order, the start and end points are connected, and the face is filled by drawing it with a single-color or texture symbol, completing the three-dimensional volume overlay.
After overlay, vector data exist in the scene as layers; the stacking order of all layers can be adjusted, with the rendering result updated in the scene. Overlaid vector data support interactive selection and query in the scene, feeding back the attribute information of the selected vector layer, and support geometric calculation.
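The elevation step shared by the point, line and surface overlays, calculating the patch surface elevation z for a planar coordinate (x, y), can be sketched as barycentric interpolation over the containing triangular patch; the function name is an illustrative assumption:

```python
def patch_elevation(x, y, tri):
    # tri: the three (x, y, z) vertices of the triangular patch containing
    # the query point. Interpolate z with barycentric weights.
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * z1 + w2 * z2 + w3 * z3
```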
The step of superimposing raster data is as follows:
the grid cells (r, c) are read in row-column order, the geographic coordinates (x, y) corresponding to each cell (r, c) are calculated, the corresponding elevation z is then calculated, and the pixel value is drawn with a symbol at (x, y, z), completing the raster overlay; after overlay, raster data exist in the scene as raster layers whose stacking order can be adjusted, with the rendering result updated in the scene; overlaid raster data support interactive selection and query in the scene, feeding back the value of the selected raster pixel, and support geometric calculation.
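The row-column to geographic-coordinate step of the raster overlay can be sketched as follows; the north-up grid with a single cell size and the cell-center convention are assumptions:

```python
def cell_to_geo(r, c, x_origin, y_origin, cell_size):
    # Map grid cell (row r, column c) to the geographic coordinates of its
    # center; rows are assumed to increase southward from a top-left origin.
    x = x_origin + (c + 0.5) * cell_size
    y = y_origin - (r + 0.5) * cell_size
    return x, y
```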
Overall, the value of this solution is mainly:
(1) Data are integrated and encapsulated, independent of professional software, with a low threshold for use: the digital scene model based on three-dimensional geometric patches and textures gives an integrated, realistic and intuitive expression of the three-dimensional shape and surface-cover characteristics of the earth's surface, solving problems of traditional three-dimensional terrain landscapes such as a high threshold for use (reliance on third-party professional software), high hardware requirements (reliance on computer hardware for real-time information extraction and rendering), and difficulty of sharing and application;
(2) Data pre-rendering gives low dependence on hardware and eases large-scale popularization and application: the data are stored directly as three-dimensional patches and texture data, so data use and rendering are efficient, the computational cost is low, efficient browsing of large-scale geographic scenes is supported, and scene rendering and application on mobile terminals are facilitated. In general, all data needed for rendering in this scheme (such as the patches' position data and texture data) are constructed and stored in advance, and the rendering engine renders them directly; rendering in the traditional technique must compute the spatial frame and texture data of the rendered scene in real time from basic data before drawing. The method here omits that rendering-data computation and is therefore referred to as pre-rendering.
(3) The data are inherently desensitized, easing sharing and application in various network environments: the data are convenient to share and exchange, avoid exchanging classified deliverables such as the original DEM and DOM, solve the confidentiality problem of traditional geospatial data to a certain extent, and benefit public services;
(4) Overlay analysis of various data is naturally supported, enabling flexible application: multi-source geospatial data can be overlaid on the basis of the geographic scene, supporting lightweight scene applications.
Next, as shown in FIG. 7, a representative implementation flow of the core steps of the method for constructing a three-dimensional terrain scene model data product is provided, comprising:
S701, extracting three-dimensional terrain elevation information (discrete true three-dimensional points) of a target area based on 2.5-dimensional data (DEM/DSM, discrete elevation points, dense matching points, point clouds and the like);
S702, generating a three-dimensional terrain relief model of the target area by triangular-network or polygonal-network subdivision modeling; the three-dimensional terrain relief model is formed by sequentially arranging a plurality of three-dimensional patches (each generally formed by a two-dimensional surface plus a normal vector, or by a true three-dimensional surface);
S703, respectively extracting the texture data of each three-dimensional patch, generally derived from remote sensing images, aerial images, earth surface coverage, and survey and monitoring results;
S704, constructing the three-dimensional terrain scene model data product of the target area (a true three-dimensional model with continuous, seamless coverage) from the texture data of each three-dimensional patch.
The embodiment of the application provides a construction device of a three-dimensional terrain scene model data product, as shown in fig. 5, the construction device comprises:
a first extraction module 501, configured to extract three-dimensional terrain elevation information of a target region based on geographic information data; the geographic information data comprises DEM/DSM, discrete elevation points, dense matching points and point clouds;
the generating module 502 is configured to generate a three-dimensional terrain relief model of the target area according to a triangle mesh or polygonal mesh subdivision modeling mode; the three-dimensional topographic relief model is formed by sequentially arranging a plurality of three-dimensional patches;
a second extraction module 503, configured to extract texture data of each three-dimensional patch respectively;
a construction module 504 is configured to construct a three-dimensional terrain scene model of the target region according to the texture data of each three-dimensional patch.
Optionally, the first extraction module includes:
the setting unit is used for setting a plurality of interpolation points on the ground surface of the target area according to a preset sampling interval;
a first determining unit configured to determine position data of each interpolation point in the target region; the position data comprises plane coordinates and elevation values;
And the first generation unit is used for generating three-dimensional terrain elevation information of the target area based on the plane coordinates and the elevation values of all the interpolation points.
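The setting and generation units above can be sketched together; `sample_dem` stands in for whatever elevation source (DEM/DSM, point cloud, etc.) supplies the elevation value and is an assumed stub:

```python
def elevation_grid(x0, y0, nx, ny, step, sample_dem):
    # Place nx * ny interpolation points at the preset sampling interval and
    # attach each point's elevation, yielding the three-dimensional terrain
    # elevation information as (x, y, z) triples.
    points = []
    for i in range(nx):
        for j in range(ny):
            x, y = x0 + i * step, y0 + j * step
            points.append((x, y, sample_dem(x, y)))
    return points
```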
Optionally, the second extraction module includes:
a second determining unit configured to determine, for each three-dimensional patch, an orthographic projection pattern of the three-dimensional patch;
a third determining unit, configured to extract texture data of a surface of a target area corresponding to each orthographic projection pattern;
and a fourth determining unit, configured to perform three-dimensional mapping and image conversion on texture data of a surface of the target area corresponding to each orthographic projection graph, so as to determine texture data of each three-dimensional patch.
Optionally, the building module includes:
the building unit is used for respectively building the mapping relation between the position data of each three-dimensional surface patch and the texture data of each three-dimensional surface patch;
a fifth determining unit for determining coordinate system parameters of the digital scene model;
the first construction unit is used for constructing a three-dimensional terrain scene model of the target area based on the coordinate system parameters and the mapping relation between the position data of each three-dimensional patch and the texture data of each three-dimensional patch.
Optionally, the building device further includes:
A second generating unit for generating a header file of a three-dimensional terrain scene model about the target region, a position file carrying three-dimensional patch position data, a texture file carrying three-dimensional patch texture data, and a mapping relation file recording mapping relation between the position file and the texture file;
the first storage unit is used for respectively associating the header file, the position file, the texture file and the mapping relation file with the codes of the target region and then storing the associated header file, the position file, the texture file and the mapping relation file to obtain a three-dimensional terrain scene model file of the target region;
or, alternatively,
and the second storage unit is used for respectively splitting the header file, the position file, the texture file and the mapping relation file according to the spatial position relation of the three-dimensional terrain scene model to obtain the header file, the position file, the texture file and the mapping relation file of the three-dimensional digital scene sub-model, and respectively packaging and storing the header file, the position file, the texture file and the mapping relation file of the three-dimensional digital scene sub-model to obtain the three-dimensional terrain scene model file of the target area.
Optionally, the header file carries file composition information, analysis rule data, coordinate system data and metadata of a three-dimensional terrain scene model of the target region;
The position file carries the spatial position data of all three-dimensional patches in the three-dimensional terrain scene model of the target area; the three-dimensional surface patch is triangular, quadrilateral or polygonal.
Optionally, the building device further includes:
a third generation unit for generating a header file of a three-dimensional terrain scene model concerning the target region, a three-dimensional mesh file carrying mesh unit data of the three-dimensional terrain scene model, and a texture file carrying three-dimensional patch texture data;
the third storage unit is used for storing the head file, the three-dimensional grid file and the texture file after being associated with the codes of the target region to obtain a three-dimensional terrain scene model file of the target region;
or, alternatively,
and the fourth storage unit is used for splitting the header files, the three-dimensional grid files and the texture files according to the spatial position relation of the three-dimensional terrain scene model respectively to obtain the header files, the three-dimensional grid files and the texture files of the three-dimensional digital scene sub-models, and then packaging and storing the header files, the three-dimensional grid files and the texture files of the three-dimensional digital scene sub-models respectively to obtain the three-dimensional terrain scene model files of the target region.
Optionally, the building device further includes:
the rendering module includes:
the acquisition unit is used for acquiring the coordinate range of the current window by using the rendering engine;
the second construction unit is used for requesting a corresponding position file according to the coordinate range so as to construct a three-dimensional space skeleton of the target region in the current view;
the reading unit is used for reading the target texture file corresponding to the three-dimensional space skeleton by using the mapping relation file;
the judging unit is used for carrying out mapping rendering on the real-time three-dimensional space skeleton by utilizing the target texture file and judging whether all rendering work of the three-dimensional terrain scene model of the target area is completed or not;
and the adjusting unit is used for adjusting the coordinate range of the current window if not, and re-executing the step to acquire the coordinate range of the current window by using the rendering engine.
Optionally, the building device further includes:
the first superposition module is used for responding to the vector data superposition request and acquiring the plane coordinates of the added object;
the first calculation module is used for calculating the elevation value of the added object on the three-dimensional surface patch in the three-dimensional terrain scene model of the target area by utilizing the three-dimensional terrain scene model file of the target area and the plane coordinates of the added object;
And the first drawing module is used for drawing the adding object in the three-dimensional terrain scene model of the target area based on the plane coordinates and the elevation value of the adding object.
Optionally, the building device further includes:
the second superposition module is used for responding to the raster data superposition request and acquiring a target raster unit;
the second calculation module is used for calculating plane coordinates and elevation values corresponding to the grid units by using the three-dimensional terrain scene model file of the target area and drawing pixel values of the target grid units by using the symbols;
and the second drawing module is used for drawing the target grid unit in the three-dimensional terrain scene model of the target area according to the plane coordinates, the elevation values and the pixel values corresponding to the grid unit.
Corresponding to the method for constructing the three-dimensional terrain scene model data product in fig. 1, the embodiment of the application further provides a computer device 600, as shown in fig. 6, where the device includes a memory 601, a processor 602, and a computer program stored in the memory 601 and capable of running on the processor 602, where the method for constructing the three-dimensional terrain scene model data product is implemented when the processor 602 executes the computer program.
Specifically, the above memory 601 and the processor 602 can be general-purpose memories and processors, which are not limited herein, and when the processor 602 runs the computer program stored in the memory 601, the above method for constructing the three-dimensional terrain scene model data product can be executed, so as to solve the problem in the prior art that the requirement on hardware is too high when three-dimensional real scene modeling is performed.
Corresponding to the method for constructing the three-dimensional terrain scene model data product in fig. 1, the embodiment of the application further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, performs the steps of the method for constructing the three-dimensional terrain scene model data product.
Specifically, the storage medium can be a general storage medium, such as a mobile disk, a hard disk and the like, and when the computer program on the storage medium is run, the method for constructing the three-dimensional terrain scene model data product can be executed, so that the problem that hardware requirements are too high when three-dimensional real scene modeling is carried out in the prior art is solved.
In the embodiments provided in the present application, it should be understood that the disclosed methods and apparatuses may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments provided in the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It should be noted that: like reference numerals and letters in the following figures denote like items, and thus once an item is defined in one figure, no further definition or explanation of it is required in the following figures, and furthermore, the terms "first," "second," "third," etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present application, intended to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the application has been described in detail with reference to the foregoing embodiments, any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the corresponding technical solutions and are intended to be encompassed within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for constructing a three-dimensional terrain scene model data product, the method comprising:
extracting three-dimensional terrain elevation information of a target region based on geographic information data; the geographic information data comprises DEM/DSM, discrete elevation points, dense matching points and point clouds;
Generating a three-dimensional terrain relief model of the target area according to a triangular network or polygonal network subdivision modeling mode; the three-dimensional topographic relief model is formed by sequentially arranging a plurality of three-dimensional patches;
respectively extracting texture data of each three-dimensional surface patch;
and constructing a three-dimensional terrain scene model of the target area according to the texture data of each three-dimensional patch.
2. The construction method according to claim 1, wherein extracting three-dimensional terrain elevation information of the target region based on the geographic information data comprises:
setting a plurality of interpolation points on the ground surface of a target area according to a preset sampling interval;
determining position data of each interpolation point in the target area; the position data comprises plane coordinates and elevation values;
three-dimensional terrain elevation information of the target region is generated based on the plane coordinates and the elevation values of all the interpolation points.
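A minimal sketch of claim 2, assuming a caller-supplied elevation sampler (the name `elevation_at` and the simple nested-loop grid are assumptions): interpolation points are placed at a preset sampling interval, and the plane coordinates and elevation value of each point are recorded.

```python
from typing import Callable, List, Tuple

Point3D = Tuple[float, float, float]  # (x, y, elevation)

def extract_elevation_info(x_min: float, y_min: float, x_max: float, y_max: float,
                           interval: float,
                           elevation_at: Callable[[float, float], float]) -> List[Point3D]:
    """Set interpolation points at a preset sampling interval and record the
    position data (plane coordinates and elevation value) of each point."""
    points: List[Point3D] = []
    y = y_min
    while y <= y_max:
        x = x_min
        while x <= x_max:
            points.append((x, y, elevation_at(x, y)))
            x += interval
        y += interval
    return points

# A flat 10 m surface sampled every 5 m over a 10 m x 10 m area: 3 x 3 points.
info = extract_elevation_info(0, 0, 10, 10, 5, lambda x, y: 10.0)
print(len(info))  # → 9
```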
3. The construction method according to claim 1, wherein extracting the texture data of each three-dimensional patch separately comprises:
determining, for each three-dimensional patch, an orthographic projection figure of the three-dimensional patch;
extracting texture data of the earth surface of the target region corresponding to each orthographic projection figure;
and performing three-dimensional mapping and image conversion on the texture data of the earth surface corresponding to each orthographic projection figure, so as to determine the texture data of each three-dimensional patch.
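One way to picture claim 3 (all names and the cell-lookup texture model here are illustrative assumptions; a real implementation would resample an orthophoto): the orthographic projection of a patch is its (x, y) footprint with the elevation dropped, and the surface texture is then looked up over that footprint.

```python
# Hypothetical sketch: orthographic projection drops the elevation
# component; texture data is then sampled over the projected footprint.
def orthographic_footprint(patch):
    """patch: list of (x, y, z) vertices -> their (x, y) orthographic projection."""
    return [(x, y) for x, y, _z in patch]

def sample_texture(image, footprint):
    """image: dict mapping integer (col, row) cells to pixel values;
    returns the pixels at the footprint's vertex cells."""
    return [image[(int(x), int(y))] for x, y in footprint]

surface_image = {(0, 0): "g0", (1, 0): "g1", (0, 1): "g2"}
patch = [(0.0, 0.0, 12.5), (1.0, 0.0, 13.0), (0.0, 1.0, 12.8)]
print(sample_texture(surface_image, orthographic_footprint(patch)))  # → ['g0', 'g1', 'g2']
```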
4. The construction method according to claim 1, wherein constructing a three-dimensional terrain scene model of the target region from the texture data of each three-dimensional patch comprises:
establishing, for each three-dimensional patch, a mapping relation between the position data of the patch and the texture data of the patch;
determining coordinate system parameters of the digital scene model;
and constructing a three-dimensional terrain scene model of the target region based on the coordinate system parameters and the mapping relation between the position data and the texture data of each three-dimensional patch.
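The pairing in claim 4 can be sketched as a one-to-one mapping between patch positions and patch textures, stored alongside the coordinate system parameters (the dictionary layout and the default `EPSG:4326` code are assumptions for illustration):

```python
def build_scene_model(patch_positions, patch_textures, crs="EPSG:4326"):
    """patch_positions and patch_textures are parallel lists indexed by
    patch id; the mapping relation links each position to its texture."""
    if len(patch_positions) != len(patch_textures):
        raise ValueError("every three-dimensional patch needs both position and texture data")
    mapping = {i: i for i in range(len(patch_positions))}  # patch id -> texture id
    return {
        "crs": crs,                    # coordinate system parameters
        "positions": patch_positions,  # per-patch vertex coordinates
        "textures": patch_textures,    # per-patch texture data
        "mapping": mapping,            # mapping relation between the two
    }

model = build_scene_model([[(0, 0, 0), (1, 0, 0), (0, 1, 0)]], ["tex_0.png"])
print(model["mapping"])  # → {0: 0}
```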
5. The construction method according to claim 1, characterized in that the construction method further comprises:
generating a header file of the three-dimensional terrain scene model of the target region, a position file carrying three-dimensional patch position data, a texture file carrying three-dimensional patch texture data, and a mapping relation file recording the mapping relation between the position file and the texture file;
storing the header file, the position file, the texture file and the mapping relation file in association with the code of the target region, so as to obtain a three-dimensional terrain scene model file of the target region;
or alternatively,
splitting the header file, the position file, the texture file and the mapping relation file according to the spatial position relation of the three-dimensional terrain scene model, so as to obtain header files, position files, texture files and mapping relation files of a plurality of three-dimensional digital scene sub-models; and packaging and storing the header file, position file, texture file and mapping relation file of each three-dimensional digital scene sub-model, so as to obtain the three-dimensional terrain scene model files of the target region.
6. The construction method according to claim 5, wherein the header file carries file configuration information, analysis rule data, coordinate system data, and metadata of a three-dimensional terrain scene model of the target region;
the position file carries the spatial position data of all three-dimensional patches in the three-dimensional terrain scene model of the target area; the three-dimensional surface patch is triangular, quadrilateral or polygonal.
7. The construction method according to claim 1, characterized in that the construction method further comprises:
generating a header file of a three-dimensional terrain scene model about the target region, a three-dimensional mesh file carrying mesh unit data of the three-dimensional terrain scene model, and a texture file carrying three-dimensional patch texture data;
storing the header file, the three-dimensional grid file and the texture file in association with the code of the target region, so as to obtain a three-dimensional terrain scene model file of the target region;
or alternatively,
splitting the header file, the three-dimensional grid file and the texture file according to the spatial position relation of the three-dimensional terrain scene model, so as to obtain header files, three-dimensional grid files and texture files of a plurality of three-dimensional digital scene sub-models; and packaging and storing the header file, three-dimensional grid file and texture file of each sub-model, so as to obtain the three-dimensional terrain scene model files of the target region.
8. The construction method according to claim 5 or claim 7, further comprising:
in response to the scene rendering request, repeating the following steps until all three-dimensional terrain scene models of the target area are rendered:
acquiring a coordinate range of a current window by using a rendering engine;
requesting a corresponding position file according to the coordinate range to construct a three-dimensional space skeleton of the target region in the current view;
reading a target texture file corresponding to the three-dimensional space skeleton by using a mapping relation file;
performing map rendering on the three-dimensional space skeleton in real time by using the target texture file, and judging whether the rendering of the three-dimensional terrain scene model of the target region has been completed in full;
if not, adjusting the coordinate range of the current window and returning to the step of acquiring the coordinate range of the current window by using the rendering engine.
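The rendering loop of claim 8 can be sketched as follows; here the window-adjusting while-loop is recast as iteration over a known list of coordinate ranges, and the position-file request, mapping lookup, and draw callback are stand-ins rather than a real rendering-engine API:

```python
def render_scene(window_extents, position_files, mapping_file, texture_files, draw):
    """For each window coordinate range: request the position file to build
    the three-dimensional spatial skeleton, look up its texture via the
    mapping-relation file, and render, until the whole scene is covered."""
    rendered = []
    for extent in window_extents:                      # coordinate range of current window
        skeleton = position_files[extent]              # position file -> 3D space skeleton
        texture = texture_files[mapping_file[extent]]  # target texture via mapping file
        draw(skeleton, texture)                        # map rendering of the skeleton
        rendered.append(extent)                        # then move on to the next window
    return rendered

calls = []
done = render_scene(["t1", "t2"],
                    {"t1": "skel1", "t2": "skel2"},
                    {"t1": "a", "t2": "b"},
                    {"a": "texA", "b": "texB"},
                    lambda s, t: calls.append((s, t)))
print(done)  # → ['t1', 't2']
```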
9. The construction method according to claim 5, characterized in that the construction method further comprises:
responding to the vector data superposition request, and obtaining the plane coordinates of the added object;
calculating the elevation value of the added object on the three-dimensional surface patch in the three-dimensional terrain scene model of the target area by utilizing the three-dimensional terrain scene model file of the target area and the plane coordinates of the added object;
and drawing the added object in the three-dimensional terrain scene model of the target region based on the plane coordinates and the elevation value of the added object.
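For triangular patches, the elevation calculation in claim 9 amounts to interpolating the patch's vertex elevations at the added object's plane coordinates; a barycentric-coordinate sketch (an assumed implementation strategy, not taken from the application) is:

```python
def elevation_on_triangle(p, a, b, c):
    """p: (x, y) plane coordinates of the added object; a, b, c: (x, y, z)
    vertices of the three-dimensional patch containing p. Returns the
    barycentric interpolation of the vertex elevations at p."""
    (x, y), (ax, ay, az), (bx, by, bz), (cx, cy, cz) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (x - cx) + (cx - bx) * (y - cy)) / det
    w2 = ((cy - ay) * (x - cx) + (ax - cx) * (y - cy)) / det
    w3 = 1.0 - w1 - w2
    return w1 * az + w2 * bz + w3 * cz

# Midpoint of the edge between vertices with elevations 3 and 6 -> 4.5.
print(elevation_on_triangle((0.5, 0.5), (0, 0, 0), (1, 0, 3), (0, 1, 6)))  # → 4.5
```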
10. The construction method according to claim 7, characterized in that the construction method further comprises:
responding to a raster data superposition request to acquire a target raster unit;
calculating the plane coordinates and the elevation value corresponding to the target raster unit by using the three-dimensional terrain scene model file of the target region, and determining the pixel value of the target raster unit for symbolized drawing;
and drawing the target raster unit in the three-dimensional terrain scene model of the target region according to the plane coordinates, the elevation value and the pixel value corresponding to the target raster unit.
CN202310153754.XA 2023-02-13 2023-02-13 Construction method of three-dimensional terrain scene model data product Pending CN116385672A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310153754.XA CN116385672A (en) 2023-02-13 2023-02-13 Construction method of three-dimensional terrain scene model data product


Publications (1)

Publication Number Publication Date
CN116385672A true CN116385672A (en) 2023-07-04

Family

ID=86962350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310153754.XA Pending CN116385672A (en) 2023-02-13 2023-02-13 Construction method of three-dimensional terrain scene model data product

Country Status (1)

Country Link
CN (1) CN116385672A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117197382A (en) * 2023-11-02 2023-12-08 广东省测绘产品质量监督检验中心 Live-action three-dimensional data construction method and device
CN117197382B (en) * 2023-11-02 2024-01-12 广东省测绘产品质量监督检验中心 Live-action three-dimensional data construction method and device
CN117609401A (en) * 2024-01-19 2024-02-27 贵州北斗空间信息技术有限公司 White mold visual display method, device and system in three-dimensional terrain scene
CN117609401B (en) * 2024-01-19 2024-04-09 贵州北斗空间信息技术有限公司 White mold visual display method, device and system in three-dimensional terrain scene
CN117611781A (en) * 2024-01-23 2024-02-27 埃洛克航空科技(北京)有限公司 Flattening method and device for live-action three-dimensional model
CN117611781B (en) * 2024-01-23 2024-04-26 埃洛克航空科技(北京)有限公司 Flattening method and device for live-action three-dimensional model

Similar Documents

Publication Publication Date Title
CN116385672A (en) Construction method of three-dimensional terrain scene model data product
CN110675496B (en) Grid subdivision and visualization method and system based on three-dimensional urban geological model
CN104835202A (en) Quick three-dimensional virtual scene constructing method
CN111784840B (en) LOD (line-of-sight) level three-dimensional data singulation method and system based on vector data automatic segmentation
CN113516769A (en) Virtual reality three-dimensional scene loading and rendering method and device and terminal equipment
CN111581776B (en) Iso-geometric analysis method based on geometric reconstruction model
CN113628331B (en) Data organization and scheduling method for photogrammetry model in illusion engine
CN115796712B (en) Regional land ecosystem carbon reserve estimation method and device and electronic equipment
US20160180586A1 (en) System and method for data compression and grid regeneration
CN115861527A (en) Method and device for constructing live-action three-dimensional model, electronic equipment and storage medium
Samavati et al. Interactive 3D content modeling for digital earth
CN112687007A (en) LOD technology-based stereo grid map generation method
CN116912441A (en) Ocean hydrological meteorological data-oriented visualization method, device and medium
Yu et al. Saliency computation and simplification of point cloud data
CN115631317B (en) Tunnel lining ortho-image generation method and device, storage medium and terminal
CN113470172B (en) Method for converting OBJ three-dimensional model into 3DTiles
CN115186347A (en) Building CityGML modeling method combining house type plan and inclined model
CN114283266A (en) Three-dimensional model adjusting method and device, storage medium and equipment
CN112967396A (en) Mirror reflection-based 3D model spherical surface area-preserving parameterization method and system
CN112465973A (en) High-precision simulation mapping technical method for digital ground model
Chio et al. The establishment of 3D LOD2 objectivization building models based on data fusion
KR20190113669A (en) Apparatus and method for data management for reconstruct in 3d object surface
CN109191556B (en) Method for extracting rasterized digital elevation model from LOD paging surface texture model
Zhu et al. Reconstruction of 3D maps for 2D satellite images
Hua et al. Review of 3D GIS Data Fusion Methods and Progress

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination