Three-dimensional scene local area dynamic flattening method and device based on flattening polygon
Technical Field
The invention belongs to the field of spatial information, and particularly relates to a three-dimensional scene local area dynamic flattening method and device based on a flattening polygon.
Background
The three-dimensional model is an important data basis for three-dimensional visualization, and its accuracy is closely related to the visualization effect of a three-dimensional scene: a fine model improves the quality of the visualization, whereas a low-quality model degrades the appearance and may even impair the application. Fine models are generally produced by manual modeling or lidar-based modeling, but because the production cost is high, fine modeling is usually reserved for key targets. Oblique photography is a full-element three-dimensional modeling method with low production cost and high production speed that can reconstruct large scenes, including both geometric structure and texture, in batches; limited by the current state of the art, however, it models fine and scattered targets such as trees and electric poles poorly and easily produces a large number of low-quality models.
Specialized retouching software can be employed to remove low-quality models and improve visualization. The removal operation is essentially a secondary modeling of the original model and requires re-acquisition of data (e.g., texture images) to achieve a three-dimensional reconstruction of the removed region. At present, some retouching software, such as Meshmixer and Wish3D, provides a local flattening function that directly flattens a low-quality model (such as a tree) to the ground without re-acquiring data, thereby indirectly meeting the requirement of removing the low-quality model. However, this approach has the following problems: (1) flattening requires skilled operators and professional retouching software; besides the extra workload, it is essentially infeasible for ordinary users. (2) Flattening in retouching software is essentially a physical destruction of the three-dimensional model: it turns the question of how well a model reflects reality into a '0-1' question of whether the model exists at all, so the virtual scene becomes inconsistent with the actual scene, and because the operation is irreversible, it is unacceptable in some applications.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a three-dimensional scene local area dynamic flattening method and device based on a flattening polygon, which impose few restrictions on the flattening area (it is not limited to a horizontal plane and can be any inclined plane), offer strong interactivity, and allow the actual flattening effect to be conveniently checked and dynamically adjusted.
The technical scheme is as follows: the invention provides a three-dimensional scene local area dynamic flattening method based on a flattening polygon, which specifically comprises the following steps:
(1) defining a flattening polygon in a user coordinate system to determine a flattening area: defining a flattening polygon according to the target to be flattened, so that the flattening polygon is tightly attached to a flattening area of the target to be flattened;
(2) creating a flattening camera according to the flattening polygon, and setting an observation matrix, a projection matrix and a viewport matrix;
(3) generating a flattened polygon depth map using a flattening camera;
(4) transmitting the observation matrix, the projection matrix and the flattened polygon depth map of the flattening camera into a GPU programmable pipeline;
(5) in a vertex shader of the GPU, judging the inclusion relation between the flattening polygon and each vertex, and performing displacement flattening on every vertex that lies inside the flattening polygon and is higher than it.
Further, the step (2) comprises the steps of:
(21) calculating a bounding box of the flattening polygon: the bounding box is the minimum axis-aligned box containing all vertices of the flattening polygon, where the top-face Z value bTop equals the maximum Z value of all vertices and the bottom-face Z value bBottom equals the minimum Z value of all vertices; bTop is then modified to bTop = max(bTop, bBottom + f), where f is any value greater than 0;
(22) setting an observation matrix of the flattening camera: determining a straight line L through the centers of the top and bottom faces of the bounding box, selecting any point on L higher than the bounding box as the origin O of the observation coordinate system, defining the X, Y and Z axes of the observation coordinate system to be parallel to the X, Y and Z axes of the user coordinate system respectively, thereby establishing the observation coordinate system, and setting the observation matrix of the flattening camera according to the observation coordinate system;
(23) setting a projection matrix of the flattening camera: taking the bounding box of the flattening polygon as the observation space of the flattening camera, and then setting the projection matrix of the flattening camera according to the orthographic projection type and the observation space;
(24) setting a viewport matrix of the flattening camera: setting the viewport width W and height H of the flattening camera, where both W and H are greater than 0, and setting the viewport matrix of the flattening camera according to the viewport width and height.
Further, the step (3) includes the steps of:
(31) decomposing the flattening polygon into a triangle mesh;
(32) outputting the flattened polygon depth map: disabling the color buffer, enabling the depth buffer, submitting the decomposed triangle mesh to the GPU, and generating the flattened polygon depth map.
Further, the step (5) includes the steps of:
(51) in the vertex shader, transforming the vertex coordinates V0(x0, y0, z0) in the user coordinate system into the texture space of the flattening camera according to the observation matrix and projection matrix of the flattening camera, and denoting the transformed coordinates as V1(x1, y1, z1);
(52) if x1 and y1 are both in the range [0, 1], sampling the flattened polygon depth map at coordinates (x1, y1) to obtain the depth value z2; if z2 < 1.0, inverse-transforming the coordinates (x1, y1, z2) to the user coordinate system according to the observation matrix and projection matrix of the flattening camera to obtain coordinates (x3, y3, z3): if z3 < z0, the vertex needs to be flattened, and the z value of V0 is modified to z3; otherwise, V0 is kept unchanged;
(53) using V0 in the normal rendering process.
Based on the same inventive concept, the invention further provides a three-dimensional scene local area dynamic flattening device based on a flattening polygon, which comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program, when loaded into the processor, implements the above three-dimensional scene local area dynamic flattening method based on a flattening polygon.
Has the advantages that: compared with the prior art, the invention has the following beneficial effects: unlike traditional flattening, in which the model file must be modified, the method does not modify the three-dimensional model, offers strong interactivity, and allows the actual flattening effect to be conveniently checked and dynamically adjusted; meanwhile, the invention imposes few restrictions on the flattening area, which is not limited to a horizontal plane and can be any inclined plane, and the flattening area can be defined, according to the user's needs, by a spatial polygon whose vertices need not be strictly coplanar.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a visual effect diagram before the electric pole is flattened;
FIG. 3 is a diagram of an interactive definition of flattened polygons;
FIG. 4 is a schematic view of a flattening camera setup;
FIG. 5 is a visual effect diagram of the electric pole after flattening.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The invention provides a three-dimensional scene local area dynamic flattening method based on a flattening polygon, which specifically comprises the following steps:
Step 1: defining a flattening polygon in the user coordinate system to determine the flattening area: a flattening polygon is defined according to the target to be flattened, so that it is tightly attached to the flattening area of the target to be flattened.
Fig. 2 is a visual effect diagram before the electric pole is flattened. As shown in fig. 3, the screen coordinates of the vertices of the flattening polygon are obtained by picking points on the screen, and the flattening polygon is required to be tightly attached to the flattening area of the target to be flattened. The flattening area, i.e. the reference plane onto which the model is flattened, generally coincides with the ground, and the flattening polygon is a user-defined spatial polygon approximating the flattening area. The screen coordinates are then transformed into the user coordinate system by means of the viewport matrix, projection matrix and view matrix of the rendering camera viewpoint. In the 3D graphics library OpenGL, this can be implemented with the gluUnProject function.
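For illustration only, this screen-to-user-coordinate conversion can be sketched as follows, assuming a legacy OpenGL/GLU context; the helper name pickFlatteningVertex and the reading of winZ from the depth buffer are assumptions of this sketch, not part of the invention.

    // Sketch: convert one screen pick to user coordinates with gluUnProject.
    // Assumes winX/winY are already in OpenGL window coordinates (origin at
    // the lower-left corner of the viewport).
    #include <GL/glu.h>

    bool pickFlatteningVertex(double winX, double winY,
                              double& objX, double& objY, double& objZ)
    {
        GLdouble model[16], proj[16];
        GLint    view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX,  model);   // view matrix of the rendering camera
        glGetDoublev(GL_PROJECTION_MATRIX, proj);    // projection matrix
        glGetIntegerv(GL_VIEWPORT,         view);    // viewport

        GLfloat winZ = 0.0f;                         // depth under the cursor
        glReadPixels((GLint)winX, (GLint)winY, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

        // gluUnProject inverts viewport * projection * view, i.e. maps window
        // coordinates back to the user coordinate system.
        return gluUnProject(winX, winY, winZ, model, proj, view,
                            &objX, &objY, &objZ) == GL_TRUE;
    }

Repeating this for every picked point yields the flattening polygon vertices in the user coordinate system.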
Step 2: a flattening camera is created from the flattened polygon, and an observation matrix, a projection matrix, and a viewport matrix are set.
(2.1) calculating the bounding box of the flattening polygon: as shown in fig. 4, the bounding box is the minimum axis-aligned box containing all vertices of the flattening polygon, where the top-face Z value bTop equals the maximum Z value of all vertices and the bottom-face Z value bBottom equals the minimum Z value of all vertices. To avoid a degenerate bounding box when the top and bottom faces are coplanar, bTop is modified to bTop = max(bTop, bBottom + f), where f is an adjustment factor taking any value greater than 0. Since the flattening polygon is close to horizontal, taking the adjustment factor f = 1 gives bTop = bBottom + 1.
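A minimal sketch of this bounding-box computation is given below; the Vec3 and FlattenBox structures are illustrative assumptions.

    #include <algorithm>
    #include <vector>

    struct Vec3 { double x, y, z; };

    // Axis-aligned bounding box of the flattening polygon; the top face is
    // lifted by the adjustment factor f so the box cannot degenerate.
    struct FlattenBox { double xMin, xMax, yMin, yMax, bBottom, bTop; };

    FlattenBox computeFlattenBox(const std::vector<Vec3>& poly, double f = 1.0)
    {
        FlattenBox b{ poly[0].x, poly[0].x, poly[0].y, poly[0].y,
                      poly[0].z, poly[0].z };
        for (const Vec3& v : poly) {
            b.xMin = std::min(b.xMin, v.x);  b.xMax = std::max(b.xMax, v.x);
            b.yMin = std::min(b.yMin, v.y);  b.yMax = std::max(b.yMax, v.y);
            b.bBottom = std::min(b.bBottom, v.z);
            b.bTop    = std::max(b.bTop,    v.z);
        }
        // bTop = max(bTop, bBottom + f), f > 0: even a perfectly coplanar
        // polygon yields a box of positive height.
        b.bTop = std::max(b.bTop, b.bBottom + f);
        return b;
    }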
(2.2) setting the observation matrix of the flattening camera: as shown in fig. 4, a straight line L is determined through the centers of the top and bottom faces of the bounding box, any point on L higher than the bounding box is selected as the origin O of the observation coordinate system, and the X, Y and Z axes of the observation coordinate system are defined to be parallel to the X, Y and Z axes of the user coordinate system respectively, thereby establishing the observation coordinate system; the observation matrix of the flattening camera is then set according to the observation coordinate system. In the 3D graphics library OpenGL, the observation matrix can be set from the observation coordinate system information using the gluLookAt function.
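As a sketch only (reusing the FlattenBox structure above and assuming the legacy GLU matrix functions named in this step), the observation matrix can be set as follows; the choice eyeZ = bTop + 1 is simply an arbitrary point above the box.

    // Place the eye on the line L through the box centre, above bTop, and
    // look straight down -Z with +Y up, so the observation axes stay
    // parallel to the user coordinate system.
    void setFlattenViewMatrix(const FlattenBox& b)
    {
        double cx   = 0.5 * (b.xMin + b.xMax);
        double cy   = 0.5 * (b.yMin + b.yMax);
        double eyeZ = b.bTop + 1.0;          // any point higher than the box

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(cx, cy, eyeZ,              // origin O of the observation system
                  cx, cy, b.bBottom,         // target on the centre line L
                  0.0, 1.0, 0.0);            // up direction = user +Y axis
    }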
(2.3) setting the projection matrix of the flattening camera: the bounding box of the flattening polygon is taken as the observation space of the flattening camera, and the projection matrix of the flattening camera is then set according to the orthographic projection type and the observation space. In the 3D graphics library OpenGL, the orthographic projection matrix can be set from the observation space using the glOrtho function.
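A matching sketch for the orthographic projection is given below (again reusing FlattenBox; eyeZ is the eye height chosen above); note that glOrtho expects near and far as distances from the eye along the viewing direction.

    // Map the flattening-polygon bounding box onto the clip volume of the
    // flattening camera with an orthographic projection.
    void setFlattenProjectionMatrix(const FlattenBox& b, double eyeZ)
    {
        double cx = 0.5 * (b.xMin + b.xMax);
        double cy = 0.5 * (b.yMin + b.yMax);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(b.xMin - cx, b.xMax - cx,    // left, right (eye-relative)
                b.yMin - cy, b.yMax - cy,    // bottom, top
                eyeZ - b.bTop,               // near: eye to top face
                eyeZ - b.bBottom);           // far:  eye to bottom face
    }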
(2.4) setting the viewport matrix of the flattening camera: the viewport width W and height H of the flattening camera are set, where both must be greater than 0 and may simply take the actual width and height of the window; the viewport matrix of the flattening camera is then set according to the viewport width and height. In the 3D graphics library OpenGL, the viewport can be set using the glViewport function.
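For reference, glViewport(0, 0, W, H) corresponds to the following viewport matrix (written row-major, acting on a column vector (x, y, z, 1) in normalized device coordinates, with the default depth range [0, 1]); the helper name is illustrative.

    // Viewport matrix for glViewport(0, 0, W, H): maps NDC [-1,1]^3 to
    // window coordinates [0,W] x [0,H] and depth [0,1].
    void buildViewportMatrix(double W, double H, double M[4][4])
    {
        const double m[4][4] = {
            { W / 2.0, 0.0,     0.0, W / 2.0 },
            { 0.0,     H / 2.0, 0.0, H / 2.0 },
            { 0.0,     0.0,     0.5, 0.5     },
            { 0.0,     0.0,     0.0, 1.0     }
        };
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c)
                M[r][c] = m[r][c];
    }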
Step 3: a flattened polygon depth map is generated using the flattening camera.
The flattening polygon is decomposed into a triangle mesh, and the flattened polygon depth map is output: the color buffer is disabled, the depth buffer is enabled, and the decomposed triangle mesh is submitted to the GPU to generate the depth map using the framebuffer object (FBO) technique.
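This depth-only pass can be sketched as follows, assuming an OpenGL 3.x context with an extension loader already initialized; the names createFlattenDepthMap and drawFlattenPolygonTriangles are illustrative, and the returned texture is the flattened polygon depth map used in step 5.

    // Render the triangulated flattening polygon into a depth-only FBO
    // through the flattening camera configured in step 2.
    void drawFlattenPolygonTriangles();      // hypothetical: submits the triangle mesh

    GLuint createFlattenDepthMap(int W, int H)
    {
        GLuint depthTex = 0, fbo = 0;

        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, W, H, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);
        glDrawBuffer(GL_NONE);               // color buffer disabled
        glReadBuffer(GL_NONE);

        glViewport(0, 0, W, H);              // viewport of the flattening camera
        glEnable(GL_DEPTH_TEST);             // depth buffer enabled
        glClear(GL_DEPTH_BUFFER_BIT);        // cleared depth 1.0 marks uncovered texels

        drawFlattenPolygonTriangles();

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return depthTex;
    }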
Step 4: the observation matrix, the projection matrix and the flattened polygon depth map of the flattening camera are transmitted into the GPU programmable pipeline.
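As a sketch (assuming a GLSL shader program whose uniforms are named u_flattenViewProj and u_flattenDepthMap; both names are assumptions of this illustration), the matrices and depth map can be passed to the programmable pipeline as follows.

    // Bind the flattening camera's combined view-projection matrix and the
    // flattened polygon depth map to the normal rendering program.
    void bindFlattenUniforms(GLuint program, const float viewProj[16], GLuint depthTex)
    {
        glUseProgram(program);
        glUniformMatrix4fv(glGetUniformLocation(program, "u_flattenViewProj"),
                           1, GL_FALSE, viewProj);       // column-major, as GL expects

        glActiveTexture(GL_TEXTURE7);                    // any free texture unit
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glUniform1i(glGetUniformLocation(program, "u_flattenDepthMap"), 7);
    }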
Step 5: in the vertex shader on the GPU, the inclusion relation between the flattening polygon and each vertex is judged, and displacement flattening is performed on every vertex that lies inside the flattening polygon and is higher than it.
(5.1) The three-dimensional scene is rendered normally. In the vertex shader, the vertex coordinates V0(x0, y0, z0) are transformed into the texture space of the flattening camera according to the flattening camera's view matrix Mview, projection matrix Mproject and viewport matrix Mviewport; the overall transformation matrix may be set to M = Mview · Mproject · Mviewport, and V0 is transformed and normalized with M to obtain the transformed coordinates V1(x1, y1, z1).
(5.2) If x1 and y1 are both within the range [0, 1], the flattened polygon depth map is sampled at coordinates (x1, y1) to obtain the depth value z2. If z2 < 1.0, the projection of the point onto the XY plane is covered by the flattening polygon, and the coordinates (x1, y1, z2) are transformed with the inverse of M to obtain the user-space coordinates (x3, y3, z3). If z3 < z0, the vertex needs to be flattened and the z value of V0 is modified to z3; otherwise, the vertex does not need to be flattened and V0 is kept unchanged.
(5.3) V0 participates in the normal rendering process. The final effect is shown in fig. 5.
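The logic of steps (5.1) to (5.3) can be sketched as a GLSL vertex shader, embedded here as a C++ string constant. The uniform and attribute names are assumptions, the 0.5/0.5 bias stands in for the viewport step of (5.1) by mapping clip coordinates directly to [0, 1] texture space, and inverse() is called per vertex only for brevity; in practice the inverse matrix would normally be passed as a uniform.

    // Sketch of a flattening vertex shader (not the invention's exact code).
    static const char* kFlattenVertexShader = R"GLSL(
    #version 330 core
    uniform mat4 u_modelViewProj;      // MVP of the normal rendering camera
    uniform mat4 u_flattenViewProj;    // flattening camera: Mproject * Mview
    uniform sampler2D u_flattenDepthMap;
    in vec4 a_position;                // V0 in the user coordinate system

    void main() {
        vec4 v0 = a_position;
        // (5.1) user coordinates -> flattening-camera texture space [0,1]
        vec4 clip = u_flattenViewProj * v0;
        vec3 v1 = clip.xyz / clip.w * 0.5 + 0.5;
        // (5.2) only vertices whose XY projection is covered by the polygon
        if (v1.x >= 0.0 && v1.x <= 1.0 && v1.y >= 0.0 && v1.y <= 1.0) {
            float z2 = texture(u_flattenDepthMap, v1.xy).r;
            if (z2 < 1.0) {            // covered by the flattening polygon
                // inverse-transform (x1, y1, z2) back to user coordinates
                vec4 ndc  = vec4(vec3(v1.xy, z2) * 2.0 - 1.0, 1.0);
                vec4 user = inverse(u_flattenViewProj) * ndc;
                float z3  = user.z / user.w;
                if (z3 < v0.z) v0.z = z3;   // flatten: lower the vertex to z3
            }
        }
        // (5.3) the (possibly flattened) V0 joins the normal rendering
        gl_Position = u_modelViewProj * v0;
    }
    )GLSL";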
The invention also provides a three-dimensional scene local area dynamic flattening device based on a flattening polygon, which comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program, when loaded into the processor, implements the above three-dimensional scene local area dynamic flattening method based on a flattening polygon.