CN113178014B - Scene model rendering method and device, electronic equipment and storage medium - Google Patents

Scene model rendering method and device, electronic equipment and storage medium

Info

Publication number
CN113178014B
CN113178014B (application CN202110584749.5A)
Authority
CN
China
Prior art keywords
cluster
rendering
voxel
cluster array
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110584749.5A
Other languages
Chinese (zh)
Other versions
CN113178014A (en)
Inventor
韦杨淞
吴彧文
张子豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110584749.5A priority Critical patent/CN113178014B/en
Publication of CN113178014A publication Critical patent/CN113178014A/en
Application granted granted Critical
Publication of CN113178014B publication Critical patent/CN113178014B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The embodiment of the invention provides a scene model rendering method and device, an electronic device and a storage medium, comprising the following steps: acquiring a scene model formed by a plurality of triangular patches; dividing the scene model according to the attribute information of the triangular patches to obtain a cluster array, the cluster array comprising a plurality of cluster objects, each formed by a plurality of triangular patches; determining a first target cluster array and a second target cluster array according to the screen projection sizes corresponding to the plurality of cluster objects; performing rasterization rendering on the first target cluster array; and performing voxel rendering on the second target cluster array. According to the embodiment of the invention, culling at the triangular-patch level can be performed on the scene model, and when the remaining triangular patches are rendered, rasterization rendering or voxel rendering is adopted by adaptive decision, so that the rendering efficiency is improved to the greatest extent.

Description

Scene model rendering method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of model processing, in particular to a scene model rendering method, a scene model rendering device, electronic equipment and a storage medium.
Background
Next-generation games require game frames that are as close to real scenes as possible. A lifelike three-dimensional scene is made up of many models, such as terrain models, which are polygonal representations of objects, and the basic building block of a model is the triangular patch. A model often requires a very large number of triangular patches if it is to truly render a real object. However, rendering a large number of triangular patches is very time-consuming due to device performance limitations, and for an interactivity-focused application such as a game, each frame should render in no more than 33 ms, otherwise the player's gaming experience is severely affected. Therefore, in order to guarantee the frame rate of a game, the face count of a model is strictly limited when creating models for a game scene.
In order to make a model look more realistic under the limited face-count budget of triangular patches, art staff must pay additional time for modeling, yet the rendering efficiency of this approach is still not high.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are presented to provide a scene model rendering method and a corresponding scene model rendering device, electronic device, and storage medium that overcome or at least partially solve the above problems.
In order to solve the above problems, an embodiment of the present invention discloses a scene model rendering method, including:
acquiring a scene model formed by a plurality of triangular patches;
dividing the scene model according to the attribute information of the triangular patches to obtain a cluster array; the cluster array comprises a plurality of cluster objects, wherein the cluster objects are formed by a plurality of triangular patches;
determining a first target cluster array and a second target cluster array according to screen projection sizes corresponding to the plurality of cluster objects;
performing rasterization rendering on the first target cluster array;
and carrying out voxel rendering on the second target cluster array.
Optionally, the dividing the scene model according to the attribute information of the triangular patch to obtain a cluster array includes:
dividing the scene model into a plurality of cluster objects according to a specified number of triangular patches, wherein the cluster objects form a cluster array; the cluster objects are formed by triangular patches with similar spatial positions and consistent normal directions;
obtaining the maximum value and the minimum value of the vertex position of the triangular patch in the cluster object, and forming a bounding box of the cluster object according to the maximum value and the minimum value;
acquiring the normal directions of the triangular patches in the cluster object, and forming a Cone structure of the cluster object according to the normal directions; the Cone structure is used for defining a conical range in space;
performing voxelization on the cluster array to obtain voxel data of the scene model;
and storing the voxel data in a sparse voxel octree SVO structure, wherein the voxel data is stored in leaf nodes of the SVO tree.
Optionally, after the storing the voxel data in a sparse voxel octree SVO structure with the voxel data stored in leaf nodes of the SVO tree, the method further comprises:
performing a Mipmap operation on the SVO tree storing the voxel data, so that both leaf nodes and non-leaf nodes in the SVO tree store voxel data.
Optionally, the determining the first target cluster array and the second target cluster array according to the screen projection sizes corresponding to the plurality of cluster objects includes:
determining the screen projection size of the bounding box of the cluster object in screen space;
performing patch culling on the cluster objects according to the screen projection size, the cluster objects surviving the culling forming a first target cluster array; the patch culling removes cluster objects whose screen projection size is smaller than a preset threshold value;
and taking the cluster objects removed by the patch culling as a second target cluster array.
Optionally, before the performing patch culling on the cluster objects according to the screen projection size to obtain the first target cluster array, the method further includes:
performing back-face culling on the cluster object according to the Cone structure of the cluster object, and performing view-frustum culling on the cluster object according to the bounding box of the cluster object.
Optionally, the method further comprises: recording the screen pixel coordinates corresponding to the second target cluster array in a stencil template.
Optionally, the performing rasterization rendering or voxel rendering on the target cluster arrays includes:
performing voxel rendering on a third cluster object and performing rasterization rendering on a fourth cluster object.
Optionally, the voxel rendering of the second target cluster array includes:
acquiring the screen pixel coordinates corresponding to each screen pixel;
if the screen pixel coordinates match screen pixel coordinates recorded in the stencil template, determining that the screen pixel passes the stencil test;
emitting a ray along the line-of-sight direction from the screen pixel coordinates of each screen pixel passing the stencil test;
when an intersection exists between the ray and the voxel space, traversing the SVO tree downwards from the corresponding cluster object to obtain target voxel data;
calculating the color value of the screen pixel according to the target voxel data and a preset illumination model;
and filling the color value into the screen pixel.
Optionally, the filling the color value into the screen pixel includes:
acquiring the depth of the voxel at the intersection point, and performing a depth test on the depth;
when the depth passes the depth test, the color value is filled into the screen pixel.
Optionally, after rasterizing rendering or voxel rendering the target cluster array, the method further comprises:
and storing the scene model subjected to rasterization rendering or voxel rendering as a binary file.
The embodiment of the invention also discloses a scene model rendering device, which comprises:
the acquisition module is used for acquiring a scene model formed by a plurality of triangular patches;
the division module is used for dividing the scene model according to the attribute information of the triangular patches to obtain a cluster array; the cluster array comprises a plurality of cluster objects, wherein the cluster objects are formed by a plurality of triangular patches;
the culling module is used for determining a first target cluster array and a second target cluster array according to the screen projection sizes corresponding to the plurality of cluster objects;
the rasterization rendering module is used for performing rasterization rendering on the first target cluster array;
and the voxel rendering module is used for performing voxel rendering on the second target cluster array.
Optionally, the dividing module is configured to divide the scene model into a plurality of cluster objects according to a specified number of triangular patches, where the cluster objects form a cluster array; the cluster objects are formed by triangular patches with similar spatial positions and consistent normal directions; obtain the maximum value and the minimum value of the vertex positions of the triangular patches in the cluster object, and form a bounding box of the cluster object according to the maximum value and the minimum value; acquire the normal directions of the triangular patches in the cluster object, and form a Cone structure of the cluster object according to the normal directions, the Cone structure being used for defining a conical range in space; perform voxelization on the cluster array to obtain voxel data of the scene model; and store the voxel data in a sparse voxel octree SVO structure, with the voxel data stored in leaf nodes of the SVO tree.
Optionally, the dividing module is further configured to perform a Mipmap operation on the SVO tree storing the voxel data, so that both leaf nodes and non-leaf nodes in the SVO tree store voxel data.
Optionally, the culling module is configured to determine the screen projection size of the bounding box of the cluster object in screen space; perform patch culling on the cluster objects according to the screen projection size, the cluster objects surviving the culling forming a first target cluster array, where the patch culling removes cluster objects whose screen projection size is smaller than a preset threshold value; and take the cluster objects removed by the patch culling as a second target cluster array.
Optionally, the culling module is configured to perform back-face culling on the cluster object according to the Cone structure of the cluster object, and perform view-frustum culling on the cluster object according to the bounding box of the cluster object.
Optionally, the culling module is configured to record the screen pixel coordinates corresponding to the second target cluster array in a stencil template.
Optionally, the voxel rendering module is configured to acquire the screen pixel coordinates corresponding to each screen pixel; if the screen pixel coordinates match screen pixel coordinates recorded in the stencil template, determine that the screen pixel passes the stencil test; emit a ray along the line-of-sight direction from the screen pixel coordinates of each screen pixel passing the stencil test; when an intersection exists between the ray and the voxel space, traverse the SVO tree downwards from the corresponding cluster object to obtain target voxel data; calculate the color value of the screen pixel according to the target voxel data and a preset illumination model; and fill the color value into the screen pixel.
Optionally, the voxel rendering module is configured to acquire the depth of the voxel at the intersection point and perform a depth test on the depth; when the depth passes the depth test, the color value is filled into the screen pixel.
Optionally, the apparatus further comprises: and the scene model storage module is used for storing the scene model subjected to rasterization rendering or voxel rendering as a binary file.
The embodiment of the invention discloses an electronic device, which comprises a processor, a memory and a computer program stored on the memory and capable of running on the processor, wherein the computer program realizes the steps of the scene model rendering method when being executed by the processor.
The embodiment of the invention discloses a computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and the computer program realizes the steps of the scene model rendering method when being executed by a processor.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, when model rendering is performed, a scene model formed by a plurality of triangular patches is acquired, the scene model is divided into a cluster array according to the attribute information of the triangular patches, a first target cluster array and a second target cluster array are determined according to the screen projection sizes corresponding to the plurality of cluster objects, rasterization rendering is then performed on the first target cluster array, and voxel rendering is performed on the second target cluster array. According to the embodiment of the invention, culling at the triangular-patch level can be achieved on the scene model, and when the remaining triangular patches are rendered, rasterization rendering or voxel rendering is adopted by adaptive decision, so that the rendering efficiency is improved to the greatest extent.
Drawings
FIG. 1 is a flow chart of steps of an embodiment of a scene model rendering method of the present invention;
FIG. 2 is a schematic diagram of the overall architecture of a rendering framework of the present invention;
FIG. 3 is a schematic diagram of the preprocessing stage of the present invention;
FIG. 4 is a schematic diagram of a culling stage of the present invention;
FIG. 5 is a result diagram of a rasterized rendering of the present invention;
FIG. 6 is a schematic illustration of a position marker of the present invention;
FIG. 7 is a flow chart of a voxel rendering stage of the present invention;
FIG. 8 is a result diagram of a voxel rendering of the present invention;
FIG. 9 is a result diagram of a fused rasterized rendering and voxel rendering of the present invention;
fig. 10 is a block diagram showing an embodiment of a scene model rendering apparatus according to the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention may become more readily apparent, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
In a specific implementation, to make a model look more realistic under the limited face-count budget of triangular patches, art staff must spend additional time on modeling. Taking the current workflow by which art staff model for next-generation projects as an example, producing a model that meets the face budget while preserving real detail includes the following steps:
Step 1, a low-poly model (model with a low face count) is made with modeling software such as 3ds Max or Maya. Step 1 prepares for making a high-poly model (model with a high face count); of course, if the model structure is relatively simple, the high-poly model can also be made directly.
Step 2, the low-poly model is sculpted with digital sculpting software such as ZBrush. After the low-poly model is made in step 1, it is retopologized, detailed, and sculpted using ZBrush.
Step 3, the high-poly model is retopologized into a low-poly model. The high-poly model sculpted in ZBrush has a very large face count; if it were used directly there would be rendering performance problems, for example a frame time exceeding 33 ms, severely affecting the player's gaming experience. Therefore, a low-poly model must be retopologized from it.
Step 4, the UVs of the low-poly model are unwrapped. Specifically, the low-poly model retopologized from the high-poly model of step 3 needs UV unwrapping for baking maps.
Step 5, baking. The baking process transfers the detail information of the high-poly model onto the low-poly model, expressed as normal maps, color maps, metallic maps, roughness maps, and the like.
It can be seen that, in order to improve model fidelity while guaranteeing rendering efficiency, additional manpower and time costs are required to optimize the model. If the rendering performance problem caused by a huge face count of triangular patches could be solved, steps 3 to 4 could be omitted and the high-poly model created in step 2 used directly, so that developers could save time on these chores and put their energy into more important matters, such as polishing the model to make it finer, further improving the quality of game visuals.
For rendering models with a huge number of triangular patches, many related technologies exist today, such as LOD, voxels, geometry images, and culling, all of which improve rendering efficiency to a certain extent. These techniques are briefly described below.
1. LOD (Levels of Detail)
LOD is widely used in sandbox or open-world games. Its principle is to reduce the complexity of a model according to parameters related to the model's importance, position, speed, or viewing angle. For example, the same object displays different models at different distances from the camera: when the object is far from the camera, it occupies few screen pixels, and even a highly detailed model is hard to make out once projected onto the screen; a low-precision model can then be used for rendering, improving efficiency without affecting the rendering quality.
Using LOD technology in a project can involve many workflows, such as modeling, the rendering pipeline, and resource loading, each with difficulties that need to be addressed. So, to make full use of LOD technology in a development project, the following aspects need to be considered:
1) Art staff are required. The key to using LOD technology is that models with different face counts must be made, so when a certain model is made, multiple versions with different face counts are required.
2) The effect of LOD model switching needs to be considered. For discrete LODs, there is inevitably a visual jump as the camera approaches the model (commonly referred to as popping), so making LOD switches look visually natural usually requires a more complex switching algorithm tailored to the project. Continuous LOD generally transitions smoothly from one LOD level to another through some interpolation method, which introduces more memory consumption.
3) The value of LOD must be weighed against its cost. Using LOD does improve rendering efficiency while largely preserving rendering quality, but as mentioned above, using LOD technology introduces additional costs, including but not limited to extra memory usage, complex scene authoring, and reliance on, or even development of, related software tools.
2. Voxel (Voxel)
Voxels are another representation of a model, and the method of rendering voxels is called voxel rendering. Voxels can be viewed as sample points in three-dimensional space, stored in a three-dimensional array. Each voxel may store the color, normal, texture, and other properties of its spatial location. Compared with the traditional triangle-mesh representation of a model, voxel data has a simple and uniform structure, is easy to modify, and is more convenient to Mipmap. Voxel data can be rendered with a direct voxel rendering algorithm such as ray casting (RayCasting), which emits a ray in the line-of-sight direction from each pixel of the screen image; each ray passes through the voxel data and resamples it along the way, and a transfer function finally converts the voxel data at the sample points into a color value for the pixel. Using a voxel rendering method avoids the rendering-efficiency problem caused by excessive scene face counts, because the efficiency of a direct voxel rendering algorithm is related only to the efficiency of resampling the voxel data. Mipmap technology is somewhat similar to LOD technology, the difference being that LOD targets model resources while Mipmap targets texture-map resources; after Mipmaps are used, maps of different precision can be selected according to the distance to the camera.
The rendering quality of voxel rendering depends on the resolution of the voxel data. For example, increasing from one voxel per cubic meter to one voxel per cubic centimeter raises the rendering resolution. However, the footprint of voxel data grows cubically: a 256³ voxel space contains about 16M voxels, while a 1024³ voxel space contains about 1 billion voxels. Therefore, if the quality of voxel rendering is to be improved, a large amount of storage space is required to hold the voxel data.
3. Geometry Image
The principle of the geometry image algorithm is to cut the mesh of the model and convert it into an image for storage through parameterization. The advantage of image storage is that the data size can be compressed using wavelets or other filters that preserve model features. The mesh is reconstructed from the image in real time at runtime.
The method has the following drawbacks: details may be lost during the parameterization of the mesh into an image due to precision problems, and a large number of cuts to the model are required during the conversion, which creates many discontinuous boundaries. In addition, since the mesh is reconstructed from the image in real time, additional reconstruction overhead is introduced, affecting the final rendering efficiency.
4. Culling
For rendering models with a huge number of triangular patches, one intuitive approach is culling. In practice, the proportion of triangular-patch faces finally rendered to the game screen is very low relative to the total face count of the scene. It is therefore unnecessary to render all the triangular patches of the scene; if the triangular patches that do not contribute to the final image can be removed, rendering efficiency can be greatly improved. Common culling today includes view-frustum culling, occlusion culling, distance culling, and the like.
View-frustum culling removes models outside the camera's view frustum; distance culling judges whether an object is culled by its distance from the camera; and occlusion culling removes objects that are inside the frustum but occluded along the view direction. The essence of culling is to consume a small amount of computing resources to eliminate as many objects as possible, and an appropriate culling method needs to be selected for different situations, so as to improve rendering efficiency.
However, conventional culling algorithms typically run in the application stage and consume the computing resources of the CPU (central processing unit); due to CPU hardware limitations, they are usually only capable of object-level culling, such as a clump of grass or a stone. It is easy to imagine that, when each model object in a game scene itself comprises many triangular patches, coarse-grained object-level culling cannot effectively guarantee a maximized rendering frame rate.
The above methods, while providing optimization ideas for the rendering of models with huge numbers of triangular patches to some extent, all suffer from drawbacks of varying degrees.
In view of the above problems, an embodiment of the present invention provides a rendering framework that, for scene models with huge numbers of triangular patches, maximizes rendering efficiency on the premise of guaranteeing rendering quality. Specifically, the rendering framework of the embodiment of the invention is based on GPU Driven rendering and the SVO; it can achieve culling at the triangular-patch level, and when triangular patches are submitted for rendering it adaptively decides between rasterization rendering and voxel rendering, so that rendering efficiency is improved to the greatest extent.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a scene model rendering method according to the present invention may specifically include the following steps:
step S101, a scene model composed of a plurality of triangular patches is acquired.
Step S102, dividing the scene model according to the attribute information of the triangular patches to obtain a cluster array; the cluster array includes a plurality of cluster objects, the cluster objects being composed of a plurality of triangular patches.
Step S103, determining a first target cluster array and a second target cluster array according to the screen projection sizes corresponding to the plurality of cluster objects.
Step S104, performing rasterization rendering on the first target cluster array.
Step S105, performing voxel rendering on the second target cluster array.
In the scene model rendering method, when model rendering is performed, a scene model formed by a plurality of triangular patches is acquired, the scene model is divided into a cluster array according to the attribute information of the triangular patches, a first target cluster array and a second target cluster array are determined according to the screen projection sizes corresponding to the plurality of cluster objects, rasterization rendering is then performed on the first target cluster array, and voxel rendering is performed on the second target cluster array. According to the embodiment of the invention, culling at the triangular-patch level can be achieved on the scene model, and when the remaining triangular patches are rendered, rasterization rendering or voxel rendering is adopted by adaptive decision, so that the rendering efficiency is improved to the greatest extent.
In an exemplary embodiment, the step S102 of dividing the scene model according to the attribute information of the triangular patch to obtain a cluster array may include the following steps:
Dividing the scene model into a plurality of cluster objects according to a specified number of triangular patches, wherein the cluster objects form a cluster array; the cluster objects are formed by triangular patches with similar spatial positions and consistent normal directions;
obtaining the maximum value and the minimum value of the vertex position of the triangular patch in the cluster object, and forming a bounding box of the cluster object according to the maximum value and the minimum value;
acquiring the normal directions of the triangular patches in the cluster object, and forming a Cone structure of the cluster object according to the normal directions; the Cone structure is used for defining a conical range in space;
performing voxelization on the cluster array to obtain voxel data of the scene model;
and storing the voxel data in a sparse voxel octree SVO structure, wherein the voxel data is stored in leaf nodes of the SVO tree.
The attribute information of a triangular patch may include at least the spatial position and normal direction of the patch, and the like. The specified number may be 64, so that every 64 triangular patches constitute a Cluster. The SVO (Sparse Voxel Octree) is one way of organizing voxel data.
Specifically, in the embodiment of the invention, every 64 triangular patches may be divided into a cluster object, and cluster objects whose triangular patches have similar spatial positions and consistent normal directions then form the cluster array. Furthermore, the maximum and minimum vertex positions of the triangular patches in a cluster object can form the bounding box of the cluster object, and the Cone structure of the cluster object can be obtained from the normal directions of its triangular patches; the bounding box and the Cone structure serve as the basis for view-frustum culling and back-face culling of the cluster object.
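As an illustration only, the following C++ sketch shows how a Cluster's bounding box and Cone structure might be derived from its triangular patches. The Vec3/Cluster types and the buildCluster function are hypothetical names, not part of the patent; the cone here is taken as the normalized average normal plus the cosine of the widest normal deviation, which is one common way to realize the structure described above.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 vmin(const Vec3& a, const Vec3& b) { return { std::min(a.x, b.x), std::min(a.y, b.y), std::min(a.z, b.z) }; }
    static Vec3 vmax(const Vec3& a, const Vec3& b) { return { std::max(a.x, b.x), std::max(a.y, b.y), std::max(a.z, b.z) }; }

    struct Cluster {
        Vec3  aabbMin, aabbMax; // bounding box from the vertex-position extremes
        Vec3  coneAxis;         // normalized average of the patch normals
        float coneCosAngle;     // cos of the half-angle covering all patch normals
    };

    // Build the AABB and Cone for one cluster of (up to) 64 triangles.
    // 'positions' holds 3 vertices per triangle; 'normals' one unit normal per
    // triangle. Both are assumed non-empty, and the normal sum non-zero.
    Cluster buildCluster(const std::vector<Vec3>& positions, const std::vector<Vec3>& normals) {
        Cluster c;
        c.aabbMin = c.aabbMax = positions[0];
        for (const Vec3& p : positions) { c.aabbMin = vmin(c.aabbMin, p); c.aabbMax = vmax(c.aabbMax, p); }

        Vec3 axis{ 0, 0, 0 };
        for (const Vec3& n : normals) { axis.x += n.x; axis.y += n.y; axis.z += n.z; }
        float len = std::sqrt(axis.x * axis.x + axis.y * axis.y + axis.z * axis.z);
        axis = { axis.x / len, axis.y / len, axis.z / len };

        float minDot = 1.0f; // widest deviation of any patch normal from the axis
        for (const Vec3& n : normals) minDot = std::min(minDot, axis.x * n.x + axis.y * n.y + axis.z * n.z);

        c.coneAxis = axis;
        c.coneCosAngle = minDot;
        return c;
    }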
In the embodiment of the invention, in order to store the cluster array through the SVO structure, the cluster array is voxelized to obtain the voxel data of the scene model, and the voxel data can then be stored in the SVO structure, so that the voxel data resides in the leaf nodes of the SVO tree.
In an exemplary embodiment, after the voxel data is stored in the sparse voxel octree SVO structure, the method further comprises the following step:
performing a Mipmap operation on the SVO tree storing the voxel data, so that both leaf nodes and non-leaf nodes in the SVO tree store voxel data.
Specifically, the SVO structure as initially built stores voxel data only in leaf nodes, so if the SVO structure is used directly to render voxels, each ray needs to recurse down to the SVO's leaf nodes. Therefore, the embodiment of the invention performs the Mipmap operation on the SVO structure so that non-leaf nodes also store voxel data; voxel data can then be obtained during rendering without traversing down to the leaf nodes of the SVO, improving voxel rendering efficiency and achieving an effect similar to LOD.
In an exemplary embodiment, the step S103 of determining the first target cluster array and the second target cluster array according to the screen projection sizes corresponding to the plurality of cluster objects may include the following steps:
determining the screen projection size of the bounding box of the cluster object in screen space;
performing patch culling on the cluster objects according to the screen projection size, the cluster objects surviving the culling forming a first target cluster array; the patch culling removes cluster objects whose screen projection size is smaller than a preset threshold value;
and taking the cluster objects removed by the patch culling as a second target cluster array.
Here, small primitive culling (patch culling) removes cluster objects whose screen-pixel projection area (screen projection size) is smaller than a preset threshold value.
In the embodiment of the invention, patch culling can be performed on the cluster objects according to the screen projection size of their bounding boxes in screen space: cluster objects whose screen projection size is greater than or equal to the preset threshold value are taken as the first target cluster array for rasterization rendering, and cluster objects whose screen projection size is smaller than the preset threshold value are taken as the second target cluster array for voxel rendering.
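A minimal CPU-side C++ sketch of this decision follows, assuming a row-major view-projection matrix and ignoring near-plane clipping for brevity. The function and type names are illustrative, and the threshold (e.g. 4 pixels, per the culling stage described later) is a tunable parameter rather than a value fixed by the patent.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Vec4 { float x, y, z, w; };
    struct Mat4 { float m[16]; }; // row-major view-projection matrix

    static Vec4 transform(const Mat4& M, float x, float y, float z) {
        return { M.m[0] * x + M.m[1] * y + M.m[2]  * z + M.m[3],
                 M.m[4] * x + M.m[5] * y + M.m[6]  * z + M.m[7],
                 M.m[8] * x + M.m[9] * y + M.m[10] * z + M.m[11],
                 M.m[12] * x + M.m[13] * y + M.m[14] * z + M.m[15] };
    }

    struct ScreenRect { int x0, y0, x1, y1; };

    // Project the 8 AABB corners, measure the pixel span on the x and y axes,
    // and return true when the cluster is small enough to go to the second
    // target array (voxel rendering). 'outRect' can later mark the stencil.
    bool useVoxelRendering(const Vec3& mn, const Vec3& mx, const Mat4& viewProj,
                           int screenW, int screenH, float thresholdPx, ScreenRect& outRect) {
        float x0 = 1e30f, y0 = 1e30f, x1 = -1e30f, y1 = -1e30f;
        for (int i = 0; i < 8; ++i) {
            Vec4 c = transform(viewProj, (i & 1) ? mx.x : mn.x,
                                         (i & 2) ? mx.y : mn.y,
                                         (i & 4) ? mx.z : mn.z);
            float sx = (c.x / c.w * 0.5f + 0.5f) * screenW; // NDC -> pixels (y flip omitted)
            float sy = (c.y / c.w * 0.5f + 0.5f) * screenH;
            x0 = std::min(x0, sx); x1 = std::max(x1, sx);
            y0 = std::min(y0, sy); y1 = std::max(y1, sy);
        }
        outRect = { (int)x0, (int)y0, (int)x1, (int)y1 };
        float span = std::max(x1 - x0, y1 - y0); // span along the x or y axis
        return span < thresholdPx;               // small => voxel rendering
    }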
In an exemplary embodiment, before the performing patch culling on the cluster objects according to the screen projection size to obtain the first target cluster array, the method may further include the following steps:
performing back-face culling on the cluster object according to the Cone structure of the cluster object, and performing view-frustum culling on the cluster object according to the bounding box of the cluster object.
In the embodiment of the invention, back-face culling can be performed on the cluster object according to its Cone structure, and view-frustum culling according to its bounding box; patch culling is then performed on the cluster objects that survive back-face and view-frustum culling, to obtain the first target cluster array and the second target cluster array.
In an exemplary embodiment, the method may further include the steps of: and storing the scene model subjected to rasterization rendering or voxel rendering as a binary file.
In the embodiment of the invention, the model data of the scene model after rendering can be stored as the binary file, and when the next rendering is needed, the data stored in the binary file can be directly used for rendering, so that repeated steps are reduced, and the rendering efficiency is improved.
Referring to fig. 2, a schematic diagram of the overall architecture of the rendering framework of the present invention is shown. The invention mixes rasterization rendering and voxel rendering and adaptively selects the rendering mode based on the screen projection size occupied by the triangular patches, so as to maximize rendering efficiency. Specifically, the rendering framework mainly comprises four rendering steps, namely preprocessing the model, culling patches on the GPU, performing rasterization rendering, and finally running the RayCasting algorithm to complete voxel rendering, thereby improving rendering efficiency. Referring to fig. 2, the steps of implementing the rendering framework according to an embodiment of the present invention may include:
101, input the original model data of the scene model. The rendering framework proposed by the present invention takes various types of model data as input, such as obj and fbx, mainly extracting the vertex and face information of the model, material properties, and the like.
102, the preprocessing stage. The preprocessing stage receives the input model, preprocesses a model imported for the first time, and mainly generates the Cluster and SVO structures of the model. This stage runs on the GPU, using the GPU's parallelism to speed up construction. The generated Cluster and SVO structures are used in the subsequent culling stage and voxel rendering stage.
103, the processed model data. Compared with the original model data, it additionally comprises the following information:
Cluster: every 64 triangular patches constitute a Cluster. During construction, triangular patches with similar spatial positions and normal directions are preferentially selected to form a Cluster. Each Cluster contains an AABB and a Cone structure for the culling stage.
SVO: the voxel data obtained by voxelizing the model in the preprocessing stage. In the SVO, a leaf node stores a memory index pointing to its voxel data, which includes information such as color, normal, and texture; a non-leaf node stores an index pointing to the first child node of the current node.
The processed model data can be stored as a binary file; when rendering is next needed, the preprocessing stage can be skipped and the data stored in the binary file used directly for rendering.
104, the culling stage. In a compute shader, view-frustum culling, back-face culling, and small primitive culling are performed on the Cluster array in parallel. The specific culling rules are as follows:
View-frustum culling: Clusters outside the view frustum are culled.
Back-face culling: Clusters whose normal directions are consistent with the line-of-sight direction are culled.
Small primitive culling: Clusters whose screen-pixel projection area is smaller than a specific threshold are culled. This step also marks the locations of the screen pixels affected by each removed Cluster, for the RayCasting algorithm of stage 106.
105, the rasterization rendering stage. The triangular patches in the Clusters remaining after culling are rendered by rasterization. Rendering commands are submitted mainly by calling the ExecuteIndirect interface, with the DrawCommandList structure filled in stage 104 serving as the draw-call parameters. The whole process runs on the GPU, avoiding returning data from the GPU to the CPU and greatly improving draw-call submission efficiency.
106, the voxel rendering stage. This stage mainly uses the RayCasting algorithm to render the voxel data directly. In a fragment shader, one ray is cast per pixel to intersect the SVO structure. Instead of performing the intersection operation on all pixels, the RayCasting operation is performed only on the pixels marked in stage 104.
107, saving the preprocessed model data as a binary file. The binary file can serve as a model data file specific to this rendering framework; it is the output data stored in stage 103 and can serve as input data for the subsequent stage 104. When the input file is this binary file, the preprocessing can be skipped and the rendering flow executed directly with the model data stored in the binary file, saving the corresponding time cost.
When the embodiment of the invention is implemented, patch-level culling can be achieved based on GPU Driven rendering, and the SVO structure is used to optimize both the voxel storage cost and the voxel rendering efficiency.
The core idea of the GPU Driven Pipeline is to reduce communication between the CPU and the GPU, placing all rendering-related work on the GPU as much as possible and saving the CPU's computing power for physics and AI computation. Meanwhile, thanks to the excellent parallelism and computing power of the GPU, finer-grained culling can be performed, improving GPU-side rendering to a certain extent.
As described above, storing the complete voxel space as a three-dimensional array requires a large amount of storage space that grows cubically with the resolution of the voxel data. In practice, most of that space is occupied by empty voxels, so the SVO can be used to compress the raw voxel data. Specifically, the SVO is a tree structure in which each node contains 8 child nodes according to a spatial subdivision. The leaf nodes of the SVO hold all voxels that contain model rendering data, while empty voxel data is filtered out. Meanwhile, organizing the voxel data in the form of an octree allows the voxel space to be sampled quickly during voxel rendering, improving the efficiency of the voxel rendering algorithm.
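One plausible node layout for such an SVO is sketched below in C++. The field names and packing are assumptions for illustration, matching the description that leaf nodes index voxel payloads while inner nodes index their first stored child (empty children are simply not stored).

    #include <cstdint>

    struct SvoNode {
        uint32_t firstChild; // inner node: index of its first stored child (children are contiguous)
        uint32_t voxel;      // leaf: voxel payload index; inner: filled later by the Mipmap pass
        uint8_t  childMask;  // bit i set => child i exists (empty space is not stored)
        uint8_t  isLeaf;     // 1 for leaf nodes
    };

    struct VoxelPayload {
        uint8_t r, g, b, a;  // color
        int8_t  nx, ny, nz;  // packed normal
        uint8_t material;    // texture / material id
    };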
However, these two points alone are not enough. Although GPU Driven culling can remove most of the triangular patches, in some cases many triangular patches still remain after culling. SVO-based voxel rendering performs better than traditional rasterization rendering when the scene face count is high, but it is limited by the resolution of the voxel space, and its rendering quality falls short of rasterization rendering. When a game scene is rendered, many different situations arise, so a more flexible rendering framework is needed to adapt to different rendering conditions and obtain optimal rendering efficiency on the premise of guaranteeing rendering quality.
Referring to fig. 3, a schematic diagram of the preprocessing stage according to an embodiment of the present invention is shown. The preprocessing stage constructs the Cluster array and the SVO structure from the original model data and converts them into data types suitable for the rendering framework of the present invention.
201, input model data. The embodiment of the invention can accept common model file formats as input, including obj, fbx, stl, gltf and the like.
202, reorder the face indices of the original model. For the input model data, Clusters need to be constructed in units of 64 triangular patches. When building Clusters, triangular patches with similar spatial positions and consistent normal directions are grouped into the same Cluster as far as possible. After the Clusters are built, the face indices of the model are reordered in Cluster order, so that the subscript of a Cluster can directly index the triangular faces it contains and no additional variable is needed to record them; a minimal sketch of this reordering follows.
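The sketch below groups triangles by sorting their centroids (a crude stand-in for the patent's position-and-normal similarity heuristic) and rewrites the index buffer in that order, so that cluster k owns triangles [64k, 64k + 64); all names are illustrative.

    #include <algorithm>
    #include <cstdint>
    #include <numeric>
    #include <tuple>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // 'centroids' has one entry per triangle; 'indices' holds 3 vertex indices per triangle.
    std::vector<uint32_t> reorderFacesIntoClusters(const std::vector<Vec3>& centroids,
                                                   const std::vector<uint32_t>& indices) {
        std::vector<uint32_t> order(centroids.size());
        std::iota(order.begin(), order.end(), 0u);
        std::sort(order.begin(), order.end(), [&](uint32_t a, uint32_t b) {
            const Vec3& ca = centroids[a];
            const Vec3& cb = centroids[b];
            return std::tie(ca.x, ca.y, ca.z) < std::tie(cb.x, cb.y, cb.z);
        });
        // Rewrite the index buffer in cluster order: every run of 64 consecutive
        // triangles now forms one Cluster, addressable by the cluster subscript alone.
        std::vector<uint32_t> out;
        out.reserve(indices.size());
        for (uint32_t t : order)
            for (int v = 0; v < 3; ++v) out.push_back(indices[t * 3 + v]);
        return out;
    }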
203, generate the bounding box of each Cluster. When the triangular faces of the original model are divided into Clusters, the maximum and minimum vertex positions of the triangular patches in each Cluster are recorded to form the Cluster's bounding box, which is used for the culling decisions of stage 104.
204, generate the Cone structure of each Cluster. The Cone structure is used for the back-face culling decision of stage 104, and its construction is related to the normal directions of the triangular patches in the Cluster. The Cone structure defines a conical range in space: when the line of sight falls within that range, all triangular patches in the Cluster are back-facing with respect to the line of sight, so all triangular patches in the Cluster can be safely culled.
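Under the conventions of the earlier Cluster sketch (axis = averaged normal, coneCosAngle = cos of the covering half-angle), one hedged form of this safety test is: every patch is back-facing when angle(view, axis) + halfAngle < 90°, i.e. dot(view, axis) > sin(halfAngle).

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // 'viewDirToCluster' points from the camera toward the cluster (unit length).
    // Returns true when every triangle in the cluster is provably back-facing.
    bool safeToBackfaceCull(const Vec3& coneAxis, float coneCosAngle, const Vec3& viewDirToCluster) {
        float sinHalf = std::sqrt(std::max(0.0f, 1.0f - coneCosAngle * coneCosAngle));
        return dot(viewDirToCluster, coneAxis) > sinHalf; // conservative: cull only inside the safe cone
    }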
205, voxelize the original model. The whole voxelization process is parallelized by means of the rasterization stage of the rendering pipeline, improving voxelization efficiency. First, the AABB bounding box of the mesh model is computed, and the viewing angle and projection plane are set according to the bounding box. When a triangular patch is projected, the projection direction that maximizes the patch's projected area must be selected, so that it can be ensured that all triangular patches are sufficiently voxelized. Meanwhile, the rasterization settings are chosen so as to guarantee the quality of the voxelization.
206, construct the Sparse Voxel Octree (SVO structure) from the voxelization result. Stage 205 generates the voxel data of the model using the rasterization pipeline; if it were stored as a 3-dimensional map, it would contain much empty voxel data and occupy a great deal of storage space. The Sparse Voxel Octree is therefore constructed to compress the raw voxel data and save storage space to the greatest extent. This process is completed in a compute shader, making full use of the GPU's parallelism.
207, perform the Mipmap operation on the Sparse Voxel Octree. In the Sparse Voxel Octree constructed in stage 206, only the leaf nodes store voxel data. This means that each ray needs to recurse to the leaf nodes of the Octree when the voxels are rendered directly in stage 106. Performing the Mipmap operation on the Sparse Voxel Octree lets the non-leaf nodes store voxel data too, so when the RayCasting algorithm executes, the voxel data can be obtained without traversing down to the leaf nodes of the Octree, improving voxel rendering efficiency and achieving an effect similar to LOD. Specifically, this is because the Mipmap operation makes every level of the SVO (leaf nodes and non-leaf nodes) store voxel data, which can be understood as scaled-down, reduced-detail copies of the full-resolution data; when a ray reaches one of these nodes, its data can be used directly for rendering without recursing on to the leaf nodes of the Octree.
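A bottom-up sketch of that Mipmap pass over the node layout assumed earlier (SvoNode/VoxelPayload): it simply averages the colors of a node's stored children and records the result on the inner node, which is one plausible reading of the operation described above (C++20 for std::popcount).

    #include <bit>
    #include <cstdint>
    #include <vector>

    // Recursively fills 'voxel' on inner nodes with the average of their children.
    // Returns the payload index now associated with 'idx'. A stored inner node is
    // assumed to have at least one child (empty space is never stored).
    uint32_t buildMipmaps(std::vector<SvoNode>& nodes, std::vector<VoxelPayload>& payloads, uint32_t idx) {
        SvoNode& n = nodes[idx];
        if (n.isLeaf) return n.voxel;

        int r = 0, g = 0, b = 0, a = 0, cnt = 0;
        for (int i = 0; i < 8; ++i) {
            if (!(n.childMask & (1u << i))) continue;
            // Children are stored compactly: rank of bit i among the set bits.
            uint32_t child = n.firstChild + std::popcount(uint32_t(n.childMask & ((1u << i) - 1u)));
            const VoxelPayload p = payloads[buildMipmaps(nodes, payloads, child)];
            r += p.r; g += p.g; b += p.b; a += p.a; ++cnt;
        }
        VoxelPayload avg{};
        avg.r = uint8_t(r / cnt); avg.g = uint8_t(g / cnt);
        avg.b = uint8_t(b / cnt); avg.a = uint8_t(a / cnt);
        payloads.push_back(avg);
        n.voxel = uint32_t(payloads.size() - 1);
        return n.voxel;
    }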
The rendering framework provided by the embodiment of the invention executes the culling decisions in a compute shader. Thanks to the parallelism advantage of the GPU, Clusters can be taken as input, achieving finer-grained culling than traditional CPU-side culling. Referring to fig. 4, a schematic diagram of the culling stage of the present invention, the specific culling process includes:
301, input of the culling stage. The Clusters generated in stage 102 serve as the input of this stage. Each GPU thread processes one Cluster.
302, back-face culling is performed on the Cluster. Cone structures were created in stage 102 from the normal vectors of the triangular faces contained in each Cluster. During back-face culling, the Cone structure is used to judge whether the current line of sight lies within the safe culling range of the Cluster. When the line of sight is within the cone, all triangular patches in the Cluster are back-facing, so the Cluster can be culled.
303, view-frustum culling is performed on the Cluster object. The basic idea of view-frustum culling: judge whether the object is inside the camera's view frustum (intersection included); if so, it is not culled, and if not, it is culled. Each Cluster contains its own bounding-box variables, and the Cluster's bounding box is tested against each plane of the frustum. If during one of the tests the bounding box lies entirely on the outer side of a plane, the box is outside the frustum and the test can terminate early. If the bounding box lies on the inner side of all the planes, it is inside the frustum; the remaining case means the bounding box intersects a plane of the frustum.
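A compact version of this plane-by-plane test uses the standard "p-vertex" trick, testing only the AABB corner furthest along each plane normal; plane normals are assumed to point into the frustum, and all names are illustrative.

    struct Vec3 { float x, y, z; };
    struct Plane { float nx, ny, nz, d; }; // inside when nx*x + ny*y + nz*z + d >= 0

    // Returns true when the AABB [mn, mx] is entirely outside some frustum plane.
    // Boxes that pass are either inside all six planes or intersect the frustum.
    bool frustumCulled(const Vec3& mn, const Vec3& mx, const Plane planes[6]) {
        for (int i = 0; i < 6; ++i) {
            const Plane& p = planes[i];
            // p-vertex: the corner furthest along the plane normal.
            Vec3 v { p.nx >= 0 ? mx.x : mn.x,
                     p.ny >= 0 ? mx.y : mn.y,
                     p.nz >= 0 ? mx.z : mn.z };
            if (p.nx * v.x + p.ny * v.y + p.nz * v.z + p.d < 0)
                return true; // even the furthest corner is outside: cull, terminate early
        }
        return false;
    }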
304, Clusters with small screen projection areas are culled. In this step, the projections of the 8 vertices of the Cluster's bounding box in screen space are computed, and the screen projection area of the bounding box is derived, so that whether the Cluster is culled is judged according to the span of the screen projection along the x or y axis. For a culled Cluster, the pixel positions it projects to must be marked in the Stencil Texture; this part of the Clusters needs voxel rendering using RayCasting. Typically, when a triangular patch is smaller than 4 pixels, the rendering efficiency of hardware rasterization suffers; this condition can therefore serve as the criterion for deciding whether a Cluster adopts rasterization rendering or voxel rendering. A sketch of the stencil marking follows.
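Reusing the ScreenRect produced by the earlier projection sketch, the stencil marking could look like the following; treating the stencil as a plain byte buffer is an illustrative simplification of the Stencil Texture.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct ScreenRect { int x0, y0, x1, y1; }; // from the projection sketch above

    // Flag every pixel inside the culled cluster's projected rectangle so that
    // stage 106 runs RayCasting only where a removed cluster would have drawn.
    void markStencil(std::vector<uint8_t>& stencil, int screenW, int screenH, const ScreenRect& r) {
        for (int y = std::max(r.y0, 0); y <= std::min(r.y1, screenH - 1); ++y)
            for (int x = std::max(r.x0, 0); x <= std::min(r.x1, screenW - 1); ++x)
                stencil[size_t(y) * screenW + x] = 1; // this pixel needs a ray cast
    }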
305, fill the DrawCall parameters. The rendering framework provided by the embodiment of the invention mainly calls the ExecuteIndirect interface when executing rasterization rendering. The interface can take a GPU Buffer as a parameter to submit draw calls. Thus, in the culling stage, the DrawCommandList structure can be populated according to the triangular patches contained in the Clusters remaining after culling. The structure mainly comprises the starting index position and the index count of the triangular patches to be submitted for rendering. Through this operation, it can be specified exactly which triangular patches are rendered by rasterization.
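As an illustration, the per-cluster entry appended to such a GPU Buffer could mirror Direct3D 12's indexed indirect-draw arguments; with the index buffer reordered by cluster (64 triangles = 192 indices each), the cluster index alone determines the draw range. The struct below follows the D3D12_DRAW_INDEXED_ARGUMENTS layout, while the helper name is hypothetical.

    #include <cstdint>

    // Field-for-field mirror of D3D12_DRAW_INDEXED_ARGUMENTS.
    struct DrawCommand {
        uint32_t indexCountPerInstance; // 64 triangles * 3 indices
        uint32_t instanceCount;         // 1
        uint32_t startIndexLocation;    // clusterIndex * 192, thanks to the reordering
        int32_t  baseVertexLocation;    // 0
        uint32_t startInstanceLocation; // 0
    };

    DrawCommand makeClusterDraw(uint32_t clusterIndex) {
        return { 64u * 3u, 1u, clusterIndex * 64u * 3u, 0, 0 };
    }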
Many triangular patches that contribute nothing to the final picture can be filtered out through the culling stage. Compared with traditional CPU-side culling, the culling granularity is finer, and as many triangular patches in a scene as possible can be culled, so that rasterization rendering efficiency is improved maximally. Meanwhile, the small triangular patches that could affect rendering efficiency are filtered out in step 304, and this part uses voxel rendering, optimizing overall rendering efficiency.
FIG. 5 shows the result of performing rasterization rendering after culling, where many pixel holes are visible, caused by the small triangular patches culled in step 304. For these hole pixels, their locations are marked in the Stencil Texture, as shown in FIG. 6. In the voxel rendering stage, the RayCasting algorithm is executed at the corresponding positions according to the Stencil Texture, and the corresponding pixel colors are calculated.
In an exemplary embodiment, the method may further include the following step: recording the screen pixel coordinates corresponding to the second target cluster array in a stencil template.
In the embodiment of the invention, in order to facilitate determining which screen pixels undergo voxel rendering, the screen pixel coordinates corresponding to the second target cluster array used for voxel rendering may be recorded in a stencil template.
In an exemplary embodiment, the voxel rendering of the second target cluster array includes:
acquiring screen pixel coordinates corresponding to each screen pixel;
if the screen pixel coordinates match screen pixel coordinates recorded in the stencil template, determining that the screen pixel passes the stencil test;
emitting a ray along the line-of-sight direction from the screen pixel coordinates of each screen pixel passing the stencil test;
when an intersection exists between the ray and the voxel space, traversing the SVO tree downwards from the corresponding cluster object to obtain target voxel data;
calculating the color value of the screen pixel according to the target voxel data and a preset illumination model;
and filling the color value into the screen pixel.
In the embodiment of the invention, the screen pixel coordinates of the current screen pixel are checked against the stencil template; only when the screen pixel coordinates are recorded in the stencil template is the screen pixel determined to pass the stencil test, otherwise the screen pixel is ignored. Voxel rendering is then performed on the screen pixels passing the stencil test: specifically, a ray is emitted along the line-of-sight direction, and when an intersection exists between the ray and the voxel space, the SVO tree is traversed downwards from the corresponding cluster object to obtain target voxel data; the color value of the screen pixel is calculated according to the target voxel data and a preset illumination model, and the color value is finally filled into the screen pixel.
In an exemplary embodiment, said filling said color values into said screen pixels comprises:
acquiring the depth of the voxel at the intersection point, and performing a depth test on the depth;
when the depth passes the depth test, the color value is filled into the screen pixel.
Further, in the depth test, the depth of the voxel position at the intersection node of the ray is tested against the depth buffer; when the depth test fails, the color value of the screen pixel is discarded, and when the depth test passes, the color value is written into the screen pixel.
Voxel rendering mainly uses the RayCasting algorithm, and the small triangular patches filtered out in step 304 constitute the second target cluster array to be rendered. Referring to fig. 7, a schematic diagram of the voxel rendering stage of the present invention, the stage may specifically include the following steps:
401, each screen pixel corresponds to a screen pixel coordinate. Each GPU thread is responsible for processing one screen pixel, whose coordinate can be calculated from the ID of the GPU thread.
402, a stencil test is performed on the screen pixel. Step 304 used the Stencil Texture to record the screen pixels for which the RayCasting algorithm needs to run; the GPU thread continues only when the screen pixel's coordinates are marked, otherwise the screen pixel is ignored.
403, a ray is generated along the line-of-sight direction from the position of the screen pixel; the screen pixel uses this ray to sample the voxel space.
404, the ray generated from the screen pixel's coordinates is intersected with the voxel space. If the ray does not intersect the voxel space, the model will not render to that screen pixel, which can be ignored.
405, when the ray intersects the voxel space, the GPU thread traverses the Sparse Voxel Octree structure downward until it reaches a leaf node, or terminates the traversal early when the node's spatial footprint is smaller than the pixel size, obtaining the voxel data contained in that node.
406, the color value of the screen pixel is calculated from the voxel data acquired in step 405 together with the illumination model.
407, the depth of the voxel position at the ray's intersection node is depth-tested against the depth buffer. When the depth test fails, the color value of the screen pixel is discarded; otherwise, the color value is written into the screen pixel. A CPU-style sketch of this per-pixel flow follows.
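The sketch below strings steps 401-407 together for a single pixel; traverseSvo and shade are placeholder stand-ins (the real versions descend the octree with early termination and evaluate the preset illumination model), so only the control flow, not the traversal itself, is shown.

    #include <cstdint>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct VoxelPayload { uint8_t r, g, b, a; int8_t nx, ny, nz; uint8_t material; };
    struct Hit { bool valid; VoxelPayload voxel; float depth; };

    // Placeholder for step 405: a real version walks the SVO along the ray and
    // can stop early once a node's footprint shrinks below one pixel.
    Hit traverseSvo(const Vec3& /*origin*/, const Vec3& /*dir*/) { return { false, {}, 0.0f }; }

    // Placeholder for step 406: evaluate the preset illumination model.
    Vec3 shade(const VoxelPayload& v) { return { v.r / 255.0f, v.g / 255.0f, v.b / 255.0f }; }

    void shadePixel(int x, int y, int screenW,
                    const std::vector<uint8_t>& stencil,
                    std::vector<float>& depthBuf, std::vector<Vec3>& colorBuf,
                    const Vec3& camPos, const Vec3& rayDir) {
        int idx = y * screenW + x;
        if (!stencil[idx]) return;              // 402: pixel was not marked by the culling stage
        Hit h = traverseSvo(camPos, rayDir);    // 403-405: cast the ray into the voxel space
        if (!h.valid) return;                   // 404: no intersection, nothing to draw here
        if (h.depth >= depthBuf[idx]) return;   // 407: depth test against the depth buffer fails
        depthBuf[idx] = h.depth;
        colorBuf[idx] = shade(h.voxel);         // 406: write the lit color into the pixel
    }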
Specifically, using the Stencil Texture shown in fig. 6, the result of voxel rendering is shown in fig. 8; combining the voxel rendering result with the rasterization rendering result shown in fig. 5 finally yields the rendering effect of fig. 9.
It can be seen that, on the premise of guaranteeing rendering quality, the rendering framework provided by the embodiment of the invention reduces the face count of the triangular patches submitted to rasterization rendering through the multiple types of culling of stage 104, while the voxel rendering of stage 106 is adopted for the small triangular patches, avoiding a performance bottleneck and improving rendering efficiency maximally.
In summary, the embodiment of the invention combines the advantages of GPU Driven rendering and voxel rendering, and thus has at least the following advantages:
1) Patch-level culling. Through GPU Driven techniques, view-frustum culling, back-face culling, small-triangle culling, and the like are performed at the patch level on the GPU using compute shaders.
2) Reduced communication between the GPU and the CPU. According to the culling results, a DrawCommandList is built directly in a GPU Buffer for submitting drawing commands; the whole process needs no CPU participation, greatly improving draw-call submission efficiency.
3) Voxel rendering replaces the rendering of small triangular patches. Rendering small triangle patches has a large impact on rendering efficiency; for the pixels they cover, the RayCasting algorithm samples voxel data to obtain the rendering result, avoiding the performance bottleneck such patches would otherwise cause.
4) Rasterization or voxel rendering is chosen adaptively according to the pixel size covered by a triangular patch. Because the structural characteristics of the SVO preserve voxel data Mipmapped at different levels, voxel rendering can sample across levels and transition smoothly; unlike LOD techniques, no abrupt visual popping occurs. A minimal per-cluster sketch of this adaptive decision follows this list.
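The sketch below illustrates the per-cluster routing of advantage 4 and the Draw Command List of advantage 2 in C++. The Cluster record, camera parameters, and pixel threshold are illustrative assumptions; the DrawCmd struct mirrors the layout of an indexed indirect draw command such as Vulkan's VkDrawIndexedIndirectCommand, and on the GPU the list would be appended by the compute shader inside a GPU Buffer rather than built on the CPU:

#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative per-cluster record produced by the division step.
struct Cluster {
    float    center[3];                  // bounding-box center
    float    radius;                     // bounding-sphere radius
    unsigned firstIndex, indexCount;     // range in the shared index buffer
};

// One indexed indirect draw command (layout as in Vulkan's
// VkDrawIndexedIndirectCommand).
struct DrawCmd {
    unsigned indexCount, instanceCount, firstIndex;
    int      vertexOffset;
    unsigned firstInstance;
};

// Route each cluster by its projected size: large on screen -> rasterize,
// small on screen -> sample the SVO with RayCasting instead.
void classify(const std::vector<Cluster>& clusters, const float camPos[3],
              float fovY, int screenHeight, float pixelThreshold,
              std::vector<DrawCmd>& rasterList,
              std::vector<unsigned>& voxelList) {
    float pixelsPerUnit = screenHeight / (2.0f * std::tan(fovY * 0.5f));
    for (std::size_t i = 0; i < clusters.size(); ++i) {
        const Cluster& c = clusters[i];
        float d2 = 0.0f;
        for (int a = 0; a < 3; ++a) {
            float d = c.center[a] - camPos[a];
            d2 += d * d;
        }
        // Approximate projected diameter of the bounding sphere in pixels.
        float projPixels = (2.0f * c.radius / std::sqrt(d2)) * pixelsPerUnit;
        if (projPixels >= pixelThreshold)
            rasterList.push_back({c.indexCount, 1u, c.firstIndex, 0, 0u});
        else
            voxelList.push_back(static_cast<unsigned>(i));
    }
}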
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts, but those skilled in the art should understand that the embodiments of the invention are not limited by the described order of acts, as some steps may be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts involved are not necessarily required by the embodiments of the invention.
Referring to fig. 10, a block diagram of an embodiment of a scene model rendering device according to the present invention is shown; the device may specifically include the following modules:
an acquisition module 1001 for acquiring a scene model composed of a plurality of triangular patches;
the division module 1002 is configured to divide the scene model according to the attribute information of the triangular patch to obtain a cluster array; the cluster array comprises a plurality of cluster objects, wherein the cluster objects are formed by a plurality of triangular patches;
a culling module 1003, configured to determine a first target cluster array and a second target cluster array according to screen projection sizes corresponding to the plurality of cluster objects;
a rasterization rendering module 1004, configured to perform rasterization rendering on the first target cluster array;
and a voxel rendering module 1005, configured to perform voxel rendering on the second target cluster array.
In an exemplary embodiment, the division module 1002 is configured to: divide the scene model into a plurality of cluster objects according to a specified number of triangular patches, the cluster objects forming a cluster array, where a cluster object is formed by triangular patches with similar spatial positions and consistent normal directions; obtain the maximum and minimum vertex positions of the triangular patches in the cluster object, and form the cluster object's bounding box from them; obtain the normal directions of the triangular patches in the cluster object, and form the cluster object's Cone structure from them, the Cone structure defining a conical range in space; voxelize the cluster array to obtain voxel data of the scene model; and store the voxel data in a sparse voxel octree (SVO) structure, the voxel data being stored in the leaf nodes of the SVO tree.
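The bounding box and Cone construction described above can be sketched as follows in C++. The Tri and ClusterBounds types are illustrative; the cone is derived by averaging the face normals (which does not vanish, since the cluster's normals are consistent by construction) and widening the half-angle until every normal fits:

#include <algorithm>
#include <cmath>
#include <vector>

struct Tri { float v[3][3]; float n[3]; };  // three vertices + unit face normal

// Illustrative cluster bounds: AABB from vertex min/max plus a normal Cone
// (axis and cosine of the half-angle) covering all face normals.
struct ClusterBounds {
    float lo[3], hi[3];
    float axis[3];
    float cosAngle;
};

ClusterBounds buildBounds(const std::vector<Tri>& tris) {
    ClusterBounds b = {{ 1e30f,  1e30f,  1e30f},
                       {-1e30f, -1e30f, -1e30f},
                       {0, 0, 0}, 1.0f};
    for (const Tri& t : tris) {
        for (int v = 0; v < 3; ++v)              // AABB from vertex extremes
            for (int a = 0; a < 3; ++a) {
                b.lo[a] = std::min(b.lo[a], t.v[v][a]);
                b.hi[a] = std::max(b.hi[a], t.v[v][a]);
            }
        for (int a = 0; a < 3; ++a) b.axis[a] += t.n[a];  // accumulate normals
    }
    float len = std::sqrt(b.axis[0] * b.axis[0] + b.axis[1] * b.axis[1]
                        + b.axis[2] * b.axis[2]);
    for (int a = 0; a < 3; ++a) b.axis[a] /= len;         // cone axis
    for (const Tri& t : tris)                    // widen until all normals fit
        b.cosAngle = std::min(b.cosAngle, b.axis[0] * t.n[0]
                            + b.axis[1] * t.n[1] + b.axis[2] * t.n[2]);
    return b;
}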
In an exemplary embodiment, the division module 1002 is further configured to: perform a Mipmap operation on the SVO tree storing the voxel data, so that both leaf nodes and non-leaf nodes in the SVO tree store voxel data.
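The Mipmap operation can be sketched as a bottom-up averaging pass over the octree, reusing the illustrative SvoNode type from the traversal sketch above; the plain color average is an assumption, as the patent does not fix a particular filter:

#include <cstdint>

// Bottom-up Mipmap over the SVO: average each node's children so that
// non-leaf nodes also carry voxel data (coarser levels for distant sampling).
void mipmap(SvoNode* pool, int32_t node) {
    SvoNode& n = pool[node];
    float sum[3] = {0, 0, 0};
    int filled = 0;
    for (int i = 0; i < 8; ++i) {
        if (n.child[i] < 0) continue;
        mipmap(pool, n.child[i]);                // children first (post-order)
        const SvoNode& c = pool[n.child[i]];
        if (!c.hasVoxel) continue;
        for (int a = 0; a < 3; ++a) sum[a] += c.color[a];
        ++filled;
    }
    if (filled > 0) {                            // leaves keep their own data
        for (int a = 0; a < 3; ++a) n.color[a] = sum[a] / filled;
        n.hasVoxel = true;
    }
}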
In an exemplary embodiment, the culling module 1003 is configured to: determine the screen projection size of the cluster object's bounding box in screen space; perform patch culling on the cluster object according to the screen projection size to obtain a first target cluster array, the patch culling removing triangular patches whose screen projection size is smaller than a preset threshold; and take the cluster objects remaining after the patch culling as a second target cluster array.
In an exemplary embodiment, the culling module 1003 is configured to perform back-face culling on the cluster object according to the Cone structure of the cluster object, and to perform view-frustum culling on the cluster object according to the bounding box of the cluster object.
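A conservative form of this Cone-based back-face test is sketched below, reusing the illustrative ClusterBounds type: when the view direction lies within 90 degrees minus the cone half-angle of the cone axis, every face normal provably points away from the camera and the whole cluster can be dropped. Testing from the bounding-box center is a simplification; a robust version also accounts for the box extent:

#include <algorithm>
#include <cmath>

bool backfaceCulled(const ClusterBounds& b, const float camPos[3]) {
    float view[3];
    float len2 = 0.0f;
    for (int a = 0; a < 3; ++a) {
        view[a] = 0.5f * (b.lo[a] + b.hi[a]) - camPos[a]; // camera -> cluster
        len2 += view[a] * view[a];
    }
    float invLen = 1.0f / std::sqrt(len2);
    float d = 0.0f;
    for (int a = 0; a < 3; ++a) d += view[a] * invLen * b.axis[a];
    // cosAngle = cos(theta); cull when dot(view, axis) > sin(theta), i.e. the
    // angle between view direction and cone axis is below 90 - theta degrees.
    float sinTheta = std::sqrt(std::max(0.0f, 1.0f - b.cosAngle * b.cosAngle));
    return d > sinTheta;
}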
Optionally, the culling module 1003 is configured to record the screen pixel coordinates corresponding to the second target cluster array in a stencil texture.
In an exemplary embodiment, the voxel rendering module 1005 is configured to: obtain the screen pixel coordinates corresponding to each screen pixel; determine that a screen pixel passes the stencil test if its coordinates match the screen pixel coordinates recorded in the stencil texture; emit a ray along the line-of-sight direction from the screen pixel coordinates of each screen pixel passing the stencil test; when an intersection point exists between the ray and the voxel space, traverse the SVO tree downward from the corresponding cluster object to obtain target voxel data; calculate the color value of the screen pixel according to the target voxel data and a preset illumination model; and fill the color value into the screen pixel.
In an exemplary embodiment, the voxel rendering module 1005 is configured to obtain the depth of the voxel at the intersection point, perform a depth test on the depth, and fill the color value into the screen pixel when the depth passes the depth test.
In an exemplary embodiment, the apparatus is further configured to store the scene model subjected to rasterization rendering or voxel rendering as a binary file.
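A minimal sketch of such binary persistence follows, reusing the illustrative ClusterBounds and SvoNode types from the sketches above; the file layout is an assumed choice, not one defined by the patent, and a real exporter would also version the format and avoid raw-struct writes across platforms:

#include <cstdint>
#include <fstream>
#include <vector>

// Persist the preprocessed scene (cluster array + SVO node pool) so later
// runs can skip the division and voxelization work.
bool saveScene(const char* path,
               const std::vector<ClusterBounds>& clusters,
               const std::vector<SvoNode>& nodes) {
    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    std::uint64_t nc = clusters.size(), nn = nodes.size();
    out.write(reinterpret_cast<const char*>(&nc), sizeof nc);
    out.write(reinterpret_cast<const char*>(clusters.data()),
              static_cast<std::streamsize>(nc * sizeof(ClusterBounds)));
    out.write(reinterpret_cast<const char*>(&nn), sizeof nn);
    out.write(reinterpret_cast<const char*>(nodes.data()),
              static_cast<std::streamsize>(nn * sizeof(SvoNode)));
    return static_cast<bool>(out);
}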
In the embodiment of the invention, when performing model rendering, a scene model composed of a plurality of triangular patches is acquired; the scene model is divided into a cluster array according to the attribute information of the triangular patches; a first target cluster array and a second target cluster array are determined according to the screen projection sizes corresponding to the plurality of cluster objects; rasterization rendering is then performed on the first target cluster array, and voxel rendering on the second target cluster array. The embodiment thereby achieves triangular-patch-level division of the scene model and, when rendering the divided triangular patches, adaptively decides between rasterization rendering and voxel rendering, maximally improving rendering efficiency.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
The embodiment of the invention discloses an electronic device, which comprises a processor, a memory and a computer program stored on the memory and capable of running on the processor, wherein the computer program realizes the steps of the scene model rendering method embodiment when being executed by the processor.
The embodiment of the invention discloses a computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and the computer program realizes the steps of the scene model rendering method embodiment when being executed by a processor.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or terminal device that comprises the element.
The scene model rendering method, device, electronic equipment, and storage medium provided by the invention have been described in detail above; specific examples were applied herein to illustrate the principles and implementation of the invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may vary the specific embodiments and application scope according to the ideas of the present invention; accordingly, the content of this specification should not be construed as limiting the present invention.

Claims (12)

1. A method of rendering a scene model, the method comprising:
acquiring a scene model formed by a plurality of triangular patches;
dividing the scene model according to the attribute information of the triangular patches to obtain a cluster array; the cluster array comprises a plurality of cluster objects, wherein the cluster objects are formed by a plurality of triangular patches;
determining a first target cluster array and a second target cluster array according to screen projection sizes corresponding to the plurality of cluster objects; filtering out the triangular patches which do not contribute to the rendered picture when determining the first target cluster array and the second target cluster array;
performing rasterization rendering on the first target cluster array;
and carrying out voxel rendering on the second target cluster array.
2. The method according to claim 1, wherein the dividing the scene model according to the attribute information of the triangular patch to obtain a cluster array includes:
dividing the scene model into a plurality of cluster objects according to a specified number of triangular patches, wherein the cluster objects form a cluster array; the cluster objects are formed by triangular patches with similar spatial positions and consistent normal directions;
obtaining the maximum value and the minimum value of the vertex position of the triangular patch in the cluster object, and forming a bounding box of the cluster object according to the maximum value and the minimum value;
acquiring a normal direction of a triangular patch in the cluster object, and forming a Cone structure of the cluster object according to the normal direction; the Cone structure is used for defining a conical range in a space;
voxelizing the cluster array to obtain voxel data of the scene model;
and storing the voxel data in a sparse voxel octree (SVO) structure, wherein the voxel data is stored in leaf nodes of the SVO tree.
3. The method of claim 2, further comprising, after the storing the voxel data in a sparse voxel octree (SVO) structure with the voxel data stored in leaf nodes of the SVO tree:
performing a Mipmap operation on the SVO tree storing the voxel data, so that both leaf nodes and non-leaf nodes in the SVO tree store voxel data.
4. The method of claim 2, wherein determining the first target cluster array and the second target cluster array according to the screen projection sizes corresponding to the plurality of cluster objects comprises:
determining a screen projection size of the bounding box of the cluster object in a screen space;
performing patch culling on the cluster object according to the screen projection size to obtain a first target cluster array, the patch culling removing triangular patches whose screen projection size is smaller than a preset threshold;
and taking the cluster objects remaining after the patch culling as a second target cluster array.
5. The method of claim 4, wherein before the performing patch culling on the cluster object according to the screen projection size to obtain the first target cluster array, the method further comprises:
performing back-face culling on the cluster object according to the Cone structure of the cluster object, and performing view-frustum culling on the cluster object according to the bounding box of the cluster object.
6. The method as recited in claim 4, further comprising:
recording the screen pixel coordinates corresponding to the second target cluster array in a stencil texture.
7. The method of claim 6, wherein performing voxel rendering on the second target cluster array comprises:
acquiring screen pixel coordinates corresponding to each screen pixel;
if the screen pixel coordinates match screen pixel coordinates recorded in the stencil texture, determining that the screen pixel passes the stencil test;
emitting a ray along the line-of-sight direction from the screen pixel coordinates of each screen pixel passing the stencil test;
when an intersection point exists between the ray and the voxel space, traversing the SVO tree downward from the corresponding cluster object to obtain target voxel data;
calculating the color value of the screen pixel according to the target voxel data and a preset illumination model;
and filling the color value into the screen pixel.
8. The method of claim 7, wherein the filling the color value into the screen pixel comprises:
acquiring the depth of the voxel at the intersection point, and performing a depth test on the depth;
when the depth passes the depth test, filling the color value into the screen pixel.
9. The method according to claim 1, wherein the method further comprises:
storing the scene model subjected to rasterization rendering or voxel rendering as a binary file.
10. A scene model rendering device, the device comprising:
the acquisition module is used for acquiring a scene model formed by a plurality of triangular patches;
the division module is used for dividing the scene model according to the attribute information of the triangular patches to obtain a cluster array; the cluster array comprises a plurality of cluster objects, wherein the cluster objects are formed by a plurality of triangular patches;
the culling module is used for determining a first target cluster array and a second target cluster array according to the screen projection sizes corresponding to the plurality of cluster objects; filtering out the triangular patches which do not contribute to the rendered picture when determining the first target cluster array and the second target cluster array;
the rasterization rendering module is used for rasterization rendering of the first target cluster array;
and the voxel rendering module is used for performing voxel rendering on the second target cluster array.
11. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program implementing the steps of the scene model rendering method according to any of claims 1 to 9 when executed by the processor.
12. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the scene model rendering method according to any of claims 1 to 9.