CN110443893B - Large-scale building scene rendering acceleration method, system, device and storage medium - Google Patents

Large-scale building scene rendering acceleration method, system, device and storage medium

Info

Publication number: CN110443893B
Application number: CN201910710424.XA
Authority: CN (China)
Prior art keywords: rendering, scene, data, batch, primitives
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110443893A (publication of the application)
Inventors: 戴钎, 颜世增, 杨立福
Current Assignee: Glodon Co Ltd
Original Assignee: Glodon Co Ltd
Application filed by Glodon Co Ltd; priority to CN201910710424.XA

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T17/205: Re-meshing
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention belongs to the field of computer graphics in engineering construction and relates to three-dimensional building graphics rendering, a core part of BIM (Building Information Modeling) technology; in particular, it provides a large-scale building scene rendering acceleration method, system, device and storage medium. Existing rendering engine systems, architectures and algorithms do not adapt or scale well when facing larger models, and have many flaws and defects when displaying large building engineering three-dimensional models. The invention is based on a spatial index of inner/outer occlusion relationships (OOSI). Because the preprocessing is static, the huge cost of viewpoint-dependent dynamic preprocessing is completely avoided when browsing the whole building; at the same time, because outer-layer primitives are given higher visual weight, rendering efficiency is optimized where it is visible, and precious rendering resources are not wasted on primitives of low visual weight.

Description

Large-scale building scene rendering acceleration method, system, device and storage medium
Technical Field
The invention belongs to the field of computer graphics in engineering construction and relates to three-dimensional building graphics rendering, a core part of BIM (Building Information Modeling) technology; in particular, it provides a large-scale building scene rendering acceleration method, system, device and storage medium.
Background
BIM (Building Information Modeling) has been one of the core ideas in the field of building informatization in recent years; its data basis is the three-dimensional information model of a building. Compared with traditional two-dimensional design and drawing, BIM technology makes comprehensive use of three-dimensional graphics, taking the three-dimensional geometry of a building (both individual components and the building as a whole) as a carrier and attaching various building information parameters to it to form a building information model, which then supports full-life-cycle management of the building and even of individual components. It can be said that three-dimensional graphics are the muscle and skin of BIM technology: intuitive three-dimensional graphical expression and processing effectively help key BIM applications land and realize their value, such as building model visualization, collision detection and 5D virtual construction. These applications are hard to imagine in a world without graphics, or with two-dimensional graphics only.
For the display of three-dimensional information models, the industries or application fields currently most similar to construction engineering are the game industry and Geographic Information Systems (GIS). A conventional rendering flow of the related art will be described with reference to fig. 1.
1.1 construction scene and spatial index
In a three-dimensional graphics rendering system, the objects to be rendered are generally called primitives, and the collection of primitives is called a scene. While constructing a scene, a spatial index is generally established for the scene primitives; common spatial indexes include the BSP tree, the BVH and the octree. A spatial index divides the three-dimensional space occupied by the scene primitives into index units according to some rule, fills each scene primitive into the index units covering its spatial position, and lets subsequent traversals exploit the subdivision structure and traversal rules of the index, so that query and access of scene primitives are accelerated.
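As a concrete illustration (not code from the patent), the fill-and-query behavior of such a spatial index can be sketched with a uniform grid; the class and its data layout are hypothetical, and real engines would typically use an octree, BVH or BSP tree as noted above:

```python
from collections import defaultdict

class UniformGridIndex:
    """Minimal uniform-grid spatial index over primitive AABBs (sketch)."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)   # (ix, iy, iz) -> [primitive ids]

    def _cell_range(self, aabb_min, aabb_max):
        lo = [int(c // self.cell_size) for c in aabb_min]
        hi = [int(c // self.cell_size) for c in aabb_max]
        return lo, hi

    def insert(self, prim_id, aabb_min, aabb_max):
        # Fill the primitive into every index unit its bounding box overlaps.
        lo, hi = self._cell_range(aabb_min, aabb_max)
        for ix in range(lo[0], hi[0] + 1):
            for iy in range(lo[1], hi[1] + 1):
                for iz in range(lo[2], hi[2] + 1):
                    self.cells[(ix, iy, iz)].append(prim_id)

    def query(self, aabb_min, aabb_max):
        # Visit only the cells overlapping the query box, skipping the rest.
        lo, hi = self._cell_range(aabb_min, aabb_max)
        found = set()
        for ix in range(lo[0], hi[0] + 1):
            for iy in range(lo[1], hi[1] + 1):
                for iz in range(lo[2], hi[2] + 1):
                    found.update(self.cells.get((ix, iy, iz), ()))
        return found
```

The acceleration comes from the query visiting only cells that overlap the query region instead of testing every primitive in the scene.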
1.2 static batch mixing (optional)
When the rendering system renders a scene, the triangle mesh data and rendering-state data of the scene primitives are pushed to the graphics card. In general each primitive is pushed once, so as the scene grows the number of pushes grows and rendering efficiency drops. To accelerate rendering, the triangle meshes of scene primitives that share the same rendering-state data are merged and pushed to the graphics card together, reducing the number of pushes and thus optimizing rendering efficiency.
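The merge can be sketched as follows, under the assumption (for illustration only) that each primitive carries a hashable render-state key and a flat vertex list:

```python
from collections import defaultdict

def static_batch(primitives):
    """Merge triangle meshes of primitives that share a render state.

    `primitives` is a list of (render_state, vertices) pairs, where
    render_state is any hashable key (material, texture, ...).
    Hypothetical data layout chosen for illustration.
    """
    batches = defaultdict(list)
    for state, vertices in primitives:
        batches[state].extend(vertices)   # one merged push per state
    return dict(batches)
```

Three primitives sharing two render states thus collapse into two pushes instead of three.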
1.3 effect baking (optional)
For building scenes with a large number of primitives there can still be, in certain usage situations, a requirement for advanced rendering effects; obviously, adding such effects slows rendering severely and can make the system unusable. To circumvent this problem, this step spends a relatively long time when the scene is rendered for the first time: the scene is rendered once in a very time-consuming but high-quality way, and results such as lighting effects and shadow effects are baked into the surface textures (pictures) of the primitives. Then, as long as the scene has not changed, these effects need not be rendered again on subsequent renderings of the scene; the baked texture is rendered directly instead, much like substituting a photograph for a live scene.
1.4 initialization rendering scene data (full frame)
Because the number of primitives in a building scene is relatively large, the whole scene is generally rendered progressively in batches. This step is the starting action of rendering the whole scene, and handles a number of initialization tasks, such as initializing the rendering data structures and resetting timers.
1.5 incoming view point, viewport information
This is a conventional step of the rendering engine: preparing the viewpoint and viewport data that the subsequent rendering pipeline uses for data transformation and processing.
1.6 traversing spatial index
The main function of the spatial index created in step 1.1 is to accelerate spatial traversal and query. Each index unit covers part of the spatial subdivision, so given the viewpoint and viewport information from step 1.5, index units that lie outside the current visual range can be identified quickly and skipped without being visited, which achieves the acceleration.
1.7 Loading scene primitive rendering data (optional)
Rendering data of scene primitives is generally stored in the process memory of the rendering system, but sometimes, to save memory, the rendering data is migrated to external storage (disk). If such a policy is adopted, the rendering data on disk must be loaded back into process memory at this point; if not, this step is not required.
1.8 occlusion culling (optional)
This step, together with steps 1.9 and 1.10, screens the scene primitives for the subsequent rendering steps. If an occlusion-culling strategy is adopted, the occlusion relationships between scene primitives at the given viewpoint are computed. For example, a viewpoint standing outside a building sees only the exterior walls and the individual primitives visible through windows; the other primitives are blocked from view and are therefore not selected for submission to the subsequent rendering steps.
1.9 View frustum culling
This is culling by field of view, analogous to a camera taking a photograph: the viewpoint is the camera position, and factors such as the lens opening angle limit the field of view the camera can observe (a wide-angle lens, for example, covers a wider range than an ordinary lens). Whatever falls outside the lens is not selected and not submitted to the subsequent rendering steps.
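A minimal sketch of this test, assuming the frustum is given as inward-facing planes (a, b, c, d) and each primitive is represented by its axis-aligned bounding box; this is a standard textbook formulation, not code from the patent:

```python
def aabb_outside_plane(aabb_min, aabb_max, plane):
    """True if the AABB lies entirely on the negative side of plane
    (a, b, c, d), with 'inside' defined by a*x + b*y + c*z + d >= 0."""
    a, b, c, d = plane
    # Pick the AABB corner farthest along the plane normal (the "p-vertex").
    px = aabb_max[0] if a >= 0 else aabb_min[0]
    py = aabb_max[1] if b >= 0 else aabb_min[1]
    pz = aabb_max[2] if c >= 0 else aabb_min[2]
    return a * px + b * py + c * pz + d < 0

def frustum_cull(aabb_min, aabb_max, planes):
    """False (culled) if the box is fully outside any frustum plane;
    True means potentially visible."""
    return not any(aabb_outside_plane(aabb_min, aabb_max, p)
                   for p in planes)
```

A box that survives every plane test is submitted to the subsequent rendering steps; a box fully outside any single plane is rejected immediately.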
1.10 calculation selection LOD (optional)
Individual scene primitives may be modeled in great detail, resulting in very large rendering data, e.g. a very large number of triangles in the mesh, which slows rendering significantly. LOD (Level of Detail) technology generates versions of a primitive's rendering data at different detail granularities and dynamically selects the version corresponding to the distance from the viewpoint to the primitive when rendering: when the distance is small, high-detail data is used to preserve visual realism; when the distance is large, low-detail data is used to save rendering cost and improve efficiency. Further, at very large distances some very small scene primitives may be judged invisible and skipped entirely.
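A distance-based LOD selection of this kind might look as follows; the thresholds and level names are illustrative, not taken from the patent:

```python
def select_lod(distance, lod_ranges):
    """Pick an LOD level from (max_distance, level) pairs sorted by
    distance; None means the primitive is treated as invisible."""
    for max_dist, level in lod_ranges:
        if distance <= max_dist:
            return level
    return None  # beyond all ranges: skip the primitive entirely

# Hypothetical detail bands: near -> high detail, far -> low detail.
LODS = [(10.0, "high"), (50.0, "medium"), (200.0, "low")]
```

At 5 units the high-detail mesh is used; at 30 units the medium one; beyond 200 units the primitive is skipped.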
1.11 dynamic Pre-processing rendered data
By traversing the scene we obtain the rendering data to be rendered, but the order in which it is rendered requires dynamic preprocessing, which typically comprises several steps: 1. solid scene primitives are sorted by their spatial occupancy ratio, and primitives with a large occupancy are rendered first; 2. semi-transparent scene primitives are rendered from far to near by viewing distance, as required by the blending operation of the transparency algorithm; 3. other orderings are applied for rendering data with specific rendering-order requirements, and so on.
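The first two ordering rules can be sketched as below, with hypothetical dictionary keys standing in for the real rendering data:

```python
def order_render_data(opaque, transparent):
    """Order rendering data before submission, per the two rules of
    step 1.11: opaque primitives by descending spatial occupancy,
    semi-transparent ones back-to-front by viewing distance."""
    first = sorted(opaque, key=lambda p: p["occupancy"], reverse=True)
    second = sorted(transparent, key=lambda p: p["view_distance"],
                    reverse=True)   # far to near, for correct blending
    return first + second           # opaque pass, then transparent pass
```

Rendering opaque geometry first and blending transparent geometry back-to-front is the standard way to keep alpha blending correct.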
1.12 rendering first batch of rendered data (first frame)
1.13 drawing subsequent batches (subsequent frames, optional)
1.14 end of Current full frame rendering
Steps 1.12, 1.13 and 1.14 are the steps that actually send rendering data to the graphics card for rendering. The whole scene is drawn progressively in batches, and each batch is called a batch frame. Step 1.12 draws the first batch (first frame); step 1.13 draws the subsequent batches (subsequent frames) while the scene still has batches left to draw; step 1.14 is the rendering end action, which resets some rendering-related variables and prepares for rendering the next complete frame.
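The progressive batch-frame loop of steps 1.12-1.14 can be sketched as follows; `draw_batch` is a placeholder standing in for the actual submission to the graphics card:

```python
def render_progressively(render_items, batch_size, draw_batch):
    """Split the scene's render items into batch frames and draw them
    one by one; returns the number of batch frames drawn."""
    frames = 0
    for start in range(0, len(render_items), batch_size):
        draw_batch(render_items[start:start + batch_size])
        frames += 1                 # each slice is one batch frame
    return frames                   # end of the current full frame
```

In a real engine the loop would yield between batches so the application stays interactive while the full frame completes.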
1.15 scene Change
After step 1.14 the entire scene has been drawn, so step 1.15 lists several cases that trigger a scene redraw operation, i.e. rendering the entire scene from scratch. Three redraw triggers due to changes in the scene data itself are listed: 1. overall: the overall scene data changes (primitives are added, deleted, etc.); 2. visibility: the visibility of scene primitives changes; 3. rendering state: color, texture map, etc. change. In cases 1 and 2 the flow jumps to step 1.1 to start redrawing; in case 3 it jumps to step 1.2.
1.16 viewpoint Change
Similar to step 1.15, this illustrates that when the viewpoint changes, the flow must jump to step 1.4 to start redrawing the entire scene.
The game industry focuses on the realism, aesthetics and interactive experience of the displayed model. A game scene is usually edited and created with a professional editor; the scene as a whole is relatively static and its scale is limited. Apart from dynamically changing elements such as characters, lighting and weather, the main body of the scene is prefabricated in the engine's editor and statically optimized there, for example with static batch merging (step 1.2), effect baking (step 1.3), LOD computation and selection (step 1.10) and dynamic preprocessing of rendering data (step 1.11), so that rendering is accelerated to a high real-time frame rate. For business-driven BIM scenes, however, the user needs to integrate and filter models at any time, which makes the whole scene highly variable, so the optimizations common in the game field are of little practical use in use cases like BIM model browsing. The scene changes listed in step 1.15 (overall scene changes, visibility changes of individual primitives, and rendering-state changes) occur very frequently in BIM applications, whereas in a game almost nothing changes except a few frequently changing models such as characters. In addition, a game scene generally has strong locality: the number of primitives in a single scene is very limited (usually no more than a few thousand), so all primitives can be drawn within one complete frame while maintaining the frame rate. For BIM models, whose primitive counts reach the hundreds of thousands, this is impractical except for very small scenes.
Geographic Information Systems (GIS) also face the difficulty of displaying large volumes of model data. However, although GIS data volumes are huge, the data consists mainly of regular triangle meshes and bitmaps, i.e. simple three-dimensional data plus two-dimensional picture data, so multi-level pyramid data can generally be constructed, and data of the appropriate precision can be fetched and displayed progressively according to the viewpoint. As in Google Earth, the model is primarily two-dimensional with three-dimensional elements as a supplement, and the requirements on overall display precision and model detail are not high. Rendering engines in the GIS industry generally organize multi-level detail data in pyramid form and dynamically load the detail level corresponding to the viewing distance, which delays the presentation of the data to some extent; at the same time, the richness of three-dimensional detail in such models cannot be compared with the display requirements of a high-precision BIM model.
The core technique of GIS systems is computing and selecting LOD (step 1.10), but for a building the model volume and model count after LOD processing are still huge. At the same time, excessive LOD processing not only further expands the data, but also makes the LOD coefficients difficult to tune accurately, causing display deformation and loss of geometry; this is rooted in the complexity and diversity of building component shapes. The oversized-model problem therefore cannot be solved by relying on LOD alone.
Systems used for both BIM and GIS, such as OSG, adopt scene spatial indexes like the octree, BVH and BSP tree (step 1.1) for internal rendering. During local observation, such a spatial index supports good culling of invisible primitives (those outside the view) through view frustum culling (step 1.9), reducing the rendering scale and thus accelerating rendering. This is a preprocessing mechanism that must be carried out for every drawn frame, and it provides no acceleration when the whole BIM model is browsed.
Another type of rendering acceleration mechanism, occlusion culling (step 1.8), is widely used in various rendering engines: primitives occluded from the viewpoint by other primitives are culled via occlusion-relationship queries between objects, reducing the rendering scale and accelerating rendering. This approach has a significant problem, however: since the occlusion query depends on the viewpoint, the preprocessing must be repeated every time the scene is drawn, and the time spent on occlusion-query preprocessing grows considerably, eating into the time available for actually drawing the scene.
BIM requires integrating the design models of multiple disciplines (architecture, structure, electromechanical equipment, etc.) from the design stage for overall management during construction. On top of the requirement for model accuracy, a dramatic expansion of model scale is to be expected, which is usually difficult to address with the optimization strategies of games or GIS alone, or with the rendering strategies of general design-class software.
One difficulty plaguing model rendering in the BIM industry at present is the sheer volume of the models. Rendering at the construction stage in particular must accept the model data of every design source, covering all floors and all disciplines, and the resulting volume is far beyond what a general three-dimensional rendering engine can bear. In addition, for BIM applications the accuracy of the model is also very important: the precision of the display must be guaranteed, and relevant details of the model must not be lost when observed up close.
For large-scale rendering of building scenes, the design software, model browsing software and rendering engines of the BIM industry resemble general large-scale scene display technology. The current approach is based on spatial indexes with viewpoint-dependent dynamic sorting (preprocessing): the scene's spatial index is typically an octree, BVH or BSP tree (step 1.1); view frustum culling and sorting are performed as the viewpoint changes; and after this viewpoint- and index-based dynamic preprocessing of the whole scene, the scene is displayed progressively, yielding better rendering efficiency and experience. In general, the overall visual effect is improved by displaying objects that are closer to the viewpoint, or larger, before other scene primitives. However, this approach still suffers from two problems:
1. The perceived integrity of a building model is not guaranteed by near-and-large ordering alone. For example, when the viewpoint is switched to an axial observation position, most buildings being roughly rectangular, corner primitives are displayed first rather than the building as a whole; meanwhile, because of large primitives such as floor slabs, users see the slabs first, and a large number of incomplete gaps appear while the model is displayed interactively.
2. Both the viewing-distance and the size ordering must be redone for the full scene whenever the viewpoint changes, which inevitably increases the dynamic preprocessing time and shortens the rendering time budget.
In summary, present rendering engine systems, architectures and algorithms do not adapt or scale well when facing larger models, and have many flaws and defects when displaying large building engineering three-dimensional models.
Disclosure of Invention
Aiming at the defects of existing rendering systems, the invention provides a more scalable and more efficient method, system, device and storage medium for accelerating the rendering of large-scale building scenes.
The invention provides a large-scale building scene rendering acceleration method, which comprises the following steps:
S1, rendering preparation, which comprises the following two preparation tasks:
s1.1, rendering preparation before scene rendering, carried out in the main thread;
s1.2, static preprocessing carried out in a sub-thread concurrently with the main thread to obtain the first batch of rendering data, wherein during preprocessing the primitives of higher visual weight in the scene are screened out as the first batch of rendering data;
S2, scene rendering performed in the main thread, which comprises the following two cases:
s2.1, when the first batch of rendering data has not yet been obtained, scene rendering is performed directly;
s2.2, when the first batch of rendering data has been obtained, the scene primitives of the first batch of rendering data are rendered first, and then the scene primitives outside the first batch are rendered.
Optionally, in step S1.1, the rendering preparation comprises:
s1.1.1, constructing the scene and the spatial index;
s1.1.2, initializing the rendering scene data;
s1.1.3, passing in the viewpoint and viewport information.
Optionally, in step S1.2, the primitives of higher visual weight are screened out by means of a hierarchical nested grid index (OOSI). The hierarchical nested grid index is established according to the spatial hierarchical nesting relationship; the outer-layer primitives are extracted through it, and the scene primitives whose spatial occupancy ratio exceeds a certain threshold are screened out as the first batch of rendering data.
Optionally, the hierarchical nesting relationship of the space is determined by an occlusion relationship.
Optionally, the threshold is adjustable.
Optionally, constructing the hierarchical nested grid index (OOSI) includes the steps of:
s1.2.1, uniformly dividing the scene into N×N×N three-dimensional grid cells;
s1.2.2, filling the scene primitives into the three-dimensional grid cells according to their spatial extents;
s1.2.3, determining the hierarchy through the occlusion relationships among the three-dimensional grid cells;
s1.2.4, generating the hierarchical nested grid index OOSI.
Optionally, in step S1.2.1, all primitives in the scene are traversed and the minimum and maximum vertices are obtained to form the axis-aligned bounding box of the scene; the X, Y and Z dimensions of that box's cuboid space are uniformly subdivided, with granularity N in each dimension.
Optionally, in step S1.2.2, an intersection decision is made for each three-dimensional grid cell, and the intersecting scene primitives become the content indexed by that cell.
Optionally, the intersection decision uses either the triangle data of the scene primitive or the primitive's own axis-aligned bounding box: the former if accuracy is pursued, the latter if speed is pursued.
Optionally, in step S1.2.3, the hierarchy is determined through occlusion relationships along 6, 10 or 26 directions between the three-dimensional grid cells.
Optionally, in step S1.2.4, the hierarchical nested grid index OOSI is generated based on the inner/outer relationships and nesting depths of the three-dimensional grid cells.
Optionally, the outer-layer primitives are sorted by the size of their axis-aligned bounding boxes, i.e. by primitive spatial occupancy, and then screened.
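Under simplifying assumptions, steps S1.2.1 to S1.2.4 can be sketched as below. The occlusion test here uses only the 6 axis directions (the patent leaves 6, 10 or 26 directions open), primitives are represented by their axis-aligned bounding boxes, and all helper names are hypothetical:

```python
import itertools

def build_oosi(primitives, n):
    """Sketch of the OOSI construction: primitives are
    (id, aabb_min, aabb_max) tuples; a cell is 'outer' if a straight
    walk from it to a grid boundary along one of the 6 axis
    directions crosses no occupied cell, otherwise 'inner'."""
    # S1.2.1: scene AABB, uniformly subdivided N x N x N.
    mins = [min(p[1][a] for p in primitives) for a in range(3)]
    maxs = [max(p[2][a] for p in primitives) for a in range(3)]
    size = [(maxs[a] - mins[a]) / n or 1.0 for a in range(3)]

    def cells_of(lo, hi):
        rng = [range(max(0, int((lo[a] - mins[a]) // size[a])),
                     min(n - 1, int((hi[a] - mins[a]) // size[a])) + 1)
               for a in range(3)]
        return itertools.product(*rng)

    # S1.2.2: fill primitives into the cells their AABBs overlap.
    grid = {}
    for pid, lo, hi in primitives:
        for c in cells_of(lo, hi):
            grid.setdefault(c, []).append(pid)

    # S1.2.3: a cell is outer if it can "see" a grid boundary along
    # one of the 6 axis directions without crossing an occupied cell.
    def visible_out(c):
        for axis in range(3):
            for step in (-1, 1):
                cur, blocked = list(c), False
                while 0 <= cur[axis] + step < n:
                    cur[axis] += step
                    if tuple(cur) in grid:
                        blocked = True
                        break
                if not blocked:
                    return True
        return False

    # S1.2.4: the index splits occupied cells into outer/inner layers.
    outer = {c for c in grid if visible_out(c)}
    return {"outer": outer, "inner": set(grid) - outer, "grid": grid}
```

The outer layer then supplies the first-batch candidates, which would be sorted by bounding-box size and screened against the occupancy threshold as described above.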
Optionally, after the first batch of rendering data is obtained in step S1.2, static batch merging and/or effect baking is performed as needed.
Optionally, in step S2.1, performing scene rendering directly comprises the steps of:
s2.1.1, traversing the spatial index;
s2.1.2, view frustum culling;
s2.1.3, dynamically preprocessing the subsequent batches of rendering data;
s2.1.4, ending the current full-frame rendering.
Optionally, scene primitive rendering data is loaded and/or occlusion culling is performed as needed between steps S2.1.1 and S2.1.2.
Optionally, LOD is computed and selected as needed between steps S2.1.2 and S2.1.3.
Optionally, subsequent batches are drawn as needed between steps S2.1.3 and S2.1.4.
Optionally, in step S2.2, when the viewpoint enters the interior from outside, the hierarchical nested grid index OOSI is traversed again to dynamically construct and render all batches of rendering data. The construction algorithm is: (1) perform view frustum culling on the interior three-dimensional grid cells; (2) quickly sort the remaining cells dynamically by their distance to the viewpoint, primitives in cells closer to the viewpoint receiving higher rendering priority.
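The two-step construction algorithm for the interior view can be sketched as follows, with a caller-supplied predicate standing in for real view frustum culling; the data layout is hypothetical:

```python
def build_interior_batches(cells, viewpoint, in_frustum):
    """Frustum-cull the interior grid cells, then sort survivors by
    distance to the viewpoint so nearer cells render first.
    `cells` maps a cell center (x, y, z) to its primitive ids;
    `in_frustum` is a visibility predicate supplied by the caller."""
    def dist2(c):
        return sum((c[a] - viewpoint[a]) ** 2 for a in range(3))
    visible = [c for c in cells if in_frustum(c)]   # (1) culling
    visible.sort(key=dist2)                         # (2) near-first
    return [cells[c] for c in visible]
```

Sorting only the cells that survive culling keeps this per-viewpoint work small compared with re-sorting the whole scene.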
Optionally, when the scene is unchanged but only the viewpoint changes, the flow returns to step S1.1 and the method resumes from initializing the rendering scene data within the rendering preparation.
Optionally, the method is re-executed when the overall data or the visibility of the scene changes.
Optionally, when the rendering state of the scene changes, the first batch of rendering data is reconstructed.
The invention provides a large-scale building scene rendering acceleration system, which comprises:
a rendering preparation unit which, during preparation, on the one hand performs rendering preparation before scene rendering in the main thread, and on the other hand performs static preprocessing in a sub-thread concurrently with the main thread to obtain the first batch of rendering data, the primitives of higher visual weight in the scene being screened out as the first batch of rendering data during preprocessing;
a scene rendering unit which performs scene rendering directly when the first batch of rendering data has not yet been obtained, and which, when the first batch of rendering data has been obtained, first renders the scene primitives of the first batch of rendering data and then renders the scene primitives outside the first batch.
Optionally, the rendering preparation unit includes a main thread rendering preparation module and a sub thread static preprocessing module.
Optionally, the work completed by the main-thread rendering preparation module comprises, in order: constructing the scene and the spatial index, initializing the rendering scene data, and passing in the viewpoint and viewport information.
Optionally, the sub-thread static preprocessing module screens out the primitives of higher visual weight through a hierarchical nested grid index (OOSI); the index is established according to the spatial hierarchical nesting relationship, the outer-layer primitives are extracted through it, and the scene primitives whose spatial occupancy ratio exceeds a certain threshold are screened out as the first batch of rendering data.
Optionally, the hierarchical nesting relationship of the space is determined by an occlusion relationship.
Optionally, the threshold is adjustable.
Optionally, the sub-thread static preprocessing module constructs the hierarchical nested grid index (OOSI) by: uniformly dividing the scene into N×N×N three-dimensional grid cells; filling the scene primitives into the grid cells according to their spatial extents; determining the hierarchy through the occlusion relationships among the grid cells; and generating the hierarchical nested grid index OOSI.
Optionally, when the scene primitives are filled into the three-dimensional grid cells according to their spatial extents, an intersection decision is made for each grid cell, and the intersecting scene primitives become the content indexed by that cell.
Optionally, the intersection decision uses either the triangle data of the scene primitive or the primitive's own axis-aligned bounding box: the former if accuracy is pursued, the latter if speed is pursued.
Optionally, the scene rendering unit performs scene rendering directly, comprising the steps of: traversing the spatial index; view frustum culling; dynamically preprocessing the subsequent batches of rendering data; and ending the current full-frame rendering.
Optionally, while the first batch of rendering data is being drawn, if the viewpoint enters the interior from outside, the scene rendering unit traverses the hierarchical nested grid index OOSI again to dynamically construct and render all batches of rendering data. The construction algorithm is: (1) perform view frustum culling on the interior three-dimensional grid cells; (2) quickly sort the remaining cells dynamically by their distance to the viewpoint, primitives in cells closer to the viewpoint receiving higher rendering priority.
The invention also provides a large-scale building scene rendering acceleration device, which comprises a memory for storing computer readable instructions; and a processor for executing the computer readable instructions such that the processor when run implements any of the methods described above.
The present invention also provides a storage medium storing computer readable instructions that, when executed by a computer, cause the computer to perform any of the methods described above.
In the invention, the core of the whole rendering system is the static preprocessing that incorporates visual weight, based on a hierarchical nested index constructed from the occlusion relationships among scene primitives.
The beneficial effects are that:
1. A spatial index based on hierarchical nesting and inner/outer occlusion relationships (Occlusion Oriented Spatial Index, OOSI for short): static preprocessing completely avoids the large cost of viewpoint-driven dynamic preprocessing while browsing the whole building. At the same time, the outer layer of primitives is given higher visual weight, which optimizes perceived rendering efficiency: during interactive display (translation, zooming, rotation), the completeness and fidelity of the model are conveyed at very high speed, and precious rendering resources are not wasted on primitives with low visual weight (for example, primitives on the inside of the building with a low spatial ratio);
2. The fast computation of primitive partitioning based on the OOSI is accelerated through parallel computation; the core strategy uses multi-core, multi-threaded parallel computation on the CPU (Central Processing Unit) side;
3. When the viewpoint is inside the building and the OOSI computation has completed, the OOSI can still be used for accelerated culling and sorting. If the computation has not completed, the scene's own octree spatial index can be used for view frustum culling.
The foregoing is only an overview of the disclosed technical solution. So that the above and other objects, features and advantages of the present disclosure can be understood more clearly, and so that it can be implemented according to the contents of the description, a detailed description of the preferred embodiments is given below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of a conventional rendering flow in the prior art;
FIG. 2 is a flow chart of a rendering acceleration method according to the present invention.
Detailed Description
The following describes embodiments of the present invention by way of specific examples; other advantages and effects of the invention will be readily apparent to those skilled in the art from this disclosure. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. The invention may also be practiced or applied through other, different embodiments, and the details in this description may be modified or changed in various respects without departing from the spirit and scope of the invention. It should be noted that, in the absence of conflict, the following embodiments and the features in them may be combined with each other. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
It is noted that various aspects of the embodiments are described below within the scope of the following claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, one skilled in the art will appreciate that one aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such apparatus may be implemented and/or such methods practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present invention by way of illustration, and only the components related to the present invention are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided in order to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the invention provides a large-scale building scene rendering acceleration method. The rendering acceleration method provided in this embodiment may be executed by a computing device, which may be implemented as software, or as a combination of software and hardware, and the computing device may be integrally provided in a server, a terminal device, or the like. As shown in fig. 2, the scene rendering acceleration method of the present invention is clearly and completely described, and specific steps are as follows:
2.1 constructing scenes and spatial index
This step is identical to step 1.1 of the prior art: while the scene is being constructed, a spatial index is built to accelerate subsequent view frustum culling and picking.
2.2 static Pre-processing of first batch of rendered data
The main objective of this step is visual-weight preprocessing: according to the specific characteristics of the scene, primitives with higher visual weight (spatial ratio exceeding a threshold, which is adjustable) are computed and filtered through a brand-new static spatial index, the hierarchical nested grid index (Occlusion Oriented Spatial Index, OOSI), to serve as the first batch of rendering data.
For example, for a building scene, what is observed from outside is the outermost layer of the whole building. The hierarchical nested grid index (OOSI) differs from the widely used spatial indexes such as BSP trees, BVHs and octrees: it is built entirely on the spatial hierarchical nesting relationship, which is expressed as the inner/outer relationship of the spatial hierarchy and determined by occlusion. For a building seen from outside, the exterior walls are carved out, as if with a knife, as statically displayed primitives; these primitives need no viewpoint-driven dynamic preprocessing and enter the graphics-card rendering pipeline directly through the renderer. This saves the intermediate stages, lets the most important primitives be drawn quickly, and preserves the fidelity of the rendered visual effect. The process is also fast enough that when the scene changes (for example in visibility, position or display state), it can be updated quickly without affecting the user experience. Moreover, the processing runs in a background sub-thread, so it does not block the user's interactive operations.
Besides building scenes, other rendering scenes with similar characteristics (a huge number of scene primitives and an inner/outer occlusion relationship, such as large ships and large infrastructure like tunnels) can also be processed through the OOSI: a first batch of primitives with higher visual weight, such as the hull primitives of a ship or the outer-wall primitives of a tunnel, is extracted for first-batch display to improve the perceived drawing efficiency of the whole scene.
The detailed algorithms within this step are described below:
2.2.1 uniformly dividing the scene into N×N×N three-dimensional grid cells
The concept of the axis-aligned bounding box (AABB) is used here: by traversing all vertex data (Xi, Yi, Zi) of a primitive and taking the per-axis minimum (Xmin, Ymin, Zmin) and maximum (Xmax, Ymax, Zmax), the coordinates of the minimum and maximum corner points of its AABB are formed. In this step we actually traverse all primitives of the scene to find the AABB of the whole scene. The cuboid space of this AABB is then uniformly subdivided along the X, Y and Z dimensions, with a default granularity of N per dimension. The result is a three-dimensional grid-cell array of the scene with granularity N×N×N. 3DGridCell(XIndex, YIndex, ZIndex) denotes a specific three-dimensional grid cell in the array, where XIndex, YIndex and ZIndex are the cell's index numbers (0 to N−1) along the X, Y and Z axes; a smaller index means a smaller coordinate component of the cell's center point along the corresponding axis. Because the division is uniform, all three-dimensional grid cells are the same size, which is easily obtained as:
3DGridCell.Size = (AABB.Size.X / N, AABB.Size.Y / N, AABB.Size.Z / N)
and the center point of 3DGridCell (relative to the minimum corner of the AABB) is:
(XIndex * 3DGridCell.Size.X + 3DGridCell.Size.X / 2,
YIndex * 3DGridCell.Size.Y + 3DGridCell.Size.Y / 2,
ZIndex * 3DGridCell.Size.Z + 3DGridCell.Size.Z / 2).
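Step 2.2.1 can be sketched in a few lines; this is a minimal illustration assuming plain tuple vertices, and the names `scene_aabb`, `cell_size` and `cell_center` are ours, not identifiers from the patent:

```python
def scene_aabb(vertices):
    """AABB over all vertex tuples (x, y, z): (min corner, max corner)."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def cell_size(aabb_min, aabb_max, n):
    """Edge lengths of one 3DGridCell when each axis is split into n parts."""
    return tuple((hi - lo) / n for lo, hi in zip(aabb_min, aabb_max))

def cell_center(aabb_min, size, xi, yi, zi):
    """Center of 3DGridCell(xi, yi, zi), offset by the AABB minimum corner."""
    return tuple(lo + idx * s + s / 2
                 for lo, s, idx in zip(aabb_min, size, (xi, yi, zi)))
```

For instance, a scene spanning (0,0,0) to (10,10,10) with N = 10 yields 1×1×1 cells, and cell (0,0,0) is centered at (0.5, 0.5, 0.5).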
2.2.2 filling the scene primitives into the three-dimensional grid cells (3DGridCell) according to their spatial extent
After the scene is uniformly divided, an intersection test is performed for each three-dimensional grid cell to determine which scene primitives intersect it; those primitives become the content the cell can index. The intersection test may use the triangle data of the scene primitive or the primitive's own AABB: the former when accuracy is the priority, the latter when speed is the priority. The choice of algorithm in a specific computation depends on the scale and complexity of the model and the business requirements. During the intersection operation, the same scene primitive may intersect several three-dimensional grid cells (3DGridCell); in that case, the first intersecting three-dimensional grid cell (3DGridCell) takes ownership of the primitive.
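A sketch of this filling step using the faster AABB-versus-AABB variant of the intersection test; the helper names (`aabb_overlap`, `fill_cells`) and the dict layout are our assumptions:

```python
def aabb_overlap(a, b):
    """True if AABBs a and b, each ((min x,y,z), (max x,y,z)), intersect."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def fill_cells(cell_aabbs, prim_aabbs):
    """Map each cell index to the primitives intersecting it, and record a
    controlling (owner) cell per primitive: the first cell that intersects
    it, mirroring the first-intersection-wins rule in the text."""
    contents = {key: [] for key in cell_aabbs}
    owner = {}
    for pid, paabb in prim_aabbs.items():
        for key in sorted(cell_aabbs):
            if aabb_overlap(cell_aabbs[key], paabb):
                contents[key].append(pid)
                owner.setdefault(pid, key)  # first hit controls
    return contents, owner
```

A primitive spanning two cells is indexed by both, but only the first cell becomes its controlling cell.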
2.2.3 hierarchy determination via 6/10/26-direction occlusion between three-dimensional grid cells
Occlusion computation is the main means of computing the hierarchy. The invention adopts a simplified directional occlusion computation: starting from a given three-dimensional grid cell (3DGridCell), rays are extended forward, backward, left, right, up and down (6 directions); or additionally front-left, front-right, back-left and back-right (10 directions in total); or further to front-up, front-down, back-up, back-down, left-up, left-down, right-up, right-down, front-left-up, front-left-down, front-right-up, front-right-down, back-left-up, back-left-down, back-right-up and back-right-down (26 directions in total). Along each direction, the three-dimensional grid cells encountered are examined for filled scene primitives; if such primitives exist, an occlusion relationship in that direction is established. If a cell is occluded in all directions (6/10/26), it is an internal cell relative to the others. Further, the nesting depth of a grid cell can be determined from the number of times it is occluded.
2.2.4 generating a hierarchical nested grid index (OOSI)
Based on step 2.2.3, the hierarchical nesting index is easily obtained: for each three-dimensional grid cell (3DGridCell), its inner/outer relationship and nesting depth can be determined after the computation.
The nesting depth of the hierarchical nested grid index (OOSI) is computed as follows: remove the three-dimensional grid cells with nesting depth 0, then apply the determination method of step 2.2.3 to obtain the three-dimensional grid cells (3DGridCell) with nesting depth 1, and so on, until no three-dimensional grid cell (3DGridCell) remains.
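Steps 2.2.3 and 2.2.4 can be sketched together using the 6-direction variant; here `filled` is a set of (x, y, z) indices of cells containing primitives, and the function names are illustrative assumptions:

```python
AXIS_DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def occluded_in_all_directions(filled, n, cell):
    """Walk outward along all 6 axis directions; the cell is internal only
    if every ray meets a filled cell before leaving the N x N x N grid."""
    for d in AXIS_DIRS:
        p = tuple(c + s for c, s in zip(cell, d))
        while all(0 <= v < n for v in p):
            if p in filled:
                break                      # occluded in this direction
            p = tuple(c + s for c, s in zip(p, d))
        else:
            return False                   # ray escaped: open direction
    return True

def nesting_depths(filled, n):
    """Peel layers: depth 0 = cells not occluded in all directions; remove
    them, recompute to get depth 1, and so on until nothing remains."""
    depths, remaining, depth = {}, set(filled), 0
    while remaining:
        layer = {c for c in remaining
                 if not occluded_in_all_directions(remaining, n, c)}
        if not layer:                      # degenerate fully-enclosed core
            layer = set(remaining)
        for c in layer:
            depths[c] = depth
        remaining -= layer
        depth += 1
    return depths
```

For a fully filled 3×3×3 grid, the 26 boundary cells get depth 0 and the center cell gets depth 1, matching the peeling described above.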
2.2.5 outer grid cell primitive extraction
A three-dimensional grid cell (3DGridCell) with nesting depth 0 (i.e., not occluded in all directions by other primitive-filled cells) is taken as an external three-dimensional grid cell (3DGridCell); the primitives drawn in the first batch are subsequently screened from these cells.
2.2.6 ordering by spatial ratio
If dividing the scene by inner/outer relationship is the first-level ordering, this step is the second-level ordering, by the size of each primitive's AABB; that is, the AABB serves as the measure of the primitive's spatial ratio. As before, all vertex data (Xi, Yi, Zi) of the primitive are traversed, taking the minimum (Xmin, Ymin, Zmin) and maximum (Xmax, Ymax, Zmax) to form the coordinates of the minimum and maximum corner points of its AABB. The squared distance between the maximum and minimum corner points of the AABB is then taken as the basis for judging the AABB's size, i.e., the primitive's spatial ratio:
AABB.size_sqr_length = (Xmax − Xmin)² + (Ymax − Ymin)² + (Zmax − Zmin)²
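The second-level ordering can be sketched directly from this formula; the function names are illustrative, not patent identifiers:

```python
def size_sqr_length(aabb_min, aabb_max):
    """AABB.size_sqr_length = (Xmax-Xmin)^2 + (Ymax-Ymin)^2 + (Zmax-Zmin)^2."""
    return sum((hi - lo) ** 2 for lo, hi in zip(aabb_min, aabb_max))

def order_by_spatial_ratio(prim_aabbs):
    """Primitive ids, largest AABB diagonal first (highest visual weight)."""
    return sorted(prim_aabbs,
                  key=lambda pid: size_sqr_length(*prim_aabbs[pid]),
                  reverse=True)
```

A large flat wall outranks a door, which outranks a doorknob, even though the wall is thin: the squared diagonal rewards extent on any axis.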
2.2.7 constructing first batch of rendered data
Rendering data is extracted from the primitives of the outer-layer grid cells, organized in the data format of the rendering-engine pipeline, and prepared for submission to the underlying rendering-pipeline driver for drawing.
2.2.8 static batch merging (optional)
Depending on specific scene characteristics and efficiency requirements, this step can further integrate existing engine rendering optimizations, namely the static batch-merging technique (the same as step 1.2). If this optimization mechanism is employed, the drawing efficiency of the first batch of primitives can be further accelerated.
2.2.9 effect baking (optional)
Similar to step 2.2.8, but biased toward effect preprocessing, this step also draws on an existing engine optimization (as in step 1.3). The specific method is to render certain lighting effects, such as shadows, onto the textures of primitive surfaces in advance, so that no additional effect computation is needed during actual rendering, which can greatly improve efficiency.
Steps 2.2.8 and 2.2.9 of this embodiment are substantially the same as steps 1.2 and 1.3 of the prior art, but the execution strategy differs: they run in the background so as not to block the foreground rendering thread and cause stutter.
2.3 initializing render scene data
This step is the actual starting point of a complete scene rendering; it initializes a great deal of scene-related state and data (timers and the like) and starts in parallel with step 2.2.
2.4, passing in viewpoint and viewport information
The camera position for scene view imaging is determined, as well as the device viewport size finally projected onto the screen. As with step 1.5 of the prior art, this is a fixed step of an existing rendering engine.
2.5 reading first batch of rendered data
The first batch of rendering data produced in step 2.2 is received here. This step differs greatly from existing rendering engines: after the generation of the OOSI and its subsequent processing, the rendering data of the first batch of primitives for drawing the large scene is already prepared, so at the start of the actual drawing flow, the allocated time slice can be spent entirely on rendering primitives, reducing unnecessary overhead.
2.6, first batch of rendering data viewing cone rejection
In principle, minimizing the extra preprocessing time of the first batch of rendering data is a basic requirement of the invention. However, when the viewpoint is very close to the whole scene, the fastest possible view frustum culling (the same as steps 1.9 and 2.12) is applied to the first batch of data before it is sent to the rendering pipeline. This adds a little time to the first batch, but when the viewpoint is close to the scene, it greatly reduces the number of primitives drawn in the first batch, thus optimizing efficiency.
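A common way to implement this fast AABB-versus-frustum test is the "positive vertex" trick, sketched here under our own conventions (planes as (a, b, c, d) with the inside half-space a·x + b·y + c·z + d ≥ 0); the patent does not specify the culling algorithm, so this is an assumption:

```python
def aabb_outside_plane(aabb_min, aabb_max, plane):
    """Test only the AABB corner furthest along the plane normal; if even
    that corner is outside the half-space, the whole box is outside."""
    a, b, c, d = plane
    px = aabb_max[0] if a >= 0 else aabb_min[0]
    py = aabb_max[1] if b >= 0 else aabb_min[1]
    pz = aabb_max[2] if c >= 0 else aabb_min[2]
    return a * px + b * py + c * pz + d < 0

def frustum_cull(prim_aabbs, planes):
    """Keep primitives whose AABB is not wholly outside any frustum plane."""
    return [pid for pid, (mn, mx) in prim_aabbs.items()
            if not any(aabb_outside_plane(mn, mx, p) for p in planes)]
```

Boxes straddling a plane are conservatively kept; only boxes entirely outside some plane are rejected.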
2.7 drawing first batch of rendered data
And directly delivering the first batch of rendering data which is subjected to view cone rejection to a display card pipeline for drawing.
Steps 2.8 through 2.18 below are substantially the same as the conventional rendering flow of the prior art (1.6 to 1.16), except that step 2.9 traverses the OOSI instead of a conventional spatial index.
2.8 traversing spatial index
The spatial index of step 2.8 is a conventional one (an octree or the like), and the triggering scenario is mainly outdoor, i.e., when the viewpoint is outside the building. There are two cases in which this step must be executed. Because OOSI generation runs in a sub-thread, it may well not have finished when rendering begins; if it has not finished (the read in step 2.5 fails), this step falls back to the flow of an ordinary rendering system and renders the scene starting from the spatial-index traversal.
Specifically, the two cases are as follows. Case 1: the OOSI has been generated and the first batch of primitives was extracted successfully; the first batch is traversed and displayed first, and the remaining primitives are then displayed via the conventional rendering flow: the spatial index is traversed and the first-batch primitives already produced by the OOSI are culled out, generating the rendering data of the subsequent batches. Case 2: during a scene change, the OOSI must be regenerated; in the short period before the background thread finishes (generally within 1 s, and never more than 5 s), if a user operation on the model causes a redraw, the spatial index is still traversed per the conventional rendering flow and all batches of rendering data are generated, with no need to cull the (not yet generated) first batch. Case 1 is the normal OOSI rendering state; case 2 is a transient, temporary state during an OOSI update that must be bridged by the existing conventional rendering flow. Case 2 also applies in indoor scenes; see step 2.9 for details of the indoor case.
2.9, traversing OOSI to construct all batch rendering data
The triggering scenario of step 2.9 is indoor. When the OOSI has been generated but the viewpoint enters the interior of the building, the statically prepared first batch of drawing primitives extracted by the OOSI (the outermost primitives, in three-dimensional grid cells with nesting depth 0) becomes invalid. The OOSI must then be traversed again to dynamically construct all batches of rendering data. The construction algorithm is: (1) perform view frustum culling on the internal three-dimensional grid cells (3DGridCell); (2) quickly and dynamically sort the three-dimensional grid cells (3DGridCell) remaining after culling. The specific sorting criterion is:
3DGridCell.Priority = (3DGridCell.Center − Viewpoint.Position) · Viewpoint.Direction
That is, the cells are ordered by their distance from the viewpoint along the viewing direction; primitives in three-dimensional grid cells (3DGridCell) close to the viewpoint receive higher rendering priority. All batches of rendering data are then generated dynamically according to (1) and (2).
Unlike the outdoor case, the first batch of rendering data indoors must be generated dynamically. A priority-statistics method is therefore adopted: primitives that frequently enter the first batch are made static and kept resident in the first-batch rendering data set, achieving an effect similar to that of step 2.2.7.
If, in an indoor scene, the OOSI has not yet been generated, then, as in case 2 of step 2.8, the conventional rendering flow of step 2.8 is executed instead: a conventional spatial index (such as an octree) is traversed to temporarily bridge this window period.
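The priority formula and the near-to-far ordering can be sketched as follows; centers and the viewpoint are plain tuples, and the function names are our own:

```python
def cell_priority(center, view_pos, view_dir):
    """3DGridCell.Priority = (Center - Viewpoint.Position) . Direction:
    the signed distance of the cell center along the viewing direction."""
    return sum((c - p) * d for c, p, d in zip(center, view_pos, view_dir))

def sort_cells_near_to_far(cell_centers, view_pos, view_dir):
    """Cell keys ordered nearest first, i.e. highest rendering priority first."""
    return sorted(cell_centers,
                  key=lambda k: cell_priority(cell_centers[k], view_pos, view_dir))
```

Because the dot product projects onto the viewing direction, cells behind the viewpoint get negative priorities and naturally sort first; in practice, those are removed beforehand by the frustum culling of sub-step (1).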
2.10, loading scene primitive rendering data
If the rendering data of scene primitives is stored not in memory but on disk: because the portion of primitives with the largest visual weight has already been displayed through step 2.5, those remaining primitives can be kept on disk, and even if loading is slower, the visual effect of full-scene rendering is preserved to the greatest extent. In the prior-art rendering flow, step 1.7 may also load rendering data from disk, but lacking this mechanism, the visual effect during rendering interaction is noticeably affected.
The help that the OOSI provides when loading rendering data is explained here in two respects:
(1) The viewpoint is outside the building and the OOSI has been generated. The first batch of rendering data constructed by the OOSI resides in memory, while subsequent batches are stored on disk to save memory. During interactive rendering, the first batch of data, with the highest visual weight, can always be displayed without loading from disk, so the visual loss of overall scene rendering is small and the sense of scene completeness is enhanced. Although subsequent batches load slowly, the overall visual effect of the scene is not noticeably affected and the delay is hardly perceptible.
(2) The viewpoint is inside the building and the OOSI has been generated, but the statically prepared outermost first batch of rendering data is clearly invalid. The first batch obtained by dynamic computation (the first of all batches, ordered by viewpoint and OOSI weight computation) has its data resident in memory, as in case (1). Although this first batch is dynamically generated and changes over time, it is always the batch with the largest visual weight; the primitives that most frequently appear in the repeatedly changing first batch acquire the highest priority, are made static, and are kept in memory. Loading the subsequent batches' rendering data from disk then affects the visual effect far less than loading rendering data via a normal traversal of a conventional spatial index, so rendering efficiency is visually optimized to the maximum.
2.11 occlusion rejection (optional)
Occlusion culling can be handled in the same way as in the prior-art general rendering flow, and it is optional because the OOSI already roughly embodies the occlusion relationships among scene primitives. If more accurate judgment is needed, the same operation as step 1.8 can be used: precisely compute the mutual occlusion among primitives at a given viewpoint and cull the scene primitives that are occluded and invisible.
2.12 Cone rejection
As in the conventional rendering process of the prior art, as described in step 1.9, scene primitives outside the camera viewing angle range will be culled out and not enter the subsequent rendering process.
2.13, calculate, select LOD (optional)
Some complex structures contain a very large number of primitives with huge triangle meshes; different levels of detail for these primitives must be selected according to the distance from the viewpoint.
2.14 dynamically preprocessing the subsequent batch render data
This step is basically the same as step 1.11 and mainly completes the sorting of scene-primitive rendering data, except that the first batch needs no dynamic preprocessing, since it has already been drawn in advance.
2.15 drawing subsequent batches (subsequent frames, optional)
When the number of the primitives in the scene is small, the first batch can draw all the primitives, otherwise, the primitives of the subsequent batches need to be continuously drawn.
2.16, end the rendering of the current complete frame
As in step 1.14, all the drawing of the current scene is completed, some drawing-related variables are reset, and preparation is made for drawing the next complete frame.
2.17 scene Change
As in step 1.17, a redraw of the scene is triggered, i.e., the whole scene is drawn from scratch. Three redraw triggers caused by changes in the scene data itself are roughly listed here: 1. structural: the overall scene data changes (addition, deletion, modification, etc.); 2. visibility: the visibility of scene primitives changes; 3. rendering state: color, texture maps, etc. The first, structural change is the most thorough; it greatly affects the basic structure of the scene, so the scene and spatial index must be updated (step 2.1), and the complete step 2.2 is executed in parallel from step 2.1 to update the OOSI. The second, visibility change involves no addition or deletion, but its effect is similar, so the flow likewise returns to step 2.1 to regenerate the spatial index and OOSI. The third, rendering-state change is different: because the OOSI is generated mainly from the spatial topology of scene primitives rather than their displayed appearance, this change does not trigger recomputation of the whole spatial index and OOSI; only the construction of the first batch of rendering data needs to be performed again.
2.18 viewpoint Change
Typically, the user causes viewpoint changes (rotation, translation, zooming) by interacting with the rendering window. In this case, the spatial index and OOSI are only weakly related to the viewpoint, so the whole scene is redrawn directly from step 2.3 without re-executing 2.1 and 2.2. This flow is similar to steps 1.16 through 1.4.
Summarizing the above embodiments: by statically preprocessing the first batch of rendering data, the invention largely saves the cost of dynamic preprocessing and thus achieves faster real-time rendering; meanwhile, through the visual-weight computation (steps 2.2.4 and 2.2.5), the rendering effect and model completeness are superior, an effect especially evident when rendering a very large model as a whole.
Case study. Test scene scale: 2.8 million primitives. Following the conventional rendering flow, the pre-draw preparation steps 1.4 through 1.11 are executed; taking the fastest path and skipping the optional intermediate steps, i.e., executing 1.4 -> 1.5 -> 1.6 -> 1.9 -> 1.11 in sequence, takes nearly 150 ms in total. At 6 FPS (6 frames per second, generally the minimum frame-rate requirement for CAD/CAM software), the frame budget is about 166.7 ms, so the time left for drawing the first batch (first frame, step 1.12) is 166.7 ms − 150 ms ≈ 16.7 ms. The preparation time is thus nearly 10 times the time available for rendering the first batch of data, which directly causes a serious drop in first-batch rendering capacity; the result is visible gaps in the model during interaction (viewpoint changes: translation, rotation, zooming).
In the rendering-system flow of the invention, the viewpoint-independent first batch of rendering data is generated in advance in a background thread, so execution runs 2.3 -> 2.4 -> 2.5 -> 2.6. Because the complex operations of traversing the spatial index and dynamically preprocessing the rendering data (both of which grow markedly with the number of scene primitives) are unnecessary, a great deal of time is saved: preparation completes within 20 ms, leaving 166.7 ms − 20 ms ≈ 146.7 ms for drawing the first batch. First-impression rendering capacity during viewpoint-change interaction is thus improved by roughly 9 times (146.7 ms versus 16.7 ms). Furthermore, with the precomputation of visual weight, the sense of model completeness during rotation, translation and zooming is clearly enhanced, with no obvious gaps appearing.
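The frame-budget arithmetic behind these two figures can be written out directly; this is only the quoted measurements (150 ms and 20 ms of preparation) plugged into the 6 FPS budget:

```python
# Frame budget at the 6 FPS floor quoted for CAD/CAM software.
frame_budget_ms = 1000 / 6                    # ~166.7 ms per frame

conventional_prep_ms = 150                    # steps 1.4 -> 1.11
oosi_prep_ms = 20                             # steps 2.3 -> 2.6

conventional_draw_ms = frame_budget_ms - conventional_prep_ms   # ~16.7 ms
oosi_draw_ms = frame_budget_ms - oosi_prep_ms                   # ~146.7 ms

# Share of the frame left for actual drawing grows by roughly 9x.
gain = oosi_draw_ms / conventional_draw_ms
```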
The invention provides a large-scale building scene rendering acceleration system, which comprises:
a rendering preparation unit which, during preparation, on the one hand performs pre-rendering preparation in the main thread, and on the other hand performs static preprocessing in a sub-thread synchronously with the main thread to obtain the first batch of rendering data; during preprocessing, primitives with higher visual weight in the scene are screened out as the first batch of rendering data;
The scene rendering unit is used for directly rendering the scene when the first batch of rendering data is not read; when the first batch of rendering data is read, firstly, rendering the scene primitives related to the first batch of rendering data, and then rendering the scene primitives outside the first batch of rendering data.
Optionally, the large-scale building scene rendering acceleration system may perform the various variants of the above methods.
The invention also provides a large-scale building scene rendering acceleration device, comprising a memory for storing computer-readable instructions, and a processor for executing the computer-readable instructions so that the processor, when running, implements any of the methods described above.
The present invention also provides a storage medium storing computer readable instructions that, when executed by a computer, cause the computer to perform any of the methods described above.
Although the steps in the foregoing method embodiments are described in the order above, it should be clear to those skilled in the art that the steps in the embodiments of the disclosure need not be performed in that order: they may be performed in reverse, in parallel, interleaved, or in other orders, and those skilled in the art may add other steps on this basis. Such obvious modifications or equivalent substitutions also fall within the protection scope of the disclosure and are not repeated here.
The following concerns the system and device embodiments of the present disclosure, which may be used to perform the steps implemented by the method embodiments of the present disclosure. For convenience of description, only the parts related to the embodiments of the present disclosure are shown; for specific technical details not disclosed, refer to the method embodiments of the present disclosure.
The devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via a communication device, or installed from a storage device, or installed from ROM. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by a processing device.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the features described above or their equivalents without departing from the spirit of the disclosure; for example, embodiments formed by substituting the features described above with (but not limited to) technical features having similar functions disclosed in the present disclosure.

Claims (32)

1. A method for accelerating the rendering of a large-scale building scene, which is characterized by comprising the following steps:
S1, rendering preparation, which comprises the following two preparation works:
S1.1, rendering preparation before scene rendering is carried out in a main thread;
S1.2, static preprocessing is carried out in a sub-thread synchronously with the main thread to obtain a first batch of rendering data; during preprocessing, primitives with higher visual weight in the scene are screened out as the first batch of rendering data;
s2, performing scene rendering in a main thread, wherein the scene rendering comprises the following two cases:
S2.1, when the first batch of rendering data has not been read, scene rendering is performed directly;
S2.2, when the first batch of rendering data has been read, the scene primitives belonging to the first batch of rendering data are rendered first, and then the scene primitives outside the first batch of rendering data are rendered;
in step S1.2, the primitives with higher visual weight are screened out through a hierarchical nested grid index OOSI; the hierarchical nested grid index OOSI is established according to the hierarchical nesting relationship of the space, the outer-layer primitives are extracted through the hierarchical nested grid index OOSI, and scene primitives whose spatial occupancy ratio exceeds a certain threshold are screened out as the first batch of rendering data.
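The screening step of claim 1 — extracting outer-layer primitives and keeping those whose spatial occupancy ratio exceeds a threshold — can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the function name `screen_first_batch`, the dictionary layout, and the default threshold are assumptions.

```python
def screen_first_batch(outer_primitives, scene_volume, threshold=0.01):
    """Keep outer-layer primitives whose AABB volume, as a fraction of the
    scene volume (the 'spatial occupancy ratio'), meets the threshold.
    Results are sorted largest-first so visually heavy primitives draw early."""
    batch = []
    for prim in outer_primitives:
        (x0, y0, z0), (x1, y1, z1) = prim["aabb"]  # axis-aligned bounding box
        ratio = ((x1 - x0) * (y1 - y0) * (z1 - z0)) / scene_volume
        if ratio >= threshold:
            batch.append((ratio, prim))
    batch.sort(key=lambda item: item[0], reverse=True)
    return [prim for _, prim in batch]
```

Claim 4 notes that the threshold is adjustable; here it is simply a keyword argument.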
2. The large-scale building scene rendering acceleration method of claim 1, wherein: in said step S1.1, the rendering preparation comprises:
S1.1.1, constructing the scene and a spatial index;
S1.1.2, initializing rendering scene data;
S1.1.3, passing in the viewpoint and viewport information.
3. The large-scale building scene rendering acceleration method of claim 1, wherein: the hierarchical nesting relationship of the space is determined by the occlusion relationship.
4. The large-scale building scene rendering acceleration method of claim 1, wherein: the threshold is adjustable.
5. The large-scale building scene rendering acceleration method of claim 1, wherein: constructing the hierarchical nested grid index OOSI comprises the following steps:
S1.2.1, uniformly dividing the scene into N×N×N three-dimensional grid cells;
S1.2.2, filling scene primitives into the three-dimensional grid cells according to their spatial extent;
S1.2.3, determining the hierarchy through the occlusion relationships among the three-dimensional grid cells;
S1.2.4, generating the hierarchical nested grid index OOSI.
6. The large-scale building scene rendering acceleration method of claim 5, wherein: in step S1.2.1, all primitives in the scene are traversed, and the minimum and maximum vertices of the scene are obtained to form the axis-aligned bounding box of the scene; the cuboid space of this bounding box is uniformly subdivided along the X, Y and Z dimensions, with a granularity of N in each dimension.
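Steps S1.2.1 and S1.2.2 — forming the scene's axis-aligned bounding box and mapping each primitive into the uniform N-per-dimension grid — might look like the following sketch. The names and the assumption that primitives carry precomputed AABBs are illustrative, not taken from the patent.

```python
def scene_aabb(primitives):
    """Axis-aligned bounding box of the whole scene from primitive AABBs:
    the minimum and maximum vertices over all primitives (step S1.2.1)."""
    mins = tuple(min(p["aabb"][0][i] for p in primitives) for i in range(3))
    maxs = tuple(max(p["aabb"][1][i] for p in primitives) for i in range(3))
    return mins, maxs

def cell_range(prim_aabb, scene_min, scene_max, n):
    """Index ranges of the grid cells a primitive AABB overlaps, with N
    cells per axis (step S1.2.2); clamped to the grid bounds."""
    lo, hi = prim_aabb
    ranges = []
    for axis in range(3):
        size = (scene_max[axis] - scene_min[axis]) / n  # cell edge length
        first = int((lo[axis] - scene_min[axis]) / size)
        last = int((hi[axis] - scene_min[axis]) / size)
        ranges.append((max(0, first), min(n - 1, last)))
    return ranges
```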
7. The large-scale building scene rendering acceleration method of claim 1, wherein: in step S1.2.2, an intersection test is performed for each three-dimensional grid cell, and the scene primitives intersecting a grid cell serve as the content that the grid cell can index.
8. The large-scale building scene rendering acceleration method of claim 7, wherein: the intersection test uses either the triangle data of the scene primitive or the axis-aligned bounding box of the scene primitive: the former is used if accuracy is sought, the latter if speed is sought.
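The "speed" path of claim 8 — an overlap test between a grid cell and a primitive's axis-aligned bounding box — is the standard per-axis interval check. A minimal sketch (illustrative, with boxes given as (min-vertex, max-vertex) pairs):

```python
def aabb_intersects(a, b):
    """Fast AABB-vs-AABB overlap test: two boxes intersect iff their
    intervals overlap on every axis."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))
```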
9. The large-scale building scene rendering acceleration method of claim 5, wherein: in step S1.2.3, the hierarchy is determined through the occlusion relationships among the three-dimensional grid cells along 6, 10, or 26 directions.
10. The large-scale building scene rendering acceleration method of claim 5, wherein: in step S1.2.4, the hierarchical nested grid index OOSI is generated based on the inner-outer relationships and nesting depths of the three-dimensional grid cells.
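One plausible reading of the 6-direction occlusion test of claims 9-10 (the 10- and 26-direction variants add diagonal directions) is: a grid cell belongs to the outer layer if, along at least one of the six axis directions, no occupied cell lies between it and the grid boundary. A simplified sketch under that assumption; the function name and the `occupied`-set representation are illustrative:

```python
def is_outer(cell, occupied, n):
    """True if `cell` is unoccluded along at least one of the 6 axis
    directions in an n*n*n grid, i.e. it belongs to the outer layer.
    Deeper nesting levels could be obtained by peeling outer layers
    iteratively."""
    for axis, step in ((0, 1), (0, -1), (1, 1), (1, -1), (2, 1), (2, -1)):
        c = list(cell)
        blocked = False
        while True:
            c[axis] += step
            if not (0 <= c[axis] < n):
                break  # reached the grid boundary without hitting a cell
            if tuple(c) in occupied:
                blocked = True  # occluded in this direction
                break
        if not blocked:
            return True
    return False
```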
11. The large-scale building scene rendering acceleration method of claim 1, wherein: the outer-layer primitives are sorted by the size of their axis-aligned bounding boxes, i.e., by the spatial occupancy ratio of each primitive, and then screened.
12. The large-scale building scene rendering acceleration method of claim 1, wherein: after the first batch of rendering data is obtained in step S1.2, static batch merging and/or effect baking is performed as needed.
13. The large-scale building scene rendering acceleration method of claim 1, wherein: in step S2.1, directly performing scene rendering comprises the steps of:
S2.1.1, traversing the spatial index;
S2.1.2, view frustum culling;
S2.1.3, dynamically preprocessing the subsequent batches of rendering data;
S2.1.4, ending the current full-frame rendering.
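View frustum culling as in step S2.1.2 is conventionally done by testing each grid cell's AABB against the frustum planes using the "p-vertex" (positive vertex) trick. A sketch, assuming planes are given as (nx, ny, nz, d) with points inside the frustum satisfying n·p + d ≥ 0 (that convention, and the cell record layout, are assumptions):

```python
def aabb_outside_plane(aabb, plane):
    """True if the box lies entirely on the negative side of the plane."""
    mn, mx = aabb
    nx, ny, nz, d = plane
    normal = (nx, ny, nz)
    # p-vertex: the box corner furthest along the plane normal
    p = tuple(mx[i] if normal[i] >= 0 else mn[i] for i in range(3))
    return nx * p[0] + ny * p[1] + nz * p[2] + d < 0

def frustum_cull(cells, planes):
    """Keep cells whose AABB is not fully outside any frustum plane."""
    return [c for c in cells
            if not any(aabb_outside_plane(c["aabb"], pl) for pl in planes)]
```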
14. The large-scale building scene rendering acceleration method of claim 13, wherein: between steps S2.1.1 and S2.1.2, scene primitive rendering data is loaded and/or occlusion culling is performed as needed.
15. The large-scale building scene rendering acceleration method of claim 14, wherein: LOD is calculated and selected as needed between steps S2.1.2 and S2.1.3.
16. The large-scale building scene rendering acceleration method of claim 15, wherein: subsequent batches are drawn as needed between steps S2.1.3 and S2.1.4.
17. The large-scale building scene rendering acceleration method of claim 1, wherein: in step S2.2, when the viewpoint enters a room from outside, the hierarchical nested grid index OOSI is traversed again to dynamically construct all batches of rendering data and render them; the construction algorithm is: (1) performing view frustum culling on the interior three-dimensional grid cells; (2) performing fast dynamic sorting on the grid cells remaining after culling, sorted by their distance to the viewpoint, such that primitives in grid cells nearer the viewpoint obtain higher rendering priority.
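The fast dynamic sorting in step (2) of claim 17 — nearer cells render first — reduces to sorting cells by squared distance from the viewpoint. A sketch with an assumed cell record carrying a precomputed center:

```python
def sort_cells_by_distance(cells, viewpoint):
    """Sort grid cells so that cells nearer the viewpoint come first,
    giving their primitives higher rendering priority."""
    def dist2(cell):
        cx, cy, cz = cell["center"]
        vx, vy, vz = viewpoint
        # squared distance avoids an unnecessary square root
        return (cx - vx) ** 2 + (cy - vy) ** 2 + (cz - vz) ** 2
    return sorted(cells, key=dist2)
```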
18. The large-scale architectural scene rendering acceleration method of any one of claims 1-17, wherein: when the scene is unchanged and only the viewpoint changes, the method returns to step S1.1 and resumes rendering preparation from initializing the rendering scene data.
19. The large-scale architectural scene rendering acceleration method of any one of claims 1-17, wherein: when the overall data or visibility of the scene changes, the method is re-executed.
20. The large-scale architectural scene rendering acceleration method of any one of claims 1-17, wherein: when the rendering state of the scene changes, reconstructing the first batch of rendering data.
21. A large-scale architectural scene rendering acceleration system, the system comprising:
a rendering preparation unit which, during preparation, performs rendering preparation before scene rendering in a main thread, and, synchronously with the main thread, performs static preprocessing in a sub-thread to obtain a first batch of rendering data; during preprocessing, primitives with higher visual weight in the scene are screened out as the first batch of rendering data;
the scene rendering unit is used for directly rendering the scene when the first batch of rendering data is not read; when the first batch of rendering data is read, firstly rendering the scene primitives related to the first batch of rendering data, and then rendering the scene primitives outside the first batch of rendering data;
the rendering preparation unit comprises a sub-thread static preprocessing module, which screens out the primitives with higher visual weight through a hierarchical nested grid index OOSI; the hierarchical nested grid index OOSI is established according to the hierarchical nesting relationship of the space, the outer-layer primitives are extracted through the hierarchical nested grid index OOSI, and scene primitives whose spatial occupancy ratio exceeds a certain threshold are screened out as the first batch of rendering data.
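The main-thread/sub-thread split shared by claims 1 and 21 can be illustrated with Python threading. This is illustrative only: the callbacks, names, and frame-loop shape are assumptions, and a real renderer would be driven by the graphics API's frame loop.

```python
import queue
import threading

def render_loop(prepare_main, static_preprocess, render_batch, render_rest,
                frames=3):
    """Sub-thread builds the first batch while the main thread prepares and
    renders; once the batch is ready, its primitives render first (S2.2),
    otherwise the scene renders directly (S2.1)."""
    result = queue.Queue(maxsize=1)
    worker = threading.Thread(target=lambda: result.put(static_preprocess()))
    worker.start()                       # S1.2: static preprocessing in a sub-thread
    prepare_main()                       # S1.1: rendering preparation in the main thread
    first_batch = None
    for _ in range(frames):              # S2: per-frame scene rendering
        if first_batch is None:
            try:
                first_batch = result.get_nowait()
            except queue.Empty:
                pass                     # batch not ready yet
        if first_batch is None:
            render_rest(None)            # S2.1: direct scene rendering
        else:
            render_batch(first_batch)    # S2.2: first-batch primitives first
            render_rest(first_batch)
    worker.join()
```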
22. The large-scale architectural scene rendering acceleration system of claim 21, wherein: the rendering preparation unit includes a main thread rendering preparation module.
23. The large-scale architectural scene rendering acceleration system of claim 22, wherein: the work to be completed by the main thread rendering preparation module sequentially comprises: constructing the scene and a spatial index, initializing rendering scene data, and passing in viewpoint and viewport information.
24. The large-scale architectural scene rendering acceleration system of claim 21, wherein: the hierarchical nesting relationship of the space is determined by the occlusion relationship.
25. The large-scale architectural scene rendering acceleration system of claim 21, wherein: the threshold is adjustable.
26. The large-scale architectural scene rendering acceleration system of claim 21, wherein: the sub-thread static preprocessing module constructs the hierarchical nested grid index OOSI by: uniformly dividing the scene into N×N×N three-dimensional grid cells; filling scene primitives into the three-dimensional grid cells according to their spatial extent; determining the hierarchy through the occlusion relationships among the three-dimensional grid cells; and generating the hierarchical nested grid index OOSI.
27. The large-scale architectural scene rendering acceleration system of claim 26, wherein: when scene primitives are filled into the three-dimensional grid cells according to their spatial extent, an intersection test is performed for each three-dimensional grid cell, and the scene primitives intersecting a grid cell serve as the content that the grid cell can index.
28. The large-scale architectural scene rendering acceleration system of claim 27, wherein: the intersection test uses either the triangle data of the scene primitive or the axis-aligned bounding box of the scene primitive: the former is used if accuracy is sought, the latter if speed is sought.
29. The large-scale architectural scene rendering acceleration system of claim 21, wherein: the scene rendering unit directly performs scene rendering through the steps of: traversing the spatial index; view frustum culling; dynamically preprocessing the subsequent batches of rendering data; and ending the current full-frame rendering.
30. The large-scale architectural scene rendering acceleration system of claim 21, wherein: when the first batch of rendering data is being drawn, if the viewpoint enters a room from outside, the scene rendering unit re-traverses the hierarchical nested grid index OOSI to dynamically construct all batches of rendering data and render them; the construction algorithm is: (1) performing view frustum culling on the interior three-dimensional grid cells; (2) performing fast dynamic sorting on the grid cells remaining after culling, sorted by their distance to the viewpoint, such that primitives in grid cells nearer the viewpoint obtain higher rendering priority.
31. A large-scale building scene rendering acceleration device, characterized in that: the device comprises a memory for storing computer readable instructions; and a processor for executing the computer readable instructions, such that, when the instructions are executed, the processor implements the method according to any one of claims 1-20.
32. A storage medium, characterized in that: it stores computer readable instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1-20.
CN201910710424.XA 2019-08-02 2019-08-02 Large-scale building scene rendering acceleration method, system, device and storage medium Active CN110443893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910710424.XA CN110443893B (en) 2019-08-02 2019-08-02 Large-scale building scene rendering acceleration method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910710424.XA CN110443893B (en) 2019-08-02 2019-08-02 Large-scale building scene rendering acceleration method, system, device and storage medium

Publications (2)

Publication Number Publication Date
CN110443893A CN110443893A (en) 2019-11-12
CN110443893B true CN110443893B (en) 2023-04-25

Family

ID=68432877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910710424.XA Active CN110443893B (en) 2019-08-02 2019-08-02 Large-scale building scene rendering acceleration method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN110443893B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969568B (en) * 2019-11-29 2023-06-13 广联达科技股份有限公司 BIM model double-sided display accelerated rendering method, system, product and storage medium
CN111080766B (en) * 2019-12-30 2023-09-01 中科星图股份有限公司 GPU (graphics processing unit) acceleration mass target efficient rendering method based on WebGL
CN111210521B (en) * 2020-01-06 2022-09-16 江南造船(集团)有限责任公司 Ship giant data model lightweight method, system, terminal and medium for VR
CN111369656B (en) * 2020-03-04 2021-08-27 杭州群核信息技术有限公司 WebGL-based editable large-scene progressive real-time rendering method
CN111415401B (en) * 2020-03-25 2023-05-30 上海城建信息科技有限公司 Large-scale scene rendering method based on WebGL
CN111388996A (en) * 2020-04-10 2020-07-10 网易(杭州)网络有限公司 Three-dimensional virtual object display method, device and system, storage medium and equipment
CN111832105B (en) * 2020-06-24 2023-03-24 万翼科技有限公司 Model fusion method and related device
CN111880918B (en) * 2020-07-28 2021-05-18 南京市城市与交通规划设计研究院股份有限公司 Road network front end rendering method and device and electronic equipment
CN111950057A (en) * 2020-08-06 2020-11-17 万翼科技有限公司 Loading method and device of Building Information Model (BIM)
CN111986323B (en) * 2020-08-24 2024-03-22 格兰菲智能科技有限公司 Model simplifying method
CN112257135B (en) * 2020-10-30 2023-09-05 久瓴(上海)智能科技有限公司 Model loading method and device based on multithreading, storage medium and terminal
CN112257134B (en) * 2020-10-30 2022-09-16 久瓴(上海)智能科技有限公司 Model management method and device and electronic equipment
CN113822961B (en) * 2021-09-22 2024-04-26 广州博冠信息科技有限公司 Method, device, equipment and medium for 2D rendering of 3D model
CN116012506A (en) * 2021-10-22 2023-04-25 华为技术有限公司 Processing method, generating method and related device of three-dimensional model data
CN113901062B (en) * 2021-12-07 2022-03-18 浙江高信技术股份有限公司 Pre-loading system based on BIM and GIS
CN114419256B (en) * 2022-01-24 2024-01-23 正元地理信息集团股份有限公司 Urban level BIM data light weight method and system based on multistage shell extraction algorithm
CN116977556B (en) * 2023-07-18 2024-02-06 广东国地规划科技股份有限公司 Rendering method, device and storage medium of CIM system
CN116912395B (en) * 2023-09-14 2024-01-12 武汉蜂鸟龙腾软件有限公司 Graphics hybrid rendering method and device based on OpenGL and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281654A * 2008-05-20 2008-10-08 上海大学 Method for processing large-scale complex three-dimensional scenes based on an octree
CN103942306A (en) * 2014-04-18 2014-07-23 重庆市勘测院 Three-dimensional city model self-adaption scheduling method
CN107481311A * 2017-08-24 2017-12-15 中煤航测遥感集团有限公司 3D urban model rendering method and device
CN108520557A (en) * 2018-04-10 2018-09-11 中国人民解放军战略支援部队信息工程大学 A kind of magnanimity building method for drafting of graph image fusion
WO2019088865A1 (en) * 2017-11-01 2019-05-09 Вебгирз А Гэ Method and system for removing hidden surfaces from a three-dimensional scene


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Large-Scale Building Model Rendering System Based on OSG; Li Guangjie; China Master's Theses Full-text Database, Engineering Science and Technology II; 2018-07-15; C038-95 *

Also Published As

Publication number Publication date
CN110443893A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
CN110443893B (en) Large-scale building scene rendering acceleration method, system, device and storage medium
CN112270756B (en) Data rendering method applied to BIM model file
CN110070613B (en) Large three-dimensional scene webpage display method based on model compression and asynchronous loading
CN108520557B (en) Massive building drawing method with graphic and image fusion
US8692825B2 (en) Parallelized streaming accelerated data structure generation
US8384711B2 (en) Ray tracing a three dimensional scene using a grid
US7561156B2 (en) Adaptive quadtree-based scalable surface rendering
US20100188396A1 (en) Updating Ray Traced Acceleration Data Structures Between Frames Based on Changing Perspective
US9208610B2 (en) Alternate scene representations for optimizing rendering of computer graphics
US8988433B2 (en) Systems and methods for primitive intersection in ray tracing
US20110285710A1 (en) Parallelized Ray Tracing
CN110309458B (en) BIM model display and rendering method based on WebGL
CN113674389B (en) Scene rendering method and device, electronic equipment and storage medium
JP6864495B2 (en) Drawing Global Illumination in 3D scenes
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
US20230230311A1 (en) Rendering Method and Apparatus, and Device
US11756255B2 (en) Method for constructing and traversing accelerating structures
CN111986304A (en) Rendering a scene using a combination of ray tracing and rasterization
RU2680355C1 (en) Method and system of removing invisible surfaces of a three-dimensional scene
CN112906125B (en) Light-weight loading method for BIM model of railway fixed facility
CN108280887A (en) A kind of echo determines method and device
Scholz et al. Level of Detail for Real-Time Volumetric Terrain Rendering.
KR101228118B1 (en) Method for constructing a Kd-tree based on polygon importance
CN112102450B (en) WebGL three-dimensional map-based general method for special effect of marquee
CN116824082B (en) Virtual terrain rendering method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant