CN110738721B - Three-dimensional scene rendering acceleration method and system based on video geometric analysis - Google Patents


Info

Publication number
CN110738721B
CN110738721B (application CN201910969273.XA)
Authority
CN
China
Prior art keywords
model
nodes
rendering
node
video
Prior art date
Legal status
Active
Application number
CN201910969273.XA
Other languages
Chinese (zh)
Other versions
CN110738721A (en)
Inventor
韩宇韬
吕琪菲
张至怡
陈银
党建波
阳松江
Current Assignee
Sichuan Aerospace Shenkun Technology Co ltd
Original Assignee
Sichuan Aerospace Shenkun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Aerospace Shenkun Technology Co ltd
Priority to CN201910969273.XA
Publication of application: CN110738721A
Application granted; publication of granted patent: CN110738721B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to the technical field of three-dimensional scene rendering, and particularly discloses a three-dimensional scene rendering acceleration method and system based on video geometric analysis.

Description

Three-dimensional scene rendering acceleration method and system based on video geometric analysis
Technical Field
The application relates to the technical field of three-dimensional scene rendering, in particular to a three-dimensional scene rendering acceleration method and system based on video geometric analysis.
Background
Current three-dimensional scene rendering techniques mainly comprise scene visibility culling, multi-resolution model simplification, and model data organization. Scene visibility culling removes, before the model coordinate transformation stage, the scene portions that contribute nothing to the final rendered image, and sends only the remainder to the drawing pipeline; this effectively reduces scene complexity and the load on the graphics pipeline, and is a very effective way to improve rendering efficiency. Multi-resolution model simplification generates a level-of-detail (LOD) model during preprocessing, so that tiny or unimportant parts of the model and distant portions of the scene are simplified while the scene content is kept from serious distortion. Model data organization arranges scene information according to a spatial data structure, which effectively improves query speed. Multi-video geometric analysis currently relies on multi-threading for video analysis.
In fusing multi-source massive real-time surveillance video with a unified three-dimensional virtual scene for visualization, rendering the large-scale virtual scene and performing multi-video geometric analysis consume a large amount of time and memory. For scene visibility culling in dynamic scenes, the occlusion tree corresponding to the scene changes as the observer interacts with it; regenerating the occlusion tree dynamically is costly and may even exceed the rendering time of the entire scene. For multi-resolution model simplification, a static LOD model exhibits visual popping when switching between adjacent levels, while a dynamic LOD model must compute an error for every vertex in real time before display; when the data volume is large the computation is enormous, and the entire dataset must also be read into memory, occupying a large amount of it. Model data organization likewise typically requires a preprocessing stage and is therefore better suited to static scenes.
Disclosure of Invention
In view of the above, the present application provides a method and a system for accelerating three-dimensional scene rendering based on video geometric analysis, which can solve or at least partially solve the above-mentioned problems.
In order to solve the technical problems, the technical scheme provided by the application is a three-dimensional scene rendering acceleration method based on video geometric analysis, which comprises the following steps:
s1: preprocessing three-dimensional scene data;
s2: constructing a hierarchical detail model for the three-dimensional scene data;
s3: and calculating the three-dimensional scene data and the video data by adopting a parallel acceleration calculation method combined with the GPU.
Preferably, the method of step S1 includes:
S11: importing the scene model objects to be loaded in batches, parsing them, and obtaining the center position and the maximum and minimum extents of each scene model object's bounding box;
S12: constructing a tree index for the three-dimensional models whose bounding-box center positions and maximum and minimum extents have been acquired, using a spatial R-tree index method in which each node corresponds to the minimum bounding cube containing the corresponding spatial object;
S13: dividing the constructed R-tree into spatial pages, upward from the leaf nodes, by a paging technique to obtain an independent R-tree on each spatial page, recording the current level information and the file pointer to the next level during the division;
S14: placing the divided R-trees into corresponding queues in sequence; exporting an R-tree directly if it contains no leaf nodes, and exporting it after level-of-detail (LOD) processing if it does.
Preferably, the method of step S13 includes:
constructing a view frustum covering the field of view of the observation angle, intersecting it with the spatial bounding cubes of the three-dimensional model, and obtaining, through the spatial relationships among nodes, the indices of the bounding cubes lying within the frustum, whose corresponding R-tree nodes represent the model parts that need to be rendered and displayed;
dividing the R-tree portion corresponding to the selected nodes by the paging technique, taking the locally highest-level non-leaf node as the root, and constructing an independent small R-tree.
Preferably, the method of step S2 includes:
S21: dividing an object for which a hierarchical detail model is to be built into a number of nodes according to a quadtree, where the corresponding details of an object surface node exist as its next-level nodes on the quadtree, and the quadtree subdivision determines the level at which the finally rendered leaf nodes lie;
S22: clipping the vertices of regions outside the view frustum, setting the segmentation information of those regions to false, and setting the segmentation information of leaf nodes within the frustum to true;
S23: setting the Euclidean distance from the viewpoint to a given node and the node complexity within a given region as thresholds for evaluating node subdivision, where the distance threshold is divided into 4 levels and the complexity threshold takes priority over the distance threshold;
S24: when cracks appear during resolution switching, deleting one grid edge on the side of the node with the higher resolution level, or adding one edge on the side with the lower level and merging the adjacent grids, while ensuring that adjacent resolution levels differ by no more than 2;
S25: traversing the currently subdivided quadtree depth-first by recursion and rendering all traversed leaf nodes whose segmentation information is true, thereby completing the construction and rendering of the hierarchical detail model.
Preferably, the method of step S23 includes:
generating the basic model nodes, i.e. the nodes retained at all observation distances; computing by traversal the Euclidean distance from each node to the apex of the view frustum; setting the distance threshold to 4 levels; and increasing the number of model nodes by node subdivision in order from far to near relative to the viewpoint;
for locally more complex parts of the model, setting a distance-weighted average between model nodes as a complexity threshold, and performing local complexity detection while the number of hierarchical model nodes increases, to decide whether to enable the higher-complexity model nodes; this complexity check takes priority over the distance threshold.
Preferably, the method of step S3 includes:
S31: preprocessing the three-dimensional scene buildings and terrain model objects to obtain a large number of small R-trees, and loading the tree indices into the GPU for accelerated rendering;
S32: dividing the video data into corresponding small blocks, in matrix form, according to the specific analysis requirements, and loading them synchronously into the GPU for parallel processing to achieve GPU-accelerated computation;
S33: merging and outputting the scene data and the video data analysis results to complete the finally required rendering effect.
Preferably, the method of step S32 includes:
after the video texture has been rendered onto the model surface, classifying the vertices at the corresponding positions in the vertex shader according to the several planes of the model surface within the mapping range, completing fragment segmentation in the fragment shader, and then performing block-parallel computation with the GPU's cluster of compute units while rendering the video texture synchronously, thereby accelerating rendering.
The application also provides a three-dimensional scene rendering acceleration system based on video geometric analysis, which comprises:
the processing module is used for preprocessing the three-dimensional scene data;
the construction module is used for constructing a hierarchical detail model for the three-dimensional scene data;
and the computing module is used for computing the three-dimensional scene data and the video data by adopting a parallel acceleration computing method combined with the GPU.
The application also provides a three-dimensional scene rendering acceleration system based on video geometric analysis, which comprises:
a memory for storing a computer program;
and the processor is used for executing the computer program to realize the steps of the three-dimensional scene rendering acceleration method based on the video geometric analysis.
The present application also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the three-dimensional scene rendering acceleration method based on video geometry analysis.
Compared with the prior art, the application has the following beneficial effects: it provides a method, based on GPU parallel acceleration, for improving the speed of video information mapping and dynamic model loading in a three-dimensional scene. The three-dimensional scene data are preprocessed and their index structure is partitioned; a hierarchical detail model convenient for accelerated rendering is constructed for the three-dimensional model; a partitionable matrix is constructed for the video information; and the partitioned model indices, the hierarchical detail model and the video image matrix are then imported into the GPU for parallel computation, realizing accelerated rendering of a complex three-dimensional scene and multiple videos.
Drawings
For a clearer description of the embodiments of the present application, the drawings required by the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application; other drawings may be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a flow chart of a three-dimensional scene rendering acceleration method based on video geometric analysis according to an embodiment of the present application;
fig. 2 is a schematic flow chart of preprocessing three-dimensional scene data according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of constructing a hierarchical detail model on three-dimensional scene data according to an embodiment of the present application;
fig. 4 is a schematic flow chart of calculating three-dimensional scene data and video data by using a parallel acceleration calculation method combined with a GPU according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a three-dimensional scene rendering acceleration system based on video geometric analysis according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without making any inventive effort are within the scope of the present application.
In order to make the technical solution of the present application better understood by those skilled in the art, the present application will be further described in detail with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, an embodiment of the present application provides a three-dimensional scene rendering acceleration method based on video geometric analysis, including:
s1: preprocessing three-dimensional scene data;
s2: constructing a hierarchical detail model for the three-dimensional scene data;
s3: and calculating the three-dimensional scene data and the video data by adopting a parallel acceleration calculation method combined with the GPU.
As shown in fig. 2, in view of the high hardware requirements and low rendering efficiency of the prior art, a model data preprocessing method suitable for efficient rendering of three-dimensional urban scenes is provided. Large-scale building models are reorganized so that, when rendering the scene, the user only needs to load the models within the view range, which greatly reduces the rendering requirements and improves rendering efficiency. The specific method of step S1 includes:
s11: and importing and analyzing the scene model objects to be loaded in batches, and obtaining the central position, the maximum value and the minimum value parameters of the scene model object bounding box.
Specifically, a scene model to be loaded, particularly a large-scale building three-dimensional model with a complex structure, is used as a preprocessing object, is converted into a data format supported by a three-dimensional display platform by using an IO read-write plug-in unit, meets the requirements of being capable of processing and displaying, and is imported into a three-dimensional scene in batches; and simultaneously acquiring the central position, the maximum value and the minimum value parameters of the model object bounding box.
S12: and constructing a tree index structure for the three-dimensional model with acquired bounding box central position, maximum value and minimum value parameters, and constructing an R tree for the tree index structure by adopting a space R tree index method, wherein each node corresponds to the minimum bounding cube containing the corresponding space object.
Specifically, building a tree index structure of the three-dimensional model into which the bounding box position parameters are imported and acquired, and building an R tree by adopting a space R tree index method, wherein each node corresponds to a minimum bounding cube containing a corresponding space object, and the specific building scheme is as follows: using a minimum boundary cube to contain a space area with a model, taking a building model cluster as an example, splitting the cluster into building units of different levels according to a certain rule, surrounding each split level by using the minimum boundary cube, and recording three non-coplanar vertex coordinates of the cube to represent the node index; and taking the coordinate index of the whole bounding cube of the highest level as a root node, wherein the bounding cubes of the minimum units correspond to leaf nodes, all non-leaf node cubes comprise lower-layer cubes, and the like, and constructing the whole R tree of the regional building.
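The bottom-up bounding-cube construction described above can be sketched as follows. This is a minimal illustration, not the patent's data format: the class name, field names and coordinates are assumptions, and R-tree node-splitting rules are omitted. Each leaf wraps one building unit's axis-aligned bounding box, and every internal node's cube is computed as the minimum box enclosing its children.

```python
# Minimal sketch of a bounding-cube tree node: leaves wrap model objects,
# internal nodes compute the minimum cube enclosing their children.
class CubeNode:
    def __init__(self, children=None, obj=None):
        self.children = children or []
        self.obj = obj  # model object at a leaf, None for internal nodes
        if obj is not None:
            self.lo, self.hi = obj["lo"], obj["hi"]
        else:
            # enclose all child cubes: per-axis min of lows, max of highs
            self.lo = tuple(min(c.lo[i] for c in self.children) for i in range(3))
            self.hi = tuple(max(c.hi[i] for c in self.children) for i in range(3))

    def is_leaf(self):
        return self.obj is not None

# Two building units with precomputed axis-aligned bounding boxes
# (illustrative coordinates).
units = [
    {"name": "blockA", "lo": (0, 0, 0), "hi": (10, 10, 30)},
    {"name": "blockB", "lo": (20, 0, 0), "hi": (35, 12, 25)},
]
root = CubeNode(children=[CubeNode(obj=u) for u in units])
print(root.lo, root.hi)  # the root cube encloses both units
```

In a full implementation the intermediate levels would follow the split rule the patent alludes to, with the district-level cube as root and the smallest units as leaves.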
S13: and (3) carrying out space page division on the constructed R tree from the leaf node upwards through a paging technology to obtain independent R trees on each space page, and recording current level information and file pointing to the next layer in the dividing process.
Specifically, the method of step S13 includes:
constructing a view body in the view field range of the observation view angle, performing intersection operation with a space bounding cube of the three-dimensional model, and acquiring a bounding cube index positioned in the view body range through a spatial relationship among nodes, wherein corresponding nodes corresponding to the R tree represent model parts needing rendering and display;
and dividing the R tree part corresponding to the selected node by a paging technology, taking the local highest-level non-leaf node as a root node, and constructing an independent small R tree.
Specifically, space page division is carried out on the constructed R tree from the leaf node upwards through a paging technology, an independent R tree on each space page is obtained, and current level information and file pointing to the next layer are recorded in the dividing process. The technology corresponds to an actual display rendering process, and the specific implementation scheme is as follows: firstly, constructing a view body in the view field range of an observation view angle, performing intersection operation with a space bounding cube of a model, and acquiring a bounding cube index positioned in the view body range through a spatial relationship among nodes, wherein corresponding nodes corresponding to R trees represent the model part needing rendering and display; and then dividing the R tree part corresponding to the selected node by a paging technology, taking the local highest-level non-leaf node as a root node, and constructing an independent small R tree.
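The culling part of this step can be illustrated as below. For simplicity the view frustum is approximated here by an axis-aligned box, which is a deliberate simplification: a real implementation would test each cube against the six frustum planes. All names and coordinates are illustrative, not from the patent.

```python
# Sketch of frustum culling against bounding cubes, with the frustum
# approximated by an axis-aligned view box (a simplification).
def boxes_intersect(a, b):
    # Two AABBs overlap iff they overlap on every axis.
    return all(a["lo"][i] <= b["hi"][i] and b["lo"][i] <= a["hi"][i]
               for i in range(3))

view_box = {"lo": (0, 0, 0), "hi": (15, 15, 15)}
cubes = {
    "near_block": {"lo": (5, 5, 0),   "hi": (12, 12, 8)},
    "far_block":  {"lo": (40, 40, 0), "hi": (55, 50, 20)},
}
# Only cubes intersecting the view box are kept for rendering;
# their R-tree nodes would then be paged into a small independent tree.
visible = [name for name, c in cubes.items() if boxes_intersect(c, view_box)]
print(visible)
```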
S14: and sequentially placing the multiple R trees after division into corresponding queues, directly exporting the R trees if the R trees do not contain leaf nodes, and exporting the R trees after detail level LOD processing is performed on the R trees if the R trees contain leaf nodes.
Specifically, a display node queue is constructed and used for a preselected process of rendering pipeline processing, nodes to be displayed corresponding to a plurality of small R tree indexes which are divided are sequentially put into the queue, and trees which do not contain leaf nodes are directly exported for further processing; if the fruit tree contains leaf nodes, the detail level LOD processing is carried out on the model of the corresponding surrounding cube of the R tree, and then the detail level LOD processing is exported.
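The queue handling above can be sketched as follows; the record layout and the LOD stand-in are assumptions for illustration, since the patent does not specify a data format.

```python
from collections import deque

# Sketch of the display-node queue of step S14: divided small R-trees are
# queued in order; trees with leaf nodes pass through LOD processing
# (represented here by a flag) before export, others are exported directly.
def process_trees(trees):
    queue = deque(trees)
    exported = []
    while queue:
        tree = queue.popleft()
        if tree["has_leaves"]:
            tree = dict(tree, lod_processed=True)  # stand-in for LOD processing
        exported.append(tree["name"])
    return exported

trees = [{"name": "pageA", "has_leaves": False},
         {"name": "pageB", "has_leaves": True}]
print(process_trees(trees))  # both exported, pageB after LOD processing
```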
As shown in fig. 3, in view of the problems in current three-dimensional scenes of low direct-rendering efficiency and severe frame-rate fluctuation when moving the view angle, caused by the growing scale and structural complexity of building models and ever-larger DEMs, the specific method of step S2 includes:
s21: dividing an object needing to construct a hierarchical detail model into a plurality of nodes according to a quadtree, wherein corresponding details exist on the quadtree of the object surface node as a next-stage node, and the segmentation of the quadtree determines which layer the finally rendered leaf node is.
Specifically, after the small R tree corresponding to the three-dimensional scene model object (which may be a building or a plot unit) to be rendered in step S14 is determined and derived, the R tree index including the leaf nodes is divided into a plurality of levels of nodes according to the quadtree segmentation, the corresponding details exist as the next level of nodes on the quadtree of the object surface node, and the segmentation result of the quadtree determines which layer the finally rendered leaf node is located, so that the corresponding model object is constructed into a multi-level detail model according to the observation distance of the viewpoint.
S22: and (3) carrying out vertex clipping work on the area outside the range of the view body, setting the segmentation information of the area outside the range of the view body as false, and setting the segmentation information of the leaf nodes in the range of the view body as true.
Specifically, the vertex clipping work is performed on the area outside the view range, and the segmentation discrimination information is set to false, namely, the paging processing is not performed on the corresponding R tree part, and the original whole R tree structure is still reserved, so that the corresponding model nodes cannot participate in the construction queue of the display nodes, and the effect of not rendering the model outside the view range is achieved. The segmentation information of the leaf nodes in the view volume is set to true.
S23: the Euclidean distance threshold value from the observation point to a specific node and the node complexity threshold value in a certain area are set to serve as evaluation standards for node segmentation, wherein the distance threshold value is set to be 4 levels, and the complexity threshold value is judged to be more priority than the distance threshold value.
Specifically, the method of step S23 includes:
generating the basic model nodes, i.e. the nodes retained at all observation distances; computing by traversal the Euclidean distance from each node to the apex of the view frustum; setting the distance threshold to 4 levels; and increasing the number of model nodes by node subdivision in order from far to near relative to the viewpoint, so as to raise the model complexity;
for locally more complex parts of the model, setting a distance-weighted average between model nodes as a complexity threshold, and performing local complexity detection while the number of hierarchical model nodes increases, to decide whether to enable the higher-complexity model nodes; this complexity check takes priority over the distance threshold.
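The two criteria can be sketched as below. The distance bands and the complexity threshold value are invented for illustration (the patent only fixes "4 levels" and the priority of the complexity check); the local-complexity measure is likewise a stand-in for the distance-weighted average the text describes.

```python
# Sketch of the 4-level distance threshold with a complexity override.
def lod_level(distance, bands=(50.0, 100.0, 200.0, 400.0)):
    # 4 distance bands: nearest band yields level 4 (most detail),
    # farthest yields level 1; beyond all bands only the base model remains.
    for i, limit in enumerate(bands):
        if distance <= limit:
            return len(bands) - i
    return 1

def choose_level(distance, local_complexity, complexity_threshold=0.8):
    level = lod_level(distance)
    # Per the text, the complexity check takes priority: a locally complex
    # region enables the highest-complexity nodes regardless of distance.
    if local_complexity > complexity_threshold:
        level = max(level, 4)
    return level

print(choose_level(150.0, 0.2))  # distance alone decides: level 2
print(choose_level(150.0, 0.9))  # complex local area forced to level 4
```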
S24: when cracks appear in the resolution switching process, deleting one grid edge at the node side with higher resolution level, or adding one edge at the node side with lower resolution level, merging adjacent grids, and meanwhile, the adjacent resolution level difference is not more than 2 levels.
Specifically, when the model object with multi-layer complexity meets the model node switching condition, the problem of asynchronous local areas can occur due to the reasons of view point position, threshold value calculation and the like, so that the phenomenon of splicing cracks occurs between models with different complexity, and the solution to the problem is as follows: one grid edge can be deleted on the side of the model node with higher complexity level, or one grid edge can be added on the side of the node with lower complexity level, and adjacent grids are combined, and the difference between the adjacent complexity levels is not more than 2 levels.
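The level-difference constraint can be enforced as a post-pass over selected LOD levels, sketched below. The node/neighbour representation is an assumption for illustration; the actual edge deletion/addition on the meshes is omitted, only the "differ by no more than 2" rule is shown.

```python
# Sketch of the crack-prevention constraint: clamp LOD levels so that no
# two neighbouring nodes differ by more than max_diff levels.
def clamp_levels(levels, neighbours, max_diff=2):
    changed = True
    while changed:  # iterate until the constraint holds everywhere
        changed = False
        for a, b in neighbours:
            if levels[a] - levels[b] > max_diff:
                levels[a] = levels[b] + max_diff
                changed = True
            elif levels[b] - levels[a] > max_diff:
                levels[b] = levels[a] + max_diff
                changed = True
    return levels

# Node 0 is far more detailed than its neighbour 1 and gets clamped down.
print(clamp_levels({0: 6, 1: 1, 2: 3}, [(0, 1), (1, 2)]))
```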
S25: and performing depth-first traversal nodes on the quadtree which is currently segmented by using a recursion method, and rendering all leaf nodes which are traversed and have the segmentation information of true, thereby completing the construction and rendering of the hierarchical detail model.
Specifically, depth-first traversing is performed on the current quadtree hierarchy node subjected to segmentation by using a recursion method, all leaf nodes which are traversed and have segmentation information of true, namely in an observation range, the node complexity level which actually participates in rendering is determined according to the current observation viewpoint, and rendering preparations are derived together with model objects which do not contain the corresponding leaf node tree in step S14, so that the construction of a hierarchical detail model is completed.
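Step S25 reduces to a recursive depth-first traversal that emits only the in-view leaves, sketched below. The node layout (dicts with a `flag` field) is illustrative, not the patent's format.

```python
# Sketch of step S25: depth-first recursion over the subdivided quadtree,
# rendering every leaf whose segmentation flag is True (in view).
def render(node, out):
    children = node.get("children")
    if not children:              # leaf node reached
        if node.get("flag"):      # segmentation information == true
            out.append(node["name"])
        return
    for child in children:        # depth-first recursion
        render(child, out)

tree = {"children": [
    {"name": "q0", "flag": True},
    {"children": [{"name": "q10", "flag": True},
                  {"name": "q11", "flag": False}]},
]}
drawn = []
render(tree, drawn)
print(drawn)  # only in-view leaves are rendered
```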
As shown in fig. 4, in step S3 the three-dimensional scene data and the multi-video data are processed by a parallel acceleration method combined with the GPU, and the model object nodes generated in the previous two steps and placed in the rendering queue are rendered with hardware acceleration to realize the actual visual effect. Specifically:
s31: preprocessing a three-dimensional scene building and a terrain model object to obtain a large number of small R trees, and loading tree indexes into the GPU for accelerated rendering.
Specifically, the R tree indexes of the three-dimensional model segmentation of the building generated in the data preprocessing stage are integrated, the R tree indexes are imported into the GPU computing unit according to the depth-first traversal sequence, and the model which is segmented into a large number of small R tree index organizations is subjected to accelerated rendering by means of the GPU.
S32: the video data is divided into corresponding small blocks according to specific analysis requirements in a matrix form, and the small blocks are synchronously loaded into the GPU for parallel processing, so that the GPU acceleration operation processing effect is realized.
The method of step S32 includes:
and after the video texture is rendered to be attached to the model surface, classifying the vertexes at the corresponding positions in the vertex shader according to a plurality of planes of the model surface within the mapping range, completing fragment segmentation in the fragment shader, and then performing block parallel computing processing by using a cluster computing unit of the GPU, and synchronously rendering the video texture, thereby accelerating rendering.
Specifically, video information which is accessed from the outside and is acquired by monitoring equipment or a network is mapped into a three-dimensional scene, the video information is divided into corresponding small blocks in a matrix form according to specific analysis requirements, and the small blocks are synchronously loaded into a GPU for parallel processing, so that the operation processing effect of GPU acceleration is realized. The specific scheme is as follows: video information data mapped to the three-dimensional model surface in a scene is obtained, after video textures are rendered and attached to the model surface, corresponding position vertexes are classified in a vertex shader according to a plurality of planes of the model surface in a mapping range, fragment segmentation is completed in a fragment shader, then a cluster computing unit of a GPU is utilized for carrying out block parallel computing processing, and the video textures are synchronously rendered, so that the rendering is accelerated.
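The matrix-form division of a frame into equal blocks can be sketched as follows; each block would then be handed to a separate GPU compute unit. Frame size and grid shape are illustrative values, and the GPU dispatch itself is not shown.

```python
# Sketch of splitting a video frame into a rows x cols matrix of equal
# blocks, each described by (x, y, block_width, block_height).
def tile_frame(width, height, cols, rows):
    bw, bh = width // cols, height // rows
    return [(c * bw, r * bh, bw, bh)
            for r in range(rows) for c in range(cols)]

blocks = tile_frame(1920, 1080, 4, 2)  # a 4x2 grid of tiles
print(len(blocks), blocks[0], blocks[-1])
```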
S33: and combining and outputting scene data and video data analysis results to finish the finally required rendering effect.
Specifically, in the three-dimensional scene, the background is output together with the hierarchical detail model which finishes accelerated rendering by utilizing the GPU and the video texture on the hierarchical detail model to finish the final required rendering effect, and in addition, the application of target recognition, tracking monitoring, quantity statistics and other analysis on targets such as people, vehicles and the like in the video can be expanded on the basis of video information.
According to the three-dimensional scene rendering acceleration method provided by the application, in the process of fusing multi-source massive real-time surveillance video with a unified three-dimensional virtual scene for visualization, complex three-dimensional scene rendering and multi-video geometric analysis are performed with GPU-combined parallel accelerated computation, which improves the rendering speed of the three-dimensional scene and the geometric analysis speed of the video under given graphics hardware conditions and guarantees the real-time drawing frame rate. The main advantages are: (1) the method preserves the realism and immersion of the three-dimensional scene and the fluency of multi-video geometric analysis, and guarantees timely response to user interaction; (2) parallel accelerated rendering effectively increases the amount of three-dimensional terrain scene data processed per unit time, ensuring both the efficiency and the quality of data processing.
As shown in fig. 5, an embodiment of the present application further provides a three-dimensional scene rendering acceleration system based on video geometric analysis, including:
the processing module is used for preprocessing the three-dimensional scene data;
the construction module is used for constructing a hierarchical detail model for the three-dimensional scene data;
and the computing module is used for computing the three-dimensional scene data and the video data by adopting a parallel acceleration computing method combined with the GPU.
For the features of the embodiment corresponding to fig. 5, refer to the related descriptions of the embodiments corresponding to fig. 1 to 4; they are not repeated here.
The embodiment of the application also provides a three-dimensional scene rendering acceleration system based on video geometric analysis, comprising: a memory for storing a computer program; and a processor for executing the computer program to implement the steps of the three-dimensional scene rendering acceleration method based on video geometric analysis as described above.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the steps of the three-dimensional scene rendering acceleration method based on video geometric analysis when being executed by a processor.
The three-dimensional scene rendering acceleration method, system and computer-readable storage medium based on video geometric analysis provided by the embodiments of the application have been described in detail above. The embodiments are described in a progressive manner, each focusing mainly on its differences from the others; for the parts the embodiments have in common, reference may be made between them. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant points can be found in the description of the method. It should be noted that those skilled in the art can make various modifications and adaptations to the application without departing from its principles, and such modifications and adaptations fall within the scope of the application as defined by the following claims.
Those of skill would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To illustrate this interchangeability of hardware and software clearly, the various illustrative units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

Claims (6)

1. A three-dimensional scene rendering acceleration method based on video geometric analysis is characterized by comprising the following steps:
s1: preprocessing three-dimensional scene data;
s2: constructing a hierarchical detail model for the three-dimensional scene data, specifically: dividing an object to be constructed into a plurality of nodes according to a quadtree, wherein the corresponding details of a node on the object surface exist on the quadtree as its next-level nodes, and the segmentation of the quadtree determines the layer at which the finally rendered leaf nodes lie; culling the vertices of areas outside the view-frustum range and setting their segmentation information to false, and setting the segmentation information of leaf nodes within the view-frustum range to true; generating basic model nodes, namely the nodes retained at all observation distances, traversing the nodes to calculate the Euclidean distance from each node to the apex of the view frustum of the observation view, setting the distance threshold to 4 levels, and increasing the number of model nodes by node segmentation successively from far to near relative to the viewpoint; for local areas of a more complex model, setting a distance-weighted average value among model nodes, namely a complexity threshold, performing local complexity detection while the number of hierarchical model nodes increases, and deciding whether to enable the model nodes of higher complexity, this threshold detection taking priority over the distance threshold; when cracks appear during resolution switching, deleting one grid edge on the node side with the higher resolution level, or adding one edge on the node side with the lower resolution level, and merging the adjacent grids, while ensuring that the difference between adjacent resolution levels does not exceed 2; and performing a depth-first traversal of the nodes of the currently segmented quadtree by recursion, and rendering all traversed leaf nodes whose segmentation information is true, thereby completing the construction and rendering of the hierarchical detail model;
s3: calculating the three-dimensional scene data and the video data by a parallel accelerated computation method combined with a GPU (graphics processing unit), specifically: preprocessing the three-dimensional scene buildings and terrain model objects to obtain a large number of small R trees, and loading the tree indexes into the GPU for accelerated rendering; acquiring the video data mapped onto three-dimensional model surfaces in the scene, and after the video texture is rendered and attached to a model surface, classifying the vertices at the corresponding positions in the vertex shader according to the several planes of the model surface within the mapping range, completing fragment segmentation in the fragment shader, then performing block-parallel computation with the cluster computing units of the GPU and rendering the video textures synchronously, thereby accelerating rendering; and combining and outputting the scene data and the video-data analysis results to produce the final required rendering effect.
2. The method of accelerating the rendering of a three-dimensional scene based on geometric analysis of video according to claim 1, wherein the method of step S1 comprises:
s11: importing and analyzing the scene model objects to be loaded in batches, and obtaining the center-position, maximum-value and minimum-value parameters of each scene model object's bounding box;
s12: constructing a tree index structure for the three-dimensional models whose bounding-box center-position, maximum-value and minimum-value parameters have been acquired, the tree index structure being built as an R tree by a spatial R-tree index method, wherein each node corresponds to the minimum bounding cube containing the corresponding spatial object;
s13: dividing the constructed R tree upward from the leaf nodes by a paging technique to obtain an independent R tree on each spatial page, recording the current level information and the file pointing to the next layer during the division;
s14: placing the divided R trees in turn into corresponding queues; if an R tree contains no leaf nodes, exporting it directly; if it contains leaf nodes, exporting it after level-of-detail (LOD) processing.
3. The method of accelerating the rendering of a three-dimensional scene based on geometric analysis of video according to claim 2, wherein the method of step S13 comprises:
constructing a view frustum within the field-of-view range of the observation view angle, performing an intersection operation with the spatial bounding cubes of the three-dimensional models, and obtaining, through the spatial relationships among nodes, the indexes of the bounding cubes located within the view frustum, whose corresponding nodes in the R tree represent the model parts that need to be rendered and displayed;
and dividing the part of the R tree corresponding to the selected nodes by the paging technique, taking the local highest-level non-leaf node as the root node to construct an independent small R tree.
4. A three-dimensional scene rendering acceleration system based on video geometry analysis, comprising:
the processing module is used for preprocessing the three-dimensional scene data;
the construction module is used for constructing a hierarchical detail model for the three-dimensional scene data, specifically: dividing an object to be constructed into a plurality of nodes according to a quadtree, wherein the corresponding details of a node on the object surface exist on the quadtree as its next-level nodes, and the segmentation of the quadtree determines the layer at which the finally rendered leaf nodes lie; culling the vertices of areas outside the view-frustum range and setting their segmentation information to false, and setting the segmentation information of leaf nodes within the view-frustum range to true; generating basic model nodes, namely the nodes retained at all observation distances, traversing the nodes to calculate the Euclidean distance from each node to the apex of the view frustum of the observation view, setting the distance threshold to 4 levels, and increasing the number of model nodes by node segmentation successively from far to near relative to the viewpoint; for local areas of a more complex model, setting a distance-weighted average value among model nodes, namely a complexity threshold, performing local complexity detection while the number of hierarchical model nodes increases, and deciding whether to enable the model nodes of higher complexity, this threshold detection taking priority over the distance threshold; when cracks appear during resolution switching, deleting one grid edge on the node side with the higher resolution level, or adding one edge on the node side with the lower resolution level, and merging the adjacent grids, while ensuring that the difference between adjacent resolution levels does not exceed 2; and performing a depth-first traversal of the nodes of the currently segmented quadtree by recursion, and rendering all traversed leaf nodes whose segmentation information is true, thereby completing the construction and rendering of the hierarchical detail model;
the computing module is used for calculating the three-dimensional scene data and the video data by a parallel accelerated computation method combined with a GPU, specifically: preprocessing the three-dimensional scene buildings and terrain model objects to obtain a large number of small R trees, and loading the tree indexes into the GPU for accelerated rendering; acquiring the video data mapped onto three-dimensional model surfaces in the scene, and after the video texture is rendered and attached to a model surface, classifying the vertices at the corresponding positions in the vertex shader according to the several planes of the model surface within the mapping range, completing fragment segmentation in the fragment shader, then performing block-parallel computation with the cluster computing units of the GPU and rendering the video textures synchronously, thereby accelerating rendering; and combining and outputting the scene data and the video-data analysis results to produce the final required rendering effect.
5. A three-dimensional scene rendering acceleration system based on video geometry analysis, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the video geometry analysis-based three-dimensional scene rendering acceleration method of any one of claims 1 to 3.
6. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the steps of the three-dimensional scene rendering acceleration method based on video geometry analysis according to any one of claims 1 to 3.
CN201910969273.XA 2019-10-12 2019-10-12 Three-dimensional scene rendering acceleration method and system based on video geometric analysis Active CN110738721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910969273.XA CN110738721B (en) 2019-10-12 2019-10-12 Three-dimensional scene rendering acceleration method and system based on video geometric analysis


Publications (2)

Publication Number Publication Date
CN110738721A CN110738721A (en) 2020-01-31
CN110738721B true CN110738721B (en) 2023-09-01

Family

ID=69268830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910969273.XA Active CN110738721B (en) 2019-10-12 2019-10-12 Three-dimensional scene rendering acceleration method and system based on video geometric analysis

Country Status (1)

Country Link
CN (1) CN110738721B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402382B (en) * 2020-03-18 2023-04-07 东南数字经济发展研究院 Classification optimization method for improving data rendering efficiency of layered and partitioned three-dimensional model
CN111563948B (en) * 2020-03-30 2022-09-30 南京舆图科技发展有限公司 Virtual terrain rendering method for dynamically processing and caching resources based on GPU
CN111899585B (en) * 2020-07-23 2022-04-15 国网上海市电力公司 Simulation training system and method for manufacturing cable accessories
CN112070909A (en) * 2020-09-02 2020-12-11 中国石油工程建设有限公司 Engineering three-dimensional model LOD output method based on 3D Tiles
CN114463473A (en) * 2020-11-09 2022-05-10 中兴通讯股份有限公司 Image rendering processing method and device, storage medium and electronic equipment
CN112215935B (en) * 2020-12-02 2021-04-16 江西博微新技术有限公司 LOD model automatic switching method and device, electronic equipment and storage medium
CN112231020B (en) * 2020-12-16 2021-04-20 成都完美时空网络技术有限公司 Model switching method and device, electronic equipment and storage medium
CN113342999B (en) * 2021-05-07 2022-08-05 上海大学 Variable-resolution-ratio point cloud simplification method based on multi-layer skip sequence tree structure
CN116012506A (en) * 2021-10-22 2023-04-25 华为技术有限公司 Processing method, generating method and related device of three-dimensional model data
CN114581573A (en) * 2021-12-13 2022-06-03 北京市建筑设计研究院有限公司 Local rendering method and device of three-dimensional scene, electronic equipment and storage medium
CN114359500B (en) * 2022-03-10 2022-05-24 西南交通大学 Three-dimensional modeling and visualization method for landslide hazard range prediction
CN115311412B (en) * 2022-08-09 2023-03-31 北京飞渡科技股份有限公司 Load-balanced large-volume three-dimensional scene LOD construction method and device and electronic equipment
CN115661327B (en) * 2022-12-09 2023-05-30 北京盈建科软件股份有限公司 Distributed virtual node rendering method and device of BIM platform graphic engine
CN116309974B (en) * 2022-12-21 2023-11-28 四川聚川诚名网络科技有限公司 Animation scene rendering method, system, electronic equipment and medium
CN116433821B (en) * 2023-04-17 2024-01-23 上海臻图信息技术有限公司 Three-dimensional model rendering method, medium and device for pre-generating view point index
CN116740249B (en) * 2023-08-15 2023-10-27 湖南马栏山视频先进技术研究院有限公司 Distributed three-dimensional scene rendering system
CN117611472B (en) * 2024-01-24 2024-04-09 四川物通科技有限公司 Fusion method for metaspace and cloud rendering

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012083508A1 (en) * 2010-12-24 2012-06-28 中国科学院自动化研究所 Fast rendering method of third dimension of complex scenes in internet
CN102867331A (en) * 2012-08-31 2013-01-09 电子科技大学 Graphics processing unit (GPU)-orientated large-scale terrain fast drawing method
CN103268342A (en) * 2013-05-21 2013-08-28 北京大学 DEM dynamic visualization accelerating system and method based on CUDA
US8913068B1 (en) * 2011-07-12 2014-12-16 Google Inc. Displaying video on a browser
CN104751505A (en) * 2013-06-19 2015-07-01 国家电网公司 Three-dimensional scene rendering algorithm based on LOD (Levels of Detail) model and quadtree level structure
US9330486B1 (en) * 2012-08-07 2016-05-03 Lockheed Martin Corporation Optimizations of three-dimensional (3D) geometry
CN105957149A (en) * 2016-05-31 2016-09-21 浙江科澜信息技术有限公司 Urban three-dimensional model data preprocessing method suitable for high-efficiency rendering
CN106027855A (en) * 2016-05-16 2016-10-12 深圳迪乐普数码科技有限公司 Method and terminal for realizing virtual rocker arm
CN107340501A (en) * 2017-07-02 2017-11-10 中国航空工业集团公司雷华电子技术研究所 Radar video method of processing display based on OpenGL ES
CN107835436A (en) * 2017-09-25 2018-03-23 北京航空航天大学 A kind of real-time virtual reality fusion live broadcast system and method based on WebGL
CN107886564A (en) * 2017-10-13 2018-04-06 上海秉匠信息科技有限公司 The method shown for realizing three-dimensional scenic
CN109945817A (en) * 2019-05-07 2019-06-28 四川航天神坤科技有限公司 One population Molded Depth degree measuring device
CN110084739A (en) * 2019-03-28 2019-08-02 东南大学 A kind of parallel acceleration system of FPGA of the picture quality enhancement algorithm based on CNN

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7961194B2 (en) * 2003-11-19 2011-06-14 Lucid Information Technology, Ltd. Method of controlling in real time the switching of modes of parallel operation of a multi-mode parallel graphics processing subsystem embodied within a host computing system
US9214137B2 (en) * 2012-06-18 2015-12-15 Xerox Corporation Methods and systems for realistic rendering of digital objects in augmented reality
US9996976B2 (en) * 2014-05-05 2018-06-12 Avigilon Fortress Corporation System and method for real-time overlay of map features onto a video feed
US11416282B2 (en) * 2015-05-26 2022-08-16 Blaize, Inc. Configurable scheduler in a graph streaming processing system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Virtual Reality Engine; Luo Guan, Hao Chongyang, Huai Yongjian, Zhang Xianyong, Gao Xiaobin; Chinese Journal of Computers (《计算机学报》); full text *


Similar Documents

Publication Publication Date Title
CN110738721B (en) Three-dimensional scene rendering acceleration method and system based on video geometric analysis
US8570322B2 (en) Method, system, and computer program product for efficient ray tracing of micropolygon geometry
CN108520557B (en) Massive building drawing method with graphic and image fusion
KR101546703B1 (en) System for processing massive bim data of building
KR101546705B1 (en) Method for visualizing building-inside bim data by bim data process terminal
CN113034656B (en) Rendering method, device and equipment for illumination information in game scene
CN112308974B (en) Large-scale point cloud visualization method for improving octree and adaptive reading
CN105205861A (en) Tree three-dimensional visualization model realization method based on Sphere-Board
CN113034657B (en) Rendering method, device and equipment for illumination information in game scene
CN112906125B (en) Light-weight loading method for BIM model of railway fixed facility
Deng et al. Multiresolution foliage for forest rendering
CN113470172B (en) Method for converting OBJ three-dimensional model into 3DTiles
CN116958457A (en) OSGEarth-based war misting effect drawing method
Scholz et al. Level of Detail for Real-Time Volumetric Terrain Rendering.
CN110738719A (en) Web3D model rendering method based on visual range hierarchical optimization
CN112528508B (en) Electromagnetic visualization method and device
Bittner Hierarchical techniques for visibility determination
CN117237503B (en) Geographic element data accelerated rendering and device
KR100726031B1 (en) A terrain rendering method using a cube mesh structure
KR102061835B1 (en) How to implement LOD in non-square Grid data with NaN
Zhang Foliage Simplification Based on Multi-viewpoints for Efficient Rendering.
Hoppe et al. Adaptive meshing and detail-reduction of 3D-point clouds from laser scans
Wang et al. Visibility-culling-based geometric rendering of large-scale particle data
Ying et al. Implementation of a fast simulation algorithm for terrain based on Dynamic LOD
CN115496871A (en) Three-dimensional visualization method and device for multi-resolution digital elevation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant