CN112509118B - Large-scale point cloud visualization method capable of preloading nodes and self-adaptive filling - Google Patents

Large-scale point cloud visualization method capable of preloading nodes and self-adaptive filling

Info

Publication number
CN112509118B
Authority
CN
China
Prior art keywords
nodes
point cloud
node
lod
layer
Prior art date
Legal status
Active
Application number
CN202011388457.6A
Other languages
Chinese (zh)
Other versions
CN112509118A (en)
Inventor
汪俊
黄安义
谢乾
Current Assignee
Nanjing Yuntong Technology Co.,Ltd.
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202011388457.6A priority Critical patent/CN112509118B/en
Publication of CN112509118A publication Critical patent/CN112509118A/en
Application granted granted Critical
Publication of CN112509118B publication Critical patent/CN112509118B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/005Tree description, e.g. octree, quadtree
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation

Abstract

The invention relates to a large-scale point cloud visualization method with node preloading and adaptive filling, comprising the following steps: S1, constructing an octree point cloud spatial index structure with OBB outer bounding boxes and reading the point cloud data; S2, constructing an LOD model based on the octree spatial index structure; S3, selecting the LOD level to be visualized based on the view-frustum viewpoint distance, and selecting nodes according to a visibility judgment with frustum occlusion culling; S4, when the viewing angle changes, preloading the nodes likely to be used in the next frame according to the view-angle information of the current frame; S5, reading the nodes selected in step S3 and performing visual rendering; S6, when the viewing angle does not change, performing adaptive point cloud filling to improve the visualization effect. The method solves the problems of the prior art in large-scale point cloud visualization, namely slow data loading when the viewing angle changes and wasted resource budget when it does not.

Description

Large-scale point cloud visualization method capable of preloading nodes and self-adaptive filling
Technical Field
The invention belongs to the technical field of three-dimensional data visualization, and particularly relates to a large-scale point cloud visualization method capable of preloading nodes and self-adaptive filling.
Background
With the construction of digital cities and the rapid development of large-scale three-dimensional data acquisition technologies such as three-dimensional laser scanning and dense matching of aerospace and aerial images, massive point cloud data are generated. Research shows that when a system capable of long-term stable operation is established, information data accounts for more than half of the total investment; spatial data therefore plays an important role in the geographic information industry. The gradual improvement of computer hardware and progress in spatial three-dimensional modeling technology have carried three-dimensional information into many fields and promoted their development, such as the multidisciplinary crossing fields of digital twins, military simulation, city planning, and game entertainment. Three-dimensional information data record the spatial information of an object and the geometric information of its surface; with this information the three-dimensional geometric form of the object can be analyzed, and in practical applications three-dimensional ground objects can be observed from multiple angles to obtain more detailed information. However, point clouds at the hundred-million-point scale are now common in many fields, the data volume is very large, and how to visualize such large-scale point clouds in real time is particularly critical.
A reasonable and efficient data structure, visualization strategy, and fast access to large-scale point clouds are directly related to the visualization effect of point cloud data in actual engineering. The visualization quality and real-time performance of a large-scale point cloud are tied to the efficiency of reconstruction and processing of the data, which is also a problem that must be solved before three-dimensional laser technology can be widely applied.
In recent years many point cloud visualization methods have been proposed, but when visualizing large-scale point clouds the prior art still suffers from slow data loading when the viewing angle changes and wasted resource budget when it does not, and no effective solution to these problems exists at present.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a large-scale point cloud visualization method with node preloading and adaptive filling, which solves the problems of slow data loading when the viewing angle changes and wasted resource budget when the viewing angle does not change in large-scale point cloud visualization.
In order to achieve the purpose, the invention adopts the following technical scheme:
a large-scale point cloud visualization method capable of preloading nodes and self-adaptive filling comprises the following steps:
s1, constructing an octree point cloud space index structure of an OBB outer enclosure box, and reading point cloud data;
s2, constructing an LOD model based on the octree point cloud space index structure;
s3, selecting the LOD level to be visualized based on the view-frustum viewpoint distance; selecting nodes according to a visibility judgment with frustum occlusion culling;
step S4, when the view angle changes, preloading the nodes possibly used in the next frame in advance according to the view angle information of the current frame;
step S5, reading the node selected in the step S3, and performing visual rendering;
and step S6, when the visual angle is not changed, performing point cloud self-adaptive filling, and improving the point cloud visualization effect.
Further, step S1 includes:
s101, setting a minimum outer bounding box according to an OBB method, dividing the minimum outer bounding box into a plurality of sub outer bounding boxes with corresponding layers, and calculating a corresponding coordinate position of each sub outer bounding box;
step S102, reading the whole point cloud in several small batches and placing each point into the corresponding octree leaf node according to its coordinates; if the number of points in a leaf node exceeds the set maximum storage threshold of a node, the current leaf node becomes a parent node, new leaf nodes of the next layer are generated, and all points in the current leaf node are distributed to the next layer; after each batch is processed the data is stored once to reduce memory occupation, until all points finally fall into the leaf nodes of the octree; finally, the leaf nodes are traversed and empty nodes are deleted, and if all eight child nodes of a parent node are empty, the parent node is deleted as well;
and S103, storing the point cloud data in the final octree structure.
Further, in step S101, every parent bounding box of the octree is uniformly divided into eight child outer bounding boxes by halving its length, width, and height, and the coordinate position of each child outer bounding box is calculated so that points can be rapidly allocated to the corresponding leaf nodes in the next step.
Further, step S2 includes:
s201, performing Poisson-disc down-sampling on the point cloud data of the leaf nodes from the bottom up, merging the down-sampled points of the leaf nodes into the corresponding parent node of the layer above, and continuing this layer-by-layer down-sampling and filling of parent nodes until the root node is reached;
and S202, storing the octree after the point cloud is transmitted, and setting an LOD model according to layers.
Further, in step S201, every point sampled from a child node and filled into its parent node is deleted from that child node, so that the original point cloud is finally stored in blocks across the nodes of the octree and a complete scanned point cloud file is divided into different parts for storage.
Further, step S3 includes:
setting a standard value S, defined as the ratio of the projection area ProjectionArea of the LOD model's outer bounding box on the screen to the minimum distance Dst_min between the bounding box and the viewpoint, with the formula:
S = ProjectionArea / Dst_min    (4)
the projection area ProjectionArea is defined as the product of the area Area projected onto the near plane of the view frustum by the six bounding-box vertices that remain after removing the vertices farthest from and closest to the viewpoint, the projection size PointProjectionArea of each three-dimensional point on the near plane of the screen's view frustum, and a weight w:
ProjectionArea = w · PointProjectionArea · Area    (5)
the weight w is a constant weighting factor for the size of each point on the screen, and Dst_min is defined as the distance between the bounding-box vertex closest to the viewpoint and the viewpoint;
the formula of the standard value S corresponding to the LOD layer l is as follows:
S ∈ [(l−1)·(S_max − S_min)/n, l·(S_max − S_min)/n]    (6)
where S_max and S_min are the maximum and minimum values of S, respectively, and n is the total number of LOD layers.
When the LOD model is displayed, the LOD layer corresponding to the formula is looked up and the LOD level to be loaded is determined from the current S value: if S falls in the interval of layer 2, the level-1 and level-2 data of the LOD model must be loaded; if S grows into the next interval, the level-3 data is loaded as well; and if S shrinks back to the previous interval, the level-3 data points are deleted.
Further, step S4 includes
Step S401, obtaining the view angle parameter information of the current frame, and selecting nodes according to the step S3 to obtain node information meeting the requirements;
step S402, starting a separate thread to predict the nodes likely to be used by the next frame; on the LOD side, for currently satisfying nodes in the low LOD layers, their next-layer LOD nodes are preloaded into a prediction-node storage container Vector_Prediction opened in memory, where the low LOD layers are the first third of the total number of LOD layers; on the frustum-judgment side, nodes that are outside the view frustum but adjacent, in the same LOD layer, to nodes that satisfy the requirements are also loaded into Vector_Prediction in advance; when the view of the next frame does not change, prediction stops.
Further, step S5 includes:
step S501, reading the point cloud stored in the memory or the hard disk by the node selected in the step S3 by using a thread;
step S502, continuously rendering the points in the nodes which are stored in the memory and need to be visualized by using another independent rendering process, so that the point cloud reading process and the rendering process are simultaneously carried out, and the smoothness of point cloud display is improved.
Further, in step S501, the scheduling process continuously selects nodes according to the condition of step S3 and keeps the satisfying nodes in the visualization-node storage container Vector_Rendering at all times; nodes that no longer satisfy the rules during visualization are moved into a secondary in-memory temporary container Vector_Temp and cleared from Vector_Rendering to make room for new points; when the next frame needs a new node, the prediction-node container Vector_Prediction is searched first and then Vector_Temp, and if the needed node exists it is moved directly into Vector_Rendering and deleted from Vector_Temp and Vector_Prediction; if not, the corresponding node is read from the hard disk; if Vector_Temp or Vector_Prediction exceeds its node limit during storage, the oldest stored node is deleted.
Further, step S6 includes:
step S601, during visualization, when the user does not change the viewing angle, one thread performs non-repeating random sampling on the child nodes (if any) of all nodes obtained in step S3; after one layer is fully sampled, sampling proceeds layer by layer to the next; the sampled points are continuously loaded into a separate visualization-point storage container Vector_Fill within the hardware budget;
step S602, continuously rendering the points needing visualization stored in the Vector _ Fill by using a separate rendering process, so that the details of the visualization are gradually improved, and once the visual angle changes, all the points in the Vector _ Fill are deleted.
The invention has the beneficial effects that:
the invention provides a real-time large-scale point cloud visualization method with high efficiency and better visualization effect for solving the problems of slow data loading when the viewing angle of large-scale point cloud visualization is changed, resource budget waste when the viewing angle is not changed and the like in the prior art, and improves the rendering speed when the viewing angle is changed and the resource budget utilization rate when the viewing angle is not changed in the visualization process.
Drawings
FIG. 1 is a flow chart of a large-scale point cloud visualization method of the present invention;
FIG. 2 is a flow chart of a node prediction when the view angle changes;
FIG. 3 is a flow chart of fast reading and visual rendering of nodes;
FIG. 4 is a visual comparison of two different random shuffle samples;
fig. 5 is a comparison before and after adaptive filling.
Detailed Description
The preloading-node and adaptive-filling large-scale point cloud visualization method of the invention is described in further detail below with reference to the drawings and specific embodiments.
As shown in fig. 1, a method for visualizing a large-scale point cloud with nodes capable of being preloaded and adaptive filling includes the following steps:
and S1, constructing an octree point cloud space index structure of an OBB outer enclosure box, and reading point cloud data. The method comprises the following steps:
and S101, setting a minimum outer bounding box according to an OBB method, wherein the direction of a cube of the minimum outer bounding box is always along the principal component direction, and the minimum outer bounding box meets the octree index. Because the side length of the OBB enclosure is always the same as the principal component direction of the point cloud data, the characteristic value and the characteristic vector of the point cloud data can be calculated by constructing a covariance matrix of a point cloud data model, and therefore the enclosure box is determined. The covariance matrix a of the OBB bounding box is generated as follows:
A = [ cov(x,x)  cov(x,y)  cov(x,z)
      cov(y,x)  cov(y,y)  cov(y,z)    (1)
      cov(z,x)  cov(z,y)  cov(z,z) ]
Taking cov(x, y) as an example:
cov(x, y) = (1/n) · Σ_{i=1..n} (x_i − x̄)(y_i − ȳ)    (2)
where x̄ and ȳ are the means of the x and y coordinates, respectively.
The eigenvectors and eigenvalues of the constructed covariance matrix A are then computed; by the properties of a symmetric matrix, the three eigenvectors of A are mutually orthogonal, which determines the three axes of the OBB model.
The eigenvectors are then normalized to determine the directions of the three axes of the OBB bounding box, the point cloud coordinates are projected onto the three axes, and the maximum and minimum values along each axis determine the OBB bounding box. The minimum outer bounding cube of the point cloud octree is then built with the longest side of the bounding box as its side length and the center of the bounding box as its center.
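As a concrete illustration of this OBB construction, the following Python sketch (our own illustration; the patent provides no code, and NumPy is an assumed tool) builds the covariance matrix of the centered points, takes its mutually orthogonal eigenvectors as the three box axes, and projects the points onto those axes to obtain the extents:

```python
import numpy as np

def obb_axes_and_extents(points):
    """PCA-based OBB: eigenvectors of the point covariance matrix give
    three mutually orthogonal, normalized axes; projecting the centered
    points onto them gives the min/max extent along each axis."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)             # the 3x3 covariance matrix A
    _, eigvecs = np.linalg.eigh(cov)     # symmetric -> orthonormal eigenvectors
    proj = centered @ eigvecs            # point coordinates along each axis
    return eigvecs.T, proj.min(axis=0), proj.max(axis=0)
```

The minimum outer bounding cube of the octree would then take the largest extent (hi − lo) as its edge length, as described above.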
In this embodiment, every parent bounding box of the octree is uniformly divided, layer by layer, into eight child outer bounding boxes by halving its length, width, and height, and the coordinate position of each child box is calculated so that points can be rapidly allocated to the corresponding leaf nodes in the next step. This step speeds up the construction of the octree.
And S102, reading the whole point cloud in batches of five million points and placing each point into the corresponding octree leaf node according to its coordinates. If the number of points in a leaf node exceeds the set maximum storage threshold, the current leaf node becomes a parent node, new leaf nodes of the next layer are generated, and all points in the current leaf node are distributed to the next layer. After each batch is processed the data is stored once to reduce memory occupation, until all points finally fall into the leaf nodes of the octree. Finally, the leaf nodes are traversed and empty leaf nodes are deleted; if all eight child nodes of a parent node are empty, the parent node is deleted as well.
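The insert-and-split rule of step S102 can be sketched as follows (a minimal in-memory illustration with names of our own choosing; the actual method uses a far larger per-node threshold, batched reads, and on-disk storage):

```python
MAX_POINTS = 4  # illustrative per-node threshold; the real threshold is much larger

class OctreeNode:
    def __init__(self, center, half_size, depth=0):
        self.center, self.half_size, self.depth = center, half_size, depth
        self.points = []
        self.children = None      # becomes a list of 8 children after a split

def child_index(node, p):
    # one bit per axis selects which of the eight child boxes contains p
    return (int(p[0] > node.center[0])
            | (int(p[1] > node.center[1]) << 1)
            | (int(p[2] > node.center[2]) << 2))

def insert(node, p):
    if node.children is not None:
        insert(node.children[child_index(node, p)], p)
    elif len(node.points) < MAX_POINTS:
        node.points.append(p)
    else:
        # threshold exceeded: the leaf becomes a parent and all of its
        # points are redistributed to the newly created next layer
        h = node.half_size / 2
        node.children = [
            OctreeNode([node.center[a] + (h if (i >> a) & 1 else -h)
                        for a in range(3)], h, node.depth + 1)
            for i in range(8)
        ]
        for q in node.points + [p]:
            insert(node.children[child_index(node, q)], q)
        node.points = []
```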
And S103, storing the point cloud data in the final octree structure.
And S2, constructing an LOD model based on the octree point cloud space index structure. The method comprises the following steps:
step S201, on the basis that the octree index is constructed and the original point cloud data is completely distributed to the leaf nodes at the lowest layer, a Poisson disc sampling method is used, the point cloud data extracted from the child nodes are filled into the parent nodes at the upper layer according to the preset sampling rate, and the like, and finally all the nodes in the octree index are filled with data points. In the process, in order to prevent each layer of LOD model from generating repeated data points to cause data redundancy, in the actual sampling and filling process, each point data which is sampled from a child node and filled into a parent node is deleted from the current child node, and finally, the original point cloud is stored in each node of the octree in blocks, and a complete scanning point cloud file is divided into different components to be stored, and meanwhile, the data redundancy is not caused. In order to approximately equalize the number of points in all nodes in the final octree structure, a dynamic sampling rate for the change of the number of child nodes is designed, and the formula is as follows:
SamplingRate = 1 / Num_childnode    (3)
where SamplingRate is the rate at which each parent node samples its child nodes and Num_childnode is the number of child nodes of the current node. The sampling rate of the current node is calculated by this formula.
And S202, storing the octree after the point cloud is transmitted, and setting an LOD model according to layers.
And step S3, selecting the LOD level to be visualized based on the view-frustum viewpoint distance, and selecting the nodes according to a visibility judgment with frustum occlusion culling.
Determining the level of the selected LOD from the viewpoint distance requires a standard value as the basis of judgment, together with a rule mapping reference ranges of that value to the resolutions of the LOD levels.
A standard value S is set, defined as the ratio of the projection area ProjectionArea of the LOD model's outer bounding box on the screen to the minimum distance Dst_min between the bounding box and the viewpoint, with the formula:
S = ProjectionArea / Dst_min    (4)
In the present invention, the projection area ProjectionArea is defined as the product of the area Area projected onto the near plane of the view frustum by the six bounding-box vertices that remain after removing the vertices farthest from and closest to the viewpoint, the projection size PointProjectionArea of each three-dimensional point on the near plane of the screen's view frustum, and a weight w, with the formula:
ProjectionArea = w · PointProjectionArea · Area    (5)
The weight w is a constant weighting factor for the size of each point on the screen; because some points overlap when projected onto the two-dimensional screen, the weighting factor w reflects the size of all points in the area more evenly. Dst_min is defined as the distance between the bounding-box vertex closest to the viewpoint and the viewpoint.
The closer the point cloud model is to the screen, the larger its projection area on the two-dimensional screen and the smaller the value of Dst_min, so the larger S is, the higher the resolution of the level that needs to be displayed and the higher the level scheduled for display. The standard value S corresponds to the LOD layer l by the following formula:
S ∈ [(l−1)·(S_max − S_min)/n, l·(S_max − S_min)/n]    (6)
where S_max and S_min are the maximum and minimum values of S, respectively, and n is the total number of LOD layers.
When the LOD model is displayed, the LOD layer corresponding to the formula is looked up and the LOD level to be loaded is determined from the current S value: if S falls in the interval of layer 2, the level-1 and level-2 data of the LOD model must be loaded; if S grows into the next interval, the level-3 data is loaded as well; and if S shrinks back to the previous interval, the level-3 data points are deleted.
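Under formulas (4) and (6), the mapping from a viewing situation to an LOD layer can be sketched as follows (function and parameter names are our own):

```python
def select_lod_level(projection_area, dst_min, s_max, s_min, n_layers):
    """S = ProjectionArea / Dst_min (formula (4)); the layer l is the
    index of the sub-interval of width (S_max - S_min)/n that S falls
    into (formula (6)), clamped to the valid range of layers."""
    s = projection_area / dst_min
    span = (s_max - s_min) / n_layers
    level = int(s // span) + 1       # l such that (l-1)*span <= S < l*span
    return max(1, min(n_layers, level))
```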
On the basis of scheduling LOD-level point clouds according to the viewpoint, a node selection method based on visibility occlusion culling is added: the nodes to be loaded within the selected LOD level are further screened according to whether they lie in the view frustum.
To determine whether a point lies in the view frustum, the plane equations of the six faces of the frustum must be determined first, usually by a hexahedron calculation method.
The algorithm initially assumes that the observation matrix and the world matrix are both identity matrices, i.e., the viewpoint is at the coordinate origin by default, and the solved plane equations likewise determine the frustum for a viewpoint at the origin. When the model moves away from the viewpoint, if the model is static and the viewpoint moves, the frustum translates with it, so frustum culling can still be performed on a nearby point cloud model. The determined frustum is then used to decide which points to draw: when point cloud nodes are loaded at the selected LOD level, a node that lies inside the frustum or intersects it is read into memory and handed to the GPU for drawing, while a node outside the frustum is not drawn. Nodes are thus judged against the frustum region.
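With the six plane equations in hand, the visibility judgment reduces to sign tests. A sketch, under the assumption that each plane is written a·x + b·y + c·z + d = 0 with a normalized, inward-pointing normal:

```python
def point_in_frustum(planes, p):
    """A point is inside the frustum if it lies on the positive side of
    all six planes (a*x + b*y + c*z + d >= 0 for every plane)."""
    return all(a*p[0] + b*p[1] + c*p[2] + d >= 0 for a, b, c, d in planes)

def node_visible(planes, center, radius):
    """Conservative node test: a node is drawn if its bounding sphere is
    inside the frustum or intersects it, i.e., the signed distance of the
    sphere center exceeds -radius for every plane."""
    return all(a*center[0] + b*center[1] + c*center[2] + d > -radius
               for a, b, c, d in planes)
```

A node whose bounding sphere fails the test for even one plane is certainly outside the frustum and is skipped.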
Selecting the LOD level by viewpoint distance and displaying nodes according to the occlusion-culling visibility judgment effectively improves the scheduling efficiency of the point cloud data and reduces memory occupancy.
Step S4, when the view angle changes, preloading nodes that may be used in the next frame in advance according to the view angle information of the current frame. The method comprises the following steps:
and S401, obtaining the view angle parameter information of the current frame, and selecting nodes according to the step S3 to obtain node information meeting the requirements.
Step S402, a separate thread is started to predict the nodes likely to be used in the next frame. For LOD prediction, the next-layer LOD nodes of the currently satisfying nodes in the low LOD layers are preloaded into the prediction-node storage container Vector_Prediction opened in memory, where the low LOD layers are the first third of the total number of LOD layers. For the frustum judgment, nodes that are outside the view frustum but adjacent, in the same LOD layer, to nodes that satisfy the requirements are also loaded into Vector_Prediction in advance. When the view of the next frame does not change, prediction stops. Loading the nodes the next frame may need in advance greatly raises the frame rate while the viewing angle is changing and makes the visualization smoother. The specific process is shown in fig. 2.
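The prediction rule can be summarized in a few lines (a sketch; `neighbors_of`, `children_of`, and `in_frustum` are assumed lookup callbacks, not names from the patent):

```python
def predict_next_frame_nodes(current_nodes, total_lod_layers,
                             neighbors_of, children_of, in_frustum):
    """For selected nodes in the lower third of the LOD hierarchy,
    preload their next-layer children; for every selected node, preload
    same-layer neighbors that lie just outside the frustum."""
    prediction = []                        # stands in for Vector_Prediction
    low_layer_limit = total_lod_layers // 3
    for node, layer in current_nodes:      # (node id, its LOD layer)
        if layer <= low_layer_limit:
            prediction.extend(children_of(node))
        for nb in neighbors_of(node):
            if not in_frustum(nb):
                prediction.append(nb)
    return prediction
```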
And step S5, reading the node selected in the step S3, and performing visualization rendering. The method comprises the following steps:
step S501, in order to reduce time consumed for visualization analysis in the point cloud scheduling process. Firstly, in order to ensure that a sufficient frame rate exists in the instant drawing process of the point cloud, all nodes are not directly read in, but after an independent thread is used for calculating the nodes needing to be selected, the nodes are flexibly read in a self-adaptive manner between a hard disk and a memory, and the balance between rendering details and visualization instantaneity is achieved. The scheduling process continues to select nodes according to the condition of step S3, and allows the nodes meeting the requirement to stay in the visual node storage container Vector _ Rendering at all times. And for nodes which do not accord with the rules in the visualization process, storing the nodes into another secondary memory temporary storage container Vector _ Temp, and clearing the visualization node storage container Vector _ Rendering to make room for accommodating new points. When a next frame needs a new node, preferentially searching in a Vector _ Prediction storage container of the Prediction node, then searching in a Vector _ Temp, if the needed node exists, directly storing in the Vector _ Rendering, and deleting in the Vector _ Temp and the Vector _ Prediction. And if not, reading the corresponding node in the hard disk. If the Vector _ Temp and the Vector _ Prediction exceed the limited number of nodes in the storage process, deleting the oldest stored node. The frequency of repeated reading of the nodes in the internal and external memories is greatly reduced in the step, and the visualization real-time performance of the point cloud is improved.
Step S502, continuously rendering the points in the nodes which are stored in the memory and need to be visualized by using another independent rendering process, so that the point cloud reading process and the rendering process are simultaneously carried out, and the smoothness of point cloud display is improved. The specific process is shown in fig. 3.
And step S6, when the visual angle is not changed, performing point cloud self-adaptive filling, and improving the point cloud visualization effect. The method comprises the following steps:
step S601, if yes, a thread is used to sample a random sampling which is not overlapped in the sub-nodes, and after one layer of sampling is finished, the sub-nodes of the layer of nodes are sampled layer by layer again. For further example, a node being visualized is firstly added to the visualization by continuously and randomly taking out points from the child nodes of the visualized node, and when all the points of the child node are sampled, the same operation is performed on the child node of the visualized node, and the points are continuously loaded into a single visualized point storage container Vector _ Fill within the range of hardware budget.
A random sampling method is used that maps each index in a sequence to another index in the same set without collision, i.e., without repetition. For example, the input is the index of a point in the original point array and the output is that point's position in the shuffled array. This lets points be copied directly to their locations in the shuffled array without synchronization between threads.
Let the original point array be indexed [0, 1, …, P−1], where P is a prime congruent to 3 modulo 4 and k is an index into the array used as input; the following formulas produce the scrambled order, and sampling then proceeds sequentially in the new order, the final order of the points being targetIndex(k). A better visualization effect was found after applying the scrambling twice; a comparison is shown in fig. 4.
permute(k) = k² mod P,           if k ≤ P/2
permute(k) = P − (k² mod P),     if k > P/2    (7)
targetIndex(k) = permute(permute(k))    (8)
Finally, note that the total number of points N is in general not a prime of the required form. In this case the next smaller prime P ≤ N is used: all points with indices in [0, P) are shuffled, while the remaining points are left in place. Because the distance between consecutive primes is small, the number of unshuffled trailing points is negligible.
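The scrambling of formulas (7)–(8) can be sketched as follows, assuming the standard quadratic-residue permutation consistent with the P ≡ 3 (mod 4) requirement stated above; the helper names (`largest_shuffle_prime`, `target_index`) are invented for the sketch.

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def largest_shuffle_prime(n):
    """Largest prime P <= n with P = 3 (mod 4); points with
    index >= P are simply left unshuffled."""
    p = n
    while p >= 3:
        if p % 4 == 3 and is_prime(p):
            return p
        p -= 1
    raise ValueError("point count too small for a shuffle prime")

def permute(k, p):
    """Quadratic-residue permutation of [0, p) for a prime p = 3 (mod 4)."""
    if k >= p:
        return k                    # trailing points stay in place
    r = (k * k) % p
    return r if k <= p // 2 else p - r

def target_index(k, p):
    """targetIndex(k) = permute(permute(k)): the text reports that
    applying the map twice gives a better visual distribution."""
    return permute(permute(k, p), p)

n = 100                              # total number of points
p = largest_shuffle_prime(n)         # 83 for n = 100
order = [target_index(k, p) for k in range(n)]
print(p, sorted(order) == list(range(n)))
```

Because target_index(k) depends only on k and P, every thread can compute its destination slot independently, which is what makes the lock-free copy described above possible.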
Step S602, continuously rendering the points stored in Vector_Fill that need visualization using a separate rendering process, so that the visualized detail is gradually improved; once the viewing angle changes, all points in Vector_Fill are deleted. The specific visualization effect is shown in fig. 5.
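The layer-by-layer filling of Vector_Fill within a point budget can be sketched as a breadth-first walk over the children of the visualized nodes: one layer is exhausted before the next is started, and the walk stops at the budget. The `adaptive_fill` and `children_of` names are hypothetical, and node ids stand in for each node's sampled points.

```python
def adaptive_fill(visible_nodes, children_of, budget):
    """Breadth-first over the children of currently visualized nodes:
    finish one layer before descending, stop at the point budget."""
    vector_fill = []
    frontier = list(visible_nodes)
    while frontier and len(vector_fill) < budget:
        next_frontier = []
        for nid in frontier:
            for child in children_of.get(nid, []):
                vector_fill.append(child)  # stand-in for that child's points
                next_frontier.append(child)
                if len(vector_fill) >= budget:
                    return vector_fill     # hardware budget exhausted
        frontier = next_frontier           # descend one octree layer
    return vector_fill

children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
print(adaptive_fill([0], children, budget=4))   # [1, 2, 3, 4]
```

On a view change, the real system simply discards `vector_fill` and restarts the walk from the newly selected nodes.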
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.

Claims (7)

1. A large-scale point cloud visualization method capable of preloading nodes and self-adaptive filling is characterized by comprising the following steps:
s1, constructing an octree point cloud spatial index structure based on OBB outer bounding boxes, and reading the point cloud data;
s2, constructing an LOD model based on the octree point cloud space index structure;
step S3, based on the distance of the viewing cone, selecting the LOD level to be visualized,
setting a standard value S, defined as the ratio of the projection area ProjectionArea of the outer bounding box of the LOD model on the screen to the minimum distance Dst_min from the LOD model to the viewpoint, with the formula:
S = ProjectionArea / Dst_min,
the projection area ProjectionArea is defined as the product of a weight w, the projection size PointProjectionArea of each three-dimensional point on the near plane of the screen viewing frustum, and the projection area Area of the remaining six bounding-box vertices on the near plane of the viewing frustum after removing the vertices farthest from and closest to the viewpoint, expressed as:
ProjectionArea = w · PointProjectionArea · Area,
where the weight w is a constant factor for the on-screen size of each point, and Dst_min is defined as the distance between the bounding-box vertex closest to the viewpoint and the viewpoint;
the standard value S corresponding to LOD layer l satisfies:
S ∈ [(l-1)·(Smax - Smin)/n, l·(Smax - Smin)/n],
where Smax and Smin are the maximum and minimum values of S, respectively, and n is the total number of LOD layers;
when the LOD model is displayed, the LOD layer is retrieved according to this formula and the LOD levels to be loaded are determined from the current S value: if S lies in the range of layer 2, the data of levels 1 and 2 of the LOD model must be loaded for drawing; if S grows into the next range, the level-3 LOD data is additionally loaded; and if S shrinks back to the previous range, the level-3 data points are deleted;
nodes are further selected according to visibility judgment by view-frustum and occlusion culling;
step S4, when the viewing angle changes, preloading the nodes that the next frame may use according to the viewing-angle information of the current frame;
step S5, reading the nodes selected in step S3 and performing visual rendering;
step S6, when the viewing angle does not change, performing adaptive point cloud filling to improve the point cloud visualization effect, specifically: if all child nodes of the nodes obtained in step S3 exist, one thread performs non-overlapping random sampling within those child nodes, and after one layer has been fully sampled, the next layer is sampled in the same way, layer by layer; the sampled points are continuously loaded, within the hardware budget, into a single visualization point storage container Vector_Fill; a separate rendering process continuously renders the points stored in Vector_Fill, so that the visualized detail is gradually improved, and once the viewing angle changes, all points in Vector_Fill are deleted.
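Under the definitions in claim 1, computing S and recovering the LOD layer l from it might look like the following sketch. The function names are invented, and the inversion follows the claimed interval [(l-1)·(Smax-Smin)/n, l·(Smax-Smin)/n] literally.

```python
def standard_value(point_proj_area, vertex_proj_area, w, dst_min):
    """S = ProjectionArea / Dst_min, where
    ProjectionArea = w * PointProjectionArea * Area."""
    return (w * point_proj_area * vertex_proj_area) / dst_min

def lod_layer(s, s_min, s_max, n_layers):
    """Invert S in [(l-1)*step, l*step], step = (Smax-Smin)/n, to the layer l."""
    step = (s_max - s_min) / n_layers
    layer = int(s // step) + 1          # smallest l whose interval contains S
    return max(1, min(n_layers, layer)) # clamp to valid layers

# All layers up to and including `layer` are loaded for drawing.
s = standard_value(point_proj_area=2.0, vertex_proj_area=3.0, w=0.5, dst_min=1.5)
print(s, lod_layer(s, s_min=0.0, s_max=10.0, n_layers=5))  # 2.0 2
```

As S crosses into the next interval, `lod_layer` increments by one, matching the claim's load-on-grow / delete-on-shrink behavior.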
2. The method for large-scale point cloud visualization with preloading nodes and adaptive filling according to claim 1, wherein step S1 comprises:
s101, setting a minimum outer bounding box according to an OBB method, dividing the minimum outer bounding box into a plurality of sub outer bounding boxes with corresponding layers, and calculating a corresponding coordinate position of each sub outer bounding box;
step S102, dividing the whole point cloud data into several small batches for reading, and placing each point, according to its coordinates, into the octree leaf node whose sub outer bounding box contains it; if the number of points in a leaf node exceeds the set maximum storage threshold of a node, the current leaf node is taken as a parent node, new leaf nodes of the next layer are generated, and all point clouds in the current leaf node are distributed to the next layer; after each batch is processed, the data is saved once to reduce memory occupation, until all points finally fall into leaf nodes of the octree; finally, the leaf nodes are traversed and empty nodes are deleted, and if all eight child nodes of a leaf node's parent are empty, those empty nodes are deleted;
and S103, storing the point cloud data in the final octree structure.
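The insert-and-split behavior of step S102 can be sketched with a minimal octree whose leaves split once they exceed a capacity threshold. The class layout and `capacity` default are assumptions of the sketch, with a uniform axis-aligned half-size split standing in for the OBB sub bounding boxes.

```python
class OctreeNode:
    def __init__(self, center, half, capacity=4):
        self.center, self.half, self.capacity = center, half, capacity
        self.points = []
        self.children = None            # eight-way split once capacity is exceeded

    def _child_index(self, p):
        # One bit per axis selects which of the eight sub-boxes contains p.
        return ((p[0] >= self.center[0])
                | ((p[1] >= self.center[1]) << 1)
                | ((p[2] >= self.center[2]) << 2))

    def insert(self, p):
        if self.children is not None:   # interior node: route to the sub-box
            self.children[self._child_index(p)].insert(p)
            return
        self.points.append(p)
        if len(self.points) > self.capacity:
            self._split()

    def _split(self):
        h = self.half / 2
        self.children = []
        for i in range(8):
            c = [self.center[a] + (h if (i >> a) & 1 else -h) for a in range(3)]
            self.children.append(OctreeNode(c, h, self.capacity))
        # Redistribute all points of the former leaf to the new layer.
        pts, self.points = self.points, []
        for q in pts:
            self.children[self._child_index(q)].insert(q)

    def count(self):
        if self.children is None:
            return len(self.points)
        return sum(c.count() for c in self.children)

root = OctreeNode([0.0, 0.0, 0.0], 10.0, capacity=4)
for i in range(20):
    root.insert([i * 0.7 - 7, i * 0.3 - 3, i * 0.1 - 1])
print(root.count())  # 20: splitting never loses points
```

Deleting empty children after all batches are inserted (as the claim describes) would be a simple post-order prune over `children`.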
4. The large-scale point cloud visualization method capable of preloading nodes and adaptive filling according to claim 2, wherein in step S101, every parent bounding box of the octree is uniformly divided, layer by layer, into eight sub outer bounding boxes by halving its length, width and height, and the corresponding coordinate position of each sub outer bounding box is obtained by calculation, so that points can be rapidly assigned to the corresponding leaf nodes in the next step.
4. The method for large-scale point cloud visualization through preloading nodes and adaptive filling according to claim 2 or 3, wherein the step S2 comprises:
s201, performing Poisson-disk down-sampling on the point cloud data of the leaf nodes from bottom to top, merging the down-sampled leaf-node point clouds into the corresponding parent node of the layer above, and down-sampling layer by layer according to this rule, filling the parent nodes until the root node is reached;
and S202, storing the octree after the point cloud is transmitted, and setting an LOD model according to layers.
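The bottom-up LOD construction of steps S201–S202 might be sketched as follows. Plain stride subsampling stands in for the Poisson-disk down-sampling, and the complete-octree parent rule parent(id) = (id - 1) // 8 is an assumption of the sketch. Because sampled points are moved rather than copied, each point is stored exactly once across the LOD layers, as claim 5 describes.

```python
def downsample(points, keep_ratio):
    """Split points into a sparse sample and the remainder.
    Stride sampling is a stand-in for Poisson-disk sampling."""
    step = max(1, round(1 / keep_ratio))
    sampled = points[::step]
    remaining = [p for i, p in enumerate(points) if i % step != 0]
    return sampled, remaining

def build_lod(leaves, depth, keep_ratio=0.25):
    """leaves: {node_id: [points]} at the deepest layer; parent(id) = (id-1)//8.
    Sampled points move (not copy) into the parent, layer by layer to the root."""
    levels = {depth: leaves}
    for d in range(depth, 0, -1):
        parents = {}
        for nid, pts in levels[d].items():
            sampled, remaining = downsample(pts, keep_ratio)
            levels[d][nid] = remaining              # each block is stored once
            parents.setdefault((nid - 1) // 8, []).extend(sampled)
        levels[d - 1] = parents
    return levels

leaves = {1: [("leaf1", i) for i in range(16)],
          2: [("leaf2", i) for i in range(16)]}
levels = build_lod(leaves, depth=1)
print(len(levels[0][0]), len(levels[1][1]))  # 8 12: root holds the sparse sample
```

No point is duplicated: the 32 input points are partitioned between the root sample and the thinned leaves.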
5. The method as claimed in claim 4, wherein in step S201, each point sampled from a child node and filled into a parent node is deleted from that child node, so that the original point cloud is finally stored in blocks across the nodes of the octree, and a complete scan point cloud file is divided into different components for storage.
6. The method for large-scale point cloud visualization with preloading nodes and adaptive filling according to claim 1, wherein step S4 comprises:
Step S401, obtaining the view angle parameter information of the current frame, and selecting nodes according to the step S3 to obtain node information meeting the requirements;
step S402, starting another independent thread to predict the nodes that the next frame may use; on the LOD side, the next-layer LOD nodes of the low-LOD-layer nodes among the currently qualified nodes are preloaded into a prediction node storage container Vector_Prediction opened in memory, a low LOD layer being one within the first third of the total number of LOD layers; on the view-frustum side, nodes of the same LOD layer that lie outside the frustum but are adjacent to the qualified nodes are preloaded into the prediction node storage container Vector_Prediction; when the view of the next frame does not change, prediction stops.
7. The method for large-scale point cloud visualization with preloading nodes and adaptive filling according to claim 1, wherein step S5 comprises:
step S501, using a thread to read the point clouds of the nodes selected in step S3 from memory or from the hard disk; the scheduling process keeps selecting nodes according to the conditions of step S3, and qualified nodes always reside in a visualization node storage container Vector_Rendering; nodes that no longer satisfy the rules during visualization are stored in a secondary in-memory temporary container Vector_Temp, and Vector_Rendering is cleared to make room for new points; when the next frame needs a new node, the prediction node storage container Vector_Prediction is searched first and then Vector_Temp; if the needed node exists there, it is stored directly into Vector_Rendering and deleted from Vector_Temp and Vector_Prediction; otherwise, the corresponding node is read from the hard disk; if Vector_Temp or Vector_Prediction exceeds its node limit while storing, the earliest-stored node is deleted;
step S502, continuously rendering the points of the nodes held in memory that need to be visualized using another independent rendering process, so that the point cloud reading and rendering processes run concurrently and the smoothness of the point cloud display is improved.
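The container lookup order of step S501 (prediction container first, then the temporary container, then the hard disk) together with oldest-first eviction can be sketched as follows. The class name and node limit are assumptions of the sketch, while the container names follow the claim.

```python
from collections import OrderedDict

class NodeCache:
    """Lookup order from claim 7: Vector_Prediction, then Vector_Temp,
    then the disk; both side containers evict their earliest-stored
    node once they exceed a limit."""
    def __init__(self, limit=3, disk=None):
        self.limit = limit
        self.disk = disk or {}
        self.vector_prediction = OrderedDict()
        self.vector_temp = OrderedDict()
        self.vector_rendering = {}

    def _store_bounded(self, container, nid, pts):
        container[nid] = pts
        if len(container) > self.limit:
            container.popitem(last=False)   # drop the earliest-stored node

    def request(self, nid):
        """Bring node nid into Vector_Rendering, reporting where it came from."""
        for container in (self.vector_prediction, self.vector_temp):
            if nid in container:
                self.vector_rendering[nid] = container.pop(nid)
                return "cache"
        self.vector_rendering[nid] = self.disk[nid]  # fall back to the hard disk
        return "disk"

disk = {1: "pts-1", 2: "pts-2"}
cache = NodeCache(limit=2, disk=disk)
cache._store_bounded(cache.vector_prediction, 1, "pts-1")
print(cache.request(1), cache.request(2))  # cache disk
```

`OrderedDict` keeps insertion order, so `popitem(last=False)` implements exactly the "delete the node stored earliest" rule.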
CN202011388457.6A 2020-12-02 2020-12-02 Large-scale point cloud visualization method capable of preloading nodes and self-adaptive filling Active CN112509118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011388457.6A CN112509118B (en) 2020-12-02 2020-12-02 Large-scale point cloud visualization method capable of preloading nodes and self-adaptive filling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011388457.6A CN112509118B (en) 2020-12-02 2020-12-02 Large-scale point cloud visualization method capable of preloading nodes and self-adaptive filling

Publications (2)

Publication Number Publication Date
CN112509118A CN112509118A (en) 2021-03-16
CN112509118B true CN112509118B (en) 2021-10-08

Family

ID=74969333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011388457.6A Active CN112509118B (en) 2020-12-02 2020-12-02 Large-scale point cloud visualization method capable of preloading nodes and self-adaptive filling

Country Status (1)

Country Link
CN (1) CN112509118B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113901062B (en) * 2021-12-07 2022-03-18 浙江高信技术股份有限公司 Pre-loading system based on BIM and GIS
CN114387375B (en) * 2022-01-17 2023-05-16 重庆市勘测院(重庆市地图编制中心) Multi-view rendering method for massive point cloud data
CN114663282A (en) * 2022-03-28 2022-06-24 南京航空航天大学深圳研究院 Large aircraft point cloud splicing method based on global measurement field and hierarchical graph optimization
CN116109752B (en) * 2023-04-12 2023-06-20 深圳市其域创新科技有限公司 Point cloud real-time acquisition structuring and rendering method

Citations (1)

Publication number Priority date Publication date Assignee Title
CN111768490A (en) * 2020-05-14 2020-10-13 华南农业大学 Plant three-dimensional modeling method and system based on iteration nearest point and manual intervention

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN103077549B (en) * 2012-10-24 2016-12-21 华南理工大学 A kind of real-time large-scale terrain the Visual Implementation method based on kd tree
CN104376590A (en) * 2014-11-18 2015-02-25 武汉海达数云技术有限公司 Mass data circle-based indexing and space displaying method
US10984541B2 (en) * 2018-04-12 2021-04-20 Samsung Electronics Co., Ltd. 3D point cloud compression systems for delivery and access of a subset of a compressed 3D point cloud
CN110458939B (en) * 2019-07-24 2022-11-18 大连理工大学 Indoor scene modeling method based on visual angle generation
CN110706341B (en) * 2019-09-17 2021-03-30 广州市城市规划勘测设计研究院 High-performance rendering method and device of city information model and storage medium
CN110910505B (en) * 2019-11-29 2023-06-16 西安建筑科技大学 Accelerated rendering method of scene model
CN111524229A (en) * 2020-03-30 2020-08-11 中南大学 Three-dimensional geometric morphology information extraction system and method for rock particles
CN111612911A (en) * 2020-05-23 2020-09-01 缪盾 Dynamo-based point cloud BIM automatic modeling method

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN111768490A (en) * 2020-05-14 2020-10-13 华南农业大学 Plant three-dimensional modeling method and system based on iteration nearest point and manual intervention

Also Published As

Publication number Publication date
CN112509118A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN112509118B (en) Large-scale point cloud visualization method capable of preloading nodes and self-adaptive filling
CN112308974B (en) Large-scale point cloud visualization method for improving octree and adaptive reading
CN110738721B (en) Three-dimensional scene rendering acceleration method and system based on video geometric analysis
JP7125512B2 (en) Object loading method and device, storage medium, electronic device, and computer program
US7804498B1 (en) Visualization and storage algorithms associated with processing point cloud data
CN103093499B (en) A kind of city three-dimensional model data method for organizing being applicable to Internet Transmission
Crassin et al. Gigavoxels: Ray-guided streaming for efficient and detailed voxel rendering
US8570322B2 (en) Method, system, and computer program product for efficient ray tracing of micropolygon geometry
CN105261066B (en) A kind of three-dimensional geographic information system real-time rendering multithreading distribution and control method
Richter et al. Out-of-core real-time visualization of massive 3D point clouds
CN108520557A (en) A kind of magnanimity building method for drafting of graph image fusion
GB2583513A (en) Apparatus, system and method for data generation
US11625888B2 (en) Methods and apparatus for modifying a bounding volume hierarchy for raytracing
Stolte et al. Parallel spatial enumeration of implicit surfaces using interval arithmetic for octree generation and its direct visualization
CN115953541B (en) Quadtree LOD terrain generation method, device, equipment and storage medium
CA2235233C (en) Three-dimensional object data processing method and system
Lux et al. GPU-based ray casting of multiple multi-resolution volume datasets
Yin et al. Multi-screen Tiled Displayed, Parallel Rendering System for a Large Terrain Dataset.
US20040181373A1 (en) Visual simulation of dynamic moving bodies
JP3724006B2 (en) High speed rendering method and apparatus
CN116883575B (en) Building group rendering method, device, computer equipment and storage medium
CN113096248B (en) Photon collection method and photon mapping rendering method based on shared video memory optimization
Zhao et al. Real-time animating and rendering of large scale grass scenery on gpu
CN114037791A (en) Three-dimensional model rendering display system based on webgl and using method
Alexandre-Barff et al. A GPU-based out-of-core architecture for interactive visualization of AMR time series data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220421

Address after: 211106 room 1003-1005, No. 1698, Shuanglong Avenue, Jiangning District, Nanjing, Jiangsu Province (Jiangning Development Zone)

Patentee after: Nanjing Yuntong Technology Co.,Ltd.

Address before: No. 29, Qinhuai District, Qinhuai District, Nanjing, Jiangsu

Patentee before: Nanjing University of Aeronautics and Astronautics

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Large Scale Point Cloud Visualization Method with Pre loading Nodes and Adaptive Filling

Effective date of registration: 20231010

Granted publication date: 20211008

Pledgee: Bank of Nanjing Co.,Ltd. Jiangning sub branch

Pledgor: Nanjing Yuntong Technology Co.,Ltd.

Registration number: Y2023980060594
