CN111179394A - Point cloud scene rendering method, device and equipment - Google Patents

Point cloud scene rendering method, device and equipment

Info

Publication number
CN111179394A
CN111179394A (application CN201911164820.3A)
Authority
CN
China
Prior art keywords
point cloud
voxels
scene
voxel
cache
Prior art date
Legal status
Pending
Application number
CN201911164820.3A
Other languages
Chinese (zh)
Inventor
寿如阳
郭晋文
王磊
Current Assignee
Suzhou Zhijia Technology Co Ltd
PlusAI Corp
Original Assignee
Suzhou Zhijia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Zhijia Technology Co Ltd filed Critical Suzhou Zhijia Technology Co Ltd
Priority to CN201911164820.3A priority Critical patent/CN111179394A/en
Publication of CN111179394A publication Critical patent/CN111179394A/en
Priority to PCT/CN2020/098284 priority patent/WO2021103513A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/08 - Volume rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G06T 15/30 - Clipping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 - Indexing scheme for image generation or computer graphics
    • G06T 2210/36 - Level of detail
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 - Indexing scheme for image generation or computer graphics
    • G06T 2210/56 - Particle system, point based geometry or rendering

Abstract

The application provides a point cloud scene rendering method, device and equipment, wherein the method comprises the following steps: acquiring a first viewpoint, a first view angle, and the origin coordinates of all voxels in a target point cloud scene; determining the voxels within the visible region according to the first viewpoint, the first view angle, and the origin coordinates of all voxels in the target point cloud scene; storing the point cloud data corresponding to the voxels within the visible region in a cache; adding the cached point cloud data corresponding to the voxels within the visible region to multiple levels of a detail level; and rendering the target point cloud scene according to the point cloud data in each level of the detail level. In the embodiments of the application, the point cloud data corresponding to the voxels within the visible region of the target point cloud scene can be loaded in blocks, with the voxel as the unit, which effectively improves data-reading efficiency and allows the point cloud scene to be rendered efficiently and smoothly.

Description

Point cloud scene rendering method, device and equipment
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method, an apparatus, and a device for rendering a point cloud scene.
Background
In technologies such as autonomous driving and mapping, three-dimensional scene data captured by devices such as lidar is often expressed in the form of point cloud data. A point cloud is a data set composed of a large number of points in three-dimensional space. Because no topological connection exists between any two points in the point cloud data, understanding the semantics of a point cloud scene is difficult, which also poses challenges for rendering such a scene. Point clouds are commonly used in autonomous driving modules such as sensor calibration, obstacle identification and tracking, semantic target identification, and map positioning, spanning technical directions including calibration, perception, and navigation. Therefore, improving the efficiency and capability of point cloud scene rendering is of great significance in many areas, including autonomous driving.
In the prior art, Level of Detail (LOD) technology is generally adopted to improve the efficiency and capability of point cloud scene rendering. When a drawing scene is constructed with detail levels, multiple copies are generated for each object in the target scene; each copy is called a level, and different levels retain three-dimensional models of the object at different degrees of detail. When an object lies within the visual range of the scene, a lower level of detail replaces a higher one as the object moves away from the viewer. For the scene as a whole, when the observer is close to an object, fewer other objects fit in the field of view, so the observed object can exhibit more detail; when the observer is far from an object, more objects enter the field of view, so each object should exhibit less detail. By maintaining this simple relationship, the detail level keeps the complexity of the entire visible range of the scene at a relatively constant level, ensuring rendering fluency, while distant objects are not visually perceived as having reduced detail.
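As a rough, non-authoritative sketch of this principle (the function name and distance thresholds are illustrative assumptions, not part of the patent), a renderer might map viewer distance to a detail level:

```python
def select_lod_level(distance, thresholds):
    """Return a detail level for the given viewer distance.

    Level 0 is the most detailed; the thresholds (in scene units)
    are hypothetical and would be tuned per application.
    """
    for level, limit in enumerate(thresholds):
        if distance <= limit:
            return level
    return len(thresholds)  # beyond the last threshold: coarsest level
```

For example, a nearby object at distance 5 with thresholds [10, 50, 200] gets level 0 (full detail), while an object at distance 500 falls to the coarsest level.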
However, when the point cloud of a very large scene is rendered, the lack of topological connections means there are no clear object boundaries: no matter how large its point count or spatial extent, the entire point cloud scene is a single object without any semantic segmentation. Since the detail level requires constructing a separate group of models for each object in the scene, constructing the drawn scene either loads all point cloud data to draw full detail or ignores all detail, so the prior art cannot render a point cloud scene efficiently and smoothly.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the application provide a point cloud scene rendering method, device, and equipment, aiming to solve the problem that point cloud scenes cannot be rendered efficiently and smoothly in the prior art.
The embodiments of the application provide a point cloud scene rendering method, which comprises the following steps: acquiring a first viewpoint, a first view angle, and the origin coordinates of all voxels in a target point cloud scene; determining the voxels within the visible region according to the first viewpoint, the first view angle, and the origin coordinates of all voxels in the target point cloud scene; storing the point cloud data corresponding to the voxels within the visible region in a cache; adding the cached point cloud data corresponding to the voxels within the visible region to multiple levels of a detail level; and rendering the target point cloud scene according to the point cloud data in each level of the detail level.
In one embodiment, before acquiring the first viewpoint, the first view angle and the origin coordinates of all voxels in the target point cloud scene, the method further includes: carrying out voxelization segmentation on the target point cloud scene to obtain a plurality of voxels of the target point cloud scene; acquiring origin coordinates of each voxel in the target point cloud scene; respectively storing point cloud data corresponding to each voxel of the target point cloud scene to obtain a point cloud file corresponding to each voxel; and establishing an index file according to the origin coordinates of the voxels in the target point cloud scene, wherein the index file comprises the point cloud files corresponding to the voxels and the association relationship between the origin coordinates of the voxels in the target point cloud scene.
In one embodiment, respectively storing point cloud data corresponding to each voxel of the target point cloud scene to obtain a point cloud file corresponding to each voxel, including: acquiring point cloud data corresponding to a target voxel and an origin coordinate of the target voxel in a target point cloud scene; converting coordinates of each point in the point cloud data corresponding to the target voxel into an offset value relative to an origin coordinate of the target voxel in the target point cloud scene; and storing the origin coordinates of the target voxels in the target point cloud scene and the offset values of the points relative to the origin coordinates of the target voxels in the target point cloud scene to obtain point cloud files corresponding to the target voxels.
In one embodiment, before storing the point cloud data corresponding to the voxels within the visible area into a cache, the method further includes: acquiring the index file; determining a point cloud file corresponding to the voxel in the visible area according to the index file and the origin coordinates of the voxel in the visible area in the target point cloud scene; and determining point cloud data corresponding to the voxels in the visible area according to the offset value of each point in the point cloud file relative to the origin coordinates of the voxels in the target point cloud scene.
In one embodiment, adding the cached point cloud data corresponding to the voxels within the visible region to multiple levels of the detail level comprises: determining the number of levels of the detail level; randomly dividing the point cloud data corresponding to each voxel within the visible region into a plurality of subsets, where the number of subsets obtained for each voxel equals the number of levels of the detail level; and adding the subsets obtained for each voxel within the visible region to the multiple levels of the detail level, respectively.
In one embodiment, after storing the point cloud data corresponding to the voxels within the visible area in a cache, the method further includes: determining whether the data volume of the point cloud data corresponding to the voxels in the visible area is larger than the data volume storable in the cache; and under the condition that the data quantity of the point cloud data corresponding to the voxels in the visible area is determined to be larger than the data quantity which can be stored in the cache, removing the voxels which are farthest from the first viewpoint from the voxels in the visible area from the cache.
In one embodiment, after determining whether the data amount of the point cloud data corresponding to the voxels in the visible region is larger than the data amount storable in the cache, the method further includes: under the condition that the data volume of the point cloud data corresponding to the voxels in the visible area is determined to be not larger than the data volume storable in the cache, determining whether the data volume of the point cloud data corresponding to the voxels in the visible area is smaller than the data volume storable in the cache or not; and under the condition that the data volume of the point cloud data corresponding to the voxels in the visible area is determined to be smaller than the data volume storable in the cache, storing the voxels, which are outside the visible area and adjacent to the first viewpoint, into the cache until the data volume of the point cloud data stored in the cache reaches the data volume storable in the cache.
In one embodiment, after rendering the target point cloud scene, further comprising: determining whether the first viewpoint or the first view angle moves; under the condition that the first viewpoint or the first visual angle is determined to move, taking the moved first viewpoint as a second viewpoint and the moved first visual angle as a second visual angle; re-determining the voxels in the visible area according to the second viewpoint, the second view angle and the origin coordinates of all the voxels; and modifying the point cloud data stored in the cache according to the redetermined voxels in the visual area.
In one embodiment, storing the point cloud data corresponding to the voxels within the visible region in a cache comprises: storing the point cloud data corresponding to the voxels within the visible region in a cache using a least recently used (LRU) algorithm.
The embodiment of the present application further provides a point cloud scene rendering device, including: the acquisition module is used for acquiring a first viewpoint, a first visual angle and origin coordinates of all voxels in a target point cloud scene; the determining module is used for determining voxels in a visible area according to a first viewpoint, a first view angle and origin coordinates of all voxels in the target scene; the storage module is used for storing the point cloud data corresponding to the voxels in the visible area into a cache; the processing module is used for respectively adding the point cloud data corresponding to the voxels in the visible region stored in the cache into a plurality of levels of detail; and the rendering module is used for rendering the target point cloud scene according to the point cloud data in each level of the detail level.
In one embodiment, further comprising: the voxel segmentation unit is used for carrying out voxel segmentation on the target point cloud scene to obtain a plurality of voxels of the target point cloud scene; the acquisition unit is used for acquiring the origin coordinates of each voxel in the target point cloud scene; the storage unit is used for respectively storing the point cloud data corresponding to each voxel of the target point cloud scene to obtain a point cloud file corresponding to each voxel; and the index file establishing unit is used for establishing an index file according to the origin coordinates of the voxels in the target point cloud scene, wherein the index file comprises the association relationship between the point cloud file corresponding to the voxels and the origin coordinates of the voxels in the target point cloud scene.
The embodiment of the application also provides point cloud scene rendering equipment, which comprises a processor and a memory for storing executable instructions of the processor, wherein the processor executes the instructions to realize the steps of the point cloud scene rendering method.
Embodiments of the present application also provide a computer-readable storage medium having stored thereon computer instructions, which when executed, implement the steps of the point cloud scene rendering method.
The embodiments of the application provide a point cloud scene rendering method. By acquiring a first viewpoint, a first view angle, and the origin coordinates of all voxels in a target point cloud scene, the voxels within the visible region can be determined, and the point cloud data corresponding to those voxels can be stored in a cache. The point cloud data corresponding to the voxels within the visible region can thus be loaded in blocks, with the voxel as the unit, without loading all point cloud data in the target point cloud scene at once; this effectively improves data-reading efficiency and allows the point cloud data required for rendering to be acquired efficiently. Further, the cached point cloud data corresponding to the voxels within the visible region can be added to the multiple levels of the detail level, and the target point cloud scene can be rendered according to the point cloud data in each level. When the target point cloud scene is rendered with the detail level, detail drawing is therefore performed only on the point cloud data corresponding to the voxels within the visible region, effectively reducing the amount of data to process, so that the point cloud scene can be rendered efficiently and smoothly.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, are incorporated in and constitute a part of this application, and are not intended to limit the application. In the drawings:
fig. 1 is a schematic step diagram of a point cloud scene rendering method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a point cloud scene rendering method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a storage structure provided in accordance with an embodiment of the present application;
FIG. 4 is a schematic diagram of a process of changing a viewpoint and a viewing angle provided according to a specific embodiment of the present application;
fig. 5 is a schematic structural diagram of a point cloud scene rendering apparatus provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a point cloud scene rendering device according to an embodiment of the present application.
Detailed Description
The principles and spirit of the present application will be described with reference to a number of exemplary embodiments. It should be understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the present application, and are not intended to limit the scope of the present application in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present application may be embodied as a system, apparatus, device, method or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
A point cloud file storing point cloud data usually describes the point cloud as a set of records, each a three-dimensional coordinate plus data fields, with no topological connection between any two points. When a point cloud scene is loaded, the point cloud file must therefore be read in its entirety from the storage medium into the computer's general memory and the graphics card's memory; if the file to be read exceeds the memory's upper limit, the point cloud data cannot be fully loaded and the scene cannot be rendered. Existing point cloud scene rendering is thus severely limited and cannot be performed efficiently and smoothly.
Based on the above problems, an embodiment of the present invention provides a point cloud scene rendering method, as shown in fig. 1, which may include the following steps:
s101: and acquiring a first viewpoint, a first visual angle and origin coordinates of all voxels in a target point cloud scene.
Since at least one observer or observation device usually exists in the target point cloud scene, the first viewpoint, the first view angle, and the origin coordinates of all voxels in the target point cloud scene may be acquired first. The target point cloud scene may be a three-dimensional scene formed by the point clouds to be rendered, for example: obstacle and road identification during autonomous driving of an automobile, a point cloud map to be rendered, or a point cloud scene in a game.
The first viewpoint and first view angle may be the initial viewpoint and initial view angle of an observer or observation device in the target point cloud scene. The first viewpoint may represent the position of the observer or device in the scene, and the first view angle may represent the angle subtended at the viewpoint by rays drawn to the edges (top, bottom, left, and right) of the observed object.
The observation device may include, but is not limited to, at least one of: lidar, a contact scanner, structured light, triangulation ranging, a stereo camera, a time-of-flight camera, and the like. A voxel (short for volume element) is the minimum unit of digital data in the segmentation of three-dimensional space; a solid containing voxels can be represented by solid rendering or by extracting a polygonal isosurface at a given threshold contour.
In one embodiment, since there is no topological connection between any two points in the point cloud data, reading the complete point cloud data when loading the scene not only consumes much time but also easily runs into the limit of memory capacity, so that not all the point cloud data can be loaded for rendering. Therefore, the complete target point cloud scene can be voxelized: the space of the target point cloud scene is segmented into a plurality of voxels, where each side length of a voxel can be set as required and each side direction is parallel to one of the three axes of the coordinate system of the target point cloud scene space. The voxels obtained by segmentation may be cubes or cuboids, determined according to the actual situation; the application does not limit this.
After obtaining a plurality of voxels of the target point cloud scene, further obtaining the origin coordinates of each segmented voxel in the target point cloud scene, where the origin coordinates of each voxel in the target point cloud scene can be used to represent the position of each voxel in the entire target point cloud scene. The origin may be a point closest to the origin of the target point cloud scene space coordinate system in the voxel, or a geometric center point of the voxel or a cross point of three axes of length, width and height in the voxel, and may be determined specifically according to an actual situation, which is not limited in the present application.
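A minimal sketch of this voxelization step, assuming axis-aligned cubic voxels and taking the corner closest to the scene origin as each voxel's origin (function and variable names are illustrative, not from the patent):

```python
import math
from collections import defaultdict

def voxelize(points, voxel_size):
    """Group points into axis-aligned cubic voxels.

    Each voxel is keyed by its origin coordinate: the corner
    closest to the origin of the scene coordinate system.
    """
    voxels = defaultdict(list)
    for p in points:
        origin = tuple(math.floor(c / voxel_size) * voxel_size for c in p)
        voxels[origin].append(p)
    return dict(voxels)
```

With a voxel size of 1.0, the points (0.5, 0.5, 0.5) and (1.5, 0.2, 0.1) land in the voxels with origins (0, 0, 0) and (1, 0, 0), respectively.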
In one embodiment, the point cloud data corresponding to each voxel in the target point cloud scene may be stored respectively to obtain a point cloud file corresponding to each voxel, that is, any point in the target point cloud scene belongs to a certain voxel in position, and points belonging to the same voxel are collected together and stored in a corresponding point cloud file, so that the point cloud data in the target point cloud scene may be read and loaded in blocks.
In order to facilitate searching and reading the point cloud data corresponding to any voxel in the target point cloud scene, in one embodiment, an index file may be established according to the origin coordinates of each voxel in the target point cloud scene, where the index file contains the association between the point cloud file corresponding to each voxel and that voxel's origin coordinates in the scene. The association may be embodied in the form of a table, key-value pairs, or the like; the specific manner may be determined according to the actual situation, and the application does not limit this. Through the index file, the voxel at any position in the target point cloud scene and its corresponding point cloud file can thus be determined directly.
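As one possible shape for such an index file (the key format and file-naming scheme here are assumptions for illustration, not specified by the patent), a JSON mapping from voxel origin to point cloud file name would suffice:

```python
import json

def build_index(voxel_origins):
    """Build an index mapping each voxel origin to its point cloud file.

    The key and file-name formats are hypothetical; any unambiguous
    encoding of the origin coordinates would work equally well.
    """
    index = {
        f"{x},{y},{z}": f"voxel_{x}_{y}_{z}.pcd"
        for (x, y, z) in voxel_origins
    }
    return json.dumps(index, indent=2)
```

Looking up the point cloud file for a given voxel origin is then a single key lookup after `json.loads`.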
In a specific implementation process, when point cloud data corresponding to each voxel of a target point cloud scene is stored respectively to obtain a point cloud file corresponding to each voxel, point cloud data corresponding to the target voxel and an origin coordinate of the target voxel in the target point cloud scene can be obtained, and coordinates of each point in the point cloud data corresponding to the target voxel are converted into an offset value relative to the origin coordinate of the target voxel in the target point cloud scene. The origin coordinates of the target voxels in the target point cloud scene and the offset values of the points relative to the origin coordinates of the target voxels in the target point cloud scene can be stored to obtain point cloud files corresponding to the target voxels, so that the data volume stored in each point cloud file in the target point cloud scene can be compressed, and the storage space is used for storing more point cloud files.
Correspondingly, when the point cloud data in the point cloud file is loaded or read, the offset of each point in the point cloud file needs to be restored to the original coordinates of each point in the target point cloud scene. The origin coordinates of the voxels corresponding to the point cloud file in the target point cloud scene can be obtained according to the index file, and the relative offset of each point is superposed with the origin coordinates of the voxels corresponding to the point cloud file in the target point cloud scene for translation transformation, so that the original coordinates of each point in the target point cloud scene can be restored.
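The offset conversion for storage and its inverse translation for loading can be sketched as follows (helper names are hypothetical):

```python
def to_offsets(points, voxel_origin):
    """Store each point as an offset from its voxel's origin coordinate."""
    return [tuple(c - o for c, o in zip(p, voxel_origin)) for p in points]

def from_offsets(offsets, voxel_origin):
    """Restore original scene coordinates by translating offsets back."""
    return [tuple(c + o for c, o in zip(off, voxel_origin)) for off in offsets]
```

The two functions form an exact round trip: converting points to offsets and back recovers the original scene coordinates.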
S102: and determining the voxels in the visible area according to the first viewpoint, the first visual angle and the origin coordinates of all the voxels in the target point cloud scene.
The specific position of the observer or observation device in the target point cloud scene can be determined from the first viewpoint and first view angle, and the specific position of each voxel can be determined from the origin coordinates of all voxels in the scene. Therefore, the voxels within the visible region of the observer or observation device can be determined from the first viewpoint, the first view angle, and the origin coordinates of all voxels in the target point cloud scene. In this way the scene is culled in advance at the voxel level according to the first viewpoint and first view angle, removing voxels that are not within the visible region, which further reduces rendering cost on top of the block loading of point cloud data and improves scene loading efficiency.
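One crude way to perform this voxel-level culling is sketched below. This is an assumption-laden simplification, not the patent's method: it tests only the voxel origin against a viewing cone, whereas a full implementation would test the voxel's whole extent against the view frustum.

```python
import math

def voxel_visible(viewpoint, view_dir, half_angle_deg, voxel_origin):
    """Return True if the voxel origin lies within the viewing cone.

    view_dir is assumed to be a unit vector; half_angle_deg is half
    the (first) view angle.
    """
    v = [o - p for o, p in zip(voxel_origin, viewpoint)]
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0.0:
        return True  # voxel at the viewpoint itself
    cos_to_voxel = sum(a * b for a, b in zip(v, view_dir)) / norm
    return cos_to_voxel >= math.cos(math.radians(half_angle_deg))
```

With the viewer at the origin looking along +x with a 90-degree view angle, a voxel at (10, 1, 0) is visible while one at (0, 10, 0) is culled.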
S103: and storing the point cloud data corresponding to the voxels in the visual area into a cache.
After determining the voxels in the visible region, the point cloud file corresponding to each voxel in the visible region may be obtained according to the origin coordinates of the voxels in the visible region in the target point cloud scene and the index file, so as to obtain the point cloud data corresponding to each voxel in the visible region, and store the point cloud data in the cache.
Compared with an internal memory and a magnetic disk, the cache has higher data reading efficiency and lower data response time, so that the data transmission and reading time can be reduced by caching the point cloud data corresponding to each voxel in the visible area, and the processing efficiency is improved. The cache can open a fixed area from the memory for storing the point cloud data and the metadata corresponding to the voxels. In addition, the cache can realize an array for indexing and managing the voxel objects loaded into the data area, and each element of the array points to the point cloud data corresponding to a certain loaded voxel.
In a specific implementation process, the cache may store the point cloud data corresponding to all voxels within the visible region using the replacement policy of a Least Recently Used (LRU) priority queue. LRU is a page replacement algorithm that evicts data according to its historical access record; its core idea is that data accessed recently has a higher chance of being accessed again. Storing the point cloud data with LRU makes it easier to replace cached data effectively when the first viewpoint and first view angle change. For example: suppose the observer starts at position A of the target point cloud scene and moves through space to destination C, while another position B is never observed along the way. Since A was observed most recently and B was not, under LRU the probability of returning to A next is considered higher than that of B, so A's data is preferentially retained in the cache.
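A minimal LRU voxel cache can be sketched with Python's `OrderedDict`. For simplicity the capacity here is counted in voxels, whereas the patent measures cache capacity in data amount; the class name is an assumption.

```python
from collections import OrderedDict

class VoxelLRUCache:
    """Least-recently-used cache of per-voxel point cloud data."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()  # voxel origin -> point list

    def get(self, origin):
        if origin not in self._data:
            return None
        self._data.move_to_end(origin)  # mark as most recently used
        return self._data[origin]

    def put(self, origin, points):
        if origin in self._data:
            self._data.move_to_end(origin)
        self._data[origin] = points
        while len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used
```

In a capacity-2 cache holding voxels A and B, touching A and then inserting C evicts B, matching the example above: the recently observed voxel survives.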
After the point cloud data corresponding to the voxels within the visible region is stored in the cache, in one embodiment, the amount of data the cache can store and the amount of point cloud data corresponding to the voxels within the visible region may first be obtained, and the two compared. When the point cloud data corresponding to the voxels within the visible region exceeds the upper limit of what the cache can store, the voxel within the visible region that is farthest from the first viewpoint may be removed from the cache.
Further, whether the point cloud data amount required to be stored exceeds the upper limit of the data amount which can be stored in the cache or not can be continuously determined, if the point cloud data amount still exceeds the upper limit of the data amount which can be stored in the cache, the corresponding voxel is continuously removed according to the principle of being farthest away from the viewpoint until the point cloud data amount required to be stored does not exceed the upper limit of the data amount which can be stored in the cache.
In one embodiment, in the case that it is determined that the data amount of the point cloud data corresponding to the voxels in the visible region is not greater than the data amount storable in the cache, it may be further determined whether the data amount of the point cloud data corresponding to the voxels in the visible region is smaller than the data amount storable in the cache, that is, if the data amount of the point cloud data corresponding to the voxels in the visible region is equal to the data amount storable in the cache, the operations of removing, adding, and the like of the voxels may not be required. And if the data volume of the point cloud data corresponding to the voxels in the visible area is determined to be smaller than the data volume storable in the cache, storing the voxels, which are adjacent to the first viewpoint, outside the visible area into the cache until the data volume of the point cloud data stored in the cache reaches the data volume storable in the cache. That is, when it is determined that the data amount of the point cloud data corresponding to the voxels in the visible region is smaller than the data amount storable in the cache, the voxels outside the visible region may be sequentially stored in the cache according to the sequence from the near to the far from the first viewpoint until the data amount of the point cloud data stored in the cache reaches the data amount storable in the cache.
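Both adjustments (evicting the farthest visible voxels when over capacity, and pulling in the nearest voxels from outside the visible region when under it) can be sketched together. Capacity is again counted in voxels for simplicity, and the names are assumptions:

```python
def fit_to_capacity(visible, outside, viewpoint, capacity):
    """Choose which voxel origins to keep cached.

    Keeps the visible voxels nearest the viewpoint, dropping the
    farthest when over capacity; when under capacity, fills the
    remainder with the nearest voxels from outside the visible region.
    """
    def sq_dist(origin):
        return sum((a - b) ** 2 for a, b in zip(origin, viewpoint))

    kept = sorted(visible, key=sq_dist)[:capacity]
    if len(kept) < capacity:
        kept += sorted(outside, key=sq_dist)[:capacity - len(kept)]
    return kept
```

With three visible voxels and capacity 2, the farthest one is dropped; with capacity 4, all three are kept and the nearest outside voxel is added.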
S104: and respectively adding the point cloud data corresponding to the voxels in the visible region stored in the cache to a plurality of levels of detail.
In one embodiment, Level of Detail (LOD) technology may be used to improve the efficiency and capacity of point cloud scene rendering. Levels of detail depict the same scene at different precisions, and work as follows: when the viewpoint is close to an object, the observed model is rich in detail; as the viewpoint moves away from the model, the observed detail gradually blurs. The system selects the appropriate level of detail in time for display, avoiding the time wasted on loading and rendering detail that contributes relatively little, and effectively balancing picture continuity against resolution. Therefore, the point cloud data corresponding to the voxels in the visible region stored in the cache can be added to multiple levels of a detail hierarchy, where different levels retain point cloud data of each voxel at different degrees of detail.
In a specific implementation, the number of levels in the detail hierarchy may be determined first. The number of levels may be any positive integer, for example 4 or 5, determined according to the actual situation; the present application is not limited in this respect. After the number of levels is determined, the point cloud data corresponding to each voxel in the visible region may be randomly divided into a plurality of subsets, where the number of subsets obtained for each voxel equals the number of levels in the detail hierarchy.
Further, the subsets obtained by randomly dividing the point cloud data of each voxel in the visible region may be added to the corresponding levels of the detail hierarchy. For example, when the hierarchy has 5 levels, the point cloud data corresponding to voxel 1 in the visible region may be randomly divided into 5 subsets (subset 1 through subset 5), and subset i is added to the i-th level of the hierarchy. Correspondingly, different levels retain point cloud data of each voxel at different degrees of detail, and when the i-th detail level is used, all point cloud data in subsets 1 through i are loaded.
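The random division and the "level i loads subsets 1 through i" behavior can be sketched as follows. An even split is assumed here purely for illustration (the application may use a different size scheme), and `split_into_lod_subsets` and `points_at_level` are hypothetical names:

```python
import random

def split_into_lod_subsets(points, n_levels, seed=0):
    """Randomly partition one voxel's points into n_levels disjoint subsets."""
    rng = random.Random(seed)
    shuffled = points[:]
    rng.shuffle(shuffled)
    # Even split for simplicity; subset sizes may follow another scheme.
    base, rem = divmod(len(shuffled), n_levels)
    subsets, start = [], 0
    for i in range(n_levels):
        size = base + (1 if i < rem else 0)
        subsets.append(shuffled[start:start + size])
        start += size
    return subsets

def points_at_level(subsets, i):
    """Using detail level i loads all points from subsets 1..i."""
    return [p for s in subsets[:i] for p in s]
```

Together the L subsets cover every point of the voxel exactly once, so the deepest level reproduces the full-resolution voxel.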
S105: and rendering the target point cloud scene according to the point cloud data in each level of the detail level.
After the point cloud data corresponding to the voxels in the visible region stored in the cache has been added to the multiple levels of the detail hierarchy, the target point cloud scene can be rendered according to the point cloud data in each level, where voxels closer to the viewpoint may be rendered with relatively more detail levels so as to show more detail of the observed object.
In one embodiment, the observed object can exhibit more detail because, for the whole point cloud scene, as the observer approaches an object the number of other objects within the field of view necessarily decreases, whereas as the observer moves away from an object more objects enter the field of view, so the detail each object can exhibit should decrease. Therefore, the number of detail levels to be used for each voxel in the visible region can be determined from the first viewpoint and the first view angle, and the point cloud scene can be rendered according to that number together with the point cloud data in the multiple detail levels.
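One way to map viewpoint distance to a number of detail levels is sketched below. The halving-per-doubling rule and the function name `levels_for_voxel` are assumptions for illustration, not taken from the application:

```python
import math

def levels_for_voxel(origin, viewpoint, voxel_size, max_levels):
    """Pick how many detail levels to render for a voxel: more when near."""
    d = math.dist(origin, viewpoint)
    # Drop one level for every doubling of distance beyond one voxel side.
    drop = int(math.log2(max(d / voxel_size, 1.0)))
    return min(max(max_levels - drop, 1), max_levels)
```

With a 50 m voxel side and 5 levels, a voxel 25 m away would use all 5 levels, while one 200 m away would use 3.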
After the target point cloud scene is rendered, it may be continuously determined whether the first viewpoint or the first perspective changes or moves, and when it is determined that the first viewpoint or the first perspective moves, the moved first viewpoint may be used as a second viewpoint and the moved first perspective may be used as a second perspective. The voxels in the visible region may be re-determined according to the second viewpoint, the second view angle, and the origin coordinates of all the voxels, and the point cloud data stored in the cache may be modified according to the re-determined voxels in the visible region.
Movement of the first viewpoint and/or the first view angle leads to two cases: one involving only scaling of the view, and one involving replacement of voxels. When the change of the first viewpoint or the first view angle only causes the view to scale, no new voxels are added to the visible range; when the change causes some new voxels to enter the visible region while other voxels originally in the visible region exit it, voxel replacement is involved.
In a specific implementation, when the change of the first viewpoint or the first view angle only causes the view to scale, it is only necessary to adjust the number of detail levels used by each voxel; the point cloud data corresponding to the voxels in the visible region stored in the cache does not need to be modified, and the target point cloud scene is re-rendered according to the adjusted number of detail levels for each voxel.
When the change of the first viewpoint or the first view angle causes some new voxels to enter the visual area and other voxels originally in the visual area exit the visual area, the voxels in the visual area need to be re-determined according to the second viewpoint, the second view angle and the origin coordinates of all the voxels, and the point cloud data corresponding to the new voxels entering the visual area is loaded into the cache.
If the data amount stored in the cache has reached its upper limit, it is determined whether any voxel has exited the visible region; among the voxels whose distance from the viewpoint is greater than that of the voxel entering the cache, the voxel that has been out of view for the longest time is removed from the cache. In one embodiment, a weighting function may combine the most recent observation time and the distance from the viewpoint to give each voxel in the cache a composite score, and the voxel to replace is selected based on that score; this can be regarded as a combination of LRU and a priority queue based on viewpoint distance. In one embodiment, voxels in the cache that have exited the visible region may continue to be deleted based on this strategy until the point cloud data corresponding to the new voxels entering the field of view can be fully loaded into the cache. Correspondingly, when a voxel exits the visible region, the point cloud data corresponding to the voxel is unloaded from the detail hierarchy, freeing storage space for point cloud data corresponding to voxels in the visible region.
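The described combination of LRU and a viewpoint-distance priority queue can be sketched with a single scoring function. The linear score and its weights are assumptions for illustration, and `VoxelCache` is a hypothetical name:

```python
import math

class VoxelCache:
    """Replacement-policy sketch: combine time since last observation (LRU)
    with distance from the current viewpoint into one score, and evict the
    voxel with the worst (highest) score first."""

    def __init__(self, capacity, w_age=1.0, w_dist=1.0):
        self.capacity = capacity
        self.entries = {}  # voxel origin -> (last_seen, data)
        self.w_age, self.w_dist = w_age, w_dist

    def touch(self, origin, data, viewpoint, now):
        self.entries[origin] = (now, data)
        while len(self.entries) > self.capacity:
            self._evict(viewpoint, now)

    def _evict(self, viewpoint, now):
        def score(item):
            origin, (last_seen, _) = item
            # Older observations and farther voxels both raise the score.
            return (self.w_age * (now - last_seen)
                    + self.w_dist * math.dist(origin, viewpoint))
        worst = max(self.entries.items(), key=score)[0]
        del self.entries[worst]
```

In this sketch a stale nearby voxel can outlive a recently seen but very distant one, which is the intended effect of blending the two criteria.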
From the above description, it can be seen that the embodiments of the present application achieve the following technical effects: by acquiring the first viewpoint, the first view angle and the origin coordinates of all voxels in the target point cloud scene, the voxels in the visible area can be determined according to the first viewpoint, the first view angle and the origin coordinates of all voxels in the target point cloud scene, and the point cloud data corresponding to the voxels in the visible area are stored in a cache, so that the point cloud data corresponding to the voxels in the visible area in the target point cloud scene can be loaded in blocks by taking the voxels as a unit, and all point cloud data in the target point cloud scene do not need to be loaded at one time, the data reading efficiency is effectively improved, and the point cloud data required for rendering can be efficiently acquired. Further, the point cloud data corresponding to the voxels in the visible area stored in the cache can be added to multiple levels of the detail level respectively, and the target point cloud scene is rendered according to the point cloud data in each level of the detail level, so that when the target point cloud scene is rendered by adopting the detail level, detail drawing can be performed according to the point cloud data corresponding to the voxels in the visible area, the data processing amount is effectively reduced, and the point cloud scene rendering can be performed efficiently and smoothly.
The above method is described below with reference to a specific example, however, it should be noted that the specific example is only for better describing the present application and is not to be construed as limiting the present application.
The embodiment of the invention provides a point cloud scene rendering method, as shown in fig. 2, which may include:
step 1: and carrying out voxelization segmentation on the point cloud scene.
Generally, a file format for storing point clouds describes the point cloud as records each formed of a set of three-dimensional coordinates and data fields. When a point cloud scene needs to be loaded, such files are read in full from the storage medium into the computer's main memory and graphics memory; if the point cloud file to be read is too large, the data cannot be fully loaded and the point cloud scene cannot be rendered. Therefore, in one embodiment, a complete point cloud scene can be divided into voxels, each voxel obtained by the division is stored as a separate point cloud file, and an index file records the association between all voxels and their original positions in the original complete point cloud scene.
Correspondingly, a storage structure comprising one index file and a plurality of data files (point cloud files) can be designed to suitably cut a complete point cloud scene, so that the scene can be loaded in blocks and its data volume appropriately compressed; the storage structure is shown in fig. 3. In fig. 3, the index file is on the left and the associated point cloud files are on the right, where entries such as intensity: 25 and label: GROUND in the table of fig. 3 are example data attributes of points in the point cloud file, stored alongside the coordinate offsets. The algorithm for generating one index file and multiple point cloud files from a single-file point cloud scene is as follows:
inputting: point cloud C ═ pi=(xi,yi,zi,di)},i∈[1,N]And the voxel side length R is more than 0. Wherein p isiCoordinates representing the ith point of the point cloud model, i.e. pi=(xi,yi,zi,di),diIs a point piThe data attributes may include, but are not limited to, at least one of: intensity, reflectivity, etc.
And (3) outputting: index I, point cloud file
Figure BDA0002287153660000121
The process is as follows:
1) Division.

From the point cloud C, generate

C′ = {(p_i − o_i, o_i)}, i ∈ [1, N],

where

o_i = (Γ(x_i, R), Γ(y_i, R), Γ(z_i, R)).

Here o_i is the origin coordinate, in the point cloud scene, of the voxel containing p_i after voxelization with side length R, and p_i − o_i is the coordinate offset (relative coordinate) of each point with respect to its voxel's origin. Γ(a, R) = ⌊a/R⌋ × R, i.e. the integer part of a/R multiplied by R, which is the largest multiple of R not greater than a; a is a variable.
2) Voxelization.

Aggregate the elements of C′ by o_i to obtain the voxelized point cloud scene

V = {V_j}, j ∈ [1, M],

where

V_j = {(p_jk − o_j, d_jk)}, k ∈ [1, N_j].

Here V_j denotes the j-th voxel of V, and p_jk − o_j and d_jk are, respectively, the coordinate offset of the k-th point belonging to V_j and that point's corresponding data attributes. For all p_i in the point cloud and the o_i corresponding to them, each p_i corresponds to exactly one o_i; after aggregation in C′, the points p_i sharing the same o_i are grouped together to form a voxel V_j, at which point each V_j corresponds to one origin o_j.
3) Storage.

Store the index file I = <O, R> = <{o_j → D_j}, R>, and store the point cloud data corresponding to each voxel V_j into its own point cloud file D_j. The index file contains the origin coordinates o_j of all voxels; o_j → D_j is the association between a voxel's origin coordinate and the point cloud file corresponding to that voxel. To allow the coordinate offsets of the points in a data file D_j to be restored to each point's original coordinates, the voxel side length R is also stored in the index file.
In this way, the space occupied by a complete point cloud scene is cut into a dense array of cubes of side length R; each cube is called a voxel, and the voxel edges are parallel to the three axes of the coordinate system of the point cloud scene space. Every point in the point cloud scene belongs, by position, to some voxel, and points belonging to the same voxel are collected together and stored in the corresponding point cloud data file. The origin coordinates of the voxels in the complete point cloud scene and the address indexes of their corresponding point cloud files are stored in the index file. In the point cloud files, the coordinates of the points are converted, according to the above formula, into offsets relative to the voxel's origin coordinate in the complete point cloud scene, which appropriately compresses the data volume of the point cloud scene and frees more data bits for storing numerical precision. When point cloud data in a point cloud file is loaded or read, the offsets need to be restored to the original coordinates of the points in the voxel corresponding to that file; at this time, the record of the corresponding voxel is retrieved from the index file, and a translation transformation is applied by adding the voxel's origin coordinate in the complete point cloud scene to the relative offsets of all points in the voxel.
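The division, voxelization, and storage steps above can be sketched as follows, using Γ(a, R) = ⌊a/R⌋ × R for the voxel origin on each axis. The file naming and dictionary layout are illustrative assumptions, not the application's on-disk format:

```python
import math
from collections import defaultdict

def voxelize(points, R):
    """Split a point cloud into voxels of side length R.

    points: iterable of (x, y, z, attrs), attrs being the per-point data field.
    Returns (index, files): index maps each voxel origin to a file key, and
    files maps the file key to that voxel's (offset, attrs) records.
    """
    voxels = defaultdict(list)
    for x, y, z, attrs in points:
        # Gamma(a, R) = floor(a / R) * R: the voxel origin on each axis.
        o = (math.floor(x / R) * R,
             math.floor(y / R) * R,
             math.floor(z / R) * R)
        # Store the offset relative to the voxel origin (compresses coordinates).
        voxels[o].append((x - o[0], y - o[1], z - o[2], attrs))
    names = {origin: f"voxel_{i}.pcd" for i, origin in enumerate(sorted(voxels))}
    files = {names[o]: voxels[o] for o in voxels}
    return {"R": R, "origins": names}, files
```

Restoring a point's original coordinates is the inverse translation: add the voxel origin from the index back onto the stored offset.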
Step 2: and loading point cloud data corresponding to all voxels in the visible area into a cache according to the visual angle and the viewpoint of an observer.
First, the index file of the point cloud scene is loaded, and the origin coordinates of all voxels in it are parsed to obtain the exact extent of the point cloud scene and the position of each voxel within the complete scene. Thus, wherever the observer is in the point cloud scene, the origin coordinates of each voxel can be used to calculate which voxels lie within the observer's field of view. In one embodiment, scene culling can be performed in advance at the voxel level according to the observer's view angle and viewpoint position, eliminating voxels that are not within the observer's field of view, thereby reducing rendering overhead, improving scene loading efficiency, and relieving storage pressure.
During initialization, the point cloud data corresponding to all voxels in the visible region is loaded into the cache according to the observer's initial view angle and initial viewpoint, where the cache stores this data using the replacement strategy of a priority queue with the Least Recently Used (LRU) algorithm. LRU is a page replacement algorithm that evicts and orders data according to its historical access records; its core idea is that data accessed recently has a higher chance of being accessed again in the future.
Further, it can be determined whether the storage space in the cache is full. If not, cache preloading can be performed, i.e., point cloud data corresponding to not-yet-loaded voxels just outside the visible range is selected, nearest first, and loaded; if the amount of point cloud data corresponding to all voxels in the visible region exceeds the upper limit of what the cache can store, the voxel data relatively farthest from the initial viewpoint is discarded, i.e., the voxels loaded into the cache are determined by a priority queue based on viewpoint distance.
Step 3: construct the detail hierarchy.
Point cloud data corresponding to each voxel in the cache may be loaded into the multiple levels of the detail hierarchy. Specifically, for a hierarchy of L levels, the points in each voxel are randomly divided into L subsets, where the number of points in the i-th subset accounts for 2i/[L × (L + 1)] of the total number of points in the voxel. In one embodiment, the number of detail levels to use may be determined from the observer's distance to the voxel, and when the i-th detail level is used, the points in subsets 1 through i are loaded accordingly.
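The subset sizes implied by the 2i/[L × (L + 1)] rule can be computed as follows; the fractions sum to 1 over i = 1..L, so the L subsets together cover the whole voxel. The rounding fix-up and the name `lod_subset_sizes` are assumptions for illustration:

```python
def lod_subset_sizes(n_points, L):
    """Points per subset: the i-th subset (i = 1..L) holds a fraction
    2i / (L * (L + 1)) of the voxel's points."""
    sizes = [round(n_points * 2 * i / (L * (L + 1))) for i in range(1, L + 1)]
    # Absorb rounding drift into the last subset so sizes sum to n_points.
    sizes[-1] += n_points - sum(sizes)
    return sizes
```

For example, with L = 5 and 3000 points per voxel, the subsets hold 200, 400, 600, 800, and 1000 points, so level 1 draws 200 points and level 5 draws all 3000.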
Step 4: rendering.
The number of layers of detail levels required to be adopted can be determined according to the position information of the observer, and point cloud scene rendering is carried out according to the determined number of layers of detail levels and point cloud data loaded in the plurality of detail levels.
For the whole point cloud scene, when an observer is close to an object, the number of other objects in the visual field of the observer is necessarily reduced, so that the observed object can show more details, and when the observer is far away from the object, more other objects enter the visual field, so that the details which can be shown by each object are reduced.
In some embodiments, when the observer's position moves, some voxels may enter the field of view while other voxels originally in the field of view exit it. As the view changes, the point cloud data corresponding to new voxels entering the field of view is loaded into the cache. If the cache's storage space is full, then among the voxels farther from the viewpoint than the voxel entering the cache, the one that has been out of view for the longest time is removed from the cache; this strategy is a combination of LRU and a priority queue based on viewpoint distance. Based on this strategy, voxels in the cache can be deleted continuously until the point cloud data corresponding to the new voxels entering the field of view can be fully loaded into the cache. Correspondingly, when the observer's position moves and voxels originally in the cache are replaced, their point cloud data is unloaded from the detail hierarchy to free storage space for the voxels within the visible range.
As shown in fig. 4, when the observer's position moves, the voxels within the visible range also change, following the change of viewpoint and view angle in fig. 4. Voxels closer to the observer within the visible range need to be rendered with more detail, which means that when the observer's position moves, the detail level at which each voxel in the visible range is rendered changes.
In a practical application, in a verification tool for point cloud map positioning, this point cloud scene rendering method was applied to render a dense point cloud map covering more than 300 km with a total file size at the TB level. This is not feasible with existing memory-based rendering schemes, whereas this method achieves interactive-level rendering response. The LRU cache size was set to 2 GB with a limit of 2000 voxels, the voxel side length was set to 50 meters, and the number of detail levels was set to 5. Under this configuration, operations that trigger detail-level replacement, such as scaling and rotation, reach a frame rate above 30 frames per second (FPS) with no perceptible stutter. Translation triggers cache replacement: when the view translates by a small amount, the LRU strategy takes effect and cache replacement plus re-rendering complete within 1 second; when the view transformation is large, a full LRU refresh is triggered, which takes 5 to 10 seconds.
Based on the same inventive concept, the embodiment of the present application further provides a point cloud scene rendering apparatus, as in the following embodiments. The principle by which the point cloud scene rendering apparatus solves the problem is similar to that of the point cloud scene rendering method, so the implementation of the apparatus can refer to the implementation of the method, and repeated parts are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated. Fig. 5 is a block diagram of a point cloud scene rendering apparatus according to an embodiment of the present disclosure; as shown in fig. 5, the apparatus may include an acquisition module 501, a determination module 502, a storage module 503, a processing module 504, and a rendering module 505, whose structure is described below.
The acquiring module 501 may be configured to acquire a first viewpoint, a first view angle, and origin coordinates of all voxels in a target point cloud scene;
a determining module 502, configured to determine voxels within a visible region according to a first viewpoint, a first view angle, and origin coordinates of all voxels in a target scene;
a storage module 503, configured to store point cloud data corresponding to voxels in the visible region in a cache;
a processing module 504, configured to add point cloud data corresponding to voxels in the visible region stored in the cache to multiple levels of detail levels, respectively;
the rendering module 505 may be configured to render the target point cloud scene according to the point cloud data in each level of the level of detail.
In one embodiment, the point cloud scene rendering apparatus may further include: the voxel segmentation unit is used for carrying out voxel segmentation on the target point cloud scene to obtain a plurality of voxels of the target point cloud scene; the acquisition unit is used for acquiring the origin coordinates of each voxel in a target point cloud scene; the storage unit is used for respectively storing point cloud data corresponding to each voxel of a target point cloud scene to obtain a point cloud file corresponding to each voxel; the index file establishing unit is used for establishing an index file according to the origin coordinates of each voxel in the target point cloud scene, wherein the index file comprises the association relationship between the point cloud file corresponding to each voxel and the origin coordinates of each voxel in the target point cloud scene.
The embodiment of the present application further provides an electronic device, which may specifically refer to a schematic structural diagram of the electronic device based on the point cloud scene rendering method provided in the embodiment of the present application shown in fig. 6, and the electronic device may specifically include an input device 61, a processor 62, and a memory 63. The input device 61 may be specifically configured to input a first viewpoint, a first view angle, and origin coordinates of all voxels in the target point cloud scene. The processor 62 may be specifically configured to determine voxels within the visible region according to a first viewpoint, a first view angle, and origin coordinates of all voxels in the target point cloud scene; storing point cloud data corresponding to the voxels in the visible region into a cache; respectively adding point cloud data corresponding to the voxels in the visible region stored in the cache to a plurality of levels of detail levels; and rendering the target point cloud scene according to the point cloud data in each level of the detail level. The memory 63 may be specifically configured to store parameters such as a first viewpoint, a first view angle, and origin coordinates of all voxels in the target point cloud scene.
In this embodiment, the input device may be one of the main apparatuses for information exchange between a user and a computer system. The input device may include a keyboard, a mouse, a camera, a scanner, a light pen, a handwriting input board, a voice input device, etc.; the input device is used to input raw data and a program for processing the data into the computer. The input device can also acquire and receive data transmitted by other modules, units and devices. The processor may be implemented in any suitable way. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth. The memory may in particular be a memory device used in modern information technology for storing information. The memory may include multiple levels, and in a digital system, the memory may be any memory as long as it can store binary data; in an integrated circuit, a circuit without a physical form and with a storage function is also called a memory, such as a RAM, a FIFO and the like; in the system, the storage device in physical form is also called a memory, such as a memory bank, a TF card and the like.
In this embodiment, the functions and effects specifically realized by the electronic device can be explained by comparing with other embodiments, and are not described herein again.
The embodiment of the present application further provides a computer storage medium based on a point cloud scene rendering method, where the computer storage medium stores computer program instructions, and when the computer program instructions are executed, the computer storage medium may implement: acquiring a first viewpoint, a first visual angle and origin coordinates of all voxels in a target point cloud scene; determining voxels in a visible area according to a first viewpoint, a first view angle and origin coordinates of all voxels in a target point cloud scene; storing point cloud data corresponding to the voxels in the visible region into a cache; respectively adding point cloud data corresponding to the voxels in the visible region stored in the cache to a plurality of levels of detail levels; and rendering the target point cloud scene according to the point cloud data in each level of the detail level.
In the present embodiment, the storage medium includes, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Cache (Cache), a Hard disk (HDD), or a Memory Card (Memory Card). The memory may be used to store computer program instructions. The network communication unit may be an interface for performing network connection communication, which is set in accordance with a standard prescribed by a communication protocol.
In this embodiment, the functions and effects specifically realized by the program instructions stored in the computer storage medium can be explained by comparing with other embodiments, and are not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the present application described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different from that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Although the present application provides method steps as described in the above embodiments or flowcharts, additional or fewer steps may be included in the method, based on conventional or non-inventive efforts. In the case of steps where no necessary causal relationship exists logically, the order of execution of the steps is not limited to that provided by the embodiments of the present application. When the method is executed in an actual device or end product, the method can be executed sequentially or in parallel according to the embodiment or the method shown in the figure (for example, in the environment of a parallel processor or a multi-thread processing).
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many embodiments and many applications other than the examples provided will be apparent to those of skill in the art upon reading the above description. The scope of the application should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the pending claims along with the full scope of equivalents to which such claims are entitled.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and it will be apparent to those skilled in the art that various modifications and variations can be made in the embodiment of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (13)

1. A point cloud scene rendering method is characterized by comprising the following steps:
acquiring a first viewpoint, a first visual angle and origin coordinates of all voxels in a target point cloud scene;
determining voxels in a visible area according to a first viewpoint, a first view angle and origin coordinates of all voxels in the target point cloud scene;
storing the point cloud data corresponding to the voxels in the visible area into a cache;
respectively adding the point cloud data corresponding to the voxels in the visible region stored in the cache to a plurality of levels of detail;
and rendering the target point cloud scene according to the point cloud data in each level of the detail level.
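As a non-limiting sketch of the claimed procedure, the steps of claim 1 can be illustrated in Python. The `Voxel` class, the cone-shaped distance-and-angle visibility test, and the round-robin split across levels of detail are hypothetical simplifications standing in for the claimed visible-area determination and detail levels:

```python
import math
from dataclasses import dataclass

@dataclass
class Voxel:
    origin: tuple   # origin coordinates of the voxel in the scene
    points: list    # point cloud data belonging to the voxel

def visible_voxels(viewpoint, view_angle_deg, view_dir, voxels, max_dist=100.0):
    """Stand-in for the visible-area test: a voxel is 'visible' when its
    origin lies within max_dist of the viewpoint and within half the view
    angle of the viewing direction (a 2D cone instead of a real frustum)."""
    half = math.radians(view_angle_deg) / 2.0
    out = []
    for v in voxels:
        dx, dy = v.origin[0] - viewpoint[0], v.origin[1] - viewpoint[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0:
            out.append(v)            # voxel containing the viewpoint
            continue
        if dist > max_dist:
            continue
        cos_a = max(-1.0, min(1.0, (dx * view_dir[0] + dy * view_dir[1]) / dist))
        if math.acos(cos_a) <= half:
            out.append(v)
    return out

def build_lods(cache, num_lods):
    """Spread each cached voxel's points across num_lods levels of detail."""
    lods = [[] for _ in range(num_lods)]
    for points in cache.values():
        for i, p in enumerate(points):
            lods[i % num_lods].append(p)   # round-robin split per voxel
    return lods

voxels = [Voxel((0, 0), ["p0", "p1"]), Voxel((5, 0), ["p2"]), Voxel((-50, 0), ["p3"])]
vis = visible_voxels((0, 0), 90, (1, 0), voxels)   # step 2: visible area
cache = {v.origin: v.points for v in vis}          # step 3: cache the data
lods = build_lods(cache, 2)                        # step 4: levels of detail
```

Rendering (step 5) would then draw the levels in `lods` progressively, coarse to fine.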
2. The method of claim 1, further comprising, prior to acquiring the first viewpoint, the first view angle, and the origin coordinates of all voxels in the target point cloud scene:
carrying out voxelization segmentation on the target point cloud scene to obtain a plurality of voxels of the target point cloud scene;
acquiring origin coordinates of each voxel in the target point cloud scene;
respectively storing point cloud data corresponding to each voxel of the target point cloud scene to obtain a point cloud file corresponding to each voxel;
and establishing an index file according to the origin coordinates of the voxels in the target point cloud scene, wherein the index file comprises the point cloud files corresponding to the voxels and the association relationship between the origin coordinates of the voxels in the target point cloud scene.
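A minimal illustration of the preprocessing in claim 2, assuming JSON storage and a cubic voxel size; `VOXEL_SIZE`, `voxel_i.json`, and `index.json` are hypothetical names and parameters, not taken from the application:

```python
import json
import math
import os
import tempfile

VOXEL_SIZE = 10.0  # assumed cubic voxel edge length (not from the application)

def voxelize(points, size=VOXEL_SIZE):
    """Group raw scene points into voxels keyed by each voxel's origin."""
    voxels = {}
    for x, y, z in points:
        origin = (math.floor(x / size) * size,
                  math.floor(y / size) * size,
                  math.floor(z / size) * size)
        voxels.setdefault(origin, []).append((x, y, z))
    return voxels

def write_voxel_files(voxels, directory):
    """One point cloud file per voxel, plus an index file recording the
    association between each voxel's origin coordinates and its file."""
    index = {}
    for i, (origin, pts) in enumerate(voxels.items()):
        path = os.path.join(directory, f"voxel_{i}.json")  # hypothetical name
        with open(path, "w") as f:
            json.dump({"origin": origin, "points": pts}, f)
        index[str(origin)] = path
    with open(os.path.join(directory, "index.json"), "w") as f:
        json.dump(index, f)
    return index

pts = [(1.0, 2.0, 3.0), (12.0, 2.0, 3.0), (1.5, 2.5, 3.5)]
with tempfile.TemporaryDirectory() as d:
    vox = voxelize(pts)
    idx = write_voxel_files(vox, d)
```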
3. The method of claim 2, wherein the step of storing point cloud data corresponding to each voxel of the target point cloud scene to obtain a point cloud file corresponding to each voxel comprises:
acquiring point cloud data corresponding to a target voxel and an origin coordinate of the target voxel in a target point cloud scene;
converting coordinates of each point in the point cloud data corresponding to the target voxel into an offset value relative to an origin coordinate of the target voxel in the target point cloud scene;
and storing the origin coordinates of the target voxels in the target point cloud scene and the offset values of the points relative to the origin coordinates of the target voxels in the target point cloud scene to obtain point cloud files corresponding to the target voxels.
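The origin-plus-offset storage of claim 3, and its inverse used when reading the data back (claim 4), can be sketched as follows; `encode_voxel` and `decode_voxel` are illustrative names:

```python
def encode_voxel(origin, points):
    """Claim 3: store each point as an offset relative to the voxel origin."""
    offsets = [tuple(c - o for c, o in zip(p, origin)) for p in points]
    return {"origin": origin, "offsets": offsets}

def decode_voxel(record):
    """Inverse step: absolute coordinate = origin + offset."""
    origin = record["origin"]
    return [tuple(o + d for o, d in zip(origin, off))
            for off in record["offsets"]]

rec = encode_voxel((100.0, 200.0, 0.0),
                   [(101.5, 203.0, 0.5), (109.0, 200.0, 2.0)])
```

Offsets are small numbers, which is what makes this representation compact for large scenes whose absolute coordinates are large.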
4. The method according to claim 3, further comprising, before storing the point cloud data corresponding to the voxels within the visible region in a cache:
acquiring the index file;
determining a point cloud file corresponding to the voxel in the visible area according to the index file and the origin coordinates of the voxel in the visible area in the target point cloud scene;
and determining point cloud data corresponding to the voxels in the visible area according to the offset value of each point in the point cloud file relative to the origin coordinates of the voxels in the target point cloud scene.
5. The method of claim 1, wherein respectively adding the point cloud data corresponding to the voxels within the visible area stored in the cache to a plurality of levels of detail comprises:
determining the number of levels of detail;
randomly dividing the point cloud data corresponding to each voxel in the visible region into a plurality of subsets, wherein the number of subsets obtained for each voxel is equal to the number of levels of detail;
and respectively adding the subsets obtained for each voxel in the visible region to the plurality of levels of detail.
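A sketch of the random division in claim 5, assuming a shuffle-then-round-robin split so each voxel yields one near-equal random subset per level of detail (the exact randomization is not specified by the claim; the fixed seed is only for reproducibility here):

```python
import random

def split_into_lods(voxel_points, num_lods, seed=0):
    """Shuffle each voxel's points, then deal them round-robin so every
    voxel contributes one near-equal random subset to each level of detail."""
    rng = random.Random(seed)   # fixed seed only for reproducibility
    lods = [[] for _ in range(num_lods)]
    for points in voxel_points.values():
        shuffled = list(points)
        rng.shuffle(shuffled)
        for i, p in enumerate(shuffled):
            lods[i % num_lods].append(p)
    return lods

cache = {(0, 0): list(range(6)), (10, 0): list(range(6, 10))}
lods = split_into_lods(cache, 3)
```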
6. The method according to claim 1, further comprising, after storing the point cloud data corresponding to the voxels within the visible area in a cache:
determining whether the data volume of the point cloud data corresponding to the voxels in the visible area is larger than the data volume storable in the cache;
and under the condition that the data volume of the point cloud data corresponding to the voxels in the visible area is determined to be larger than the data volume storable in the cache, removing from the cache the voxels that are farthest from the first viewpoint among the voxels in the visible area.
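The eviction rule of claim 6 can be sketched as follows, with cache capacity measured in point count as a stand-in for the storable data volume (an assumption; the claim leaves the unit unspecified):

```python
import math

def evict_until_fits(cache, viewpoint, capacity):
    """While the cached point count exceeds capacity, drop the voxel whose
    origin is farthest from the first viewpoint."""
    while sum(len(p) for p in cache.values()) > capacity and cache:
        farthest = max(cache, key=lambda origin: math.dist(origin, viewpoint))
        del cache[farthest]
    return cache

cache = {(0.0, 0.0): [1, 2], (5.0, 0.0): [3, 4], (50.0, 0.0): [5, 6]}
evict_until_fits(cache, (0.0, 0.0), 4)   # 6 points cached, capacity 4
```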
7. The method of claim 6, after determining whether the data amount of the point cloud data corresponding to the voxels in the visible region is larger than the data amount storable in the cache, further comprising:
under the condition that the data volume of the point cloud data corresponding to the voxels in the visible area is determined to be not larger than the data volume storable in the cache, determining whether the data volume of the point cloud data corresponding to the voxels in the visible area is smaller than the data volume storable in the cache or not;
and under the condition that the data volume of the point cloud data corresponding to the voxels in the visible area is determined to be smaller than the data volume storable in the cache, storing voxels that are outside the visible area and adjacent to the first viewpoint into the cache until the data volume of the point cloud data stored in the cache reaches the data volume storable in the cache.
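Correspondingly, the cache-filling rule of claim 7 might look like the following sketch, again measuring capacity in points and interpreting "adjacent to the first viewpoint" as nearest-first by distance (an interpretation, not a definition from the application):

```python
import math

def prefetch_neighbors(cache, all_voxels, viewpoint, capacity):
    """While spare capacity remains, pull uncached voxels into the cache
    nearest-first by distance from the viewpoint."""
    candidates = sorted((o for o in all_voxels if o not in cache),
                        key=lambda origin: math.dist(origin, viewpoint))
    for origin in candidates:
        pts = all_voxels[origin]
        if sum(len(p) for p in cache.values()) + len(pts) > capacity:
            break
        cache[origin] = pts
    return cache

all_voxels = {(0.0, 0.0): [1], (5.0, 0.0): [2], (50.0, 0.0): [3, 4]}
cache = {(0.0, 0.0): [1]}                      # visible voxel already cached
prefetch_neighbors(cache, all_voxels, (0.0, 0.0), 2)
```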
8. The method of claim 1, after rendering the target point cloud scene, further comprising:
determining whether the first viewpoint or the first view angle moves;
under the condition that it is determined that the first viewpoint or the first view angle has moved, taking the moved first viewpoint as a second viewpoint and the moved first view angle as a second view angle;
re-determining the voxels in the visible area according to the second viewpoint, the second view angle and the origin coordinates of all the voxels;
and modifying the point cloud data stored in the cache according to the re-determined voxels in the visible area.
9. The method of claim 1, wherein storing the point cloud data corresponding to the voxels within the visible area into a cache comprises: storing the point cloud data corresponding to the voxels in the visible area into the cache using a least recently used (LRU) algorithm.
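The least-recently-used caching of claim 9 can be sketched with an `OrderedDict`; this is a generic LRU illustration keyed by voxel origin, not the application's implementation:

```python
from collections import OrderedDict

class VoxelLRUCache:
    """Least-recently-used cache keyed by voxel origin: reads mark an entry
    fresh, and inserting past capacity evicts the stalest entry."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, origin):
        if origin not in self._data:
            return None
        self._data.move_to_end(origin)      # mark as recently used
        return self._data[origin]

    def put(self, origin, points):
        if origin in self._data:
            self._data.move_to_end(origin)
        self._data[origin] = points
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

lru = VoxelLRUCache(2)
lru.put((0, 0), ["a"])
lru.put((1, 0), ["b"])
lru.get((0, 0))              # refresh (0, 0)
lru.put((2, 0), ["c"])       # evicts (1, 0), the least recently used
```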
10. A point cloud scene rendering apparatus, comprising:
the acquisition module is used for acquiring a first viewpoint, a first view angle and origin coordinates of all voxels in a target point cloud scene;
the determining module is used for determining voxels in a visible area according to the first viewpoint, the first view angle and the origin coordinates of all voxels in the target point cloud scene;
the storage module is used for storing the point cloud data corresponding to the voxels in the visible area into a cache;
the processing module is used for respectively adding the point cloud data corresponding to the voxels in the visible region stored in the cache into a plurality of levels of detail;
and the rendering module is used for rendering the target point cloud scene according to the point cloud data in each level of the detail level.
11. The apparatus of claim 10, further comprising:
the voxel segmentation unit is used for carrying out voxel segmentation on the target point cloud scene to obtain a plurality of voxels of the target point cloud scene;
the acquisition unit is used for acquiring the origin coordinates of each voxel in the target point cloud scene;
the storage unit is used for respectively storing the point cloud data corresponding to each voxel of the target point cloud scene to obtain a point cloud file corresponding to each voxel;
and the index file establishing unit is used for establishing an index file according to the origin coordinates of the voxels in the target point cloud scene, wherein the index file comprises the association relationship between the point cloud file corresponding to the voxels and the origin coordinates of the voxels in the target point cloud scene.
12. A point cloud scene rendering apparatus comprising a processor and a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the method of any of claims 1 to 9.
13. A computer readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any one of claims 1 to 9.
CN201911164820.3A 2019-11-25 2019-11-25 Point cloud scene rendering method, device and equipment Pending CN111179394A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911164820.3A CN111179394A (en) 2019-11-25 2019-11-25 Point cloud scene rendering method, device and equipment
PCT/CN2020/098284 WO2021103513A1 (en) 2019-11-25 2020-06-24 Method, device, and apparatus for rendering point cloud scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911164820.3A CN111179394A (en) 2019-11-25 2019-11-25 Point cloud scene rendering method, device and equipment

Publications (1)

Publication Number Publication Date
CN111179394A true CN111179394A (en) 2020-05-19

Family

ID=70650050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911164820.3A Pending CN111179394A (en) 2019-11-25 2019-11-25 Point cloud scene rendering method, device and equipment

Country Status (2)

Country Link
CN (1) CN111179394A (en)
WO (1) WO2021103513A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111617480A (en) * 2020-06-04 2020-09-04 珠海金山网络游戏科技有限公司 Point cloud rendering method and device
WO2021103513A1 (en) * 2019-11-25 2021-06-03 Suzhou Zhijia Science & Technologies Co., Ltd. Method, device, and apparatus for rendering point cloud scene
CN113486276A (en) * 2021-08-02 2021-10-08 北京京东乾石科技有限公司 Point cloud compression method, point cloud rendering method, point cloud compression device, point cloud rendering equipment and storage medium
CN113689533A (en) * 2021-08-03 2021-11-23 长沙宏达威爱信息科技有限公司 High-definition modeling cloud rendering method
WO2022068672A1 (en) * 2020-09-30 2022-04-07 中兴通讯股份有限公司 Point cloud data processing method and apparatus, and storage medium and electronic apparatus
CN114756798A (en) * 2022-06-13 2022-07-15 中汽创智科技有限公司 Point cloud rendering method and system based on Web end and storage medium
CN115086502A (en) * 2022-06-06 2022-09-20 中亿启航数码科技(北京)有限公司 Non-contact scanning device
CN115205434A (en) * 2022-09-16 2022-10-18 中汽创智科技有限公司 Visual processing method and device for point cloud data
CN115269763A (en) * 2022-09-28 2022-11-01 北京智行者科技股份有限公司 Local point cloud map updating and maintaining method and device, mobile tool and storage medium

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN114708369B (en) * 2022-03-15 2023-06-13 荣耀终端有限公司 Image rendering method and electronic equipment
CN115984827B (en) * 2023-03-06 2024-02-02 安徽蔚来智驾科技有限公司 Point cloud sensing method, computer equipment and computer readable storage medium
CN116385571B (en) * 2023-06-01 2023-09-15 山东矩阵软件工程股份有限公司 Point cloud compression method and system based on multidimensional dynamic variable resolution

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104867174A (en) * 2015-05-08 2015-08-26 腾讯科技(深圳)有限公司 Three-dimensional map rendering and display method and system
US20180350044A1 (en) * 2017-06-02 2018-12-06 Wisconsin Alumni Research Foundation Systems, methods, and media for hierarchical progressive point cloud rendering

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
EP3343506A1 (en) * 2016-12-28 2018-07-04 Thomson Licensing Method and device for joint segmentation and 3d reconstruction of a scene
WO2018148924A1 (en) * 2017-02-17 2018-08-23 深圳市大疆创新科技有限公司 Method and device for reconstructing three-dimensional point cloud
US10861196B2 (en) * 2017-09-14 2020-12-08 Apple Inc. Point cloud compression
CN110070613B (en) * 2019-04-26 2022-12-06 东北大学 Large three-dimensional scene webpage display method based on model compression and asynchronous loading
CN111179394A (en) * 2019-11-25 2020-05-19 苏州智加科技有限公司 Point cloud scene rendering method, device and equipment


Also Published As

Publication number Publication date
WO2021103513A1 (en) 2021-06-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200519