WO2021103513A1 - Method, device, and apparatus for rendering point cloud scene - Google Patents

Method, device, and apparatus for rendering point cloud scene

Info

Publication number
WO2021103513A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
voxels
scene
view
visible area
Application number
PCT/CN2020/098284
Other languages
French (fr)
Inventor
Ruyang SHOU
Jinwen GUO
Lei Wang
Original Assignee
Suzhou Zhijia Science & Technologies Co., Ltd.
Application filed by Suzhou Zhijia Science & Technologies Co., Ltd.
Publication of WO2021103513A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G06T 15/08: Volume rendering
    • G06T 15/10: Geometric effects
    • G06T 15/30: Clipping
    • G06T 2210/00: Indexing scheme for image generation or computer graphics
    • G06T 2210/36: Level of detail
    • G06T 2210/56: Particle system, point based geometry or rendering

Definitions

  • the present application relates to the technical field of data processing, and in particular to a method, device, and apparatus for rendering a point cloud scene.
  • a point cloud is a data set comprising a large number of points in a three-dimensional space.
  • As there is no topological connection relationship between any two points in point cloud data, it can be difficult to understand the semantics (meaning) of data in a point cloud scene, for example which points relate to which objects. This presents a challenge when rendering a point cloud scene.
  • a point cloud scene is a scene represented by data in the point cloud and rendering means to display the data in a visual format from the point of view of an observer.
  • the point cloud may be rendered from the point of view of an observer at a specified location.
  • although in many cases a point cloud comprises radar data or data other than visual data, the point cloud may be rendered in visual form. Therefore, rendering a point cloud scene may be considered to be a form of three-dimensional image processing.
  • Point clouds are often used in sensor calibration, obstacle recognition and tracking, semantic target recognition, map positioning and other aspects of autonomous driving technology. They may, for example, be used for calibration, sensing, and navigation. Therefore, improving the efficiency and capability of point cloud scene rendering is of great significance in many fields related to autonomous driving.
  • FIG. 1 is a schematic diagram of blocks of a method for rendering a point cloud scene according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of the method for rendering a point cloud scene according to a specific embodiment of the present application
  • FIG. 3 is a schematic diagram of a storage structure according to a specific embodiment of the present application.
  • FIG. 4 is a schematic diagram of a change process of a point of view and an angle of view according to a specific embodiment of the present application
  • FIG. 5 is a schematic structural diagram of a device for rendering a point cloud scene according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an apparatus for rendering a point cloud scene according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of blocks of a method for rendering a point cloud scene according to an embodiment of the present application.
  • the embodiments of the present application may be implemented as a system, device, apparatus, method, or computer program product. Therefore, the disclosure of the present application can be specifically implemented in the following forms, namely: complete hardware, complete software (comprising firmware, resident software, microcode, etc. ) , or a combination of hardware and software.
  • One approach to improving the efficiency and capability of point cloud scene rendering is known as level of detail (LOD) technology.
  • In level of detail technology, multiple copies are generated for an object in a target scene when the scene is being constructed and drawn. Each copy is referred to as a level, and each level includes a three-dimensional model of the object. Each level has a different degree of detail.
  • When an object is placed within the visible range of the scene, as the object in the scene moves away from an observer, a lower level of detail will replace a higher level of detail.
  • When the observer gets close to a certain object, the number of other objects in his/her field of view will be reduced, so more details about the observed object can be shown.
  • a point cloud file which stores point cloud data usually comprises a set of three-dimensional coordinates and data fields corresponding to the points in the point cloud. There may be no topological connection relationship between any two points in the point cloud data. Therefore, in a conventional approach, when a point cloud scene is loaded, the entire point cloud file is read from a storage medium to a general memory of a computer and/or a memory of a video card. If the point cloud file needing to be read exceeds the upper limit of what can be stored in the memory, then the point cloud data in the point cloud scene cannot be completely loaded, and the point cloud scene cannot be rendered. Therefore, this approach to point cloud scene rendering may not be able to cope with large point clouds.
  • a first aspect of the present disclosure proposes a method for rendering a point cloud scene, comprising:
  • the visible area may be an area or range of the point cloud which is visible to the observer.
  • the first point of view and first angle of view may be a location and a viewing angle of the observer.
  • the ‘target’ point cloud scene is the point cloud scene which is to be rendered, which may for example be a view of the point cloud from the perspective of the observer.
  • acquiring the origin coordinates of the voxels may comprise acquiring the origin coordinates of all the voxels in the target point cloud scene.
  • determining the origin coordinates of the voxels may comprise determining the origin coordinates of all the voxels in the target point cloud scene.
  • each piece of point cloud data may correspond to a respective voxel in the visible area.
  • a cache may be a memory which is dedicated to providing a processor with relatively quick access to data which may be frequently used.
  • a cache may have a relatively small size in order to maintain speed of access, which means that the capacity of the cache may be limited.
  • the cache may for example be a cache memory of a computing device which may be accessible by a processor which is configured to perform the above described method and render the point cloud scene.
  • Separately adding the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail may include dividing each piece of point cloud data into multiple subsets of point cloud data and storing each subset separately, wherein each subset corresponds to a respective level of detail.
  • Rendering a target point cloud scene according to the pieces of point cloud data in the various levels of detail may comprise rendering the target point cloud scene at a particular level of detail by using subsets of point cloud data corresponding to said particular level of detail and subsets corresponding to levels of detail which are lower than said particular level of detail.
  • the level of detail to be rendered for a voxel may be determined based on a distance between the observer and the voxel, for instance a distance between the first point of view and the origin coordinates of the voxel.
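  • By way of a non-authoritative illustration, the following Python sketch shows one way such a distance-based selection might be implemented, together with the rule of rendering with the chosen level's subsets plus all lower levels' subsets. The function names, the banding rule, and the 50-meter band width are assumptions for illustration and are not prescribed by the present disclosure.

```python
import math

def select_level_of_detail(point_of_view, voxel_origin, num_levels, band_width=50.0):
    """Hypothetical rule: the nearest distance band uses the highest level of
    detail (num_levels); each further band drops one level, down to level 1."""
    distance = math.dist(point_of_view, voxel_origin)
    band = int(distance // band_width)       # 0 for the nearest band
    return max(1, num_levels - band)

def subsets_for_rendering(lod_subsets, level):
    """Render a voxel with the subsets of the chosen level and all lower levels."""
    return lod_subsets[:level]               # subsets 1..level (0-indexed list)

# Example: an observer at the origin, a voxel origin 120 m away, five levels.
level = select_level_of_detail((0.0, 0.0, 0.0), (120.0, 0.0, 0.0), num_levels=5)
print(level)  # -> 3: bands 0-50 m, 50-100 m, 100-150 m give levels 5, 4, 3
```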
  • a second aspect of the present disclosure provides a method for rendering a point cloud scene, comprising:
  • dividing a point cloud scene into a plurality of voxels, each voxel including point cloud data;
  • storing point cloud data corresponding to voxels in the visible area into a cache, wherein point cloud data corresponding to voxels outside the visible area is not stored in the cache, and wherein storing the point cloud data includes dividing the point cloud data in each voxel in the visible area into a plurality of subsets and storing each subset separately; and
  • rendering a target point cloud scene at a determined level of detail by rendering each voxel using a number of subsets according to the determined level of detail.
  • The determined level of detail may be a level of detail selected from among a plurality of levels of detail. In some examples, the level of detail may be determined separately for each voxel, for instance based on a distance of the observer from the voxel.
  • a third aspect of the present disclosure provides an apparatus for rendering a point cloud scene according to the first or second aspect of the present disclosure.
  • the apparatus may comprise a processor and a memory for storing instructions executable by the processor, wherein the instructions, when executed by the processor, implement the method of the first or second aspect of the present disclosure.
  • a fourth aspect of the present disclosure provides a non-transitory computer readable storage medium storing machine readable instructions which are executable by a processor to perform the method of the first or second aspect of the present disclosure.
  • the first point of view, the first angle of view, and the origin coordinates of voxels in the target point cloud scene are acquired, so that voxels lying within the visible area can be determined based on the first point of view, the first angle of view, and the origin coordinates of voxels in the target point cloud scene.
  • pieces of point cloud data corresponding to the voxels in the visible area can be stored into the cache, while pieces of point cloud data not corresponding to voxels in the visible area are not stored in the cache, thus saving memory resources.
  • the pieces of point cloud data corresponding to the voxels in the visible area in the target point cloud scene can be loaded in blocks with a voxel as a unit without the need to load all point cloud data in the target point cloud scene in one go, which may effectively improve the data reading efficiency, enabling the point cloud data required for rendering to be acquired efficiently.
  • the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache are separately added to the multiple levels of detail, and the target point cloud scene is rendered using the pieces of point cloud data corresponding to the voxels in the visible area at the determined level of detail. The amount of data processing may thus be reduced, enabling the rendering to be carried out more efficiently using fewer processing resources.
  • FIG. 1 shows an example method for rendering a point cloud scene.
  • the method may comprise the following blocks:
  • S101 acquiring a first point of view, a first angle of view, and origin coordinates of a plurality of voxels in a target point cloud scene. For example, origin coordinates of all voxels in the target point cloud scene may be acquired.
  • the target point cloud scene may be a three-dimensional scene composed of a point cloud that needs to be rendered.
  • the rendered target point cloud scene may be used by an autonomous driving system for obstacle recognition, road recognition and/or to produce a map. In other examples, the rendered point cloud scene may be used in a computer game or simulation.
  • the first point of view and the first angle of view can be an initial point of view and an initial angle of view of the observer or the observation apparatus in the target point cloud scene. The first point of view can be used to characterize a position of the observer or observation apparatus in the target point cloud scene, and the first angle of view can be used to characterize the included angle formed at the point of view by the light rays drawn from both ends (upper and lower, or left and right) of an object when the object is observed.
  • the observation apparatus may include, but is not limited to, at least one of the following: a laser radar, a contact scanner, a structured light scanner, a triangulation scanner, a stereo camera, a time-of-flight camera, etc.
  • the voxel is an abbreviation of volume pixel and may be a minimum unit in segmentation of digital data in a three-dimensional space.
  • a voxel-containing cube can be represented through volume rendering or through extraction of polygonal isosurfaces at a given threshold contour.
  • voxelized segmentation is performed on a complete target point cloud scene, and the space where the target point cloud scene is located may be divided into multiple voxels.
  • Each side length of the voxel may be set according to the situation or specific implementation, and each side length direction of the voxel may be parallel to three axes of a coordinate system of the space of the target point cloud scene.
  • the voxels obtained through segmentation may be cubes or cuboids or other shapes. The present application is not limited to any particular shape of voxel.
  • the origin coordinates of various voxels obtained through segmentation in the target point cloud scene can be further acquired.
  • the origin coordinates of the various voxels in the target point cloud scene can be used to characterize positions of the various voxels in the entire target point cloud scene.
  • the above origin may be the point in a voxel that is closest to the origin of the coordinate system of the space of the target point cloud scene, or may be the geometric center point of the voxel or the intersection of the length, width, and height axes in the voxel, and may be specifically determined according to the actual situation, which is not limited in the present application.
  • pieces of point cloud data corresponding to the various voxels in the target point cloud scene are separately stored to obtain point cloud files corresponding to the various voxels. That is, any point in the target point cloud scene belongs to a voxel in terms of position; the points belonging to the same voxel are grouped together and stored in a corresponding point cloud file, so that the point cloud data in the target point cloud scene can be read and loaded in blocks.
  • an index file can be created according to the origin coordinates of the various voxels in the target point cloud scene, wherein the index file comprises a correlation between the point cloud files corresponding to the various voxels and the origin coordinates of the various voxels in the target point cloud scene.
  • the correlation can be embodied in the form of a table, a key-value pair, etc.
  • the method for establishing the correlation can be determined according to the actual situation, which is not limited in the present application. Therefore, a voxel at any position in the target point cloud scene and a point cloud file corresponding to the voxel can be intuitively determined by means of the index file.
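  • As a purely illustrative example, such a key-value correlation, once parsed into memory, might look like the following Python structure; the file names, the .pcd extension, and the 50-meter side length are hypothetical values, not taken from the present disclosure.

```python
# Hypothetical in-memory form of the index file after parsing.
voxel_side_length = 50.0                      # R; may also be stored in the index file
index = {
    (0.0, 0.0, 0.0):   "voxel_0000.pcd",      # voxel origin -> point cloud file
    (50.0, 0.0, 0.0):  "voxel_0001.pcd",
    (50.0, 50.0, 0.0): "voxel_0002.pcd",
}
```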
  • a piece of point cloud data corresponding to a target voxel and an origin coordinate of the target voxel in the target point cloud scene can be acquired, and coordinates of various points in the point cloud data corresponding to the target voxel can be converted into offset values relative to the origin coordinate of the target voxel in the target point cloud scene.
  • When a point cloud file is loaded, the offsets of the various points in the point cloud file need to be restored to the original coordinates of the various points in the target point cloud scene.
  • The origin coordinate of the voxel corresponding to the point cloud file in the target point cloud scene can be acquired according to the index file, and the relative offsets of the various points are superimposed on this origin coordinate by translation transformation, such that the original coordinates of the various points in the target point cloud scene can be restored.
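  • A minimal Python sketch of this restoration step is given below, assuming the index file has been parsed into a mapping from voxel origin coordinates to point cloud files; the text layout read by numpy.loadtxt is an assumption for illustration, not the disclosure's required file format.

```python
import numpy as np

def restore_original_coordinates(offsets, voxel_origin):
    """Superimpose the per-point offsets on the voxel's origin coordinate (a
    translation transformation) to recover coordinates in the target scene."""
    return np.asarray(offsets, dtype=float) + np.asarray(voxel_origin, dtype=float)

def load_voxel_points(voxel_origin, index):
    """index: {voxel_origin: point_cloud_file}, parsed from the index file."""
    point_cloud_file = index[voxel_origin]
    offsets = np.loadtxt(point_cloud_file, usecols=(0, 1, 2))  # assumed text layout
    return restore_original_coordinates(offsets, voxel_origin)
```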
  • S102 determining voxels in a visible area according to the first point of view, the first angle of view, and the origin coordinates of all the voxels in the target point cloud scene.
  • the specific positions of the various voxels in the target point cloud scene can be determined according to the origin coordinates of all the voxels in the target scene above. Therefore, it is possible to determine voxels, in all the voxels, in the visible area of the observer or observation apparatus according to the first point of view, the first angle of view, and the origin coordinates of all the voxels in the target point cloud scene.
  • scene culling can be performed in advance at a voxel level in the point cloud scene according to the first point of view and the first angle of view, to cull voxels beyond the visible area of the observer or observation apparatus, so as to reduce rendering overheads and improve the scene loading efficiency on the basis of loading point cloud data in blocks.
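  • The present disclosure does not specify the geometric test used for culling. The sketch below illustrates one simplified possibility in Python: the angle of view is treated as a horizontal field of view around a viewing direction, and only voxel origin coordinates are tested. A production implementation would intersect voxel cubes with the full three-dimensional view frustum.

```python
import math

def voxels_in_visible_area(voxel_origins, point_of_view, view_direction,
                           angle_of_view_deg, max_range):
    """Keep voxels whose origin lies within the horizontal field of view and
    within max_range of the observer. view_direction is a bearing in radians."""
    half_angle = math.radians(angle_of_view_deg) / 2.0
    visible = []
    for origin in voxel_origins:
        dx = origin[0] - point_of_view[0]
        dy = origin[1] - point_of_view[1]
        if math.hypot(dx, dy) > max_range:
            continue                          # too far away to render
        bearing = math.atan2(dy, dx)
        # Smallest signed angle between the bearing and the view direction.
        delta = abs((bearing - view_direction + math.pi) % (2.0 * math.pi) - math.pi)
        if delta <= half_angle:
            visible.append(origin)
    return visible
```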
  • S103: storing pieces of point cloud data corresponding to the voxels in the visible area into a cache. Point cloud files corresponding to the various voxels in the visible area can be acquired according to the origin coordinates of the voxels in the visible area in the target point cloud scene and the index file, so that pieces of point cloud data corresponding to the various voxels in the visible area can be acquired and stored into the cache.
  • compared with a general memory, a cache may have a higher data reading efficiency and a shorter data response time.
  • a cache may be close to a processor and may have low latency. Therefore, the use of the cache to store the pieces of point cloud data corresponding to the various voxels in the visible area can reduce a data transmission and reading time, thereby improving a processing efficiency.
  • a fixed area may be reserved in the memory for the cache, for storing point cloud data and metadata corresponding to voxels.
  • the cache can implement an array for indexing and managing voxel objects loaded into a data area. Each element in the array points to point cloud data corresponding to a loaded voxel.
  • the cache can store the pieces of point cloud data corresponding to all the voxels in the visible area by using a least recently used (LRU) replacement strategy with a priority queue.
  • LRU is a type of page replacement algorithm, wherein data is evicted according to its historical access record. The core idea is: if data has been accessed recently, the probability of it being accessed again in the future is also higher.
  • By using the LRU algorithm to store point cloud data, the data in the cache can be more conveniently and effectively replaced when the first point of view and first angle of view change.
  • For example, suppose that, starting at a position A in the target point cloud scene, the observer continuously moves through another position B in space and reaches a destination C, wherein A is closer to C than B in distance. Because A has been observed recently while B has not, if the LRU algorithm is used in this case, the probability of moving to A next time is considered higher than to B, and priority is given to retaining A's data in the cache.
  • The amount of data that can be stored in the cache and the amount of point cloud data corresponding to the voxels in the visible area may be acquired first. It may then be determined whether the amount of point cloud data corresponding to the voxels in the visible area is greater than the amount of data that can be stored in the cache. When it is determined to be greater, that is, when the amount of point cloud data corresponding to the voxels in the visible area exceeds the upper limit of the amount of data that can be stored in the cache, the voxels in the visible area that are farthest from the first point of view may be removed from the cache.
  • When the amount of point cloud data corresponding to the voxels in the visible area is not greater than the amount of data that can be stored in the cache, voxels outside the visible area that are adjacent to the first point of view are stored into the cache, until the amount of point cloud data stored in the cache reaches the amount of data that can be stored in the cache. That is, when it is determined that the amount of point cloud data corresponding to the voxels in the visible area is less than the amount of data that can be stored in the cache, voxels outside the visible area are sequentially stored into the cache in ascending order of their distance from the first point of view, until the amount of point cloud data stored in the cache reaches the amount of data that can be stored in the cache.
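  • The following Python sketch summarizes the cache-filling policy described above: visible voxels are loaded nearest-first, the farthest ones are skipped when the capacity would be exceeded, and any remaining space is pre-filled with nearby voxels outside the visible area. The class name, the size units, and the load/size callbacks are illustrative assumptions.

```python
import math
from collections import OrderedDict

class VoxelCache:
    """Point cloud data keyed by voxel origin, kept in least-recently-used order."""

    def __init__(self, capacity):
        self.capacity = capacity       # upper limit on the cached data volume
        self.used = 0
        self.entries = OrderedDict()   # voxel origin -> (point cloud data, size)

    def touch(self, origin):
        """Mark a voxel as most recently used, e.g. each time it is rendered."""
        self.entries.move_to_end(origin)

    def fill(self, visible, outside, point_of_view, load_fn, size_fn):
        """Load visible voxels nearest-first, skipping the farthest when the
        capacity would be exceeded; then pre-load nearby voxels outside the
        visible area until the cache is full."""
        for group in (visible, outside):
            for origin in sorted(group, key=lambda o: math.dist(point_of_view, o)):
                if origin in self.entries:
                    continue
                data = load_fn(origin)           # read this voxel's point cloud file
                size = size_fn(data)
                if self.used + size > self.capacity:
                    return                       # the farthest voxels stay unloaded
                self.entries[origin] = (data, size)
                self.used += size
```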
  • S104 separately adding the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail.
  • the level of detail (LOD) technology can be used to improve the efficiency and ability of point cloud scene rendering.
  • level of detail technology is used to describe the same scene with different precisions. Its working principle is: when the point of view is close to an object, the observable details of the model are rich; when the point of view is far from the model, the observed details gradually blur.
  • the system selects the appropriate details to display in a timely manner, avoiding the time wasted on loading and rendering details that contribute relatively little, thus effectively balancing picture continuity against resolution. Therefore, the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache can be separately added to multiple levels of detail, and different levels of detail retain pieces of point cloud data with different degrees of detail for the various voxels.
  • the number of levels of detail can be determined first, wherein the number of levels can be a positive integer greater than or equal to 1, for example: 4, 5, etc., which can be specifically determined according to the actual situation and is not limited in the present application.
  • the pieces of point cloud data corresponding to the various voxels in the visible area can be separately and randomly divided to obtain multiple subsets, wherein the number of subsets obtained by randomly dividing the point cloud data corresponding to each of the voxels is equal to the number of levels of detail.
  • the point cloud data may be divided into multiple subsets using a hash or other algorithm instead of randomly dividing into multiple subsets.
  • the multiple subsets obtained by dividing the pieces of point cloud data corresponding to the various voxels in the visible area can be separately added to multiple levels of detail. For example, when the number of levels of detail is 5, the point cloud data corresponding to voxel 1 in the visible area can be randomly divided to obtain 5 subsets (subset 1, subset 2, subset 3, subset 4, subset 5) corresponding to voxel 1, and the subset 1 is added to the first level of detail, the subset 2 is added to the second level of detail, the subset 3 is added to the third level of detail, the subset 4 is added to the fourth level of detail, and the subset 5 is added to the fifth level of detail.
  • different levels of detail retain pieces of point cloud data with different degrees of detail for the various voxels. When the i-th level of detail is used, all the pieces of point cloud data in the first to the i-th subsets will be loaded accordingly.
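  • A Python sketch of the subset division is given below. The random split follows the description above; the subset-size proportion 2i / [L · (L + 1)] is the one given in Block 3 of the specific embodiment later in this document, and all names are illustrative.

```python
import random

def divide_into_subsets(points, num_levels):
    """Randomly divide a voxel's points into L subsets, the i-th holding about
    2i / [L(L + 1)] of the points; these fractions sum to 1 over i = 1..L."""
    points = list(points)
    random.shuffle(points)
    total, denom = len(points), num_levels * (num_levels + 1)
    subsets, start = [], 0
    for i in range(1, num_levels + 1):
        count = round(total * 2 * i / denom)
        subsets.append(points[start:start + count])
        start += count
    subsets[-1].extend(points[start:])   # absorb any rounding remainder
    return subsets

def points_at_level(subsets, level):
    """Using the i-th level of detail loads subsets 1..i."""
    return [p for subset in subsets[:level] for p in subset]
```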
  • S105 rendering the target point cloud scene according to the pieces of point cloud data in the various levels of detail.
  • the target point cloud scene can be rendered according to the pieces of point cloud data in the various levels of detail, wherein voxels closer to the point of view can be rendered with relatively more levels of detail to show more details of the observed object.
  • the number of levels of detail to be used for the various voxels in the visible area can be determined according to the first point of view and the first angle of view, and a point cloud scene is rendered according to the determined number of levels of detail to be used for the various voxels in the visible area and pieces of point cloud data in multiple levels of detail.
  • After rendering the target point cloud scene, it is possible to continue to determine whether the first point of view or the first angle of view has changed or moved. When it is determined that the first point of view or the first angle of view has moved, it is possible to take the first point of view obtained after the movement as a second point of view, and to take the first angle of view obtained after the movement as a second angle of view.
  • the voxels in the visible area can be re-determined according to the second point of view, the second angle of view and the origin coordinates of all voxels, and the pieces of point cloud data stored in the cache can be modified according to the re-determined voxels in the visible area.
  • the result of the movement of the first point of view and/or the first angle of view comprises two cases: zooming of the angle of view only, and voxel replacement. That is, when a change in the first point of view or the first angle of view only causes zooming of the angle of view, no voxels within the visible range are newly added; and when the first point of view or the first angle of view changes such that some new voxels enter the visible area and some voxels that were originally in the visible area exit the field of view, voxel replacement occurs.
  • When the first point of view or the first angle of view changes such that some new voxels enter the visible area and some voxels that were originally in the visible area exit the field of view, it is necessary to re-determine the voxels in the visible area according to the second point of view, the second angle of view, and the origin coordinates of all the voxels, and to load the pieces of point cloud data corresponding to the voxels newly entering the visible area into the cache.
  • When the amount of data stored in the cache has reached the upper limit, it is determined whether any voxel has exited the visible area. If it is determined that there is a voxel exiting the visible area, then among the voxels that have exited the visible area, a voxel whose distance from the point of view is greater than that of the voxels remaining in the cache is selected, and the voxel that has been out of the field of view for the longest time is culled from the cache.
  • a harmonic function may be used to combine the most recently observed time and the distance from the point of view, to give each voxel in the cache a current comprehensive score, and a voxel to be replaced may be selected based on the score.
  • This strategy may be called a combination of LRU with a priority queue based on distances from the point of view.
  • the voxels that have exited the visible area may be continuously deleted from the cache based on the above strategy until all pieces of point cloud data corresponding to the new voxels that have entered the view can be loaded into the cache.
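  • The disclosure names a harmonic function but does not give its formula. The Python sketch below therefore uses a harmonic mean of the time out of view and the distance from the point of view as an assumed instantiation; the units and any normalization of the two terms are likewise hypothetical.

```python
def eviction_score(seconds_out_of_view, distance_from_point_of_view):
    """Assumed harmonic-mean score: it grows with both staleness and distance,
    so the highest-scoring voxel is the best eviction candidate."""
    s, d = seconds_out_of_view, distance_from_point_of_view
    if s <= 0 or d <= 0:
        return 0.0
    return 2.0 / (1.0 / s + 1.0 / d)

def choose_voxel_to_replace(cache_metadata):
    """cache_metadata: {voxel_origin: (seconds_out_of_view, distance)}."""
    return max(cache_metadata, key=lambda o: eviction_score(*cache_metadata[o]))
```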
  • When a voxel is culled from the cache, the point cloud data corresponding to the voxel is also unloaded from the levels of detail to free up storage space for storing pieces of point cloud data corresponding to voxels within the visible range.
  • the embodiments of the present application achieve the following technical effects: the first point of view, the first angle of view, and the origin coordinates of all voxels in the target point cloud scene are acquired, so that the voxels in the visible area can be determined according to the first point of view, the first angle of view, and the origin coordinates of all voxels in the target point cloud scene; and the pieces of point cloud data corresponding to the voxels in the visible area are stored into the cache, so that the pieces of point cloud data corresponding to the voxels in the visible area in the target point cloud scene can be loaded in blocks with a voxel as a unit without the need to load all point cloud data in the target point cloud scene at one time, which effectively improves the data reading efficiency, such that point cloud data required for rendering can be efficiently acquired.
  • the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache are separately added to the multiple levels of detail, and the target point cloud scene is rendered according to the pieces of point cloud data in the various levels of detail, so that detail drawing can be performed according to the pieces of point cloud data corresponding to the voxels in the visible area when the target point cloud scene is rendered at the level of detail, which effectively reduces the amount of data processing, such that point cloud scene rendering can be performed efficiently and smoothly.
  • An embodiment of the present disclosure provides a method for rendering a point cloud scene, as shown in FIG. 2, which may comprise:
  • Block 1: voxelized segmentation of the point cloud scene.
  • The file format for storing a point cloud describes the point cloud as records consisting of a set of three-dimensional coordinates with data fields.
  • Conventionally, the file is completely read from the storage medium into the computer's general memory and graphics card memory. If the point cloud file to be read is too large and the data cannot be completely loaded, the point cloud scene cannot be rendered. Therefore, in one embodiment, it is possible to perform voxelized segmentation on a complete point cloud scene. Each voxel obtained after the segmentation is stored as a separate point cloud file, and all voxels and their correlation with their original positions in the complete point cloud scene are recorded through an index file.
  • This yields a storage structure which comprises an index file and several data files (point cloud files).
  • This structure appropriately cuts a complete point cloud scene, so that the point cloud scene can be loaded in blocks, and the data amount of the point cloud scene can be appropriately compressed, wherein the storage structure is shown as in FIG. 3.
  • In FIG. 3, the index file is on the left side and the associated point cloud files are on the right side, with fields such as "Label: GROUND".
  • the algorithm for generating a corresponding index file and several point cloud files from the point cloud scene of a single file is as follows:
  • the point cloud is a collection Q of points p_i = (x_i, y_i, z_i) together with their data attributes d_i, where d_i is the data attribute of point p_i;
  • the data attribute may include but is not limited to at least one of the following: intensity, reflectivity, etc.;
  • o_i = (φ(x_i, R), φ(y_i, R), φ(z_i, R)) is the origin coordinate of the voxel obtained after p_i is voxelized with a voxel side length R in the point cloud scene, and p_i − o_i is the coordinate offset (relative coordinate) of each point in the voxel relative to the origin coordinate of the voxel;
  • φ(a, R) = floor(a/R) · R, where a is a variable; that is, φ(a, R) floors a/R and multiplies the result by R, giving the largest multiple of R not greater than a;
  • V_j represents the j-th voxel of Q, and p_jk and d_jk are respectively a point belonging to V_j and its corresponding data attribute, where Q is the collection of all p_i and their corresponding o_i in the point cloud, each p_i corresponding to an o_i; each V_j corresponds to an origin coordinate o_j;
  • the point cloud data corresponding to each voxel V_j is stored to its respective point cloud file D_j; and
  • the index file contains the origin coordinates o_j of all voxels, and o_j → D_j is the correlation between the origin coordinates of the voxels and the point cloud files corresponding to the voxels.
  • the voxel side length R may also be stored in the index file.
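  • The formulas above translate almost directly into code. The following Python sketch groups points by voxel origin and records per-point offsets; writing each voxel record to its own point cloud file D_j is omitted, and the in-memory layout is an illustrative assumption rather than the disclosure's required format.

```python
from collections import defaultdict

def phi(a, R):
    """phi(a, R) = floor(a / R) * R, the largest multiple of R not greater than a."""
    return (a // R) * R            # floor division also handles negative coordinates

def voxelize(point_cloud, R):
    """point_cloud: iterable of ((x, y, z), d) pairs, d being the data attribute.
    Returns the index {origin o_j: voxel id j} and the per-voxel offset records."""
    voxels = defaultdict(list)
    for (x, y, z), d in point_cloud:
        o = (phi(x, R), phi(y, R), phi(z, R))                   # voxel origin o_i
        voxels[o].append(((x - o[0], y - o[1], z - o[2]), d))   # offset p_i - o_i
    index = {origin: j for j, origin in enumerate(voxels)}      # o_j -> D_j
    return index, voxels
```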
  • In a point cloud file, the coordinate of a point is converted to an offset relative to the origin coordinate of the voxel in the complete point cloud scene according to the above formula, so that the amount of data in the point cloud scene can be appropriately compressed, saving more data bits for numerical precision.
  • When point cloud data in a point cloud file is loaded/read, it is necessary to restore the offsets to the original coordinates of the points in the voxel corresponding to the point cloud file. In this case, a record of the corresponding voxel needs to be retrieved from the index file.
  • the relative offsets of all points in the voxel can be superimposed on the origin coordinate of the voxel in the above complete point cloud scene to perform translation transformation.
  • Block 2: loading of pieces of point cloud data corresponding to all voxels in the visible area into the cache according to the observer’s angle of view and point of view.
  • the index file of the point cloud scene is loaded first.
  • the precise size of the point cloud scene and the positions of various voxels in the complete point cloud scene can be obtained. Therefore, no matter what position the observer is in the point cloud scene, it is possible to calculate which voxels are in the observer’s field of view according to the origin coordinates of the various voxels.
  • scene culling can be performed in advance at the voxel level in the point cloud scene according to the observer’s angle of view and point of view position, to cull voxels that are not within the observer’s field of view, so as to reduce rendering overheads and improve scene loading efficiency, thereby relieving the storage pressure.
  • If the cache is not filled, cache pre-loading can be carried out; that is, the selection extends beyond the visible range to load the point cloud data corresponding to unloaded voxels nearby. If the amount of point cloud data corresponding to all voxels in the visible area exceeds the upper limit of the amount of data that can be stored in the cache, then the voxel data that is relatively far away from the initial point of view, among the voxels in the visible area, is discarded. That is, the voxels to be loaded into the cache can be determined based on a priority queue of distances from the point of view.
  • Block 3: level of detail construction.
  • the pieces of point cloud data corresponding to the various voxels in the cache can be loaded into multiple levels of detail. Specifically, for L levels of detail, the points within each voxel are randomly divided into L subsets, wherein the number of points in the i-th subset accounts for 2i / [L · (L + 1)] of the total number of points contained in the voxel. In one embodiment, it is possible to determine how many levels of detail need to be used according to the distance from the observer to the voxel. When the i-th level of detail is used, the points in the first to the i-th subsets will be loaded accordingly.
  • Block 4: rendering.
  • the number of levels of detail to be used can be determined according to the position information of the observer, and the point cloud scene rendering can be performed according to the determined number of levels of detail and the point cloud data loaded in multiple levels of detail.
  • When the position of the observer moves, some voxels will enter the field of view and some voxels that were originally in the visible range will exit the field of view. Due to the change of angle of view, the pieces of point cloud data corresponding to the new voxels entering the field of view are loaded into the cache. When the storage space of the cache is full, among the voxels that have exited the field of view, a voxel whose distance from the point of view is greater than that of the voxels remaining in the cache is selected, and the voxel that has been out of the field of view for the longest time is culled from the cache.
  • This strategy is a combination of LRU and a priority queue based on distances from the point of view.
  • the voxels may be continuously deleted from the cache based on the above strategy until all pieces of point cloud data corresponding to the new voxels that have entered the view can be loaded into the cache.
  • When a voxel is culled, the corresponding point cloud data is unloaded from the levels of detail to free up storage space for voxels within the visible range.
  • the above process may be specifically shown as in FIG. 4.
  • As the point of view and the angle of view change, the corresponding voxels within the visible range will also change.
  • Voxels closer to the observer need to be rendered with more details, which means that when the position of the observer moves, the levels of detail that need to be rendered for the various voxels within the visible range also change.
  • The above method for rendering a point cloud scene can be applied to rendering dense point cloud maps covering more than 300 kilometers, with total file sizes at the terabyte level. This cannot be done with existing memory-based rendering solutions, whereas the above-mentioned method for rendering a point cloud scene can achieve an interactive-level rendering response.
  • In one example configuration, the size of the LRU cache is set to 2 GB, the number of voxels is limited to 2000, the side length of a voxel is set to 50 meters, and the number of levels of detail is set to 5.
  • With this configuration, the rendering frame rate (frames per second, FPS) for operations that trigger the replacement of levels of detail, such as zooming and rotation, reaches more than 30 FPS, so that there is no sense of stuttering; translation, in turn, triggers cache replacement.
  • An embodiment of the present application further provides a device for rendering a point cloud scene, as shown in the embodiments below. Since the principle by which the device for rendering a point cloud scene solves the problem is similar to that of the method for rendering a point cloud scene, the implementation of the device can refer to the implementation of the method, and repeated details will not be described again.
  • the term “unit” or “module” may implement a combination of software and/or hardware that achieves a predetermined function.
  • Although the devices described in the embodiments below are preferably implemented in software, implementation in hardware or in a combination of software and hardware is also possible and conceivable.
  • FIG. 5 is a structural block diagram of a device for rendering a point cloud scene according to an embodiment of the present application, as shown in FIG. 5, which may comprise: an acquisition module 501, a determination module 502, a storage module 503, a processing module 504, and a rendering module 505.
  • the structure will be described below.
  • the acquisition module 501 may be configured to obtain a first point of view, a first angle of view and origin coordinates of all voxels in the target point cloud scene.
  • the determination module 502 may be configured to determine voxels in a visible area according to the first point of view, the first angle of view, and the origin coordinates of all the voxels in the target scene.
  • the storage module 503 may be configured to store pieces of point cloud data corresponding to the voxels in the visible area into a cache.
  • the processing module 504 may be configured to separately add the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail.
  • the rendering module 505 may be configured to render the target point cloud scene according to the pieces of point cloud data in the various levels of detail.
  • the above device for rendering a point cloud scene may further comprise: a voxelized segmentation unit configured to perform voxelized segmentation on the target point cloud scene to obtain multiple voxels in the target point cloud scene; an acquisition unit configured to acquire the origin coordinates of the various voxels in the target point cloud scene; a storage unit configured to separately store pieces of point cloud data corresponding to the various voxels in the target point cloud scene to obtain point cloud files corresponding to the various voxels; and an index file creation unit configured to create an index file according to the origin coordinates of the various voxels in the target point cloud scene, wherein the index file comprises a correlation between the point cloud files corresponding to the various voxels and the origin coordinates of the various voxels in the target point cloud scene.
  • Embodiments of the present application also provide an electronic apparatus.
  • the electronic apparatus may specifically comprise an input apparatus 61, a processor 62, and a memory 63.
  • the input apparatus 61 may specifically be configured to input the first point of view, the first angle of view, and the origin coordinates of all voxels in the target point cloud scene.
  • the processor 62 may be specifically configured to determine the voxels in the visible area according to the first point of view, the first angle of view, and the origin coordinates of all voxels in the target point cloud scene; store the pieces of point cloud data corresponding to the voxels in the visible area into the cache; separately add the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail; and render the target point cloud scene according to the pieces of point cloud data in the various levels of detail.
  • the memory 63 may be specifically configured to store parameters such as the first point of view, the first angle of view, and the origin coordinates of all voxels in the target point cloud scene.
  • the input apparatus may specifically be one of the main devices for information exchange between a user and a computer system.
  • the input apparatus may comprise a keyboard, a mouse, a camera, a scanner, a light pen, a handwriting input board, a voice input device, etc.
  • the input apparatus is configured to input raw data and programs that are used to process these pieces of data into the computer.
  • the input apparatus may also acquire and receive data transmitted from other modules, units and apparatuses.
  • the processor may be implemented in any suitable way.
  • the processor may take the form of, for example, a microprocessor or a processor and a computer-readable medium storing computer-readable program codes (such as software or firmware) executable by the (micro) processor, a logic gate, a switch, an application specific integrated circuit (ASIC) , a programmable logic controller and an embedded microcontroller, etc.
  • the memory may specifically be a memory apparatus for storing information in modern information technology.
  • the memory may comprise multiple levels. In a digital system, anything that can store binary data may be called a memory.
  • a circuit that has a storage function without a physical form is also called a memory, such as a RAM, a FIFO, etc.
  • a storage apparatus with a physical form is also called a memory, such as a memory stick, a TF card, etc.
  • the embodiments of the present application also provide a computer storage medium based on a method for rendering a point cloud scene.
  • the computer storage medium stores computer program instructions that, when executed, can implement: acquiring a first point of view, a first angle of view, and origin coordinates of all voxels in a target point cloud scene; determining voxels in a visible area according to the first point of view, the first angle of view, and the origin coordinates of all the voxels in the target point cloud scene; storing the pieces of point cloud data corresponding to the voxels in the visible area into the cache; separately adding the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail; and rendering the target point cloud scene according to the pieces of point cloud data in the various levels of detail.
  • the above storage medium includes but is not limited to a random access memory (RAM) , a read-only memory (ROM) , a cache, a hard disk (HDD) or a memory card.
  • the memory may be configured to store computer program instructions.
  • a network communication unit may be an interface configured to perform network connection communication and is set according to a standard prescribed by a communication protocol.
  • FIG. 7 depicts a schematic diagram of blocks of a method for rendering a point cloud scene according to a further example in accordance with the present disclosure.
  • a point cloud scene is divided into a plurality of voxels, each voxel including point cloud data.
  • voxelized segmentation may be performed on an entire point cloud scene.
  • Each voxel obtained after the segmentation may be stored as a separate point cloud file, and all voxels and their correlation with their original positions in the complete point cloud scene may be recorded through an index file.
  • a first point of view, a first angle of view and origin coordinates of voxels in a target point cloud scene are acquired.
  • voxels in a visible area are determined according to the first point of view, the first angle of view and the origin coordinates of voxels in a target point cloud scene.
  • the index file of the point cloud scene may be loaded first. By parsing the origin coordinates of all voxels in the index file, the precise size of the point cloud scene and the positions of various voxels in the complete point cloud scene can be obtained. Therefore, no matter what position the observer is in the point cloud scene, it is possible to calculate which voxels are in the observer’s field of view according to the origin coordinates of the various voxels.
  • scene culling may be performed in advance at the voxel level in the point cloud scene according to the observer’s angle of view and point of view position, to cull voxels that are not within the observer’s field of view, so as to reduce rendering overheads and improve scene loading efficiency, thereby relieving the storage pressure.
  • point cloud data corresponding to voxels in the visible area is stored into a cache, wherein point cloud data corresponding to voxels outside the visible area is not stored in the cache.
  • Storing the point cloud data may include dividing the point cloud data in each voxel in the visible area into a plurality of subsets and storing each subset separately.
  • pieces of point cloud data corresponding to all voxels in the visible area may be loaded into the cache.
  • It may then be determined whether the storage space in the cache is filled. If it is not filled, then cache pre-loading may be carried out; that is, the selection extends beyond the visible range to load the point cloud data corresponding to unloaded voxels nearby. If the amount of point cloud data corresponding to all voxels in the visible area exceeds the upper limit of the amount of data that can be stored in the cache, then the voxel data that is relatively far away from the initial point of view, among the voxels in the visible area, may be discarded. That is, the voxels to be loaded into the cache can be determined based on a priority queue of distances from the point of view.
  • the pieces of point cloud data corresponding to various voxels in the cache can be loaded into multiple levels of detail. For example, for L levels of detail, the points within each voxel are randomly divided into L subsets. Each subset is stored in the cache separately.
  • a target point cloud scene is rendered at a determined level of detail by rendering each voxel using a number of subsets according to the determined level of detail.
  • the number of levels of detail to be used can be determined according to the position information of the observer, and the point cloud scene rendering can be performed according to the determined number of levels of detail and the point cloud data loaded in multiple levels of detail, i.e. multiple subsets.
  • the various modules or blocks of the present disclosure described in the embodiments of the present application can be implemented by a general-purpose computing device that can be centralized on a single computing device or distributed across a network formed by multiple computing devices.
  • they may be implemented by program codes executable by the computing device, such that they may be stored in a storage device and executed by the computing device, and in some cases, the blocks shown or described may be performed in a sequence different from the sequence described herein, or they may be respectively fabricated into individual integrated circuit modules, or multiple modules or blocks thereof may be implemented as a single integrated circuit module.
  • the embodiments of the present application are not limited to any specific combination of hardware and software.
  • the machine readable instructions may be stored on a non-transitory computer readable medium, which when executed by a processor cause the processor to perform any of the above methods.
  • Although the present application provides the operation blocks of the method as described in the above embodiments or flowcharts, more or fewer operation blocks may be included in the method based on conventional or non-creative labor. For blocks that do not logically have a necessary cause-and-effect relationship, the execution sequence of these blocks is not limited to the execution sequence provided in the embodiments of the present application.
  • When the method is executed in an actual device or terminal product, it may be executed sequentially or in parallel according to the method shown in the embodiments or the drawings (for example, in a parallel processor or a multi-threaded processing environment).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

A method and an apparatus for rendering a point cloud scene are provided. A point cloud scene is rendered by acquiring a first point of view, a first angle of view, and origin coordinates of voxels in a target point cloud scene (S101). Voxels in a visible area are determined according to the first point of view, the first angle of view, and the origin coordinates of the voxels in the target point cloud scene (S102). Point cloud data corresponding to the voxels in the visible area is stored into a cache (S103), and the point cloud scene is rendered at a level of detail based on the stored point cloud data (S105).

Description

METHOD, DEVICE, AND APPARATUS FOR RENDERING POINT CLOUD SCENE

TECHNICAL FIELD
The present application relates to the technical field of data processing, and in particular to a method, device, and apparatus for rendering a point cloud scene.
BACKGROUND
In technologies such as autonomous driving and surveying and mapping, three-dimensional scene data captured by an apparatus such as a laser radar is often expressed in the form of a point cloud. A point cloud is a data set comprising a large number of points in a three-dimensional space. As there is no topological connection relationship between any two points in point cloud data, this can make it difficult to understand the semantics (meaning) of data in a point cloud scene (for example which points relate to which objects). This presents a challenge when rendering a point cloud scene. In this context a point cloud scene is a scene represented by data in the point cloud, and rendering means to display the data in a visual format from the point of view of an observer. For example, the point cloud may be rendered from the point of view of an observer at a specified location. Although in many cases a point cloud comprises radar data or data other than visual data, the point cloud may be rendered in visual form. Therefore, rendering a point cloud scene may be considered to be a form of three-dimensional image processing.
Point clouds are often used in sensor calibration, obstacle recognition and tracking, semantic target recognition, map positioning and other aspects of autonomous driving technology. They may, for example, be used for calibration, sensing, and navigation. Therefore, improving the efficiency and capability of point cloud scene rendering is of great significance in many fields related to autonomous driving.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings described herein are used to provide a further understanding of the present application, which constitute a part of the present application, and do not constitute a limitation on the present application. In the drawings:
FIG. 1 is a schematic diagram of blocks of a method for rendering a point cloud scene according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the method for rendering a point cloud scene according to a specific embodiment of the present application;
FIG. 3 is a schematic diagram of a storage structure according to a specific embodiment of the present application;
FIG. 4 is a schematic diagram of a change process of a point of view and an angle of view according to a specific embodiment of the present application;
FIG. 5 is a schematic structural diagram of a device for rendering a point cloud scene according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an apparatus for rendering a point cloud scene according to an embodiment of the present application; and
FIG. 7 is a schematic diagram of blocks of a method for rendering a point cloud scene according to an embodiment of the present application.
DETAILED DESCRIPTION
The principle and spirit of the present application will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are provided merely to enable those skilled in the art to better understand and thus to implement the present application, rather than limiting the scope of the present application by any means. Rather, these embodiments are provided to make the disclosure of the present application more thorough and complete, and to fully convey the scope of the present disclosure to those skilled in the art.
Those skilled in the art know that the embodiments of the present application may be implemented as a system, device, apparatus, method, or computer program product. Therefore, the disclosure of the present application can be specifically implemented in the following forms, namely: complete hardware, complete software (comprising firmware, resident software, microcode, etc. ) , or a combination of hardware and software.
One approach to improving the efficiency and capability of point cloud scene rendering is known as level of detail (LOD) technology. In level of detail technology, multiple copies are generated for an object in a target scene when the scene is being constructed and drawn. Each copy is referred to as a level, and each level includes a three-dimensional model of the object. Each level has a different degree of detail. When an object is placed within the visible range of the scene, as the object in the scene moves away from an observer, a lower level of detail will replace a higher level of detail. When the observer gets close to a certain object, the number of other objects in his/her field of view will be reduced, so more details about the observed object can be shown. When the observer moves away from a certain object, there will be more other objects coming into view, so fewer details about each object are shown. Thus, when the observer is further away, more objects are shown at a lower level of detail; and when the observer is closer, fewer objects are shown at a higher level of detail. This level of detail approach may ensure smooth rendering by maintaining this relationship so that the complexity of the entire scene within the visible range is maintained at a relatively constant level. Furthermore, for objects that are far away, the observer will not be visually aware of the reduction in details about objects.
However, when a very large point cloud scene is rendered, because a point cloud lacks topological connection relationships between points, there are no clear object boundaries. Therefore, no matter what the size and spatial scale of the point cloud are, without any semantic processing the entire point cloud scene is treated as a single object by the data processing system. As a result, at each level of detail, all point cloud data included in that level of detail is loaded for the entire point cloud, which consumes a large amount of processor power and/or memory, making it difficult or impossible to render the point cloud scene efficiently.
A point cloud file which stores point cloud data usually contains a set of three-dimensional coordinates and data fields corresponding to the points in the point cloud. There may be no topological connection relationship between any two points in the point cloud data. Therefore, in a conventional approach, when a point cloud scene is loaded, the entire point cloud file is read from a storage medium into the general memory of a computer and/or the memory of a video card. If the point cloud file to be read exceeds the amount of point cloud data that can be stored in the memory, the point cloud data in the point cloud scene cannot be completely loaded, and the point cloud scene cannot be rendered. Therefore, this approach to point cloud scene rendering may not be able to cope with large point clouds.
Accordingly, a first aspect of the present disclosure proposes a method for rendering a point cloud scene, comprising:
acquiring a first point of view, a first angle of view, and origin coordinates of voxels in a target point cloud scene;
determining voxels in a visible area according to the first point of view, the first angle of view, and the origin coordinates of all the voxels in the target point cloud scene;
storing pieces of point cloud data corresponding to the voxels in the visible area into a cache;
separately adding the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail; and
rendering the target point cloud scene according to the pieces of point cloud data in the various levels of detail.
The visible area may be an area or range of the point cloud which is visible to the observer. The first point of view and the first angle of view may be a location and a viewing angle of the observer, respectively. The ‘target’ point cloud scene is the point cloud scene which is to be rendered, which may for example be a view of the point cloud from the perspective of the observer.
In some examples, acquiring the origin coordinates of the voxels may comprise acquiring the origin coordinates of all the voxels in the target point cloud scene. In some examples, determining the origin coordinates of the voxels may comprise determining the origin coordinates of all the voxels in the target point cloud scene.
In some examples, each piece of point cloud data may correspond to a respective voxel in the visible area. In some examples a cache may be a memory which is dedicated to providing a processor with relatively quick access to data which may be frequently used. A cache may have a relatively small size in order to maintain speed of access, which means that the capacity of the cache may be limited. The cache may for example be a cache memory of a computing device which may be accessible by a processor which is configured to perform the above described method and render the point cloud scene.
Separately adding the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail may include dividing each piece of point cloud data into multiple subsets of point cloud data and storing each subset separately, wherein each subset corresponds to a respective level of detail. Rendering a target point cloud scene according to the pieces of point cloud data in the various levels of detail may comprise rendering the target point cloud scene at a particular level of detail by using subsets of point cloud data corresponding to said particular level of detail and subsets corresponding to levels of detail which are lower than said particular level of detail. For instance, if there are five levels of detail and it is determined to render the point cloud scene at a third level of detail, then subsets of point cloud data corresponding to the first level of detail, second level of detail and third level of detail may be used. However, if it is determined to render the point cloud scene at a second level of detail, then subsets of point cloud data corresponding to the first level of detail and second level of detail may be used. In one example, the level of detail to be rendered for a voxel may be determined based on a distance between the observer and the voxel, for instance a distance between the first point of view and the origin coordinates of the voxel.
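By way of non-limiting illustration, the following Python sketch shows how a level of detail might be chosen for a voxel from the observer-to-voxel distance, and how the cumulative subsets could then be gathered for rendering. The distance thresholds, data structures, and function names are assumptions made for the example, not part of the described method.

```python
import numpy as np

def choose_level(distance, thresholds=(50.0, 100.0, 200.0, 400.0)):
    """Choose how many subsets to render for a voxel.

    With 5 levels of detail, a voxel closer than 50 scene units is
    rendered with all 5 subsets, one between 50 and 100 units with 4
    subsets, and so on. The thresholds are illustrative assumptions.
    """
    num_levels = len(thresholds) + 1
    for i, t in enumerate(thresholds):
        if distance < t:
            return num_levels - i
    return 1  # farthest voxels use only the coarsest subset

def points_for_rendering(subsets, level):
    """Concatenate subsets 1..level of a voxel's point cloud data."""
    return np.concatenate(subsets[:level], axis=0)

# Example: a voxel 120 units away would be drawn at level 3,
# using its first three subsets.
```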
A second aspect of the present disclosure provides a method for rendering a point cloud scene, comprising:
dividing the point cloud scene into a plurality of voxels, each voxel including point cloud data;
acquiring a first point of view, a first angle of view, and origin coordinates of voxels in a target point cloud scene;
determining voxels in a visible area according to the first point of view, the first angle of view, and the origin coordinates of the voxels in the target point cloud scene;
storing point cloud data corresponding to voxels in the visible area into a cache, wherein point cloud data corresponding to voxels outside the visible area is not stored in the cache;
wherein storing the point cloud data includes dividing the point cloud data in each voxel in the visible area into a plurality of subsets and storing each subset separately; and
rendering a target point cloud scene at a determined level of detail by rendering each voxel using a number of subsets according to the determined level of detail. The determined level of detail may be a level of detail selected from among a plurality of levels of detail. In some examples, the level of detail may be determined separately for each voxel, for instance based on a distance of the observer from the voxel.
A third aspect of the present disclosure provides an apparatus for rendering a point cloud scene according to the first or second aspect of the present disclosure. The apparatus may comprise a processor and a memory for storing instructions executable by the processor, wherein the instructions, when executed by the processor, implement the method of the first or second aspect of the present disclosure.
A fourth aspect of the present disclosure provides a non-transitory computer readable storage medium storing machine readable instructions which are executable by a processor to perform the method of the first or second aspect of the present disclosure.
Further aspects and features of the present disclosure are provided in the dependent claims.
In the above described approaches, the first point of view, the first angle of view, and the origin coordinates of voxels in the target point cloud scene are acquired, so that voxels lying within the visible area can be determined based on the first point of view, the first angle of view, and the origin coordinates of the voxels in the target point cloud scene. Thus, pieces of point cloud data corresponding to the voxels in the visible area can be stored into the cache, while pieces of point cloud data corresponding to voxels outside the visible area are not stored in the cache, thus saving memory resources. Furthermore, in some examples, the pieces of point cloud data corresponding to the voxels in the visible area in the target point cloud scene can be loaded in blocks with a voxel as a unit, without the need to load all point cloud data in the target point cloud scene in one go, which may effectively improve the data reading efficiency, enabling the point cloud data required for rendering to be acquired efficiently. Further, the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache are separately added to the multiple levels of detail, and the target point cloud scene is rendered using the pieces of point cloud data corresponding to the voxels in the visible area at the determined level of detail. The amount of data processing may thereby be reduced, enabling the rendering to be carried out more efficiently using fewer processing resources.
Exemplary embodiments of the present disclosure will now be described with reference to the diagrams.
FIG. 1 shows an example method for rendering a point cloud scene. The method may comprise the following blocks:
S101: acquiring a first point of view, a first angle of view, and origin coordinates of a plurality of voxels in a target point cloud scene. For example, origin coordinates of all voxels in the target point cloud scene may be acquired.
Since there is usually at least one observer or observation apparatus in the target point cloud scene, the first point of view, the first angle of view, and the origin coordinates of all the voxels in the target point cloud scene can be obtained first. The target point cloud scene may be a three-dimensional scene, composed of a point cloud, that needs to be rendered. The rendered target point cloud scene may be used by an autonomous driving system for obstacle recognition, road recognition, and/or to produce a map. In other examples, the rendered point cloud scene may be used in a computer game or simulation.
The first point of view and the first angle of view can be an initial point of view and an initial angle of view of the observer or the observation apparatus in the target point cloud scene. The first point of view can be used to characterize a position of the observer or observation apparatus in the target point cloud scene, and the first angle of view can be used to characterize the included angle formed at the point of view by the light rays drawn from the ends (upper, lower, left, or right) of an object when the object is observed.
The observation apparatus may include, but is not limited to, at least one of the following: a laser radar, a contact scanner, a structured-light scanner, a triangulation scanner, a stereo camera, a time-of-flight camera, etc. The term voxel is an abbreviation of volume pixel, and a voxel may be a minimum unit in the segmentation of digital data in a three-dimensional space. A voxel-containing cube can be represented through stereoscopic rendering or by extraction of polygonal isosurfaces of a given threshold profile.
Since there is no topological connection relationship between any two points in point cloud data, reading complete point cloud data when a point cloud scene is loaded not only takes more time, but may also result in a failure to load all the point cloud data needed for rendering due to the memory capacity limitation. Therefore, in one example, voxelized segmentation is performed on a complete target point cloud scene, and the space where the target point cloud scene is located may be divided into multiple voxels. Each side length of a voxel may be set according to the situation or specific implementation, and each side length direction of a voxel may be parallel to one of the three axes of the coordinate system of the space of the target point cloud scene. The voxels obtained through segmentation may be cubes, cuboids, or other shapes. The present application is not limited to any particular shape of voxel.
After obtaining multiple voxels of the target point cloud scene, the origin coordinates of various voxels obtained through segmentation in the target point cloud scene can be further acquired. The origin coordinates of the various voxels in the target point cloud scene can be used to characterize positions of the various voxels in the entire target point cloud scene. The above origin may be the point in the voxels that is closest to the origin of the coordinate system of the space of the target point cloud scene, or may be the geometric center point of a voxel or the intersection of the length, width, and height axes in the voxel, and may be specifically determined according to the actual situation, which is not limited in the present application.
In one embodiment, pieces of point cloud data corresponding to the various voxels in the target point cloud scene are separately stored to obtain point cloud files corresponding to the various voxels. That is, any point in the target point cloud scene belongs to a voxel in terms of position, and the points belonging to the same voxel are grouped together and stored in a corresponding point cloud file, so that the point cloud data in the target point cloud scene can be read and loaded in blocks.
In order to facilitate searching and reading of point cloud data corresponding to any voxel in the target point cloud scene, in one embodiment, an index file can be created according to the origin coordinates of the various voxels in the target point cloud scene, wherein the index file comprises a correlation between the point cloud files corresponding to the various voxels and the origin coordinates of the various voxels in the target point cloud scene. The correlation can be embodied in the form of a table, a key-value pair, etc. Specifically, the method for establishing the correlation can be determined according to the actual situation, which is not limited in the present application. Therefore, a voxel at any position in the target point cloud scene and a point cloud file corresponding to the voxel can be intuitively determined by means of the index file.
In a specific implementation process, when the pieces of point cloud data corresponding to the various voxels in the target point cloud scene are separately stored to obtain the point cloud files corresponding to the various voxels, a piece of point cloud data corresponding to a target voxel and the origin coordinate of the target voxel in the target point cloud scene can be acquired, and the coordinates of the various points in the point cloud data corresponding to the target voxel can be converted into offset values relative to the origin coordinate of the target voxel in the target point cloud scene. The origin coordinate of the target voxel in the target point cloud scene and the offset values of the various points relative to that origin coordinate can then be stored to obtain a point cloud file corresponding to the target voxel, so that the amount of data stored in each point cloud file in the target point cloud scene can be reduced, leaving storage space to store more point cloud files.
Correspondingly, when the point cloud data in a point cloud file is loaded or read, the offsets of the various points in the point cloud file need to be restored to the original coordinates of the various points in the target point cloud scene. The origin coordinate, in the target point cloud scene, of the voxel corresponding to the point cloud file can be acquired according to the index file, and the relative offsets of the various points are superimposed on that origin coordinate to perform a translation transformation, such that the original coordinates of the various points in the target point cloud scene can be restored.
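As a non-limiting illustration, the following Python sketch shows how per-voxel offset storage and restoration might be implemented. The index layout, the file names, and all function names are assumptions made for the example rather than a required format.

```python
import numpy as np

VOXEL_SIDE = 50.0  # illustrative voxel side length R, in scene units

def voxel_origin(point, side=VOXEL_SIDE):
    """Origin coordinate of the voxel containing `point`:
    gamma(a, R) = floor(a / R) * R applied per axis."""
    return np.floor(point / side) * side

def to_offsets(points, origin):
    """Convert absolute coordinates to offsets relative to the voxel origin."""
    return points - origin

def from_offsets(offsets, origin):
    """Restore original scene coordinates by superimposing the voxel
    origin (translation transformation)."""
    return offsets + origin

# Illustrative index: voxel origin (as a tuple) -> point cloud file path.
index = {(0.0, 0.0, 0.0): "voxel_000.pcd",
         (50.0, 0.0, 0.0): "voxel_001.pcd"}
```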
S102: determining voxels in a visible area according to the first point of view, the first angle of view, and the origin coordinates of all the voxels in the target point cloud scene.
Since it is possible to determine the specific position of the observer or observation apparatus in the target point cloud scene according to the first point of view and the first angle of view, and since the specific positions of the various voxels in the target point cloud scene can be determined according to the origin coordinates of all the voxels as described above, it is possible to determine which of all the voxels lie in the visible area of the observer or observation apparatus according to the first point of view, the first angle of view, and the origin coordinates of all the voxels in the target point cloud scene. That is, scene culling can be performed in advance at the voxel level in the point cloud scene according to the first point of view and the first angle of view, to cull voxels beyond the visible area of the observer or observation apparatus, so as to reduce rendering overheads and improve the scene loading efficiency on the basis of loading point cloud data in blocks.
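A minimal sketch of voxel-level culling is given below, assuming a simple angular test of each voxel origin against the view direction; a real renderer might instead test voxel bounding boxes against a full view frustum. All names and the field-of-view parameter are illustrative assumptions.

```python
import numpy as np

def visible_voxels(origins, viewpoint, view_dir, fov_deg=90.0):
    """Return indices of voxels whose origins lie within the angle of view.

    origins:   (N, 3) array of voxel origin coordinates
    viewpoint: (3,) first point of view
    view_dir:  (3,) unit vector of the viewing direction
    """
    to_voxel = origins - viewpoint
    dist = np.linalg.norm(to_voxel, axis=1)
    dist[dist == 0.0] = 1e-9                 # guard against division by zero
    cos_angle = (to_voxel @ view_dir) / dist
    half_fov = np.deg2rad(fov_deg) / 2.0
    return np.nonzero(cos_angle >= np.cos(half_fov))[0]
```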
S103: storing pieces of point cloud data corresponding to the voxels in the visible area into a cache.
After the voxels in the visible area are determined, point cloud files corresponding to the various voxels in the visible area can be acquired according to the origin coordinates of the voxels in the visible area in the target point cloud scene and the index file, so that pieces of point cloud data corresponding to the various voxels in the visible area can be acquired and stored into the cache.
In some examples, compared with a normal memory or a magnetic disk, a cache may have a higher data reading efficiency and a shorter data response time. In some examples, a cache may be close to a processor and may have low latency. Therefore, the use of the cache to store the pieces of point cloud data corresponding to the various voxels in the visible area can reduce the data transmission and reading time, thereby improving processing efficiency. The cache can reserve a fixed area of the memory for storing point cloud data and metadata corresponding to voxels. In addition, the cache can implement an array for indexing and managing the voxel objects loaded into the data area, with each element in the array pointing to the point cloud data corresponding to a loaded voxel.
In a specific implementation process, the cache can store the pieces of point cloud data corresponding to all the voxels in the visible area by using a replacement strategy based on a priority queue of a least recently used (LRU) algorithm. LRU is a type of page replacement algorithm in which data is eliminated and arranged according to the historical access record of the data. The core idea thereof is: if data has been accessed recently, the probability of it being accessed in the future is also higher. Using LRU to store point cloud data, the data in the cache can be more conveniently and effectively replaced when the first point of view and the first angle of view change. For example, suppose the observer starts at a position A in the target point cloud scene, moves continuously through another position B, and reaches a destination C, where A is closer to C than B is. Because B has been observed more recently than A, if LRU is used in this case, the probability of returning to B next is considered higher than that of returning to A, and the data for A is therefore given priority for replacement, even though A is spatially closer.
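The following Python sketch illustrates one possible LRU cache for per-voxel point cloud data, built on OrderedDict. The capacity accounting (a number of voxels rather than bytes) and all names are simplifying assumptions.

```python
from collections import OrderedDict

class VoxelLRUCache:
    """LRU cache keyed by voxel origin; values are point cloud arrays."""

    def __init__(self, capacity):
        self.capacity = capacity          # max number of voxels held
        self._data = OrderedDict()

    def get(self, origin):
        """Fetch a voxel's points and mark it as most recently used."""
        points = self._data.pop(origin, None)
        if points is not None:
            self._data[origin] = points   # re-insert at the MRU end
        return points

    def put(self, origin, points):
        """Insert a voxel, evicting the least recently used one if full."""
        if origin in self._data:
            self._data.pop(origin)
        elif len(self._data) >= self.capacity:
            self._data.popitem(last=False)  # evict the LRU entry
        self._data[origin] = points
```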
After the pieces of point cloud data corresponding to the voxels in the visible area are stored into the cache, in one embodiment, the amount of data that can be stored in the cache and the amount of point cloud data corresponding to the voxels in the visible area may be acquired first. It may then be determined whether the amount of point cloud data corresponding to the voxels in the visible area is greater than the amount of data that can be stored in the cache. When it is determined that the amount of point cloud data corresponding to the voxels in the visible area is greater than the amount of data that can be stored in the cache, that is, when it exceeds the upper limit of the amount of data that can be stored in the cache, voxels in the visible area that are farthest from the first point of view may be removed from the cache.
Further, it may then be determined again whether the amount of point cloud data that needs to be stored exceeds the upper limit of the amount of data that can be stored in the cache. If the upper limit is still exceeded, corresponding voxels may continue to be removed according to the above principle of “farthest from the point of view” until the amount of point cloud data that needs to be stored no longer exceeds the upper limit of the amount of data that can be stored in the cache.
In one embodiment, when it is determined that the amount of point cloud data corresponding to the voxels in the visible area is not greater than the amount of data that can be stored in the cache, it may be further determined whether the amount of point cloud data corresponding to the voxels in the visible area is less than the amount of data that can be stored in the cache. If the two amounts are equal, there is no need to perform operations such as removing or adding voxels. If it is determined that the amount of point cloud data corresponding to the voxels in the visible area is less than the amount of data that can be stored in the cache, voxels outside the visible area that are adjacent to the first point of view are stored into the cache until the amount of point cloud data stored in the cache reaches the amount of data that can be stored in the cache. That is, voxels outside the visible area, taken in ascending order of distance from the first point of view, are sequentially stored into the cache until the amount of point cloud data stored in the cache reaches the amount of data that can be stored in the cache.
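The following sketch illustrates this capacity balancing in Python: evicting the farthest visible voxels when over budget, and preloading the nearest out-of-view voxels when spare capacity remains. The byte-count accounting and every name here are assumptions for illustration only.

```python
import numpy as np

def balance_cache(cache, visible, outside, viewpoint, capacity_bytes):
    """cache: dict origin -> points (NumPy arrays); visible/outside:
    lists of (origin, points); viewpoint: (3,) NumPy array."""
    def dist(origin):
        return np.linalg.norm(np.asarray(origin) - viewpoint)

    used = sum(p.nbytes for p in cache.values())

    # Over budget: drop the visible voxels farthest from the point of view.
    for origin, _ in sorted(visible, key=lambda v: dist(v[0]), reverse=True):
        if used <= capacity_bytes:
            break
        if origin in cache:
            used -= cache.pop(origin).nbytes

    # Under budget: preload the nearest voxels outside the visible area.
    for origin, points in sorted(outside, key=lambda v: dist(v[0])):
        if used + points.nbytes > capacity_bytes:
            break
        cache[origin] = points
        used += points.nbytes
    return cache
```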
S104: separately adding the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail.
In one embodiment, level of detail (LOD) technology can be used to improve the efficiency and capability of point cloud scene rendering. Levels of detail describe the same scene with different precisions, and the working principle is: when the point of view is close to an object, rich details of the model can be observed, and when the point of view is far from the model, the observed details become gradually blurred. The system selects the appropriate details to display at the right time, so as to avoid the time wasted by loading and rendering details that contribute relatively little, thus effectively balancing picture continuity against resolution. Therefore, the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache can be separately added to multiple levels of detail, and different levels of detail retain pieces of point cloud data with different degrees of detail for the various voxels.
In one example, the number of levels of detail can be determined first, wherein the number of levels can be a positive integer greater than or equal to 1, for example: 4, 5, etc., which can be specifically determined according to the actual situation and is not limited in the present application. After determining the number of levels of detail, the pieces of point cloud data corresponding to the various voxels in the visible area can be separately and randomly divided to obtain multiple subsets, wherein the number of subsets obtained by randomly dividing the point cloud data corresponding to each of the voxels is equal to the number of levels of detail. In some examples, the point cloud data may be divided into multiple subsets using a hash or other algorithm instead of randomly dividing into multiple subsets.
Further, the multiple subsets obtained by dividing the pieces of point cloud data corresponding to the various voxels in the visible area can be separately added to multiple levels of detail. For example, when the number of levels of detail is 5, the point cloud data corresponding to voxel 1 in the visible area can be randomly divided to obtain 5 subsets (subset 1, subset 2, subset 3, subset 4, subset 5) corresponding to voxel 1, and subset 1 is added to the first level of detail, subset 2 is added to the second level of detail, subset 3 is added to the third level of detail, subset 4 is added to the fourth level of detail, and subset 5 is added to the fifth level of detail. Correspondingly, different levels of detail retain pieces of point cloud data with different degrees of detail for the various voxels. When the i-th level of detail is used, all the pieces of point cloud data in the first to the i-th subsets will be loaded accordingly.
S105: rendering the target point cloud scene according to the pieces of point cloud data in the various levels of detail.
After separately adding the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail, the target point cloud scene can be rendered according to the pieces of point cloud data in the various levels of detail, wherein voxels closer to the point of view can be rendered with relatively more levels of detail to show more details of the observed object.
In one embodiment, for the entire point cloud scene, when the observer gets closer to a certain object, the number of other objects in his/her field of view is naturally reduced, so more details about the observed object can be shown; and when the observer moves away from a certain object, more other objects come into view, so fewer details about each object can be shown. Therefore, the number of levels of detail to be used for the various voxels in the visible area can be determined according to the first point of view and the first angle of view, and the point cloud scene is rendered according to the determined number of levels of detail to be used for the various voxels in the visible area and the pieces of point cloud data in the multiple levels of detail.
After rendering the target point cloud scene, it is possible to continue to determine whether the first point of view or the first angle of view has changed or moved. When it is determined that the first point of view or the first angle of view moves, it is possible to take a first point of view obtained after movement as a second point of view, and take a first angle of view obtained after movement as a second angle of view. The voxels in the visible area can be re-determined according to the second point of view, the second angle of view and the origin coordinates of all voxels, and the pieces of point cloud data stored in the cache can be modified according to the re-determined voxels in the visible area.
The result of the movement of the first point of view and/or the first angle of view falls into two cases: zooming of the angle of view only, or voxel replacement. That is, when a change in the first point of view or the first angle of view causes only zooming of the angle of view, no voxels within the visible range are newly added; and when the first point of view or the first angle of view changes such that some new voxels enter the visible area and some voxels that were originally in the visible area exit the field of view, voxel replacement occurs.
In a specific implementation process, when a change in the first point of view or the first angle of view causes only zooming of the angle of view, it is only necessary to adjust the number of levels of detail to be used by the various voxels, without modifying the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache, and the target point cloud scene is re-rendered according to the number of levels of detail adjusted for the various voxels.
When the first point of view or the first angle of view changes, so that some new voxels enter the visible area, and some voxels that were originally in the visible area exit the field of view, it is necessary to re-determine the voxels in the visible area according to the second point of view, the second angle of view and the origin coordinates of all the voxels, and load the pieces of point cloud data corresponding to the voxels newly entering the visible area into the cache.
If the amount of data stored in the cache has reached the upper limit, it is determined whether any voxel has exited the visible area. If there is a voxel that has exited the visible area, then, among the voxels that have exited the visible area, a voxel whose distance from the point of view is greater than that of the other voxels stored in the cache is selected, and the voxel that has been out of the field of view for the longest time is culled from the cache. In one embodiment, a harmonic function may be used to combine the most recently observed time and the distance from the point of view, giving each voxel in the cache a current comprehensive score, and the voxel to be replaced is selected based on the score. This strategy may be called a combination of LRU with a priority queue based on distances from the point of view. In one embodiment, voxels that have exited the visible area may be continuously deleted from the cache based on the above strategy until all the pieces of point cloud data corresponding to the new voxels that have entered the view can be loaded into the cache. Correspondingly, when a voxel exits the visible area, the point cloud data corresponding to the voxel is unloaded from the levels of detail to free up storage space for storing pieces of point cloud data corresponding to voxels within the visible range.
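One way such a combined score could look is sketched below in Python. The harmonic-mean-style weighting of recency and distance is only an assumption about the unspecified harmonic function, and all names are illustrative.

```python
import time

def eviction_score(last_seen, distance, now=None, eps=1e-6):
    """Combine recency and distance into one score (higher = evict first).

    Blends the time since the voxel was last in view with its distance
    from the point of view via a harmonic mean; the exact form is an
    assumption about the unspecified scoring function.
    """
    now = time.monotonic() if now is None else now
    age = now - last_seen                 # seconds since last observed
    return 2.0 / (1.0 / (age + eps) + 1.0 / (distance + eps))

def pick_victim(voxels):
    """voxels: dict origin -> (last_seen, distance); returns the origin
    of the voxel to evict next."""
    now = time.monotonic()
    return max(voxels, key=lambda o: eviction_score(*voxels[o], now=now))
```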
From the above description, it can be seen that the embodiments of the present application achieve the following technical effects. The first point of view, the first angle of view, and the origin coordinates of all voxels in the target point cloud scene are acquired, so that the voxels in the visible area can be determined according to the first point of view, the first angle of view, and the origin coordinates of all voxels in the target point cloud scene. The pieces of point cloud data corresponding to the voxels in the visible area are stored into the cache, so that the pieces of point cloud data corresponding to the voxels in the visible area in the target point cloud scene can be loaded in blocks with a voxel as a unit, without the need to load all point cloud data in the target point cloud scene at one time. This effectively improves the data reading efficiency, such that the point cloud data required for rendering can be efficiently acquired. Further, the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache are separately added to the multiple levels of detail, and the target point cloud scene is rendered according to the pieces of point cloud data in the various levels of detail, so that detail drawing can be performed according to the pieces of point cloud data corresponding to the voxels in the visible area when the target point cloud scene is rendered at a given level of detail. This effectively reduces the amount of data processing, such that point cloud scene rendering can be performed efficiently and smoothly.
The above method will be described below in conjunction with a specific embodiment. However, it is worth noting that this specific embodiment is merely for better description of the present application, and does not constitute an undue limitation on the present application.
An embodiment of the present disclosure provides a method for rendering a point cloud scene, as shown in FIG. 2, which may comprise:
Block 1: voxelized segmentation of the point cloud scene.
Typically, the file format for storing a point cloud describes the point cloud as a record consisting of a set of three-dimensional coordinates with data fields. When a point cloud scene needs to be loaded, the file is completely read from the storage medium into the computer’s general memory and graphics card memory. If the point cloud file to be read is too large and the data cannot be completely loaded, the point cloud scene cannot be rendered. Therefore, in one embodiment, it is possible to perform voxelized segmentation on a complete point cloud scene. Each voxel obtained after the segmentation is stored as a separate point cloud file, and all voxels and their correlation with their original positions in the complete point cloud scene are recorded through an index file.
Correspondingly, it is possible to design a storage structure which comprises an index file and several data files (point cloud files). This structure appropriately cuts a complete point cloud scene, so that the point cloud scene can be loaded in blocks and the data amount of the point cloud scene can be appropriately compressed. The storage structure is shown in FIG. 3, where the index file is on the left side and the associated point cloud files are on the right side. “Intensity: 25”, “Label: GROUND”, etc., in the table in FIG. 3 are examples of data attributes of points corresponding to coordinate offsets in the point cloud files. The algorithm for generating a corresponding index file and several point cloud files from the point cloud scene of a single file is as follows:
Input: point cloud $C = \{p_i = (x_i, y_i, z_i, d_i)\}$, $i \in [1, N]$, and voxel side length $R > 0$, where $p_i = (x_i, y_i, z_i, d_i)$ represents the $i$-th point in the point cloud model, with $(x_i, y_i, z_i)$ its coordinates and $d_i$ its data attribute; the data attribute may include, but is not limited to, at least one of the following: intensity, reflectivity, etc.
Output: index file $I$ and point cloud files $\{D_j\}$.
Process:
1) Segmentation. From the above point cloud $C$, generate
$$\tilde{C} = \{(o_i, \; p_i - o_i, \; d_i)\}, \quad i \in [1, N],$$
where $o_i = (\gamma(x_i, R), \gamma(y_i, R), \gamma(z_i, R))$ and
$$\gamma(a, R) = \lfloor a / R \rfloor \cdot R.$$
Here $o_i$ is the origin coordinate, in the point cloud scene, of the voxel obtained after $p_i$ is voxelized with voxel side length $R$, and $p_i - o_i$ is the coordinate offset (relative coordinate) of the point with respect to the origin coordinate of its voxel. The function $\gamma(a, R)$ floors $a / R$ and multiplies the result by $R$, i.e. it returns the largest integer multiple of $R$ not greater than $a$; $a$ is a variable.
2) Voxelization. The elements of $\tilde{C}$ are aggregated according to $o_i$ to obtain a voxelized point cloud scene
$$V = \{V_j\}, \quad V_j = \big(o_j, \; \{(p_{jk} - o_j, \; d_{jk})\}\big),$$
where $V_j$ represents the $j$-th voxel of $V$, and $p_{jk}$ and $d_{jk}$ are respectively a point belonging to $V_j$ and its corresponding data attribute. $\tilde{C}$ is a collection of all $p_i$ and their corresponding $o_i$ in the point cloud, each $p_i$ corresponding to one $o_i$. After aggregation, the points $p_i$ sharing the same $o_i$ are grouped together to form a voxel $V_j$, and each $V_j$ thus corresponds to one origin coordinate $o_j$.
3) Storage. An index file $I = \langle O, R \rangle = \langle \{o_j \to D_j\}, R \rangle$ is stored. That is, the point cloud data corresponding to each voxel $V_j$ is stored in its respective point cloud file $D_j = \{(p_{jk} - o_j, \; d_{jk})\}$. The index file contains the origin coordinates $o_j$ of all voxels, and $o_j \to D_j$ is the correlation between the origin coordinates of the voxels and their corresponding point cloud files. In order to be able to restore the coordinate offsets in a data file $D_j$ to the original coordinates of the points, the voxel side length $R$ is also stored in the index file.
It can be concluded that the space where a complete point cloud scene is located is cut into a dense array of cubes with side length R, each cube is called a voxel, and the side length directions of the voxels are parallel to the three axes of the coordinate system of the point cloud scene space. Any point in the point cloud scene belongs to a voxel in terms of position, and points that belong to the same voxel are grouped together and stored in a corresponding point cloud file. In the above index file, the origin coordinates of the voxels in the above complete point cloud scene and the address indexes of their corresponding point cloud files are saved. In a point cloud file, the coordinate of a point is converted into an offset relative to the origin coordinate of its voxel in the complete point cloud scene according to the above formula, so that the amount of data in the point cloud scene can be appropriately compressed, saving more data bits for numerical precision. When the point cloud data in a point cloud file is loaded or read, it is necessary to restore the offsets to the original coordinates of the points in the voxel corresponding to the point cloud file. In this case, a record of the corresponding voxel needs to be retrieved from the index file, and the relative offsets of all points in the voxel are superimposed on the origin coordinate of the voxel in the above complete point cloud scene to perform a translation transformation.
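A compact Python sketch of this segmentation, voxelization, and storage step is given below; the in-memory index in place of an on-disk file, the file-name scheme, and all function names are assumptions made for illustration.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, attrs, R):
    """Segment a point cloud into voxels and build an index.

    points: (N, 3) array of coordinates; attrs: length-N sequence of data
    attributes; R: voxel side length. Returns (index, files): index maps
    each voxel origin o_j to a point cloud file name and also records the
    side length R needed to restore original coordinates; files maps file
    names to lists of (offset, attribute) records, i.e. (p_jk - o_j, d_jk).
    """
    origins = np.floor(points / R) * R            # gamma(a, R) per axis
    grouped = defaultdict(list)
    for p, o, d in zip(points, origins, attrs):
        grouped[tuple(o)].append(((p - o).tolist(), d))
    index = {"R": R, "origins": {}}
    files = {}
    for k, (origin, records) in enumerate(grouped.items()):
        name = f"voxel_{k:06d}.pcd"               # hypothetical file name
        index["origins"][origin] = name
        files[name] = records
    return index, files
```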
Block 2: loading of pieces of point cloud data corresponding to all voxels in the visible area into the cache according to the observer’s angle of view and point of view.
The index file of the point cloud scene is loaded first. By parsing the origin coordinates of all voxels in the index file, the precise size of the point cloud scene and the positions of various voxels in the complete point cloud scene can be obtained. Therefore, no matter what position the observer is in the point cloud scene, it is possible to calculate which voxels are in the observer’s field of view according to the origin coordinates of the various voxels. In one embodiment, scene culling can be performed in advance at the voxel level in the point cloud scene according to the observer’s angle of view and point of view position, to cull voxels that are not within the observer’s field of view, so as to reduce rendering overheads and improve scene loading efficiency, thereby relieving the storage pressure.
During initialization, according to the observer’s initial angle of view and initial point of view, the pieces of point cloud data corresponding to all voxels in the visible area are loaded into the cache, wherein the cache uses a replacement strategy based on a priority queue of the least recently used (LRU) algorithm to store the pieces of point cloud data corresponding to all voxels in the visible area. As noted above, LRU is a type of page replacement algorithm in which data is eliminated and arranged according to the historical access record of the data, the core idea being that if data has been accessed recently, the probability of it being accessed in the future is also higher.
Further, it can be determined whether the storage space in the cache is filled. If it is not filled, cache pre-loading can be carried out, that is, the selection extends beyond the visible range to the point cloud data corresponding to nearby unloaded voxels. If the amount of point cloud data corresponding to all voxels in the visible area exceeds the upper limit of the amount of data that can be stored in the cache, then the voxel data that is relatively far away from the initial point of view among the voxels in the visible area is discarded. That is, the voxels to be loaded into the cache can be determined based on a priority queue of distances from the point of view.
Block 3: level of detail construction.
The pieces of point cloud data corresponding to the various voxels in the cache can be loaded into multiple levels of detail. Specifically, for L levels of detail, the points within each voxel are randomly divided into L subsets, wherein the number of points in the i-th subset accounts for 2i / [L × (L + 1)] of the total number of points contained in the voxel; summed over i = 1, ..., L, these fractions add up to 1. In one embodiment, it is possible to determine how many levels of detail need to be used according to the distance from the observer to the voxel. When the i-th level of detail is used, the points in the first to the i-th subsets will be loaded accordingly.
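A small Python sketch of this weighted random split is shown below. The function name and use of NumPy are illustrative assumptions; the subset fractions follow the stated 2i/[L × (L + 1)] rule.

```python
import numpy as np

def split_into_subsets(points, L=5, seed=0):
    """Randomly split a voxel's points into L subsets, where subset i
    (1-based) holds a fraction 2*i / (L*(L+1)) of the points."""
    rng = np.random.default_rng(seed)
    shuffled = points[rng.permutation(len(points))]
    fractions = np.array([2 * i / (L * (L + 1)) for i in range(1, L + 1)])
    # Cumulative counts give the boundaries between consecutive subsets.
    bounds = np.round(np.cumsum(fractions) * len(points)).astype(int)[:-1]
    return np.split(shuffled, bounds)

# With L = 5, the subsets hold roughly 1/15, 2/15, 3/15, 4/15 and 5/15
# of the voxel's points; rendering at level i uses subsets 1..i.
```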
Block 4: rendering.
The number of levels of detail to be used can be determined according to the position information of the observer, and the point cloud scene rendering can be performed according to the determined number of levels of detail and the point cloud data loaded in multiple levels of detail.
For the entire point cloud scene, when the observer gets closer to a certain object, the number of other objects in his/her field of view is naturally reduced, so more details about the observed object can be shown, and when the observer moves away from a certain object, more other objects come into view, so fewer details about each object can be shown. Therefore, when position movement of the observer causes only zooming of the angle of view, it is only necessary to adjust the number of levels of detail to be used, without modifying the voxels stored in the cache, and the point cloud scene is then re-rendered according to the adjusted number of levels of detail.
However, in some embodiments, when the position of the observer moves, some voxels will enter the field of view and some voxels that were originally in the visible range will exit the field of view. Due to the change of the angle of view, the pieces of point cloud data corresponding to the new voxels entering the field of view are loaded into the cache. When the storage space of the cache is full, then, among the voxels that have exited the field of view, a voxel whose distance from the point of view is greater than that of the other voxels stored in the cache is selected, and the voxel that has been out of the field of view for the longest time is culled from the cache. This strategy is a combination of LRU and a priority queue based on distances from the point of view. Voxels may be continuously deleted from the cache based on the above strategy until all the pieces of point cloud data corresponding to the new voxels that have entered the view can be loaded into the cache. Correspondingly, when the position of the observer moves, if an original voxel in the cache is replaced, the corresponding point cloud data is unloaded from the levels of detail to free up storage space for voxels within the visible range.
The above process is illustrated in FIG. 4. From the changing process of the point of view and the angle of view in FIG. 4, it can be seen that when the position of the observer moves, the corresponding voxels within the visible range also change. Among the voxels within the visible range, voxels closer to the observer need to be rendered with more details, which means that when the position of the observer moves, the levels of detail that need to be rendered for the various voxels within the visible range also change.
In a practical application, in a location point cloud map verification tool, the above method for rendering a point cloud scene has been applied to rendering point cloud maps covering over 300 kilometers, with dense point clouds whose total file size is at the terabyte level. This cannot be done with existing memory-based rendering solutions, whereas the above-described method for rendering a point cloud scene can achieve an interactive-level rendering response. The size of the LRU cache is set to 2 GB, the number of voxels is limited to 2000, the side length of each voxel is set to 50 meters, and the number of levels of detail is set to 5. In this configuration, the rendering frame rate (frames per second, FPS) of operations that trigger the replacement of levels of detail, such as zooming and rotation, reaches more than 30 FPS, so that there is no sense of stuttering, while translation triggers cache replacement. When the amount of translation of the angle of view is not large, the LRU strategy comes into play, and the cache replacement and re-rendering occur within 1 second; a full LRU refresh is triggered when the angle of view changes greatly, which takes 5 to 10 seconds.
Based on the same concept, an embodiment of the present application further provides a device for rendering a point cloud scene, as described in the embodiments below. Since the principle by which the device for rendering a point cloud scene solves the problem is similar to that of the method for rendering a point cloud scene, the implementation of the device can refer to the implementation of the method, and repeated description will be omitted. As used below, the term “unit” or “module” may implement a combination of software and/or hardware that achieves a predetermined function. Although the devices described in the embodiments below are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and conceived. FIG. 5 is a structural block diagram of a device for rendering a point cloud scene according to an embodiment of the present application. As shown in FIG. 5, the device may comprise: an acquisition module 501, a determination module 502, a storage module 503, a processing module 504, and a rendering module 505. The structure will be described below.
The acquisition module 501 may be configured to obtain a first point of view, a first angle of view and origin coordinates of all voxels in the target point cloud scene.
The determination module 502 may be configured to determine voxels in a visible area according to the first point of view, the first angle of view, and the origin coordinates of all the voxels in the target point cloud scene.
The storage module 503 may be configured to store pieces of point cloud data corresponding to the voxels in the visible area into a cache.
The processing module 504 may be configured to separately add the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail.
The rendering module 505 may be configured to render the target point cloud scene according to the pieces of point cloud data in the various levels of detail.
In one embodiment, the above device for rendering a point cloud scene may further comprise: a voxelized segmentation unit configured to perform voxelized segmentation on the target point cloud scene to obtain multiple voxels in the target point cloud scene; an acquisition unit configured to acquire the origin coordinates of the various voxels in the target point cloud scene; a storage unit configured to separately store pieces of point cloud data corresponding to the various voxels in the target point cloud scene to obtain point cloud files corresponding to the various voxels; and an index file creation unit configured to create an index file according to the origin coordinates of the various voxels in the target point cloud scene, wherein the index file comprises a correlation between the point cloud files corresponding to the various voxels and the origin coordinates of the various voxels in the target point cloud scene.
Embodiments of the present application also provide an electronic apparatus. Specific reference can be made to FIG. 6, a schematic diagram of the composition structure of an electronic apparatus based on a method for rendering a point cloud scene provided by an embodiment of the present application. The electronic apparatus may specifically comprise an input apparatus 61, a processor 62, and a memory 63. The input apparatus 61 may specifically be configured to input the first point of view, the first angle of view, and the origin coordinates of all voxels in the target point cloud scene. The processor 62 may be specifically configured to determine the voxels in the visible area according to the first point of view, the first angle of view, and the origin coordinates of all voxels in the target point cloud scene; store the pieces of point cloud data corresponding to the voxels in the visible area into the cache; separately add the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail; and render the target point cloud scene according to the pieces of point cloud data in the various levels of detail. The memory 63 may be specifically configured to store parameters such as the first point of view, the first angle of view, and the origin coordinates of all voxels in the target point cloud scene.
In this embodiment, the input apparatus may specifically be one of the main devices for information exchange between a user and a computer system. The input apparatus may comprise a keyboard, a mouse, a camera, a scanner, a light pen, a handwriting input board, a voice input device, etc.; the input apparatus is configured to input raw data, and the programs that process these pieces of data, into the computer. The input apparatus may also acquire and receive data transmitted from other modules, units, and apparatuses. The processor may be implemented in any suitable way. For example, the processor may take the form of a microprocessor, or a processor together with a computer-readable medium storing computer-readable program codes (such as software or firmware) executable by the (micro)processor, a logic gate, a switch, an application specific integrated circuit (ASIC), a programmable logic controller, an embedded microcontroller, etc. The memory may specifically be a memory apparatus for storing information in modern information technology. The memory may comprise multiple levels. In a digital system, anything that can store binary data may be called a memory. In an integrated circuit, a circuit that has a storage function but no physical form is also called a memory, such as a RAM or a FIFO. In a system, a storage apparatus having a physical form is also called a memory, such as a memory stick or a TF card.
In this embodiment, the functions and effects specifically implemented by the electronic apparatus can be explained in comparison with other embodiments, and will not be repeated herein.
The embodiments of the present application also provide a computer storage medium based on a method for rendering a point cloud scene. The computer storage medium stores computer program instructions that, when executed, can implement: acquiring a first point of view, a first angle of view, and origin coordinates of all voxels in a target point cloud scene; determining voxels in a visible area according to the first point of view, the first angle of view, and the origin coordinates of all the voxels in the target point cloud scene; storing the pieces of point cloud data corresponding to the voxels in the visible area into the cache; separately adding the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail; and rendering the target point cloud scene according to the pieces of point cloud data in the various levels of detail.
In this embodiment, the above storage medium includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a cache, a hard disk drive (HDD), or a memory card. The memory may be configured to store computer program instructions. A network communication unit may be an interface configured to perform network connection communication, set according to a standard prescribed by a communication protocol.
In this embodiment, the functions and effects specifically implemented by the program instructions stored in the computer storage medium can be explained in comparison with other embodiments, and will not be repeated herein.
FIG. 7 depicts a schematic diagram of blocks of a method for rendering a point cloud scene according to a further example in accordance with the present disclosure.
At block S701, a point cloud scene is divided into a plurality of voxels, each voxel including point cloud data.
By way of example, voxelized segmentation may be performed on an entire point cloud scene. Each voxel obtained after the segmentation may be stored as a separate point cloud file, and all voxels and their correlation with their original positions in the complete point cloud scene may be recorded through an index file.
At block S702, a first point of view, a first angle of view and origin coordinates of voxels in a target point cloud scene are acquired.
At block S703, voxels in a visible area are determined according to the first point of view, the first angle of view and the origin coordinates of voxels in a target point cloud scene.
The index file of the point cloud scene may be loaded first. By parsing the origin coordinates of all voxels in the index file, the precise size of the point cloud scene and the positions of various voxels in the complete point cloud scene can be obtained. Therefore, no matter what position the observer is in the point cloud scene, it is possible to calculate which voxels are in the observer’s field of view according to the origin coordinates of the various voxels.
Further, scene culling may be performed in advance at the voxel level in the point cloud scene according to the observer’s angle of view and point of view position, to cull voxels that are not within the observer’s field of view, so as to reduce rendering overheads and improve scene loading efficiency, thereby relieving the storage pressure.
At block S704, point cloud data corresponding to voxels in the visible area is stored into a cache, wherein point cloud data corresponding to voxels outside the visible area is not stored in the cache. Storing the point cloud data may include dividing the point cloud data in each voxel in the visible area into a plurality of subsets and storing each subset separately.
By way of example, during initialization, according to the observer’s initial angle of view and initial point of view, pieces of point cloud data corresponding to all voxels in the visible area may be loaded into the cache.
Further, it can be determined whether the storage space in the cache is filled. If it is not filled, cache pre-loading may be carried out, that is, the selection extends beyond the visible range to the point cloud data corresponding to nearby unloaded voxels. If the amount of point cloud data corresponding to all voxels in the visible area exceeds the upper limit of the amount of data that can be stored in the cache, then the voxel data that is relatively far away from the initial point of view among the voxels in the visible area may be discarded. That is, the voxels to be loaded into the cache can be determined based on a priority queue of distances from the point of view.
The pieces of point cloud data corresponding to various voxels in the cache can be loaded into multiple levels of detail. For example, for L levels of detail, the points within each voxel are randomly divided into L subsets. Each subset is stored in the cache separately.
At block S705, a target point cloud scene is rendered at a determined level of detail by rendering each voxel using a number of subsets according to the determined level of detail.
By way of example, the number of levels of detail to be used can be determined according to the position information of the observer, and the point cloud scene rendering can be performed according to the determined number of levels of detail and the point cloud data loaded in multiple levels of detail, i.e. multiple subsets.
It will be apparent to a person skilled in the art that the various modules or blocks of the present disclosure described in the embodiments of the present application can be implemented by a general-purpose computing device, and they can be centralized on a single computing device or distributed across a network formed by multiple computing devices. Optionally, they may be implemented by program codes executable by a computing device, such that they may be stored in a storage device and executed by the computing device; in some cases, the blocks shown or described may be performed in a sequence different from the sequence described herein. Alternatively, they may be respectively fabricated into individual integrated circuit modules, or multiple modules or blocks thereof may be implemented as a single integrated circuit module. In this way, the embodiments of the present application are not limited to any specific combination of hardware and software. In some examples, machine readable instructions may be stored on a non-transitory computer readable medium, which, when executed by a processor, cause the processor to perform any of the above methods.
Although the present application provides operation blocks of the method as described in the above embodiments or flowcharts, more or fewer operation blocks may be included in the method based on conventional or non-creative labor. For blocks that do not logically have a necessary cause-and-effect relationship, the execution sequence of these blocks is not limited to the execution sequence provided in the embodiments of the present application. When the method is executed in an actual device or terminal product, it may be executed sequentially or in parallel according to the method shown in the embodiments or the drawings (for example, in a parallel-processor or multi-threaded processing environment).
It should be understood that the above description is for illustration and not for limitation. From reading the above description, many embodiments and many applications beyond the provided examples will be apparent to those skilled in the art. Therefore, the scope of the present application shall not be determined by reference to the above description, but shall be determined by reference to the appended claims and the full scope of their equivalents.
The foregoing description is merely illustrative of the preferred embodiments of the present application and is not intended to limit the present application, and various changes and modifications in the present application may be made by those skilled in the art. Any modifications, equivalent replacements, and improvements made without departing from the spirit and principle of the present application shall fall within the protection scope of the present application.

Claims (18)

  1. A method for rendering a point cloud scene, comprising:
    acquiring a first point of view, a first angle of view, and origin coordinates of voxels in a target point cloud scene;
    determining voxels in a visible area according to the first point of view, the first angle of view, and the origin coordinates of the voxels in the target point cloud scene;
    storing pieces of point cloud data corresponding to the voxels in the visible area into a cache;
    separately adding the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail; and
    rendering the target point cloud scene according to the pieces of point cloud data in the various levels of detail.
  2. The method according to claim 1, wherein before said acquiring a first point of view, a first angle of view, and origin coordinates of voxels in a target point cloud scene, the method further comprises:
    performing voxelized segmentation on the target point cloud scene to obtain multiple voxels in the target point cloud scene;
    acquiring the origin coordinates of the various voxels in the target point cloud scene;
    separately storing pieces of point cloud data corresponding to the various voxels in the target point cloud scene to obtain point cloud files corresponding to the various voxels; and
    creating an index file according to the origin coordinates of the various voxels in the target point cloud scene, wherein the index file comprises a correlation between the point cloud files corresponding to the various voxels and the origin coordinates of the various voxels in the target point cloud scene.
  3. The method according to claim 2, wherein said separately storing pieces of point cloud data corresponding to the various voxels in the target point cloud scene to obtain point cloud files corresponding to the various voxels comprises:
    acquiring a piece of point cloud data corresponding to a target voxel and an origin coordinate of the target voxel in the target point cloud scene;
    converting coordinates of various points in the point cloud data corresponding to the target voxel into offset values relative to the origin coordinate of the target voxel in the target point cloud scene; and
    storing the origin coordinate of the target voxel in the target point cloud scene and the offset values of the various points relative to the origin coordinate of the target voxel in the target point cloud scene to obtain a point cloud file corresponding to the target voxel.
  4. The method according to claim 3, wherein before said storing pieces of point cloud data corresponding to the voxels in the visible area into a cache, the method further comprises:
    acquiring the index file;
    determining point cloud files corresponding to the voxels in the visible area according to the index file and the origin coordinates of the voxels in the visible area in the target point cloud scene; and
    determining the pieces of point cloud data corresponding to the voxels in the visible area according to offset values of the various points in the point cloud files relative to the origin coordinates of the voxels in the target point cloud scene.
  5. The method according to claim 1, wherein said separately adding the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail comprises:
    determining the number of levels of detail;
    separately and randomly dividing the pieces of point cloud data corresponding to the various voxels in the visible area to obtain multiple subsets, wherein the number of subsets obtained by randomly dividing the point cloud data corresponding to each of the voxels is equal to the number of levels of detail; and
    separately adding the multiple subsets obtained by randomly dividing the pieces of point cloud data corresponding to the various voxels in the visible area to multiple levels of detail.
  6. The method according to claim 1, wherein after said storing pieces of point cloud data corresponding to the voxels in the visible area into a cache, the method further comprises:
    determining whether the amount of point cloud data corresponding to the voxels in the visible area is greater than the amount of data that can be stored in the cache; and
    removing from the cache, when it is determined that the amount of point cloud data corresponding to the voxels in the visible area is greater than the amount of data that can be stored in the cache, voxels in the visible area that are farthest from the first point of view.
  7. The method according to claim 6, wherein after said determining whether the amount of point cloud data corresponding to the voxels in the visible area is greater than the amount of data that can be stored in the cache, the method further comprises:
    determining, when it is determined that the amount of point cloud data corresponding to the voxels in the visible area is not greater than the amount of data that can be stored in the cache, whether the amount of point cloud data corresponding to the voxels in the visible area is less than the amount of data that can be stored in the cache; and
    storing, when the amount of point cloud data corresponding to the voxels in the visible area is less than the amount of data that can be stored in the cache, voxels outside the visible area that are adjacent to the first point of view into the cache, until the amount of point cloud data stored in the cache reaches the amount of data that can be stored in the cache.
  8. The method according to claim 1, wherein after said rendering the target point cloud scene, the method further comprises:
    determining whether the first point of view or the first angle of view moves;
    taking, when it is determined that the first point of view or the first angle of view moves, a first point of view obtained after movement as a second point of view, and taking a first angle of view obtained after movement as a second angle of view;
    re-determining the voxels in the visible area according to the second point of view, the second angle of view, and the origin coordinates of all the voxels; and
    modifying, according to the re-determined voxels in the visible area, point cloud data stored in the cache.
  9. The method according to claim 1, wherein said storing pieces of point cloud data corresponding to the voxels in the visible area into a cache comprises: storing pieces of point cloud data corresponding to the voxels in the visible area into the cache by using a least recently used algorithm.
  10. A device for rendering a point cloud scene, comprising:
    an acquisition module configured to acquire a first point of view, a first angle of view, and origin coordinates of all voxels in a target point cloud scene;
    a determination module configured to determine voxels in a visible area according to the first point of view, the first angle of view, and the origin coordinates of all the voxels in the target point cloud scene;
    a storage module configured to store pieces of point cloud data corresponding to the voxels in the visible area into a cache;
    a processing module configured to separately add the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail; and
    a rendering module configured to render the target point cloud scene according to the pieces of point cloud data in the various levels of detail.
  11. The device according to claim 10, further comprising:
    a voxelized segmentation unit configured to perform voxelized segmentation on the target point cloud scene to obtain multiple voxels in the target point cloud scene;
    an acquisition unit configured to acquire the origin coordinates of the various voxels in the target point cloud scene;
    a storage unit configured to separately store pieces of point cloud data corresponding to the various voxels in the target point cloud scene to obtain point cloud files corresponding to the various voxels; and
    an index file creation unit configured to create an index file according to the origin coordinates of the various voxels in the target point cloud scene, wherein the index file comprises a correlation between the point cloud files corresponding to the various voxels and the origin coordinates of the various voxels in the target point cloud scene.
  12. The method of any of claims 1-9 wherein acquiring the origin coordinates of the voxels comprises acquiring the origin coordinates of all the voxels in the target point cloud scene and determining the origin coordinates of the voxels comprises determining the origin coordinates of all the voxels in the target point cloud scene.
  13. The method of any of claims 1-9 or 12 wherein each piece of point cloud data corresponds to a respective voxel.
  14. The method of any of claims 1-9 or 12-13 wherein separately adding the pieces of point cloud data corresponding to the voxels in the visible area that are stored in the cache to multiple levels of detail comprises dividing each piece of point cloud data into multiple subsets of point cloud data and storing each subset separately, wherein each subset corresponds to a respective level of detail.
  15. The method of any of claims 1-9 or 12-14 wherein rendering the target point cloud scene according to the pieces of point cloud data in the various levels of detail comprises rendering the target point cloud scene at a selected level of detail by using subsets of point cloud data corresponding to the selected level of detail and subsets corresponding to levels of detail which are lower than the selected level of detail.
  16. A method for rendering a point cloud scene, comprising:
    dividing the point cloud scene into a plurality of voxels, each voxel including point cloud data;
    acquiring a first point of view, a first angle of view, and origin coordinates of voxels in a target point cloud scene;
    determining voxels in a visible area according to the first point of view, the first angle of view, and the origin coordinates of the voxels in the target point cloud scene;
    storing point cloud data corresponding to voxels in the visible area into a cache, wherein point cloud data corresponding to voxels outside the visible area is not stored in the cache;
    wherein storing the point cloud data includes dividing the point cloud data in each voxel in the visible area into a plurality of subsets and storing each subset separately; and
    rendering the target point cloud scene at a selected level of detail by rendering each voxel using a number of subsets according to the selected level of detail.
  17. An apparatus for rendering a point cloud scene, comprising a processor and a memory for storing instructions executable by the processor, wherein the instructions, when executed by the processor, implement the method of any one of claims 1-9 or 12-16.
  18. A non-transitory computer readable storage medium storing instructions which are executable by a processor to perform the method of any one of claims 1-9 or 12-16.
PCT/CN2020/098284 2019-11-25 2020-06-24 Method, device, and apparatus for rendering point cloud scene WO2021103513A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911164820.3 2019-11-25
CN201911164820.3A CN111179394A (en) 2019-11-25 2019-11-25 Point cloud scene rendering method, device and equipment

Publications (1)

Publication Number Publication Date
WO2021103513A1 (en) 2021-06-03

Family

ID=70650050

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098284 WO2021103513A1 (en) 2019-11-25 2020-06-24 Method, device, and apparatus for rendering point cloud scene

Country Status (2)

Country Link
CN (1) CN111179394A (en)
WO (1) WO2021103513A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179394A (en) * 2019-11-25 2020-05-19 苏州智加科技有限公司 Point cloud scene rendering method, device and equipment
CN111617480A (en) * 2020-06-04 2020-09-04 珠海金山网络游戏科技有限公司 Point cloud rendering method and device
CN111968211A (en) * 2020-08-28 2020-11-20 北京睿呈时代信息科技有限公司 Memory, and drawing method, system and equipment based on point cloud data
CN112492385A (en) * 2020-09-30 2021-03-12 中兴通讯股份有限公司 Point cloud data processing method and device, storage medium and electronic device
CN113239216B (en) * 2021-04-09 2024-08-16 广东南方数码科技股份有限公司 Point cloud processing method, device, equipment and storage medium
CN113486276A (en) * 2021-08-02 2021-10-08 北京京东乾石科技有限公司 Point cloud compression method, point cloud rendering method, point cloud compression device, point cloud rendering equipment and storage medium
CN113689533A (en) * 2021-08-03 2021-11-23 长沙宏达威爱信息科技有限公司 High-definition modeling cloud rendering method
CN115086502B (en) * 2022-06-06 2023-07-18 中亿启航数码科技(北京)有限公司 Non-contact scanning device
CN114756798B (en) * 2022-06-13 2022-10-18 中汽创智科技有限公司 Point cloud rendering method and system based on Web end and storage medium
CN115269763B (en) * 2022-09-28 2023-02-10 北京智行者科技股份有限公司 Local point cloud map updating and maintaining method and device, mobile tool and storage medium
WO2024130737A1 (en) * 2022-12-23 2024-06-27 京东方科技集团股份有限公司 Image rendering method and apparatus, and electronic device
CN118411495A (en) * 2023-12-25 2024-07-30 深圳迈嘉城科信息科技有限公司 Concurrent visualization method, system, terminal and storage medium for three-dimensional scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10803561B2 (en) * 2017-06-02 2020-10-13 Wisconsin Alumni Research Foundation Systems, methods, and media for hierarchical progressive point cloud rendering

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104867174A (en) * 2015-05-08 2015-08-26 腾讯科技(深圳)有限公司 Three-dimensional map rendering and display method and system
WO2018122087A1 (en) * 2016-12-28 2018-07-05 Thomson Licensing Method and device for joint segmentation and 3d reconstruction of a scene
WO2018148924A1 (en) * 2017-02-17 2018-08-23 深圳市大疆创新科技有限公司 Method and device for reconstructing three-dimensional point cloud
WO2019055772A1 (en) * 2017-09-14 2019-03-21 Apple Inc. Point cloud compression
CN110070613A (en) * 2019-04-26 2019-07-30 东北大学 Large-scale three dimensional scene web page display method based on model compression and asynchronous load
CN111179394A (en) * 2019-11-25 2020-05-19 苏州智加科技有限公司 Point cloud scene rendering method, device and equipment

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947659A (en) * 2021-09-09 2022-01-18 广州南方卫星导航仪器有限公司 Stable display rendering method and system for three-dimensional laser mass point cloud
CN114049256A (en) * 2021-11-09 2022-02-15 苏州中科全象智能科技有限公司 Uniform downsampling method based on online spliced point cloud
CN114049256B (en) * 2021-11-09 2024-05-14 苏州中科全象智能科技有限公司 Uniform downsampling method based on online splice point cloud
CN114708369A (en) * 2022-03-15 2022-07-05 荣耀终端有限公司 Image rendering method and electronic equipment
CN115205434A (en) * 2022-09-16 2022-10-18 中汽创智科技有限公司 Visual processing method and device for point cloud data
CN115984827A (en) * 2023-03-06 2023-04-18 安徽蔚来智驾科技有限公司 Point cloud sensing method, computer device and computer readable storage medium
CN115984827B (en) * 2023-03-06 2024-02-02 安徽蔚来智驾科技有限公司 Point cloud sensing method, computer equipment and computer readable storage medium
CN116385571A (en) * 2023-06-01 2023-07-04 山东矩阵软件工程股份有限公司 Point cloud compression method and system based on multidimensional dynamic variable resolution
CN116385571B (en) * 2023-06-01 2023-09-15 山东矩阵软件工程股份有限公司 Point cloud compression method and system based on multidimensional dynamic variable resolution
CN117876556A (en) * 2024-03-13 2024-04-12 江西求是高等研究院 Incremental point cloud rendering method, system, readable storage medium and computer
CN117876556B (en) * 2024-03-13 2024-05-10 江西求是高等研究院 Incremental point cloud rendering method, system, readable storage medium and computer
CN118037924A (en) * 2024-04-10 2024-05-14 深圳市其域创新科技有限公司 Gao Sidian cloud-based rendering method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111179394A (en) 2020-05-19

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20894535; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 20894535; Country of ref document: EP; Kind code of ref document: A1)