CN117274032A - Layered and extensible new view synthesis method - Google Patents

Layered and extensible new view synthesis method

Info

Publication number
CN117274032A
Authority
CN
China
Prior art keywords
scene
blocks
hierarchy
sampling
new view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311316525.1A
Other languages
Chinese (zh)
Inventor
Lu Dingbo (陆定波)
Yang Xiaoyan (杨晓妍)
Li Yang (李洋)
Wang Changbo (王长波)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Normal University
Priority to CN202311316525.1A
Publication of CN117274032A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/60 - Memory management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a layered and extensible new view synthesis method. A scene representation based on neural radiance fields is adopted, and an adaptive mechanism is designed that divides a large scene into different levels, enabling fast modeling at low memory cost. A hierarchical framework is introduced to model the space: the scene is divided with four levels (voxels, groups, blocks, and tiles), each level designed for a different function in reconstructing the extended scene, and the hierarchical framework realizes dynamic loading and stitching of the scene, bringing a better immersive experience. Compared with the prior art, the method reconstructs and renders large indoor scenes with lower computational resource consumption, reducing memory overhead by about 50 percent and requiring only one tenth of the training steps, and achieves rendering results matching existing methods at a computational cost affordable on a consumer-grade graphics card; the method therefore has practical value.

Description

Layered and extensible new view synthesis method
Technical Field
The invention relates to the technical field of computer vision and computer graphics, and in particular to an efficient, layered, and extensible new view synthesis method.
Background
In recent years, neural radiance fields (NeRF) have made significant progress in synthesizing realistic new views. However, NeRF remains under-explored for modeling unlimited and unbounded scenes. The representation and rendering of infinite scenes plays an important role in computer graphics: the technology lets users freely and naturally explore a borderless environment and has wide application in fields such as games and entertainment.
Some research has focused on extending NeRF representations to model and render large or very large scenes, so that urban areas can be modeled and rendered from autonomous-driving datasets and even city-level maps. However, these methods have difficulty training and rendering extended scenes on limited hardware resources, and modeling infinitely extended spaces remains a challenging problem. To represent and render extended scenes on limited hardware, scene segmentation and dynamic loading are required to handle unbounded scene indexing. Explicit representations have an advantage over implicit ones here, in that they allow scenes to be segmented and edited.
Existing generation-based methods can build infinite scenes, but they cannot re-render images that closely match a real scene. Some existing NeRF methods for large-scale scenes employ dynamic loading mechanisms and in theory allow unlimited scene creation, but they incur high memory overhead and typically require multiple non-consumer GPUs, as shown in FIG. 8. Thus, expressing infinitely extended scenes in a lightweight and efficient manner remains a challenge.
Summary of the invention
The invention aims to provide a layered and extensible new view synthesis method that overcomes the defects of the prior art. A scene representation based on neural radiance fields is adopted, a hierarchical framework is introduced to model the space, and an adaptive mechanism divides a large scene into different levels using four hierarchy levels (voxels, groups, blocks, and tiles), so as to model quickly at low memory cost and to load scene blocks locally for training or rendering. Each hierarchy level is designed for a different function in reconstructing the extended scene: the tile hierarchy computes visibility, the block hierarchy realizes asynchronous dynamic loading of the scene between CPU and GPU during training, the group hierarchy focuses on improving visual quality through tensor decomposition and interpolation between voxels, and the voxel hierarchy provides a 3D prior for initialization and a clear, efficient representation of the scene geometry. With this hierarchical framework, dynamic loading and stitching of scenes can be realized, bringing a better immersive experience. The invention reconstructs and renders large indoor scenes at a computational cost affordable on a consumer-grade graphics card; the method is simple and convenient, works well in practice, and has good application prospects and commercial value.
The purpose of the invention is realized as follows: a layered and extensible new view synthesis method, characterized in that a scene representation based on neural radiance fields is adopted; an adaptive mechanism is designed that divides a large scene into different levels so as to model quickly at low memory cost; and a hierarchical framework is introduced to model the space, dividing the scene with four levels (voxels, groups, blocks, and tiles) to realize dynamic loading and stitching of the scene. The new view synthesis specifically includes the following steps:
step 1: calculate rays and sample importance
The origin and direction of the ray from the camera position through each pixel on the focal plane are calculated from the camera's intrinsic and extrinsic parameters, and two-stage sampling is performed along the ray. In the coarse sampling stage, the ray is divided into a fixed number of intervals, one sample is drawn per interval, and all positions within an interval are equally likely; in the fine sampling stage, the volume densities obtained in the coarse stage serve as sampling weights, assigning higher probability to positions where an object is more likely to occur, so that samples concentrate near those positions.
Step 2: block hierarchy for computing scene visibility
Exploiting the property that, in a closed indoor scene, the camera view intersects only a small portion of the scene, all camera trajectories are divided into groups, and for each group of cameras the visible 'blocks' form a 'tile' that is loaded onto the GPU as a whole. This simplified collision-detection scheme eliminates the need for complex collision checks on each ray and facilitates quick determination of the blocks that need to be loaded into GPU memory for rendering operations.
Step 3: dynamically loaded block hierarchy
In order to store the entire scene piecewise on the GPU and CPU, an adaptive Kd-Tree partitioning method based on the distribution of the input point cloud is used to divide the entire scene into 'blocks'. First, the axis with the largest extent among the X, Y, and Z axes is selected as the cutting boundary, and the point set is split at the median; this process is repeated until $B$ point sets are obtained, where $B$ denotes the number of blocks. By tightly representing the space near the surface, this adaptive partitioning avoids unnecessary operations on empty areas and optimizes resource utilization.
Step 4: group hierarchy based on vector matrix decomposition
4-1: dividing the "group" hierarchy in "blocks" for vector matrix decomposition, all active groups being indexed by indexThe three-dimensional space is divided into an ordered list of groups by means of a re-indexing function, and feature filling is performed around the edges of the decomposition vector.
4-2: the "group" hierarchy is partitioned in "blocks" for vector matrix decomposition,the same strategy as initializing activated voxels is used to identify activated groups, each group having a corresponding decomposition basis vectorDecomposing the basis matrix. All active groupsAccording to indexesAt the position ofOn-axis ordering and re-indexing. Wherein,is the total number of all active groups. Assuming that the position of the original group isThe radius of the group isBy pointingDivided byTo calculate a three-dimensional indexAnd one-dimensional index. The values on the X-axis, Y-axis and Z-axis are indicated by superscripts, by re-indexing the functionThe space may be divided into an ordered list of groups.
4-3: directly stitching together the combination of space and segmentation results in a sampling space discontinuity. Thus, for each component matrixSum vectorActual features are filled around the edges of the decomposition vector. If the surrounding tensors are not from the active group, then fill with their own boundary tensors. After the filling process, the liquid is filled,one voxel at each end, andboth the length and the width of (a) are increased by two voxels.
Step 5: voxel hierarchy characterizing scene geometry
A sparse initialization strategy is used to initialize the voxel hierarchy: only voxels close to the geometric surface are activated, which optimizes resource utilization by focusing on the region of interest, and a stepwise upsampling process gradually improves reconstruction quality from coarse to fine. Ray sampling and integration consider only samples within activated voxels. The activation state is determined by evaluating the intersection proportion and by calculating the overlap between current-level and previous-level voxels.
Step 6: volume rendering module
Substituting the volume density and the color of the sampling point on the ray into a volume rendering equation, constructing a volume rendering module, and calculating the color of the corresponding ray.
Compared with the prior art, the invention provides a lightweight, fast-converging framework that reconstructs extensible indoor scenes and renders new views at low computational resource consumption. It reduces memory overhead by about 50 percent and needs only one tenth of the training steps while matching the rendering results of existing methods, so that this neural-radiance-field-based method can reconstruct and render large indoor scenes at a computational cost affordable on a consumer-grade graphics card. The method is simple and convenient, works well in practice, and has good application prospects and commercial value.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a diagram of an example hierarchy of a hierarchical framework;
FIG. 3 is a schematic illustration of approximate collision detection;
FIG. 4 is an exemplary diagram of adaptive partitioning and dynamic loading;
FIG. 5 is an exemplary diagram of voxel initialization, ray sampling, and tensor decomposition;
FIG. 6 is a visual schematic of tensor population;
FIG. 7 is a graph of qualitative comparison of the present invention with the prior art;
FIG. 8 is a graph comparing the method of the present invention with the optimal prior-art method in terms of video memory, reconstruction quality, and number of training steps.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings and examples.
Referring to FIGS. 1-2, the present invention comprises the following specific steps:
step 1: calculate rays and sample importance
The origin and direction of a ray can be calculated from the camera's intrinsic parameters (focal length, principal point, and inherent properties such as lens distortion) and extrinsic parameters (viewing position and viewing direction); each ray emitted from the camera's location passes through the position of the corresponding pixel on the focal plane.
Assume that the camera's intrinsic matrix is $K$, the extrinsic matrix consists of a rotation $R$ and translation $\mathbf{t}$, and the pixel coordinates are $(u, v)$. The origin and direction of the ray can be calculated as follows:
1-1: The pixel coordinates $(u, v)$ are converted into three-dimensional coordinates $\mathbf{x}_c$ in the camera coordinate system by the following equation (a):
$\mathbf{x}_c = K^{-1}\,[u, v, 1]^{\mathsf T}$ (a).
1-2: The three-dimensional coordinates $\mathbf{x}_c$ in the camera coordinate system are converted into three-dimensional coordinates $\mathbf{x}_w$ in the world coordinate system; this conversion can be achieved by the following equation (b):
$\mathbf{x}_w = R^{\mathsf T}(\mathbf{x}_c - \mathbf{t})$ (b).
1-3: the starting point and direction of the ray are calculated, and a two-stage importance sampling method is adopted to improve the quality of the rendered image and reduce noise. In the coarse sampling phase, the ray is equally divided into 128 intervals, and an equiprobable one-time sampling is performed on each interval. In the fine sampling stage, the radiance coefficient of each sampling point is converted into a probability by using a softmax function according to the radiance coefficient obtained in the coarse sampling stage. At this time, a higher probability value is more likely to be given to the position of the object so that the vicinity of the position is more easily selected.
Step 2: block hierarchy for computing scene visibility
Due to the closed nature of indoor scenes, the camera view intersects only a small portion of the scene; during inference and rendering, the visible region occupies only a small part of the whole space. Step 2 therefore divides all camera trajectories into groups, and for each group of cameras the visible 'blocks' constitute one 'tile' that is loaded onto the GPU as a whole. The view frustum emanating from the camera origin is divided into a series of cubes, each containing part of the frustum, arranged from smallest to largest.
Referring to FIG. 3, collisions are detected between these cubes and the scene, and the extent of intersection with each block is determined. This simplified collision-detection scheme eliminates the need for complex collision checks on each ray and allows quick determination of the blocks that need to be loaded into GPU memory for rendering operations.
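The approximate collision test of FIG. 3 could be sketched as follows, assuming that both scene blocks and frustum cubes are reduced to axis-aligned bounding boxes; all names are illustrative.

```python
import numpy as np

def aabb_overlap(min_a, max_a, min_b, max_b):
    """Axis-aligned boxes intersect iff their ranges overlap on every axis."""
    return bool(np.all(max_a >= min_b) and np.all(max_b >= min_a))

def visible_blocks(frustum_cubes, block_boxes):
    """Indices of blocks hit by any cube covering the view frustum.

    frustum_cubes: list of (min, max) corner pairs, ordered small to large;
    block_boxes:   list of (min, max) corner pairs, one per scene block.
    The union of hits for one camera group is the 'tile' loaded onto the GPU.
    """
    hit = set()
    for cmin, cmax in frustum_cubes:
        for idx, (bmin, bmax) in enumerate(block_boxes):
            if aabb_overlap(cmin, cmax, bmin, bmax):
                hit.add(idx)
    return sorted(hit)
```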
Step 3: dynamically loaded block hierarchy
To store the entire scene piecewise on the GPU and CPU, the entire scene is divided into "blocks". During training and inference, these blocks can be loaded independently according to the camera's viewpoint.
Referring to FIG. 4, if the scene is divided uniformly in Euclidean space, the resulting blocks are typically unbalanced with respect to the geometric surface. To achieve a more adaptive division of the point cloud, the invention proposes an adaptive Kd-Tree partitioning method based on the input point-cloud distribution. First, the axis with the largest extent among the X, Y, and Z axes is selected as the cutting boundary, and the point set is split at the median; this process is repeated until $B$ point sets are obtained, where $B$ is a hyperparameter indicating the number of blocks. By tightly representing the space near the surface, this partitioning approach effectively avoids unnecessary operations on empty areas and optimizes resource utilization.
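A minimal sketch of this adaptive median split, assuming points is an (N, 3) array and greedily subdividing the currently largest point set until B blocks exist; the greedy ordering is an assumption.

```python
import numpy as np

def kdtree_partition(points, num_blocks):
    """Split an (N, 3) point array into num_blocks sets by recursive median cuts."""
    blocks = [points]
    while len(blocks) < num_blocks:
        largest = max(range(len(blocks)), key=lambda i: len(blocks[i]))
        pts = blocks.pop(largest)                                  # subdivide densest set
        axis = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))   # widest X/Y/Z extent
        order = np.argsort(pts[:, axis])
        mid = len(pts) // 2                                        # cut at the median
        blocks += [pts[order[:mid]], pts[order[mid:]]]
    return blocks  # dense regions get tight blocks; empty space is never subdivided
```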
Step 4: group hierarchy based on vector matrix decomposition
Referring to FIG. 5c, the invention introduces a 'group' hierarchy to represent the scene and performs vector-matrix decomposition on the voxels within each group, whose resolution is upsampled from an initial to a final value with intermediate resolutions growing exponentially. The same strategy as for initializing activated voxels is used to identify activated groups, each group having a corresponding decomposition basis vector $\mathbf{v}$ and decomposition basis matrix $\mathbf{M}$. The specific steps are as follows:
4-1: all active groupsAccording to indexesAt the position ofOn-axis ordering and re-indexingWhereinIs the total number of all active groups. Assuming that the position of the original group isThe radius of the group is. By pointingDivided byCalculating a three-dimensional index from the following (c)And one-dimensional index
(c)。
Wherein the method comprises the steps ofRepresenting a downward rounding function. Values on the X-axis, Y-axis and Z-axis are indicated by superscripts. By re-indexing the functionThe space may be divided into an ordered list of groups.
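Equation (c) in code form, as a sketch; the row-major flattening order and the grid_shape parameter are assumptions, since the surrounding text fixes only the division of the group position by its size.

```python
import numpy as np

def reindex(position, radius, grid_shape):
    """Eq. (c): I = floor(p / 2r); flatten I to a 1-D group index.

    position:   (3,) position of the original group
    radius:     half the group's edge length (2 * radius is its size)
    grid_shape: (Nx, Ny, Nz) groups per axis; row-major flattening assumed
    """
    I = np.floor(position / (2.0 * radius)).astype(int)  # three-dimensional index
    nx, ny, _ = grid_shape
    i = I[0] + I[1] * nx + I[2] * nx * ny                # one-dimensional index
    return I, i
```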
4-2: directly stitching together the combination of space and segmentation results in a discontinuous sampling space. Thus, for each component matrixSum vectorThe edges of the decomposition vector are filled with actual features around the space.
Referring to fig. 6, if the surrounding tensors are not from the active group, the filling tensor fills with its own boundary tensor. After the filling process, the liquid is filled,one voxel at each end, andboth the length and the width of (a) are increased by two voxels.
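The edge padding of step 4-2 might look as follows for one decomposition matrix; the neighbors dictionary and the edge-replication fallback follow FIG. 6, while the exact data layout is an assumption.

```python
import numpy as np

def pad_matrix(M, neighbors=None):
    """Pad a (H, W) decomposition matrix by one voxel per side.

    Sides with an active neighbor group receive that group's boundary
    features; the rest fall back to the matrix's own edge (replication).
    """
    padded = np.pad(M, 1, mode='edge')           # own-boundary fill by default
    if neighbors:                                 # e.g. {'left': (H,) feature strip}
        for side, strip in neighbors.items():
            if side == 'left':
                padded[1:-1, 0] = strip
            elif side == 'right':
                padded[1:-1, -1] = strip
            elif side == 'top':
                padded[0, 1:-1] = strip
            elif side == 'bottom':
                padded[-1, 1:-1] = strip
    return padded  # width and height each grow by two voxels
```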
4-3: within each group, a filled tensor is usedAndperforming independent tensor decomposition to give a pointCalculate density and color asAnd. Wherein,is a multi-layer perceptron for color prediction; density predictorSum color predictorThe specific calculation of (d) is defined as follows:
(d)。
wherein for each spatial axisRepresentation and representationTwo axes in orthogonal to simplify computation (e.g)。Is used for calculating multi-channel color characteristicsAnd (5) adding a feature base vector.Is an index of the current group ID for selecting the corresponding group. The final value is obtained by the outer product of the vector matrix summation.
4-4: for points not belonging to the activated group, the corresponding densityAnd colorIs set to 0, when the coordinates are to beSubtracting the corresponding groupAfter the origin of (a), local coordinates within the group are obtained. Then, feature interpolation is performed according to the local coordinates within the group. Experiments have shown that using smaller decomposition groups results in reduced performance, while larger groups require more GPU memory.
Step 5: voxel hierarchy characterizing scene geometry
Referring to FIG. 5a, a sparse initialization strategy is employed during the initialization of the voxel hierarchy, activating only voxels close to the geometric surface. This approach optimizes resource utilization by focusing on the region of interest.
Referring to FIG. 5b, ray sampling and the subsequent density integration consider only samples within activated voxels.
However, using too small an initialization voxel may lose geometric detail and harm the final result, owing to the sparsity and discontinuity inherent in point clouds. Conversely, using too coarse a voxel granularity produces a rough representation of the geometry, limiting the fidelity of the rendered image. To address these challenges, a stepwise upsampling process is employed to refine the representation from coarse to fine. Given a list of upsampling steps, the activated voxels after upsampling are obtained in two ways. First, the activation state is determined by evaluating the proportion of intersection with the input points, consistent with the initialization process. Second, among the voxels sampled at the current level, the overlap with voxels of the previous level is computed; if any overlapping voxel was active at the previous level, the current voxel is marked as active.
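A sketch of the two activation tests applied after each upsampling step, assuming boolean occupancy grids and a factor-2 upsampling; the point-fraction threshold is illustrative.

```python
import numpy as np

def upsample_active(prev_active, point_fraction, thresh=0.01):
    """Activation after one upsampling step, combining the two tests.

    prev_active:    (N, N, N) bool grid from the previous (coarser) level
    point_fraction: (2N, 2N, 2N) fraction of input points in each fine voxel
    """
    by_points = point_fraction > thresh  # test 1: intersection with input points
    by_parent = np.repeat(np.repeat(np.repeat(prev_active, 2, 0), 2, 1), 2, 2)
    return by_points | by_parent         # test 2: overlap with active parent voxel
```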
Step 6: volume rendering module
Substituting the volume density and color of the sample points along a ray into the volume rendering equation yields the color $\hat C(\mathbf{r})$ of the corresponding ray $\mathbf{r}$, represented by the following equation (e):
$\hat C(\mathbf{r}) = \sum_{i=1}^{N} T_i\,\big(1 - \exp(-\sigma_i \delta_i)\big)\,\mathbf{c}_i, \qquad T_i = \exp\Big(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Big)$ (e).
where $\delta_i$ is the spacing between the $i$-th and $(i+1)$-th points on the ray, $\mathbf{x}_i$ indicates the location of the $i$-th point on the ray, $\sigma_i$ indicates the volume density, and $\mathbf{c}_i$ represents the color.
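Equation (e) translated into a short NumPy routine; the 1e10 sentinel for the last interval is a common convention and an assumption here.

```python
import numpy as np

def render_ray(sigma, color, t_vals):
    """Eq. (e): accumulate sample colors along one ray.

    sigma:  (N,)   volume densities at the sample points
    color:  (N, 3) colors at the sample points
    t_vals: (N,)   sorted sample distances along the ray
    """
    delta = np.diff(t_vals, append=t_vals[-1] + 1e10)          # spacings delta_i
    alpha = 1.0 - np.exp(-sigma * delta)                       # per-sample opacity
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance T_i
    return ((T * alpha)[:, None] * color).sum(axis=0)          # ray color C(r)
```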
Referring to FIG. 7, GT is the ground truth and Ours is the result of the present invention. The ScanNet dataset is commonly used in indoor new view synthesis tasks. Compared with the existing optimal method, the proposed algorithm converges faster under the same number of training steps, achieves clearly better visual quality, and recovers more details in the scene, such as fragments on a table, books on a bookshelf, and pictures on a wall.
Finally, it should be emphasized that the above examples are only typical examples of the invention. It is obvious that the technical solution according to the invention is not limited to the embodiments described above, but that many other variations and modifications are possible. Various modifications and alterations to the above described embodiments may be made by those skilled in the art without departing from the novel concepts of the present invention. Thus, the scope of protection is not limited by the embodiments described above, but should be accorded the broadest scope consistent with the innovative features set forth in the claims.

Claims (5)

1. A layered and extensible new view synthesis method, characterized in that a scene representation based on neural radiance fields is adopted, an adaptive mechanism divides a large scene into different levels, a hierarchical framework is introduced to model the space, the scene is divided with four levels of voxels, groups, blocks, and tiles, fast modeling is performed at low memory cost, and dynamic loading and stitching of the scene are realized; the new view synthesis specifically comprises the following steps:
step 1: calculate rays and sample importance
Calculating the origin and direction of the ray from the camera position through each pixel on the focal plane using the camera's intrinsic and extrinsic parameters, and performing coarse and fine two-stage sampling on the rays, wherein the coarse sampling stage divides each ray equally into a fixed number of intervals for sampling, and the fine sampling stage uses the volume density obtained in the coarse stage as the sampling weight;
step 2: block hierarchy for computing scene visibility
Assembling the visible 'blocks' into a 'tile' that is loaded onto the GPU, and determining the blocks that need to be loaded into GPU memory for rendering operations;
step 3: dynamically loaded block hierarchy
Using the adaptive Kd-Tree partitioning method based on the input point-cloud distribution, dividing the whole scene into 'blocks' stored piecewise on the GPU and CPU; the scene division first selects the axis with the largest extent among the X, Y, and Z axes as the cutting boundary, then splits the point set at the median, and repeats this process until $B$ point sets are obtained, where $B$ denotes the number of blocks;
step 4: group hierarchy based on vector matrix decomposition
Dividing the "group" hierarchy within "blocks" for vector-matrix decomposition; all activated groups are sorted along an axis according to their indices, the three-dimensional space is divided into an ordered group list through a re-indexing function, and feature padding is performed around the edges of the decomposition vectors;
step 5: voxel hierarchy characterizing scene geometry
Initializing a voxel level using a sparse initialization strategy, optimizing resource utilization by focusing on a region of interest, and employing a gradual upsampling process to gradually improve reconstruction quality from coarse to fine;
step 6: volume rendering module
Substituting the volume density and color of the sample points on the rays into the volume rendering equation to construct a volume rendering module and compute the color of the corresponding rays.
2. The layered and extensible new view synthesis method according to claim 1, wherein step 2 designs a tile hierarchy for computing scene visibility: the visible blocks are loaded as one tile onto the GPU, and collision detection is performed to quickly determine the blocks that need to be loaded into GPU memory for rendering operations.
3. The layered and extensible new view synthesis method according to claim 1, wherein step 3 designs a dynamically loaded block hierarchy and adopts an adaptive Kd-Tree method based on the input point-cloud distribution to divide the scene, realizing piecewise storage of the whole scene on the GPU and CPU.
4. The layered and extensible new view synthesis method according to claim 1, wherein step 4 designs a group hierarchy based on vector-matrix decomposition and a re-indexing function $f$; the group hierarchy sorts the activated groups along the coordinate axis, each group having a corresponding decomposition basis vector $\mathbf{v}$ and decomposition basis matrix $\mathbf{M}$ onto which the information in space is decomposed; the re-indexing function $f$ divides the space into an ordered list of groups, and actual features are padded around the edges of the decomposition vectors.
5. The layered and extensible new view synthesis method according to claim 1, wherein step 5 designs a voxel hierarchy characterizing the scene geometry: only voxels close to the geometric surface are activated using a sparse initialization strategy; activation states are determined by evaluating the proportion of intersection and by computing the overlap between current-level and previous-level voxels; and the volume density and color are computed for the activated voxels.
CN202311316525.1A 2023-10-12 2023-10-12 Layered and extensible new view synthesis method Pending CN117274032A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311316525.1A CN117274032A (en) 2023-10-12 2023-10-12 Layered and extensible new view synthesis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311316525.1A CN117274032A (en) 2023-10-12 2023-10-12 Layered and extensible new view synthesis method

Publications (1)

Publication Number Publication Date
CN117274032A true CN117274032A (en) 2023-12-22

Family

ID=89219488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311316525.1A Pending CN117274032A (en) 2023-10-12 2023-10-12 Layered and extensible new view synthesis method

Country Status (1)

Country Link
CN (1) CN117274032A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117689791A (en) * 2024-02-02 2024-03-12 山东再起数据科技有限公司 Three-dimensional visual multi-scene rendering application integration method
CN117689791B (en) * 2024-02-02 2024-05-17 山东再起数据科技有限公司 Three-dimensional visual multi-scene rendering application integration method

Similar Documents

Publication Publication Date Title
US8570322B2 (en) Method, system, and computer program product for efficient ray tracing of micropolygon geometry
US9972129B2 (en) Compression of a three-dimensional modeled object
US7940279B2 (en) System and method for rendering of texel imagery
JP7029283B2 (en) Image complement
US11640690B2 (en) High resolution neural rendering
EP3008702A1 (en) Scalable volumetric 3d reconstruction
WO2012096790A2 (en) Planetary scale object rendering
US11189096B2 (en) Apparatus, system and method for data generation
Wald et al. Ray tracing structured AMR data using ExaBricks
Westerteiger et al. Spherical Terrain Rendering using the hierarchical HEALPix grid
CN115170741A (en) Rapid radiation field reconstruction method under sparse visual angle input
Wald et al. CPU volume rendering of adaptive mesh refinement data
Kivi et al. Real-time rendering of point clouds with photorealistic effects: a survey
JP2023178274A (en) Method and system for generating polygon meshes approximating surfaces using root-finding and iteration for mesh vertex positions
CN117274032A (en) Layered and extensible new view synthesis method
Zellmann et al. Memory‐Efficient GPU Volume Path Tracing of AMR Data Using the Dual Mesh
Argudo et al. Interactive inspection of complex multi-object industrial assemblies
Dobashi et al. An interactive rendering system using hierarchical data structure for earth-scale clouds
Wu et al. GPU-based adaptive data reconstruction for large-scale statistical visualization
Jin et al. Research on 3D Visualization of Drone Scenes Based on Neural Radiance Fields
US11776207B2 (en) Three-dimensional shape data processing apparatus and non-transitory computer readable medium
CN112215951B (en) Out-of-core multi-resolution point cloud representation method and point cloud display method
Lee et al. Geometry-Aware Projective Mapping for Unbounded Neural Radiance Fields
Li View-dependent Adaptive HLOD: real-time interactive rendering of multi-resolution models
Li et al. Neural Adaptive Scene Tracing (NAScenT).

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination