CN117671112A - Virtual-real fusion rendering method and system based on neural radiance field and voxelized representation - Google Patents

Virtual-real fusion rendering method and system based on neural radiance field and voxelized representation

Info

Publication number: CN117671112A
Application number: CN202311645398.XA
Authority: CN
Other languages: Chinese (zh)
Prior art keywords: density, radiation field, color, virtual, grid
Inventors: 李蔚清, 刘辅强
Current Assignee: Nanjing University of Science and Technology
Original Assignee: Nanjing University of Science and Technology
Priority date / Filing date: 2023-12-01
Publication date: 2024-03-08
Legal status: Pending

Landscapes

  • Image Generation (AREA)

Abstract

The invention provides a virtual-real fusion rendering method and system based on a neural radiance field and voxelized representation. A neural radiance field is used to express the geometry, material and illumination information of a real scene, and a density occupancy grid over the scene is optimized while the geometric model is trained; the mesh representation of a virtual object is converted into a voxel representation and the density occupancy grid is updated accordingly; a ray casting algorithm performs importance sampling with the help of the density occupancy grid; density, color, reflectivity, roughness and normals are obtained from the neural radiance field and the voxel model, the first bounce direction of each ray is randomly sampled in the hemisphere about the normal, each ray is tested for hits against the virtual model, and the color of each pixel is computed by blending, according to density and depth, the colors of the sampling points along the ray. The invention represents the geometric occlusion relationship and illumination consistency between virtual objects and the real scene more accurately, and improves the virtual-real fusion effect of real and virtual objects in the scene.

Description

Virtual-real fusion rendering method and system based on neural radiance field and voxelized representation
Technical Field
The invention relates to the field of augmented reality, and in particular to a virtual-real fusion rendering method and system based on a neural radiance field and voxelized representation.
Background
Virtual-real fusion mainly involves geometric consistency and illumination consistency. Geometric consistency means that a virtual object satisfies the correct occlusion and positional relationships in the three-dimensional real scene; illumination consistency means that the virtual object rendered by the virtual-real fusion rendering system appears lit consistently with the real scene, so that its surface shows correct illumination phenomena such as brightness variation, reflection and shadow.
For geometric consistency, occlusion handling based on three-dimensional reconstruction is widely used. The reconstruction is split into two stages: in an offline stage an RGB-D camera captures a depth map of the real scene and the three-dimensional coordinates of each point in a global coordinate system are obtained; in an online stage a GPU-based three-dimensional point-cloud alignment method speeds up convergence and reduces the number of iterations. The correct occlusion relationship between real and virtual objects is then obtained automatically by comparing the Z coordinates of corresponding pixels of the real and virtual objects within a small region. For illumination consistency, the light source is expressed with spherical Gaussians: a set of three-dimensional spherical Gaussians represents the surface radiance throughout the scene (both visible surfaces and surfaces outside the field of view), and the illumination at any spatial position and view direction is then rendered with standard volume rendering. However, a point cloud expresses the real scene too sparsely, so occlusion is easily computed incorrectly where the representation is too sparse, while spherical Gaussians can only express low-frequency illumination and can hardly express high-frequency illumination with sharp brightness changes; both approaches therefore share the problem that editable virtual objects cannot be fused into a neural radiance field for unified rendering, and the rendering quality is not ideal.
Disclosure of Invention
The invention aims to provide a virtual-real fusion rendering method and system based on a neural radiance field and voxelized representation that represent the geometric occlusion relationship and illumination consistency between virtual objects and the real scene more accurately and improve the virtual-real fusion effect of real and virtual objects in the scene.
The technical solution that achieves this aim is as follows: a virtual-real fusion rendering method based on a neural radiance field and voxelized representation, comprising the following steps:
step 1, reconstructing an implicit voxel representation of the real scene based on a neural radiance field, converting the virtual object into an explicit voxel representation, initializing a density occupancy grid from the neural radiance field, and updating the density occupancy grid after transforming the virtual object into the same world coordinate system as the real scene;
step 2, computing sampling points with the density occupancy grid along rays starting at the viewpoint and directed along the line of sight, wherein the sampling points obtain density, color, normal, reflectivity and roughness from the implicit and explicit voxel representations;
step 3, obtaining illumination information from the light field expressed by the neural radiance field, and substituting it together with the color, normal, reflectivity and roughness obtained in step 2 into a physically based rendering equation to compute the color value of each sampling point;
and step 4, based on the color values of the sampling points in step 3, computing each sampling point's color contribution through density accumulation and obtaining the final pixel color, thereby completing the rendering.
Further, reconstructing the implicit voxel representation of the real scene based on the neural radiance field in step 1 specifically includes:
firstly, preprocessing the picture data captured by circling the real scene to obtain the intrinsic and extrinsic parameters of the camera and the pose of each image, namely the camera focal length, pixel width and resolution, and the projection matrix from the world coordinate system to the camera coordinate system represented by a quaternion and a translation vector; the geometry, material and illumination of the real scene are then reconstructed under the supervision of the preprocessed data;
expressing the scene geometry with a signed distance field, expressing the signed distance field value at a position x with a multi-layer perceptron, and mapping the signed distance field to a density σ:
σ = Φ(MLP(E_geo(x)))
wherein Φ is a probability density function, E_geo is the multi-resolution hash encoding, and MLP is the multi-layer perceptron for the signed distance field;
after the reconstruction of the scene geometry is complete, illumination and material are reconstructed through multi-layer perceptrons, giving the material and illumination expressions:
c, r, s = MLP_mat(x)
L = MLP_i(x, d)
wherein c is the base color of the object material, r the roughness, s the reflectivity, x the three-dimensional coordinate, L the illumination, and d the illumination direction.
Further, converting the virtual object into an explicit voxel representation in step 1 specifically includes:
computing the AABB bounding box of the virtual object, setting the voxel resolution according to the size of the bounding box, computing the view matrices and orthographic matrices of virtual cameras along the X, Y and Z axes, and allocating buffers according to the voxel resolution for reading back the color, normal and reflectivity interpolated by the fragment shader;
each camera is placed at the center of the bounding box face in its direction, with a vector pointing from outside the bounding box toward its interior as the camera orientation; the view matrix and orthographic matrix of the direction with the largest projected triangle area are selected to transform into clip space;
rasterization is used to obtain the interpolated color, normal and reflectivity from the triangle primitives in clip space, which are sent to the fragment shader and stored in a three-dimensional texture.
Further, initializing the density occupancy grid from the neural radiance field in step 1 and updating the density occupancy grid after transforming the virtual object into the same world coordinate system as the real scene specifically includes:
firstly traversing every grid cell, computing the corresponding three-dimensional scene coordinate of each cell from its level and offset, and querying the density from the network with that coordinate to initialize the density occupancy grid;
then transforming the voxel coordinates from the model space coordinate system to the world coordinate system of the real scene with a transformation matrix, computing the offset of the density occupancy grid at each level from the voxel coordinates, and updating the corresponding density occupancy grid cells according to these offsets.
Further, step 2 specifically includes:
a ray is emitted from the camera center toward each pixel; pre-sampling points are taken with a preset step length, the level and offset of the occupancy grid are computed from each pre-sampling point's position, and whether the cell is empty space is judged from its density; if the cell is empty the ray keeps stepping, otherwise the point is taken as a sampling point; the density σ, color c, roughness r, reflectivity s and normal n are then obtained from the neural radiance field and the voxels respectively according to the three-dimensional coordinates of the sampling points.
Further, step 3 specifically includes:
let the three-dimensional coordinate of a sampling point be x; on the hemisphere determined by the normal at this position, N points are obtained uniformly with the Fibonacci grid sampling method, the direction vector from position x to the i-th point is denoted w_i, and the area on the hemisphere around this direction is the differential area d_s; the illumination intensity L_i at position x in direction w_i is queried through a neural network, and the outgoing illumination intensity L_o of the point in the viewing direction w_o is then the sum, over the N sampled directions, of the product of the microfacet BRDF, the incident illumination L_i, the cosine term n · w_i and the differential area d_s:
L_o(x, w_o) = Σ_{i=1..N} f_r(w_i, w_o) · L_i(x, w_i) · (n · w_i) · d_s
wherein the Fresnel term of the BRDF f_r is
F(w_o, h, s) = s + (1 - s)(1 - (h · w_o))^5
with h the half vector and k an intermediate variable of the BRDF related to the roughness r; the outgoing illumination intensity L_o is the color value of the sampling point observed from that viewpoint.
Further, computing each sampling point's color contribution through density accumulation in step 4 specifically includes:
let o be the ray origin and v the direction vector, so that the position of any point on the ray is written r(t) = o + v·t; the transmittance at the sampling point r(t) is computed by accumulating the density along the ray:
T(t) = exp(-∫_0^t σ(r(u)) du)
wherein σ is the density value at the sampling point r(t), given by:
σ = σ_n + σ_v
σ_n and σ_v represent the densities obtained from the neural radiance field and the explicit voxels respectively; the weight w(t) at the sampling point r(t) is expressed as:
w(t) = T(t)(1 - e^(-σ(t)))
the color value C of the sampling point r(t) is obtained by interpolating with the weights σ_n and σ_v relative to σ:
C = (C_n·σ_n + C_v·σ_v)/σ
C_n and C_v are the colors computed by substituting into the rendering equation the geometry, material and illumination obtained at the sampling point from the neural radiance field and the explicit voxels respectively.
Further, the final pixel color is obtained as follows:
let the set of sampling points on the ray be {r(t_1), …, r(t_n)}, where n is the number of sampling points; the final pixel color C_p is expressed as the sum of each sampling point's color multiplied by its weight:
C_p = Σ_{i=1..n} w_i · C_i
a virtual-real fusion drawing system based on nerve radiation field and voxelization characterization comprises a nerve radiation field reconstruction module, a ray projection rendering module, a grid model voxelization module and a density occupation grid updating module,
the neural radiation field reconstruction module is used for reconstructing implicit voxel expression of a real scene through a radiation field;
the grid model voxelization module is used for converting the virtual object into an explicit voxel expression;
the density occupation grid updating module initializes the density occupation grid by adopting the density distribution of the nerve radiation field, and updates the density occupation grid after the explicit voxels are subjected to coordinate transformation;
the ray projection rendering module comprises a sampling calculation unit and a rendering unit, wherein the sampling calculation unit calculates sampling points through a density occupation grid along rays taking a viewpoint as a starting point and a sight line as a direction, and the sampling points acquire density, color, normal, reflectivity and roughness from implicit voxel expression and explicit voxel expression; the rendering unit calculates the color duty ratio of the sampling points through accumulation of densities based on the color values of the sampling points, and obtains the color of the final pixel, thereby realizing drawing.
A computer storage medium storing an executable program for execution by a processor to perform the steps of implementing the virtual-real fusion rendering method based on neural radiation fields and voxelized characterization.
The invention has the positive effects that: aiming at the problem that the conventional nerve radiation field cannot be fused with the editable virtual object to perform unified rendering, the virtual object expressed by the voxels can be added in the established nerve radiation field, the nerve radiation field and the virtual object expressed by the voxels are unified with density occupation grid expression and rendered by using a ray projection algorithm, and the invention can more accurately represent the geometrical shielding relation and illumination consistency between the virtual object and the real scene and improve the virtual-real fusion effect of the real object and the virtual object in the scene.
Drawings
Fig. 1 is a diagram of the virtual-real fusion rendering system of the present invention.
Fig. 2 is a flow chart of the virtual-real fusion rendering algorithm of the present invention.
Fig. 3 is a schematic diagram of the occupancy-grid-based ray marching algorithm of the present invention.
Fig. 4 is a schematic diagram of illumination sampling during ray casting according to the present invention.
Detailed Description
The invention will be described in further detail with reference to the drawings and the detailed description.
The virtual-real fusion rendering method based on a neural radiance field and voxelized representation of this embodiment is implemented on a virtual-real fusion rendering system, shown in Fig. 1, which comprises a neural radiance field reconstruction module, a ray casting rendering module, a mesh model voxelization module and a density occupancy grid updating module.
The neural radiance field reconstruction module is used for reconstructing the implicit voxel representation of the real scene through the radiance field;
the ray casting rendering module is used for sampling and rendering and comprises a sampling calculation unit and a rendering unit; based on the implicit and explicit voxel representations, the sampling calculation unit obtains density, color, normal, reflectivity and roughness along each ray with a ray casting algorithm; the rendering unit obtains illumination information from the light field expressed by the neural radiance field, substitutes it together with the obtained density, color, normal, reflectivity and roughness into a physically based rendering equation to compute the sampling-point color values, and computes each sampling point's color contribution through density accumulation to produce the rendering;
the mesh model voxelization module is used for converting the virtual object (a triangle mesh model) into an explicit voxel representation;
the density occupancy grid updating module initializes the density occupancy grid with the density distribution of the neural radiance field, and updates the density occupancy grid after the explicit voxels have been transformed into world coordinates.
This embodiment also provides a virtual-real fusion rendering method based on a neural radiance field and voxelized representation: a voxel-represented virtual model is added into the implicit representation of the neural radiance field reconstructed from the real scene, and virtual-real fusion rendering is performed. During ray casting, the illumination of each sampling point after one or more ray bounces is computed from the geometric and material information and substituted into the rendering equation to obtain the color of the sampling point; the colors of the sampling points along the ray are then accumulated with density and depth as weights to give the pixel color. The concrete implementation, shown in Fig. 2, comprises the following steps:
(1) The intrinsic and extrinsic camera parameters and poses are computed from dense images captured by circling the scene, and the geometry, material and illumination of the real scene are reconstructed into an implicit neural representation with a neural radiance field method, specifically as follows.
The first step: the picture data set is preprocessed to obtain the intrinsic and extrinsic camera parameters and the pose of each image, namely the camera focal length, pixel width and resolution, and the transformation matrix from the world coordinate system to the camera coordinate system, with rotation represented by a quaternion and translation by a three-dimensional vector.
The second step: the data obtained in preprocessing supervise the reconstruction of the neural radiance field for the scene. The geometric surface is expressed with a signed distance field: f(x) takes a three-dimensional spatial coordinate as input and outputs the signed minimum distance from that point to the model surface, negative inside the model, positive outside and zero on the surface, so the surface S is expressed relative to positions x as:
S = {x ∈ R^3 | f(x) = 0}
The value of the signed distance field is converted into density by applying a probability density function Φ to it:
σ = Φ(f(x)), with f(x) = MLP(E_geo(x))
where E_geo is the multi-resolution hash encoding, x is a three-dimensional coordinate point and MLP is the multi-layer perceptron for the signed distance field.
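Since the concrete form of the probability density function Φ is not reproduced here, the following Python sketch uses the Laplace-CDF mapping popularized by VolSDF as a stand-in; the function name, the parameters beta and alpha, and this choice of Φ are assumptions:

import numpy as np

def sdf_to_density(sdf_value, beta=0.1, alpha=None):
    # Map a signed distance value to a volume density sigma by applying a
    # bell-shaped, probability-style function of the distance. The Laplace-CDF
    # form below is only one possible choice of Phi.
    if alpha is None:
        alpha = 1.0 / beta
    s = -np.asarray(sdf_value, dtype=float)   # positive inside the surface
    psi = np.where(s <= 0, 0.5 * np.exp(s / beta), 1.0 - 0.5 * np.exp(-s / beta))
    return alpha * psi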
The third step: after the scene geometry is reconstructed, material properties such as roughness and reflectivity are reconstructed through a multi-layer perceptron; direct and indirect illumination are further taken into account, and the reconstruction is completed by jointly training the geometry, material and illumination networks, with the formulas:
c, r, s = MLP_mat(x)
c_i = MLP_i(x, d)
where c is the base color of the object material, r the roughness, s the reflectivity, x the three-dimensional coordinate, c_i the illumination and d the illumination direction. The material and illumination are substituted into the rendering equation to render pictures with the camera intrinsics and extrinsics obtained in preprocessing, and the resulting error is used to optimize the neural network parameters.
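A minimal sketch of the photometric supervision just mentioned: each training ray is rendered with the current field and compared with the captured pixel. Here render_fn is a placeholder for the volume renderer of step (4), and the plain mean-squared error is an assumption:

import numpy as np

def photometric_loss(render_fn, rays_o, rays_d, gt_rgb):
    # Render a batch of rays with the current geometry/material/illumination
    # networks and measure the error against the ground-truth pixel colors.
    pred = np.stack([render_fn(o, d) for o, d in zip(rays_o, rays_d)])
    return np.mean((pred - gt_rgb) ** 2)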
(2) The voxel resolution is determined by computing the bounding box of the virtual model expressed as a triangle mesh; the virtual model is transformed into the voxel grid coordinate system through an orthographic matrix, rasterization with interpolation is performed along the X, Y and Z axes, the fragment shader computes the voxel each fragment maps to from the fragment depth, the rasterization-interpolated color, normal and reflectivity values are written into the corresponding buffer, and the buffer is read back and stored as a local file after rendering finishes.
The first step: compute the AABB bounding box of the mesh model, set the voxel resolution, and compute the view matrices and orthographic matrices with which three virtual cameras render the object from three directions of its bounding box.
The second step: allocate buffers according to the voxel resolution for reading back the color, normal and reflectivity interpolated by the fragment shader, and disable the depth test and back-face culling so that all triangles of the model reach the fragment shader and a correct voxelization result is obtained.
The third step: if a triangle is perpendicular to the selected projection plane, the voxel model cracks after voxelization; therefore the three direction vectors (0, 0, 1), (0, 1, 0) and (1, 0, 0) are each dotted with the triangle's normal vector, and the projection matrix of the direction giving the largest value is selected.
The fourth step: with a conservative rasterization algorithm, the interpolated normal, reflectivity, roughness and base color are carried by each fragment and sent to the fragment shader; the fragment shader computes the corresponding index in the three-dimensional texture from the fragment's depth and offset coordinates, and stores the normal, reflectivity, roughness and base color in the three-dimensional texture.
(3) The density occupancy grid is a multi-scale cascaded occupancy grid used to optimize the sampling stage of ray casting. It must be updated both during training of the neural radiance field and when the voxel model is placed into the field; by querying the density occupancy grid during ray marching, pre-sampling points falling in low-density grid cells can be skipped directly, which reduces the performance cost of sampling empty regions.
The first step: update the density occupancy grid during reconstruction of the neural radiance field: first traverse every grid cell, compute the corresponding three-dimensional scene coordinate of each cell from its level and offset, and query the density from the network with that coordinate to update the density occupancy grid.
The second step: transform the coordinates of the voxel model from model space to world space with the model transformation matrix, compute the offset of the density occupancy grid at each level from the voxel coordinates, and update the corresponding density occupancy grid cells according to these offsets.
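The following Python sketch collapses the multi-scale cascade into a single-level density occupancy grid to show the two update paths just described; the class and method names, scene bounds and resolution are assumptions:

import numpy as np

class DensityOccupancyGrid:
    def __init__(self, resolution=128, bound_min=(-1, -1, -1), bound_max=(1, 1, 1)):
        self.res = resolution
        self.bmin = np.asarray(bound_min, dtype=float)
        self.bmax = np.asarray(bound_max, dtype=float)
        self.density = np.zeros((resolution,) * 3, dtype=float)

    def cell_center(self, idx):
        # Three-dimensional scene coordinate of the center of grid cell idx.
        return self.bmin + (np.asarray(idx) + 0.5) / self.res * (self.bmax - self.bmin)

    def init_from_field(self, density_fn):
        # First update path: query the neural radiance field's density at every cell.
        for idx in np.ndindex(self.density.shape):
            self.density[idx] = density_fn(self.cell_center(idx))

    def update_from_voxels(self, voxel_coords, voxel_density, model_matrix):
        # Second update path: transform explicit voxel coordinates into world space
        # and write their density into the corresponding grid cells.
        vc = np.asarray(voxel_coords, dtype=float)
        homo = np.concatenate([vc, np.ones((len(vc), 1))], axis=1)
        world = (model_matrix @ homo.T).T[:, :3]
        idx = np.floor((world - self.bmin) / (self.bmax - self.bmin) * self.res).astype(int)
        idx = np.clip(idx, 0, self.res - 1)
        for (i, j, k), d in zip(idx, voxel_density):
            self.density[i, j, k] = max(self.density[i, j, k], d)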
(4) As shown in the volume rendering flow of Fig. 3, a ray is emitted toward each pixel of the camera; sampling points are computed with the density occupancy grid, geometric and material information is obtained from the neural radiance field and the voxels respectively, illumination information is obtained from the light field expressed by the neural radiance field, and all of it is substituted into the rendering equation to compute the sampling-point color values. Density is accumulated along each ray, and once it reaches a certain threshold no further samples are taken on that ray, which improves performance. The color of each sampling point is obtained by interpolating the colors with the densities obtained from the neural radiance field and the voxels as weights, and the sampling points along the ray are then accumulated with the density weights to give the final pixel color.
The first step: the emission direction v of each ray is computed from the camera position o and the offset of the pixel relative to the camera origin; the position of any point on the ray can be expressed as r(t) = o + v·t.
The second step: pre-sampling points are computed with a preset step length; the level and offset of the occupancy grid are computed from each pre-sampling point's position and, as shown in Fig. 3, whether the cell is empty space is judged from the occupancy grid; if the cell is empty the ray keeps stepping, otherwise the point is returned as a sampling point. This finally yields the set of sampling points {r(t_1), …, r(t_n)}.
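A Python sketch of the empty-space-skipping ray march of the first two steps, reusing the single-level grid sketched above; the step length, the far bound t_max and the density threshold are assumptions:

import numpy as np

def march_ray(origin, direction, grid, step=0.01, t_max=10.0, density_eps=0.01):
    # Step along r(t) = o + v*t, query the density occupancy grid at every
    # pre-sampling point and keep only the points whose cell is not empty.
    v = np.asarray(direction, dtype=float)
    v = v / np.linalg.norm(v)
    o = np.asarray(origin, dtype=float)
    samples = []
    t = 0.0
    while t < t_max:
        p = o + v * t
        idx = np.floor((p - grid.bmin) / (grid.bmax - grid.bmin) * grid.res).astype(int)
        inside = np.all(idx >= 0) and np.all(idx < grid.res)
        if inside and grid.density[tuple(idx)] > density_eps:
            samples.append(t)          # non-empty cell: keep as a sampling point
        t += step                      # empty cell (or outside the grid): keep stepping
    return np.array(samples)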
The third step: for any sampling point in the set, its density σ, color c, roughness r, reflectivity s and normal n are obtained from the neural radiance field and the explicit voxels respectively, according to its three-dimensional coordinate x. On the hemisphere determined by the normal, N points are sampled uniformly with the Fibonacci grid; for the i-th point, the direction from the sampling point to it is the light-gathering direction w_i and the corresponding area on the hemisphere is the differential area d_s. The light intensity L_i is queried through the neural network, and the illumination intensity L_o observed at this point from the viewing direction w_o is then the sum, over the N sampled directions, of the product of the microfacet BRDF, the incident illumination L_i, the cosine term n · w_i and the differential area d_s:
L_o(x, w_o) = Σ_{i=1..N} f_r(w_i, w_o) · L_i(x, w_i) · (n · w_i) · d_s
where the Fresnel term of the BRDF f_r is
F(w_o, h, s) = s + (1 - s)(1 - (h · w_o))^5
with h the half vector and k an intermediate variable of the BRDF derived from the roughness r; the outgoing illumination intensity L_o is the color value C of the sampling point observed from this viewpoint.
As shown in Fig. 4, for a sampling point of a camera ray taken from the neural radiance field, if the once-bounced ray hits a point x on the voxel-represented virtual object, the radiance at x is computed with the equation above and the outgoing radiance in that direction is then computed with the bidirectional reflectance distribution function; if the bounced ray does not hit the virtual object, the illumination can be sampled directly from the light field reconstructed from the real scene.
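A Python sketch of the hemisphere illumination sampling of this step: a Fibonacci lattice distributes the N directions over the hemisphere about the normal, and the Fresnel term quoted above is evaluated per direction. The normal-distribution and geometry terms of the full BRDF (which involve k) are omitted, and all names here are assumptions:

import numpy as np

def fibonacci_hemisphere(normal, n_samples=64):
    # Distribute n_samples directions roughly uniformly over the hemisphere about `normal`.
    i = np.arange(n_samples)
    z = (i + 0.5) / n_samples                    # cos(theta) in (0, 1): upper hemisphere
    golden = (1 + 5 ** 0.5) / 2
    phi = 2 * np.pi * i / golden
    r = np.sqrt(1 - z * z)
    local = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, helper)
    t1 = t1 / np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    return local @ np.stack([t1, t2, n])         # rotate local directions onto the normal frame

def fresnel_schlick(w_o, h, s):
    # F(w_o, h, s) = s + (1 - s) * (1 - h.w_o)^5, as in the formula above.
    return s + (1 - s) * (1 - np.dot(h, w_o)) ** 5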
The fourth step: according to the three-dimensional coordinate of the sampling point, the density and the color computed by the rendering equation are obtained from the neural radiance field and the voxels respectively, and the transmittance at the sampling point r(t) is computed by density accumulation:
T(t) = exp(-∫_0^t σ(r(u)) du)
where σ(t) is the density value at the point r(t).
The fifth step: the colors of the neural radiance field and the voxels at the sampling point are mixed according to density:
σ = σ_n + σ_v
C = (C_n·σ_n + C_v·σ_v)/σ
where σ_n and σ_v are the densities of the neural radiance field and the voxels, and C_n and C_v are the colors computed in the neural radiance field and the explicit voxels respectively.
The sixth step: the weight of any point r(t) is computed as
w(t) = T(t)(1 - e^(-σ(t)))
and the final pixel color is obtained by accumulating the sampling points on the ray according to their weights:
C_p = Σ_{i=1..n} w_i · C_i
where C_i and w_i are the color and weight of each sampling point, C_p is the pixel color and n is the number of sampling points.
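A Python sketch of the density-weighted blending of the fourth to sixth steps along one ray. Here query_nerf and query_voxel are placeholders returning (sigma, color) at a given t; folding the step length into the per-segment opacity and the early-termination threshold are assumptions:

import numpy as np

def composite_ray(sample_ts, step, query_nerf, query_voxel, term_threshold=0.995):
    pixel = np.zeros(3)
    transmittance = 1.0                              # T(t), accumulated along the ray
    for t in sample_ts:
        sigma_n, c_n = query_nerf(t)
        sigma_v, c_v = query_voxel(t)
        sigma = sigma_n + sigma_v
        if sigma <= 0.0:
            continue
        # Density-weighted mix of the two colors: C = (C_n*sigma_n + C_v*sigma_v) / sigma
        color = (np.asarray(c_n) * sigma_n + np.asarray(c_v) * sigma_v) / sigma
        alpha = 1.0 - np.exp(-sigma * step)          # opacity of this ray segment
        weight = transmittance * alpha               # w(t) = T(t) * (1 - e^{-sigma})
        pixel += weight * color
        transmittance *= (1.0 - alpha)
        if 1.0 - transmittance > term_threshold:     # stop once the ray is nearly opaque
            break
    return pixel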
The invention provides a virtual-real fusion rendering method based on a neural radiance field and voxelized representation, which solves the problem that editable virtual objects cannot be fused into a neural radiance field for unified rendering.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims and the equivalents thereof, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A virtual-real fusion rendering method based on a neural radiance field and voxelized representation, characterized by comprising the following steps:
step 1, reconstructing an implicit voxel representation of the real scene based on a neural radiance field, converting the virtual object into an explicit voxel representation, initializing a density occupancy grid from the neural radiance field, and updating the density occupancy grid after transforming the virtual object into the same world coordinate system as the real scene;
step 2, computing sampling points with the density occupancy grid along rays starting at the viewpoint and directed along the line of sight, wherein the sampling points obtain density, color, normal, reflectivity and roughness from the implicit and explicit voxel representations;
step 3, obtaining illumination information from the light field expressed by the neural radiance field, and substituting it together with the color, normal, reflectivity and roughness obtained in step 2 into a physically based rendering equation to compute the color value of each sampling point;
and step 4, based on the color values of the sampling points in step 3, computing each sampling point's color contribution through density accumulation and obtaining the final pixel color, thereby completing the rendering.
2. The virtual-real fusion rendering method based on a neural radiance field and voxelized representation according to claim 1, wherein reconstructing the implicit voxel representation of the real scene based on the neural radiance field in step 1 specifically comprises:
firstly, preprocessing the picture data captured by circling the real scene to obtain the intrinsic and extrinsic parameters of the camera and the pose of each image, namely the camera focal length, pixel width and resolution, and the projection matrix from the world coordinate system to the camera coordinate system represented by a quaternion and a translation vector, the geometry, material and illumination of the real scene then being reconstructed under the supervision of the preprocessed data;
expressing the scene geometry with a signed distance field, expressing the signed distance field value at a position x with a multi-layer perceptron, and mapping the signed distance field to a density σ:
σ = Φ(MLP(E_geo(x)))
wherein Φ is a probability density function, E_geo is the multi-resolution hash encoding, and MLP is the multi-layer perceptron for the signed distance field;
after the reconstruction of the scene geometry is complete, reconstructing illumination and material through multi-layer perceptrons, the material and illumination expressions being:
c, r, s = MLP_mat(x)
L = MLP_i(x, d)
wherein c is the base color of the object material, r the roughness, s the reflectivity, x the three-dimensional coordinate, L the illumination, and d the illumination direction.
3. The virtual-real fusion rendering method based on a neural radiance field and voxelized representation according to claim 1, wherein converting the virtual object into an explicit voxel representation in step 1 specifically comprises:
computing the AABB bounding box of the virtual object, setting the voxel resolution according to the size of the bounding box, computing the view matrices and orthographic matrices of virtual cameras along the X, Y and Z axes, and allocating buffers according to the voxel resolution for reading back the color, normal and reflectivity interpolated by the fragment shader;
placing each camera at the center of the bounding box face in its direction, with a vector pointing from outside the bounding box toward its interior as the camera orientation, and selecting the view matrix and orthographic matrix of the direction with the largest projected triangle area to transform into clip space;
obtaining the interpolated color, normal and reflectivity from the triangle primitives in clip space by rasterization, and sending them to the fragment shader to be stored in a three-dimensional texture.
4. The virtual-real fusion rendering method based on a neural radiance field and voxelized representation according to claim 1, wherein initializing the density occupancy grid from the neural radiance field in step 1 and updating the density occupancy grid after transforming the virtual object into the same world coordinate system as the real scene specifically comprises:
firstly traversing every grid cell, computing the corresponding three-dimensional scene coordinate of each cell from its level and offset, and querying the density from the network with that coordinate to initialize the density occupancy grid;
then transforming the voxel coordinates from the model space coordinate system to the world coordinate system of the real scene with a transformation matrix, computing the offset of the density occupancy grid at each level from the voxel coordinates, and updating the corresponding density occupancy grid cells according to these offsets.
5. The virtual-real fusion rendering method based on a neural radiance field and voxelized representation according to claim 1, wherein step 2 specifically comprises:
emitting a ray from the camera center toward each pixel, taking pre-sampling points with a preset step length, computing the level and offset of the occupancy grid from each pre-sampling point's position, and judging from the density whether the cell is empty space, wherein if the cell is empty the ray keeps stepping, otherwise the point is taken as a sampling point; and obtaining the density σ, color c, roughness r, reflectivity s and normal n from the neural radiance field and the voxels respectively according to the three-dimensional coordinates of the sampling points.
6. The virtual-real fusion rendering method based on a neural radiance field and voxelized representation according to claim 1, wherein step 3 specifically comprises:
letting the three-dimensional coordinate of a sampling point be x, obtaining N points uniformly on the hemisphere determined by the normal at this position with the Fibonacci grid sampling method, the direction vector from position x to the i-th point being denoted w_i and the area on the hemisphere around this direction being the differential area d_s; querying through a neural network the illumination intensity L_i at position x in direction w_i, the outgoing illumination intensity L_o of the point in the viewing direction w_o then being the sum, over the N sampled directions, of the product of the microfacet BRDF, the incident illumination L_i, the cosine term n · w_i and the differential area d_s:
L_o(x, w_o) = Σ_{i=1..N} f_r(w_i, w_o) · L_i(x, w_i) · (n · w_i) · d_s
wherein the Fresnel term of the BRDF f_r is
F(w_o, h, s) = s + (1 - s)(1 - (h · w_o))^5
with h the half vector and k an intermediate variable of the BRDF related to the roughness r; the outgoing illumination intensity L_o is the color value of the sampling point observed from that viewpoint.
7. The virtual-real fusion rendering method based on a neural radiance field and voxelized representation according to claim 1, wherein computing each sampling point's color contribution through density accumulation in step 4 specifically comprises:
letting o be the ray origin and v the direction vector, the position of any point on the ray being written r(t) = o + v·t, and computing the transmittance at the sampling point r(t) by accumulating the density along the ray:
T(t) = exp(-∫_0^t σ(r(u)) du)
wherein σ is the density value at the sampling point r(t), given by:
σ = σ_n + σ_v
σ_n and σ_v representing the densities obtained from the neural radiance field and the explicit voxels respectively; the weight w(t) at the sampling point r(t) is expressed as:
w(t) = T(t)(1 - e^(-σ(t)))
the color value C of the sampling point r(t) is obtained by interpolating with the weights σ_n and σ_v relative to σ:
C = (C_n·σ_n + C_v·σ_v)/σ
C_n and C_v being the colors computed by substituting into the rendering equation the geometry, material and illumination obtained at the sampling point from the neural radiance field and the explicit voxels respectively.
8. The virtual-real fusion rendering method based on a neural radiance field and voxelized representation according to claim 7, wherein the final pixel color is obtained as follows:
letting the set of sampling points on the ray be {r(t_1), …, r(t_n)}, with n the number of sampling points, the final pixel color C_p is expressed as the sum of each sampling point's color multiplied by its weight:
C_p = Σ_{i=1..n} w_i · C_i
9. a virtual-real fusion drawing system based on nerve radiation field and voxelization characterization is characterized by comprising a nerve radiation field reconstruction module, a ray projection rendering module, a grid model voxelization module and a density occupation grid updating module, wherein,
the neural radiation field reconstruction module is used for reconstructing implicit voxel expression of a real scene through a radiation field;
the grid model voxelization module is used for converting the virtual object into an explicit voxel expression;
the density occupation grid updating module initializes the density occupation grid by adopting the density distribution of the nerve radiation field, and updates the density occupation grid after the explicit voxels are subjected to coordinate transformation;
the ray projection rendering module comprises a sampling calculation unit and a rendering unit, wherein the sampling calculation unit calculates sampling points through a density occupation grid along rays taking a viewpoint as a starting point and a sight line as a direction, and the sampling points acquire density, color, normal, reflectivity and roughness from implicit voxel expression and explicit voxel expression; the rendering unit calculates the color duty ratio of the sampling points through accumulation of densities based on the color values of the sampling points, and obtains the color of the final pixel, thereby realizing drawing.
10. A computer storage medium, characterized in that it stores an executable program that is executed by a processor to implement the steps of the virtual-real fusion rendering method based on a neural radiance field and voxelized representation according to any one of claims 1 to 8.
CN202311645398.XA 2023-12-01 2023-12-01 Virtual-real fusion rendering method and system based on neural radiance field and voxelized representation Pending CN117671112A (en)

Priority Applications (1)

CN202311645398.XA — priority date 2023-12-01, filing date 2023-12-01 — Virtual-real fusion rendering method and system based on neural radiance field and voxelized representation

Applications Claiming Priority (1)

CN202311645398.XA — priority date 2023-12-01, filing date 2023-12-01 — Virtual-real fusion rendering method and system based on neural radiance field and voxelized representation

Publications (1)

CN117671112A — published 2024-03-08

Family

ID=90082096

Family Applications (1)

CN202311645398.XA (pending) — CN117671112A (en) — priority date 2023-12-01, filing date 2023-12-01 — Virtual-real fusion rendering method and system based on neural radiance field and voxelized representation

Country Status (1)

CN — CN117671112A (en)


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination