CN115439616B - Heterogeneous object characterization method based on multi-object image alpha superposition - Google Patents
- Publication number
- CN115439616B (application CN202211383316.4A)
- Authority
- CN
- China
- Prior art keywords
- alpha
- model
- camera
- axis
- under
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Generation (AREA)
Abstract
The invention discloses a heterogeneous object characterization method based on multi-object image alpha superposition, belonging to the field of computer graphics and comprising the following steps: S1, establishing a virtual modeling environment coordinate system, calibrating and calculating camera parameters, including the internal parameters Kc and external parameters Rc, according to the position of a virtual camera, and recording the light source information L_in and time T_C of the current environment; S2, performing layered rendering on the object models in the current virtual modeling environment, and outputting the rendering layers of the models determined by the camera parameters of step S1; S3, superimposing each type of object within the visual range of the virtual camera on the alpha channel, in order of each layer's distance from the virtual camera, to complete the representation of the heterogeneous objects. The invention avoids acquiring the large amount of information, such as complex light and shadow, reflection and multiple viewing angles, otherwise required by multi-object rendering, and greatly reduces the computational rendering resources.
Description
Technical Field
The invention relates to the field of computer graphics, in particular to a heterogeneous object characterization method based on multi-object image alpha superposition.
Background
Emerging video technologies such as free-viewpoint video, interactive video and immersive video have become hot spots. Existing object modeling methods mainly include camera arrays, mesh modeling, point clouds and neural rendering (NERF), each of which has advantages and disadvantages in different scenes.
Scene object reconstruction typically models the entire scene and all objects within it using a rendering engine, which is usually expensive when there are many complex objects, shadows, reflections, etc. In some scenes, such as stage shows or reconstruction at specific viewing angles, full information about the whole scene or all objects is not needed; the scene only has to be correct at the given shooting angle. A flexible, highly compatible scene representation is therefore needed that keeps production cost sufficiently low while presenting the most realistic and lively combination of virtual and real imagery.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a heterogeneous object characterization method based on multi-object image alpha superposition, which avoids acquiring a large amount of information such as complex light and shadow, reflection and multiple viewing angles otherwise required by multi-object rendering, and greatly reduces computational rendering resources.
The purpose of the invention is realized by the following scheme:
a heterogeneous object characterization method based on multi-object image alpha superposition comprises the following steps:
S1, establishing a virtual modeling environment coordinate system, calibrating and calculating camera parameters, including internal parameters Kc and external parameters Rc, according to the position of a virtual camera, and recording the light source information L_in and time T_C of the current environment;
S2, performing layered rendering on the object model in the current virtual modeling environment, and outputting corresponding rendering layers of the models determined by the camera parameters in the step S1;
and S3, carrying out multi-layer superposition on various types of objects within the visual range of the virtual camera in the scene in an alpha channel according to the distance between the layers and the virtual camera, and finishing the representation of the heterogeneous objects.
Further, in step S2, the substeps of:
S21, inputting the calibrated Kc and Rc, the recorded light source information L_in, and the time T_C;
S22, performing partial rendering on different model objects under the condition of camera parameter determination;
S23, under the condition that the camera parameters are determined, outputting RGBA pictures of the i objects in the virtual environment according to the current camera view angle and imaging size, each picture recording its distance D[i] from the focal plane.
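The role of the calibrated intrinsics Kc and extrinsics Rc in steps S21–S23 can be sketched as a standard pinhole projection. This is a minimal illustration, not the patent's implementation; the matrix values below are hypothetical:

```python
import numpy as np

def project_point(Kc, Rc, t, X):
    """Project a 3D world point X into pixel coordinates
    using intrinsics Kc and extrinsics (Rc, t)."""
    Xc = Rc @ X + t           # world -> camera coordinates
    uvw = Kc @ Xc             # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]   # perspective divide

# Hypothetical example: identity rotation, camera 5 units from the origin.
Kc = np.array([[800.0, 0.0, 320.0],
               [0.0, 800.0, 240.0],
               [0.0, 0.0, 1.0]])
Rc = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
uv = project_point(Kc, Rc, t, np.array([0.0, 0.0, 0.0]))
# A point on the optical axis lands at the principal point (320, 240).
```

Every per-object layer in step S23 is rendered through the same projection, which is why all layers align pixel-for-pixel when superimposed in step S3.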
Further, in step S2, the object model includes a NERF model, a MESH model, and a point cloud model.
Further, in step S3, each type of object is a completely opaque static or dynamic object.
Further, in step S3, performing the multi-layer superposition on the alpha channel includes: the alpha output under the NERF model, the alpha output under the Mesh model, and the alpha output under the point cloud model.
Further, the α output under the NERF model includes the steps of:
S301, NERF model scene expression, model input, model output, model rendering and view rendering. The scene of the NERF is expressed as F_Θ: (x, d) → (c, σ), where F_Θ is the mapping function of the NERF scene expression, x is the three-dimensional spatial position, d is the viewing direction, x and d are known quantities, c = (r, g, b) is the view-dependent 3D point color, and σ is the voxel density; the model input is x and d, and the output is c and σ. In model rendering, the camera ray is expressed as r(t) = o + t·d, where r(t) is the camera ray expression function, o is the ray origin, t is the ray distance, and the near and far boundaries of t are t_n and t_f. The ray color integral is

C(r) = ∫_{t_n}^{t_f} T(t) · σ(r(t)) · c(r(t), d) dt;

where T(t) = exp(−∫_{t_n}^{t} σ(r(s)) ds) is the accumulated transmittance, σ(r(t)) is the voxel density along the camera ray, and c(r(t), d) is the color of the camera ray in direction d. The value range of T(t) is (0, 1); s is a discrete point on the ray. For a completely opaque object, the value of T(t_w) is 0, where t_w is the point at which the ray meets the object surface. In view rendering, when the shooting view angle and position of the virtual camera are determined, i.e. d is determined, the rendered image at that view angle is output.
S302, alpha channel output: the image rendered at the selected view angle is synthesized with the transparency channel alpha, and the transparency channel alpha1 of the object is output.
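In practice the continuous ray integral above is evaluated by numerical quadrature over sampled points along each ray, as in the original NeRF formulation, and the accumulated weight directly gives the per-pixel alpha of step S302. A minimal sketch; the sample densities and colors below are hypothetical:

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Discretize C(r) = ∫ T(t)·σ(r(t))·c(r(t),d) dt.
    alpha_i = 1 - exp(-σ_i·δ_i); T_i = prod_{j<i} (1 - alpha_j).
    Returns the composited RGB and the accumulated alpha of the ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance up to each sample (exclusive cumulative product).
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = T * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    acc_alpha = weights.sum()   # the alpha-channel value output in S302
    return rgb, acc_alpha

# A high-density (opaque) sample stops the ray: accumulated alpha -> 1,
# matching the patent's assumption T(t_w) = 0 at the object surface.
sigmas = np.array([0.0, 50.0, 50.0])
colors = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.5, 0.5, 0.5]])
deltas = np.array([0.5, 0.5, 0.5])
rgb, a = volume_render(sigmas, colors, deltas)
```

Rays that hit empty space accumulate alpha near 0, so the NERF layer is naturally transparent outside the object, which is what makes it composable with the other layers in step S3.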
Further, in step S3, the α output under the Mesh model includes the sub-steps of:
S311, model expression of Mesh: define M = (T_i, C_i), where i ranges from 1 to n, n is a positive integer, and M is the data set of the n triangles forming the object; T_i is the spatial coordinate of a triangle, T_i = (x_i1, x_i2, x_i3, y_i1, y_i2, y_i3, z_i1, z_i2, z_i3), where x_i, y_i, z_i are the spatial coordinate positions of the three vertices of the i-th triangle on the x, y and z axes; x_i1, y_i1, z_i1 are the x-, y- and z-axis coordinates of the 1st vertex of the i-th triangle; x_i2, y_i2, z_i2 are those of the 2nd vertex; x_i3, y_i3, z_i3 are those of the 3rd vertex; and C_i is the color of the triangle;
S312, α channel output: with the camera view angle and orientation determined, capture the Mesh two-dimensional image information I_m at that view angle, I_m = (T_id, C_id), where T_id and C_id are the position information and color information of the triangles visible at the current view angle; since the object is considered opaque, the information of occluded triangles is left out of consideration. The two-dimensional image of the Mesh at the specific view angle is synthesized with the alpha channel to form the transparency channel α2 of the object.
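Selecting the visible triangles of I_m = (T_id, C_id) can be sketched as back-face culling: because the object is opaque, triangles whose normals point away from the camera are dropped. This is a simplified stand-in for full occlusion handling, and the triangle data below is hypothetical:

```python
import numpy as np

def visible_triangles(triangles, colors, view_dir):
    """Keep triangles whose normal faces the camera (back-face culling).
    triangles: (n, 3, 3) array of vertex positions; colors: (n, 3)."""
    v0, v1, v2 = triangles[:, 0], triangles[:, 1], triangles[:, 2]
    normals = np.cross(v1 - v0, v2 - v0)   # per-triangle face normal
    facing = normals @ view_dir < 0        # normal points toward the camera
    return triangles[facing], colors[facing]

# Two triangles with opposite winding; a camera looking along -z sees one.
tris = np.array([
    [[0, 0, 0], [1, 0, 0], [0, 1, 0]],   # normal +z, faces the viewer
    [[0, 0, 0], [0, 1, 0], [1, 0, 0]],   # normal -z, faces away
], dtype=float)
cols = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
vis_t, vis_c = visible_triangles(tris, cols, view_dir=np.array([0.0, 0.0, -1.0]))
```

A full renderer would additionally depth-test triangles of the same object against each other; the culled set here is the T_id, C_id subset that contributes to the α2 layer.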
Further, in step S3, the α output under the point cloud model includes the sub-steps of:
S321, alpha output under the point cloud model: the point cloud part is a set of discrete points sampled in space, with model D = (x_i, y_i, z_i), i = 1 to n, where n is the number of sampled points; colors are defined for the point cloud model at its (x, y, z) positions, generating a point cloud model with color information, D_c = (x_i, y_i, z_i, C), where C is the (r, g, b) value at the (x, y, z) position in the coordinate system;
S322, α channel output: output the two-dimensional image of the point cloud model D_c with color information at the specific view angle; since transparency is not considered to exist in the object model, the image can be expressed as D_c = (x_id, y_id, z_id, C_d), where x_id, y_id, z_id are the position information at that view angle and C_d is the color information; a 2D picture is obtained from the final composite output.
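Producing the 2D picture of D_c = (x_id, y_id, z_id, C_d) can be sketched as splatting each colored point into an RGBA buffer, with the nearest point winning since the model is treated as opaque. A minimal orthographic sketch; the grid size and points are hypothetical:

```python
import numpy as np

def splat_points(points, colors, width, height):
    """Orthographic splat of colored points (x, y, z) into an RGBA image.
    The point closest to the camera (smallest z) wins; alpha = 1 where a
    point lands, 0 elsewhere -- the point cloud's transparency channel."""
    rgba = np.zeros((height, width, 4))
    depth = np.full((height, width), np.inf)
    for (x, y, z), c in zip(points, colors):
        u, v = int(round(x)), int(round(y))
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z          # nearest point claims the pixel
            rgba[v, u, :3] = c
            rgba[v, u, 3] = 1.0      # opaque where occupied
    return rgba

pts = np.array([[1.0, 1.0, 2.0],    # nearer point, should win the pixel
                [1.0, 1.0, 5.0]])
cols = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
img = splat_points(pts, cols, width=4, height=4)
```

Pixels never hit by a point keep alpha 0, so the resulting layer composites cleanly over the background and the other model layers.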
Further, performing the multi-layer superposition on the alpha channel includes the step of superimposing the background, the NERF model, the Mesh model and the point cloud model on the alpha channel.
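The superposition of the background and the three model layers is the standard back-to-front "over" operation, applied in order of each layer's distance from the virtual camera. A minimal sketch; the layer contents below are hypothetical:

```python
import numpy as np

def composite(layers):
    """Alpha-composite RGBA layers, given as (distance, rgba) pairs.
    Layers are sorted far-to-near and blended with the 'over' operator:
    out.rgb = src.rgb * src.a + out.rgb * (1 - src.a)."""
    layers = sorted(layers, key=lambda p: -p[0])   # farthest first
    out = np.zeros(4)
    for _, src in layers:
        a = src[3]
        out[:3] = src[:3] * a + out[:3] * (1.0 - a)
        out[3] = a + out[3] * (1.0 - a)
    return out

background = (100.0, np.array([0.2, 0.2, 0.2, 1.0]))
nerf_layer = (50.0, np.array([1.0, 0.0, 0.0, 0.5]))
mesh_layer = (10.0, np.array([0.0, 1.0, 0.0, 1.0]))  # opaque, nearest
result = composite([mesh_layer, background, nerf_layer])
# The nearest fully opaque layer hides everything behind it.
```

Because each layer carries its distance D[i] from step S23, the sort order is known without any cross-layer geometry, which is what lets heterogeneous NERF, Mesh and point cloud objects be combined in a single pass.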
Further, in step S22, performing partial rendering on the different model objects with the camera parameters determined means that only the model parts within the camera's shooting range are rendered.
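Rendering only the parts within the camera's shooting range can be sketched as a visibility test after projection: a point is kept only if it lies in front of the camera and projects inside the image bounds. A simplified per-point test; the parameter values are hypothetical:

```python
import numpy as np

def in_shooting_range(Kc, Rc, t, X, width, height):
    """True when world point X projects inside the image and lies
    in front of the camera -- only such parts need to be rendered."""
    Xc = Rc @ X + t
    if Xc[2] <= 0:                      # behind the camera plane
        return False
    u, v, w = Kc @ Xc
    u, v = u / w, v / w
    return 0 <= u < width and 0 <= v < height

Kc = np.array([[800.0, 0.0, 320.0],
               [0.0, 800.0, 240.0],
               [0.0, 0.0, 1.0]])
Rc, t = np.eye(3), np.array([0.0, 0.0, 5.0])
visible = in_shooting_range(Kc, Rc, t, np.array([0.0, 0.0, 0.0]), 640, 480)
hidden = in_shooting_range(Kc, Rc, t, np.array([0.0, 0.0, -10.0]), 640, 480)
```

In practice this culling is done per triangle, per point or per ray rather than per whole model, but the principle is the same: geometry failing the test contributes nothing to the layer and is skipped.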
The beneficial effects of the invention include:
According to the method, the whole scene and all objects in it do not need to be modeled; only a 2D image at the viewer's viewing angle is formed, which avoids acquiring a large amount of information such as complex light and shadow, reflection and multiple viewing angles otherwise required by multi-object rendering, and greatly reduces computational rendering resources. Meanwhile, the method supports various heterogeneous objects such as Mesh modeling, voxels, point clouds and NERF deep learning, without requiring objects to be redesigned or remodeled to fit the method; it supports the heterogeneous objects already existing in the prior art and has good compatibility and usability.
The technical scheme of the embodiment of the invention is compatible with various heterogeneous objects such as Mesh modeling, voxels, point clouds and NERF deep learning; these objects can be designed, collected, reconstructed, represented and rendered. Different objects can be represented by different methods, such as surfaces, voxels, point clouds and deep learning; under the pose and illumination requirements of the unified scene, each object is rendered by its own representation method, outputting a 2D image with an alpha channel.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a diagram illustrating a unified abstraction of object hierarchy rendering in an embodiment of the present invention;
FIG. 2 is a schematic representation of a scene representation of alpha output of NERF in an embodiment of the present invention;
FIG. 3 is a schematic view rendering of alpha output of NERF in an embodiment of the present invention;
FIG. 4 is a diagram illustrating a model representation of Mesh for alpha output of Mesh in an embodiment of the present invention;
FIG. 5 is a diagram illustrating an α channel output of an α output of Mesh in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an α channel output of the α output under the point cloud model according to the embodiment of the present invention;
FIG. 7 is a schematic diagram of obtaining a 2d picture according to a final synthesized output according to an embodiment of the present invention;
FIG. 8 is a flowchart illustrating steps of a method according to an embodiment of the present invention.
Detailed Description
All features disclosed in all embodiments in this specification, or all methods or process steps implicitly disclosed, may be combined and/or expanded, or substituted, in any way, except for mutually exclusive features and/or steps.
The embodiment of the invention provides a heterogeneous object characterization method based on alpha superposition of multi-object images, which comprises three steps of viewpoint determination, distributed rendering and alpha superposition, and is shown in fig. 8.
Step 1), the camera parameters (intrinsics Kc, extrinsics Rc, time Tc), the light source information L_in, and the time t (valid for dynamic objects) are input.
Step 2), rendering.
Step 3), output i RGBA picture arrays G[m][n] (in the camera pixel coordinate system); each picture records its distance d[i] from the focal plane, and a valid-view-angle flag marks whether a "break" (the object falling outside the covered view) exists. In a free shooting area no break occurs in theory; if the view angle is limited, the presence or absence of a break must be determined.
1) Alpha output of NERF: the scene is expressed as F_Θ: (x, d) → (c, σ), where F_Θ is the mapping function of the NERF scene expression, x is the three-dimensional spatial position, d is the viewing direction, x and d are known quantities, c = (r, g, b) is the view-dependent 3D point color, and σ is the voxel density; the model input is x and d, and the output is c and σ, as shown in fig. 2.
In model rendering, the camera ray is expressed as r(t) = o + t·d, where o is the ray origin, t is the ray distance, and the near and far boundaries of t are t_n and t_f. The ray color integral is

C(r) = ∫_{t_n}^{t_f} T(t) · σ(r(t)) · c(r(t), d) dt,

where T(t) is the accumulated transmittance:

T(t) = exp(−∫_{t_n}^{t} σ(r(s)) ds).

The value range of T(t) is (0, 1). The embodiment of the present invention treats an object as completely opaque, i.e. the value of T(t_w) is 0, where s is a discrete point on the ray and t_w is the point at which the ray meets the object surface.

In view rendering, when the shooting view angle and position of the virtual camera are determined, d is determined, and the rendered image at that view angle is output.
The image rendered at the specific view angle is synthesized with the transparency channel α, and the transparency channel α1 of the object is output, as shown in fig. 3.
2) α output of Mesh: model representation of Mesh, as shown in fig. 4. A Mesh is a data structure used in computer graphics to model various irregular objects. Define M = (T_i, C_i), i = 1 to n (M is the data set of the n triangles constituting the object), T_i = (x_i1, x_i2, x_i3, y_i1, y_i2, y_i3, z_i1, z_i2, z_i3), where x_i, y_i, z_i are the spatial coordinate positions of the three vertices of each triangle; x_i1, y_i1, z_i1 are the x-, y- and z-axis coordinates of the 1st vertex of the i-th triangle; x_i2, y_i2, z_i2 are those of the 2nd vertex; x_i3, y_i3, z_i3 are those of the 3rd vertex; and C_i is the color of the triangle.
Alpha channel output, as shown in FIG. 5: with the camera view angle and orientation determined, the Mesh two-dimensional image information I_m at that view angle is captured, I_m = (T_id, C_id), where T_id and C_id are the position information and color information of the triangles visible at the current view angle. The embodiment of the invention considers the object to be opaque, so the information of occluded triangles is not considered. The two-dimensional image of the Mesh at the specific view angle is synthesized with the alpha channel to form the transparency channel α2 of the object.
3) Alpha output under the point cloud model: as shown in fig. 6 and 7, the point cloud part is a set of discrete points sampled in space, with model D = (x_i, y_i, z_i), i = 1 to n, where n is the number of sampled points. Because the raw data model generated by the point cloud records only the positions of the sampling points, colors must be defined for the point cloud model at its (x, y, z) positions, generating a point cloud model with color information, D_c = (x_i, y_i, z_i, C), where C is the (r, g, b) value at the (x, y, z) position in the coordinate system.
Alpha channel output: the two-dimensional image of the point cloud model D_c with color information is output at the specific view angle. Since transparency is not considered to exist in the object model, the image can be expressed as D_c = (x_id, y_id, z_id, C_d) (i.e. the position and color information at that view angle); a 2D picture is obtained from the final composite output.
As shown in fig. 7, the objects (1), (2) and (3) on the right represent different model types, namely the NERF model, the MESH model and the point cloud model (including but not limited to these models), and (4) represents the background; the four objects are superimposed on the α channel to form the view on the left.
Example 1
A heterogeneous object characterization method based on multi-object image alpha superposition comprises the following steps:
S1, establishing a virtual modeling environment coordinate system, calibrating and calculating camera parameters, including internal parameters Kc and external parameters Rc, according to the position of a virtual camera, and recording the light source information L_in and time T_C of the current environment;
S2, performing layered rendering on the object model in the current virtual modeling environment, and outputting corresponding rendering layers of the models determined by the camera parameters in the step S1;
and S3, carrying out multi-layer superposition on each type of object in the visual range of the virtual camera in the scene according to the distance between each layer and the virtual camera in an alpha channel, and finishing the representation of the heterogeneous object.
Example 2
On the basis of embodiment 1, in step S2, the method includes the sub-steps of:
S21, inputting the calibrated Kc and Rc, the recorded light source information L_in, and the time T_C;
S22, performing partial rendering on different model objects under the condition of camera parameter determination;
S23, under the condition that the camera parameters are determined, outputting RGBA pictures of the i objects in the virtual environment according to the current camera view angle and imaging size, each picture recording its distance D[i] from the focal plane.
Example 3
On the basis of embodiment 1, in step S2, the object model includes a NERF model, a MESH model, and a point cloud model.
Example 4
On the basis of embodiment 1, in step S3, each type of object is a completely opaque static or dynamic object.
Example 5
On the basis of embodiments 1 to 4, in step S3, performing the multi-layer superposition on the α channel includes: the alpha output under the NERF model, the alpha output under the Mesh model, and the alpha output under the point cloud model.
Example 6
On the basis of embodiment 5, further, the α output under the NERF model includes the steps of:
S301, NERF model scene expression, model input, model output, model rendering and view rendering; wherein the scene of the NERF is expressed as F_Θ: (x, d) → (c, σ), F_Θ being the mapping function of the NERF scene expression, x the three-dimensional spatial position, d the viewing direction, x and d known quantities, c = (r, g, b) the view-dependent 3D point color, and σ the voxel density; the model input is x and d, and the output is c and σ. In model rendering, the camera ray is expressed as r(t) = o + t·d, where r(t) is the camera ray expression function, o is the ray origin, t is the ray distance, and the near and far boundaries of t are t_n and t_f. The ray color integral is

C(r) = ∫_{t_n}^{t_f} T(t) · σ(r(t)) · c(r(t), d) dt;

where T(t) = exp(−∫_{t_n}^{t} σ(r(s)) ds) is the accumulated transmittance, σ(r(t)) is the voxel density along the camera ray, and c(r(t), d) is the color of the camera ray in direction d. The value range of T(t) is (0, 1); s is a discrete point on the ray. For a completely opaque object, the value of T(t_w) is 0, where t_w is the point at which the ray meets the object surface. In view rendering, when the shooting view angle and position of the virtual camera are determined, i.e. d is determined, the rendered image at that view angle is output.
S302, alpha channel output: the image rendered at the selected view angle is synthesized with the transparency channel alpha, and the transparency channel alpha1 of the object is output.
Example 7
On the basis of embodiment 5, in step S3, the α output under the Mesh model includes the sub-steps of:
S311, model expression of Mesh: define M = (T_i, C_i), where i ranges from 1 to n, n is a positive integer, and M is the data set of the n triangles forming the object; T_i is the spatial coordinate of a triangle, T_i = (x_i1, x_i2, x_i3, y_i1, y_i2, y_i3, z_i1, z_i2, z_i3), where x_i, y_i, z_i are the spatial coordinate positions of the three vertices of the i-th triangle on the x, y and z axes; x_i1, y_i1, z_i1 are the x-, y- and z-axis coordinates of the 1st vertex of the i-th triangle; x_i2, y_i2, z_i2 are those of the 2nd vertex; x_i3, y_i3, z_i3 are those of the 3rd vertex; and C_i is the color of the triangle;
S312, α channel output: with the camera view angle and orientation determined, capture the Mesh two-dimensional image information I_m at that view angle, I_m = (T_id, C_id), where T_id and C_id are the position information and color information of the triangles visible at the current view angle; since the object is considered opaque, the information of occluded triangles is left out of consideration. The two-dimensional image of the Mesh at the specific view angle is synthesized with the alpha channel to form the transparency channel α2 of the object.
Example 8
On the basis of the embodiment 5, in step S3, the α output under the point cloud model includes the sub-steps of:
S321, alpha output under the point cloud model: the point cloud part is a set of discrete points sampled in space, with model D = (x_i, y_i, z_i), i = 1 to n, where n is the number of sampled points; colors are defined for the point cloud model at its (x, y, z) positions, generating a point cloud model with color information, D_c = (x_i, y_i, z_i, C), where C is the (r, g, b) value at the (x, y, z) position in the coordinate system;
S322, α channel output: output the two-dimensional image of the point cloud model D_c with color information at the specific view angle; since transparency is not considered to exist in the object model, the image can be expressed as D_c = (x_id, y_id, z_id, C_d), where x_id, y_id, z_id are the position information at that view angle and C_d is the color information; a 2D picture is obtained from the final composite output.
Example 9
On the basis of embodiment 5, performing the multi-layer superposition on the α channel includes the step of superimposing the background, the NERF model, the Mesh model and the point cloud model on the alpha channel.
Example 10
On the basis of embodiment 2, performing partial rendering on the different model objects with the camera parameters determined means that only the model parts within the camera's shooting range are rendered.
The units described in the embodiments of the present invention may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiment; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs, which when executed by one of the electronic devices, cause the electronic device to implement the method described in the above embodiments.
The parts not involved in the present invention are the same as or can be implemented using the prior art.
The above-described embodiment is only one embodiment of the present invention, and it will be apparent to those skilled in the art that various modifications and variations can be easily made based on the application and principle of the present invention disclosed in the present application, and the present invention is not limited to the method described in the above-described embodiment of the present invention, so that the above-described embodiment is only preferred, and not restrictive.
Other embodiments than the above examples may be devised by those skilled in the art based on the foregoing disclosure, or by adapting and using knowledge or techniques of the relevant art, and features of various embodiments may be interchanged or substituted and such modifications and variations that may be made by those skilled in the art without departing from the spirit and scope of the present invention are intended to be within the scope of the following claims.
Claims (10)
1. A heterogeneous object characterization method based on multi-object image alpha superposition is characterized by comprising the following steps:
S1, establishing a virtual modeling environment coordinate system, calibrating and calculating camera parameters, including internal parameters Kc and external parameters Rc, according to the position of a virtual camera, and recording the light source information L_in and time T_C of the current environment;
S2, performing layered rendering on the object model in the current virtual modeling environment, and outputting corresponding rendering layers of the models determined by the camera parameters in the step S1;
and S3, carrying out multi-layer superposition on various types of objects within the visual range of the virtual camera in the scene in an alpha channel according to the distance between the layers and the virtual camera, and finishing the representation of the heterogeneous objects.
2. The method for characterizing heterogeneous objects based on multi-object image alpha overlay according to claim 1, comprising in step S2 the sub-steps of:
S21, inputting the calibrated Kc and Rc, the recorded light source information L_in, and the time T_C;
S22, performing partial rendering on different model objects under the condition of camera parameter determination;
S23, under the condition that the camera parameters are determined, outputting RGBA pictures of the i objects in the virtual environment according to the current camera view angle and imaging size, each picture recording its distance D[i] from the focal plane.
3. The method for characterizing heterogeneous objects based on alpha overlay of multi-object images according to claim 1, wherein in step S2, the object model comprises a NERF model, a MESH model, a point cloud model.
4. The method for characterizing heterogeneous objects based on alpha superposition of multi-object images according to claim 1, wherein in step S3, each type of object is a completely opaque static or dynamic object.
5. The method for characterizing heterogeneous objects based on alpha superposition of multi-object images according to any one of claims 1 to 4, wherein in step S3, performing the multi-layer superposition on the alpha channel comprises the steps of: the alpha output under the NERF model, the alpha output under the Mesh model, and the alpha output under the point cloud model.
6. The method for characterizing heterogeneous objects based on alpha overlay of multi-object images according to claim 5, wherein the alpha output under the NERF model comprises the steps of:
S301, NERF model scene expression, model input, model output, model rendering and view rendering; wherein the scene of the NERF is expressed as F_Θ: (x, d) → (c, σ), F_Θ being the mapping function of the NERF scene expression, x the three-dimensional spatial position, d the viewing direction, x and d known quantities, c = (r, g, b) the view-dependent 3D point color, and σ the voxel density; the model input is x and d, and the output is c and σ; in model rendering, the camera ray is expressed as r(t) = o + t·d, where r(t) is the camera ray expression function, o is the ray origin, t is the ray distance, and the near and far boundaries of t are t_n and t_f; the ray color integral is C(r) = ∫_{t_n}^{t_f} T(t) · σ(r(t)) · c(r(t), d) dt;
wherein T(t) = exp(−∫_{t_n}^{t} σ(r(s)) ds) is the accumulated transmittance, σ(r(t)) is the voxel density along the camera ray, and c(r(t), d) is the color of the camera ray in direction d; the value range of T(t) is (0, 1); s is a discrete point on the ray; for a completely opaque object the value of T(t_w) is 0, where t_w is the point of the ray on the object surface; in view rendering, when the shooting view angle and position of the virtual camera are determined, i.e. d is determined, the rendered image at that view angle is output.
s302, outputting the alpha channel: the rendered image under the selected view angle is synthesized with the transparency channel, and the object's transparency channel alpha1 is output.
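The ray-color integral of claim 6 is commonly evaluated with a discretized quadrature over samples along the ray; a minimal sketch (the sample layout and the name `volume_render` are illustrative, not taken from the patent):

```python
import numpy as np

def volume_render(sigma, color, t):
    """Discretized ray integration: returns the ray colour and its total alpha.
    sigma: (N,) densities at the samples; color: (N, 3) colours; t: (N,) sample depths."""
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))    # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                   # per-sample opacity
    # accumulated transmittance T(t): product of (1 - alpha) over preceding samples
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]
    rgb = np.sum((trans * alpha)[:, None] * color, axis=0)
    a = 1.0 - trans[-1] * (1.0 - alpha[-1])                # ray opacity; ~1 when T(t_w)=0
    return rgb, a
```

Running this per pixel yields both the rendered image and, from `a`, the transparency channel alpha1 of the object: empty space (σ = 0 everywhere) gives a = 0, while a fully opaque surface drives the transmittance to 0 and a to 1.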
7. The method for characterizing heterogeneous objects based on alpha superposition of multi-object images according to claim 5, wherein in step S3, the alpha output under the Mesh model comprises the following sub-steps:
s311, model expression of the Mesh: define M = (T_i, C_i), where i ranges from 1 to n, n is a positive integer, M is the data set of the n triangles forming the object, and T_i are the spatial coordinates of a triangle, T_i = (x_i1, x_i2, x_i3, y_i1, y_i2, y_i3, z_i1, z_i2, z_i3), wherein x_i1, y_i1, z_i1 are the spatial coordinate positions of the 1st vertex of the i-th triangle on the x-, y-, and z-axes, x_i2, y_i2, z_i2 those of the 2nd vertex, and x_i3, y_i3, z_i3 those of the 3rd vertex; C_i is the color of the triangle;
s312, α channel output: with the camera view angle and orientation determined, capture the two-dimensional Mesh image information I_m under that view, I_m = (T_id, C_id), where T_id and C_id are the position information and color information of the triangles visible under the current view angle; since the object is non-transparent, occluded triangle information is disregarded. The two-dimensional image of the Mesh under the specific view angle is synthesized with the alpha channel to form the object's transparency channel alpha2.
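For an opaque mesh, the transparency channel alpha2 reduces to a silhouette mask: a pixel is opaque wherever some projected visible triangle covers it. A sketch using the standard edge-function coverage test (the name `mesh_alpha_mask` and the pixel-space input convention are assumptions):

```python
import numpy as np

def mesh_alpha_mask(triangles, height, width):
    """Binary alpha (alpha2) for an opaque mesh: 1 where any projected triangle
    covers the pixel centre. triangles: list of (3, 2) vertex arrays in pixel coords."""
    ys, xs = np.mgrid[0:height, 0:width]
    pts = np.stack([xs + 0.5, ys + 0.5], axis=-1)   # pixel centres
    mask = np.zeros((height, width), dtype=bool)
    for tri in triangles:
        a, b, c = (np.asarray(v, dtype=float) for v in tri)
        # signed-area (edge function) of each pixel centre against one edge p->q
        def edge(p, q):
            return (pts[..., 0] - p[0]) * (q[1] - p[1]) - (pts[..., 1] - p[1]) * (q[0] - p[0])
        e0, e1, e2 = edge(a, b), edge(b, c), edge(c, a)
        # inside when all three edge functions share a sign (handles either winding)
        inside = ((e0 >= 0) & (e1 >= 0) & (e2 >= 0)) | ((e0 <= 0) & (e1 <= 0) & (e2 <= 0))
        mask |= inside
    return mask.astype(float)
```

Because occluded triangles contribute to the same silhouette as the visible ones in front of them, discarding them (as claim 7 does) leaves the mask unchanged.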
8. The method for characterizing heterogeneous objects based on alpha superposition of multi-object images according to claim 5, wherein in step S3, the alpha output under the point cloud model comprises the following sub-steps:
s321, alpha output under the point cloud model: the point cloud part consists of a number of discrete points sampled in space; the model is D = (x_i, y_i, z_i), i = 1, …, n, where n is the number of sampled points and x_i, y_i, z_i are the spatial coordinate positions of the i-th point on the x-, y-, and z-axes. Colors are defined for the point cloud model in the x, y, and z directions, generating a point cloud model with color information, D_c = (x_i, y_i, z_i, C), where C is the (r, g, b) value at the (x, y, z) position in the coordinate system;
s322, α channel output: output a two-dimensional image of the colored point cloud model D_c under a specific view angle; disregarding the transparency of the object model, the image can be expressed as D_c = (x_id, y_id, z_id, C_d), where x_id, y_id, z_id are the position information under that view angle and C_d is the color information; the final composite output yields a 2D picture.
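One common way to realize step S322 is to splat each colored point into the image, keeping the nearest point per pixel; a sketch under the assumption that x, y are already projected to pixel units and z is the view depth (`splat_points` is an illustrative name, not the patent's implementation):

```python
import numpy as np

def splat_points(points, colors, height, width):
    """Project a coloured point cloud D_c to a 2-D RGBA image (nearest point wins).
    points: (N, 3) with x, y in pixel units and z the depth under the view angle."""
    img = np.zeros((height, width, 4))
    depth = np.full((height, width), np.inf)     # z-buffer for occlusion
    for (x, y, z), c in zip(points, colors):
        u, v = int(round(x)), int(round(y))
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z
            img[v, u, :3] = c
            img[v, u, 3] = 1.0                   # opaque wherever a point lands
    return img
```

The alpha channel of the result is exactly the set of pixels a sampled point projects to, which matches the claim's assumption that transparency of the object model is disregarded.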
9. The method for characterizing heterogeneous objects based on alpha superposition of multi-object images according to claim 5, wherein the multi-layer superposition on the alpha channel comprises the step of: superposing the background, the NERF model, the Mesh model, and the point cloud model on the alpha channel.
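Superposing the background and the per-model RGBA layers is typically done back-to-front with the standard "over" operator, ordering layers by their focal-plane distance D[i]; a sketch (the `composite` name and the (rgba, dist) tuple layout are assumptions, not from the patent):

```python
import numpy as np

def composite(background, layers):
    """Back-to-front 'over' compositing of RGBA layers sorted by distance D[i].
    background: (H, W, 3) RGB; layers: list of ((H, W, 4) rgba, float dist) tuples."""
    out = background.astype(float).copy()
    for rgba, _ in sorted(layers, key=lambda l: -l[1]):   # farthest layer first
        a = rgba[..., 3:4]
        out = rgba[..., :3] * a + out * (1.0 - a)         # standard over operator
    return out
```

Because each layer was rendered independently (NERF, Mesh, or point cloud), this per-pixel blend is the only place where the heterogeneous representations interact.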
10. The method for characterizing heterogeneous objects based on alpha superposition of multi-object images according to claim 2, wherein in step S22 the partial rendering of the different model objects with the camera parameters determined means that only the model parts within the camera's shooting range are rendered.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211383316.4A CN115439616B (en) | 2022-11-07 | 2022-11-07 | Heterogeneous object characterization method based on multi-object image alpha superposition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115439616A CN115439616A (en) | 2022-12-06 |
CN115439616B true CN115439616B (en) | 2023-02-14 |
Family
ID=84252680
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211383316.4A Active CN115439616B (en) | 2022-11-07 | 2022-11-07 | Heterogeneous object characterization method based on multi-object image alpha superposition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115439616B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116151777B (en) * | 2023-04-20 | 2023-07-14 | 深圳奥雅设计股份有限公司 | Intelligent automatic rendering method and system for landscape garden plan |
CN117270721B (en) * | 2023-11-21 | 2024-02-13 | 虚拟现实(深圳)智能科技有限公司 | Digital image rendering method and device based on multi-user interaction XR scene |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6064393A (en) * | 1995-08-04 | 2000-05-16 | Microsoft Corporation | Method for measuring the fidelity of warped image layer approximations in a real-time graphics rendering pipeline |
WO2020102978A1 (en) * | 2018-11-20 | 2020-05-28 | 华为技术有限公司 | Image processing method and electronic device |
WO2022018454A1 (en) * | 2020-07-24 | 2022-01-27 | Sony Interactive Entertainment Europe Limited | Method and system for generating a target image from plural multi-plane images |
CN114529650A (en) * | 2022-02-24 | 2022-05-24 | 北京鲸甲科技有限公司 | Rendering method and device of game scene |
CN114549719A (en) * | 2022-02-23 | 2022-05-27 | 北京大甜绵白糖科技有限公司 | Rendering method, rendering device, computer equipment and storage medium |
WO2022227996A1 (en) * | 2021-04-28 | 2022-11-03 | 北京字跳网络技术有限公司 | Image processing method and apparatus, electronic device, and readable storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA3090301A1 (en) * | 2018-03-08 | 2019-09-12 | Simile Inc. | Methods and systems for producing content in multiple reality environments |
Non-Patent Citations (1)
Title |
---|
Kinect-based color 3D reconstruction; Lei Baoquan et al.; Cable TV Technology (《有线电视技术》); 2019-12-15, No. 12; pp. 41-45 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11727587B2 (en) | Method and system for scene image modification | |
CN115439616B (en) | Heterogeneous object characterization method based on multi-object image alpha superposition | |
US5694533A (en) | 3-Dimensional model composed against textured midground image and perspective enhancing hemispherically mapped backdrop image for visual realism | |
US9282321B2 (en) | 3D model multi-reviewer system | |
US20180293774A1 (en) | Three dimensional acquisition and rendering | |
CN111968215B (en) | Volume light rendering method and device, electronic equipment and storage medium | |
JP6201476B2 (en) | Free viewpoint image capturing apparatus and method | |
JP2006053694A (en) | Space simulator, space simulation method, space simulation program and recording medium | |
JP2016537901A (en) | Light field processing method | |
JP6683307B2 (en) | Optimal spherical image acquisition method using multiple cameras | |
TWI810818B (en) | A computer-implemented method and system of providing a three-dimensional model and related storage medium | |
Unger et al. | Spatially varying image based lighting using HDR-video | |
CN115861508A (en) | Image rendering method, device, equipment, storage medium and product | |
CN113132708B (en) | Method and apparatus for acquiring three-dimensional scene image using fisheye camera, device and medium | |
WO2023004559A1 (en) | Editable free-viewpoint video using a layered neural representation | |
Chang et al. | A review on image-based rendering | |
Evers‐Senne et al. | Image based interactive rendering with view dependent geometry | |
Waschbüsch et al. | 3d video billboard clouds | |
JP4710081B2 (en) | Image creating system and image creating method | |
JP3387856B2 (en) | Image processing method, image processing device, and storage medium | |
KR100490885B1 (en) | Image-based rendering method using orthogonal cross cylinder | |
JP3392078B2 (en) | Image processing method, image processing device, and storage medium | |
CN113139992A (en) | Multi-resolution voxel gridding | |
CN115439587B (en) | 2.5D rendering method based on object visual range | |
Patel | Survey on 3D Interactive Walkthrough |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||