CN110060335B - Virtual-real fusion method for mirror surface object and transparent object in scene - Google Patents
- Publication number
- CN110060335B (application CN201910332095.XA)
- Authority
- CN
- China
- Prior art keywords
- light source
- point
- gray
- transparent object
- intensity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
Abstract
The invention discloses a virtual-real fusion method for specular and transparent objects in a scene, belonging to the technical field of computer virtual reality. During initial light source estimation, reflection between objects is taken into account; the material parameters of the specular object and the transparent object are then estimated, and differential rendering is performed with the estimated illumination and model parameters to obtain a virtual-real fusion effect image. By estimating the BRDF model parameters of the specular object and the refractive index and color attenuation coefficient of the transparent object, the invention achieves a more realistic virtual-real fusion effect and solves the problem of illumination consistency in virtual-real fusion for specular and transparent objects in a scene. In addition, when estimating the light source position the invention starts from optical principles and considers reflection between objects, yielding a more accurate light source position.
Description
Technical Field
The invention belongs to the technical field of computer virtual reality, and particularly relates to a method for estimating the position and intensity of a light source, the reflection coefficient of a mirror surface object, the refractive index of a transparent object and the color attenuation coefficient in a scene.
Background
Augmented reality technology combines a computer-generated virtual object with the actual scene and presents it to the user; to achieve a realistic virtual-real fusion effect, the virtual object must exhibit an illumination effect consistent with the actual scene. Illumination consistency mainly concerns the color and brightness variations of the virtual object's surface patches caused by the real light sources and real objects in the scene.
Existing methods for solving the illumination-consistency problem in virtual-real fusion fall into three categories: methods using auxiliary markers, methods using auxiliary equipment, and methods requiring neither markers nor auxiliary equipment. Auxiliary markers are shadows and manually placed markers; the illumination conditions in the real scene are obtained through the information they provide. Auxiliary equipment includes special capture devices such as depth cameras, light field cameras and fisheye cameras, which can provide depth, light field and full-view-angle information and offer new solutions for illumination estimation. Methods requiring neither markers nor auxiliary equipment acquire illumination information in the scene by image analysis.
Existing illumination estimation methods consider only the changes that the real light sources in the scene, or the shadows of real objects, cause on the surface patches of the virtual object; they do not consider the influence on the virtual object of the caustics produced by specular and transparent objects in the actual scene.
Disclosure of Invention
Aiming at the defects of the existing methods, the invention provides a virtual-real fusion method suited to illumination consistency for specular and transparent objects in a scene, addressing the influence of the caustic phenomena of specular and transparent objects in the actual scene on the virtual object. The technical scheme adopted is as follows:
1.1, shooting a scene with a mirror surface object and a transparent object by using an RGB-D camera to obtain depth images and color images with different visual angles; the method for reconstructing the scene in three dimensions and obtaining the three-dimensional model positions of the mirror surface object and the transparent object comprises the following steps:
1.1.1 Perform three-dimensional reconstruction on the depth images of different viewing angles using the KinectFusion algorithm to obtain a Truncated Signed Distance Function (TSDF) model and the camera poses;
1.1.2 identifying approximate areas of the mirror surface object and the transparent object by using depth images of different visual angles, taking the areas as initial positions, respectively segmenting the mirror surface object and the transparent object from a color image by combining an image segmentation algorithm, and performing three-dimensional reconstruction on the mirror surface object and the transparent object by adopting a visual shell method;
1.1.3 fusing the TSDF model with a mirror surface object and a transparent object model;
1.2 initial light source position and intensity estimation:
The initial light source estimation does not consider the specular and transparent object models; the materials of the remaining models are assumed to be Lambertian surfaces with a reflection coefficient of 1. k point light sources are uniformly distributed on a hemisphere centered on the scene objects, whose diameter is twice that of the smallest hemisphere enclosing the scene. Each point light source emits q photons in different directions into the scene. The specific estimation method is:
1.2.1 Calculate the energy of each photon emitted from the jth point light source:

ΔΦ(ω_p) = I_j / q

wherein: ΔΦ(ω_p) is the energy carried by each photon, and I_j is the intensity value of the jth point light source;
1.2.2 Track the photons emitted by the k point light sources respectively and store the collision point coordinates, incident photon energies and incident photon directions into k photon maps;
1.2.3 Calculate the reflected radiance L_r(x) at point x at any viewing angle with the photon-map density estimate:

L_r(x) ≈ (1 / (π·d(x)²)) · Σ_{p=1}^{n} f_r(x, ω_p, ω)·ΔΦ(ω_p)

wherein: n is the number of photons collected near point x using the photon maps obtained in step 1.2.2; d(x) is the distance from point x to the farthest of the n collected photons; f_r is the surface BRDF, a constant under the Lambertian assumption of step 1.2;
1.2.4 Substitute the per-photon energy of the jth point light source obtained in step 1.2.1 into the reflected-radiance formula of point x in step 1.2.3 to obtain the reflected radiance L_rj(x) of the jth point light source at point x:

L_rj(x) ≈ (1 / (π·d_j(x)²)) · Σ_{p=1}^{n} f_r(x, ω_p, ω)·(I_j / q)

wherein: d_j(x) is the distance, in the jth photon map, from point x to the farthest of the n photons collected under the jth point light source;
1.2.5, converting the color images with different visual angles collected in the step 1.1 into gray level images;
1.2.6 Compare the gray values of the grayscale images with the reflected radiance L_rj(x) of step 1.2.4 to form the objective function:

E_i(x) = ( S_i(x) − Σ_{j=1}^{k} L_rj(x) )², summed over the m grayscale images

wherein: m is the number of grayscale images participating in the calculation; S_i(x) is the pixel gray value of the ith grayscale image at point x; d_ji(x) is the distance, in the ith grayscale image under the jth point light source, from point x to the farthest of the n collected photons; k is the number of point light sources on the hemisphere. Since each L_rj(x) is linear in the unknown intensity I_j, the objective function is solved with a non-negative linear least squares method so that E_i(x) is minimized, yielding I_1, I_2, … I_k;
1.2.7 Optimization of the illuminant estimates I_1, I_2, … I_k, comprising the following steps:
1.2.7.1 Among I_1, I_2, … I_k, select the light source L_1 with the maximum intensity value; add the intensities of the adjacent light sources with non-zero intensity to the intensity value a_1 of light source L_1; add L_1 and a_1 to the illumination estimation result set;
1.2.7.2 If the intensity values of the remaining light sources are all 0, finish the optimization and output the illumination estimation result set;
1.2.7.3 If among the remaining light sources there is a light source whose intensity value is not 0, select from them the light source L_m with the maximum intensity value, and add the intensities of the adjacent non-zero-intensity light sources to the intensity value a_m of light source L_m;
1.2.7.4 If a_m < 0.5·a_1, finish the optimization and output the illumination estimation result set;
1.2.7.5 If a_m ≥ 0.5·a_1, add L_m and a_m to the illumination estimation result set and return to step 1.2.7.2;
1.3 Distinguish the specular object model and the transparent object model obtained in step 1.1.2, comprising the following steps:
1.3.1 Select a model generated in step 1.1.2. First assume the model is a Lambertian surface with reflection coefficient 1, render it at different viewing angles using the light source positions and intensities in the estimated illumination result set, and obtain the gray value sums of the object at the different viewing angles: b_1, b_2, … b_f, where f is the number of viewing angles participating in the calculation. Then assume the model is a transparent object with refractive index 1.2, render it at the same f viewing angles using the light source positions and intensities in the estimated illumination result set, and obtain the gray value sums: t_1, t_2, … t_f. Let c_i be the sum of the gray values of the corresponding pixels of the object, at the ith viewing angle, in the grayscale image generated in step 1.2.5. If the b_i are closer to the c_i than the t_i are under the comparison criterion, the object is a specular object; if the t_i are closer, the object is a transparent object;
1.4 Light source position optimization and isotropic Ward bidirectional reflectance distribution function (Ward BRDF) parameter estimation for the specular object, comprising the following steps:
1.4.1 sampling the estimated area near the position of each light source on a hemispherical surface, wherein the number of sampling points near each light source is g, each sampling point is used as a sampling point light source, and the light source intensity values of the sampling points are corresponding light source intensity values in the illumination estimation result set in the step 1.2.7;
1.4.2 The reflected radiance L*(x) of point x on the specular object, under each sampling-point light source corresponding to each light source in the illumination estimation result set, is:

L*(x) = Σ_{d=1}^{s} I_d · f(ρ_d, ρ_s, σ) · cos θ_d

wherein: s is the number of light sources in the illumination estimation result set; I_d is the light source intensity value; i is the vector from the point light source toward point x; θ_d is the angle between the vector from the dth point light source toward point x and the normal of the surface at point x; f(ρ_d, ρ_s, σ) is the isotropic Ward BRDF model, whose expression is:

f(ρ_d, ρ_s, σ) = ρ_d/π + ρ_s · exp(−tan²θ_h / σ²) / (4πσ²·sqrt(cos θ_i · cos θ_o))

wherein: o is the viewing-direction vector; h is the half-angle vector between vectors i and o, h = (i + o)/|i + o|; θ_i, θ_o and θ_h are the angles between the normal of the surface at point x and, respectively, the incident vector i, the viewing-direction vector o and the half-angle vector h; ρ_d is the diffuse reflectance; ρ_s is the specular reflectance; σ is the roughness parameter. The optimization problem

min e = ‖M − L‖²

is solved using branch-and-bound and second-order cone programming to obtain the ρ_d, ρ_s, σ and e corresponding to the optimal solution; wherein: M = [M_1 M_2 … M_N]^T is the column vector composed of the pixel values of the specular object in the grayscale image obtained in step 1.2.5, and L is the column vector composed of the reflected radiances L*(x) at the different points of the specular object;
1.4.3 For the s light sources in the illumination estimation result set and the g sampling-point light sources near each light source, estimate the Ward BRDF model parameters of the specular object by the method of step 1.4.2, obtaining (g+1)^s groups of ρ_d, ρ_s, σ and e values; the light source positions and the ρ_d, ρ_s and σ corresponding to the minimum e value are the optimized light source positions and the estimated Ward BRDF model parameters of the specular object;
1.5 refractive index and color attenuation coefficient estimation of transparent objects, comprising the steps of:
1.5.1 Render the transparent object by photon mapping, using the light source positions and intensities optimized in step 1.4.3 and changing only the refractive index of the transparent object in the scene. The refractive index is varied from 1.2 to 2 in steps of 0.01, the minimum refractive-index change for which the human eye can recognize a change in the caustic effect of a transparent object, and the sums z_1, z_2, … z_80 of the corresponding scene gray values under the different refractive indexes are calculated. The estimated refractive index is given by:

argmin_i |z_i − μ|, s.t. i = 1, 2, … 80

wherein: μ is the sum of the corresponding pixel values of the grayscale image obtained in step 1.2.5; the refractive index corresponding to the calculated value of i is the refractive index of the transparent object;
1.5.2 The color attenuation coefficients σ_r, σ_g and σ_b of the transparent object are calculated, per channel c ∈ {r, g, b}, as:

σ_c = (1/h) · Σ_{i=1}^{h} ln(c_i⁰ / c_i) / d_i

wherein: σ_r, σ_g and σ_b are the attenuation coefficients of the red, green and blue channels respectively; h is the total number of pixels participating in the calculation; d_i is the transmission distance of the light; r_i⁰, g_i⁰ and b_i⁰ are the red, green and blue channel gray values obtained by rendering the transparent object with the attenuation coefficient set to 0, using the refractive index estimated in step 1.5.1 and the light source positions and intensities estimated in step 1.4.3; r_i, g_i and b_i are the gray values of the red, green and blue channels of the captured color image respectively;
and 1.6, carrying out differential rendering by using the estimated illumination result and the model parameter to obtain a virtual-real fusion effect graph.
Characteristics and beneficial effects of the invention
Compared with existing algorithms, during initial light source estimation the method not only considers the direct influence of the light sources on objects but also simulates the reflection of light between objects, obtaining a more accurate initial illumination estimate. By estimating the Ward BRDF model parameters of the specular object and the refractive index and color attenuation coefficient of the transparent object, the influence of the caustic spots produced by specular and transparent objects in the actual scene on the virtual object is well handled.
Drawings
FIG. 1 is a flow chart of a method for integrating illumination consistency between a mirror object and a transparent object in a scene
FIG. 2 is a diagram of the effect of the virtual-real fusion experiment of the mirror surface object existing in the scene
FIG. 3 is a diagram illustrating the effect of the virtual-real fusion experiment of the transparent objects in the scene
In FIG. 2 and FIG. 3: (a) shows the actual scene image; (b) shows the effect image after virtual-real fusion by the method of the invention
Detailed Description
The core content of the invention is as follows: reflection between objects is considered in the initial light source estimation, yielding a more accurate estimate. The Ward BRDF model parameters of the specular object and the refractive index and color attenuation coefficient of the transparent object are estimated, and the light source positions are optimized at the same time. Differential rendering with the estimated light sources and model parameters then produces a more realistic virtual-real fusion effect.
For the purpose of making the objects, technical solutions and advantages of the present invention clearer, the following detailed description is made with reference to the accompanying drawings and examples:
1.1, shooting a scene with a mirror surface object and a transparent object by using an RGB-D camera to obtain depth images and color images with different visual angles; the method for three-dimensionally reconstructing a scene and obtaining the three-dimensional model positions of a mirror surface object and a transparent object comprises the following steps:
1.1.1 Perform three-dimensional reconstruction on the depth images of different viewing angles using the KinectFusion algorithm to obtain a Truncated Signed Distance Function (TSDF) model and the camera poses;
1.1.2 identifying approximate areas of the mirror surface object and the transparent object by using depth images of different visual angles, taking the areas as initial positions, respectively segmenting the mirror surface object and the transparent object from a color image by combining an image segmentation algorithm, and performing three-dimensional reconstruction on the mirror surface object and the transparent object by adopting a visual shell method;
1.1.3 fusing the TSDF model with a mirror surface object and a transparent object model;
1.2 initial light source position and intensity estimation:
The initial light source estimation does not consider the specular and transparent object models; the materials of the remaining models are assumed to be Lambertian surfaces with a reflection coefficient of 1. k point light sources are uniformly distributed on a hemisphere centered on the scene objects, whose diameter is twice that of the smallest hemisphere enclosing the scene. Each point light source emits q photons in different directions into the scene. The specific estimation method is:
1.2.1 Calculate the energy of each photon emitted from the jth point light source:

ΔΦ(ω_p) = I_j / q

wherein: ΔΦ(ω_p) is the energy carried by each photon, and I_j is the intensity value of the jth point light source;
1.2.2 Track the photons emitted by the k point light sources respectively and store the collision point coordinates, incident photon energies and incident photon directions into k photon maps;
1.2.3 Calculate the reflected radiance L_r(x) at point x at any viewing angle with the photon-map density estimate:

L_r(x) ≈ (1 / (π·d(x)²)) · Σ_{p=1}^{n} f_r(x, ω_p, ω)·ΔΦ(ω_p)

wherein: n is the number of photons collected near point x using the photon maps obtained in step 1.2.2; d(x) is the distance from point x to the farthest of the n collected photons; f_r is the surface BRDF, a constant under the Lambertian assumption of step 1.2;
1.2.4 Substitute the per-photon energy of the jth point light source obtained in step 1.2.1 into the reflected-radiance formula of point x in step 1.2.3 to obtain the reflected radiance L_rj(x) of the jth point light source at point x:

L_rj(x) ≈ (1 / (π·d_j(x)²)) · Σ_{p=1}^{n} f_r(x, ω_p, ω)·(I_j / q)

wherein: d_j(x) is the distance, in the jth photon map, from point x to the farthest of the n photons collected under the jth point light source;
1.2.5, converting the color images with different visual angles collected in the step 1.1 into gray level images;
1.2.6 Compare the gray values of the grayscale images with the reflected radiance L_rj(x) of step 1.2.4 to form the objective function:

E_i(x) = ( S_i(x) − Σ_{j=1}^{k} L_rj(x) )², summed over the m grayscale images

wherein: m is the number of grayscale images participating in the calculation; S_i(x) is the pixel gray value of the ith grayscale image at point x; d_ji(x) is the distance, in the ith grayscale image under the jth point light source, from point x to the farthest of the n collected photons; k is the number of point light sources on the hemisphere. Since each L_rj(x) is linear in the unknown intensity I_j, the objective function is solved with a non-negative linear least squares method so that E_i(x) is minimized, yielding I_1, I_2, … I_k;
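Step 1.2.6 reduces to a non-negative linear least squares problem in the source intensities. The sketch below assumes the per-source radiance contributions have already been assembled into a matrix A (rows: observed pixels across the m images, columns: the k light sources) and the observed gray values into a vector b; a simple projected-gradient iteration stands in for a production NNLS solver.

```python
import numpy as np

def nnls_projected_gradient(A, b, iters=5000):
    """Minimize ||A x - b||^2 subject to x >= 0 (cf. step 1.2.6)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # step below 2/L guarantees descent
    for _ in range(iters):
        # gradient step on ||Ax - b||^2, then project onto x >= 0
        x = np.maximum(0.0, x - step * A.T @ (A @ x - b))
    return x
```

An active-set NNLS routine would converge in far fewer iterations; the projected gradient is chosen here only for brevity.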
1.2.7 Optimization of the illuminant estimates I_1, I_2, … I_k, comprising the following steps:
1.2.7.1 Among I_1, I_2, … I_k, select the light source L_1 with the maximum intensity value; add the intensities of the adjacent light sources with non-zero intensity to the intensity value a_1 of light source L_1; add L_1 and a_1 to the illumination estimation result set;
1.2.7.2 If the intensity values of the remaining light sources are all 0, finish the optimization and output the illumination estimation result set;
1.2.7.3 If among the remaining light sources there is a light source whose intensity value is not 0, select from them the light source L_m with the maximum intensity value, and add the intensities of the adjacent non-zero-intensity light sources to the intensity value a_m of light source L_m;
1.2.7.4 If a_m < 0.5·a_1, finish the optimization and output the illumination estimation result set;
1.2.7.5 If a_m ≥ 0.5·a_1, add L_m and a_m to the illumination estimation result set and return to step 1.2.7.2;
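The greedy clustering of step 1.2.7 can be sketched as follows. Two assumptions are made for illustration: adjacency is abstracted into a caller-supplied test (on the hemisphere it would be an angular-distance check), and the overlapping "add to the result set" wording of the original steps is resolved by recording each cluster exactly once.

```python
def merge_light_sources(intensities, adjacent):
    """Greedy clustering of point-light intensities (steps 1.2.7.1-1.2.7.5).

    `adjacent(i, j)` says whether sources i and j are neighbours on the hemisphere.
    Returns a list of (source_index, merged_intensity) pairs.
    """
    vals = list(intensities)
    result = []
    a1 = None                                 # intensity of the first (strongest) cluster
    while True:
        m = max(range(len(vals)), key=lambda i: vals[i])
        if vals[m] == 0:                      # step 1.2.7.2: nothing left to cluster
            break
        am = vals[m]
        vals[m] = 0
        for j in range(len(vals)):            # absorb adjacent non-zero sources
            if vals[j] > 0 and adjacent(m, j):
                am += vals[j]
                vals[j] = 0
        if a1 is None:                        # step 1.2.7.1: keep the first cluster
            a1 = am
            result.append((m, am))
        elif am >= 0.5 * a1:                  # step 1.2.7.5: strong enough, keep going
            result.append((m, am))
        else:                                 # step 1.2.7.4: too weak, stop
            break
    return result
```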
1.3 Distinguish the specular object model and the transparent object model obtained in step 1.1.2, comprising the following steps:
1.3.1 Select a model generated in step 1.1.2. First assume the model is a Lambertian surface with reflection coefficient 1, render it at different viewing angles using the light source positions and intensities in the estimated illumination result set, and obtain the gray value sums of the object at the different viewing angles: b_1, b_2, … b_f, where f is the number of viewing angles participating in the calculation. Then assume the model is a transparent object with refractive index 1.2, render it at the same f viewing angles using the light source positions and intensities in the estimated illumination result set, and obtain the gray value sums: t_1, t_2, … t_f. Let c_i be the sum of the gray values of the corresponding pixels of the object, at the ith viewing angle, in the grayscale image generated in step 1.2.5. If the b_i are closer to the c_i than the t_i are under the comparison criterion, the object is a specular object; if the t_i are closer, the object is a transparent object;
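The comparison of step 1.3.1 can be sketched as follows. The patent's comparison formula is not reproduced in this text, so the sum of absolute differences used here is an assumption; any measure of which rendering lies closer to the captured gray-value sums would serve the same role.

```python
def classify_object(b, t, c):
    """Step 1.3.1: label a model 'specular' if the Lambertian rendering (b)
    matches the captured per-view gray sums (c) better than the transparent
    rendering (t), and 'transparent' otherwise."""
    err_lambert = sum(abs(bi - ci) for bi, ci in zip(b, c))
    err_transparent = sum(abs(ti - ci) for ti, ci in zip(t, c))
    return "specular" if err_lambert <= err_transparent else "transparent"
```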
1.4 Light source position optimization and isotropic Ward bidirectional reflectance distribution function (Ward BRDF) parameter estimation for the specular object, comprising the following steps:
1.4.1 sampling the estimated area near the position of each light source on a hemispherical surface, wherein the number of sampling points near each light source is g, each sampling point is used as a sampling point light source, and the light source intensity values of the sampling points are corresponding light source intensity values in the illumination estimation result set in the step 1.2.7;
1.4.2 The reflected radiance L*(x) of point x on the specular object, under each sampling-point light source corresponding to each light source in the illumination estimation result set, is:

L*(x) = Σ_{d=1}^{s} I_d · f(ρ_d, ρ_s, σ) · cos θ_d

wherein: s is the number of light sources in the illumination estimation result set; I_d is the light source intensity value; i is the vector from the point light source toward point x; θ_d is the angle between the vector from the dth point light source toward point x and the normal of the surface at point x; f(ρ_d, ρ_s, σ) is the isotropic Ward BRDF model, whose expression is:

f(ρ_d, ρ_s, σ) = ρ_d/π + ρ_s · exp(−tan²θ_h / σ²) / (4πσ²·sqrt(cos θ_i · cos θ_o))

wherein: o is the viewing-direction vector; h is the half-angle vector between vectors i and o, h = (i + o)/|i + o|; θ_i, θ_o and θ_h are the angles between the normal of the surface at point x and, respectively, the incident vector i, the viewing-direction vector o and the half-angle vector h; ρ_d is the diffuse reflectance; ρ_s is the specular reflectance; σ is the roughness parameter. The optimization problem

min e = ‖M − L‖²

is solved using branch-and-bound and second-order cone programming to obtain the ρ_d, ρ_s, σ and e corresponding to the optimal solution; wherein: M = [M_1 M_2 … M_N]^T is the column vector composed of the pixel values of the specular object in the grayscale image obtained in step 1.2.5, and L is the column vector composed of the reflected radiances L*(x) at the different points of the specular object;
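The BRDF of step 1.4.2 can be evaluated as below. The formula used is the standard published isotropic Ward model; since the patent's formula image is not reproduced in this text, its exact agreement with the patent's expression is an assumption.

```python
import math

def ward_brdf(rho_d, rho_s, sigma, theta_i, theta_o, theta_h):
    """Isotropic Ward BRDF: a diffuse lobe plus a Gaussian-like specular lobe.

    theta_i, theta_o, theta_h are the angles (radians) between the surface
    normal and the incident, viewing and half-angle vectors respectively.
    """
    diffuse = rho_d / math.pi
    specular = (rho_s * math.exp(-math.tan(theta_h) ** 2 / sigma ** 2)
                / (4.0 * math.pi * sigma ** 2
                   * math.sqrt(math.cos(theta_i) * math.cos(theta_o))))
    return diffuse + specular
```

With ρ_s = 0 the model degenerates to the Lambertian term ρ_d/π, which is a quick sanity check on any implementation.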
1.4.3 For the s light sources in the illumination estimation result set and the g sampling-point light sources near each light source, estimate the Ward BRDF model parameters of the specular object by the method of step 1.4.2, obtaining (g+1)^s groups of ρ_d, ρ_s, σ and e values; the light source positions and the ρ_d, ρ_s and σ corresponding to the minimum e value are the optimized light source positions and the estimated Ward BRDF model parameters of the specular object;
1.5 refractive index and color attenuation coefficient estimation of transparent objects, comprising the steps of:
1.5.1 Render the transparent object by photon mapping, using the light source positions and intensities optimized in step 1.4.3 and changing only the refractive index of the transparent object in the scene. The refractive index is varied from 1.2 to 2 in steps of 0.01, the minimum refractive-index change for which the human eye can recognize a change in the caustic effect of a transparent object, and the sums z_1, z_2, … z_80 of the corresponding scene gray values under the different refractive indexes are calculated. The estimated refractive index is given by:

argmin_i |z_i − μ|, s.t. i = 1, 2, … 80

wherein: μ is the sum of the corresponding pixel values of the grayscale image obtained in step 1.2.5; the refractive index corresponding to the calculated value of i is the refractive index of the transparent object;
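The grid search of step 1.5.1 can be sketched as follows. The renderer is abstracted into a callback, which is an assumption of this sketch; note also that sweeping 1.2 to 2.0 in steps of 0.01 actually yields 81 candidates even though the source indexes the sums z_1 … z_80, so the endpoint handling here is likewise an assumption.

```python
def estimate_refractive_index(render_gray_sum, mu):
    """Step 1.5.1: try refractive indices 1.2, 1.21, ..., 2.0 and keep the one
    whose rendered scene gray-value sum is closest to the captured sum mu.

    `render_gray_sum(eta)` renders the scene with refractive index eta and
    returns the sum of the resulting gray values.
    """
    candidates = [round(1.2 + 0.01 * i, 2) for i in range(81)]
    return min(candidates, key=lambda eta: abs(render_gray_sum(eta) - mu))
```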
1.5.2 The color attenuation coefficients σ_r, σ_g and σ_b of the transparent object are calculated, per channel c ∈ {r, g, b}, as:

σ_c = (1/h) · Σ_{i=1}^{h} ln(c_i⁰ / c_i) / d_i

wherein: σ_r, σ_g and σ_b are the attenuation coefficients of the red, green and blue channels respectively; h is the total number of pixels participating in the calculation; d_i is the transmission distance of the light; r_i⁰, g_i⁰ and b_i⁰ are the red, green and blue channel gray values obtained by rendering the transparent object with the attenuation coefficient set to 0, using the refractive index estimated in step 1.5.1 and the light source positions and intensities estimated in step 1.4.3; r_i, g_i and b_i are the gray values of the red, green and blue channels of the captured color image respectively;
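The attenuation fit of step 1.5.2 can be sketched as follows, assuming a Beer–Lambert falloff c_i = c_i⁰·exp(−σ·d_i) per channel. Since the patent's formula image is not reproduced in this text, the averaged log-ratio estimator below is an assumption consistent with that model, not the patent's exact expression.

```python
import math

def attenuation_coefficient(c0, c, d):
    """Estimate a per-channel attenuation coefficient from h pixel samples.

    c0[i] : channel gray value rendered with zero attenuation
    c[i]  : channel gray value in the captured color image
    d[i]  : transmission distance of the light for pixel i
    """
    h = len(c0)
    # invert c = c0 * exp(-sigma * d) per pixel and average the estimates
    return sum(math.log(c0i / ci) / di for c0i, ci, di in zip(c0, c, d)) / h
```

Run once per channel (red, green, blue) to obtain σ_r, σ_g and σ_b.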
and 1.6, carrying out differential rendering by using the estimated illumination result and the model parameter to obtain a virtual-real fusion effect graph.
The feasibility of the virtual-real fusion method suitable for the illumination consistency of the mirror surface object and the transparent object in the scene is verified by specific tests. The initial light source estimation result of the method is compared with an illumination estimation algorithm which is proposed by Chen and only considers the influence of the light source on the object, and a virtual-real fusion effect diagram (a test sample is shot by an RGB-D camera) of the mirror surface object and the transparent object in the scene is shown.
1. The working conditions are as follows:
the experimental platform adopts Intel Core i 74.2 GHz CPU @4.20GHz 4.20GHz, the memory is 16GB, a PC running Windows 7 is adopted, and the programming languages are MATLAB language and C + + language.
2. Experimental content and result analysis:
table 1 shows the comparison between the initial light source estimation algorithm of the present invention and the illumination estimation method in which only the influence of light sources on objects is considered, where the error angle is the size of the included angle between the light source angle in the actual scene and the estimated light source angle, and the unit is degree, and it can be seen from table 1 that the error angle of the present invention is lower by 8.2 ° ± 7.3 ° than the error angle in the reference method.
As shown in fig. 2, fig. 2(a) shows the objects in the real scene; the box in the middle of the desktop is a specular reflection object, and the desktop shows its caustic spot effect. FIG. 2(b) shows the virtual-real fusion effect using the method of the invention, where the arrows indicate virtual objects. The influence of the specular object of the real scene on the virtual object can be seen from the light spots on the virtual object in fig. 2(b).
As shown in fig. 3, fig. 3(a) shows the objects in the real scene, and the caustic spot effect of the transparent object appears on the desktop. FIG. 3(b) shows the virtual-real fusion effect using the method of the invention, where the arrows indicate virtual objects. The influence of the transparent object of the real scene on the virtual object can be seen from the light spots on the virtual object in fig. 3(b).
TABLE 1 Error angle of illumination estimation results (unit: degree)
| | Method of the invention | Reference method |
| --- | --- | --- |
| Error angle | 11.4° ± 2.7° | 19.6° ± 10° |
The experimental results show that the method obtains a more realistic virtual-real fusion effect by estimating the parameters of the Ward BRDF model of the mirror surface object, the refractive index and the color attenuation coefficient of the transparent object, and solves the problem of the consistency of the virtual-real fusion illumination of the mirror surface object and the transparent object in the scene. Meanwhile, the invention starts from the optical principle when estimating the light source position, considers the reflection condition between objects and obtains more accurate light source position, which is also superior to other illumination estimation methods.
Claims (1)
1. A virtual-real fusion method for a mirror surface object and a transparent object in a scene is characterized by comprising the following steps:
1.1 use an RGB-D camera to capture a scene containing a specular object and a transparent object, obtaining depth images and color images from different viewing angles; reconstruct the scene in three dimensions and obtain the three-dimensional model positions of the specular object and the transparent object by the following steps:
1.1.1 perform three-dimensional reconstruction on the depth images from different viewing angles using the KinectFusion algorithm, obtaining a Truncated Signed Distance Function (TSDF) model and the camera poses;
1.1.2 identify the approximate regions of the specular object and the transparent object using the depth images from different viewing angles; taking these regions as initial positions, segment the specular object and the transparent object from the color images using an image segmentation algorithm, and reconstruct the specular object and the transparent object in three dimensions using the visual hull method;
1.1.3 fuse the TSDF model with the specular object and transparent object models;
1.2 initial light source position and intensity estimation:
the initial light source estimation does not consider the specular and transparent object models; the materials of the remaining models are assumed to be Lambertian surfaces with a reflection coefficient of 1. k point light sources are uniformly distributed on a hemisphere centered on the scene objects, whose diameter is twice that of the smallest hemisphere enclosing the scene; each point light source emits q photons in different directions into the scene. The specific estimation method comprises the following steps:
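The hemispherical light placement described above can be sketched as follows; the Fibonacci spiral used here is only one common way to obtain a near-uniform distribution, since the claim does not fix a particular sampling scheme:

```python
import math

def hemisphere_lights(k, radius):
    """Place k point lights quasi-uniformly on the upper hemisphere of the
    given radius, using a Fibonacci spiral (an assumed sampling scheme)."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle
    lights = []
    for i in range(k):
        z = (i + 0.5) / k                      # heights in (0, 1): upper hemisphere
        r = math.sqrt(max(0.0, 1.0 - z * z))   # ring radius at that height
        phi = golden * i
        lights.append((radius * r * math.cos(phi),
                       radius * r * math.sin(phi),
                       radius * z))
    return lights

lights = hemisphere_lights(64, 2.0)
```

Every returned point lies on the hemisphere of the requested radius with a strictly positive height, matching the claim's "hemisphere centered on the scene".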
1.2.1 calculate the energy of each photon emitted from the jth point light source:

ΔΦ(ω_p) = I_j / q

wherein: ΔΦ(ω_p) is the energy carried by each photon and I_j is the intensity value of the jth point light source;
1.2.2 trace the photons emitted by the k point light sources respectively, and store the collision point coordinates, incident photon energies and incident photon directions into k photon maps;
1.2.3 calculate the reflected radiance L_r(x) at point x under an arbitrary viewing angle using the photon-map density estimate:

L_r(x) = (1/(π·d(x)²)) · Σ_{p=1}^{n} f_r(x, ω_p) · ΔΦ(ω_p)

wherein: n is the number of photons collected near point x using the photon maps obtained in step 1.2.2; d(x) is the distance from point x to the farthest of the n collected photons; f_r is the surface BRDF, equal to 1/π for the assumed Lambertian surfaces with reflection coefficient 1;
1.2.4 substitute the per-photon energy of the jth point light source obtained in step 1.2.1 into the reflected radiance formula of step 1.2.3 to obtain the reflected radiance L_rj(x) of the jth point light source at point x:

L_rj(x) = n·I_j / (π²·q·d_j(x)²)

wherein: d_j(x) is the distance, under the jth point light source, from point x to the farthest of the n photons collected using the jth photon map;
1.2.5, converting the color images with different visual angles collected in the step 1.1 into gray level images;
1.2.6 compare the gray values of the grayscale images with the reflected radiance of step 1.2.4 to form the objective function:

E_i(x) = ( S_i(x) − Σ_{j=1}^{k} n·I_j / (π²·q·d_ji(x)²) )²

wherein: m is the number of grayscale images participating in the calculation; S_i(x) is the pixel gray value at point x of the ith grayscale image; d_ji(x) is the distance, under the jth point light source, from point x in the ith grayscale image to the farthest of the n collected photons; k is the number of point light sources on the hemisphere; the objective function is solved over the m images using non-negative linear least squares so that E_i(x) is minimized, obtaining I_1, I_2, ..., I_j, ..., I_k;
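Step 1.2.6 amounts to a non-negative linear least-squares fit of the k light intensities to the observed gray values. A minimal sketch using `scipy.optimize.nnls` on synthetic data (the matrix A, the pixel count and the light count are illustrative assumptions, not values from the patent):

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic stand-in for step 1.2.6: column j of A holds the per-pixel
# reflected radiance of the jth candidate light at unit intensity, and b
# stacks the observed gray values S_i(x).
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(200, 8))        # 200 pixels, 8 candidate lights
I_true = np.array([3.0, 0.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0])
b = A @ I_true                                   # noiseless observations

I_est, residual = nnls(A, b)                     # non-negative least-squares fit
```

The non-negativity constraint is what lets most candidate lights collapse to exactly zero intensity, which the greedy optimization of step 1.2.7 then relies on.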
1.2.7 optimization of the illumination estimates I_1, I_2, ..., I_j, ..., I_k comprises the following steps:
1.2.7.1 select from I_1, I_2, ..., I_k the light source L_1 with the maximum intensity value; add the intensities of the light sources with non-zero intensity adjacent to L_1 to the intensity value a_1 of light source L_1; add L_1 and a_1 to the illumination estimation result set;
1.2.7.2 if the intensity values of all remaining light sources are 0, finish the optimization and output the illumination estimation result set;
1.2.7.3 if there is a light source with non-zero intensity among the remaining light sources, select from them the light source L_m with the maximum intensity value, and add the intensities of the adjacent non-zero-intensity light sources to the intensity value a_m of light source L_m;
1.2.7.4 if a_m < 0.5·a_1, finish the optimization and output the illumination estimation result set;
1.2.7.5 if a_m ≥ 0.5·a_1, add L_m and a_m to the illumination estimation result set and return to step 1.2.7.2;
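Steps 1.2.7.1–1.2.7.5 form a greedy clustering of the estimated intensities. A sketch of that loop (the `neighbors` adjacency structure on the hemisphere is an assumed input, not defined by the claim):

```python
def merge_lights(intensities, neighbors):
    """Greedy clustering of steps 1.2.7.1-1.2.7.5: repeatedly pick the
    strongest remaining light, fold its non-zero neighbours into it, keep
    clusters weighing at least half the first cluster's weight, and stop
    at the first cluster below that threshold (or when none remain).
    neighbors[j] lists the hemisphere indices adjacent to light j."""
    remaining = {j: v for j, v in enumerate(intensities) if v > 0}
    result, first_weight = [], None
    while remaining:
        m = max(remaining, key=remaining.get)
        weight = remaining.pop(m)
        for nb in neighbors[m]:
            weight += remaining.pop(nb, 0.0)   # absorb adjacent non-zero lights
        if first_weight is None:
            first_weight = weight              # L1, a1: always kept
        elif weight < 0.5 * first_weight:
            break                              # a_m < 0.5 * a1: end optimization
        result.append((m, weight))
    return result
```

On a ring of eight candidate lights the procedure keeps the dominant cluster and any secondary cluster at least half its weight, discarding the rest.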
1.3 distinguishing the mirror object and the transparent object model obtained in step 1.1.2, comprising the following steps:
1.3.1 select a model generated in step 1.1.2; assuming a Lambertian surface with reflection coefficient 1, render the model at different viewing angles using the light source positions and intensities in the estimated illumination result set, and obtain the gray value sums of the object at the different viewing angles: b_1, b_2, ..., b_f, where f is the number of viewing angles participating in the calculation; then, assuming the model is a transparent object with refractive index 1.2, render it at the same f viewing angles using the light source positions and intensities in the estimated illumination result set, and obtain the gray value sums: t_1, t_2, ..., t_f; if the rendering under the Lambertian assumption is closer to the captured gray sums c_1, ..., c_f, the object is a specular object, where c_i is the sum of the gray values of the corresponding pixel points of the object in the ith grayscale image generated in step 1.2.5; if the rendering under the transparent assumption is closer, the object is a transparent object;
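The discrimination of step 1.3.1 compares how well each material hypothesis reproduces the captured gray sums. Since the exact comparison formula appears only as an image in the source, the L1 distance below is an assumption:

```python
def classify_object(b, t, c):
    """Step 1.3.1 sketch: b are gray-value sums rendered under the Lambertian
    hypothesis, t under the transparent hypothesis, c the captured gray sums
    over the same f views. The closer hypothesis wins; the L1 distance used
    here is an assumed comparison metric."""
    err_specular = sum(abs(bi - ci) for bi, ci in zip(b, c))
    err_transparent = sum(abs(ti - ci) for ti, ci in zip(t, c))
    return "specular" if err_specular < err_transparent else "transparent"
```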
1.4 light source position optimization and isotropic Ward bidirectional reflectance distribution function (Ward BRDF) parameter estimation for the specular object, comprising the following steps:
1.4.1 sampling the estimated area near the position of each light source on a hemispherical surface, wherein the number of sampling points near each light source is g, each sampling point is used as a sampling point light source, and the light source intensity values of the sampling points are corresponding light source intensity values in the illumination estimation result set in the step 1.2.7;
1.4.2 the reflected radiance L̂(x) of point x on the specular object under each sampling point light source corresponding to each light source in the illumination estimation result set is:

L̂(x) = Σ_{d=1}^{s} I_d · f(ρ_d, ρ_s, σ) · cos θ_d

wherein: s is the number of light sources in the illumination estimation result set; I_d is the intensity value of the dth light source; i is the vector from the point light source toward point x; θ_d is the included angle between the vector from the dth point light source toward point x and the normal of the plane at point x; f(ρ_d, ρ_s, σ) is the isotropic Ward BRDF model, whose expression is:

f(ρ_d, ρ_s, σ) = ρ_d/π + ρ_s · exp(−tan²θ_h/σ²) / (4πσ²·√(cos θ_d · cos θ_o))

wherein: o is the vector of the viewing direction; h is the half-angle vector between vectors i and o, h = (i + o)/|i + o|; θ_o and θ_h are respectively the included angles between the viewing-direction vector o and the half-angle vector h and the normal of the plane at point x; ρ_d is the diffuse reflectance; ρ_s is the specular reflectance; σ is the roughness parameter; the optimization problem

min e = ‖M − L̂‖₂²

is solved using branch-and-bound and second-order cone programming, obtaining the ρ_d, ρ_s, σ and e corresponding to the optimal solution; wherein: M = [M₁ M₂ ... M_N]ᵀ is the column vector composed of the pixel values of the specular object in the grayscale images obtained in step 1.2.5, and L̂ = [L̂₁ L̂₂ ... L̂_N]ᵀ is the column vector composed of the reflected radiances at the corresponding points of the specular object;
1.4.3 for the s light sources in the illumination estimation result set and the g sampling point light sources near each light source, estimate the Ward BRDF model parameters of the specular object by the method of step 1.4.2, obtaining (g+1)^s groups of ρ_d, ρ_s, σ and e values; the light source positions and the ρ_d, ρ_s, σ corresponding to the minimum e value are the optimized light source positions and the estimated Ward BRDF model parameters of the specular object;
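The isotropic Ward BRDF named in step 1.4.2 can be evaluated as follows. This is the standard Ward (1992) form; the patent's exact formula is rendered only as an image in the source, so treat this as a sketch:

```python
import math

def ward_brdf(rho_d, rho_s, sigma, n, i, o):
    """Isotropic Ward BRDF (Ward 1992 form):
    f = rho_d/pi + rho_s * exp(-tan^2(theta_h)/sigma^2)
        / (4*pi*sigma^2*sqrt(cos(theta_i)*cos(theta_o))),
    with h = (i+o)/|i+o| the half-angle vector; n is the surface normal,
    i the direction to the light, o the direction to the viewer."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n, i, o = norm(n), norm(i), norm(o)
    cos_i, cos_o = dot(n, i), dot(n, o)
    if cos_i <= 0.0 or cos_o <= 0.0:
        return 0.0                              # light or viewer below the surface
    h = norm(tuple(a + b for a, b in zip(i, o)))
    cos_h = dot(n, h)
    tan2_h = (1.0 - cos_h * cos_h) / (cos_h * cos_h)
    spec = rho_s * math.exp(-tan2_h / sigma ** 2) / (
        4.0 * math.pi * sigma ** 2 * math.sqrt(cos_i * cos_o))
    return rho_d / math.pi + spec
```

At normal incidence (i = o = n) the exponential term is 1, so the value reduces to ρ_d/π + ρ_s/(4πσ²).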
1.5 refractive index and color attenuation coefficient estimation of transparent objects, comprising the steps of:
1.5.1 render the transparent object using the light source positions and intensities optimized in step 1.4.3, in a photon-mapping rendering mode, changing only the refractive index of the transparent object in the scene; the refractive index is increased from 1.2 to 2 in steps of 0.01, the smallest refractive-index change for which the human eye can recognize a change in the transparent object's defocused light-spot effect, and the sums z_1, z_2, ..., z_80 of the corresponding scene gray values under the different refractive indexes are calculated; the estimated refractive index is determined by:

i* = argmin_i |z_i − μ|, s.t. i = 1, 2, ..., 80

wherein: μ is the sum of the corresponding pixel values of the grayscale images obtained in step 1.2.5; the refractive index corresponding to the calculated value of i is the refractive index of the transparent object;
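Step 1.5.1 reduces to a one-dimensional grid search over pre-rendered gray-value sums. A sketch (the mapping from index i to refractive index 1.2 + 0.01·i is an assumed convention):

```python
def estimate_refractive_index(z, mu):
    """Step 1.5.1 as a grid search: z[i-1] is the scene gray-value sum
    rendered with refractive index 1.2 + 0.01*i, and mu is the captured
    gray-value sum; return the refractive index whose rendering best
    matches mu. (The 1.2 + 0.01*i indexing is an assumption.)"""
    best = min(range(len(z)), key=lambda i: abs(z[i] - mu))
    return 1.2 + 0.01 * (best + 1)
```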
1.5.2 the color attenuation coefficients σ_r, σ_g and σ_b of the transparent object are calculated as:

σ_r = (1/h) · Σ_{i=1}^{h} ln(r̂_i/r_i)/d_i, and analogously for σ_g (with ĝ_i, g_i) and σ_b (with b̂_i, b_i)

wherein: σ_r, σ_g and σ_b are the attenuation coefficients of the red, green and blue channels respectively; h is the total number of pixel points participating in the calculation; d_i is the transmission distance of the light; r̂_i, ĝ_i and b̂_i are respectively the red, green and blue channel gray values at d_i obtained by rendering the transparent object with the attenuation coefficient set to 0, using the refractive index estimated in step 1.5.1 and the light source positions and intensities estimated in step 1.4.3; r_i, g_i and b_i are respectively the gray values of the red, green and blue channels of the captured color image;
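The per-channel attenuation of step 1.5.2 follows the Beer–Lambert law c = ĉ·e^(−σd). A sketch for one channel, averaging the per-pixel estimates (the averaging form is an assumption, as the claim's exact formula appears only as an image in the source):

```python
import math

def attenuation_coefficient(observed, rendered, distances):
    """One color channel of step 1.5.2 via Beer-Lambert:
    c = c_hat * exp(-sigma * d)  =>  sigma = ln(c_hat / c) / d,
    averaged over the h participating pixels (averaging is assumed).
    observed: captured channel values; rendered: attenuation-free render;
    distances: per-pixel light transmission distances d_i."""
    total = 0.0
    for c, c_hat, d in zip(observed, rendered, distances):
        total += math.log(c_hat / c) / d
    return total / len(observed)
```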
1.6 perform differential rendering using the estimated illumination result and the model parameters to obtain a virtual-real fusion effect graph.
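Differential rendering of the kind named in step 1.6 is conventionally done by adding the difference between renders with and without virtual objects to the photograph, and taking virtual-object pixels directly from the full render. A minimal sketch of that convention (the patent does not spell out its compositing equations):

```python
import numpy as np

def differential_render(photo, r_all, r_novirt, mask):
    """Differential rendering sketch: add the render-with-virtual minus
    render-without-virtual difference (shadows, caustics) to the photo,
    and take virtual-object pixels (mask == True) from the full render."""
    out = photo + (r_all - r_novirt)
    out[mask] = r_all[mask]
    return np.clip(out, 0.0, 1.0)

photo = np.full((2, 2), 0.5)                     # captured background image
mask = np.array([[True, False], [False, False]]) # virtual object covers one pixel
r_all = np.array([[0.9, 0.3], [0.4, 0.4]])       # render with virtual objects
r_novirt = np.array([[0.2, 0.4], [0.4, 0.4]])    # render without them
out = differential_render(photo, r_all, r_novirt, mask)
```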
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910332095.XA CN110060335B (en) | 2019-04-24 | 2019-04-24 | Virtual-real fusion method for mirror surface object and transparent object in scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110060335A CN110060335A (en) | 2019-07-26 |
CN110060335B true CN110060335B (en) | 2022-06-21 |
Family
ID=67320396
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910332095.XA Active CN110060335B (en) | 2019-04-24 | 2019-04-24 | Virtual-real fusion method for mirror surface object and transparent object in scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110060335B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110689514B (en) * | 2019-10-11 | 2022-11-11 | 深圳大学 | Training method and computer equipment for new visual angle synthetic model of transparent object |
CN111028597B (en) * | 2019-12-12 | 2022-04-19 | 塔普翊海(上海)智能科技有限公司 | Mixed reality foreign language scene, environment and teaching aid teaching system and method thereof |
CN112651357A (en) * | 2020-12-30 | 2021-04-13 | 浙江商汤科技开发有限公司 | Segmentation method of target object in image, three-dimensional reconstruction method and related device |
CN113593049B (en) * | 2021-07-27 | 2023-08-04 | 吉林大学 | Virtual-real fusion method for geometric consistency of real object and virtual object in scene |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106803267A (en) * | 2017-01-10 | 2017-06-06 | 西安电子科技大学 | Indoor scene three-dimensional rebuilding method based on Kinect |
CN108364344A (en) * | 2018-02-08 | 2018-08-03 | 重庆邮电大学 | A kind of monocular real-time three-dimensional method for reconstructing based on loopback test |
CN108364292A (en) * | 2018-03-26 | 2018-08-03 | 吉林大学 | A kind of illumination estimation method based on several multi-view images |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9759995B2 (en) * | 2011-08-18 | 2017-09-12 | Massachusetts Institute Of Technology | System and method for diffuse imaging with time-varying illumination intensity |
EP3401879B1 (en) * | 2012-03-19 | 2021-02-17 | Fittingbox | Method for modelling a three-dimensional object from two-dimensional images of the object taken from different angles |
US10055882B2 (en) * | 2016-08-15 | 2018-08-21 | Aquifi, Inc. | System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function |
CN108537871B (en) * | 2017-03-03 | 2024-02-20 | 索尼公司 | Information processing apparatus and information processing method |
Non-Patent Citations (3)
Title |
---|
3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion; Yu Zhang et al; JOURNAL OF LATEX CLASS FILES; Vol. 13, No. 9; Sep. 30, 2014; pp. 1-14 *
Real-time 3D reconstruction algorithm for indoor scenes based on RGB-D; Hu Zhengyi et al; Journal of Northeastern University (Natural Science); No. 12; Dec. 15, 2017; pp. 95-99 *
Single-station measurement of target attitude angle based on vanishing point theory; Zhang Song et al; Journal of Applied Optics; Vol. 36, No. 3; May 15, 2015; pp. 420-423 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110060335B (en) | Virtual-real fusion method for mirror surface object and transparent object in scene | |
US11869139B2 (en) | System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function | |
CN104952063B (en) | For indicating the method and system of virtual objects in the view of true environment | |
Meilland et al. | 3d high dynamic range dense visual slam and its application to real-time object re-lighting | |
CN105847786B (en) | System and method for determining color conversion parameters | |
Rouiller et al. | 3D-printing spatially varying BRDFs | |
JP2016114598A (en) | Method and apparatus for digitizing appearance of real material | |
CN107003184B (en) | The pigment of composite coating mixture with flashing color identifies | |
JPH07182538A (en) | Coloration method of display pixel, picture rendering deviceand incident-light estimation method | |
CN108364292B (en) | Illumination estimation method based on multiple visual angle images | |
Ping et al. | Effects of shading model and opacity on depth perception in optical see‐through augmented reality | |
CN111861632B (en) | Virtual makeup testing method and device, electronic equipment and readable storage medium | |
Hold-Geoffroy et al. | Single day outdoor photometric stereo | |
CN110134987B (en) | Optical spherical defect detection illumination design method based on ray tracing | |
Tuceryan | AR360: dynamic illumination for augmented reality with real-time interaction | |
CN110851965B (en) | Light source optimization method and system based on physical model | |
JP2022540722A (en) | Method and system for simulating texture characteristics of coatings | |
CN112907758A (en) | Data determination method and device and electronic equipment | |
Tai et al. | Luminance contrast as depth cue: Investigation and design applications | |
Wang et al. | Capturing and rendering geometry details for BTF-mapped surfaces | |
CN108335351B (en) | BRDF color gamut mapping method based on directional statistical analysis | |
Grobe et al. | Data-driven modelling of daylight scattering by Roman window glass | |
JP2011196814A (en) | Device and method for evaluating glossiness feeling | |
CN116091684B (en) | WebGL-based image rendering method, device, equipment and storage medium | |
US20230260193A1 (en) | Generating a destination texture from a plurality of source textures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||