CN110060335A - Virtual-real fusion method for mirror objects and transparent objects in a scene - Google Patents

Virtual-real fusion method for mirror objects and transparent objects in a scene

Info

Publication number
CN110060335A
CN110060335A (application number CN201910332095.XA; granted publication CN110060335B)
Authority
CN
China
Prior art keywords
light source
point
value
transparent object
photon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910332095.XA
Other languages
Chinese (zh)
Other versions
CN110060335B (en)
Inventor
赵岩
张艾嘉
王世刚
王学军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN201910332095.XA priority Critical patent/CN110060335B/en
Publication of CN110060335A publication Critical patent/CN110060335A/en
Application granted granted Critical
Publication of CN110060335B publication Critical patent/CN110060335B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 15/00: 3D [Three Dimensional] image rendering
            • G06T 15/50: Lighting effects
          • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
          • G06T 19/00: Manipulating 3D models or images for computer graphics
            • G06T 19/006: Mixed reality
          • G06T 5/00: Image enhancement or restoration
            • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A virtual-real fusion method for scenes containing mirror objects and transparent objects, belonging to the field of computer virtual reality. The method first photographs a scene containing mirror objects and transparent objects with an RGB-D camera, identifies the positions of the mirror and transparent objects, and reconstructs the scene in three dimensions. The initial light source estimation takes reflection between objects into account, the material parameters of the mirror and transparent objects are estimated, and differential rendering is performed with the estimated illumination and model parameters to obtain the virtual-real fusion image. By estimating the BRDF model parameters of the mirror objects and the refractive index and color attenuation coefficients of the transparent objects, the invention solves the illumination-consistency problem of virtual-real fusion in scenes containing mirror objects and transparent objects, producing a more lifelike fusion result. Moreover, because the invention starts from optical principles and considers inter-object reflection when estimating the light source position, it obtains a more accurate light source position.

Description

Virtual-real fusion method for mirror objects and transparent objects in a scene
Technical field
The invention belongs to the field of computer virtual reality, and specifically relates to methods for estimating the position and intensity of the light sources in a scene, the reflection coefficients of mirror objects, and the refractive index and color attenuation coefficients of transparent objects.
Background technique
Augmented reality combines computer-generated virtual objects with the real scene and presents the result before the user's eyes. A convincing virtual-real fusion requires the virtual objects to exhibit illumination effects consistent with the real scene. Illumination consistency is mainly concerned with the color and brightness changes that the real light sources and real objects in the scene induce on the surface patches of the virtual objects.
Existing virtual-real fusion methods that address illumination consistency fall broadly into three classes: methods using auxiliary markers, methods using auxiliary equipment, and methods requiring neither markers nor equipment. Auxiliary markers are shadows or artificially placed fiducial objects, and the lighting conditions of the real scene are obtained from the information they provide. Auxiliary equipment such as depth cameras, light-field cameras, fisheye cameras, and other special capture devices supplies depth, light-field, and full-view images, providing new ways to estimate illumination. Methods without markers or auxiliary equipment obtain the illumination information of the scene through image analysis.
Existing illumination estimation methods consider only the changes induced on virtual object surface patches by the real light sources in the scene or by the shadows of real objects; the influence on virtual objects of the caustic spots produced by mirror objects and transparent objects in the real scene is not taken into account.
Summary of the invention
The purpose of the invention is to address this deficiency of existing methods by proposing an illumination-consistent virtual-real fusion method suitable for scenes containing mirror objects and transparent objects, which resolves the influence on virtual objects of the caustics produced by mirror objects and transparent objects in the real scene. The technical solution adopted by the method is as follows:
1.1 Photograph the scene containing mirror objects and transparent objects with an RGB-D camera, obtaining depth images and color images from different viewpoints; reconstruct the scene in three dimensions and obtain the positions of the 3D models of the mirror objects and transparent objects, comprising the following steps:
1.1.1 Perform 3D reconstruction from the depth images of the different viewpoints with the KinectFusion algorithm, obtaining a truncated signed distance function (TSDF) model and the camera poses;
1.1.2 Using the depth images of the different viewpoints, identify the approximate regions of the mirror objects and transparent objects; with these regions as initial positions, segment the mirror objects and transparent objects out of the color images with an image segmentation algorithm, and reconstruct them in three dimensions with the visual hull method;
1.1.3 Merge the TSDF model with the mirror object and transparent object models;
1.2 Initial light source position and intensity estimation:
The initial light source estimation ignores the mirror object and transparent object models; the materials of all remaining models are assumed to be Lambertian surfaces with reflection coefficient 1. k point light sources are distributed uniformly over a hemisphere centered on the scene objects, the diameter of which is twice the diameter of the smallest hemisphere enclosing the scene. Each point light source emits photons in q different directions into the scene. The estimation comprises the following steps:
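The patent states only that the k sources are "evenly distributed" on the hemisphere. A minimal MATLAB sketch of one common way to achieve this, using a Fibonacci spiral (the spiral itself and all variable names are assumptions, not taken from the patent):

```matlab
% Fibonacci-spiral placement of k point sources on the upper hemisphere of
% radius R (twice the radius of the hemisphere just enclosing the scene).
k = 128;                                  % number of point sources (assumed)
R = 2 * sceneRadius;                      % sceneRadius: assumed given
gr = (1 + sqrt(5)) / 2;                   % golden ratio
i = (0:k-1)';
z = (i + 0.5) / k;                        % cosine of elevation, in (0, 1)
phi = 2 * pi * i / gr;                    % azimuth
P = R * [sqrt(1 - z.^2) .* cos(phi), sqrt(1 - z.^2) .* sin(phi), z];
```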
1.2.1 Compute the energy of each photon emitted from the j-th point light source:
wherein ΔΦ(ω_p) is the energy carried by each photon and I_j is the intensity value of the j-th point light source;
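A plausible reconstruction of this formula, under the standard photon-mapping convention that a point source of intensity I_j emits a total flux of 4πI_j divided evenly among its q photons (an assumption consistent with the definitions above, not a verbatim reproduction of the patent's formula image):

$$\Delta\Phi(\omega_p) = \frac{4\pi I_j}{q}.$$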
1.2.2 Trace the photons emitted by the k point light sources separately, and store the hit-point coordinates, incident photon energy, and incident photon direction in k photon maps;
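A minimal sketch of one way the k photon maps could be laid out in the MATLAB environment the patent's experiments use; the field names are illustrative, not from the patent:

```matlab
% One photon map per point light source; each stores, per photon, the
% hit-point coordinates, the incident energy, and the incident direction.
photonMaps = cell(k, 1);
for j = 1:k
    photonMaps{j} = struct( ...
        'pos',   zeros(q, 3), ...   % hit-point coordinates
        'power', zeros(q, 1), ...   % incident photon energy
        'dir',   zeros(q, 3));      % incident photon direction
end
```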
1.2.3 Compute the reflected radiance L_r(x) at a point x under an arbitrary viewpoint:
wherein n is the number of photons collected near point x from the photon maps obtained in step 1.2.2, and d(x) is the distance from x to the farthest of the n collected photons;
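A plausible reconstruction of this formula is the standard photon-map density estimate over a disc of radius d(x), which matches the quantities defined here; f_r(x, ω_p) is the surface BRDF, equal to 1/π for the assumed Lambertian surfaces with reflection coefficient 1:

$$L_r(x) = \sum_{p=1}^{n} f_r(x,\omega_p)\,\frac{\Delta\Phi_p(\omega_p)}{\pi\, d(x)^2}.$$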
1.2.4 Substitute the per-photon energy of the j-th point light source found in step 1.2.1 into the reflected radiance formula of step 1.2.3 to obtain the reflected radiance L_rj(x) of the j-th point light source at point x:
wherein d_j(x) is the distance from point x to the farthest of the n photons collected using the j-th photon map under the j-th point light source;
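Combining the two reconstructed formulas above (again an assumption, not the patent's own image): with ΔΦ = 4πI_j/q and f_r = 1/π,

$$L_{rj}(x) = \frac{4\,n\,I_j}{\pi\, q\, d_j(x)^2},$$

which is linear in the unknown intensity I_j. This linearity is what allows the objective of step 1.2.6 to be solved by non-negative linear least squares.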
1.2.5 Convert the color images of the different viewpoints captured in step 1.1 to grayscale images;
1.2.6 Form an objective function from the gray values of the grayscale images and the reflected radiances L_rj(x) of step 1.2.4:
wherein m is the number of grayscale images participating in the computation; S_i(x) is the pixel gray value of the i-th grayscale image at point x; d_ji(x) is the distance from point x in the i-th grayscale image under the j-th point light source to the farthest of the n collected photons; and k is the number of point light sources on the hemisphere. The objective function is solved with the non-negative linear least squares method so as to minimize E_i(x), yielding I_1, I_2, ..., I_j, ..., I_k (a plausible form of the objective and a solver sketch follow this step);
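A plausible form of the objective, consistent with the definitions above and assuming pixel gray values proportional to radiance, summed over the sample points x of the m images:

$$E_i(x) = \Big( S_i(x) - \sum_{j=1}^{k} \frac{4\,n\,I_j}{\pi\, q\, d_{ji}(x)^2} \Big)^{2}, \qquad I_j \ge 0.$$

A minimal MATLAB sketch of the solve, assuming the distances are already collected in a matrix Dji (rows: sample points, columns: light sources) and the observed gray values in a vector S (both names are illustrative; n is taken constant here for simplicity):

```matlab
% Intensities of the k candidate sources by non-negative least squares.
A = 4 * n ./ (pi * q * Dji.^2);   % A(r, j): radiance of unit-intensity source j
I = lsqnonneg(A, S(:));           % I(j) >= 0: estimated intensity of source j
```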
1.2.7 The light source estimates I_1, I_2, ..., I_j, ..., I_k are refined by the following steps (a code sketch of this loop follows the list):
1.2.7.1 Among I_1, I_2, ..., I_j, ..., I_k, select the light source L_1 of largest intensity value; add the intensities of the nonzero-intensity light sources adjacent to L_1 to its intensity value a_1, and add L_1 and a_1 to the illumination estimation result set;
1.2.7.2 If the intensity values of all remaining light sources are 0, the refinement terminates and the illumination estimation result set is output;
1.2.7.3 If some remaining light source has a nonzero intensity value, select the remaining light source L_m of largest intensity value and add the intensities of the nonzero-intensity light sources adjacent to L_m to its intensity value a_m;
1.2.7.4 If a_m < 0.5 a_1, the refinement terminates and the illumination estimation result set is output;
1.2.7.5 If a_m ≥ 0.5 a_1, add L_m and a_m to the illumination estimation result set and go to step 1.2.7.2;
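A minimal MATLAB sketch of the greedy clustering above, assuming a precomputed neighbour list adj{j} giving the hemisphere grid neighbours of source j (the name and representation are illustrative):

```matlab
function [srcIdx, srcInt] = refineSources(I, adj)
% Greedy merging of raw intensities I (k-by-1) into a few dominant sources.
[a1, L1] = max(I);
a1 = a1 + sum(I(adj{L1}));            % fold neighbouring intensities into the peak
I([L1; adj{L1}(:)]) = 0;
srcIdx = L1; srcInt = a1;
while any(I > 0)
    [am, Lm] = max(I);
    am = am + sum(I(adj{Lm}));
    I([Lm; adj{Lm}(:)]) = 0;
    if am < 0.5 * a1                  % weaker than half the dominant source: stop
        break
    end
    srcIdx(end+1) = Lm; srcInt(end+1) = am;  %#ok<AGROW>
end
end
```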
1.3 Distinguish the mirror object models from the transparent object models obtained in step 1.1.2, comprising the following steps:
1.3.1 Take a model generated in step 1.1.2, assume it is a Lambertian surface with reflection coefficient taken as 1, and render it under different viewpoints using the light source positions and intensities in the estimated illumination result set, obtaining the gray-value sums of the object under the different viewpoints b_1, b_2, ..., b_h, where h is the number of viewpoints participating in the computation. Then assume the model is a transparent object with refractive index 1.2 and render it under the same h viewpoints with the estimated light source positions and intensities, obtaining the gray-value sums t_1, t_2, ..., t_h. If the first criterion holds, the object is a mirror object, where c_i is the sum of the gray values of the object's corresponding pixels under the i-th viewpoint in the grayscale images generated in step 1.2.5; if the second criterion holds, the object is a transparent object (a plausible form of the two criteria is given after this step);
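A plausible reconstruction of the two criteria, assuming the classification compares which material hypothesis better reproduces the captured gray-value sums c_i (this is an assumption; the patent's own inequality images are not reproduced here):

$$\sum_{i=1}^{h} |b_i - c_i| \le \sum_{i=1}^{h} |t_i - c_i| \;\Rightarrow\; \text{mirror object}, \qquad \sum_{i=1}^{h} |t_i - c_i| < \sum_{i=1}^{h} |b_i - c_i| \;\Rightarrow\; \text{transparent object}.$$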
1.4 Light source position refinement and estimation of the isotropic Ward bidirectional reflectance distribution function (Ward BRDF) parameters of the mirror objects, comprising the following steps:
1.4.1 Sample the neighborhood of each estimated light source position on the hemisphere, with g sample points near each light source; each sample point acts as a sample point light source whose intensity value is the corresponding light source intensity in the illumination estimation result set of step 1.2.7;
1.4.2 Under each sample point light source corresponding to each light source in the illumination estimation result set, the reflected radiance L̂(x) at a point x on the mirror object is:
wherein s is the number of light sources in the illumination estimation result set; I is the light source intensity value; i is the vector from the point light source toward point x; θ_d is the angle between the vector from the d-th point light source toward x and the normal of the plane where x lies; and f(ρ_d, ρ_s, σ) is the isotropic Ward BRDF model, whose expression is:
wherein o is the view direction vector; h is the half-angle vector between i and o (h = (i+o)/|i+o|); θ_o and θ_h are the angles between, respectively, the view direction vector and the half-angle vector and the normal of the plane where x lies; ρ_d is the diffuse reflectivity; ρ_s is the specular reflectivity; and σ is the roughness parameter. The following optimization problem is solved with the branch-and-bound method and second-order cone programming:
obtaining the values of ρ_d, ρ_s, σ and the residual e at the optimal solution; wherein M = [M_1 M_2 ... M_N]^T is the column vector of the mirror object's pixel values from the grayscale images obtained in step 1.2.5, and L̂ is the column vector of the reflected radiances L̂(x) at the corresponding points (plausible forms of the three expressions of this step are given after this step);
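Plausible forms of the three expressions of step 1.4.2, using the symbols defined above; the second is the standard isotropic Ward BRDF, while the radiance sum and the objective are assumptions consistent with the surrounding definitions:

$$\hat{L}(x) = \sum_{d=1}^{s} I_d \, f(\rho_d, \rho_s, \sigma)\, \cos\theta_d,$$

$$f(\rho_d, \rho_s, \sigma) = \frac{\rho_d}{\pi} + \frac{\rho_s}{4\pi\sigma^{2}\sqrt{\cos\theta_d \cos\theta_o}} \exp\!\Big(-\frac{\tan^{2}\theta_h}{\sigma^{2}}\Big),$$

$$e = \min_{\rho_d,\,\rho_s,\,\sigma} \big\lVert M - \hat{L} \big\rVert_2^{2}, \qquad \rho_d, \rho_s \ge 0,$$

where L̂ stacks L̂(x) over the N pixels of M. Whether an inverse-square distance falloff is folded into I_d cannot be recovered from the text.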
1.4.3 For the s light sources in the illumination estimation result set and the g sample point light sources near each, estimate the mirror object Ward BRDF model parameters with the method of step 1.4.2, obtaining (g+1)^s groups of values of ρ_d, ρ_s, σ and e; the light source positions and the ρ_d, ρ_s, σ corresponding to the smallest e are the refined light source positions and the mirror object Ward BRDF model parameter estimates;
1.5 Estimate the refractive index and the color attenuation coefficients of the transparent objects, comprising the following steps:
1.5.1 Using the refined light source positions and intensities from step 1.4.3, render the transparent object with the photon-mapping rendering mode, changing only the refractive index of the transparent object in the scene. The refractive index is varied from 1.2 to 2 in increments of 0.01, the smallest refractive index change for which the human eye can perceive a change in the transparent object's caustics, and the corresponding scene gray-value sums z_1, z_2, ..., z_80 are computed for the different refractive indices. The refractive index estimate is computed subject to i = 1, 2, ..., 80, wherein μ is the sum of the corresponding pixel values of the grayscale images obtained in step 1.2.5; the refractive index corresponding to the computed i is the transparent object's refractive index (a plausible form of the estimator follows this step);
1.5.2 The color attenuation coefficients σ_r, σ_g and σ_b of the transparent object are computed as:
wherein σ_r, σ_g and σ_b are the attenuation coefficients of the red, green and blue channels respectively; H is the total number of pixels participating in the computation; d_i is the transmission distance of the light; r̃_i, g̃_i and b̃_i are the red, green and blue channel gray values obtained at d_i = 0 by rendering the transparent object with the refractive index estimated in step 1.5.1 and the light source positions and intensities estimated in step 1.4.3; and r_i, g_i and b_i are the red, green and blue channel gray values of the captured color image;
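A plausible reconstruction of the formula, assuming Beer-Lambert attenuation r_i = r̃_i e^{-σ_r d_i} averaged over the H pixels (consistent with the definitions above; σ_g and σ_b are analogous with g̃_i/g_i and b̃_i/b_i):

$$\sigma_r = \frac{1}{H} \sum_{i=1}^{H} \frac{1}{d_i} \ln\!\frac{\tilde{r}_i}{r_i}.$$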
1.6 Perform differential rendering with the estimated illumination results and model parameters to obtain the virtual-real fusion image.
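Differential rendering is presumably meant here in the standard Debevec-style sense (an assumption; the patent does not spell it out): with I_cam the captured image, I_full a rendering of the reconstructed scene including the virtual object, and I_empty a rendering without it, the composite is

$$I_{\text{final}} = I_{\text{cam}} + (I_{\text{full}} - I_{\text{empty}}),$$

so that only the illumination changes caused by the virtual object (shadows and caustic spots) are transferred onto the photograph.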
Features and beneficial effects of the invention
Compared with existing algorithms, the present invention not only considers the direct influence of the light sources on objects during the initial light source estimation but also simulates the reflection of light between objects, obtaining a more accurate initial illumination estimate. By estimating the Ward BRDF model parameters of the mirror objects and the refractive index and color attenuation coefficients of the transparent objects, it properly resolves the influence on virtual objects of the caustic spots produced by mirror objects and transparent objects in the real scene.
Detailed description of the invention
Fig. 1 is the flowchart of the illumination-consistent virtual-real fusion method for scenes containing mirror objects and transparent objects
Fig. 2 shows the virtual-real fusion experimental results for a scene containing a mirror object
Fig. 3 shows the virtual-real fusion experimental results for a scene containing a transparent object
In Fig. 2 and Fig. 3: (a) is the real scene image, and (b) is the result after virtual-real fusion with the method of the present invention
Specific embodiment
The core content of the invention is as follows: reflection between objects is considered in the initial light source estimation, yielding a more accurate estimate; the Ward BRDF model parameters of the mirror objects and the refractive index and color attenuation coefficients of the transparent objects are estimated, with the light source positions refined at the same time; and differential rendering with the estimated light sources and model parameters produces a more realistic virtual-real fusion result.
To make the purpose, technical solution and advantages of the invention clearer, it is described in further detail below with reference to the accompanying drawings and examples:
Steps 1.1 through 1.6 of the embodiment are carried out exactly as set forth in the Summary of the invention above and are not repeated here.
The feasibility of the proposed illumination-consistent virtual-real fusion method for scenes containing mirror objects and transparent objects is verified below with concrete tests. The initial light source estimates of the present method are compared with the illumination estimation algorithm proposed by Chen, which considers only the direct contribution of the light sources to objects, and virtual-real fusion results are shown for scenes containing mirror objects and transparent objects (the test samples were captured with an RGB-D camera).
1. Operating conditions:
The experimental platform is a PC with an Intel Core i7 4.2 GHz CPU and 16 GB of RAM running Windows 7; the programming languages are MATLAB and C++.
2. Experimental content and analysis of results:
Table 1 compares the initial light source estimation algorithm of the present invention with the illumination estimation method that considers only the direct light source contribution. The error angle is the angle between the light source direction in the real scene and the estimated light source direction, in degrees, and is used to evaluate the accuracy of the estimated light source position. As Table 1 shows, the error angle of the present method is 8.2 ± 7.3 degrees lower than that of the reference method.
As shown in Fig. 2, Fig. 2(a) is the real scene: the box in the middle of the desktop is a mirror-reflecting object, and its caustic spot is visible on the desktop. Fig. 2(b) is the virtual-real fusion result of the present method, in which the arrow indicates the virtual object. The light spot on the virtual object in Fig. 2(b) shows the influence of the real scene's mirror object on the virtual object.
As shown in Fig. 3, Fig. 3(a) is the real scene, with the caustic spot of the transparent object visible on the desktop. Fig. 3(b) is the virtual-real fusion result of the present method, in which the arrow indicates the virtual object. The light spot on the virtual object in Fig. 3(b) shows the influence of the real scene's transparent object on the virtual object.
Table 1. Error angle of the illumination estimation results (unit: degrees)

                Method of the present invention    Reference method
Error angle     11.4 ± 2.7                         19.6 ± 10
The above experimental results show that, by estimating the Ward BRDF model parameters of the mirror objects and the refractive index and color attenuation coefficients of the transparent objects, the present invention solves the illumination-consistency problem of virtual-real fusion in scenes containing mirror objects and transparent objects and obtains a more lifelike fusion result. Moreover, because it starts from optical principles and considers the reflection between objects when estimating the light source position, it obtains a more accurate light source position, which is also an advantage over other illumination estimation methods.

Claims (1)

1. A virtual-real fusion method for a scene containing mirror objects and transparent objects, characterized by comprising steps 1.1 through 1.6 exactly as set forth in the Description above: three-dimensional reconstruction of the scene and localization of the mirror and transparent object models (1.1); initial light source position and intensity estimation (1.2); distinguishing the mirror object models from the transparent object models (1.3); light source position refinement and mirror object Ward BRDF parameter estimation (1.4); estimation of the refractive index and color attenuation coefficients of the transparent objects (1.5); and differential rendering with the estimated illumination and model parameters to obtain the virtual-real fusion image (1.6).
CN201910332095.XA 2019-04-24 2019-04-24 Virtual-real fusion method for mirror surface object and transparent object in scene Expired - Fee Related CN110060335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910332095.XA CN110060335B (en) 2019-04-24 2019-04-24 Virtual-real fusion method for mirror surface object and transparent object in scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910332095.XA CN110060335B (en) 2019-04-24 2019-04-24 Virtual-real fusion method for mirror surface object and transparent object in scene

Publications (2)

Publication Number Publication Date
CN110060335A true CN110060335A (en) 2019-07-26
CN110060335B CN110060335B (en) 2022-06-21

Family

ID=67320396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910332095.XA Expired - Fee Related CN110060335B (en) 2019-04-24 2019-04-24 Virtual-real fusion method for mirror surface object and transparent object in scene

Country Status (1)

Country Link
CN (1) CN110060335B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130044213A1 (en) * 2011-08-18 2013-02-21 Massachusetts Institute Of Technology System and method for diffuse imaging with time-varying illumination intensity
US20140055570A1 (en) * 2012-03-19 2014-02-27 Fittingbox Model and method for producing 3d photorealistic models
US20180047208A1 (en) * 2016-08-15 2018-02-15 Aquifi, Inc. System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
CN106803267A (en) * 2017-01-10 2017-06-06 西安电子科技大学 Indoor scene three-dimensional rebuilding method based on Kinect
US20180255283A1 (en) * 2017-03-03 2018-09-06 Sony Corporation Information processing apparatus and information processing method
CN108364344A (en) * 2018-02-08 2018-08-03 重庆邮电大学 A kind of monocular real-time three-dimensional method for reconstructing based on loopback test
CN108364292A (en) * 2018-03-26 2018-08-03 吉林大学 A kind of illumination estimation method based on several multi-view images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YU ZHANG ET AL: "3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion", JOURNAL OF LATEX CLASS FILES *
Zhang Song et al.: "Single-station measurement of target attitude angles based on vanishing point theory", Journal of Applied Optics (《应用光学》) *
Hu Zhengyi et al.: "Real-time 3D reconstruction algorithm for indoor scenes based on RGB-D", Journal of Northeastern University (Natural Science) (《东北大学学报(自然科学版)》) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717979A (en) * 2019-08-19 2020-01-21 北京航空航天大学 Atmospheric and three-dimensional earth surface coupling radiation simulation method based on photon tracking
CN110689514A (en) * 2019-10-11 2020-01-14 深圳大学 Training method and computer equipment for new visual angle synthetic model of transparent object
CN110689514B (en) * 2019-10-11 2022-11-11 深圳大学 Training method and computer equipment for new visual angle synthetic model of transparent object
CN111028597A (en) * 2019-12-12 2020-04-17 塔普翊海(上海)智能科技有限公司 Mixed reality foreign language scene, environment and teaching aid teaching system and method thereof
WO2022142311A1 (en) * 2020-12-30 2022-07-07 浙江商汤科技开发有限公司 Method for segmenting target object in image, three-dimensional reconstruction method, and related apparatus
CN113593049A (en) * 2021-07-27 2021-11-02 吉林大学 Virtual-real fusion method for geometric consistency of real object and virtual object in scene
CN113593049B (en) * 2021-07-27 2023-08-04 吉林大学 Virtual-real fusion method for geometric consistency of real object and virtual object in scene

Also Published As

Publication number Publication date
CN110060335B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
CN110060335A (en) There are the virtual reality fusion methods of mirror article and transparent substance in a kind of scene
Lindemann et al. About the influence of illumination models on image comprehension in direct volume rendering
CN104952063B (en) For indicating the method and system of virtual objects in the view of true environment
Narasimhan et al. Acquiring scattering properties of participating media by dilution
CN104346420B (en) Method and system for digital into appearance data
CN101454806B (en) Method and apparatus for volume rendering using depth weighted colorization
CN107644453B (en) Rendering method and system based on physical coloring
CN105447906A (en) Method for calculating lighting parameters and carrying out relighting rendering based on image and model
CN106535736B (en) Image processing apparatus, image processing method and image processing program
Šoltészová et al. Chromatic shadows for improved perception
Devlin et al. Realistic visualisation of the pompeii frescoes
CN107016719B (en) A kind of Subsurface Scattering effect real-time drawing method of screen space
CN107734267A (en) Image processing method and device
Zhang et al. A systematic approach to testing and predicting light-material interactions
Grosch et al. Consistent interactive augmentation of live camera images with correct near-field illumination
Englund et al. Evaluating the perception of semi-transparent structures in direct volume rendering techniques
Retzlaff et al. Physically based computer graphics for realistic image formation to simulate optical measurement systems
CN112907758B (en) Data determination method and device and electronic equipment
Ropinski et al. Advanced volume illumination with unconstrained light source positioning
Wang et al. Capturing and rendering geometry details for BTF-mapped surfaces
Gigilashvili et al. Appearance manipulation in spatial augmented reality using image differences
Pereira et al. Photorealism in mixed reality: a systematic literature review
JP4357997B2 (en) Fast estimation method of bidirectional reflection distribution function of objects
JP6432882B2 (en) Image simulation method
Qi Measuring perceived gloss of rough surfaces

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 20220621)