CN113763528A - Method for synthesizing realistic dynamic illumination change of a single image

Info

Publication number
CN113763528A
Authority
CN
China
Prior art keywords
image
light source
illumination
information
scene
Prior art date
Legal status
Pending
Application number
CN202111123478.XA
Other languages
Chinese (zh)
Inventor
王彬
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111123478.XA
Publication of CN113763528A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/10152 Varying illumination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The application provides a method for synthesizing realistic dynamic illumination changes in a single image. Dynamic illumination-change synthesis is performed on a single input image in three parts: image illumination rendering based on the three-dimensional spatial structure, estimation of the illumination information of the single image, and editing and synthesis of a realistic dynamic illumination-change image. First, the depth information of the single input image is estimated on the basis of an RGB-D data set, yielding the three-dimensional spatial structure information of the scene; next, the illumination information of the scene is estimated by a method combining image deconstruction with image rendering; finally, combining the three-dimensional spatial structure information and the illumination information of the scene, the illumination information is edited and a realistic illumination-change image is rendered and synthesized.

Description

Method for synthesizing realistic dynamic illumination change of a single image
Technical Field
The application relates to a dynamic illumination synthesis method for a single image, and in particular to a method for synthesizing realistic dynamic illumination changes in a single image, belonging to the technical field of realistic dynamic illumination-change synthesis for single images.
Background
Illumination analysis and processing have been successfully applied in movies, games, 3D animation and other practical applications; in realistic movies, lighting technology provides powerful support for the realism of scenes. Illumination is of great significance to the generation of a scene image: under different illumination, features of objects in the scene such as brightness, shadow and specular highlight change correspondingly, forming realistic illumination-change images.
Today, images in huge volumes permeate every corner of daily life. These images carry a great deal of information, and processing them requires not only complex hardware but also considerable manpower and material resources. There is an urgent need to start from a single real illumination image, analyze and process the illumination information in the image in a targeted way, obtain the light-source information and the three-dimensional spatial structure of the scene, and then, on the basis of the estimated light-source information, suitably change the illumination conditions so as to render and synthesize realistic dynamic illumination-change images that satisfy human vision and psychology.
A single image can provide information such as scene layout, shape and object shape features, and the light-dark variation of object surfaces and the direction of shadows cast in the scene also convey information about the orientation of the light source. However, accurate three-dimensional information such as the light source direction, the light source position and the depth of objects in the scene cannot be obtained intuitively from the two-dimensional information of a single image alone, so processing based on a single image is full of challenges and difficulties. To date, prior-art estimation of the illumination information of a single image falls roughly into four types: first, estimating the illumination conditions by means of additional equipment such as illumination probes, detection spheres and polarizers; second, estimating the light source from a specific three-dimensional structure; third, illumination-condition estimation based on a rendered single virtual image; and fourth, illumination-condition estimation based on image deconstruction and rendering with Kinect. All four can roughly estimate the illumination information of an image scene, but they have great limitations, the biggest being the use of additional auxiliary equipment, which lacks universality. Illumination-information estimation from a single image alone therefore has important significance and great application value.
Among the technologies for changing the light-source information in an image scene, image relighting and realistic image rendering based on a spatial structure are common. Relighting requires complex equipment to acquire a large number of images, simulates illumination changes at different viewing angles and orientations in the scene, and then obtains an augmented-reality image under arbitrary illumination and viewing-angle changes by varying the illumination conditions. As an image generation method, traditional realistic image rendering must proceed from an illumination model on the premise that the three-dimensional spatial structure of the scene, the illumination information and the material properties of object surfaces are known; only after subsequent operations such as texture mapping and hidden line/surface removal can a realistic illumination image be obtained. The information in a single image, however, is too limited to supply all the parameters of realistic image rendering. For example, regarding the three-dimensional spatial structure of a scene, a single image presents the scene from only one viewing angle, so estimating the spatial structure of the whole scene is difficult. How to make maximal use of the information in a single image for dynamic illumination synthesis is therefore a very challenging task.
There are several prior-art methods for estimating the illumination conditions in an image. A classical method estimates the surface reflection properties of an object illuminated at a finite distance together with the light source position: a Lambertian diffuse reflection model is adopted, and an iterative relaxation scheme deconstructs the specular and diffuse light to estimate the light source position.
The prior art also includes a method that estimates illumination information from a simple spatial structure and renders the scene with an object inserted into that space; it can obtain an accurate light source position through manual interaction with the simple spatial scene, but because the light source is fixed in the scene, it is not suitable for applications involving light source movement.
The prior art further uses multiple images of the same scene, from the same viewing angle, under different illumination directions to estimate the position of the scene light source. This method does not calculate the specific light source position directly; instead it extracts the manifold of light source positions from the image sequence using a data dimension-reduction technique, and then fits a sphere to the extracted positions by least squares to obtain the specific position of the light source point in three-dimensional space. The data sampling process is complex, the scene objects must be lit with professional equipment in a specific manner, the experimental cost is high, and the estimation result may contain large errors; the method is therefore unsuitable for general illumination-information estimation.
With respect to the application of illumination variation in an image, image relighting techniques are the most common in the prior art. Using spherical-harmonic theory and the discrete representation property of spherical-harmonic functions, the number of data sets that must be sampled for relighting can be reduced, a spherical-harmonic map of the scene illumination is obtained, and a scene image with changed illumination can be rendered and synthesized by changing the spherical-harmonic function. However, a large amount of image segmentation is used in generating the image, and Shape-from-Shading imposes constraints on the surface properties of objects, being suitable only for ideal diffuse-reflection materials.
In addition, other prior-art image illumination-change synthesis methods have shortcomings of their own. The difficulties and problems to be solved by the present application mainly focus on the following aspects:
first, dynamic illumination processing of a single image faces many difficulties in the prior art. On the one hand, the traditional illumination image generation process requires information such as the three-dimensional spatial structure and the illumination conditions of the scene, but according to the imaging principle a single image is only three-dimensional scene information projected onto a two-dimensional plane, and it is difficult to recover the three-dimensional spatial structure of the whole scene from this two-dimensional information. On the other hand, current estimation of the illumination information of a single image generally either uses an illumination detection device or relies on existing spatial-configuration information; both approaches need additional measuring equipment, and so are unsuitable for general image illumination-change synthesis work;
second, the conversion of three-dimensional information into two-dimensional information naturally loses a large amount of information; owing to the limitation of the viewing angle, the information in a single image is often insufficient to obtain the depth of the scene, so it is generally difficult to recover the spatial configuration of a three-dimensional scene from one image alone. The prior-art method for obtaining scene depth information generally adopts the Kinect scanner developed by Microsoft: a single RGB-D image yields rough three-dimensional spatial-configuration information of the whole scene. But this method cannot be used by anyone at any time; without Kinect equipment or under other special conditions, obtaining the depth information of a single image is difficult, and the information obtained by Kinect also has serious problems in precision and efficiency;
third, for light source estimation and its applications on a single image, the prior art relies on complex measuring instruments such as illumination probes and polarization filters, or on rendered virtual images; the work of estimating light-source information must be performed with additional detection equipment or on virtual images, which greatly limits application in many scenes. The prior art lacks a suitable method for light source estimation and application on a single image;
fourth, the prior art must rely on measuring equipment to solve for the light-source information: it cannot take a single image as input, requires extra measuring equipment, and cannot obtain a vivid image with dynamic illumination change. It does not use a reflection model for image rendering and synthesis, does not make full use of the results of image deconstruction and illumination-information processing, and cannot synthesize a realistic illumination-change image; the image rendering process is very complicated, and the problem of insufficient rendering-parameter information when rendering from a single image remains unsolved. The resulting illumination-change images are not dynamic, or are clearly not vivid enough, so the application prospects are narrow: the methods cannot be applied to making illumination animations from a single image or to achieving illumination consistency between real and virtual content in virtual-real fusion.
Disclosure of Invention
In order to solve the above problems, the key processes and notable improvements of the present application include the following. First, the specular highlight components in a single image are removed without assuming a light source: a new iteration of the light-source calculation is performed for each newly separated specular highlight component, and the iteration stops when the light-source calculation no longer changes, yielding a light-source color close to the real scene and an optimal separation of the specular highlight component. Second, the depth information of the single image is estimated on the basis of an RGB-D training sample set, an approximate depth value being obtained by training-set statistics; during the selection of training samples the sample set is screened manually, and on the basis of the obtained single-image depth, depth values of high reliability are selected interactively, guided by observation and analysis of the depth information, for data fitting to obtain a regular three-dimensional scene spatial configuration. After the light-source information of the single image and the three-dimensional spatial configuration of the scene are estimated, regions containing specular-highlight residue are repaired with a simple image-repair method, and realistic illumination-change image synthesis results are then obtained by editing the light-source position, the light-source color and the illumination intensity of the light-source information.
In order to realize the above technical features, the application adopts the following technical scheme:
the illumination information and the three-dimensional spatial configuration information in the scene of a single image are estimated, and the illumination information is then edited and rendered to synthesize a realistic illumination-change image;
the application performs dynamic illumination-change synthesis on a single input image in three parts: image illumination rendering based on the three-dimensional spatial structure, estimation of the illumination information of the single image, and editing and synthesis of a vivid dynamic illumination-change image. First, the depth information of the single input image is estimated on the basis of an RGB-D data set, yielding the three-dimensional spatial structure information of the scene; then the illumination information of the scene is estimated by a method combining image deconstruction with image rendering; finally, combining the three-dimensional spatial structure information and the illumination information of the scene, the illumination information is edited and a realistic illumination-change image is rendered and synthesized;
the image illumination rendering method based on the three-dimensional space structure comprises the following three steps:
step 1, estimating the depth information of the single image: a depth information map of the single image is obtained on the basis of an RGB-D image set; the depth information map then yields a rough three-dimensional spatial structure of the scene; combining this three-dimensional structure with the action of illumination, a rendered specular-reflection Shading map and a rendered diffuse-reflection Shading map are obtained, giving a rough Shading image under the specific illumination condition;
step 2, deconstructing the real image: the input image is decomposed into a specular-reflection Shading map, a diffuse-reflection Shading map and a reflectance map;
step 3, optimizing the light-source position information according to the images obtained in step 1 and step 2, so that the finally solved light-source position simultaneously satisfies the data fitting of the specular highlight and the diffuse-reflection Shading;
estimating the illumination information of the single image, wherein the illumination information comprises a deconstructed illumination image and estimated illumination position information, the deconstructed illumination image comprising a deconstructed specular reflection component and a deconstructed diffuse reflection component;
third, the editing and synthesis of a vivid dynamic illumination-change image is divided into repairing the specular-highlight area of the image and editing the light source of the single image. Starting from the depth information map of the single image, the specular highlight image is obtained by Shading-style rendering; then the specular-highlight residue is removed from the diffuse reflection image and the specular-highlight area is repaired, giving a repaired ideal diffuse reflection image; finally, the rendered specular highlight image and the ideal diffuse reflection image are linearly recombined on the basis of the reflection model, giving a vivid image with dynamically changed illumination.
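The control flow of the three parts can be summarized in the following sketch. It is a structural outline only: each stage is passed in as a callable (concrete sketches for the individual stages follow the corresponding steps below), and all names are illustrative assumptions rather than terms from the original disclosure.

```python
import numpy as np

def dynamic_relight(image, rgbd_samples, estimate_depth, deconstruct,
                    estimate_light, edit_light, render_spec, repair):
    """Structural sketch of the three-part method; stages are injected callables."""
    depth = estimate_depth(image, rgbd_samples)        # part 1: depth / 3D spatial structure
    spec_s, diff_s, reflectance = deconstruct(image)   # part 2a: image deconstruction
    light = estimate_light(spec_s, diff_s, depth)      # part 2b: light-source estimation
    new_light = edit_light(light)                      # part 3a: edit position/color/intensity
    new_spec = render_spec(depth, new_light)           # part 3b: re-rendered specular Shading
    diffuse = repair(image, spec_s)                    # part 3c: highlight-free diffuse image
    return np.clip(new_spec + diffuse, 0, 255)         # linear re-synthesis (cf. Formula 19)
```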
The method for synthesizing realistic dynamic illumination changes of a single image further comprises the following steps:
the first step is as follows: a pre-processing stage, in which candidate images similar to the input image in feature space are obtained from a known data set. On the basis of an RGB-D image data set, the several most similar images are selected as candidate images according to RGB-space feature similarity with the input image: high-level image features of each image in the data set are computed using GIST and optical-flow features, and a K-NN nearest-neighbour search is performed over the whole data set with K set to 7, i.e. the 7 images whose RGB-space features are most similar to the input image are taken as candidate images, with their corresponding depth maps as candidate depth images;
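A minimal sketch of this retrieval step follows. A tiny grid-averaged luminance descriptor stands in for the GIST and optical-flow features named above, and all function names are illustrative assumptions.

```python
import numpy as np

def simple_descriptor(rgb: np.ndarray, grid: int = 8) -> np.ndarray:
    """Stand-in global descriptor: grid-averaged luminance, flattened."""
    gray = rgb.astype(np.float64).mean(axis=-1)
    h, w = gray.shape
    cells = gray[:h - h % grid, :w - w % grid].reshape(
        grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return cells.ravel() / 255.0

def knn_candidates(input_rgb, dataset_rgbs, dataset_depths, k: int = 7):
    """Return the k most feature-similar data-set images and their depth maps."""
    q = simple_descriptor(input_rgb)
    feats = np.stack([simple_descriptor(im) for im in dataset_rgbs])
    order = np.argsort(((feats - q) ** 2).sum(axis=1))[:k]  # K-NN by L2 distance
    return ([dataset_rgbs[i] for i in order],
            [dataset_depths[i] for i in order])
```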
the second step: rough depth-map estimation, giving a rough depth map of the input image. The depth information maps of the candidate images are preliminarily matched and deformed against the input image: each candidate image is deformed to obtain a deformed candidate image, and then, for each pixel block of the input image, dense scene calibration is performed between the depth values of the corresponding pixel blocks in the deformed candidate depth images and the input image, giving a rough depth information map of the input image;
the third step: optimizing the rough depth map to obtain a smooth, complete depth map. Since the pixel blocks of the rough depth map from the second step are not smooth across their boundaries, the rough depth information map is interpolated and smoothed to obtain an optimized depth estimation result. The optimization energy for the depth information is:

E(A) = Σ_i [ B_t(A_i) + a·B_s(A_i) + b·B_p(A_i) ] + log V    (Formula 1)

where A is the final estimated depth information image for the input image, V is the probability normalization constant, B_t(A_i) is the candidate-depth data term, B_s(A_i) is the spatial smoothing term, B_p(A_i) is the depth-data prior, and a and b are the corresponding coefficients. The data term B_t(A_i) describes the difference between the estimated depth information map A and each deformed candidate depth map, i.e. the residual between the 7 deformed candidate depth maps and the depth map being estimated; B_s(A_i) constrains spatial smoothness through the gradient values in the x and y directions; and B_p(A_i) is the prior term of the depth-image estimate:

B_p(A_i) = Φ(A_i − x_i)    (Formula 2)

where x_i is the average of the depth values at each point over the data set, which guides the estimate as a prior value.
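A numerical sketch of this optimization follows, assuming the seven deformed candidate depth maps and the data-set mean depth are already available. Taking Φ as a smooth (Charbonnier-style) robust penalty is an assumption of the sketch, not a statement from the source.

```python
import numpy as np
from scipy.optimize import minimize

def phi(x, eps=1e-3):
    return np.sqrt(x * x + eps * eps)  # assumed smooth robust penalty

def depth_energy(A_flat, warped_depths, prior_mean, a=1.0, b=1.0):
    A = A_flat.reshape(prior_mean.shape)
    Bt = sum(phi(A - W).sum() for W in warped_depths)  # data term vs. the 7 candidates
    gy, gx = np.gradient(A)
    Bs = phi(gx).sum() + phi(gy).sum()                 # x/y-gradient smoothness term
    Bp = phi(A - prior_mean).sum()                     # prior term (Formula 2)
    return Bt + a * Bs + b * Bp

def optimize_depth(rough, warped_depths, prior_mean, a=1.0, b=1.0):
    res = minimize(depth_energy, rough.ravel(),
                   args=(warped_depths, prior_mean, a, b), method="L-BFGS-B")
    return res.x.reshape(rough.shape)
```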
The method for synthesizing realistic dynamic illumination changes of a single image further simplifies the three-dimensional spatial structure of the single image: data fitting is performed on the three-dimensional spatial structure of the whole scene. For a pixel point with coordinates (x, y) in the imaging-plane coordinate system IPCS, the corresponding coordinates in the camera coordinate system are N(X_c, Y_c, Z_c), where g is the focal length of the camera. The depth information estimation is based on the depth data set, and g is taken as 525 mm to obtain a three-dimensional spatial structure close to the real situation;

after the depth image is obtained, the principal planes in the scene are fitted; the fitted plane PaF is:

D_i·X + E_i·Y + S_i·Z + A_i = 0    (Formula 3)

where D_i, E_i, S_i, A_i are the plane coefficients of the i-th plane fitted in three-dimensional space. Assuming the principal plane in the real scene does not pass through the origin (0,0,0), i.e. A_i ≠ 0, Formula 3 reduces to:

D_i·X + E_i·Y + S_i·Z = 1    (Formula 4)

If the point N is on the plane, then:

D_i·X_c + E_i·Y_c + S_i·Z_c = 1    (Formula 5)

According to the conversion formula from the plane coordinate system to the camera coordinate system and Formulas 3 to 5, the three-dimensional coordinates in the camera system of the point N imaged at plane point n are:

X_c = x / (D_i·x + E_i·y + S_i·g)    (Formula 6)

Y_c = y / (D_i·x + E_i·y + S_i·g)    (Formula 7)

Z_c = g / (D_i·x + E_i·y + S_i·g)    (Formula 8)

Three-dimensional position information of each point on the plane in the camera coordinate system is thus obtained, giving an approximate three-dimensional spatial structure of the whole scene.
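A small sketch of this back-projection (Formulas 6 to 8) follows, assuming a pinhole camera with image coordinates centered on the principal point and the focal length g = 525 used above.

```python
import numpy as np

def pixel_to_plane_point(x: float, y: float,
                         D: float, E: float, S: float,
                         g: float = 525.0):
    """Intersect the viewing ray through pixel (x, y) with the fitted plane
    D*X + E*Y + S*Z = 1; returns (Xc, Yc, Zc) in camera coordinates."""
    denom = D * x + E * y + S * g
    if abs(denom) < 1e-9:
        raise ValueError("viewing ray is (nearly) parallel to the fitted plane")
    Zc = g / denom                       # Formula 8
    return x * Zc / g, y * Zc / g, Zc    # Formulas 6 and 7: Xc = x*Zc/g, Yc = y*Zc/g
```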
The method for synthesizing realistic dynamic illumination changes of a single image further computes the spatial configuration of the illumination light-dark (Shading) images. Specular-reflection Shading corresponds to the specular highlights in the scene, with the reflection characteristic:

Q_{s,i}(X) = W_{s,i} · J_l · (dot(U, T))^{Ms}    (Formula 9)

where Q_{s,i} is the radiance value of the specular reflection at the i-th channel of the corresponding pixel, W_{s,i} is the specular reflection coefficient of the object surface, J_l is the illumination intensity, U is the viewing direction from point X to the viewpoint, T is the specular reflection direction of the light at point X, Ms is a highlight coefficient describing the degree of convergence of the specularly reflected light, |U| = |T| = 1, and dot(U, T) is the dot product of the U and T vectors;

besides the intensity of the incident light and the roughness of the object surface, the intensity of the reflected light is related to the position of point X in three-dimensional space, and the three-dimensional position of X is determined by the camera viewpoint. Considering the influence of the light on the three-dimensional structure information at X, a spatial configuration independent of the light source and of the surface reflection characteristics is extracted. An indoor light source is not a directional light source, and the influence of distance on the light source cannot be ignored. If the main direction of the light source is determined, the intensity of the reflected light is related to the light source's own direction M_l and the angle between the light source and point X: the larger the angle and the farther point X is from the light source, the smaller the reflected light intensity at that point. The factor dot(M_l, Q_d) is therefore added, where M_l is the light-source direction and Q_d is the vector from the light-source point to the object surface point X; note that ||Q_d|| is not necessarily 1. Further, according to specular reflection:

T = Q − 2·dot(M, Q)·M    (Formula 10)

where Q, T, M are the incident light direction, the reflected light direction and the normal vector at point X respectively, with |Q| = |M| = |T| = 1. The fused spatial-configuration expression for the specular-reflection Shading is:

C(X) = (dot(U, T))^{Ms} · dot(M_l, Q_d) / ||Q_d||^{k+1}    (Formula 11)

where k is a coefficient. Combining the spatial position relationship between the light-source point and the object surface point X, the spatial configuration information of the diffusely reflected light is described as:

A(X) = dot(M, Q_d) · dot(M_l, Q_d) / ||Q_d||^4    (Formula 12)

The exponent 4 of ||Q_d||^4 in Formula 12 plays the role of the exponent k+1 of ||Q_d||^{k+1} in Formula 11; the value of k+1 is verified experimentally so that a reasonable Shading image is obtained.
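The two Shading terms can be evaluated point-wise as in the sketch below. The clamping of negative dot products to zero and the concrete values of Ms and k are assumptions of the sketch; the unnormalized dot(M_l, Q_d) factor and the ||Q_d|| exponents follow Formulas 11 and 12 as given above.

```python
import numpy as np

def shading_at_point(X, normal, light_pos, light_dir, view_pos,
                     Ms: float = 32.0, k: float = 3.0):
    """Return (specular, diffuse) Shading values at surface point X."""
    Qd = X - light_pos                         # vector from light source to point X
    dist = np.linalg.norm(Qd)
    Q = Qd / dist                              # unit incident direction
    T = Q - 2.0 * np.dot(normal, Q) * normal   # Formula 10: mirror reflection direction
    U = view_pos - X
    U = U / np.linalg.norm(U)                  # unit view direction
    falloff = max(np.dot(light_dir, Qd), 0.0)  # dot(Ml, Qd), unnormalized
    spec = max(np.dot(U, T), 0.0) ** Ms * falloff / dist ** (k + 1)  # Formula 11
    # Formula 12 rewritten via the unit direction Q:
    # dot(M, -Qd) * dot(Ml, Qd) / ||Qd||**4 == dot(M, -Q) * falloff / dist**3
    diff = max(np.dot(normal, -Q), 0.0) * falloff / dist ** 3
    return spec, diff
```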
The method for synthesizing realistic dynamic illumination changes of a single image further deconstructs the specular reflection components: the application provides a simple and effective method which makes no assumption about the illumination color and quickly separates the highlight component and the diffuse reflection component in the image. The steps are as follows:

step 1: assume the initial color value of the light source is (255, 255, 255), and obtain the SF (specular-free) image and MSF (modified specular-free) image from the input image:

J_{sf,i}(X) = J_i(X) − J_min(X)    (Formula 13)

J_{msf,i}(X) = J_{sf,i}(X) + u    (Formula 14)

u = mean(J_min) + n·δ(J_min)    (Formula 15)

For J_{msf,i}(X) at point X, if the component is a diffuse reflection component its value is retained; if it is not, the minimum channel value of each pixel of the whole image is used as statistical data to obtain an approximate estimate of the specular reflection component. In Formula 15, mean(J_min) is the average of the minimum channel values over all pixels of the image, δ(J_min) is the corresponding standard deviation, and n is a coefficient in the range n ∈ [0, 1]. Diffuse-reflection pixels are thus distinguished from specular-reflection pixels, and the corresponding mask image Mask_i is obtained from the detected specular-reflection pixels;

step 2: set a variable t, and make the specular reflection component and the diffuse reflection component transition smoothly via least squares;

step 3: according to Mask_i, take the average of the R, G, B channels of the detected specular-reflection pixels in the original image as the color value of the light source, with the average chromaticity mean(S_{sp,i}(X)) as the light-source chromaticity, i.e.:

J_i(X) = a·J_{s,i}(X) + b·J_{d,i}(X)    (Formula 16)

S_{s,i}(X) = mean(S_{sp,i}(X))    (Formula 17)

where J_{s,i}(X) and J_{d,i}(X) are the specular and diffuse reflection components of the i-th channel pixel value at image point X, and a and b are the corresponding geometric factors. A new light-source chromaticity is obtained from Formula 17, the original image is normalized according to the new light-source color, the normalized values are stretched back to the range (0-255), and the procedure returns to step 1 to recompute the specular component, the diffuse component and the light-source chromaticity, until the light-source chromaticity no longer changes.
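The iteration in steps 1 to 3 can be sketched as below. The threshold form u = mean(J_min) + n·δ(J_min) follows Formula 15 as reconstructed above; the stopping tolerance and the simplified MSF update are assumptions of the sketch, and the least-squares smoothing of step 2 is omitted for brevity.

```python
import numpy as np

def separate_highlights(image, n=0.5, max_iter=10, tol=1.0):
    """Iteratively estimate the light color and split an image into diffuse
    and specular parts; returns (diffuse, specular, light_color)."""
    img = image.astype(np.float64)
    light = np.array([255.0, 255.0, 255.0])          # step 1: initial light color
    for _ in range(max_iter):
        norm = img * (255.0 / light)                 # normalize by current light color
        jmin = norm.min(axis=-1)                     # per-pixel minimum channel
        u = jmin.mean() + n * jmin.std()             # Formula 15
        mask = jmin > u                              # detected specular pixels (Mask_i)
        if not mask.any():
            break
        new_light = img[mask].mean(axis=0)           # step 3: mean RGB over specular pixels
        if np.linalg.norm(new_light - light) < tol:  # chromaticity no longer changes
            break
        light = new_light
    sf = norm - norm.min(axis=-1, keepdims=True)     # specular-free image (Formula 13)
    diffuse = np.where(mask[..., None], sf + u, norm)  # MSF-style estimate (Formula 14)
    specular = norm - diffuse
    return diffuse, specular, light
```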
The method for synthesizing realistic dynamic illumination changes of a single image further deconstructs the diffuse reflection components: the illumination-related diffuse-reflection Shading map is obtained on the basis of highlight removal. This is performed on the diffuse reflection image obtained by deconstruction, so the input image here is the diffuse reflection map, which is deconstructed into a diffuse-reflection Shading map and a reflectance map. A user-scribble method is adopted for the deconstruction, with three kinds of brushes. The first is the color brush: its R, G, B channel values differ, and it marks color-consistent regions, each color brush representing one color-consistency region. The second is the gray brush: its R, G, B channel values are proportional, and it marks brightness-consistent regions, each gray brush representing one brightness-consistency region. The third is the red brush: its R-channel value is 255, marking the brightest but non-highlight area in the image. These three brushes impose attribute constraints on the reflectance map and the Shading map of each region in the image, achieving a good deconstruction effect.
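A small sketch of how such scribbles might be classified into the three brush types; the encoding of the scribble image and the red-brush test are assumptions of the sketch.

```python
import numpy as np

def classify_scribbles(scribble: np.ndarray):
    """Split scribble pixels (HxWx3 uint8, zeros where nothing is drawn)
    into the three brush types described above."""
    r, g, b = scribble[..., 0], scribble[..., 1], scribble[..., 2]
    drawn = scribble.sum(axis=-1) > 0
    red = drawn & (r == 255) & (g == 0) & (b == 0)  # brightest, non-highlight region
    gray = drawn & ~red & (r == g) & (g == b)       # proportional channels: brightness-consistent
    color = drawn & ~red & ~gray                    # distinct channels: color-consistent
    return color, gray, red
```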
The method for synthesizing realistic dynamic illumination changes of a single image further estimates the illumination position information: the coordinates of the light source in three-dimensional space are estimated, and an equation for optimizing the illumination-condition information is proposed:

min_Q { a·Σ_X (C(X) − C_0(X))² + b·Σ_X (A(X) − A_0(X))² + c·||Q − Q_0||² }    (Formula 18)

where a, b, c are the weight coefficients of the specular-reflection Shading term, the diffuse-reflection Shading term and the light-source position residual term respectively, Q_0 is the estimated approximate initial value of the light-source position, C_0 and A_0 are the gray values at point X of the specular-reflection and diffuse-reflection Shading maps obtained by decomposing the original image, and C and A are the gray values at point X of the images rendered according to Formulas 11 and 12 when the light-source position is Q. The purpose of the whole energy equation is to solve for the unknown Q that minimizes it;

the light source used in the application is a single point light source whose size is not counted. When it irradiates the surface of a non-homogeneous object, a bright light spot forms at a certain point. Before setting the initial value Q_0, the spot peak point in the scene is detected and an approximate light-source direction is obtained from the law of specular reflection. The specular Shading map from the image deconstruction is used as the detection image; exploiting the large gray values of highlight points, a critical value is set to filter out pixels away from the main spot, giving a circular main-spot region, whose center point is the spot peak point;

given a specific, accurate depth map, the three-dimensional coordinates and normal vector of the pixel at the center point are solved for the corresponding image position, and, combined with the camera viewpoint, the approximate direction of the light source is obtained from specular reflection. Using the method of simplifying the three-dimensional spatial structure of a single image, the plane equation is obtained by data fitting, and the three-dimensional coordinates and normal of the specular light spot are computed, giving the approximate initial direction of the light source;

for Formula 18, the energy equation is solved by an optimization method for the light-source position, iterating continuously for the light-source position Q; when the iteration reaches a certain number of times, or when ||Q_0 − Q|| is smaller than a critical value, the whole iteration process ends and the approximate light-source position in the whole scene is obtained. The specific calculation steps are:

step one, the basic data acquisition stage: for the input image, the specular-reflection Shading map C_0 and the diffuse-reflection Shading map A_0 are obtained by image deconstruction; the corresponding depth information map AiM_0 is obtained by training on the training set, and AiM_0 is smoothed to obtain the smoothed, normalized depth map AiM*; for the input image, according to its estimated depth information map, the three-dimensional spatial configuration is fitted to data, giving the coordinates of each image point relative to the viewpoint in three-dimensional space;

step two, the light-source information solving stage: a least-squares calculation with the energy equation of Formula 18 gives a rough estimate of the light-source position, with a taken as 1, b taken as 8, and c differing from image to image, yielding the light-source position from the first calculation;

step three, iterative optimization of the light-source position: the light-source position information is solved under the constraints of a certain number of iterations and the critical value, giving an approximately correct estimate of the light-source position.
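A sketch of this solve using scipy's least-squares routine follows; the renderers are assumed to evaluate Formulas 11 and 12 over the fitted scene (for example with shading_at_point above), the weights a = 1 and b = 8 follow step two, and the value of c here is illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_light_position(Q0, C0, A0, render_spec, render_diff,
                         a=1.0, b=8.0, c=0.1):
    """Q0: initial light position from the spot-peak detection; C0/A0: the
    deconstructed specular/diffuse Shading maps; render_* map a candidate
    light position Q to rendered Shading maps of the same shape."""
    sa, sb, sc = np.sqrt(a), np.sqrt(b), np.sqrt(c)

    def residuals(Q):
        C = render_spec(Q)                      # rendered specular Shading at Q
        A = render_diff(Q)                      # rendered diffuse Shading at Q
        return np.concatenate([sa * (C - C0).ravel(),
                               sb * (A - A0).ravel(),
                               sc * (Q - Q0)])  # position-residual term
    return least_squares(residuals, np.asarray(Q0, dtype=float)).x
```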
The method for synthesizing realistic dynamic illumination changes of a single image further repairs the image in the specular-highlight area: using the input image together with the obtained three-dimensional scene information and illumination conditions, an image with changed illumination is obtained from the single input image, and image repair is performed on the diffuse reflection image from which the specular highlight has been separated;

the method for repairing the specular-highlight area of the single image comprises the following steps:

step (1), manually selecting the image area to be repaired according to the extent of the specular-highlight area;

step (2), extracting the repair boundary: while the boundary is not empty, calculating the priority weight of the boundary pixel blocks and selecting the target block with the maximum weight as the block to be matched;

and step (3), updating the confidence, and repeating steps (1) and (2) until the boundary is empty.
The method for synthesizing realistic dynamic illumination changes of a single image further edits the light source of the single image. Light-source processing and editing of a single image first performs image synthesis:

J_i(X) = a·J_{s,i}(X) + b·J_{d,i}(X)    (Formula 19)

where J_{s,i}(X) and J_{d,i}(X) are the specular and diffuse reflection components making up the i-th channel pixel value at image point X, and a and b are the corresponding geometric factors; the specular-highlight component image is obtained from the three-dimensional spatial structure of the scene by rendering according to Formula 11;

the light-source editing of a single image comprises the following categories:

the first category, moving the light-source position: moving the light source left, right, farther and nearer;

the second category, changing the light-source color: changing the light-source color to other common colors within a certain range;

the third category, changing both the light-source color and its position;

and the fourth category, adding ambient light to the scene while changing the light source, the ambient light of the scene being obtained from the reflectance map of the image deconstruction:

J_sur = T·J_light    (Formula 20)

where J_sur is the ambient brightness value, T is the reflectance value, and J_light is the ambient-light brightness value. Adding ambient light to the input image raises the brightness of the whole scene and presents a better visual effect.
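The four editing modes reduce to re-rendering the specular Shading at an edited light, tinting with the edited light color, and optionally adding the ambient term of Formula 20, as sketched below. The sketch reuses shading_at_point from the earlier Shading sketch; the main light direction (taken toward the scene centroid), the normalization and the weights are assumptions.

```python
import numpy as np

def relight(points, normals, view_pos, diffuse_img, reflectance,
            light_pos, light_color=(1.0, 1.0, 1.0), ambient=0.0,
            a=1.0, b=1.0, Ms=32.0, k=3.0):
    """points/normals: HxWx3 scene geometry; diffuse_img: repaired diffuse
    image (0-255); reflectance: HxWx3 reflectance map (0-1)."""
    h, w = diffuse_img.shape[:2]
    centroid = points.reshape(-1, 3).mean(axis=0)
    Ml = centroid - light_pos
    Ml = Ml / np.linalg.norm(Ml)                  # assumed main light direction
    spec = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            s, _ = shading_at_point(points[y, x], normals[y, x],
                                    light_pos, Ml, view_pos, Ms, k)
            spec[y, x] = s
    spec /= spec.max() + 1e-12                    # normalized specular Shading
    color = np.asarray(light_color, dtype=np.float64)
    out = a * spec[..., None] * color * 255.0 + b * diffuse_img  # Formula 19
    out += ambient * reflectance * 255.0          # Formula 20: Jsur = T * Jlight
    return np.clip(out, 0, 255)
```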
Compared with the prior art, the innovation points and advantages of the application are as follows:
first, the main innovative points and contributions of the present application include: firstly, a dynamic illumination processing and editing method for a single image is provided, in which the three-dimensional spatial configuration information of the scene is obtained by training on and processing an RGB-D data set, and, combined with an image deconstruction technique, the problem that light-source information must be solved with measuring equipment is overcome; secondly, a method for rendering and synthesizing images with a reflection model is provided, which makes full use of the results of image deconstruction and illumination-information processing to synthesize realistic illumination-change images, greatly simplifying the otherwise complicated image rendering process and solving the problem of insufficient rendering-parameter information when rendering from a single image. The effectiveness of the proposed method is verified by a large number of experimental results, and it has wide application prospects, such as producing illumination animations from a single image and achieving illumination consistency in virtual-real fusion;
second, the prior art faces many difficulties in the dynamic illumination processing of a single image: information such as the three-dimensional spatial structure of the scene and the illumination conditions must be known during illumination-image generation, yet the three-dimensional spatial structure of the whole scene is difficult to recover from two-dimensional information; moreover, the prior art adopts either illumination detection devices or existing spatial-configuration information, both of which require additional measuring equipment and are thus unsuitable for general image illumination-change synthesis. The application instead performs dynamic illumination-change synthesis on a single input image in three parts: image illumination rendering based on the three-dimensional spatial structure, estimation of the illumination information of the single image, and editing and synthesis of a vivid dynamic illumination-change image. First the depth information of the single input image is estimated, yielding the three-dimensional spatial structure information of the scene; then the illumination information of the scene is estimated; finally the illumination information is edited and a realistic illumination-change image is rendered and synthesized. The resulting illumination-change images have a good dynamic effect, high fidelity and broad application prospects;
third, after the illumination information of the single image is estimated, rough three-dimensional information of the image scene and preliminary information on the color, direction and position of the light source are obtained, and on this basis the light source in the scene is processed and edited to obtain images with realistic light-source changes. The image with a changed light source is obtained by linearly recombining the diffuse reflection image and the highlight image; compared with solving for the various surface attributes with complex equipment, this gives a better processing effect for simple real scenes, and has great advantages over the prior art in suppressing spurious edges and noise artifacts;
fourth, the key processes and notable improvements of the present application include: firstly, the specular highlight components in the single image are removed without assuming a light source; a new iteration of the light-source calculation is performed for each newly separated specular highlight component, and iteration stops when the light-source calculation no longer changes, yielding a light-source color close to the real scene and an optimized separation of the specular highlight components; secondly, the depth information of the single image is estimated on the basis of an RGB-D training sample set, an approximate depth value being obtained by training-set statistics. To address the low accuracy of single-image depth estimation, the sample set is screened manually in a targeted way during training-sample selection, and on the basis of the obtained single-image depth, depth values of high reliability are selected interactively, guided by observation and analysis of the depth information, for data fitting to obtain a regular three-dimensional scene spatial configuration. After the light-source information of the single image and the three-dimensional spatial configuration of the scene are estimated, light-source editing is applied on the basis of the existing data. The prior art cannot completely remove specular highlights from a single image; the application repairs the areas containing specular-highlight residue with a simple image-repair method, and then obtains realistic illumination-change image synthesis results by editing the light-source position, the light-source color and the illumination intensity of the light-source information.
Drawings
Fig. 1 is a flowchart of an image illumination rendering method based on a three-dimensional space configuration.
Fig. 2 is a diagram illustrating a depth image estimation result of a single image.
FIG. 3 is a graph depicting the quantization relationship between curves based on end-point and straight-line feature probability models.
Fig. 4 is a diagram of a specular spot center detection process for estimating illumination location information.
FIG. 5 is a flow chart of mirror highlight image restoration.
Fig. 6 shows the specular Shading images corresponding to three edits of the present application.
Fig. 7 is a schematic diagram of the light-source editing effect on the experiment-one image of the present application.
Fig. 8 is a schematic diagram of the light-source editing effect on the experiment-two image of the present application.
Detailed description of the invention
The following describes, with reference to the accompanying drawings, the technical scheme of the method for synthesizing realistic dynamic illumination changes of a single image provided by the present application, so that those skilled in the art can better understand and implement the present application.
The illumination information and the three-dimensional spatial configuration information of the scene are obtained by analyzing and processing a single image, and the illumination information is then edited and rendered to synthesize a realistic illumination-change image.
In the prior art, dynamic illumination processing of a single image faces many difficulties. On the one hand, the traditional illumination image generation process requires information such as the three-dimensional spatial structure and the illumination conditions of the scene, but according to the imaging principle a single image is only three-dimensional scene information projected onto a two-dimensional plane, and it is difficult to recover the three-dimensional spatial structure of the whole scene from this two-dimensional information. On the other hand, current estimation of the illumination information of a single image generally either uses an illumination detection device or relies on existing spatial-configuration information; both require additional measuring equipment and are unsuitable for general image illumination-change synthesis work.
Around the above problems, the application performs dynamic illumination-change synthesis on a single input image in three parts: image illumination rendering based on the three-dimensional spatial structure, estimation of the illumination information of the single image, and editing and synthesis of a vivid dynamic illumination-change image. First, the depth information of the single input image is estimated on the basis of an RGB-D data set, yielding the three-dimensional spatial structure information of the scene; then the illumination information of the scene is estimated by a method combining image deconstruction with image rendering; finally, combining the three-dimensional spatial structure information and the illumination information of the scene, the illumination information is edited and a realistic illumination-change image is rendered and synthesized.
First, image illumination rendering based on the three-dimensional spatial structure
Specular highlights, light-dark shading, shadows and the like in a generated image are all sensitive to the illumination conditions and change significantly as the illumination changes. The application therefore extracts feature images related to the illumination conditions from the single image, constrains the illumination information according to these features, and iteratively computes illumination-condition information close to reality.
The algorithm flow for estimating the illumination information of a single image is shown in Fig. 1; the image illumination rendering method based on the three-dimensional spatial structure comprises three steps:
step 1, estimating the depth information of the single image: a depth information map of the single image is obtained on the basis of an RGB-D image set; the depth information map then yields a rough three-dimensional spatial structure of the scene; combining this structure with the action of illumination gives a rendered specular-reflection Shading map and a rendered diffuse-reflection Shading map, i.e. a rough Shading image under the specific illumination condition. In this process the estimated scene depth information map is the basis of the rendered images, and its fineness directly influences the subsequent rendering of the spatial pattern, so step 1 focuses on single-image depth estimation and image rendering;
step 2, deconstructing the real image: the input image is deconstructed into a specular-reflection Shading map, a diffuse-reflection Shading map and a reflectance map. The reflectance map is the color of the scene objects and does not change with the illumination conditions; the specular-reflection and diffuse-reflection Shading maps, by contrast, are sensitive to the illumination conditions and change correspondingly with them. Obtaining these two component maps contributes to the accurate estimation of the illumination information, i.e. step 2 serves to obtain the specular-reflection Shading map and the diffuse-reflection Shading map.
And step 3, optimizing the light-source position information according to the images obtained in steps 1 and 2, so that the finally solved light-source position simultaneously satisfies the data fitting of the specular highlight and the diffuse-reflection Shading; the key of step 3 lies in the accuracy of the result of each step and the choice of the initial-value condition.
(I) Estimating a depth information map of a single image
The conversion of three-dimensional information into two-dimensional information naturally loses a large amount of information. Owing to the limitation of the viewing angle, the information in a single image is often insufficient to obtain the depth of the scene, so it is generally difficult to recover the spatial configuration of a three-dimensional scene from the information of one image alone.
In the prior art, the Kinect scanner developed by Microsoft is generally adopted to obtain scene depth information: a single RGB-D image yields rough three-dimensional spatial-configuration information of the whole scene. But this method cannot be used by anyone at any time, and without Kinect equipment or under other special conditions, obtaining the depth information of a single image is difficult.
Aiming at the limitations of the direct Kinect scanning method, the application provides a single-image depth-information estimation method, whose process is as follows:
the first step is as follows: a pre-processing stage, in which candidate images similar to the input image in feature space are obtained from a known data set. On the basis of an RGB-D image data set, the several most similar images are selected as candidate images according to RGB-space feature similarity with the input image. An RGB-D image set of 120 images is used as the sample set, with 60 RGB images and 60 depth images. After the training-set images are prepared, high-level image features of each image in the data set are computed using GIST and optical-flow features, and a K-NN nearest-neighbour search is performed over the whole data set with K set to 7, i.e. the 7 images whose RGB-space features are most similar to the input image are taken as candidate images, with their corresponding depth maps as candidate depth images;
the second step: rough depth-map estimation, giving a rough depth map of the input image. The depth information maps of the candidate images are preliminarily matched and deformed against the input image: each candidate image is deformed to obtain a deformed candidate image, and then, for each pixel block of the input image, dense scene calibration is performed between the depth values of the corresponding pixel blocks in the deformed candidate depth images and the input image, giving a rough depth information map of the input image.
The third step: optimizing the rough depth map to obtain a smooth, complete depth map. Since the pixel blocks of the rough depth map from the second step are not smooth across their boundaries, the rough depth information map is interpolated and smoothed to obtain an optimized depth estimation result. The optimization energy for the depth information is:

E(A) = Σ_i [ B_t(A_i) + a·B_s(A_i) + b·B_p(A_i) ] + log V    (Formula 1)

where A is the final estimated depth information image for the input image, V is the probability normalization constant, B_t(A_i) is the candidate-depth data term, B_s(A_i) is the spatial smoothing term, B_p(A_i) is the depth-data prior, and a and b are the corresponding coefficients. The data term B_t(A_i) describes the difference between the estimated depth information map A and each deformed candidate depth map, i.e. the residual between the 7 deformed candidate depth maps and the depth map being estimated; B_s(A_i) constrains spatial smoothness through the gradient values in the x and y directions; and B_p(A_i) is the prior term of the depth-image estimate:

B_p(A_i) = Φ(A_i − x_i)    (Formula 2)

where x_i is the average of the depth values at each point over the data set, which guides the estimate as a prior value.
The corresponding depth map is obtained according to the single-image depth-map estimation method above. With enough training samples of sufficient quality, the estimated depth information can even be better than the directly scanned result, because sensor error and low image resolution cause loss of part of the scanned depth information, and the prior depth-map information can complement the missing information during depth estimation.
Fig. 2 shows the depth-image estimation result for a single image; comparing results (a) and (b), it can be seen that (b) gives an approximate outline of the scene in (a). The algorithm flow above is therefore correct for depth estimation of a single image. Image (a) is illuminated by only a single point light source in a dark environment and is a low-resolution image; Kinect cannot obtain a complete depth map of a scene in dark places, nor a complete depth result for close-range scenes, while the present method still obtains good results at low resolution and at close range.
(II) Simplifying the three-dimensional spatial structure of a single image
To obtain a scene spatial structure close to the real situation, data fitting is performed on the three-dimensional spatial structure of the whole scene. For a pixel point with coordinates (x, y) in the imaging-plane coordinate system IPCS, the corresponding coordinates in the camera coordinate system are N(X_c, Y_c, Z_c), where g is the focal length of the camera, i.e. the distance from the optical center of the camera to the imaging plane. The depth information estimation is based on the existing depth data set, and g is taken as 525 mm to obtain a three-dimensional spatial structure close to the real situation.
Because part of the depth information in a single-image depth estimation map is not accurate enough, in order to obtain a spatial configuration close to the input image, the principal planes in the scene are fitted after the depth image is obtained; the fitted plane PaF is:

D_i·X + E_i·Y + S_i·Z + A_i = 0    (Formula 3)

where D_i, E_i, S_i, A_i are the plane coefficients of the i-th plane fitted in three-dimensional space. Assuming the principal plane in the real scene does not pass through the origin (0,0,0), i.e. A_i ≠ 0, Formula 3 reduces to:

D_i·X + E_i·Y + S_i·Z = 1    (Formula 4)

If the point N is on the plane, then:

D_i·X_c + E_i·Y_c + S_i·Z_c = 1    (Formula 5)

According to the conversion formula from the plane coordinate system to the camera coordinate system and Formulas 3 to 5, the three-dimensional coordinates in the camera system of the point N imaged at plane point n are:

X_c = x / (D_i·x + E_i·y + S_i·g)    (Formula 6)

Y_c = y / (D_i·x + E_i·y + S_i·g)    (Formula 7)

Z_c = g / (D_i·x + E_i·y + S_i·g)    (Formula 8)

Three-dimensional position information of each point on the principal plane in the camera coordinate system is thus obtained, giving a rough three-dimensional spatial shape of the whole scene, which plays an important role in the subsequent rendering of the Shading images.
(III) calculating the spatial configuration of the illumination light and dark image
Specular reflection Shading corresponds to the specular highlights in a scene; its reflection characteristic is:
Q_{s,i}(X) = W_{s,i} · J_l · (dot(U, T))^{M_s}    (formula 9)
where Q_{s,i} is the radiance value of the specular reflection at the i-th channel of the corresponding pixel, W_{s,i} is the specular reflection coefficient of the object surface, J_l is the illumination intensity, U is the viewing direction from point X to the viewpoint, T is the specular reflection direction of the light at point X, M_s is a highlight coefficient describing the degree of convergence of the specularly reflected light, |U| = |T| = 1, and dot(U, T) is the dot product of the vectors U and T;
besides the incident light intensity and the roughness of the object surface, the intensity of the reflected light is related to the position of point X in three-dimensional space, and the three-dimensional position of point X is determined relative to the camera viewpoint. Considering the influence of the three-dimensional structure information at point X, a spatial configuration independent of the light source and of the surface reflection characteristics is extracted. An indoor light source is not a directional light source, so the influence of distance on the light source cannot be ignored. Once the main direction of the light source is determined, the reflected light intensity is related to the light source direction M_l and to the angle between the light source and point X: the larger the angle and the farther point X is from the light source, the smaller the reflected light intensity at that point, hence the factor dot(M_l, Q_d), where M_l is the light source direction and Q_d is the vector from the light source point to the object surface point X, with ||Q_d|| not necessarily equal to 1. Further, according to specular reflection:
T = Q − 2·dot(M, Q)·M    (formula 10)
where Q, T, M are the incident light direction, the reflected light direction, and the normal vector at point X, respectively, with |Q| = |M| = |T| = 1. The fused specular reflection Shading spatial configuration expression is:
C(X) = (dot(U, T))^{M_s} · dot(M_l, Q_d) / ||Q_d||^{k+1}    (formula 11)
where k is a coefficient. Combining the spatial position relationship between the light source point and the object surface point X, the spatial configuration information of the diffusely reflected light is described as:
A(X) = dot(M, −Q_d) / ||Q_d||^4    (formula 12)
The exponent 4 of ||Q_d||^4 in formula 12 is based on the ||Q_d||^{k+1} term of formula 11, where the coefficient k + 1 is a value verified by experiment; with these data a reasonable Shading image is obtained.
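As a minimal rendering sketch under the stated assumptions (the reconstructed forms of formulas 10 to 12 above; the function name, the main-direction argument Ml, and the default exponents are illustrative):

```python
import numpy as np

def render_shading(P, M, light_pos, view_pos, Ml, Ms=32.0, k=3.0):
    """Sketch of rendering the specular (formula 11) and diffuse (formula 12)
    Shading spatial-configuration values at a set of surface points.

    P         : (N, 3) three-dimensional points X in camera coordinates
    M         : (N, 3) unit surface normals at the points
    light_pos : (3,) point light source position
    view_pos  : (3,) camera viewpoint
    Ml        : (3,) unit main direction of the light source
    """
    Qd = P - light_pos                                  # light source -> point X
    dist = np.linalg.norm(Qd, axis=1)
    Qhat = Qd / dist[:, None]                           # unit incident direction
    # Reflected direction T = Q - 2 dot(M, Q) M (formula 10)
    T = Qhat - 2.0 * np.sum(M * Qhat, axis=1)[:, None] * M
    U = view_pos - P
    U = U / np.linalg.norm(U, axis=1)[:, None]          # unit viewing direction
    # Specular spatial configuration (formula 11 as reconstructed above)
    C = np.clip(np.sum(U * T, axis=1), 0, None) ** Ms * (Qd @ Ml) / dist ** (k + 1)
    # Diffuse spatial configuration (formula 12 as reconstructed above)
    A = np.clip(np.sum(M * (-Qd), axis=1), 0, None) / dist ** 4
    return C, A
```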
Secondly, estimating illumination information of a single image
Estimating the single-image illumination information includes deconstructing the illumination image and estimating illumination location information.
(I) Deconstructing the illumination image
According to the influence of the illumination conditions on the image generated from the scene, the present application deconstructs the image into the characteristic sub-images most relevant to illumination, i.e., the original image is deconstructed into a specular reflection Shading image and a diffuse reflection Shading image.
1. Deconstructing specular components
A specular highlight component exists in most images. The present application provides a simple and effective method that makes no assumption about the illumination color and quickly separates the highlight component and the diffuse reflection component in the image, with the following steps:
step 1: assuming that the initial color value of the light source is (255, 255, 255), obtain an SF image and an MSF image from the input image:
J_{sf,i}(X) = J_i(X) − J_min(X)    (formula 13)
J_{msf,i}(X) = J_{sf,i}(X) + u    (formula 14)
u(X) = J_min(X), if J_min(X) ≤ mean(J_min) + n·δ(J_min); otherwise u(X) = mean(J_min) + n·δ(J_min)    (formula 15)
J_{msf,i}(X) retains the component value at point X if it is a diffuse reflection component; if it is not a diffuse reflection component, the minimum channel value of each pixel of the whole image is used as statistics to obtain an approximate estimate of the specular reflection component. In formula 15, mean(J_min) is the average of the minimum channel values over all pixels of the whole image, δ(J_min) is the corresponding standard deviation, and n is a coefficient in the range n ∈ [0, 1]; in this embodiment a better effect is obtained with n = 0.7. Diffuse reflection pixels and specular reflection pixels are thus distinguished, and the corresponding mask image Mask_i is obtained from the detected specular reflection pixels;
step 2: set a variable t and make the specular reflection component and the diffuse reflection component transition smoothly by a least squares method;
step 3: according to Mask_i, take the average of the R, G, B channels of the detected specular reflection pixels in the original image as the color value of the light source, and take the average chromaticity value mean(S_{sp,i}(X)) as the light source chromaticity, i.e.:
J_i(X) = a·J_{s,i}(X) + b·J_{d,i}(X)    (formula 16)
such that:
S_{s,i}(X) = mean(S_{sp,i}(X))    (formula 17)
where J_{s,i}(X) and J_{d,i}(X) are the specular reflection component and the diffuse reflection component composing the i-th channel pixel value at image point X, and a and b are the corresponding geometric factors. A new light source chromaticity value is obtained according to formula 17, the original image is normalized according to the new light source color and its value range is stretched back to (0-255), and the procedure returns to step 1 to recompute the specular reflection component, the diffuse reflection component, and the light source chromaticity until the light source chromaticity no longer changes.
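A minimal sketch of this iterative separation, assuming the piecewise form of formula 15 reconstructed above; the coefficient value, the stopping tolerance, and the helper names are illustrative:

```python
import numpy as np

def separate_specular(img, n=0.7, max_iter=10):
    """Sketch of steps 1-3: iteratively re-estimate the light source color and
    split an (H, W, 3) float RGB image (values in [0, 255]) into specular and
    diffuse parts via the SF/MSF construction (formulas 13-15)."""
    light_color = np.array([255.0, 255.0, 255.0])   # step 1 initial light color
    for _ in range(max_iter):
        norm = img * (255.0 / light_color)          # normalize by current light color
        J_min = norm.min(axis=2)                    # per-pixel minimum channel
        sf = norm - J_min[..., None]                # SF image (formula 13)
        thresh = J_min.mean() + n * J_min.std()     # statistics of formula 15
        mask = J_min > thresh                       # candidate specular pixels
        # Step 3: average color of detected specular pixels as the light color
        new_color = img[mask].mean(axis=0) if mask.any() else light_color
        if np.allclose(new_color, light_color, atol=1.0):
            break                                   # chromaticity no longer changes
        light_color = new_color
    u = np.minimum(J_min, thresh)                   # piecewise offset (formula 15)
    specular = np.clip(norm - (sf + u[..., None]), 0, None)
    diffuse = norm - specular
    return specular, diffuse, mask
```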
The input image of the experiment is shown in fig. 3(a); after 5 iterations on average, the light source chromaticity no longer changes and the procedure stops. The deconstruction results for the specular highlight components of this group of images are shown in fig. 3: (a) is the input image, and (b), (c) are the deconstruction results of the normalized image using the present algorithm. The method thus achieves a certain visual improvement, obtains the approximate illumination chromaticity and color information, yields an illumination-independent image deconstruction through the iteration steps, and is more robust in solving for the light source color.
2. Deconstructing the diffuse reflective component
The diffuse reflection Shading image associated with the illumination is obtained after removing the highlights; the processing proceeds on the diffuse reflection image obtained by deconstruction, so the input image is a diffuse reflection image, which is deconstructed into a diffuse reflection Shading image and a reflectance image. A user-scribble method is adopted to deconstruct the diffuse reflection Shading image, with three kinds of brushes. First, the color brush: the R, G, B channel values of the brush are different, and each color brush marks one color-consistency region. Second, the gray brush: the R, G, B channel values of the brush are proportional, and each gray brush marks one brightness-consistency region. Third, the red brush: the R channel value of the brush is 255, marking the brightest but non-highlight region in the image. These three brushes impose attribute constraints on the reflectance map and the Shading map of each region in the image, so as to achieve a good deconstruction effect.
(II) Estimating illumination position information
The coordinates of the light source in three-dimensional space are estimated from the information obtained above; the present application provides an equation for optimizing the illumination condition information:
E(Q) = a·Σ_X (C(X) − C_0(X))² + b·Σ_X (A(X) − A_0(X))² + c·||Q − Q_0||²    (formula 18)
where a, b, c are the weight coefficients of the specular reflection Shading term, the diffuse reflection Shading term, and the light source position residual term, respectively, Q_0 is the estimated approximate initial value of the light source position, C_0 and A_0 are the gray values at point X of the specular reflection Shading map and the diffuse reflection Shading map deconstructed from the original image, and C and A are the gray values at point X of the images rendered according to formulas 11 and 12 when the light source position is Q. The role of the whole energy equation is to solve for the unknown Q that minimizes the energy.
The light source used in this application is a single point light source whose size is neglected. When the light source irradiates the surface of a non-homogeneous object, a bright light spot forms at some point. Before setting the initial value Q_0, the spot peak point in the scene is first detected and the approximate light source direction is obtained according to the law of specular reflection: the specular Shading map from the image deconstruction is used as the detection image, a threshold is set according to the characteristically large gray values of highlight points to filter out pixels outside the main spot, a circular main-spot region is obtained, and the center point of the circular region, i.e., the spot peak point, is determined.
Given a complete and accurate depth map, the three-dimensional coordinates and normal vector of the pixel at the spot center are obtained and, combined with the camera viewpoint, the approximate direction of the light source follows from specular reflection; however, the depth map obtained by training on the data set cannot guarantee that the depth value at every point is correct. To reduce errors in the initial light source direction caused by inaccurate depth data, the method of simplifying the three-dimensional spatial structure of a single image is adopted: a plane equation is obtained after data fitting, and the three-dimensional spatial coordinates and three-dimensional normal vector of the specular light spot are computed to obtain the approximate initial direction of the light source.
As shown in fig. 4, (a) is the input image and (b) is its corresponding depth map; (c) is the specular Shading map obtained by deconstructing (a). The circular region of the main light spot is obtained by detecting the spot region of (c). From the detected spot region and the plane obtained by data fitting, the coordinate position of the spot center on the imaging plane is obtained; the depth value of the corresponding pixel is then found in the depth map (b), and, given the known focal length of the image, the three-dimensional spatial coordinate position of the pixel relative to the viewpoint and its three-dimensional normal value are obtained.
For formula 18, solving the energy equation once does not necessarily yield the optimal light source position information. The method therefore adopts a light source position optimization scheme that iteratively solves for the light source position Q; when the iteration reaches a certain number of times, or ||Q_0 − Q|| becomes smaller than a threshold, the whole iteration process ends and the approximate light source position information in the whole scene is obtained. The specific calculation steps are as follows:
step one, basic data acquisition stage: for the input image, obtain the specular reflection Shading map C_0 and the diffuse reflection Shading map A_0 by image deconstruction; obtain the corresponding depth information map AiM_0 by training on the training set, and smooth AiM_0 to obtain a smoothed and normalized depth map AiM*; then, for the input image, fit the three-dimensional spatial configuration of the image according to its estimated depth information map to obtain the coordinates of each image point relative to the viewpoint in three-dimensional space;
step two, light source information solving stage: perform a least-squares calculation with the energy equation of formula 18 to obtain a rough estimate of the light source position information, taking a = 1 and b = 8, with c varying from image to image; this gives the light source position from the first solve;
step three, iteratively optimizing the light source position: subject to the iteration count and threshold constraints, the light source position information is solved to obtain an approximately correct light source position estimate; this embodiment uses an iteration limit of 30.
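A minimal sketch of this solve, assuming SciPy's least-squares optimizer and a render_shading routine like the one sketched earlier; the weights follow the a = 1, b = 8 setting, while c, the tolerance, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_light_position(Q0, P, M, view_pos, Ml, C0, A0,
                            a=1.0, b=8.0, c=0.5, max_iter=30, tol=1e-3):
    """Iteratively minimize the formula-18 energy over the light position Q.
    C0, A0 are the deconstructed Shading gray values at the points P."""
    def residuals(Q):
        C, A = render_shading(P, M, Q, view_pos, Ml)   # formulas 11 and 12
        return np.concatenate([
            np.sqrt(a) * (C - C0),                     # specular Shading term
            np.sqrt(b) * (A - A0),                     # diffuse Shading term
            np.sqrt(c) * (Q - Q0),                     # position residual term
        ])

    Q = np.asarray(Q0, dtype=float)
    for _ in range(max_iter):
        Q_new = least_squares(residuals, Q).x
        if np.linalg.norm(Q_new - Q) < tol:            # threshold on the update
            break
        Q = Q_new
    return Q
```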
Thirdly, editing and synthesizing vivid dynamic illumination change images
After the illumination information of a single image has been estimated, rough three-dimensional information of the image scene and preliminary information on the color, direction, and position of the light source are available; on this basis, the light source in the scene is processed and edited to obtain images with realistic light source changes. The light-source-change images of this application are obtained by linear reconstruction from the diffuse reflection image and the highlight image; compared with using complex equipment to solve for various surface attributes, this application handles simple real scenes better.
(I) Restoring the image of the specular highlight area
The separation of specular highlight components is a complex and difficult problem, and prior-art algorithms cannot completely separate specular highlight components from a single image. This application uses the input image together with the obtained three-dimensional scene information and illumination conditions to obtain illumination-changed images, i.e., illumination-changed images are obtained from a single input image, and the diffuse reflection image from which the specular highlights were separated is repaired by image restoration.
The method for repairing the specular highlight area of the single image comprises the following steps:
step (1), manually selecting an image area to be repaired according to the range of the mirror surface highlight area;
step (2), extracting a repairing boundary: when the boundary is not empty, calculating the priority weight of the boundary pixel block, and selecting a target block with the maximum weight as a block to be matched;
step (3), updating the confidence, and repeating steps (1) and (2) until the boundary is empty.
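A minimal skeleton of this repair loop; the priority is simplified here to the patch-confidence term and the fill rule to a local average, stand-ins for illustration rather than the exact matching procedure, and the region to repair is assumed to lie away from the image border:

```python
import numpy as np
from scipy import ndimage

def repair_highlight_region(img, mask, patch=9):
    """Repair the region where mask is True in a float (H, W, 3) image,
    following steps (1)-(3): extract the boundary, pick the highest-priority
    block, fill it, update the confidence, repeat until the boundary is empty."""
    conf = (~mask).astype(float)        # step (1): known pixels start at confidence 1
    half = patch // 2
    while mask.any():
        # Step (2): repair boundary = masked pixels adjacent to known pixels
        boundary = mask & ndimage.binary_dilation(~mask)
        ys, xs = np.nonzero(boundary)
        # Priority weight of each boundary pixel block (simplified: mean confidence)
        pr = [conf[y - half:y + half + 1, x - half:x + half + 1].mean()
              for y, x in zip(ys, xs)]
        best = int(np.argmax(pr))
        y, x = ys[best], xs[best]
        # Fill the target pixel from the known pixels of its block (simplified match)
        win = img[y - half:y + half + 1, x - half:x + half + 1]
        known = ~mask[y - half:y + half + 1, x - half:x + half + 1]
        img[y, x] = win[known].mean(axis=0)
        # Step (3): update the confidence and shrink the region
        conf[y, x] = pr[best]
        mask[y, x] = False
    return img
```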
Fig. 5 shows the flow of image restoration for a specular highlight region: (a) is the diffuse reflection image obtained by removing the highlights from the original input image, the region to be repaired is manually selected in (b), and the repaired result is obtained in (c); since the scene material in (c) is relatively uniform, the overall repair effect is relatively good.
(II) Editing the light source of a single image
Because a single image carries a limited amount of information and the shooting environment is constrained, the light source processing and editing of a single image first performs image synthesis:
J_i(X) = a·J_{s,i}(X) + b·J_{d,i}(X)    (formula 19)
where J_{s,i}(X) and J_{d,i}(X) are the specular reflection component and the diffuse reflection component making up the i-th channel pixel value at image point X, and a, b are the corresponding geometric factors. The specular highlight component image is rendered according to formula 11 from the three-dimensional spatial structure of the scene.
The light source editing of a single image comprises the following steps:
first, moving the light source position: moving the light source in a left, right, far and near moving mode;
second, changing the light source color: changing the color of the light source to other common colors within a certain range;
a third category, changing both the light source color and position;
and in the fourth category, adding an ambient light to the scene while changing the light source, namely obtaining the ambient light of the scene according to the reflectivity map of the image deconstruction:
J_sur = T · J_light    (formula 20)
where J_sur is the ambient brightness value, T is the reflectance value, and J_light is the ambient light brightness value; adding ambient light to the input image raises the brightness of the whole scene and presents a better visual effect.
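A minimal sketch of the final linear synthesis, assuming the formula 19/20 structure above; the scaling of the specular layer by the light color and all parameter names are illustrative:

```python
import numpy as np

def synthesize_relit(diffuse_repaired, specular_shading, reflectance,
                     light_color, a=1.0, b=1.0, ambient=0.0):
    """Combine a repaired diffuse image (H, W, 3), a rendered specular Shading
    map (H, W), and optional ambient light from the reflectance map (H, W, 3)."""
    spec = specular_shading[..., None] * (np.asarray(light_color) / 255.0)
    out = a * spec + b * diffuse_repaired          # linear synthesis (formula 19)
    out = out + reflectance * ambient              # ambient term J_sur = T * J_light
    return np.clip(out, 0.0, 255.0)
```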
Fig. 6 shows the rendered images corresponding to the various edits above, where (a) is the input image, (b) is the specular Shading rendering corresponding to (a), (c) and (d) are rendering results obtained by moving the light source of (a), and (e) and (f) are rendering results obtained by changing the light source color on the basis of (a) and moving the light source. As can be seen from fig. 6, the estimated three-dimensional space model is applied well in this application: editing the light source renders specular Shading images with various light source changes, and these images can be used in subsequent image synthesis.
Fourth, experimental results and analysis
The experiments show the illumination change results for single images. For an input image with a resolution of 840 × 626, estimating the depth information of the single image takes 72 s on average; the separation of specular highlight components takes 5 iterations on average, about 1.08 s in total; the intrinsic image deconstruction takes 100 iterations on average, 346 s in total; the light source estimation takes 7 iterations on average, about 2.6 s in total; and rendering the composite image takes 0.48 s on average. To illustrate the effectiveness and practicality of this application, two groups of experimental results are shown.
The first group of images is a typical indoor scene, consisting of a laboratory computer desk and a computer; a series of illumination-changed images is obtained by editing the illumination conditions in the image. In the figure, (a) is the input image and (b) is the image rendered and synthesized from the illumination information and three-dimensional spatial configuration information of (a); (c) is a composite image with the light source color changed, (d) to (f) are composite images obtained by moving the light source position, and (f) additionally includes ambient light. The light source changes within the scene can be seen from (a) to (f). Because the synthesized rendering of this application follows the reflection model, the diffuse reflection image with the specular Shading component separated out is linearly combined with the rendered specular Shading component; considering the limitations of the specular highlight removal algorithm, and in order to obtain more lifelike illumination-changed images, the input is a weakly illuminated real scene image. This choice ensures that the diffuse image after highlight removal does not retain too many highlights, which eases subsequent processing and does not affect the final synthesis effect. The lighting editing effect in fig. 7 shows that the method of this application is clearly effective.
The second group of results is more complex, mainly comprising 5 planes, with all scene objects made of plastic or paper; shadow changes in the scene are not considered in the illumination processing and editing. As shown in fig. 8, (b) is the result of synthesizing the estimated light source information with the rendered Shading spatial information and looks very realistic; (c) to (e) are results of moving the light source position on the basis of (b), where three surfaces in (d) show obvious illumination effects, namely the green wall, the edge of the upper plane of the mineral-water carton, and the shadow near the blue wall. (f) to (h) are much brighter than (a) and (b) because ambient light has been added to these result images, raising the brightness of the whole scene and producing a better visual effect.

Claims (9)

1. The method for synthesizing the vivid dynamic illumination change of the single image is characterized in that illumination information and three-dimensional space configuration information in a single image scene are obtained through estimation, and then the illumination information is rendered to edit and synthesize an illumination change image with reality sense;
the application is divided into three parts to carry out dynamic illumination change synthesis on a single input image, and the three parts are respectively: the method comprises the steps of performing image illumination rendering based on a three-dimensional space structure, estimating illumination information of a single image, editing and synthesizing a vivid dynamic illumination change image, firstly, estimating depth information of the single input image based on an RGB-D data set, estimating to obtain three-dimensional space structure information of a scene, then estimating to obtain the illumination information of the scene by adopting a method combining image deconstruction and image rendering, and finally, editing the illumination information by combining the three-dimensional space structure information and the illumination information of the scene, and rendering and synthesizing the illumination change image with reality;
the image illumination rendering method based on the three-dimensional space structure comprises the following three steps:
step 1, estimating depth information of a single image: obtaining a depth information map of a single image based on an RGB-D image set, then obtaining a rough three-dimensional space structure of a scene by using the depth information map, and obtaining a rendered specular reflection Shading map and a rendered diffuse reflection Shading map by combining the three-dimensional space structure of the scene under the action of illumination to obtain a rough Shading image under a specific illumination condition;
step 2, deconstructing the real image: decomposing an input image into a specular reflection Shading graph, a diffuse reflection Shading graph and a reflectivity graph;
step 3, optimizing the light source position information according to the images obtained in step 1 and step 2, so that the finally solved light source position information simultaneously fits the specular highlight and diffuse reflection Shading data;
estimating illumination information of a single image, wherein the illumination information comprises an deconstructed illumination image and estimated illumination position information, and the deconstructed illumination image comprises a deconstructed specular reflection component and a deconstructed diffuse reflection component;
thirdly, editing and synthesizing a vivid dynamic illumination change image comprises repairing the image of the specular highlight area and editing the light source of the single image: first, starting from the depth information map of the single image, the specular highlight image is obtained by Shading-image rendering; then the residual specular highlight components are removed from the diffuse reflection image, and the specular highlight area is image-repaired to obtain an ideal repaired diffuse reflection image; finally, the rendered specular highlight image and the ideal diffuse reflection image are linearly synthesized based on the reflection model to obtain a vivid dynamically illumination-changed image.
2. The method for synthesizing realistic dynamic illumination change of a single image according to claim 1, wherein the single-image depth information estimation method proceeds as follows:
the first step, a pre-processing stage: obtain candidate images similar to the input image in feature space from the known data set: based on an RGB-D image data set, select the several most similar images as candidate images according to the RGB spatial feature similarity to the input image; compute high-level image features for each image in the data set using GIST and optical flow features, and perform a K-NN nearest neighbor search over the whole data set with K set to 7, i.e., the 7 images whose RGB spatial features are most similar to the input image are taken as candidate images, with their corresponding depth maps as candidate depth images;
the second step, rough depth map estimation, obtaining a rough depth map of the input image: perform preliminary matching deformation between the depth information maps of the candidate images and the input image: warp the candidate images to obtain deformed candidate images, and then, for each pixel block of the input image, perform dense scene calibration between the depth values of the corresponding pixel blocks in the warped candidate depth images and the input image, obtaining a rough depth information map of the input image;
the third step: optimizing the rough depth map to obtain a smooth and complete depth map: and carrying out interpolation and smoothing treatment on the rough depth information map to obtain an optimized depth estimation result, wherein the space between pixel blocks of the rough depth map in the second step is not smooth, and the optimization energy formula of the depth information is as follows:
E(A) = Σ_i [ B_t(A_i) + a·B_s(A_i) + b·B_p(A_i) ] + log V    (formula 1)
where A is the finally estimated depth information image of the input image, V is a probability normalization constant, B_t(A_i) is the candidate depth data term, B_s(A_i) is a spatial smoothing term, B_p(A_i) is the depth data prior term, and a and b are the corresponding coefficients; the data term B_t(A_i) describes the difference between the estimated depth information map A and each warped candidate depth map, i.e., it computes the residual between the 7 warped candidate depth maps and the estimated depth information map; B_s(A_i) constrains spatial smoothness through the gradient values in the x and y directions; and B_p(A_i) is the prior term of the depth information image estimate, with:
B_p(A_i) = Φ(A_i − x_i)    (formula 2)
where x_i is the average of the depth values at the corresponding point over all depth maps in the data set, so that the estimate is guided by prior values.
3. The method for synthesizing a single image with realistic dynamic illumination change according to claim 1, characterized in that the three-dimensional spatial structure of the single image is simplified: data fitting is performed on the three-dimensional structure of the whole scene, where the coordinates (x, y) of a pixel point in the imaging plane coordinate system IPCS correspond to the coordinates N(X_c, Y_c, Z_c) in the camera coordinate system; the depth information estimation is based on a depth data set, and the focal length g is taken as 525 mm to obtain a three-dimensional spatial structure close to the real situation;
after obtaining the depth image, a main plane in the scene is fitted, the plane PaF after fitting being:
D_i·X + E_i·Y + S_i·Z + A_i = 0    (formula 3)
where D_i, E_i, S_i, A_i are the plane coefficients of the i-th plane fitted in three-dimensional space. Assuming that the principal plane in the real scene does not pass through the origin (0, 0, 0), i.e., A_i ≠ 0, formula 3 reduces to:
D_i·X + E_i·Y + S_i·Z = 1    (formula 4)
If point N is on the plane, then:
D_i·X_c + E_i·Y_c + S_i·Z_c = 1    (formula 5)
According to the conversion formula from the imaging plane coordinate system to the camera coordinate system and formulas 3 to 5, the three-dimensional coordinates of the imaging plane point N in the camera coordinate system are obtained as:
X_c = x·Z_c / g    (formula 6)
Y_c = y·Z_c / g    (formula 7)
Z_c = g / (D_i·x + E_i·y + S_i·g)    (formula 8)
The three-dimensional position information of each point on the plane in the camera coordinate system is thus obtained, giving an approximate three-dimensional spatial structure of the whole scene.
4. The method for synthesizing realistic dynamic illumination change of a single image according to claim 1, wherein the spatial configuration of illumination light and dark images is calculated as follows: specular reflection Shading is specular highlight in a scene, and the reflection characteristic is as follows:
Q_{s,i}(X) = W_{s,i} · J_l · (dot(U, T))^{M_s}    (formula 9)
where Q_{s,i} is the radiance value of the specular reflection at the i-th channel of the corresponding pixel, W_{s,i} is the specular reflection coefficient of the object surface, J_l is the illumination intensity, U is the viewing direction from point X to the viewpoint, T is the specular reflection direction of the light at point X, M_s is a highlight coefficient describing the degree of convergence of the specularly reflected light, |U| = |T| = 1, and dot(U, T) is the dot product of the vectors U and T;
the intensity of the reflected light is related, besides the incident light intensity and the roughness of the object surface, to the position of point X in three-dimensional space, which is determined relative to the camera viewpoint; considering the influence of the three-dimensional structure information at point X, a spatial configuration independent of the light source and of the surface reflection characteristics is extracted; an indoor light source is not a directional light source, so the influence of distance on the light source cannot be ignored; once the main direction of the light source is determined, the reflected light intensity is related to the light source direction M_l and to the angle between the light source and point X: the larger the angle and the farther point X is from the light source, the smaller the reflected light intensity at that point, hence the factor dot(M_l, Q_d), where M_l is the light source direction and Q_d is the vector from the light source point to the object surface point X, with ||Q_d|| not necessarily equal to 1; further, according to specular reflection:
T = Q − 2·dot(M, Q)·M    (formula 10)
where Q, T, M are the incident light direction, the reflected light direction, and the normal vector at point X, respectively, with |Q| = |M| = |T| = 1. The fused specular reflection Shading spatial configuration expression is:
C(X) = (dot(U, T))^{M_s} · dot(M_l, Q_d) / ||Q_d||^{k+1}    (formula 11)
where k is a coefficient. Combining the spatial position relationship between the light source point and the object surface point X, the spatial configuration information of the diffusely reflected light is described as:
A(X) = dot(M, −Q_d) / ||Q_d||^4    (formula 12)
the exponent 4 of ||Q_d||^4 in formula 12 is based on the ||Q_d||^{k+1} term of formula 11, where the coefficient k + 1 is a value verified by experiment, yielding a reasonable Shading image.
5. The method of claim 1, wherein deconstructing specular components: the application provides a simple and effective method that makes no assumption about the illumination color and quickly separates the highlight component and the diffuse reflection component in an image, with the following steps:
step 1: assuming that the initial color value of the light source is (255, 255, 255), obtain an SF image and an MSF image from the input image:
J_{sf,i}(X) = J_i(X) − J_min(X)    (formula 13)
J_{msf,i}(X) = J_{sf,i}(X) + u    (formula 14)
u(X) = J_min(X), if J_min(X) ≤ mean(J_min) + n·δ(J_min); otherwise u(X) = mean(J_min) + n·δ(J_min)    (formula 15)
J_{msf,i}(X) retains the component value at point X if it is a diffuse reflection component; if it is not a diffuse reflection component, the minimum channel value of each pixel of the whole image is used as statistics to obtain an approximate estimate of the specular reflection component; in formula 15, mean(J_min) is the average of the minimum channel values over all pixels of the whole image, δ(J_min) is the corresponding standard deviation, and n is a coefficient in the range n ∈ [0, 1]; diffuse reflection pixels and specular reflection pixels are distinguished, and the corresponding mask image Mask_i is obtained from the detected specular reflection pixels;
step 2: set a variable t and make the specular reflection component and the diffuse reflection component transition smoothly by a least squares method;
step 3: according to Mask_i, take the average of the R, G, B channels of the detected specular reflection pixels in the original image as the color value of the light source, and take the average chromaticity value mean(S_{sp,i}(X)) as the light source chromaticity, i.e.:
J_i(X) = a·J_{s,i}(X) + b·J_{d,i}(X)    (formula 16)
such that:
S_{s,i}(X) = mean(S_{sp,i}(X))    (formula 17)
where J_{s,i}(X) and J_{d,i}(X) are the specular reflection component and the diffuse reflection component composing the i-th channel pixel value at image point X, and a and b are the corresponding geometric factors. A new light source chromaticity value is obtained according to formula 17, the original image is normalized according to the new light source color and its value range is stretched back to (0-255), and the procedure returns to step 1 to recompute the specular reflection component, the diffuse reflection component, and the light source chromaticity until the light source chromaticity no longer changes.
6. The method of claim 1, wherein deconstructing the diffuse reflection components: the diffuse reflection Shading image associated with the illumination is obtained after removing the highlights; the processing proceeds on the diffuse reflection image obtained by deconstruction, so the input image is a diffuse reflection image, which is deconstructed into a diffuse reflection Shading image and a reflectance image; a user-scribble method is adopted to deconstruct the diffuse reflection Shading image, with three kinds of brushes: first, the color brush, whose R, G, B channel values are different, with each color brush marking one color-consistency region; second, the gray brush, whose R, G, B channel values are proportional, with each gray brush marking one brightness-consistency region; third, the red brush, whose R channel value is 255, marking the brightest but non-highlight region in the image; these three brushes impose attribute constraints on the reflectance map and the Shading map of each region in the image, so as to achieve a good deconstruction effect.
7. The method of claim 1, wherein the estimating of the illumination location information comprises: estimating the coordinates of the light source in a three-dimensional space, and providing an equation for optimizing illumination condition information:
E(Q) = a·Σ_X (C(X) − C_0(X))² + b·Σ_X (A(X) − A_0(X))² + c·||Q − Q_0||²    (formula 18)
where a, b, c are the weight coefficients of the specular reflection Shading term, the diffuse reflection Shading term, and the light source position residual term, respectively, Q_0 is the estimated approximate initial value of the light source position, C_0 and A_0 are the gray values at point X of the specular reflection Shading map and the diffuse reflection Shading map decomposed from the original image, and C and A are the gray values at point X of the images rendered according to formulas 11 and 12 when the light source position is Q; the role of the whole energy equation is to solve for the unknown Q that minimizes the energy;
the light source used in this application is a single point light source whose size is neglected; when the light source irradiates the surface of a non-homogeneous object, a bright light spot forms at some point; before setting the initial value Q_0, the spot peak point in the scene is first detected and the approximate light source direction is obtained according to the law of specular reflection: the specular Shading map from the image deconstruction is used as the detection image, a threshold is set according to the characteristically large gray values of highlight points to filter out pixels outside the main spot, a circular main-spot region is obtained, and the center point of the circular main-spot region, i.e., the spot peak point, is determined;
given a complete and accurate depth map, the three-dimensional coordinates and normal vector of the pixel at the spot center are solved and, combined with the camera viewpoint, the approximate direction of the light source is obtained according to specular reflection; the method of simplifying the three-dimensional spatial structure of a single image is adopted, a plane equation is obtained after data fitting, and the three-dimensional spatial coordinates and three-dimensional normal vector of the specular light spot are calculated to obtain the approximate initial direction of the light source;
for formula 18, solving the energy equation once does not necessarily yield the optimal light source position; a light source position optimization method is used that iteratively solves for the light source position Q; when the iteration reaches a certain number of times, or ||Q_0 − Q|| becomes smaller than a threshold, the whole iteration process ends and the approximate light source position information in the whole scene is obtained, with the following specific calculation steps:
step one, basic data acquisition stage: for the input image, obtain the specular reflection Shading map C_0 and the diffuse reflection Shading map A_0 by image deconstruction; obtain the corresponding depth information map AiM_0 by training on the training set, and smooth AiM_0 to obtain a smoothed and normalized depth map AiM*; then, for the input image, fit the three-dimensional spatial configuration of the image according to its estimated depth information map to obtain the coordinates of each image point relative to the viewpoint in three-dimensional space;
step two, light source information solving stage: perform a least-squares calculation with the energy equation of formula 18 to obtain a rough estimate of the light source position information, taking a = 1 and b = 8, with c varying from image to image; this gives the light source position from the first solve;
step three, iteratively optimizing the light source position: subject to the iteration count and threshold constraints, the light source position information is solved to obtain an approximately correct light source position estimate.
8. The method of synthesizing realistic dynamic illumination changes for single images according to claim 1, characterized in that the image of the specular highlight area is restored: the input image and the obtained three-dimensional scene information and illumination conditions are used to obtain illumination-changed images, i.e., illumination-changed images are obtained from one input image, and the diffuse reflection image from which the specular highlights were separated is repaired by image restoration;
the method for repairing the specular highlight area of the single image comprises the following steps:
step (1), manually selecting an image area to be repaired according to the range of the mirror surface highlight area;
step (2), extracting a repairing boundary: when the boundary is not empty, calculating the priority weight of the boundary pixel block, and selecting a target block with the maximum weight as a block to be matched;
step (3), updating the confidence, and repeating steps (1) and (2) until the boundary is empty.
9. The method of claim 1, wherein the light source of a single image is edited: the light source processing and editing of a single image first performs image synthesis:
J_i(X) = a·J_{s,i}(X) + b·J_{d,i}(X)    (formula 19)
where J_{s,i}(X) and J_{d,i}(X) are the specular reflection component and the diffuse reflection component making up the i-th channel pixel value at image point X, and a, b are the corresponding geometric factors; the specular highlight component image is rendered according to formula 11 from the three-dimensional spatial structure of the scene;
the light source editing of a single image comprises the following steps:
first, moving the light source position: moving the light source in a left, right, far and near moving mode;
second, changing the light source color: changing the color of the light source to other common colors within a certain range;
a third category, changing both the light source color and position;
and in the fourth category, adding an ambient light to the scene while changing the light source, namely obtaining the ambient light of the scene according to the reflectivity map of the image deconstruction:
J_sur = T · J_light    (formula 20)
where J_sur is the ambient brightness value, T is the reflectance value, and J_light is the ambient light brightness value; adding ambient light to the input image raises the brightness of the whole scene and presents a better visual effect.
CN202111123478.XA 2021-09-24 2021-09-24 Method for synthesizing realistic dynamic illumination change of single image Pending CN113763528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111123478.XA CN113763528A (en) 2021-09-24 2021-09-24 Method for synthesizing realistic dynamic illumination change of single image


Publications (1)

Publication Number Publication Date
CN113763528A true CN113763528A (en) 2021-12-07

Family

ID=78797317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111123478.XA Pending CN113763528A (en) 2021-09-24 2021-09-24 Method for synthesizing realistic dynamic illumination change of single image

Country Status (1)

Country Link
CN (1) CN113763528A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920238A (en) * 2021-12-14 2022-01-11 深圳市大头兄弟科技有限公司 Three-dimension method of two-dimensional target pattern and related equipment
CN113920238B (en) * 2021-12-14 2022-03-15 深圳市大头兄弟科技有限公司 Three-dimension method of two-dimensional target pattern and related equipment
CN114119849A (en) * 2022-01-24 2022-03-01 阿里巴巴(中国)有限公司 Three-dimensional scene rendering method, device and storage medium
WO2024002086A1 (en) * 2022-06-29 2024-01-04 维沃移动通信(杭州)有限公司 Image processing method and apparatus, electronic device and readable storage medium
CN115293960A (en) * 2022-07-28 2022-11-04 珠海视熙科技有限公司 Illumination adjusting method, device, equipment and medium for fused image
CN115293960B (en) * 2022-07-28 2023-09-29 珠海视熙科技有限公司 Illumination adjustment method, device, equipment and medium for fused image
CN117474921A (en) * 2023-12-27 2024-01-30 中国科学院长春光学精密机械与物理研究所 Anti-noise light field depth measurement method, system and medium based on specular highlight removal
CN117474921B (en) * 2023-12-27 2024-05-07 中国科学院长春光学精密机械与物理研究所 Anti-noise light field depth measurement method, system and medium based on specular highlight removal


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination