CN113920036A - Interactive relighting editing method based on RGB-D image - Google Patents


Info

Publication number
CN113920036A
Authority
CN
China
Prior art keywords
map
image
spherical harmonic
illumination
global
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111520120.0A
Other languages
Chinese (zh)
Inventor
肖春霞
鲍中运
罗飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202111520120.0A priority Critical patent/CN113920036A/en
Publication of CN113920036A publication Critical patent/CN113920036A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/60 Shadow generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04802 3D-info-object: information is displayed on the internal or external surface of a three dimensional manipulable object, e.g. on the faces of a cube that can be rotated by the user
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses an interactive relighting editing method based on RGB-D images. First, the input image is preprocessed to obtain a shadow-free reflectance map, a shadow-free shading map and a shadow map; the original RGB image is segmented with the Mask R-CNN algorithm to obtain segmentation masks of the objects in the scene, from which the corresponding local depth maps, shadow-free reflectance maps and shadow-free shading maps are derived; each shadow-free shading map is decomposed into global and local illumination detail maps and spherical harmonic illumination maps; the global and local spherical harmonic illumination maps are sampled and visualized, and the adjusted global and local spherical harmonic illumination maps are obtained through interactive editing; finally, the global and local ambient-light-edited RGB relighting images are synthesized from the global and local shadow-free reflectance maps, the adjusted spherical harmonic illumination maps and the shadow maps, and output. The method effectively, conveniently and intuitively enhances images of complex indoor scenes captured under low-light conditions.

Description

Interactive relighting editing method based on RGB-D image
Technical Field
The invention relates to the fields of image processing and computer graphics, and in particular to an interactive relighting editing method based on RGB-D images.
Background
Illumination is an important factor in the quality of a photograph; its distribution, especially the ambient light distribution, directly affects visual quality. In indoor low-light scenes in particular, the ambient light is complex and the intended look is difficult to capture directly; even professional photographers must improve the illumination distribution with additional artificial light sources. Because real scenes are complex, a good illumination effect is hard to achieve even through professional post-editing, and natural, realistic local illumination editing for local detail enhancement is out of reach. A simple, effective and convenient interactive illumination editing method is therefore of real value to users.
Some existing relighting methods, such as simple lighting models using linear, point or spot light sources, can adjust the lighting in an image but are unsuited to complex real-world lighting conditions, and it is difficult for them to estimate the number, positions and intensities of the light sources accurately. Some interactive relighting methods overcome these drawbacks, but because they do not use three-dimensional information as input, they require extensive interaction to determine the spatial structure of the scene and the position, direction and intensity of the light sources, and they cannot enhance the global and local illumination of the scene at the same time.
Disclosure of Invention
The embodiment of the application provides an interactive relighting editing method based on RGB-D images, effectively solving the prior-art problem that the illumination of complex-scene images captured under indoor low-light conditions is difficult to enhance both globally and locally in a convenient and efficient way.
The embodiment of the application provides an interactive illumination editing method based on RGB-D images, comprising the following steps:
step 1, preprocessing the input RGB-D image to obtain a shadow-free reflectance map, a shadow-free shading map and a shadow map;
step 2, decomposing the shadow-free shading map of step 1 into an illumination detail map and a spherical harmonic illumination map;
step 3, sampling and visualizing the spherical harmonic illumination map of step 2, and obtaining the adjusted spherical harmonic illumination map through interactive editing;
step 4, synthesizing the ambient-light-edited RGB image from the shadow-free reflectance map, the adjusted spherical harmonic illumination map and the shadow map, and outputting it.
Preferably, the step 1 comprises the following substeps:
step 1.1, graying the original RGB image and filtering the grayscale image with a relative total variation model to obtain the texture-removed image;
step 1.2, optimizing the original coarse depth map with the RGBD-Fusion method and the texture-removed image of step 1.1 to obtain a fine depth map;
step 1.3, removing the image shadows from the original RGB image with a single-RGB-D-image shadow removal method and the fine depth map of step 1.2, obtaining a shadow map and a shadow-free image;
step 1.4, performing intrinsic image decomposition on the shadow-free image of step 1.3, obtaining a shadow-free reflectance map and a shadow-free shading map.
Preferably, the step 2 comprises the following substeps:
step 2.1, segmenting the original RGB image of step 1 with the Mask R-CNN algorithm to obtain a segmentation mask for each object in the scene;
step 2.2, multiplying the object mask pixel-by-pixel with the fine depth map, the shadow-free reflectance map and the shadow-free shading map to obtain the fine depth map, shadow-free reflectance map and shadow-free shading map of that object;
step 2.3, computing corresponding normal maps from the fine depth map of the global scene and the fine depth map of the corresponding local scene, and normalizing them;
step 2.4, converting the normalized normal map of step 2.3 into an N×3 normal vector matrix;
step 2.5, stacking the shadow-free shading map of the global scene and that of the corresponding local scene into column vectors of length N;
step 2.6, solving for the spherical harmonic coefficient vectors of the global and local scenes by least squares:

h = (AᵀA)⁻¹AᵀS

where h is the spherical harmonic coefficient vector of length 9, A is the N×9 spherical harmonic basis matrix, and S is the column vector of the shadow-free shading map;
step 2.7, computing the spherical harmonic illumination map and illumination detail map of the global scene and of the local scene from the spherical harmonic coefficient vector:

L = Ah,  D = S − L

where L and D are the spherical harmonic illumination map and the illumination detail map, respectively, so that S = L + D.
Preferably, the step 3 comprises the following substeps:
step 3.1, rendering the spherical harmonic coefficient vector corresponding to each spherical harmonic illumination map onto a cube and unfolding it, yielding the ambient light distribution maps of the global and local scenes;
step 3.2, painting on each ambient light distribution map with a brush-like tool to adjust the intensity and distribution of the illumination, yielding the edited global and local ambient light distribution maps;
step 3.3, computing the spherical harmonic coefficient vector of each edited ambient light distribution map and re-rendering with it to obtain the adjusted global and local spherical harmonic illumination maps.
Preferably, said step 3.3 comprises the following substeps:
step 3.3.1, computing corresponding normal maps from the global fine depth map and the fine depth map of the corresponding local scene, and normalizing them;
step 3.3.2, converting the normalized normal map of step 3.3.1 into an N×3 normal vector matrix;
step 3.3.3, evaluating the order-2 spherical harmonic basis functions on the normal vector matrix to obtain the corresponding N×9 spherical harmonic basis matrix;
step 3.3.4, stacking the shadow-free shading map of the whole scene and that of the corresponding local scene in row-major order into column vectors of length N;
step 3.3.5, solving for the spherical harmonic coefficient vectors of the whole scene and the local scene by least squares:

h = (AᵀA)⁻¹AᵀS

where h is the spherical harmonic coefficient vector of length 9, A is the spherical harmonic basis matrix, and S is the edited ambient light distribution vector.
Preferably, the ambient-light-edited RGB image of step 4 is synthesized as:

I = M ⊙ (R ⊙ L̂)

where I is the output RGB image with edited ambient light, M is the shadow map, L̂ is the adjusted spherical harmonic illumination map, R is the shadow-free reflectance map, and ⊙ denotes element-wise multiplication.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
In the embodiment of the application, the input RGB image and its corresponding depth map are preprocessed to obtain a shadow-free reflectance map, a shadow-free shading map and a shadow map; the shadow-free shading map is decomposed into an illumination detail map and a spherical harmonic illumination map; the adjusted spherical harmonic illumination map is obtained through interactive editing; and finally the ambient-light-edited RGB image is synthesized from the shadow-free reflectance map, the adjusted spherical harmonic illumination map and the shadow map, and output. The invention preprocesses the input image, decomposes the shading map into an illumination detail map and a spherical harmonic illumination map, lets the user adjust the spherical harmonic illumination map interactively with a corresponding shadow adjustment, and re-synthesizes the ambient-light-edited image. By estimating the ambient light in the image with the help of a depth-map optimization algorithm, editing it interactively and re-synthesizing, the ambient light of the image reaches the desired effect. The proposed relighting editing method mainly targets complex indoor scenes under various low-light conditions; illumination can be edited intuitively and effectively through user interaction, achieving relighting of both the global and local scenes of the image, while remaining robust to user input.
Drawings
To illustrate the technical solution of the present embodiment more clearly, the drawings needed in the description of the embodiment are briefly introduced below. The drawings described below show one embodiment of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a general design flowchart of an interactive relighting editing method based on RGB-D images according to an embodiment of the present invention.
Fig. 2 shows global image relighting results of the RGB-D image-based interactive relighting editing method provided in this embodiment.
Fig. 3 shows local image relighting results of the RGB-D image-based interactive relighting editing method provided in this embodiment.
Detailed Description
For a better understanding of the above technical solutions, they are described in detail below in conjunction with the drawings and specific embodiments.
The invention provides an interactive relighting editing method based on RGB-D images, which mainly comprises the following steps:
step 1, preprocessing the input RGB-D image to obtain a shadow-free reflectance map, a shadow-free shading map and a shadow map;
step 2, decomposing the shadow-free shading map of step 1 into an illumination detail map and a spherical harmonic illumination map;
step 3, sampling and visualizing the spherical harmonic illumination map of step 2, and obtaining the adjusted spherical harmonic illumination map through interactive editing;
step 4, synthesizing the ambient-light-edited RGB image from the shadow-free reflectance map, the adjusted spherical harmonic illumination map and the shadow map, and outputting it.
The invention will be further described with reference to the accompanying drawings.
Referring to fig. 1 to fig. 3, the interactive relighting editing method based on RGB-D images provided in this embodiment mainly includes the following steps:
First, the input RGB image of a given resolution and its corresponding depth map are preprocessed: the image is grayed, and the grayscale image is filtered with a relative total variation model to obtain the texture-removed image, i.e., the structure map.
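As a rough illustration of the texture-removal step, the sketch below grays an RGB image and smooths it with a simple heat-flow diffusion filter. This is only a crude stand-in for the relative total variation model cited in the text; the function name and the smoothing scheme are illustrative assumptions, not the cited method.

```python
import numpy as np

def texture_removed(rgb, iters=50, step=0.2):
    """Gray an RGB image, then smooth away fine texture by heat-flow diffusion
    (a crude stand-in for the relative total variation filter)."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # luminance graying
    u = gray.copy()
    for _ in range(iters):
        # discrete Laplacian with replicated borders
        up    = np.vstack([u[:1], u[:-1]])
        down  = np.vstack([u[1:], u[-1:]])
        left  = np.hstack([u[:, :1], u[:, :-1]])
        right = np.hstack([u[:, 1:], u[:, -1:]])
        u = u + step * (up + down + left + right - 4 * u)
    return u

rgb = np.random.rand(32, 32, 3)
structure = texture_removed(rgb)   # smoothed "structure map"
```

Unlike this isotropic diffusion, the relative total variation model is edge-preserving: it distinguishes texture from structure rather than blurring both.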
The RGBD-Fusion method is then combined with the texture-removed image to optimize the originally input coarse depth map into a fine depth map.
A single-RGB-D-image shadow removal method is combined with the fine depth map to remove the shadows of the original RGB image, yielding a shadow map and a shadow-free image.
The shadow-free image is decomposed with an intrinsic image decomposition algorithm to obtain a shadow-free reflectance map and a shadow-free shading map.
Based on spherical harmonic lighting, the shading maps are then decomposed into the corresponding global illumination detail map, global spherical harmonic illumination map, local illumination detail maps and local spherical harmonic illumination maps. The specific process is as follows:
The original RGB image of step 1 is segmented with the Mask R-CNN algorithm to obtain a segmentation mask for each object in the scene;
the object mask is multiplied pixel-by-pixel with the fine depth map, the shadow-free reflectance map and the shadow-free shading map to obtain the fine depth map, shadow-free reflectance map and shadow-free shading map of that object;
respectively calculating corresponding normal maps by using the fine depth map of the global scene and the fine depth map of the corresponding local scene, and normalizing the normal maps; the global scene is to perform overall operation on the whole input image, and the local scene is to segment objects in the scene through Mask-RCNN and to operate the objects in the single segmented scene.
Each normalized normal map is converted into an N×3 normal vector matrix;
the shadow-free shading map of the global scene and that of each local scene are stacked into column vectors of length N;
the spherical harmonic coefficient vectors of the global and local scenes are obtained by least squares:

h = (AᵀA)⁻¹AᵀS

where h is the spherical harmonic coefficient vector of length 9, A is the N×9 spherical harmonic basis matrix, and S is the column vector of the shadow-free shading map;
the spherical harmonic illumination map and illumination detail map of the global scene and of each local scene are then computed from the spherical harmonic coefficient vector:

L = Ah,  D = S − L

where L and D are the spherical harmonic illumination map and the illumination detail map, respectively, so that S = L + D.
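The least-squares fit and the split of the shading vector into a spherical harmonic illumination term and a detail residual can be sketched with NumPy. The order-2 real spherical harmonic constants are standard; the synthetic normals and shading vector are only for illustration.

```python
import numpy as np

def sh_basis(normals):
    """Order-2 (9-term) real spherical harmonic basis evaluated at unit normals (N x 3)."""
    x, y, z = normals.T
    return np.stack([
        np.full_like(x, 0.282095),                 # Y00
        0.488603 * y, 0.488603 * z, 0.488603 * x,  # Y1-1, Y10, Y11
        1.092548 * x * y, 1.092548 * y * z,        # Y2-2, Y2-1
        0.315392 * (3 * z**2 - 1),                 # Y20
        1.092548 * x * z,                          # Y21
        0.546274 * (x**2 - y**2),                  # Y22
    ], axis=1)                                     # N x 9

# N pixels with unit normals and an observed shading vector S
rng = np.random.default_rng(0)
n = rng.normal(size=(1000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
A = sh_basis(n)
h_true = rng.normal(size=9)
S = A @ h_true                             # synthetic noise-free shading

h = np.linalg.lstsq(A, S, rcond=None)[0]   # h = (A^T A)^(-1) A^T S
L = A @ h                                  # spherical harmonic illumination
D = S - L                                  # illumination detail residual
```

With real shading data the residual D carries the high-frequency illumination detail that the 9-term low-frequency SH model cannot represent.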
and editing the global spherical harmonic illumination map and the local spherical harmonic illumination map respectively in a user interaction mode to obtain the global spherical harmonic illumination map and the local spherical harmonic illumination map after the ambient light is adjusted. The specific process is as follows:
rendering and expanding the spherical harmonic coefficient vectors corresponding to the spherical harmonic illumination map on a cube, and finally obtaining the ambient light distribution maps of the global scene and the local scene respectively;
manually drawing and interacting the environment light distribution map by using tools such as a brush and the like to adjust the intensity and distribution of illumination, and finally obtaining the edited global and local environment light distribution map; the image to be edited is an unfolded hexahedral box, illumination in different directions is realized on different surfaces through white dots or lines on the picture, namely, a light source is added, the number and the thickness of the white lines represent the intensity of the illumination, for example, the white lines are drawn below the unfolded cubic box to show that a light source is added in front of the scene, and the interaction mode is simple and effective.
The spherical harmonic coefficient vector of each edited ambient light distribution map is then computed, and re-rendering with it yields the adjusted global and local spherical harmonic illumination maps.
Finally, the corresponding global and local shadow-free reflectance maps, the adjusted spherical harmonic illumination maps and the shadow maps are combined to obtain the ambient-light-edited RGB images. The general synthesis method for both the global and local case is:

I = M ⊙ (R ⊙ L̂)

where I is the output RGB image with edited ambient light, M is the shadow map of the original image, L̂ is the adjusted spherical harmonic illumination map, R is the shadow-free reflectance map, and ⊙ denotes element-wise multiplication.
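Assuming the synthesis is the element-wise product of the three maps, as the text describes, it is a one-liner in NumPy (toy arrays; values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.uniform(0.5, 1.0, (4, 4, 3))      # shadow map: per-pixel attenuation
R = rng.uniform(0.0, 1.0, (4, 4, 3))      # shadow-free reflectance map
L_hat = rng.uniform(0.0, 2.0, (4, 4, 3))  # adjusted spherical harmonic illumination map

I = M * R * L_hat                          # element-wise synthesis of the relit image
```

Because the shadow map is kept separate from the illumination term, the same M can re-darken the relit result consistently after any edit of L_hat.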
The interactive relighting editing method based on RGB-D images can effectively separate the shadow map from the spherical harmonic illumination map, which facilitates the subsequent relighting process.
Fig. 2 shows two sets of experimental results of global image relighting. In each set, (a) is the input original image, and (b), (c), (d) and (e) show relighting from above, from the left, from the front, and overall, respectively. As the figure shows, the method can effectively achieve global relighting from a chosen direction.
Fig. 3 shows two sets of results of global and local relighting of a complex indoor scene. (a) is the input original image; (b) and (d) show global relighting of the scene, while (c) and (e) show local relighting. As the figure shows, (b) and (d) achieve a considerable global relighting effect, and (c) and (e) not only relight the scene locally but also enhance its local details.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to examples, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (6)

1. An interactive relighting editing method based on RGB-D images is characterized by comprising the following steps:
step 1, preprocessing the input RGB-D image to obtain a shadow-free reflectance map, a shadow-free shading map and a shadow map;
step 2, decomposing the shadow-free shading map of step 1 into an illumination detail map and a spherical harmonic illumination map;
step 3, sampling and visualizing the spherical harmonic illumination map of step 2, and obtaining the adjusted spherical harmonic illumination map through interactive editing;
step 4, synthesizing the ambient-light-edited RGB image from the shadow-free reflectance map, the adjusted spherical harmonic illumination map and the shadow map, and outputting it.
2. An interactive relighting editing method based on RGB-D images as claimed in claim 1, characterized in that said step 1 comprises the following sub-steps:
step 1.1, graying the original RGB image and filtering the grayscale image with a relative total variation model to obtain the texture-removed image;
step 1.2, optimizing the original coarse depth map with the RGBD-Fusion method and the texture-removed image of step 1.1 to obtain a fine depth map;
step 1.3, removing the image shadows from the original RGB image with a single-RGB-D-image shadow removal method and the fine depth map of step 1.2, obtaining a shadow map and a shadow-free image;
step 1.4, performing intrinsic image decomposition on the shadow-free image of step 1.3, obtaining a shadow-free reflectance map and a shadow-free shading map.
3. The RGB-D image-based interactive relighting editing method as recited in claim 2, wherein the step 2 includes the sub-steps of:
step 2.1, segmenting the original RGB image of step 1 with the Mask R-CNN algorithm to obtain a segmentation mask for each object in the scene;
step 2.2, multiplying the object mask pixel-by-pixel with the fine depth map, the shadow-free reflectance map and the shadow-free shading map to obtain the fine depth map, shadow-free reflectance map and shadow-free shading map of that object;
step 2.3, computing corresponding normal maps from the fine depth map of the global scene and the fine depth map of the corresponding local scene, and normalizing them;
step 2.4, converting the normalized normal map of step 2.3 into an N×3 normal vector matrix;
step 2.5, stacking the shadow-free shading map of the global scene and that of the corresponding local scene into column vectors of length N;
step 2.6, solving for the spherical harmonic coefficient vectors of the global and local scenes by least squares:

h = (AᵀA)⁻¹AᵀS

where h is the spherical harmonic coefficient vector of length 9, A is the N×9 spherical harmonic basis matrix, and S is the column vector of the shadow-free shading map;
step 2.7, computing the spherical harmonic illumination map and illumination detail map of the global scene and of the local scene from the spherical harmonic coefficient vector:

L = Ah,  D = S − L

where L and D are the spherical harmonic illumination map and the illumination detail map, respectively, so that S = L + D.
4. The RGB-D image-based interactive relighting editing method as recited in claim 3, wherein said step 3 comprises the following substeps:
step 3.1, rendering the spherical harmonic coefficient vector corresponding to each spherical harmonic illumination map onto a cube and unfolding it, yielding the ambient light distribution maps of the global and local scenes;
step 3.2, painting on each ambient light distribution map with a brush-like tool to adjust the intensity and distribution of the illumination, yielding the edited global and local ambient light distribution maps;
step 3.3, computing the spherical harmonic coefficient vector of each edited ambient light distribution map and re-rendering with it to obtain the adjusted global and local spherical harmonic illumination maps.
5. The interactive relighting editing method based on RGB-D images as claimed in claim 4, wherein said step 3.3 comprises the following sub-steps:
step 3.3.1, computing the corresponding normal maps from the global fine depth map and the fine depth map of the local scene respectively, and normalizing the normal maps;
step 3.3.2, reshaping each normalized normal map of step 3.3.1 into a normal vector matrix of size N×3;
step 3.3.3, computing the spherical harmonic basis matrix corresponding to the normal vector matrix using the second-order spherical harmonic basis functions, the spherical harmonic basis matrix being of size N×9;
step 3.3.4, reshaping the shadow-free shading map of the global scene and the corresponding shadow-free shading map of the local scene each into a column vector of length N;
step 3.3.5, solving the spherical harmonic coefficient vectors of the global scene and the local scene by the least square method:
h = (AᵀA)⁻¹AᵀS
wherein h is the spherical harmonic coefficient vector of length 9, A is the N×9 spherical harmonic basis matrix, and S is the edited ambient light distribution vector of length N.
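The least-squares solve of step 3.3.5 can be sketched with NumPy as follows. The normals and shading vector here are synthetic stand-ins (names are illustrative, not from the patent); in the method they would come from the normal map of step 3.3.2 and the column vector of step 3.3.4.

```python
import numpy as np

def sh_basis(n):
    """Second-order (9-term) real spherical harmonic basis for unit normals n (N x 3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z**2 - 1.0),
        1.092548 * x * z, 0.546274 * (x**2 - y**2),
    ], axis=1)                                       # N x 9

# synthetic data: N unit normals and a shading vector S generated from known coefficients
rng = np.random.default_rng(0)
normals = rng.normal(size=(1000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

A = sh_basis(normals)                                # N x 9 spherical harmonic basis matrix
h_true = rng.normal(size=9)                          # ground truth, only for checking recovery
S = A @ h_true                                       # length-N shading / ambient-light vector

# least squares, equivalent to h = (A^T A)^{-1} A^T S
h = np.linalg.lstsq(A, S, rcond=None)[0]
```

`np.linalg.lstsq` is used instead of forming (AᵀA)⁻¹ explicitly, which is the numerically preferable way to evaluate the same closed-form solution.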
6. The interactive relighting editing method based on RGB-D images as claimed in claim 5, wherein the computing method for synthesizing the ambient-light-edited RGB image in step 4 is:
I = M ⊙ L̂ ⊙ R
wherein I is the output RGB image after the ambient light editing, M is the shadow map, L̂ is the adjusted spherical harmonic illumination map, R is the shadow-free reflectivity map, and ⊙ denotes element-wise multiplication.
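A minimal sketch of this synthesis step, assuming the output image is the per-pixel product of the shadow map, the adjusted spherical harmonic illumination map, and the shadow-free reflectivity map (the constant-valued maps below are hypothetical placeholders for the real per-pixel data):

```python
import numpy as np

# hypothetical H x W x 3 maps standing in for the method's real inputs
H, W = 4, 4
M = np.full((H, W, 3), 0.8)        # shadow map: attenuation factor in [0, 1]
L_adj = np.full((H, W, 3), 1.25)   # adjusted spherical harmonic illumination map
R = np.full((H, W, 3), 0.5)        # shadow-free reflectivity (albedo) map

# element-wise product per pixel and color channel
I = M * L_adj * R
```

NumPy's `*` operator applies the multiplication element-wise, so the three maps only need matching shapes.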
CN202111520120.0A 2021-12-14 2021-12-14 Interactive relighting editing method based on RGB-D image Pending CN113920036A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111520120.0A CN113920036A (en) 2021-12-14 2021-12-14 Interactive relighting editing method based on RGB-D image

Publications (1)

Publication Number Publication Date
CN113920036A true CN113920036A (en) 2022-01-11

Family

ID=79249137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111520120.0A Pending CN113920036A (en) 2021-12-14 2021-12-14 Interactive relighting editing method based on RGB-D image

Country Status (1)

Country Link
CN (1) CN113920036A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447906A (en) * 2015-11-12 2016-03-30 浙江大学 Method for calculating lighting parameters and carrying out relighting rendering based on image and model
CN106127818A (en) * 2016-06-30 2016-11-16 珠海金山网络游戏科技有限公司 A kind of material appearance based on single image obtains system and method
CN109886906A (en) * 2019-01-25 2019-06-14 武汉大学 A kind of real-time dim light video enhancement method and system of details sensitivity
CN110033055A (en) * 2019-04-19 2019-07-19 中共中央办公厅电子科技学院(北京电子科技学院) A kind of complex object image weight illumination method based on the parsing of semantic and material with synthesis
CN110570496A (en) * 2019-08-26 2019-12-13 武汉大学 RGBD image environment light editing method and system based on spherical harmonic illumination
CN111127377A (en) * 2019-12-20 2020-05-08 湖北工业大学 Weak light enhancement method based on multi-image fusion Retinex
US10665011B1 (en) * 2019-05-31 2020-05-26 Adobe Inc. Dynamically estimating lighting parameters for positions within augmented-reality scenes based on global and local features
CN113240622A (en) * 2021-03-12 2021-08-10 清华大学 Human body scene image intrinsic decomposition and relighting method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ADITYA ARORA et al.: "Low Light Image Enhancement via Global and Local Context Modeling", arXiv:2101.00850v1 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581611A (en) * 2022-04-28 2022-06-03 阿里巴巴(中国)有限公司 Virtual scene construction method and device
CN114581611B (en) * 2022-04-28 2022-09-20 阿里巴巴(中国)有限公司 Virtual scene construction method and device
CN115375827A (en) * 2022-07-21 2022-11-22 荣耀终端有限公司 Illumination estimation method and electronic equipment
CN115375827B (en) * 2022-07-21 2023-09-15 荣耀终端有限公司 Illumination estimation method and electronic equipment
CN115546010A (en) * 2022-09-21 2022-12-30 荣耀终端有限公司 Image processing method and electronic device
CN115546010B (en) * 2022-09-21 2023-09-12 荣耀终端有限公司 Image processing method and electronic equipment

Similar Documents

Publication Publication Date Title
Nguyen-Phuoc et al. Rendernet: A deep convolutional network for differentiable rendering from 3d shapes
Rudnev et al. Nerf for outdoor scene relighting
Oh et al. Image-based modeling and photo editing
CN113920036A (en) Interactive relighting editing method based on RGB-D image
JP4500614B2 (en) Image-based rendering and editing method and apparatus
US7450758B2 (en) Stylization of video
Paris A gentle introduction to bilateral filtering and its applications
US11055916B2 (en) Virtualizing content
Loscos et al. Interactive virtual relighting and remodeling of real scenes
CN110570496B (en) RGBD image environment light editing method and system based on spherical harmonic illumination
Rematas et al. Image-based synthesis and re-synthesis of viewpoints guided by 3d models
US20150016714A1 (en) Tagging virtualized content
US7756356B2 (en) System and method for factorizing light in a sequence of images
CN111626951B (en) Image shadow elimination method based on content perception information
JP2023520841A (en) Image processing method, apparatus, computer program, and electronic device
CN113240783B (en) Stylized rendering method and device, readable storage medium and electronic equipment
CN115100337A (en) Whole body portrait video relighting method and device based on convolutional neural network
CN116012232A (en) Image processing method and device, storage medium and electronic equipment
CN109544671B (en) Projection mapping method of video in three-dimensional scene based on screen space
Nicolet et al. Repurposing a relighting network for realistic compositions of captured scenes
Bonneel et al. Proxy-guided texture synthesis for rendering natural scenes
Stojanov et al. Application of 3ds Max for 3D Modelling and Rendering
CN113160358A (en) Non-green-curtain cutout rendering method
Dai Stylized rendering for virtual furniture layout
Shen et al. High dynamic range image tone mapping and retexturing using fast trilateral filtering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220111