CN110910486A - Indoor scene illumination estimation model, method and device, storage medium and rendering method - Google Patents


Info

Publication number
CN110910486A
CN110910486A (application CN201911192051.8A)
Authority
CN
China
Prior art keywords
indoor scene
autoencoder
scene illumination
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911192051.8A
Other languages
Chinese (zh)
Other versions
CN110910486B (en)
Inventor
王锐
鲍虎军
李佰余
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201911192051.8A priority Critical patent/CN110910486B/en
Priority to PCT/CN2019/124383 priority patent/WO2021103137A1/en
Publication of CN110910486A publication Critical patent/CN110910486A/en
Application granted granted Critical
Publication of CN110910486B publication Critical patent/CN110910486B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G06T 15/506: Illumination models

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an illumination estimation model, method, device and storage medium for indoor scenes based on a single image. The model comprises an autoencoder that encodes the high-dimensional features of a partial panorama, mapped from a single low-dynamic-range color image, and decodes them into indoor scene illumination information represented as an estimated panorama. The network parameters of the autoencoder are determined by training a generative adversarial network in which the autoencoder serves as the generator, together with a discriminator. The indoor scene illumination information can be estimated quickly from the image, is comprehensive and reliable, and can be used to improve the realism of rendering.

Description

Indoor scene illumination estimation model, method and device, storage medium and rendering method
Technical Field
The invention relates to the technical field of illumination estimation and rendering, and in particular to an illumination estimation model, method, device, storage medium and rendering method for indoor scenes based on a single image.
Background
In many inverse rendering applications, such as augmented reality (AR), illumination information must be inferred from the real scene; this has long been an active and critical research problem. With the development of smartphones, tablets, AR headsets, smart glasses and similar products, mobile AR applications are becoming increasingly common, and research on illumination estimation methods continues to emerge. In these scenarios, the predicted illumination is essential for realistically rendering newly inserted three-dimensional models. Accurate illumination prediction is nevertheless very challenging, because many factors must be considered, including scene geometry, material properties, the complexity of the light sources and the capture equipment.
A realistic AR effect is mainly reflected in illumination consistency: the virtual object and the real environment should exhibit a consistent illumination effect, with correct matching of light and shade. The illumination information estimated from the real scene is therefore fed back to the rendering of the virtual object, so that the virtual object blends more naturally with the real environment.
Depending on the application scene, illumination estimation research can be divided into indoor and outdoor scene illumination estimation. Outdoor illumination estimation is relatively simple: the illumination is dominated by the sky as a whole and by the sun direction, and a parameterized sky model that simulates various weather conditions, sunlight intensities and sun positions can already achieve good estimation results. Indoor illumination estimation is comparatively difficult, because the dominant light sources may be any of various common indoor sources such as fluorescent lamps, desk lamps or windows, whose shapes and positions are uncertain and cannot be described by a parametric model.
For illumination estimation of real scenes, early research methods mostly placed auxiliary objects in the scene, such as spheres with known surface reflection properties, to conveniently capture or infer the illumination information, and mostly treated estimation of the real scene's light sources as the main research task. Alternatively, advanced capture devices such as fisheye cameras or light-field cameras were used to compute the illumination in the scene more quickly.
From a practical standpoint, current research tends toward estimating scene illumination directly from images, which is the most difficult but also the most promising approach, and has become an important direction in illumination estimation in recent years. Existing image-based illumination estimation methods fall into two categories: one estimates the position and intensity of the light source and treats it as a point-light illumination model; the other attempts to approximate the illumination of the entire scene with a fixed number of spherical basis functions. Both approaches are constrained: whether point light sources or spherical basis functions are used, the complexity of the spherical signal that can be expressed is limited.
Disclosure of Invention
The main object of the invention is to provide an illumination estimation model, method, device and storage medium for indoor scenes based on a single image, which can quickly estimate indoor scene illumination information from a single low-dynamic-range color image; the estimated illumination information is comprehensive and reliable, and using it for rendering improves the realism of the result.
Another object of the invention is to provide a rendering method that renders based on this comprehensive and reliable indoor scene illumination information, thereby improving the realism of the rendering result.
To achieve the above main object, the technical solution provided by the invention is an illumination estimation model for indoor scenes based on a single image, comprising:
an autoencoder for encoding the high-dimensional features of a partial panorama mapped from a single low-dynamic-range color image and decoding them into indoor scene illumination information represented as an estimated panorama;
wherein the network parameters of the autoencoder are determined by training a generative adversarial network composed of the autoencoder serving as the generator and a discriminator.
To achieve the above main object, the invention also provides an illumination estimation method for indoor scenes based on a single image, comprising the following steps:
acquiring a color image or a panorama;
estimating from the single low-dynamic-range color image, using the above indoor scene illumination estimation model, to obtain the indoor scene illumination information.
To achieve the above main object, the invention further provides an illumination estimation apparatus for indoor scenes based on a single image. The apparatus comprises one or more processors and one or more memories; at least one instruction is stored in the one or more memories, and the at least one instruction is loaded and executed by the one or more processors to implement the operations performed by the above indoor scene illumination estimation method.
To achieve the above main object, the invention further provides a computer-readable storage medium in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the above indoor scene illumination estimation method.
To achieve the above further object, the invention provides a rendering method, comprising:
rendering with the indoor scene illumination information output by the above indoor scene illumination estimation model; or
rendering with the indoor scene illumination information obtained by the above indoor scene illumination estimation method; or
rendering with the indoor scene illumination information output by the above indoor scene illumination estimation apparatus.
The technical solution provided by the invention offers at least the following beneficial effects:
the autoencoder is used as the generator of a generative adversarial network, and its network parameters are determined through joint training with the discriminator of that network. This improves the comprehensiveness and accuracy with which the autoencoder estimates indoor scene illumination information from a partial panorama, and in turn improves the realism of rendering performed with that illumination information.
Drawings
To illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of the generative adversarial network provided in an embodiment of the invention;
Fig. 2 is a schematic flowchart of the rendering method according to an embodiment of the invention;
Fig. 3 shows images at various stages of the rendering method according to the embodiment of the invention, where (a) is the acquired color image, (b) is the estimated panorama representing the indoor scene illumination information, (c) is the result of rendering with that illumination information, and (d) is the real rendering used for comparison.
Detailed Description
To make the objects, technical solutions and advantages of the invention clearer, the invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the invention provides an illumination estimation model for indoor scenes based on a single image. The model comprises an autoencoder, which is mainly used to encode the high-dimensional features of a partial panorama mapped from a single low-dynamic-range color image and decode them into indoor scene illumination information represented as an estimated panorama.
When estimating indoor scene illumination information in real time, a single low-dynamic-range color image of the scene can be captured directly and then mapped into panorama space to obtain the partial panorama corresponding to that color image. The indoor scene illumination estimation model therefore further comprises an image preprocessing unit, which maps the received color image according to the camera direction and field of view to obtain the partial panorama.
In an embodiment, the received color image may be captured by a camera. After the color image is obtained, a mapping function is derived from the camera direction and field of view, and the color image is mapped into panorama space with this function to obtain the partial panorama corresponding to the captured image.
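As an illustration of this preprocessing step, the following NumPy sketch maps a perspective color image into a 256 x 512 longitude-latitude panorama; pixels outside the camera frustum remain zero, which is what makes the panorama "partial". It assumes a pinhole camera looking along +z with a known horizontal field of view; the function name, default resolution and nearest-neighbour sampling are illustrative choices rather than the patent's exact implementation, and a non-trivial camera direction would additionally require rotating the direction vectors.

```python
import numpy as np

def image_to_partial_panorama(image, fov_x, pano_h=256, pano_w=512):
    """Project a perspective image into a lat-long panorama (camera assumed
    to look along +z). Unseen directions stay zero, giving a partial panorama."""
    h, w = image.shape[:2]
    f = 0.5 * w / np.tan(0.5 * fov_x)                     # focal length in pixels
    # Unit direction for every panorama pixel on a longitude/latitude grid.
    lon = (np.arange(pano_w) + 0.5) / pano_w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(pano_h) + 0.5) / pano_h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    # Keep only directions in front of the camera and project to the image plane.
    pano = np.zeros((pano_h, pano_w, 3), dtype=image.dtype)
    valid = z > 1e-6
    u = (f * x[valid] / z[valid] + 0.5 * w).astype(int)
    v = (-f * y[valid] / z[valid] + 0.5 * h).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rows, cols = np.where(valid)
    pano[rows[inside], cols[inside]] = image[v[inside], u[inside]]
    return pano
```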
In the embodiment, to obtain illumination information for the whole hemisphere of the indoor scene, the partial panorama input to the network is expressed as a 360° longitude-latitude panorama. In other words, the data handled by the autoencoder covers 360° of directions, and encoding and decoding this panorama yields the illumination information of the whole indoor hemisphere.
In the embodiment, the panorama is parameterized by longitude and latitude, so that the indoor scene illumination estimation problem is converted into a completion (inpainting) problem on a two-dimensional image. Convolution operations in the autoencoder then extract feature information from the panorama, producing a 360° high-dynamic-range panoramic illumination output that represents the incoming light from each direction of the hemisphere.
In an embodiment, the autoencoder is a convolutional neural network comprising an encoder and a decoder. The encoder encodes the high-dimensional features of the partial panorama into a low-dimensional vector and passes it to the decoder; the decoder reconstructs the input low-dimensional vector into an estimated panorama representing the complete indoor scene illumination information and outputs it, thereby realizing the estimation of indoor scene illumination.
The encoder is a fully convolutional network comprising six convolution layers. The input is a 256 x 512 x 3 panorama. The convolution kernel of the first layer is 4 x 4 with stride 4; the remaining layers use the same 4 x 4 kernel but with stride 2. This choice of kernel size and stride keeps the padding of the feature maps regular, with one pixel of padding on the top, bottom, left and right of each feature map, and greatly reduces the number of parameters and the computation of the autoencoder, allowing faster inference. A ReLU activation function is used between adjacent convolution layers to provide a nonlinear transformation, and batch normalization is used to correct the data distribution and achieve faster, better convergence.
The decoder is also a fully convolutional network, comprising six deconvolution (transposed convolution) layers used for upsampling, each with a 4 x 4 kernel and stride 2. Transposed convolution is a special form of convolution; a combination of linear-interpolation upsampling and ordinary convolution can achieve a similar effect. In the decoder, ReLU activations are used between the first five deconvolution layers to provide a nonlinear transformation, and batch normalization is used to correct the data distribution and achieve faster, better convergence; the last deconvolution layer uses neither normalization nor a ReLU activation.
The feature extraction of the encoder yields a low-dimensional vector, but a certain amount of information is lost, and experiments show that recovery from the low-dimensional latent vector alone is blurry. Because the task resembles a pixel-to-pixel mapping, skip connections are used to pass the information extracted by the encoder into the decoder: exploiting the symmetry of the network, encoder feature maps are concatenated with decoder feature maps of the same size. This makes it easier to keep the overall tone of the generated estimated panorama consistent, preserves more detail, and brings the estimated panorama closer to the real panorama.
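The following PyTorch sketch shows one way to assemble such an autoencoder with skip connections. The 256 x 512 x 3 input, the 4 x 4 kernels, the stride-4 first convolution, the stride-2 remaining layers, the batch-normalization and ReLU placement, and the bare final layer follow the description above; the channel widths, the padding choices and the stride-4 final upsampling layer (used to return the output to 256 x 512) are assumptions made so the example runs end to end.

```python
import torch
import torch.nn as nn

class PanoramaAutoencoder(nn.Module):
    """U-Net-style autoencoder sketch: six convolutions down, six transposed
    convolutions up, with skip connections between equal-size feature maps."""

    def __init__(self):
        super().__init__()
        ch = [3, 64, 128, 256, 512, 512, 512]            # channel widths: assumption
        self.enc = nn.ModuleList()
        for i in range(6):
            stride, pad = (4, 0) if i == 0 else (2, 1)   # first conv: 4x4 kernel, stride 4
            block = [nn.Conv2d(ch[i], ch[i + 1], 4, stride, pad)]
            if i < 5:                                    # BN + ReLU between adjacent layers
                block += [nn.BatchNorm2d(ch[i + 1]), nn.ReLU(inplace=True)]
            self.enc.append(nn.Sequential(*block))
        # Decoder mirrors the encoder; skip connections double the input channels.
        dec_in = [512, 1024, 1024, 512, 256, 128]
        dec_out = [512, 512, 256, 128, 64, 3]
        self.dec = nn.ModuleList()
        for i in range(6):
            stride, pad = (2, 1) if i < 5 else (4, 0)    # last layer upsamples back to 256x512
            block = [nn.ConvTranspose2d(dec_in[i], dec_out[i], 4, stride, pad)]
            if i < 5:                                    # no BN/ReLU on the final layer
                block += [nn.BatchNorm2d(dec_out[i]), nn.ReLU(inplace=True)]
            self.dec.append(nn.Sequential(*block))

    def forward(self, x):
        feats = []
        for layer in self.enc:
            x = layer(x)
            feats.append(x)                              # keep encoder maps for skip connections
        for i, layer in enumerate(self.dec):
            x = layer(x)
            if i < 5:                                    # concatenate same-size encoder features
                x = torch.cat([x, feats[4 - i]], dim=1)
        return x
```

With these choices the module maps a (B, 3, 256, 512) partial panorama to a (B, 3, 256, 512) estimated panorama.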
The network parameters of the autoencoder are determined by training a generative adversarial network composed of the autoencoder serving as the generator and a discriminator. As shown in Fig. 1, within this adversarial network the autoencoder has a strong capability for extracting and recovering image features, while the discriminator provides feedback by judging the generated estimated panorama; by continually distinguishing real panoramas from estimated ones, it drives the estimation toward results with a clear structural sense, closer to the illumination map of the real scene.
In the embodiment, the network parameters of the autoencoder are determined as follows:
constructing a generative adversarial network comprising a generator and a discriminator, where the generator is the autoencoder and encodes and decodes the partial panorama from its high-dimensional features into an estimated panorama, and the discriminator distinguishes the gap between the real panorama and the estimated panorama;
the basic framework for generating a competing network can be represented as follows:
Figure BDA0002293816170000073
where M denotes an input panorama, y denotes a reference tag, i.e., a real panorama, G (-) denotes a generator, and D (-) denotes a discriminator.
The generator is the autoencoder and is mainly used to generate the estimated panorama; its structure is the same as that of the autoencoder described above and is not repeated here.
The discriminator distinguishes the difference between a real panorama and an estimated panorama. It is a convolutional neural network comprising five convolution layers with kernel size 4; the first layer has stride 4 and the remaining layers have stride 2. Batch normalization and LeakyReLU activations are used between adjacent layers, and a sigmoid activation is applied to the output of the last layer to convert the value into a truth score between 0 and 1. During training, the discriminator is driven to output values closer to 1 for real panoramas and closer to 0 for estimated panoramas.
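A matching sketch of the discriminator is given below. The 4 x 4 kernels, the stride pattern, the BatchNorm + LeakyReLU pairs between adjacent layers and the final sigmoid follow the description above; the channel widths and the averaging of the per-patch scores into a single realness value are assumptions.

```python
import torch.nn as nn

class PanoramaDiscriminator(nn.Module):
    """Discriminator sketch: five 4x4 convolutions (stride 4, then stride 2),
    BatchNorm + LeakyReLU between adjacent layers, sigmoid truth score in [0, 1]."""

    def __init__(self):
        super().__init__()
        ch = [3, 64, 128, 256, 512, 1]                   # channel widths: assumption
        layers = []
        for i in range(5):
            stride = 4 if i == 0 else 2
            layers.append(nn.Conv2d(ch[i], ch[i + 1], kernel_size=4, stride=stride, padding=1))
            if i < 4:                                    # BN + LeakyReLU between adjacent layers
                layers += [nn.BatchNorm2d(ch[i + 1]), nn.LeakyReLU(0.2, inplace=True)]
        layers.append(nn.Sigmoid())                      # truth score between 0 and 1
        self.net = nn.Sequential(*layers)

    def forward(self, pano):
        # Average the per-patch scores into one realness value per image (assumption).
        return self.net(pano).mean(dim=(1, 2, 3))
```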
constructing a loss function, which is the weighted sum of an autoencoder loss and an adversarial loss, each multiplied by its own weight; the autoencoder loss is the mean absolute error between the estimated panorama and the real panorama, and the adversarial loss reflects the probability that the estimated panorama output by the autoencoder is judged real or fake;
in particular, the Loss function Loss from the encoderL1Comprises the following steps:
Figure BDA0002293816170000071
loss of opposition function LossL2Comprises the following steps:
Figure BDA0002293816170000072
loss function LosstotalComprises the following steps:
Figure BDA0002293816170000081
wherein M represents the input partial panorama, $\omega_p$ represents the weighting coefficient arising from the latitude of the panorama itself at pixel p, y represents the real panorama, G(M) represents the estimated panorama output by the generator, and N is the number of panorama pixels. β and γ are two hyperparameters representing the weights of the autoencoder loss function $Loss_{L1}$ and the adversarial loss function $Loss_{L2}$; after empirical tuning, β = 50 and γ = 1 can be used.
The network parameters of the generative adversarial network are then iteratively optimized on the training data with the objective of minimizing the loss function; once the iterative optimization is finished, the network parameters of the autoencoder are determined.
Using the total loss $Loss_{total}$, composed of the autoencoder loss $Loss_{L1}$ and the adversarial loss $Loss_{L2}$ in the generative adversarial network, captures the structural characteristics of the image better than a classical per-pixel loss alone. In other words, adding the discriminator and training adversarially yields a generator, i.e., an autoencoder, that produces clearer and more realistic estimated panoramas.
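A possible training step combining these pieces is sketched below. Only the weighted sum with β = 50 and γ = 1, the latitude weight ω and the real-toward-1 / estimated-toward-0 behaviour of the discriminator come from the description above; the non-saturating binary-cross-entropy form of the adversarial terms and the use of two separate optimizers are assumptions.

```python
import math
import torch
import torch.nn.functional as F

def latitude_weights(h, device):
    """Per-row weights omega for a lat-long panorama: rows near the poles
    cover less of the sphere and are down-weighted in the L1 loss."""
    lat = math.pi / 2 - (torch.arange(h, dtype=torch.float32, device=device) + 0.5) / h * math.pi
    return torch.cos(lat).clamp(min=0.0).view(1, 1, h, 1)

def training_step(G, D, opt_G, opt_D, partial_pano, real_pano, beta=50.0, gamma=1.0):
    """One adversarial step: update D to separate real from estimated panoramas,
    then update the autoencoder G with beta * weighted L1 + gamma * adversarial."""
    omega = latitude_weights(real_pano.shape[2], real_pano.device)

    # Discriminator: push D(real) toward 1 and D(G(M)) toward 0.
    fake = G(partial_pano).detach()
    real_score, fake_score = D(real_pano), D(fake)
    d_loss = (F.binary_cross_entropy(real_score, torch.ones_like(real_score))
              + F.binary_cross_entropy(fake_score, torch.zeros_like(fake_score)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator / autoencoder: latitude-weighted L1 plus adversarial term.
    fake = G(partial_pano)
    loss_l1 = (omega * (fake - real_pano).abs()).mean()
    score = D(fake)
    loss_adv = F.binary_cross_entropy(score, torch.ones_like(score))
    g_loss = beta * loss_l1 + gamma * loss_adv
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```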
To improve the rendering speed of the three-dimensional model, simplified indoor scene illumination information is needed. The indoor scene illumination estimation model therefore further comprises:
an illumination information reduction unit, which performs a distortion transformation and a spherical harmonic transformation on the estimated panorama output by the autoencoder and outputs spherical harmonic coefficients, thereby obtaining the simplified indoor scene illumination information.
Although some image information is lost in this way, the storage requirement is greatly reduced: spherical harmonic lighting needs only a handful of spherical harmonic coefficients, so the rendering rate is improved while the rendering effect is preserved. In particular, when experiencing virtual reality, real-time rendering with the simplified indoor scene illumination information allows the rendered virtual scene to be fused with the real scene in real time, improving the virtual reality experience. The simplified illumination information is especially effective for rendering diffuse materials and is well suited to real-time rendering.
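For concreteness, the sketch below projects a lat-long HDR panorama onto the first nine (order-2) real spherical harmonic basis functions with the standard constants; the per-pixel cos(lat) solid-angle factor plays the role of the distortion correction mentioned above. The choice of order 2 and the function names are assumptions.

```python
import numpy as np

def sh_basis(dirs):
    """Real spherical harmonic basis up to order 2 (9 terms), evaluated on
    unit direction vectors of shape (..., 3), with the standard constants."""
    x, y, z = dirs[..., 0], dirs[..., 1], dirs[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ], axis=-1)

def panorama_to_sh(pano):
    """Project a lat-long HDR panorama (H x W x 3) onto 9 SH coefficients per
    channel. The cos(lat) factor is the solid-angle (distortion) weighting."""
    h, w, _ = pano.shape
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon), np.sin(lat), np.cos(lat) * np.cos(lon)], axis=-1)
    basis = sh_basis(dirs)                                   # (H, W, 9)
    d_omega = np.cos(lat) * (2 * np.pi / w) * (np.pi / h)    # solid angle per pixel
    # coeff[k, c] = sum over pixels of L(pixel, c) * Y_k(pixel) * d_omega
    return np.einsum('hwc,hwk,hw->kc', pano, basis, d_omega)
```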
The indoor scene illumination estimation model provided by this embodiment uses the autoencoder as the generator of a generative adversarial network and determines its network parameters through joint training with the discriminator of that network. This improves the comprehensiveness and accuracy with which the autoencoder estimates indoor scene illumination information from a partial panorama, and in turn improves the realism of rendering performed with that illumination information.
The embodiment also provides an illumination estimation method for indoor scenes based on a single image, comprising the following steps:
acquiring a single low-dynamic-range color image;
estimating from the single low-dynamic-range color image, using the above indoor scene illumination estimation model, to obtain the indoor scene illumination information.
The structure of the indoor scene illumination estimation model used in this method, the determination of its parameters, the estimation of the indoor scene illumination information and the achievable technical effects are the same as those of the indoor scene illumination estimation model described above and are not repeated here.
In this indoor scene illumination estimation method, the input color image is converted into a partial panorama by the image preprocessing unit of the indoor scene illumination estimation model, the autoencoder then estimates the illumination information from the partial panorama, and the indoor scene illumination information represented by the estimated panorama is output. When simplified indoor scene illumination information is needed, the illumination information reduction unit applies the distortion transformation and spherical harmonic transformation to the estimated panorama and outputs the spherical harmonic coefficients, yielding the simplified indoor scene illumination information.
The embodiment also provides an apparatus for single-image-based illumination estimation of indoor scenes. The apparatus comprises one or more processors and one or more memories; the one or more memories store at least one instruction, and the at least one instruction is loaded and executed by the one or more processors to implement the operations performed by the above indoor scene illumination estimation method.
The steps of the indoor scene illumination estimation method implemented when the instructions in the apparatus are executed are the same as those of the method described above and are not repeated here.
In the indoor scene illumination estimation apparatus, the memory may include one or more computer-readable storage media, which may be non-transitory. The memory may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory devices. In some embodiments, a non-transitory computer-readable storage medium in the memory stores at least one instruction, which is executed by the processor to implement the indoor scene illumination estimation method provided by the embodiment.
The embodiment also provides a computer-readable storage medium having at least one instruction stored therein, which is loaded and executed by a processor to implement the operations performed by the above indoor scene illumination estimation method. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
The indoor scene illumination estimation method, apparatus and storage medium provided by the embodiment use the autoencoder as the generator of a generative adversarial network and determine its network parameters through joint training with the discriminator of that network, improving the comprehensiveness and accuracy of the estimated indoor scene illumination information and, in turn, the realism of rendering performed with it.
As shown in Fig. 2, the embodiment further provides a rendering method, comprising:
rendering with the indoor scene illumination information output by the above indoor scene illumination estimation model; or
rendering with the indoor scene illumination information obtained by the above indoor scene illumination estimation method; or
rendering with the indoor scene illumination information output by the above indoor scene illumination estimation apparatus.
The specific procedure of the rendering method is as follows:
first, the input color image is converted into a partial panorama by the image preprocessing unit;
then, the autoencoder trained within the generative adversarial framework (i.e., the generative adversarial network) estimates the illumination information from the partial panorama and outputs an estimate of the ambient illumination (i.e., the estimated panorama);
then, the illumination information reduction unit post-processes the estimated output, that is, it applies the distortion transformation and spherical harmonic transformation and outputs the spherical harmonic coefficients, yielding the spherical harmonic lighting;
finally, the three-dimensional model is rendered with the spherical harmonic lighting to obtain the rendered model. A sketch of this last step is given below.
The indoor scene illumination estimation model, method and apparatus used in this rendering method are the same as those described above and are not repeated here.
Because the rendering method renders with the indoor scene illumination information obtained from the above indoor scene illumination estimation model, method and apparatus, it achieves a realistic rendering effect.
Fig. 3 shows images at various stages of the specific rendering process using the above rendering method, where (a) is a single low-dynamic-range color image captured by a camera, and (b) is the indoor scene illumination information obtained by illumination estimation with the above indoor scene illumination estimation model, method and apparatus; since the output is in high-dynamic-range (HDR) format, linear tone mapping is used for visualization. (c) shows a virtual model rendered with the estimated indoor scene illumination information, where the virtual-real fusion with the surrounding environment appears quite realistic, and (d) shows the model rendered with the real panorama.
Comparing (c) and (d) in Fig. 3 shows that when the virtual model is rendered with the indoor scene illumination information output by the indoor scene illumination estimation model, the rendering result is consistent in illumination with the scene.
The above embodiments are intended to illustrate the technical solutions and advantages of the invention. It should be understood that they are only preferred embodiments of the invention and are not intended to limit it; any modifications, additions or equivalents made within the principles of the invention should be included in the scope of protection of the invention.

Claims (10)

1. An illumination estimation model for indoor scenes based on a single image, comprising:
an autoencoder for encoding the high-dimensional features of a partial panorama mapped from a single low-dynamic-range color image and decoding them into indoor scene illumination information represented as an estimated panorama;
wherein the network parameters of the autoencoder are determined by training a generative adversarial network composed of the autoencoder serving as the generator and a discriminator.
2. The indoor scene illumination estimation model of claim 1, wherein the network parameters of the autoencoder are determined as follows:
constructing a generative adversarial network comprising a generator and a discriminator, where the generator is the autoencoder and encodes and decodes an input single low-dynamic-range color image from its high-dimensional features into an estimated panorama, and the discriminator distinguishes the gap between the real panorama and the estimated panorama;
constructing a loss function, which is the weighted sum of an autoencoder loss and an adversarial loss, each multiplied by its own weight, where the autoencoder loss is the mean absolute error between the estimated panorama and the real panorama and the adversarial loss reflects the probability that the estimated panorama output by the autoencoder is judged real or fake;
iteratively optimizing the network parameters of the generative adversarial network on training data with the objective of minimizing the loss function, and determining the network parameters of the autoencoder once the iterative optimization is finished.
3. The indoor scene illumination estimation model of claim 1, wherein the autoencoder is a convolutional neural network and the discriminator is a convolutional neural network.
4. The indoor scene illumination estimation model of claim 1, further comprising:
an image preprocessing unit for mapping the received single low-dynamic-range color image according to the camera direction and field of view to obtain the partial panorama.
5. The indoor scene illumination estimation model of claim 1 or 4, further comprising:
an illumination information reduction unit for performing a distortion transformation and a spherical harmonic transformation on the estimated panorama output by the autoencoder and outputting spherical harmonic coefficients, thereby obtaining simplified indoor scene illumination information.
6. The indoor scene illumination estimation model of claim 1, wherein the partial panorama input to the autoencoder is expressed as a 360° longitude-latitude panorama.
7. An illumination estimation method for indoor scenes based on a single image, comprising the following steps:
acquiring a single low-dynamic-range color image;
estimating from the single low-dynamic-range color image, using the indoor scene illumination estimation model of any one of claims 1 to 6, to obtain indoor scene illumination information.
8. An illumination estimation apparatus for indoor scenes based on a single image, characterized in that the apparatus comprises one or more processors and one or more memories, wherein at least one instruction is stored in the one or more memories, and the at least one instruction is loaded and executed by the one or more processors to implement the operations performed by the indoor scene illumination estimation method of claim 7.
9. A computer-readable storage medium having at least one instruction stored therein, the at least one instruction being loaded and executed by a processor to implement the operations performed by the indoor scene illumination estimation method of claim 7.
10. A rendering method, characterized in that the rendering method comprises:
rendering with the indoor scene illumination information output by the indoor scene illumination estimation model of any one of claims 1 to 6; or
rendering with the indoor scene illumination information obtained by the indoor scene illumination estimation method of claim 7; or
rendering with the indoor scene illumination information output by the indoor scene illumination estimation apparatus of claim 8.
CN201911192051.8A 2019-11-28 2019-11-28 Indoor scene illumination estimation model, method and device, storage medium and rendering method Active CN110910486B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911192051.8A CN110910486B (en) 2019-11-28 2019-11-28 Indoor scene illumination estimation model, method and device, storage medium and rendering method
PCT/CN2019/124383 WO2021103137A1 (en) 2019-11-28 2019-12-10 Indoor scene illumination estimation model, method and device, and storage medium and rendering method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911192051.8A CN110910486B (en) 2019-11-28 2019-11-28 Indoor scene illumination estimation model, method and device, storage medium and rendering method

Publications (2)

Publication Number Publication Date
CN110910486A (en) 2020-03-24
CN110910486B CN110910486B (en) 2021-11-19

Family

ID=69820159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911192051.8A Active CN110910486B (en) 2019-11-28 2019-11-28 Indoor scene illumination estimation model, method and device, storage medium and rendering method

Country Status (2)

Country Link
CN (1) CN110910486B (en)
WO (1) WO2021103137A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183637A (en) * 2020-09-29 2021-01-05 中科方寸知微(南京)科技有限公司 Single-light-source scene illumination re-rendering method and system based on neural network
CN112785672A (en) * 2021-01-19 2021-05-11 浙江商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113052970A (en) * 2021-04-09 2021-06-29 杭州群核信息技术有限公司 Neural network-based light intensity and color design method, device and system and storage medium
CN113205585A (en) * 2021-03-25 2021-08-03 浙江大学 Method, device and system for real-time drawing of mutual reflection effect of dynamic object based on approximate point light source and storage medium
CN113379698A (en) * 2021-06-08 2021-09-10 武汉大学 Illumination estimation method based on step-by-step joint supervision
CN113537194A (en) * 2021-07-15 2021-10-22 Oppo广东移动通信有限公司 Illumination estimation method, illumination estimation device, storage medium, and electronic apparatus
CN113572962A (en) * 2021-07-28 2021-10-29 北京大学 Outdoor natural scene illumination estimation method and device
CN115294263A (en) * 2022-10-08 2022-11-04 武汉大学 Illumination estimation model, network, method and system
CN115439595A (en) * 2022-11-07 2022-12-06 四川大学 AR-oriented indoor scene dynamic illumination online estimation method and device
CN117392353A (en) * 2023-12-11 2024-01-12 中南大学 Augmented reality illumination estimation method, system, equipment and storage medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408206B (en) * 2021-06-23 2022-12-06 陕西科技大学 Indoor natural illuminance modeling method
CN114820975B (en) * 2022-04-13 2023-04-11 湖北省国土测绘院 Three-dimensional scene simulation reconstruction system and method based on all-element parameter symbolization
CN116416364B (en) * 2022-10-25 2023-11-03 北京大学 Data acquisition and estimation method and device for urban scene space variable environment illumination
CN115641333B (en) * 2022-12-07 2023-03-21 武汉大学 Indoor illumination estimation method and system based on spherical harmonic gauss
CN116152419B (en) * 2023-04-14 2023-07-11 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN116883578B (en) * 2023-09-06 2023-12-19 腾讯科技(深圳)有限公司 Image processing method, device and related equipment


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10748247B2 (en) * 2017-12-26 2020-08-18 Facebook, Inc. Computing high-resolution depth images using machine learning techniques
CN109166144B * 2018-07-20 2021-08-24 中国海洋大学 Image depth estimation method based on a generative adversarial network
CN110458902B (en) * 2019-03-26 2022-04-05 华为技术有限公司 3D illumination estimation method and electronic equipment
CN110148188B (en) * 2019-05-27 2023-03-10 平顶山学院 Method for estimating low-illumination image illumination distribution based on maximum difference image

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7663623B2 (en) * 2006-12-18 2010-02-16 Microsoft Corporation Spherical harmonics scaling
CN107862734A (en) * 2017-11-14 2018-03-30 华南理工大学 Image illumination rendering method based on a generative adversarial network
CN108154547A (en) * 2018-01-17 2018-06-12 百度在线网络技术(北京)有限公司 Image generating method and device
CN108460841A (en) * 2018-01-23 2018-08-28 电子科技大学 Indoor scene lighting environment estimation method based on a single image
CN109523617A (en) * 2018-10-15 2019-03-26 中山大学 Illumination estimation method based on a monocular camera
CN110335193A (en) * 2019-06-14 2019-10-15 大连理工大学 Unsupervised image translation method guided by the target domain based on a generative adversarial network
CN110458939A (en) * 2019-07-24 2019-11-15 大连理工大学 Indoor scene modeling method based on view generation

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183637B (en) * 2020-09-29 2024-04-09 中科方寸知微(南京)科技有限公司 Single-light-source scene illumination re-rendering method and system based on neural network
CN112183637A (en) * 2020-09-29 2021-01-05 中科方寸知微(南京)科技有限公司 Single-light-source scene illumination re-rendering method and system based on neural network
CN112785672B (en) * 2021-01-19 2022-07-05 浙江商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112785672A (en) * 2021-01-19 2021-05-11 浙江商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113205585A (en) * 2021-03-25 2021-08-03 浙江大学 Method, device and system for real-time drawing of mutual reflection effect of dynamic object based on approximate point light source and storage medium
CN113052970B (en) * 2021-04-09 2023-10-13 杭州群核信息技术有限公司 Design method, device and system for light intensity and color of lamplight and storage medium
CN113052970A (en) * 2021-04-09 2021-06-29 杭州群核信息技术有限公司 Neural network-based light intensity and color design method, device and system and storage medium
CN113379698B (en) * 2021-06-08 2022-07-05 武汉大学 Illumination estimation method based on step-by-step joint supervision
CN113379698A (en) * 2021-06-08 2021-09-10 武汉大学 Illumination estimation method based on step-by-step joint supervision
CN113537194A (en) * 2021-07-15 2021-10-22 Oppo广东移动通信有限公司 Illumination estimation method, illumination estimation device, storage medium, and electronic apparatus
CN113572962A (en) * 2021-07-28 2021-10-29 北京大学 Outdoor natural scene illumination estimation method and device
CN115294263A (en) * 2022-10-08 2022-11-04 武汉大学 Illumination estimation model, network, method and system
CN115294263B (en) * 2022-10-08 2023-02-03 武汉大学 Illumination estimation method and system
CN115439595A (en) * 2022-11-07 2022-12-06 四川大学 AR-oriented indoor scene dynamic illumination online estimation method and device
CN117392353A (en) * 2023-12-11 2024-01-12 中南大学 Augmented reality illumination estimation method, system, equipment and storage medium
CN117392353B (en) * 2023-12-11 2024-03-12 中南大学 Augmented reality illumination estimation method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN110910486B (en) 2021-11-19
WO2021103137A1 (en) 2021-06-03

Similar Documents

Publication Publication Date Title
CN110910486B (en) Indoor scene illumination estimation model, method and device, storage medium and rendering method
WO2021174939A1 (en) Facial image acquisition method and system
CN113572962B (en) Outdoor natural scene illumination estimation method and device
WO2022228383A1 (en) Graphics rendering method and apparatus
WO2022100419A1 (en) Image processing method and related device
KR20220117324A (en) Learning from various portraits
WO2023020201A1 (en) Image enhancement method and electronic device
US20230368459A1 (en) Systems and methods for rendering virtual objects using editable light-source parameter estimation
CN113592726A (en) High dynamic range imaging method, device, electronic equipment and storage medium
CN115100337A (en) Whole body portrait video relighting method and device based on convolutional neural network
CN114125310A (en) Photographing method, terminal device and cloud server
CN115984447A (en) Image rendering method, device, equipment and medium
CN111836058B (en) Method, device and equipment for playing real-time video and storage medium
CN116740261A (en) Image reconstruction method and device and training method and device of image reconstruction model
WO2021151380A1 (en) Method for rendering virtual object based on illumination estimation, method for training neural network, and related products
US20240037788A1 (en) 3d pose estimation in robotics
CN117197323A (en) Large scene free viewpoint interpolation method and device based on neural network
RU2757563C1 (en) Method for visualizing a 3d portrait of a person with altered lighting and a computing device for it
CN114581316A (en) Image reconstruction method, electronic device, storage medium, and program product
JP2014164497A (en) Information processor, image processing method and program
Zhou et al. Improved YOLOv7 models based on modulated deformable convolution and swin transformer for object detection in fisheye images
Shin et al. Hdr map reconstruction from a single ldr sky panoramic image for outdoor illumination estimation
CN116416364B (en) Data acquisition and estimation method and device for urban scene space variable environment illumination
US20230289930A1 (en) Systems and Methods for Lightweight Machine Learning for Image Illumination Control
CN115222578A (en) Image style migration method, program product, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant