CN117953137A - Human body re-illumination method based on dynamic surface reflection field - Google Patents


Info

Publication number
CN117953137A
Authority
CN
China
Prior art keywords
human body
light
dynamic
surface reflection
map
Prior art date
Legal status
Granted
Application number
CN202410353427.3A
Other languages
Chinese (zh)
Other versions
CN117953137B (en)
Inventor
张盛平
孙艺朋靖
柳青林
孟权令
吕晓倩
王晨阳
Current Assignee
Harbin Institute of Technology Weihai
Original Assignee
Harbin Institute of Technology Weihai
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology Weihai filed Critical Harbin Institute of Technology Weihai
Priority to CN202410353427.3A priority Critical patent/CN117953137B/en
Publication of CN117953137A publication Critical patent/CN117953137A/en
Application granted granted Critical
Publication of CN117953137B publication Critical patent/CN117953137B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Image Generation (AREA)

Abstract

The present invention discloses a human body re-illumination method based on a dynamic surface reflection field, comprising the following steps: decomposing the 4D space using multi-plane and hash representations, and encoding the multi-view dynamic human body video to obtain a compact spatiotemporal position encoding; obtaining the signed distance function values, geometric features and color values of the ray sampling points; obtaining the depth, normal, color and material of the corresponding pixels; modeling direct illumination, light visibility and indirect illumination; and constraining the rendered images simultaneously, learning the model parameters, and producing a dynamic human body re-illumination video. The invention models the human body surface reflection field with an efficient 4D implicit representation, overcoming the large fitting errors and low freedom of motion inherent in template-based methods and achieving accurate estimation of the dynamic human body surface reflection field. In the illumination modeling, visibility and indirect light are introduced through ray tracing, accurately simulating the shading effects of secondary light bounces and yielding more accurate material estimation and re-illumination results.

Description

Human body re-illumination method based on dynamic surface reflection field
Technical Field
The invention relates to the technical field of dynamic three-dimensional reconstruction and inverse rendering, in particular to a human body re-illumination method based on a dynamic surface reflection field.
Background
Dynamic human body re-illumination is an important research direction in computer vision and graphics, with applications spanning film production, video game development, virtual reality and other industries. Its core aim is to manipulate light and shadow so that a dynamic human body fuses naturally with a new lighting environment.
Conventional approaches rely on the controllable illumination system of a light stage and sophisticated camera arrays to capture accurate body reflectance; however, the expensive equipment limits their widespread use. To address these limitations, existing approaches propose explicitly optimizing the dynamic body geometry and reflectance field under unknown constant lighting. Nevertheless, achieving fine dynamic reconstruction and high-quality re-illumination effects remains a significant challenge for explicit representations. Driven by advances in implicit neural scene representations, realistic free-viewpoint rendering has become possible, which in turn has spurred the exploration of neural inverse rendering for static object re-illumination. However, such methods are difficult to extend to dynamic scenes because of the representational limits of static radiance fields. To simulate time-varying geometry and reflectance fields undergoing complex motion, the latest approaches use the deformable body template SMPL as an explicit guide for body movement. The fitting errors and limited freedom of movement inherent in template-based modeling hamper existing pipelines, making it difficult to reconstruct dynamic geometric details in more challenging scenarios involving loose clothing and character interactions.
Disclosure of Invention
The invention aims to provide a human body re-illumination method based on a dynamic surface reflection field, which uses a compact spatiotemporal implicit representation to learn human motion with high degrees of freedom and achieves fine dynamic human geometry reconstruction and material estimation. To model accurate shadow effects, the method estimates direct and indirect illumination simultaneously and adopts a physically based rendering method to achieve a realistic rendering effect.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
A human body re-illumination method based on a dynamic surface reflection field comprises the following steps:
Decomposing the 4D space using multi-plane and hash representations, and encoding the input multi-view dynamic human body video with the spatiotemporal multi-plane representation to obtain a compact spatiotemporal position encoding. Specifically: the 4D space is decomposed into a compact multi-planar feature encoder and a time-aware hash encoder. During modeling, rays are emitted from the camera center toward the imaging plane, the rays in the 4D space are sampled with a fixed number of points per ray, and each point is spatiotemporally encoded with the two encoders.
Inputting the spatiotemporal position encoding into a geometry network to obtain the signed distance function values and geometric features of the ray sampling points. Specifically: the spatiotemporal position codes of the ray sampling points are input into a multi-layer perceptron, and the signed distance function values and geometric features of the corresponding points are obtained through rendering-loss fitting.
Inputting the geometric features and spatiotemporal position codes of the ray sampling points into a color network to obtain the color values of the ray sampling points. Specifically: the spatiotemporal position codes of the ray sampling points are spliced with the geometric features, input into a multi-layer perceptron, and the color values of the corresponding points are obtained through rendering-loss fitting.
Integrating the density, normal, color and material of the sampling points on each ray using volume rendering to obtain the depth, normal, color and material of the corresponding pixels, and thereby the depth map, normal map, color map and material map of the dynamic human body.
For illumination modeling, the method estimates direct and indirect illumination simultaneously. Direct illumination is modeled with a spherical Gaussian function whose compact optimizable parameterization converges easily; indirect light relies on the properties of the neural radiance field, with visibility and indirect illumination modeled by ray tracing.
Determining the positions of the surface points using the obtained depth map, and for each surface point obtaining the final rendered image with a physically based rendering method. Specifically: the spatial positions of the surface points are obtained by sampling the rays using the depth information; for each surface point, geometry, material, visibility and illumination are fed into a microfacet model to obtain the final rendered image.
Taking the target video as supervision, simultaneously constraining the rendered images obtained by volume rendering and by the physically based rendering method in the steps above, and learning the model parameters by minimizing the constraints. The main constraint is the rendering loss supervised by the target video, followed by a material smoothness loss and geometric constraints.
During re-illumination, a new ambient light map replaces the direct illumination in the lighting model, and the physically based rendering method synthesizes the dynamic human body re-illumination video under the new illumination.
The effects described in this summary are merely those of embodiments, not all effects of the invention; the above technical solution has the following advantages or beneficial effects:
The invention provides a human body re-illumination method based on a dynamic surface reflection field. It designs an efficient 4D implicit representation to model the human surface reflection field, overcoming the large fitting errors and low freedom of motion inherent in template-based methods and achieving accurate estimation of the dynamic human surface reflection field. In the illumination modeling, visibility and indirect light are introduced through ray tracing, so the shading effects of secondary light bounces are simulated accurately, yielding more accurate material estimation and re-illumination results.
Drawings
FIG. 1 is a flow chart of a human body re-illumination method based on a dynamic surface reflection field.
Detailed Description
As shown in FIG. 1, a human body re-illumination method based on a dynamic surface reflection field comprises the following steps:
S1. Decompose the 4D space using multi-plane and hash representations, and encode the input multi-view dynamic human body video with the spatiotemporal multi-plane representation to obtain a compact spatiotemporal position encoding;
S2. Input the spatiotemporal position encoding into a geometry network to obtain the signed distance function values and geometric features of the ray sampling points;
S3. Input the geometric features and spatiotemporal position codes of the ray sampling points into a color network to obtain the color values of the ray sampling points;
S4. Apply volume rendering to the ray sampling points to obtain the depth, normal, color and material of the corresponding pixels, thereby obtaining the depth map, normal map, color map and material map of the dynamic human body;
S5. Model direct illumination with a spherical Gaussian function, and model light visibility and indirect illumination with ray tracing;
S6. Determine the positions of the surface points using the obtained depth map, and for each surface point obtain the final rendered image with a physically based rendering method;
S7. Take the target video as supervision, simultaneously constrain the rendered images obtained by volume rendering and by the physically based rendering method in the steps above, and learn the model parameters by minimizing the constraints;
S8. During re-illumination, replace the direct illumination with a new ambient light map to obtain the dynamic human body re-illumination video.
In step S1, the 4D space is decomposed into a compact multi-planar feature encoder and a time-aware hash encoder. During modeling, rays are emitted from the camera center toward the imaging plane, the rays in the 4D space are sampled with a fixed number of points per ray, and each point is spatiotemporally encoded with the two encoders. For each sampling point (x, t) in space-time, the encoding can be defined as:
e(x, t) = M(x, t) ⊕ H(x, t),  with M(x, t) = ⊙_k P_k(x, t)
where M denotes the multi-planar feature encoder, H denotes the time-aware hash encoder, P_k are the low-dimensional tensors decomposed from the 4D tensor, ⊕ denotes the splice (concatenation) operation, and ⊙ denotes the Hadamard product.
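The encoding step above can be sketched in NumPy. This is a minimal illustration under assumed resolutions, feature dimensions, plane set and hash function; the nearest-neighbour lookup and the toy spatial hash are stand-ins for the patent's actual (unspecified) multi-plane interpolation and time-aware hash scheme.

```python
import numpy as np

RES, FDIM, TABLE = 32, 8, 2 ** 14  # illustrative sizes, not the patent's

rng = np.random.default_rng(0)
# Six low-dimensional planes decomposed from the 4D (x, y, z, t) tensor.
PLANES = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
plane_feats = [rng.normal(size=(RES, RES, FDIM)) for _ in PLANES]
hash_table = rng.normal(size=(TABLE, FDIM))

def plane_lookup(feat, u, v):
    """Nearest-neighbour lookup on one feature plane (bilinear in practice)."""
    i = np.clip((u * (RES - 1)).astype(int), 0, RES - 1)
    j = np.clip((v * (RES - 1)).astype(int), 0, RES - 1)
    return feat[i, j]

def hash_lookup(xt):
    """Toy spatial hash over the integer lattice of the 4D point."""
    primes = np.array([1, 2654435761, 805459861, 3674653429], dtype=np.uint64)
    cell = (xt * (RES - 1)).astype(np.uint64)
    idx = np.bitwise_xor.reduce(cell * primes, axis=-1) % TABLE
    return hash_table[idx]

def encode(xt):
    """xt: (N, 4) points in [0,1]^4 -> (N, 2*FDIM) spatiotemporal codes."""
    m = np.ones((xt.shape[0], FDIM))
    for (a, b), feat in zip(PLANES, plane_feats):
        m *= plane_lookup(feat, xt[:, a], xt[:, b])   # Hadamard product
    h = hash_lookup(xt)
    return np.concatenate([m, h], axis=-1)            # splice (concatenation)

codes = encode(rng.uniform(size=(5, 4)))
print(codes.shape)  # (5, 16)
```

In a real pipeline both the plane features and the hash table would be learnable parameters fit through the rendering loss.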
In step S2, the spatiotemporal position codes of the ray sampling points are input into a small multi-layer perceptron, and the signed distance function values and geometric features of the corresponding points are obtained through rendering-loss fitting. The process can be expressed as:
(s, z) = F_g(e(x, t))
where F_g is the geometry network, s is the signed distance function value, and z is the geometric feature.
In step S3, the spatiotemporal position codes of the sampling points are spliced with the geometric features and input into a small multi-layer perceptron, and the color values of the corresponding ray sampling points are obtained through rendering-loss fitting. The process can be expressed as:
c = F_c(e(x, t) ⊕ z)
where F_c is the color network and c is the color value of the sampling point.
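The geometry and color networks of steps S2 and S3 can be sketched as two small MLPs. Layer widths, activations and the random weights below are illustrative assumptions; in the method both networks are fit through the rendering loss rather than initialized and used directly.

```python
import numpy as np

rng = np.random.default_rng(1)
CODE, GEO, HID = 16, 15, 32  # assumed code / feature / hidden widths

def mlp(sizes):
    """Random weight/bias pairs for a small fully connected network."""
    return [(rng.normal(scale=0.3, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)           # ReLU on hidden layers
    return x

geo_net = mlp([CODE, HID, 1 + GEO])          # -> (sdf value, geometric feature)
col_net = mlp([CODE + GEO, HID, 3])          # -> RGB color

def query(codes):
    g = forward(geo_net, codes)
    sdf, feat = g[:, :1], g[:, 1:]           # split SDF value from feature
    spliced = np.concatenate([codes, feat], axis=-1)
    rgb = 1.0 / (1.0 + np.exp(-forward(col_net, spliced)))  # sigmoid to [0,1]
    return sdf, feat, rgb

sdf, feat, rgb = query(rng.normal(size=(4, CODE)))
print(sdf.shape, feat.shape, rgb.shape)  # (4, 1) (4, 15) (4, 3)
```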
In step S4, the density, normal, color and material of the sampling points on each ray are integrated using the volume rendering technique to obtain the depth map, normal map, color map and material map of the dynamic human body. Taking the color map as an example, this process can be expressed as:
C(o, v) = ∫ T(t) σ(r(t)) c(r(t)) dt,  with r(t) = o - t v and T(t) = exp(-∫_0^t σ(r(s)) ds)
where o denotes the camera center, v denotes the opposite of the direction of the ray emitted from the camera center, T denotes the transmittance, σ denotes the volume density, c is the sampling-point color value, and C is the volume-rendered pixel color.
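The discrete form of this integral is alpha compositing, and the same weights composite any per-sample attribute (color, depth, normal, material). A minimal sketch with assumed toy densities:

```python
import numpy as np

def composite(sigma, attr, delta):
    """sigma: (N,) densities; attr: (N, C) per-sample attributes;
    delta: (N,) sample spacings. Returns the composited pixel attribute."""
    alpha = 1.0 - np.exp(-sigma * delta)                            # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))   # transmittance T_i
    weights = trans * alpha
    return (weights[:, None] * attr).sum(axis=0)

sigma = np.array([0.0, 0.5, 5.0, 50.0])   # density rising toward a surface
delta = np.full(4, 0.1)
color = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
depth = np.array([[1.0], [1.1], [1.2], [1.3]])

px_color = composite(sigma, color, delta)  # dominated by the dense samples
px_depth = composite(sigma, depth, delta)  # composited depth locates the surface point
print(px_color, px_depth)
```

Compositing depth this way is what later lets step S6 recover surface-point positions from the depth map.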
In step S5, for illumination modeling, the method estimates direct and indirect illumination simultaneously. Direct illumination is modeled with a spherical Gaussian function whose compact optimizable parameterization converges easily; indirect light relies on the properties of the neural radiance field, and visibility and indirect illumination are obtained by ray tracing.
Direct illumination L_d can be expressed as:
L_d(ω_i) = Σ_{k=1}^{K} G(ω_i; ξ_k, λ_k, μ_k)
where G denotes the mixed spherical Gaussian function, {ξ_k, λ_k, μ_k} are the optimizable parameters of lobe k, K is the total number of lobes, and ω_i is the incident direction of the light.
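Evaluating such a spherical Gaussian mixture is a few lines of NumPy. The lobe count and the random axes, sharpnesses and amplitudes below are illustrative, and the common SG kernel exp(λ(ξ·ω - 1)) is assumed for G:

```python
import numpy as np

def sg_eval(omega, axes, sharp, amp):
    """omega: (3,) unit incident direction; axes: (K, 3) unit lobe axes xi;
    sharp: (K,) sharpness lambda; amp: (K, 3) RGB amplitudes mu.
    Returns the RGB radiance of the mixture in direction omega."""
    cos = axes @ omega                     # cosine between omega and each lobe axis
    g = np.exp(sharp * (cos - 1.0))        # SG kernel, peaks when omega == axis
    return (g[:, None] * amp).sum(axis=0)  # sum over the K lobes

rng = np.random.default_rng(2)
K = 16                                     # assumed lobe count
axes = rng.normal(size=(K, 3))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)
sharp = rng.uniform(5.0, 50.0, size=K)
amp = rng.uniform(0.0, 1.0, size=(K, 3))

L = sg_eval(np.array([0.0, 0.0, 1.0]), axes, sharp, amp)
print(L)   # RGB radiance arriving from the +z direction
```

Because the whole environment is compressed into K × 7 scalars, the representation is compact and easy to optimize, which is the convergence property the text refers to.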
Indirect light relies on the properties of the neural radiance field; visibility V and indirect illumination L_ind are obtained by ray tracing. The ray emitted from surface point x in incident direction ω_i can be expressed as r(t) = x + t ω_i. Along this ray:
V(x, ω_i) = T_N = Π_{i=1}^{N} exp(-σ_i δ_i),  L_ind(x, ω_i) = Ĉ(x, ω_i)
where x is the location of the surface point, Ĉ is the pixel color obtained by volume rendering along the ray, T_i is the transmittance at the i-th sample point, σ_i is its density, and δ_i is the sampling interval of the i-th sample. In actual sampling, N (= 512) points are acquired by discrete sampling.
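The secondary-ray tracing can be sketched as follows. The density and color fields here are toy stand-ins (a single dense ball as occluder) for the learned networks; N = 512 follows the text, while the ray length and field values are assumptions.

```python
import numpy as np

N = 512  # discrete samples per secondary ray, as in the text

def occluder_sigma(p):
    """Toy density field: a dense ball of radius 0.2 centred at (0, 0, 1)."""
    return np.where(np.linalg.norm(p - np.array([0.0, 0.0, 1.0]), axis=-1) < 0.2,
                    50.0, 0.0)

def field_color(p):
    return np.full((p.shape[0], 3), 0.3)              # constant toy radiance

def trace(x, omega, t_far=2.0):
    """Visibility and indirect light along r(t) = x + t * omega."""
    t = np.linspace(1e-3, t_far, N)
    delta = np.full(N, t_far / N)                     # sampling intervals
    pts = x + t[:, None] * omega
    sigma = occluder_sigma(pts)
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    visibility = trans[-1] * (1.0 - alpha[-1])        # light that survives the ray
    indirect = ((trans * alpha)[:, None] * field_color(pts)).sum(axis=0)
    return visibility, indirect

x = np.zeros(3)
v_blocked, _ = trace(x, np.array([0.0, 0.0, 1.0]))    # ray hits the ball
v_clear, _ = trace(x, np.array([0.0, 0.0, -1.0]))     # ray misses it
print(round(v_blocked, 3), round(v_clear, 3))         # → 0.0 1.0
```

Note that the same accumulated transmittance that yields visibility also weights the composited color, so one traced ray delivers both quantities.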
In step S6, the spatial positions of the surface points are obtained by sampling the rays using the depth information; for each surface point, the final rendered image is obtained with a physically based rendering method that feeds geometry, material, visibility and illumination into a microfacet model. The physically based rendering formula is:
L_o(ω_o, x) = ∫_Ω L_i(ω_i, x) R(x, ω_i, ω_o) (ω_i · n) dω_i
where n is the normal direction, L_i is the incident radiance received from direction ω_i, ω_o is the outgoing direction, and R is the surface material (BRDF).
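The hemispherical integral can be sketched by Monte Carlo estimation. A simple Lambertian-plus-glossy BRDF stands in here for the full microfacet model, and the uniform environment, albedo and roughness values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def incident_radiance(omega_i):
    return np.array([1.0, 0.9, 0.8])                  # uniform toy environment

def brdf(n, omega_i, omega_o, albedo, roughness):
    """Diffuse lobe plus a crude glossy lobe (stand-in for a microfacet BRDF)."""
    h = omega_i + omega_o
    h /= np.linalg.norm(h)                            # half vector
    spec = (max(n @ h, 0.0) ** (2.0 / max(roughness, 1e-3))) * 0.04
    return albedo / np.pi + spec

def shade(n, omega_o, albedo, roughness, n_samples=4096):
    """Monte Carlo estimate of outgoing radiance L_o at one surface point."""
    d = rng.normal(size=(n_samples, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    d[d @ n < 0] *= -1.0                              # fold onto upper hemisphere
    total = np.zeros(3)
    for omega_i in d:                                 # integrate L_i * R * (w_i . n)
        total += (incident_radiance(omega_i)
                  * brdf(n, omega_i, omega_o, albedo, roughness)
                  * (omega_i @ n))
    return total / n_samples * 2.0 * np.pi            # divide by uniform pdf 1/(2*pi)

n = np.array([0.0, 0.0, 1.0])
omega_o = np.array([0.0, 0.0, 1.0])
rgb = shade(n, omega_o, albedo=np.array([0.6, 0.4, 0.3]), roughness=0.3)
print(rgb)   # outgoing radiance toward the camera
```

In the method itself, incident_radiance would combine the spherical Gaussian direct light (masked by the traced visibility) with the indirect term from the radiance field.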
In step S7, with the target video as supervision, the rendered images obtained by volume rendering and by the physically based rendering method in the steps above are constrained simultaneously. The main constraint is the rendering loss supervised by the target video, followed by a material smoothness loss and geometric constraints; the model parameters are learned by minimizing these constraints.
The main constraint loss L_c is defined as:
L_c = ||C_v - C_gt||^2 + ||C_p - C_gt||^2
where C_v is the color produced by volume rendering, C_p is the physically based rendered color, and C_gt is the ground-truth color used for supervision.
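A sketch of the joint constraint: both renderings are compared against the same supervising frame, with a secondary smoothness term. The smoothness formulation and the 0.01 weight are illustrative placeholders, not values from the patent:

```python
import numpy as np

def render_loss(c_vol, c_pbr, c_gt):
    """Main constraint: both renderings supervised by the target frame."""
    return np.mean((c_vol - c_gt) ** 2) + np.mean((c_pbr - c_gt) ** 2)

def material_smoothness(mat, mat_jittered, weight=0.01):
    """Secondary constraint: material should vary slowly over the surface."""
    return weight * np.mean(np.abs(mat - mat_jittered))

rng = np.random.default_rng(4)
c_gt = rng.uniform(size=(8, 3))                       # target-video pixels
c_vol = c_gt + rng.normal(scale=0.05, size=(8, 3))    # volume-rendered colors
c_pbr = c_gt + rng.normal(scale=0.05, size=(8, 3))    # physically based colors
mat = rng.uniform(size=(8, 5))                        # per-point material params

total = render_loss(c_vol, c_pbr, c_gt) + material_smoothness(mat, mat + 0.01)
print(float(total) > 0.0)  # True
```

Gradient descent on this scalar is what jointly fits the encoders, the geometry/color networks and the illumination parameters.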
In step S8, after modeling is completed, only a new ambient light map is needed to replace direct illumination during re-illumination, so as to obtain a dynamic human body re-illumination video.
While the foregoing description of the embodiments of the present invention has been presented in conjunction with the drawings, it should be understood that it is not intended to limit the scope of the invention, but rather, it is intended to cover all modifications or variations within the scope of the invention as defined by the claims of the present invention.

Claims (9)

1. A human body re-illumination method based on a dynamic surface reflection field, characterized by comprising the following steps:
S1. decomposing the 4D space using multi-plane and hash representations, and encoding the input multi-view dynamic human body video with the spatiotemporal multi-plane representation to obtain a compact spatiotemporal position encoding;
S2. inputting the spatiotemporal position encoding into a geometry network to obtain the signed distance function values and geometric features of the ray sampling points;
S3. inputting the geometric features and spatiotemporal position encoding of the ray sampling points into a color network to obtain the color values of the ray sampling points;
S4. applying volume rendering to the ray sampling points to obtain the depth, normal, color and material of the corresponding pixels, thereby obtaining the depth map, normal map, color map and material map of the dynamic human body;
S5. modeling direct illumination with a spherical Gaussian function, and modeling light visibility and indirect illumination with ray tracing;
S6. determining the positions of the surface points using the depth map obtained in S4, and for each surface point obtaining a rendered image with a physically based rendering method;
S7. taking the target video as supervision, simultaneously constraining the rendered image obtained by volume rendering in S4 and the rendered image obtained by the physically based rendering method in S6, and learning the model parameters by minimizing the constraints;
S8. during re-illumination, replacing the direct illumination with a new ambient light map to obtain the dynamic human body re-illumination video.
2. The method according to claim 1, wherein S1 specifically comprises: decomposing the 4D space into a compact multi-planar feature encoder and a time-aware hash encoder; during modeling, emitting rays from the camera center toward the imaging plane, sampling each ray in the 4D space, and performing spatiotemporal position encoding for each ray sampling point with the multi-planar feature encoder and the hash encoder.
3. The method according to claim 1, wherein S2 specifically comprises: inputting the spatiotemporal position encoding of the ray sampling points into a multi-layer perceptron, and obtaining the signed distance function values and geometric features of the corresponding ray sampling points through rendering-loss fitting.
4. The method according to claim 1, wherein S3 specifically comprises: splicing the spatiotemporal position encoding of the ray sampling points with the geometric features, inputting the result into a multi-layer perceptron, and obtaining the color values of the corresponding ray sampling points through rendering-loss fitting.
5. The method according to claim 1, wherein S4 specifically comprises: using volume rendering to integrate the density, normal, color and material of the sampling points on each ray, obtaining the depth map, normal map, color map and material map of the dynamic human body.
6. The method according to claim 1, wherein S5 specifically comprises: modeling the direct illumination with a spherical Gaussian function, whose compact optimizable parameterization makes it easy to converge; the indirect light relies on the properties of the neural radiance field, and visibility and indirect illumination are obtained by ray tracing.
7. The method according to claim 1, wherein S6 specifically comprises: obtaining the spatial positions of the surface points by sampling the rays with the depth information, and for each surface point, feeding geometry, material, visibility and illumination into a microfacet model through a physically based rendering method to obtain the final rendered image.
8. The method according to claim 1, wherein S7 specifically comprises: taking the target video as supervision and simultaneously constraining the rendered image obtained by volume rendering in S4 and the rendered image obtained by the physically based rendering method in S6, where the main constraint is the rendering loss supervised by the target video, followed by a material smoothness loss and geometric constraints.
9. The method according to claim 1, wherein S8 specifically comprises: replacing the direct illumination in the lighting model with a new ambient light map, and synthesizing the dynamic human body video under the new illumination with the physically based rendering method.
CN202410353427.3A 2024-03-27 2024-03-27 A human body re-illumination method based on dynamic surface reflection field Active CN117953137B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410353427.3A CN117953137B (en) 2024-03-27 2024-03-27 A human body re-illumination method based on dynamic surface reflection field

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410353427.3A CN117953137B (en) 2024-03-27 2024-03-27 A human body re-illumination method based on dynamic surface reflection field

Publications (2)

Publication Number Publication Date
CN117953137A true CN117953137A (en) 2024-04-30
CN117953137B CN117953137B (en) 2024-06-14

Family

ID=90796628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410353427.3A Active CN117953137B (en) 2024-03-27 2024-03-27 A human body re-illumination method based on dynamic surface reflection field

Country Status (1)

Country Link
CN (1) CN117953137B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118710821A (en) * 2024-08-30 2024-09-27 中央广播电视总台 Dynamic scene reconstruction method, device, computer equipment and storage medium
CN118941723A (en) * 2024-10-12 2024-11-12 南昌大学 A 3D face reconstruction method based on differentiable ray tracing

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183637A (en) * 2020-09-29 2021-01-05 中科方寸知微(南京)科技有限公司 A method and system for single light source scene illumination re-rendering based on neural network
CN112927341A (en) * 2021-04-02 2021-06-08 腾讯科技(深圳)有限公司 Illumination rendering method and device, computer equipment and storage medium
CN113240622A (en) * 2021-03-12 2021-08-10 清华大学 Human body scene image intrinsic decomposition and relighting method and device
WO2021223134A1 (en) * 2020-05-07 2021-11-11 浙江大学 Micro-renderer-based method for acquiring reflection material of human face from single image
CN114092625A (en) * 2021-11-19 2022-02-25 山东大学 Real-time multi-scale high-frequency material rendering method and system based on normal map
CN114429538A (en) * 2022-04-02 2022-05-03 中科计算技术创新研究院 Method for interactively editing nerve radiation field geometry
CN114972617A (en) * 2022-06-22 2022-08-30 北京大学 A Guided Rendering-Based Scene Illumination and Reflection Modeling Method
CN115131492A (en) * 2022-04-12 2022-09-30 腾讯科技(深圳)有限公司 Target object relighting method and device, storage medium and background replacement method
US20220335636A1 (en) * 2021-04-15 2022-10-20 Adobe Inc. Scene reconstruction using geometry and reflectance volume representation of scene
WO2022231582A1 (en) * 2021-04-28 2022-11-03 Google Llc Photo relighting and background replacement based on machine learning models
CN115719399A (en) * 2022-09-30 2023-02-28 中国人民解放军国防科技大学 Object illumination editing method, system and medium based on single picture
CN116051696A (en) * 2023-01-10 2023-05-02 之江实验室 Reconstruction method and device of human body implicit model capable of being re-illuminated
CN116310018A (en) * 2022-12-07 2023-06-23 西北大学 A Model Hybrid Rendering Method Based on Virtual Lighting Environment and Light Query
CN116485994A (en) * 2023-03-08 2023-07-25 浙江大学 Scene reverse rendering method and device based on neural implicit expression
CN116934948A (en) * 2023-06-15 2023-10-24 清华大学 Re-illuminable three-dimensional digital human construction method and device based on multi-view video
CN116958396A (en) * 2023-07-18 2023-10-27 咪咕文化科技有限公司 Image relighting method, device and readable storage medium
CN116977536A (en) * 2023-08-14 2023-10-31 北京航空航天大学 Novel visual angle synthesis method for borderless scene based on mixed nerve radiation field
CN117237527A (en) * 2023-08-25 2023-12-15 上海人工智能创新中心 Multi-view three-dimensional reconstruction method
CN117671126A (en) * 2023-12-12 2024-03-08 四川大学 Spatially varying indoor scene illumination estimation method based on neural radiation field

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021223134A1 (en) * 2020-05-07 2021-11-11 浙江大学 Micro-renderer-based method for acquiring reflection material of human face from single image
CN112183637A (en) * 2020-09-29 2021-01-05 中科方寸知微(南京)科技有限公司 A method and system for single light source scene illumination re-rendering based on neural network
CN113240622A (en) * 2021-03-12 2021-08-10 清华大学 Human body scene image intrinsic decomposition and relighting method and device
CN112927341A (en) * 2021-04-02 2021-06-08 腾讯科技(深圳)有限公司 Illumination rendering method and device, computer equipment and storage medium
US20220335636A1 (en) * 2021-04-15 2022-10-20 Adobe Inc. Scene reconstruction using geometry and reflectance volume representation of scene
WO2022231582A1 (en) * 2021-04-28 2022-11-03 Google Llc Photo relighting and background replacement based on machine learning models
CN114092625A (en) * 2021-11-19 2022-02-25 山东大学 Real-time multi-scale high-frequency material rendering method and system based on normal map
CN114429538A (en) * 2022-04-02 2022-05-03 中科计算技术创新研究院 Method for interactively editing nerve radiation field geometry
CN115131492A (en) * 2022-04-12 2022-09-30 腾讯科技(深圳)有限公司 Target object relighting method and device, storage medium and background replacement method
CN114972617A (en) * 2022-06-22 2022-08-30 北京大学 A Guided Rendering-Based Scene Illumination and Reflection Modeling Method
CN115719399A (en) * 2022-09-30 2023-02-28 中国人民解放军国防科技大学 Object illumination editing method, system and medium based on single picture
CN116310018A (en) * 2022-12-07 2023-06-23 西北大学 A Model Hybrid Rendering Method Based on Virtual Lighting Environment and Light Query
CN116051696A (en) * 2023-01-10 2023-05-02 之江实验室 Reconstruction method and device of human body implicit model capable of being re-illuminated
CN116485994A (en) * 2023-03-08 2023-07-25 浙江大学 Scene reverse rendering method and device based on neural implicit expression
CN116934948A (en) * 2023-06-15 2023-10-24 清华大学 Re-illuminable three-dimensional digital human construction method and device based on multi-view video
CN116958396A (en) * 2023-07-18 2023-10-27 咪咕文化科技有限公司 Image relighting method, device and readable storage medium
CN116977536A (en) * 2023-08-14 2023-10-31 北京航空航天大学 Novel view synthesis method for unbounded scenes based on a hybrid neural radiance field
CN117237527A (en) * 2023-08-25 2023-12-15 上海人工智能创新中心 Multi-view three-dimensional reconstruction method
CN117671126A (en) * 2023-12-12 2024-03-08 四川大学 Spatially varying indoor scene illumination estimation method based on neural radiance field

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
KAI ZHANG: "PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting", 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2 November 2021 (2021-11-02) *
ZHONG LI: "Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field", ACM MULTIMEDIA, 23 October 2023 (2023-10-23) *
严忻恺: "A Survey of Neural Rendering and Its Hardware Acceleration", Journal of Computer Research and Development, 9 January 2024 (2024-01-09) *
吴洪宇; 金鑫: "Research Hotspots of Image-Based Virtual Lighting and Shadow Technology", Science & Technology Review, no. 06, 28 March 2020 (2020-03-28) *
宋仪: "Research on Image Relighting Based on Deep Learning", CNKI Masters' Theses Electronic Journal, 8 September 2021 (2021-09-08) *
马晨星: "Research on Face Image Relighting Based on Decomposition Optimization", CNKI Masters' Theses Electronic Journal, 14 March 2024 (2024-03-14) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118710821A (en) * 2024-08-30 2024-09-27 中央广播电视总台 Dynamic scene reconstruction method, device, computer equipment and storage medium
CN118941723A (en) * 2024-10-12 2024-11-12 南昌大学 A 3D face reconstruction method based on differentiable ray tracing

Also Published As

Publication number Publication date
CN117953137B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN117953137A (en) Human body re-illumination method based on dynamic surface reflection field
US12211225B2 (en) Scene reconstruction using geometry and reflectance volume representation of scene
Li et al. [Retracted] Multivisual Animation Character 3D Model Design Method Based on VR Technology
CN115115688B (en) Image processing method and electronic equipment
CN114972617B (en) Scene illumination and reflection modeling method based on differentiable rendering
US11663775B2 (en) Generating physically-based material maps
CN110246209B (en) Image processing method and device
CN117557714A (en) Three-dimensional reconstruction method, electronic device and readable storage medium
CN116051696B (en) Reconstruction method and device for a re-illuminable implicit human body model
CN116416375A (en) A 3D reconstruction method and system based on deep learning
CN114926553A (en) Three-dimensional scene consistency stylization method and system based on neural radiance field
CN111862278A (en) Animation obtaining method and device, electronic equipment and storage medium
CN116134491A (en) Multi-view neuro-human prediction using implicit differentiable renderers for facial expression, body posture morphology, and clothing performance capture
CN117422829A (en) Face image synthesis optimization method based on neural radiance field
KR102291162B1 (en) Apparatus and method for generating virtual data for artificial intelligence learning
CN115222917A (en) Training method, device and equipment for three-dimensional reconstruction model and storage medium
Mittal Neural radiance fields: Past, present, and future
CN112634456A (en) Real-time photorealistic rendering method for complex three-dimensional models based on deep learning
Wang et al. A new era of indoor scene reconstruction: A survey
Liu et al. Neural impostor: Editing neural radiance fields with explicit shape manipulation
CN117649478B (en) Model training method, image processing method and electronic equipment
CN115981467B (en) Image synthesis parameter determining method, image synthesis method and device
CN115953524A (en) Data processing method and device, computer equipment and storage medium
Helmrich et al. A scalable pipeline to create synthetic datasets from functional–structural plant models for deep learning
Zhang et al. Survey on controllable image synthesis with deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant