CN117953137A - Human body re-illumination method based on dynamic surface reflection field - Google Patents
Human body relighting method based on dynamic surface reflection field
- Publication number: CN117953137A
- Application number: CN202410353427.3A
- Authority: CN (China)
- Prior art keywords: human body, illumination, light, rendering, space
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/506—Illumination models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B20/00—Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
- Y02B20/40—Control techniques providing energy savings, e.g. smart controller or presence detection
Abstract
The invention discloses a human body relighting method based on a dynamic surface reflection field, comprising the following steps: decompose the 4D space using multi-plane and hash representations, and encode the multi-view dynamic human video to obtain a compact spatio-temporal position encoding; obtain the signed distance function value, geometric features, and color value of each ray sample point; obtain the depth, normal, color, and material of the corresponding pixel; model direct illumination, light visibility, and indirect illumination; and simultaneously constrain the rendered images and learn the model parameters to obtain a dynamic human relighting video. The invention designs an efficient 4D implicit representation to model the human surface reflection field, overcoming the large fitting error and low freedom of motion inherent in template-based methods and achieving accurate estimation of the dynamic human surface reflection field. In the illumination modeling, visibility and indirect light are introduced through ray tracing, accurately simulating secondary-bounce shading and achieving more accurate material estimation and relighting effects.
Description
Technical Field
The invention relates to the technical field of dynamic three-dimensional reconstruction and inverse rendering, and in particular to a human body relighting method based on a dynamic surface reflection field.
Background
Dynamic human relighting is an important research direction in computer vision and graphics, with applications spanning film production, video game development, virtual reality, and other industries. Its core aim is to manipulate light and shadow so that a dynamic human blends naturally into a new lighting environment.
Conventional approaches rely on the controllable illumination systems of a Light Stage and sophisticated camera arrays to capture accurate human reflectance; however, the expensive equipment limits their widespread use. To address these limitations, existing approaches explicitly optimize dynamic human geometry and the reflection field under unknown constant lighting conditions. Nevertheless, achieving fine dynamic reconstruction and high-quality relighting remains a significant challenge for explicit representations. Driven by the development of implicit neural scene representations, photorealistic free-viewpoint rendering has become possible, spurring the exploration of neural inverse rendering for relighting static objects. However, constrained by the representational limits of static radiance fields, these methods are difficult to extend to dynamic scenes. To model time-varying geometry and reflection fields under complex motion, the latest approaches use the deformable body template SMPL as an explicit guide for body motion. The fitting errors and limited freedom of motion inherent in template-based modeling hinder existing pipelines, making it difficult to reconstruct dynamic geometric detail in more challenging scenarios involving loose clothing and character interaction.
Disclosure of Invention
The invention aims to provide a human body relighting method based on a dynamic surface reflection field that uses a compact spatio-temporal implicit representation to learn human motion with high degrees of freedom, achieving fine dynamic human geometry reconstruction and material estimation. To model accurate shadow effects, the method estimates direct and indirect illumination simultaneously and adopts physically based rendering to achieve realistic results.
To achieve the above purpose, the invention adopts the following technical scheme:
A human body relighting method based on a dynamic surface reflection field comprises the following steps:
Decompose the 4D space using multi-plane and hash representations, and encode the input multi-view dynamic human video with the spatio-temporal multi-plane representation to obtain a compact spatio-temporal position encoding. Specifically: the 4D space is decomposed into a compact multi-plane feature encoder and a time-aware hash encoder. During modeling, rays are cast from the camera center through the imaging plane, the rays in the 4D space are sampled with a fixed number of points each, and each point is encoded in space-time using the two encoders above.
Input the spatio-temporal position encoding into a geometry network to obtain the signed distance function value and geometric features of each ray sample point. Specifically: the spatio-temporal position encodings of the ray sample points are fed into a multi-layer perceptron, and the signed distance values and geometric features of the corresponding points are obtained by fitting the rendering loss.
Input the geometric features and spatio-temporal position encodings of the ray sample points into a color network to obtain their color values. Specifically: the spatio-temporal position encodings of the ray sample points are concatenated with the geometric features and fed into a multi-layer perceptron, and the color values of the corresponding points are obtained by fitting the rendering loss.
Integrate the density, normal, color, and material of the sample points along each ray using volume rendering to obtain the depth, normal, color, and material of the corresponding pixel, yielding a depth map, normal map, color map, and material map of the dynamic human body.
For illumination modeling, the method estimates direct and indirect illumination simultaneously. Direct illumination is modeled with spherical Gaussian functions, whose compact parameterization is easy to optimize and converge; indirect light relies on the properties of the neural radiance field, with visibility and indirect illumination modeled via ray tracing.
Determine the surface point positions from the obtained depth map, and obtain the final rendered image for each surface point via physically based rendering. Specifically: the spatial position of each surface point is obtained by sampling along the ray using the depth information, and for each surface point a microfacet model takes geometry, material, visibility, and illumination as input to produce the final rendered image through physically based rendering.
With the target video as supervision, simultaneously constrain the images obtained by volume rendering and by physically based rendering in the above steps, and learn the model parameters by minimizing the constraints. The primary constraint is the rendering loss supervised by the target video, complemented by material smoothness losses and geometric constraints.
At relighting time, a new environment map replaces the direct illumination in the lighting model, and physically based rendering synthesizes the dynamic human relighting video under the new illumination.
The effects provided in this summary are merely the effects of embodiments, not all effects of the invention. The above technical solution has the following advantages or beneficial effects:
The invention provides a human body relighting method based on a dynamic surface reflection field, designing an efficient 4D implicit representation to model the human surface reflection field. This overcomes the large fitting error and low freedom of motion inherent in template-based methods and achieves accurate estimation of the dynamic human surface reflection field. In the illumination modeling, visibility and indirect light are introduced through ray tracing, accurately simulating secondary-bounce shading and achieving more accurate material estimation and relighting effects.
Drawings
FIG. 1 is a flow chart of the human body relighting method based on a dynamic surface reflection field.
Detailed Description
As shown in FIG. 1, a human body relighting method based on a dynamic surface reflection field comprises the following steps:
S1. Decompose the 4D space using multi-plane and hash representations, and encode the input multi-view dynamic human video with the spatio-temporal multi-plane representation to obtain a compact spatio-temporal position encoding;
S2. Input the spatio-temporal position encoding into a geometry network to obtain the signed distance function value and geometric features of each ray sample point;
S3. Input the geometric features and spatio-temporal position encodings of the ray sample points into a color network to obtain their color values;
S4. Apply volume rendering to the ray sample points to obtain the depth, normal, color, and material of each pixel, yielding a depth map, normal map, color map, and material map of the dynamic human body;
S5. Model direct illumination with spherical Gaussian functions, and model light visibility and indirect illumination with ray tracing;
S6. Determine the surface point positions from the obtained depth map, and obtain the final rendered image for each surface point via physically based rendering;
S7. With the target video as supervision, simultaneously constrain the images obtained by volume rendering and by physically based rendering in the above steps, and learn the model parameters by minimizing the constraints;
S8. At relighting time, replace the direct illumination with a new environment map to obtain a dynamic human relighting video.
In step S1, the 4D space is decomposed into a compact multi-plane feature encoder and a time-aware hash encoder. During modeling, rays are cast from the camera center through the imaging plane, the rays in the 4D space are sampled with a fixed number of points each, and each point is encoded in space-time using the two encoders. For each sample point $\mathbf{x} = (x, y, z, t)$ in space-time, the encoding can be defined as:
$$e(\mathbf{x}) = M(\mathbf{x}) \oplus H(\mathbf{x}), \qquad M(\mathbf{x}) = \bigodot_{k} P_k\big(\pi_k(\mathbf{x})\big),$$
where $M$ denotes the multi-plane feature encoder, $H$ the time-aware hash encoder, $P_k$ the low-dimensional tensors decomposed from the 4D tensor (queried via the plane projections $\pi_k$), $\oplus$ the concatenation operation, and $\odot$ the Hadamard product.
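The patent gives no implementation details for this encoder, but the multi-plane-plus-hash scheme described above can be sketched as follows. All resolutions, feature widths, and the hashing scheme are illustrative assumptions (K-Planes-style factor planes combined by a Hadamard product, plus a hashed 4D lookup), not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy resolutions and feature widths (not from the patent).
RES, FDIM = 16, 4
PLANES = ["xy", "xz", "yz", "xt", "yt", "zt"]
plane_grids = {p: rng.standard_normal((RES, RES, FDIM)) for p in PLANES}
HASH_SIZE, HDIM = 1024, 4
hash_table = rng.standard_normal((HASH_SIZE, HDIM))

def bilerp(grid, u, v):
    """Bilinearly interpolate a (RES, RES, F) feature grid at (u, v) in [0, 1]."""
    x, y = u * (RES - 1), v * (RES - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, RES - 1), min(y0 + 1, RES - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * grid[x0, y0] + dx * (1 - dy) * grid[x1, y0]
            + (1 - dx) * dy * grid[x0, y1] + dx * dy * grid[x1, y1])

def multiplane_features(x, y, z, t):
    """Hadamard product of features interpolated from the six factor planes."""
    coords = {"xy": (x, y), "xz": (x, z), "yz": (y, z),
              "xt": (x, t), "yt": (y, t), "zt": (z, t)}
    feat = np.ones(FDIM)
    for p in PLANES:
        feat *= bilerp(plane_grids[p], *coords[p])
    return feat

def hash_features(x, y, z, t):
    """Time-aware hash lookup on a quantized 4D coordinate (toy XOR-prime hash)."""
    key = (int(x * 255), int(y * 255), int(z * 255), int(t * 255))
    primes = (1, 2654435761, 805459861, 3674653429)
    h = 0
    for k, p in zip(key, primes):
        h ^= k * p
    return hash_table[h % HASH_SIZE]

def encode(x, y, z, t):
    """Compact spatio-temporal code: plane features concatenated with hash features."""
    return np.concatenate([multiplane_features(x, y, z, t),
                           hash_features(x, y, z, t)])

code = encode(0.3, 0.5, 0.7, 0.1)
print(code.shape)
```

In the actual method these grids and tables would be trainable parameters fitted through the rendering loss; here they are random so the sketch is runnable.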
In step S2, the position encodings of the ray sample points are fed into a small multi-layer perceptron, and the signed distance values and geometric features of the corresponding ray sample points are obtained by fitting the rendering loss. The process can be expressed as:
$$(s, \mathbf{z}) = F_{geo}\big(e(\mathbf{x})\big),$$
where $F_{geo}$ is the geometry network, $s$ is the signed distance function value, and $\mathbf{z}$ is the geometric feature.
In step S3, the spatio-temporal position encodings of the sample points are concatenated with the geometric features and fed into a small multi-layer perceptron; the color values of the corresponding ray sample points are obtained by fitting the rendering loss. The process can be expressed as:
$$\mathbf{c} = F_{col}\big(e(\mathbf{x}) \oplus \mathbf{z}\big),$$
where $F_{col}$ is the color network and $\mathbf{c}$ is the sample-point color.
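Steps S2 and S3 amount to two small MLPs chained together. The sketch below uses random, untrained weights and illustrative layer sizes purely to show the data flow (encoding → SDF value + geometric feature → color); in the actual method the weights would be fitted through the rendering loss:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(dims):
    """Random-initialized MLP weights; a stand-in for the trained networks."""
    return [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """Plain feed-forward pass with ReLU on hidden layers."""
    for k, (W, b) in enumerate(params):
        x = x @ W + b
        if k < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

ENC = 8    # encoding width (illustrative)
GEO = 16   # geometric-feature width (illustrative)

geo_net = mlp([ENC, 32, 1 + GEO])    # encoding -> SDF value + geometric feature
color_net = mlp([ENC + GEO, 32, 3])  # encoding (+) feature -> RGB

e = rng.standard_normal(ENC)         # spatio-temporal code of one sample point
out = forward(geo_net, e)
sdf, feat = out[0], out[1:]
# Sigmoid squashes the color head into [0, 1].
rgb = 1.0 / (1.0 + np.exp(-forward(color_net, np.concatenate([e, feat]))))
print(float(sdf), feat.shape, rgb)
```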
In step S4, the density, normal, color, and material of the sample points along each ray are integrated via volume rendering to obtain the depth map, normal map, color map, and material map of the dynamic human body. Taking the color map as an example, the process can be expressed as:
$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma\big(\mathbf{r}(t)\big)\,\mathbf{c}\big(\mathbf{r}(t)\big)\,dt, \qquad T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma\big(\mathbf{r}(u)\big)\,du\right),$$
where the ray $\mathbf{r}(t) = \mathbf{o} + t\,\mathbf{d}$ starts at the camera center $\mathbf{o}$ and travels along the viewing direction $\mathbf{d}$, $T(t)$ is the transmittance, $\sigma$ the volume density, $\mathbf{c}$ the sample-point color, and $C(\mathbf{r})$ the volume-rendered pixel color.
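A discrete version of this volume-rendering integral (alpha compositing with accumulated transmittance) can be sketched as below. The densities, colors, and depths are toy values standing in for network outputs; the same compositing weights integrate color, depth, normal, or material alike:

```python
import numpy as np

def volume_render(sigmas, values, deltas):
    """Composite per-sample values (color/depth/normal/material) along one ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance T_i
    weights = trans * alphas
    return weights @ values, weights

# One ray with 4 samples: densities, sample colors, sample depths, segment lengths.
sigmas = np.array([0.0, 0.5, 5.0, 5.0])
colors = np.array([[0.0, 0.0, 0.0], [0.2, 0.2, 0.2],
                   [0.8, 0.1, 0.1], [0.9, 0.1, 0.1]])
depths = np.array([1.0, 1.5, 2.0, 2.5])
deltas = np.full(4, 0.5)

pixel_rgb, w = volume_render(sigmas, colors, deltas)
pixel_depth, _ = volume_render(sigmas, depths, deltas)
print(pixel_rgb.round(3), round(float(pixel_depth), 3))
```

The rendered depth is what step S6 later uses to locate the surface points.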
In step S5, the method estimates direct and indirect illumination simultaneously. Direct illumination is modeled with spherical Gaussian functions, whose compact parameterization is easy to optimize and converge; indirect light relies on the properties of the neural radiance field, with visibility and indirect illumination obtained via ray tracing.
Direct illumination $L_{dir}$ can be expressed as a mixture of spherical Gaussians:
$$L_{dir}(\boldsymbol{\omega}_i) = \sum_{k=1}^{K} G(\boldsymbol{\omega}_i;\ \boldsymbol{\xi}_k, \lambda_k, \boldsymbol{\mu}_k),$$
where $G$ denotes a spherical Gaussian lobe, $(\boldsymbol{\xi}_k, \lambda_k, \boldsymbol{\mu}_k)$ are the optimizable parameters of lobe $k$, $K$ is the total number of lobes, and $\boldsymbol{\omega}_i$ is the incident light direction.
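A minimal evaluation of such a spherical-Gaussian environment light might look as follows. The lobe count, axes, sharpnesses, and amplitudes are illustrative, and the common lobe form G(ω; ξ, λ, μ) = μ·exp(λ(ω·ξ − 1)) is assumed, since the patent does not spell it out:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def sg_eval(w, lobes):
    """Mixture-of-spherical-Gaussians environment light evaluated at direction w.
    Each lobe is (axis xi, sharpness lam, RGB amplitude mu)."""
    L = np.zeros(3)
    for xi, lam, mu in lobes:
        L += mu * np.exp(lam * (np.dot(w, xi) - 1.0))
    return L

# Two illustrative lobes: a warm key light from above, a dim blue fill from behind.
lobes = [
    (normalize(np.array([0.0, 1.0, 0.2])), 8.0, np.array([2.0, 1.8, 1.5])),
    (normalize(np.array([0.0, -0.3, -1.0])), 2.0, np.array([0.1, 0.15, 0.3])),
]

up = np.array([0.0, 1.0, 0.0])
down = np.array([0.0, -1.0, 0.0])
print(sg_eval(up, lobes), sg_eval(down, lobes))
```

Because each lobe is smooth and has only a handful of parameters, the whole environment light stays compact and easy to optimize, which is the convergence advantage the text describes; relighting in step S8 amounts to swapping this mixture for one fitted to a new environment map.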
Indirect light relies on the properties of the neural radiance field; visibility $V$ and indirect illumination $L_{ind}$ are obtained by ray tracing. Secondary rays $\mathbf{r}(t) = \mathbf{x} + t\,\boldsymbol{\omega}_i$ are cast from the surface point $\mathbf{x}$ in the incident direction $\boldsymbol{\omega}_i$, and the radiance field is composited along them:
$$L_{ind}(\mathbf{x}, \boldsymbol{\omega}_i) = \sum_{j=1}^{N} T_j\,\alpha_j\,\mathbf{c}_j, \qquad \alpha_j = 1 - \exp(-\sigma_j\,\delta_j),$$
where $\mathbf{c}_j$ is the color of the $j$-th sample point, $T_j$ its accumulated transmittance, and $\delta_j$ the interval of the $j$-th sample; the visibility $V$ is the transmittance remaining at the end of the ray. In practice, $N = 512$ points are taken per secondary ray by discrete sampling.
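The secondary-ray march for visibility and indirect light can be sketched as below. The density and radiance fields here are toy stand-ins for the learned networks (an opaque, red-emitting ball), and the 512-sample discrete march follows the sampling count mentioned above:

```python
import numpy as np

def density(p):
    """Toy density field: an opaque ball at the origin (stand-in for the learned field)."""
    return 50.0 if np.linalg.norm(p) < 0.5 else 0.0

def radiance(p, d):
    """Toy radiance (stand-in for the radiance-field color network)."""
    return np.array([0.8, 0.2, 0.2]) if np.linalg.norm(p) < 0.5 else np.zeros(3)

def trace_secondary(x, w, n_samples=512, t_max=4.0):
    """March a secondary ray r(t) = x + t*w; return (visibility, indirect RGB)."""
    ts = np.linspace(1e-2, t_max, n_samples)
    dt = ts[1] - ts[0]
    T, indirect = 1.0, np.zeros(3)
    for t in ts:
        p = x + t * w
        a = 1.0 - np.exp(-density(p) * dt)   # per-step opacity
        indirect += T * a * radiance(p, -w)  # composite radiance-field color
        T *= 1.0 - a                         # update transmittance
    return T, indirect  # T: fraction of direct light surviving occlusion

surface = np.array([1.0, 0.0, 0.0])
vis_blocked, ind = trace_secondary(surface, np.array([-1.0, 0.0, 0.0]))  # toward ball
vis_clear, _ = trace_secondary(surface, np.array([1.0, 0.0, 0.0]))       # away from it
print(round(vis_blocked, 3), round(vis_clear, 3))
```

The occluded direction returns near-zero visibility but a nonzero indirect color picked up from the blocker, which is exactly the secondary-bounce shading effect the method models.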
In step S6, the spatial positions of the surface points are obtained by sampling along the rays using the depth information; for each surface point, a microfacet model takes geometry, material, visibility, and illumination as input and produces the final rendered image through physically based rendering. The physically based rendering formula is as follows:
$$L_o(\mathbf{x}, \boldsymbol{\omega}_o) = \int_{\Omega} f_r(\mathbf{x}, \boldsymbol{\omega}_i, \boldsymbol{\omega}_o)\,L_i(\mathbf{x}, \boldsymbol{\omega}_i)\,(\boldsymbol{\omega}_i \cdot \mathbf{n})\,d\boldsymbol{\omega}_i,$$
where $\mathbf{n}$ is the surface normal, $L_i(\mathbf{x}, \boldsymbol{\omega}_i)$ is the incident radiance received from direction $\boldsymbol{\omega}_i$, $\boldsymbol{\omega}_o$ is the outgoing direction, and $f_r$ is the surface material (the BRDF).
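One common concrete choice for the microfacet material $f_r$ is a Cook–Torrance BRDF (GGX distribution, Schlick Fresnel, Smith geometry term) plus a Lambertian diffuse lobe; the patent does not specify which microfacet model it uses, so the sketch below is an assumption. Shading sums over discrete light directions, weighting each by its ray-traced visibility:

```python
import numpy as np

def ggx_brdf(n, wi, wo, albedo, rough, f0=0.04):
    """Cook–Torrance microfacet BRDF (GGX D, Schlick F, Smith G) + Lambert diffuse."""
    h = (wi + wo) / np.linalg.norm(wi + wo)              # half vector
    ndi = max(np.dot(n, wi), 1e-4)
    ndo = max(np.dot(n, wo), 1e-4)
    ndh = max(np.dot(n, h), 1e-4)
    hdo = max(np.dot(h, wo), 1e-4)
    a2 = rough ** 4                                      # alpha^2 with alpha = rough^2
    D = a2 / (np.pi * (ndh * ndh * (a2 - 1.0) + 1.0) ** 2)
    F = f0 + (1.0 - f0) * (1.0 - hdo) ** 5
    k = (rough + 1.0) ** 2 / 8.0
    G = (ndi / (ndi * (1 - k) + k)) * (ndo / (ndo * (1 - k) + k))
    spec = D * F * G / (4.0 * ndi * ndo)
    return albedo / np.pi + spec

def shade(n, wo, lights, albedo, rough):
    """L_o = sum_i f_r(wi, wo) * L_i * visibility_i * (n . wi)."""
    Lo = np.zeros(3)
    for wi, Li, vis in lights:
        Lo += ggx_brdf(n, wi, wo, albedo, rough) * Li * vis * max(np.dot(n, wi), 0.0)
    return Lo

n = np.array([0.0, 0.0, 1.0])
wo = np.array([0.0, 0.0, 1.0])
wi = np.array([0.0, 0.6, 0.8]) / np.linalg.norm([0.0, 0.6, 0.8])
white = np.array([1.0, 1.0, 1.0])
albedo = np.array([0.7, 0.3, 0.2])

lit = shade(n, wo, [(wi, white, 1.0)], albedo, 0.5)      # fully visible light
shadowed = shade(n, wo, [(wi, white, 0.0)], albedo, 0.5)  # light fully occluded
print(lit.round(3), shadowed.round(3))
```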
In step S7, with the target video as supervision, the images obtained by volume rendering and by physically based rendering in the above steps are constrained simultaneously. The primary constraint is the rendering loss supervised by the target video, complemented by material smoothness losses and geometric constraints; the model parameters are learned by minimizing these constraints.
The primary constraint loss $\mathcal{L}$ is defined as:
$$\mathcal{L} = \big\lVert C_{vol} - C_{gt} \big\rVert_2^2 + \big\lVert C_{pbr} - C_{gt} \big\rVert_2^2,$$
where $C_{vol}$ is the color obtained by volume rendering, $C_{pbr}$ the color obtained by physically based rendering, and $C_{gt}$ the ground-truth color used for supervision.
In step S8, after modeling is complete, relighting only requires replacing the direct illumination with a new environment map to obtain the dynamic human relighting video.
While the embodiments of the invention have been described above in conjunction with the drawings, this is not intended to limit the scope of the invention; all modifications or variations within the scope defined by the claims of the invention are intended to be covered.
Claims (9)
1. A human body relighting method based on a dynamic surface reflection field, characterized by comprising the following steps:
S1. Decompose the 4D space using multi-plane and hash representations, and encode the input multi-view dynamic human video with the spatio-temporal multi-plane representation to obtain a compact spatio-temporal position encoding;
S2. Input the spatio-temporal position encoding into a geometry network to obtain the signed distance function value and geometric features of each ray sample point;
S3. Input the geometric features and spatio-temporal position encodings of the ray sample points into a color network to obtain their color values;
S4. Apply volume rendering to the ray sample points to obtain the depth, normal, color, and material of each pixel, yielding a depth map, normal map, color map, and material map of the dynamic human body;
S5. Model direct illumination with spherical Gaussian functions, and model light visibility and indirect illumination with ray tracing;
S6. Determine the surface point positions from the depth map obtained in S4, and obtain a rendered image for each surface point via physically based rendering;
S7. With the target video as supervision, simultaneously constrain the image obtained by volume rendering in S4 and the image obtained by physically based rendering in S6, and learn the model parameters by minimizing the constraints;
S8. At relighting time, replace the direct illumination with a new environment map to obtain a dynamic human relighting video.
2. The human body relighting method based on a dynamic surface reflection field according to claim 1, characterized in that S1 is specifically: decomposing the 4D space into a compact multi-plane feature encoder and a time-aware hash encoder; during modeling, casting rays from the camera center through the imaging plane, sampling each ray in the 4D space, and performing spatio-temporal position encoding for each ray sample point using the multi-plane feature encoder and the hash encoder.
3. The human body relighting method based on a dynamic surface reflection field according to claim 1, characterized in that S2 is specifically: feeding the spatio-temporal position encodings of the ray sample points into a multi-layer perceptron, and obtaining the signed distance function values and geometric features of the corresponding ray sample points by fitting the rendering loss.
4. The human body relighting method based on a dynamic surface reflection field according to claim 1, characterized in that S3 is specifically: concatenating the spatio-temporal position encodings of the ray sample points with the geometric features, feeding the result into a multi-layer perceptron, and obtaining the color values of the corresponding ray sample points by fitting the rendering loss.
5. The human body relighting method based on a dynamic surface reflection field according to claim 1, characterized in that S4 is specifically: integrating the density, normal, color, and material of the sample points along each ray via volume rendering to obtain a depth map, normal map, color map, and material map of the dynamic human body.
6. The human body relighting method based on a dynamic surface reflection field according to claim 1, characterized in that S5 is specifically: modeling direct illumination with spherical Gaussian functions, whose compact parameterization is easy to optimize and converge; indirect light relies on the properties of the neural radiance field, with visibility and indirect illumination obtained via ray tracing.
7. The human body relighting method based on a dynamic surface reflection field according to claim 1, characterized in that S6 is specifically: obtaining the spatial positions of the surface points by sampling along the rays using the depth information, and for each surface point, using a microfacet model with geometry, material, visibility, and illumination as input to obtain the final rendered image through physically based rendering.
8. The human body relighting method based on a dynamic surface reflection field according to claim 1, characterized in that S7 is specifically: with the target video as supervision, simultaneously constraining the image obtained by volume rendering in S4 and the image obtained by physically based rendering in S6, the primary constraint being the rendering loss supervised by the target video, complemented by material smoothness losses and geometric constraints.
9. The human body relighting method based on a dynamic surface reflection field according to claim 1, characterized in that S8 is specifically: replacing the direct illumination in the lighting model with a new environment map, and synthesizing the dynamic human video under the new illumination via physically based rendering.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410353427.3A CN117953137B (en) | 2024-03-27 | 2024-03-27 | Human body re-illumination method based on dynamic surface reflection field |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117953137A true CN117953137A (en) | 2024-04-30 |
CN117953137B CN117953137B (en) | 2024-06-14 |
Family
ID=90796628
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410353427.3A Active CN117953137B (en) | 2024-03-27 | 2024-03-27 | Human body re-illumination method based on dynamic surface reflection field |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117953137B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118710821A (en) * | 2024-08-30 | 2024-09-27 | 中央广播电视总台 | Reconstruction method and device of dynamic scene, computer equipment and storage medium |
Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112183637A (en) * | 2020-09-29 | 2021-01-05 | 中科方寸知微(南京)科技有限公司 | Single-light-source scene illumination re-rendering method and system based on neural network |
CN112927341A (en) * | 2021-04-02 | 2021-06-08 | 腾讯科技(深圳)有限公司 | Illumination rendering method and device, computer equipment and storage medium |
CN113240622A (en) * | 2021-03-12 | 2021-08-10 | 清华大学 | Human body scene image intrinsic decomposition and relighting method and device |
WO2021223134A1 (en) * | 2020-05-07 | 2021-11-11 | 浙江大学 | Micro-renderer-based method for acquiring reflection material of human face from single image |
CN114092625A (en) * | 2021-11-19 | 2022-02-25 | 山东大学 | Real-time multi-scale high-frequency material rendering method and system based on normal map |
CN114429538A (en) * | 2022-04-02 | 2022-05-03 | 中科计算技术创新研究院 | Method for interactively editing nerve radiation field geometry |
CN114972617A (en) * | 2022-06-22 | 2022-08-30 | 北京大学 | Scene illumination and reflection modeling method based on conductive rendering |
CN115131492A (en) * | 2022-04-12 | 2022-09-30 | 腾讯科技(深圳)有限公司 | Target object relighting method and device, storage medium and background replacement method |
US20220335636A1 (en) * | 2021-04-15 | 2022-10-20 | Adobe Inc. | Scene reconstruction using geometry and reflectance volume representation of scene |
WO2022231582A1 (en) * | 2021-04-28 | 2022-11-03 | Google Llc | Photo relighting and background replacement based on machine learning models |
CN115719399A (en) * | 2022-09-30 | 2023-02-28 | 中国人民解放军国防科技大学 | Object illumination editing method, system and medium based on single picture |
CN116051696A (en) * | 2023-01-10 | 2023-05-02 | 之江实验室 | Reconstruction method and device of human body implicit model capable of being re-illuminated |
CN116310018A (en) * | 2022-12-07 | 2023-06-23 | 西北大学 | Model hybrid rendering method based on virtual illumination environment and light query |
CN116485994A (en) * | 2023-03-08 | 2023-07-25 | 浙江大学 | Scene reverse drawing method and device based on neural implicit expression |
CN116934948A (en) * | 2023-06-15 | 2023-10-24 | 清华大学 | Relighting three-dimensional digital person construction method and device based on multi-view video |
CN116958396A (en) * | 2023-07-18 | 2023-10-27 | 咪咕文化科技有限公司 | Image relighting method and device and readable storage medium |
CN116977536A (en) * | 2023-08-14 | 2023-10-31 | 北京航空航天大学 | Novel visual angle synthesis method for borderless scene based on mixed nerve radiation field |
CN117237527A (en) * | 2023-08-25 | 2023-12-15 | 上海人工智能创新中心 | Multi-view three-dimensional reconstruction method |
CN117671126A (en) * | 2023-12-12 | 2024-03-08 | 四川大学 | Space change indoor scene illumination estimation method based on nerve radiation field |
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021223134A1 (en) * | 2020-05-07 | 2021-11-11 | 浙江大学 | Micro-renderer-based method for acquiring reflection material of human face from single image |
CN112183637A (en) * | 2020-09-29 | 2021-01-05 | 中科方寸知微(南京)科技有限公司 | Single-light-source scene illumination re-rendering method and system based on neural network |
CN113240622A (en) * | 2021-03-12 | 2021-08-10 | 清华大学 | Human body scene image intrinsic decomposition and relighting method and device |
CN112927341A (en) * | 2021-04-02 | 2021-06-08 | 腾讯科技(深圳)有限公司 | Illumination rendering method and device, computer equipment and storage medium |
US20220335636A1 (en) * | 2021-04-15 | 2022-10-20 | Adobe Inc. | Scene reconstruction using geometry and reflectance volume representation of scene |
WO2022231582A1 (en) * | 2021-04-28 | 2022-11-03 | Google Llc | Photo relighting and background replacement based on machine learning models |
CN114092625A (en) * | 2021-11-19 | 2022-02-25 | 山东大学 | Real-time multi-scale high-frequency material rendering method and system based on normal map |
CN114429538A (en) * | 2022-04-02 | 2022-05-03 | 中科计算技术创新研究院 | Method for interactively editing nerve radiation field geometry |
CN115131492A (en) * | 2022-04-12 | 2022-09-30 | 腾讯科技(深圳)有限公司 | Target object relighting method and device, storage medium and background replacement method |
CN114972617A (en) * | 2022-06-22 | 2022-08-30 | 北京大学 | Scene illumination and reflection modeling method based on conductive rendering |
CN115719399A (en) * | 2022-09-30 | 2023-02-28 | 中国人民解放军国防科技大学 | Object illumination editing method, system and medium based on single picture |
CN116310018A (en) * | 2022-12-07 | 2023-06-23 | 西北大学 | Model hybrid rendering method based on virtual illumination environment and light query |
CN116051696A (en) * | 2023-01-10 | 2023-05-02 | 之江实验室 | Reconstruction method and device of human body implicit model capable of being re-illuminated |
CN116485994A (en) * | 2023-03-08 | 2023-07-25 | 浙江大学 | Scene reverse drawing method and device based on neural implicit expression |
CN116934948A (en) * | 2023-06-15 | 2023-10-24 | 清华大学 | Relighting three-dimensional digital person construction method and device based on multi-view video |
CN116958396A (en) * | 2023-07-18 | 2023-10-27 | 咪咕文化科技有限公司 | Image relighting method and device and readable storage medium |
CN116977536A (en) * | 2023-08-14 | 2023-10-31 | 北京航空航天大学 | Novel visual angle synthesis method for borderless scene based on mixed nerve radiation field |
CN117237527A (en) * | 2023-08-25 | 2023-12-15 | 上海人工智能创新中心 | Multi-view three-dimensional reconstruction method |
CN117671126A (en) * | 2023-12-12 | 2024-03-08 | 四川大学 | Spatially-varying indoor scene illumination estimation method based on neural radiance fields |
Non-Patent Citations (6)
Title |
---|
KAI ZHANG: "PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting", 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2 November 2021 (2021-11-02) * |
ZHONG LI: "Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field", ACM MULTIMEDIA, 23 October 2023 (2023-10-23) * |
严忻恺: "神经渲染及其硬件加速综述", 计算机研究与发展, 9 January 2024 (2024-01-09) * |
吴洪宇;金鑫;: "基于图像的虚拟光影技术研究热点", 科技导报, no. 06, 28 March 2020 (2020-03-28) * |
宋仪: "基于深度学习的图像重光照研究", CNKI电子硕士电子期刊, 8 September 2021 (2021-09-08) * |
马晨星: "基于分解优化的人脸图像重光照技术研究", CNKI电子硕士电子期刊, 14 March 2024 (2024-03-14) * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118710821A (en) * | 2024-08-30 | 2024-09-27 | 中央广播电视总台 | Reconstruction method and device of dynamic scene, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN117953137B (en) | 2024-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111783525B (en) | Aerial photographic image target sample generation method based on style migration | |
CN117953137B (en) | Human body re-illumination method based on dynamic surface reflection field | |
US20220335636A1 (en) | Scene reconstruction using geometry and reflectance volume representation of scene | |
Kang et al. | Learning efficient illumination multiplexing for joint capture of reflectance and shape | |
CN115115688B (en) | Image processing method and electronic equipment | |
CN114972617B (en) | Scene illumination and reflection modeling method based on differentiable rendering | |
Li et al. | [Retracted] Multivisual Animation Character 3D Model Design Method Based on VR Technology | |
US11663775B2 (en) | Generating physically-based material maps | |
CN110533707A (en) | Illuminant estimation | |
CN113572962A (en) | Outdoor natural scene illumination estimation method and device | |
CN112634456B (en) | Real-time high-realism drawing method of complex three-dimensional model based on deep learning | |
CN116071278A (en) | Unmanned aerial vehicle aerial image synthesis method, system, computer equipment and storage medium | |
CN117557714A (en) | Three-dimensional reconstruction method, electronic device and readable storage medium | |
KR102291162B1 (en) | Apparatus and method for generating virtual data for artificial intelligence learning | |
CN116416375A (en) | Three-dimensional reconstruction method and system based on deep learning | |
US20240119671A1 (en) | Systems and methods for face asset creation and models from one or more images | |
CN111862278A (en) | Animation obtaining method and device, electronic equipment and storage medium | |
CN116134491A (en) | Multi-view neuro-human prediction using implicit differentiable renderers for facial expression, body posture morphology, and clothing performance capture | |
CN117649478B (en) | Model training method, image processing method and electronic equipment | |
CN118397160A (en) | Autonomous three-dimensional rendering engine for reverse site building system of oil field site | |
Mittal | Neural radiance fields: Past, present, and future | |
Pei et al. | Research on 3D reconstruction technology of large‐scale substation equipment based on NeRF | |
WO2023200936A1 (en) | Scaling neural representations for multi-view reconstruction of scenes | |
CN115953524A (en) | Data processing method and device, computer equipment and storage medium | |
Fang et al. | Methods and strategies for improving the novel view synthesis quality of neural radiance fields |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||