CN114549726A - High-quality material map obtaining method based on deep learning - Google Patents

High-quality material map obtaining method based on deep learning

Info

Publication number
CN114549726A
Authority
CN
China
Prior art keywords
network model
network
obtaining
map
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210060499.XA
Other languages
Chinese (zh)
Inventor
林子森
冼楚华
黎嘉欣
吴昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Shidi Intelligent Technology Co Ltd
Original Assignee
Guangdong Shidi Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Shidi Intelligent Technology Co Ltd filed Critical Guangdong Shidi Intelligent Technology Co Ltd
Priority to CN202210060499.XA priority Critical patent/CN114549726A/en
Publication of CN114549726A publication Critical patent/CN114549726A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Abstract

The invention discloses a method for obtaining high-quality material maps based on deep learning, comprising the following steps: building a material acquisition platform, recording its parameters, building a virtual scene, and obtaining n material images, where the illumination control instrument of the material acquisition platform comprises a camera, a material table, and n light sources uniformly distributed over a hemispherical shell; constructing a virtual scene from the recorded parameters and rendering n material images to obtain training data; training a network model with the training data to obtain a trained network model, where a global skip connection arranged between the encoder and the decoder compresses the encoder's information through global average pooling and a fully connected computation and transmits it to every region of the decoder by broadcasting; and inputting pictures shot on the material acquisition platform into the trained network model to obtain the material maps. The invention can generate high-quality SVBRDF maps for artists, designers, and industrial applications.

Description

High-quality material map obtaining method based on deep learning
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a high-quality material map obtaining method based on deep learning.
Background
The Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF), modeled as a function over a 6-dimensional domain (light and view directions (4D) and spatial position (2D)), describes the distribution of incident light over the different exit directions after reflection by a particular surface. Under the assumption of the Cook-Torrance BRDF model with the GGX normal distribution function (widely used in physically based rendering), an SVBRDF can be parameterized by four parameter maps: diffuse reflectance, specular reflectance, normal, and glossiness.
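For concreteness, the following sketch evaluates this four-map parameterization for a pixel grid. It is a minimal illustration under stated assumptions (a (1 - gloss)^2 glossiness-to-roughness mapping; Fresnel and geometry/shadowing terms omitted), not the reference renderer of any particular method:

    import numpy as np

    def ggx_ndf(n_dot_h, alpha):
        # GGX (Trowbridge-Reitz) normal distribution term.
        a2 = alpha ** 2
        denom = n_dot_h ** 2 * (a2 - 1.0) + 1.0
        return a2 / (np.pi * denom ** 2 + 1e-9)

    def shade(diffuse, specular, normal, gloss, light_dir, view_dir):
        # diffuse, specular: (H, W, 3) maps; normal: (H, W, 3) unit normals;
        # gloss: (H, W, 1); light_dir, view_dir: unit 3-vectors.
        h = light_dir + view_dir
        h = h / np.linalg.norm(h)
        n_dot_l = np.clip((normal * light_dir).sum(-1, keepdims=True), 0.0, 1.0)
        n_dot_v = np.clip((normal * view_dir).sum(-1, keepdims=True), 1e-4, 1.0)
        n_dot_h = np.clip((normal * h).sum(-1, keepdims=True), 0.0, 1.0)
        alpha = (1.0 - gloss) ** 2  # assumed glossiness-to-roughness mapping
        spec = specular * ggx_ndf(n_dot_h, alpha) / (4.0 * n_dot_v)
        return (diffuse / np.pi + spec) * n_dot_l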
Conventional acquisition of the SVBRDF parameters described above tends to sample the 6D domain densely to obtain reasonable results, but such approaches are inefficient in practice and often limited by expensive hardware. Recent studies aiming to recover the reflectance properties of materials from one or more pictures taken with a cell-phone camera have demonstrated that deep learning can be applied to obtain SVBRDF parameters in a convenient manner. However, estimation based on the prior knowledge learned by the network may yield different results for photographs of the same material taken under different illumination. This situation arises very easily, because illumination, a key factor in the acquisition task, is always changing: indoor or outdoor, sunny or cloudy, noon or night, and so on. Yet few proposals avoid these situations by controlling the lighting. As a result, under stray-light interference, the results of these studies only meet the entertainment needs of ordinary users and cannot meet professional designers' strict requirements on the accuracy of the reconstructed materials. On the network-model side, the skip connections in U-Net can only pass local information from the encoder to the decoder and lack any transmission of global information.
Disclosure of Invention
To solve the above technical problems, the invention provides a method for obtaining high-quality material maps based on deep learning. For illumination control, a dome-shaped instrument is designed that effectively shields external interfering light; meanwhile, the power of the internal light sources is kept consistent with the network training data, so that the illumination of the real object matches that of the virtual object. For the model structure, a global skip connection is designed: the encoder's information is compressed through mean and fully connected computations and transmitted to every region of the decoder by broadcasting, compensating for the fact that U-Net skip connections ignore global information. The method can thereby generate high-quality SVBRDF maps for artists, designers, and industrial applications.
The purpose of the invention can be achieved by adopting the following technical scheme:
a method for obtaining a high-quality texture map based on deep learning, the method comprising:
building a material acquisition platform, recording parameters, building a virtual scene, and obtaining n material images, wherein the illumination control instrument of the material acquisition platform comprises a camera, a material table, and n light sources uniformly distributed over a hemispherical shell;
constructing a virtual scene from the recorded parameters and rendering n material images to obtain training data;
training a network model by using the training data to obtain a trained network model; a global jumper structure is arranged between an encoder and a decoder in the network model, the information of the encoder is compressed through global average pooling and full-connection calculation, and is transmitted to each area of the decoder in a broadcasting mode;
and inputting the pictures shot on the material acquisition platform into the trained network model to obtain the material maps of the material.
Further, building the material acquisition platform, recording the parameters, building the virtual scene, and obtaining the n material images specifically comprises:
when the platform is built, recording the parameters used to build the virtual scene, including the camera parameters and camera position, the size and position of the material table, and the size, power, and position of each light source;
the n light sources are distributed at three different levels of the hemispherical shell;
when the system starts to work, the n light sources are lit in sequence, and while each light source is lit, the camera photographs the material on the material table;
at the end of the capture process, n material images are obtained, wherein each material image is illuminated by only one light source.
Further, the n light sources are distributed at three different levels of the hemispherical shell, specifically:
in a polar coordinate system with the center of the hemisphere as the origin, n/3 equally spaced light sources are mounted on each level, and the included angle between adjacent levels is 22.5 degrees.
Further, before the camera shoots the material on the material table, the camera is calibrated using an X-Rite ColorChecker Passport to ensure high color accuracy during capture, and the light intensity is adjusted between the hardware and the virtual rendering environment using an 18% gray card.
Further, in the global skip connection, the encoder features are first compressed to a vector of unit spatial size, and the global skip connection then broadcasts it to every field of the entire decoder.
Further, the training data include the maps d_t, s_t, g_t, n_t and n virtual photographs R_1, ..., R_n;
The network model comprises a diffuse reflection network, a specular reflection network, a normal network and a glossiness network;
the training of the network model by using the training data to obtain the trained network model specifically comprises:
the virtual photo R1,...,RnInputting a network model, and obtaining a network predicted map d by using four networks in the network modelp、sp、gpAnd np
calculating a loss function using the network-predicted maps d_p, s_p, g_p, n_p and the maps d_t, s_t, g_t, n_t in said training data;
when the network model is trained, using an L1 loss function and a rendering loss function, while adding a cosine loss function and an SSIM loss function to the normal network and the diffuse reflection network, respectively;
and optimizing the network parameters with the objective of reducing the loss function, thereby obtaining the trained network model.
Further, the cosine loss function is as follows:
L_cos = 1 - (n_t · n_p) / (||n_t|| ||n_p||)
where n_t and n_p denote the normal map used for virtual rendering and the normal map predicted by the network, respectively.
Further, the constructing a virtual scene and n material images according to the parameters to obtain training data specifically includes:
obtaining the maps d_t, s_t, g_t and n_t of a known material;
constructing a virtual scene according to the parameters and rendering the n material images to obtain virtual photographs R_1, ..., R_n; the maps d_t, s_t, g_t, n_t and the virtual photographs R_1, ..., R_n constitute the training data;
to expand the range of material types, the maps are shuffled and recombined before use.
Furthermore, the hemispherical shell is made of a material with good light-shielding properties, and the material table is made of a material with good diffuse reflection properties.
Further, the light source is an LED lamp, and n is 24.
Compared with the prior art, the invention has the following beneficial effects:
1. For illumination control, the invention designs a dome-shaped instrument that effectively shields external interfering light; meanwhile, the power of the internal light sources is consistent with the network training data, so that the illumination of the real object matches that of the virtual object. Compared with traditional reflectance acquisition equipment, the material acquisition platform constructed by the invention is simpler to build and lower in cost; compared with other methods, the illumination of pictures shot with this equipment remains stable, which ensures a good final reconstruction result.
2. The global skip connection designed by the invention can transmit global information between the encoder and the decoder, whereas the skip connections in a traditional U-Net can only transmit local information; the global skip connection compresses the encoder's information through mean and fully connected computations and transmits it to every region of the decoder by broadcasting, compensating for the fact that U-Net skip connections ignore global information.
3. Compared with networks that generate all maps jointly, the method trains a separate network for each map to achieve a decoupling effect. Compared with those methods, the final generated results also achieve a better reconstruction effect.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a high-quality material map obtaining method based on deep learning according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a lighting control apparatus for use in photographing according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a network model according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a network model training process according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of map acquisition according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments are described below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person of ordinary skill in the art without creative effort based on these embodiments fall within the protection scope of the present invention.
Embodiment:
As shown in Fig. 1, this embodiment provides a method for obtaining high-quality material maps based on deep learning, taking the map acquisition of a piece of cloth as an example (the method applies to near-planar objects and is not limited to cloth), comprising the following steps:
s101, building a material obtaining platform, recording parameters, building a virtual scene, and obtaining n material images.
The value of n in this embodiment is 24.
As shown in Fig. 2, the illumination control apparatus used for shooting consists of a camera, a material table, and n LED lamps uniformly distributed over a hemispherical shell. During acquisition, the instrument must be shaded to prevent interference from external light. When the platform is built, the camera parameters and camera position, the size and position of the material table, and the size, power, and position of the LED lamps are recorded. These data are used to render virtual data in rendering software, using planar lights to approximate the volumetric LED light sources.
The n LEDs are distributed over three different levels of the hemispherical shell. In a polar coordinate system with the center of the hemisphere as the origin, n/3 (8 in this embodiment) equally spaced LEDs are mounted on each level, and the included angle between adjacent levels is 22.5 degrees. When the system starts to work, the n LEDs light up in sequence; while each LED is lit, the camera photographs the material on the material table. At the end of the capture process, n images of the material are obtained, each illuminated by only one LED.
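The layout is straightforward to reproduce in the virtual scene. In the sketch below, the elevation of the lowest ring is an assumption, since only the 22.5-degree spacing between levels is fixed above:

    import numpy as np

    def led_positions(n=24, layers=3, radius=1.0,
                      layer_step_deg=22.5, first_elev_deg=22.5):
        # n/layers LEDs equally spaced in azimuth on each level; levels are
        # separated by layer_step_deg in elevation. first_elev_deg is an
        # assumption, as the text fixes only the spacing between levels.
        per_layer = n // layers
        pos = []
        for i in range(layers):
            elev = np.radians(first_elev_deg + i * layer_step_deg)
            for j in range(per_layer):
                azim = 2.0 * np.pi * j / per_layer
                pos.append((radius * np.cos(elev) * np.cos(azim),
                            radius * np.cos(elev) * np.sin(azim),
                            radius * np.sin(elev)))
        return np.array(pos)  # (n, 3) coordinates, hemisphere center at origin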
Before acquisition, the camera needs to be calibrated using an X-Rite ColorChecker Passport to ensure high color accuracy during capture. The light intensity is also adjusted between the hardware and the virtual rendering environment using an 18% gray card. Color and light-intensity calibration further narrows the illumination gap between the training and test data sets.
In implementation, the shading hemisphere only needs good light-shielding performance; the camera resolution determines the resolution of the final generated maps; the material table uses a material with good diffuse reflection properties; and the bottom light must be uniformly distributed over the material table to ensure correct computation of transparency. The LED light sources can be replaced with other light sources, and the light-source positions are not limited to those marked in the design scheme: other positions can be used, provided the virtual rendering scene is changed accordingly.
S102, building a virtual scene and n material images according to the parameters to obtain training data.
A professional is asked to produce maps of the existing cloth (or related maps are downloaded from the Internet), yielding the maps d_t, s_t, g_t and n_t. A virtual scene is built according to the parameters recorded in step S101, and the n material images are rendered to obtain the virtual photographs R_1, ..., R_n. These two parts of data constitute the virtual data used for training, i.e., the training data.
To expand the range of cloth types, different maps can be shuffled and recombined.
S103, training the network model by using the training data to obtain the trained network model.
As shown in Fig. 3, a global skip connection is arranged between the encoder and the decoder of the network model; the encoder's information is compressed through global average pooling and a fully connected computation, and is spread to every region of the decoder by broadcasting, compensating for the fact that U-Net skip connections ignore global information.
In the global skip connection, the encoder features are first compressed to a vector of unit spatial size, which the global skip connection then broadcasts throughout the decoder. In an ordinary skip connection, each field of the decoder can only obtain the information of the corresponding encoder field, so only local information is delivered, whereas the global skip connection delivers global information to every field of the decoder via broadcast.
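A minimal PyTorch sketch of such a global skip connection is given below; the channel sizes, the use of concatenation, and the absence of a nonlinearity after the fully connected layer are assumptions, as the description does not fix these details:

    import torch
    import torch.nn as nn

    class GlobalSkip(nn.Module):
        # Compress encoder features to a 1x1 vector by global average pooling
        # and a fully connected layer, then broadcast the vector over the
        # decoder's spatial grid and concatenate it with the decoder features.
        def __init__(self, enc_channels, out_channels):
            super().__init__()
            self.fc = nn.Linear(enc_channels, out_channels)

        def forward(self, enc_feat, dec_feat):
            b, _, h, w = dec_feat.shape
            g = enc_feat.mean(dim=(2, 3))  # global average pooling -> (B, C_enc)
            g = self.fc(g)                 # fully connected compression
            g = g.view(b, -1, 1, 1).expand(-1, -1, h, w)  # broadcast everywhere
            return torch.cat([dec_feat, g], dim=1)

By contrast, an ordinary U-Net skip concatenates enc_feat and dec_feat position by position, so each decoder location only ever sees the matching encoder location.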
The virtual photographs R_1, ..., R_n in the training data obtained in step S102 are input into the network model to obtain the network-predicted maps d_p, s_p, g_p and n_p; a loss function (loss) is then calculated from the predicted maps and the maps d_t, s_t, g_t and n_t in the training data. The network parameters are optimized with the objective of reducing the loss, so that the predicted maps approach the real maps. During training, the training parameters (such as the learning rate) are adjusted according to the training situation to improve the result.
Each predicted map is generated by a separate network. An L1 loss function and a rendering loss function are used during training, and a cosine loss and an SSIM loss are added to the normal network and the diffuse reflection network, respectively. The cosine loss function is as follows:
L_cos = 1 - (n_t · n_p) / (||n_t|| ||n_p||)
where n_t and n_p denote the normal map used for virtual rendering and the normal map predicted by the network, respectively.
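Written out for tensors, this loss might look as follows, assuming normal maps stored as (B, 3, H, W) tensors:

    import torch

    def cosine_loss(n_t, n_p, eps=1e-8):
        # One minus the cosine similarity between rendered and predicted
        # normals, averaged over all pixels.
        dot = (n_t * n_p).sum(dim=1)
        norm = n_t.norm(dim=1) * n_p.norm(dim=1)
        return (1.0 - dot / (norm + eps)).mean()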
As shown in Fig. 4, during the training phase, the paired training samples R_1, ..., R_n that supervise the network are generated from the known virtual SVBRDF parameters (denoted d_t, s_t, g_t and n_t) by rendering with the Cook-Torrance BRDF model under the same illumination settings as the acquisition equipment.
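Putting these pieces together, one optimization step over the four decoupled networks might be sketched as follows; the names, the optimizer setup, and the exact loss composition are illustrative assumptions rather than details fixed by this description:

    import torch

    def train_step(nets, optimizers, loss_fns, photos, targets):
        # nets: one model per map ("diffuse", "specular", "gloss", "normal"),
        # matching the decoupled design above; photos: the stacked renderings
        # R_1, ..., R_n; targets: the ground-truth maps d_t, s_t, g_t, n_t.
        losses = {}
        for key in ("diffuse", "specular", "gloss", "normal"):
            optimizers[key].zero_grad()
            pred = nets[key](photos)
            loss = loss_fns[key](pred, targets[key])  # e.g. L1 + rendering loss
            loss.backward()
            optimizers[key].step()
            losses[key] = loss.item()
        return losses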
S104, inputting the pictures shot on the material acquisition platform into the trained network model to obtain the material maps of the material.
As shown in Fig. 5, the equipment set up in step S101 is used to take pictures of the cloth, and the pictures (I_1, ..., I_n) are input into the trained network model to obtain the material maps (d_p, s_p, g_p and n_p) of the cloth.
Those skilled in the art will appreciate that all or part of the steps in the method for implementing the above embodiments may be implemented by a program to instruct associated hardware, and the corresponding program may be stored in a computer-readable storage medium.
It should be noted that although the method operations of the above embodiments are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the depicted steps may be executed in a different order; additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one, and/or one step may be broken down into several.
The above description covers only the preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any substitution or change made by a person skilled in the art within the technical solution and inventive concept of the present invention falls within the protection scope of the present invention.

Claims (10)

1. A method for obtaining high-quality material maps based on deep learning, characterized by comprising the following steps:
building a material acquisition platform, recording parameters, building a virtual scene, and obtaining n material images, wherein the illumination control instrument of the material acquisition platform comprises a camera, a material table, and n light sources uniformly distributed over a hemispherical shell;
constructing a virtual scene from the recorded parameters and rendering n material images to obtain training data;
training a network model with the training data to obtain a trained network model, wherein a global skip connection is arranged between the encoder and the decoder of the network model; the encoder's information is compressed through global average pooling and a fully connected computation and is transmitted to every region of the decoder by broadcasting;
and inputting the pictures shot on the material acquisition platform into the trained network model to obtain the material maps of the material.
2. The method for obtaining high-quality material maps according to claim 1, wherein building the material acquisition platform, recording parameters, building a virtual scene, and obtaining n material images specifically comprises:
when the platform is built, recording the parameters used to build the virtual scene, including the camera parameters and camera position, the size and position of the material table, and the size, power, and position of each light source;
the n light sources are distributed at three different levels of the hemispherical shell;
when the system starts to work, the n light sources are lit in sequence, and while each light source is lit, the camera photographs the material on the material table;
at the end of the capture process, n material images are obtained, wherein each material image is illuminated by only one light source.
3. The method according to claim 2, wherein said n light sources are distributed at three different levels of the hemispherical shell, specifically:
in a polar coordinate system with the center of the hemisphere as the origin, n/3 equally spaced light sources are mounted on each level, and the included angle between adjacent levels is 22.5 degrees.
4. The method for obtaining high-quality material maps according to claim 2, characterized in that, before the camera shoots the material on the material table, the camera is calibrated using an X-Rite ColorChecker Passport to ensure high color accuracy during capture, and the light intensity is adjusted between the hardware and the virtual rendering environment using an 18% gray card.
5. The method for obtaining high-quality material maps according to claim 1, characterized in that, in the global skip connection, the encoder features are first compressed to a vector of unit spatial size, and the global skip connection then broadcasts it to every field of the entire decoder.
6. The method according to claim 5, characterized in that the training data include the maps d_t, s_t, g_t, n_t and n virtual photographs R_1, ..., R_n;
The network model comprises a diffuse reflection network, a specular reflection network, a normal network and a glossiness network;
the training of the network model by using the training data to obtain the trained network model specifically comprises:
the virtual photo R1,...,RnInputting a network model, and obtaining a network predicted map d by using four networks in the network modelp、sp、gpAnd np
calculating a loss function using the network-predicted maps d_p, s_p, g_p, n_p and the maps d_t, s_t, g_t, n_t in said training data;
when the network model is trained, using an L1 loss function and a rendering loss function, while adding a cosine loss function and an SSIM loss function to the normal network and the diffuse reflection network, respectively;
and optimizing the network parameters with the objective of reducing the loss function, thereby obtaining the trained network model.
7. The method of claim 6, wherein the cosine loss function is as follows:
L_cos = 1 - (n_t · n_p) / (||n_t|| ||n_p||)
where n_t and n_p denote the normal map used for virtual rendering and the normal map predicted by the network, respectively.
8. The method for obtaining high-quality material maps according to claim 1, characterized in that constructing a virtual scene and n material images according to the parameters to obtain training data specifically comprises:
obtaining the maps d_t, s_t, g_t and n_t of a known material;
constructing a virtual scene according to the parameters and rendering the n material images to obtain virtual photographs R_1, ..., R_n; the maps d_t, s_t, g_t, n_t and the virtual photographs R_1, ..., R_n constitute the training data;
to expand the range of material types, the maps are shuffled and recombined before use.
9. The method for obtaining high-quality material maps according to any one of claims 1 to 8, characterized in that the hemispherical shell is made of a material with good light-shielding properties, and the material table is made of a material with good diffuse reflection properties.
10. The method for obtaining a high-quality material map according to any one of claims 1 to 8, wherein the light source is an LED lamp, and n is 24.
CN202210060499.XA 2022-01-19 2022-01-19 High-quality material map obtaining method based on deep learning Pending CN114549726A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210060499.XA CN114549726A (en) High-quality material map obtaining method based on deep learning


Publications (1)

Publication Number Publication Date
CN114549726A 2022-05-27

Family

ID=81672337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210060499.XA Pending CN114549726A (en) High-quality material map obtaining method based on deep learning

Country Status (1)

Country Link
CN (1) CN114549726A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114926593A (en) * 2022-06-13 2022-08-19 山东大学 SVBRDF material modeling method and system based on single highlight image
CN117934692A (en) * 2023-12-29 2024-04-26 山东舜网传媒股份有限公司 SC-FEGAN depth model-based 3D scene self-adaptive mapping method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648334A (en) * 2019-09-18 2020-01-03 中国人民解放军火箭军工程大学 Multi-feature cyclic convolution saliency target detection method based on attention mechanism
US20200294309A1 (en) * 2019-03-11 2020-09-17 Beijing University Of Technology 3D Reconstruction Method Based on Deep Learning
CN113496495A (en) * 2021-06-25 2021-10-12 华中科技大学 Medical image segmentation model building method capable of realizing missing input and segmentation method
CN113538604A (en) * 2020-04-21 2021-10-22 中移(成都)信息通信科技有限公司 Image generation method, apparatus, device and medium
CN113596278A (en) * 2021-08-03 2021-11-02 广东时谛智能科技有限公司 System, method, medium and equipment for digitalized rapid scanning of fabric



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination