WO2021051893A1 - A Monte Carlo rendering denoising model, method and device based on a generative adversarial network - Google Patents

A Monte Carlo rendering denoising model, method and device based on a generative adversarial network

Info

Publication number
WO2021051893A1
WO2021051893A1 · PCT/CN2020/094759 · CN2020094759W
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
monte carlo
denoising
rendering image
network
Prior art date
Application number
PCT/CN2020/094759
Other languages
English (en)
French (fr)
Inventor
唐睿
徐冰
张骏飞
Original Assignee
杭州群核信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州群核信息技术有限公司
Priority to US17/631,397 (published as US20220335574A1)
Publication of WO2021051893A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T 5/70 Denoising; Smoothing
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion of extracted features
    • G06V 10/82 Arrangements for image or video recognition or understanding using neural networks
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Definitions

  • The invention belongs to the field of image denoising, and in particular relates to a Monte Carlo rendering denoising model, method and device based on a generative adversarial network.
  • Rendering based on Monte Carlo integration (Monte Carlo simulation) consumes a great deal of time and computing resources, because the variance convergence of the rendered image requires a large number of samples.
  • In practice, a lower sampling rate is used to obtain a noisy rendering, and a denoising technique is then applied to it to obtain a noise-free rendering with better visual quality.
  • Current state-of-the-art denoising techniques for Monte Carlo renderings are mostly based on deep learning.
  • The most common approach denoises the Monte Carlo rendering with a convolutional neural network: the L1-norm or L2-norm loss between the Monte Carlo rendering and the target noise-free image serves as the regression objective, the network is trained, and the trained model can then denoise Monte Carlo renderings.
  • The purpose of the present invention is to provide a Monte Carlo rendering denoising model based on a generative adversarial network and a method for establishing it. The established model can denoise noisy Monte Carlo renderings; while achieving a good denoising effect on low-frequency details, it also markedly improves the preservation of high-frequency details, yielding visually more realistic renderings.
  • Another object of the present invention is to provide a denoising method and device for Monte Carlo renderings. Using the model constructed as described above, the method and device can denoise Monte Carlo renderings, with the same benefits for low-frequency and high-frequency details.
  • The first embodiment provides a method for constructing a Monte Carlo rendering denoising model based on a generative adversarial network, comprising the following steps.
  • A generative adversarial network is constructed, comprising a denoising network and a discriminator network.
  • The denoising network denoises the input noisy rendering together with auxiliary features and outputs a denoised rendering.
  • The discriminator network classifies the input denoised rendering and the target rendering corresponding to the noisy rendering, and outputs the classification result.
  • Training samples are used to tune the network parameters of the generative adversarial network; after tuning, the denoising network determined by those parameters serves as the Monte Carlo rendering denoising model.
  • The second embodiment provides a Monte Carlo rendering denoising model based on a generative adversarial network, constructed by the method of the first embodiment.
  • The model may be a Monte Carlo rendering denoising model M_d, trained on samples consisting of the Monte Carlo rendering P_d produced by the diffuse-path rendering pipeline, the auxiliary features generated together with P_d, and the target rendering corresponding to P_d.
  • The third embodiment provides a Monte Carlo rendering denoising method, comprising the following steps:
  • the rendering pipeline of the rendering engine is split into a diffuse-path pipeline and a specular-path pipeline;
  • the denoised renderings P_d' and P_s' are merged to obtain the final denoised rendering.
  • The fourth embodiment provides a denoising device for Monte Carlo renderings, comprising a computer memory, a computer processor, and a computer program stored in the memory and executable on the processor.
  • The Monte Carlo rendering denoising models M_s and M_d are stored in the computer memory;
  • the rendering pipeline of the rendering engine is split into a diffuse-path pipeline and a specular-path pipeline;
  • the two pipelines render separately to obtain the low-sampling-rate Monte Carlo renderings P_d and P_s, generating the auxiliary features corresponding to P_d and P_s at the same time;
  • the denoised renderings P_d' and P_s' are merged to obtain the final denoised rendering.
  • The Monte Carlo rendering denoising model has stronger denoising capability, and the denoised rendering it produces gives a better result in terms of human visual perception.
  • Because the denoising method and device use the Monte Carlo rendering denoising model, they achieve at a lower sampling rate the rendering quality otherwise attainable only at a high sampling rate. Denoising takes on the order of one second, far less than the rendering time required for heavy sampling (on the order of hundreds to thousands of seconds), greatly saving rendering time and computing cost, reducing server usage, lowering the industrial cost of the whole rendering service, and saving resources.
  • Figure 1 is a schematic diagram of the structure of the generative adversarial network.
  • Figure 2 is a schematic diagram of the training process of the generative adversarial network.
  • Figure 3 is a schematic flowchart of the Monte Carlo rendering denoising method.
  • When a model is Monte Carlo rendered at a low sampling rate, the resulting rendering often contains a lot of noise.
  • The following embodiments provide a generative-adversarial-network-based Monte Carlo rendering denoising model and a method for establishing it, a denoising method using the model, and a denoising device invoking the model.
  • An embodiment provides a method for establishing a Monte Carlo rendering denoising model based on a generative adversarial network, as shown in Figures 1 and 2, comprising the following steps.
  • The goal of the model constructed in this embodiment is to denoise the input noisy rendering and output a denoised rendering whose image quality reaches that of the target rendering.
  • The present invention additionally feeds other auxiliary features into the model, so that denoising can exploit them.
  • The auxiliary features include, but are not limited to, the normal map (Normal Buffer), depth map (Depth Buffer) and material texture map (Albedo Buffer).
  • The noisy rendering with its corresponding auxiliary features, together with the target rendering corresponding to the noisy rendering, forms one training sample; such samples constitute the training sample set.
  • Denoising the noisy rendering with a plain convolutional neural network yields denoised renderings that lack realism in the details.
  • This embodiment therefore constructs the Monte Carlo rendering denoising model through adversarial learning.
  • The constructed generative adversarial network comprises the denoising network Denoising Net and the discriminator network Critic Net.
  • Denoising Net denoises the input noisy rendering and auxiliary features and outputs the denoised rendering; Critic Net classifies the input denoised rendering and the target rendering corresponding to the noisy rendering, and outputs the classification result.
  • The denoising network includes:
  • an auxiliary-feature extraction sub-network, a convolutional neural network with at least one convolutional layer, which fuses the input auxiliary features and outputs an auxiliary feature map;
  • a rendering feature extraction sub-network, a convolutional neural network with at least one convolutional layer, which extracts features of the noisy rendering and outputs a noise feature map;
  • a feature fusion sub-network, a neural network that adopts the residual idea and uses convolutional layers to fuse the auxiliary feature map and the noise feature map.
  • The auxiliary-feature extraction sub-network Encoder Net can be a convolutional neural network in which at least two convolutional layers (Conv) and activation layers (ReLU) are connected in sequence.
  • For example, Encoder Net can be the convolutional neural network shown in Figure 1(c), consisting of Conv k3n128s1, Leaky ReLU, Conv k1n128s1, Leaky ReLU, Conv k1n128s1, Leaky ReLU, Conv k1n128s1, Leaky ReLU and Conv k1n32s1 connected in sequence, where Conv k3n128s1 denotes a convolutional layer with a 3×3 kernel, 128 channels and stride 1; the other convolutional layers are read analogously and are not repeated here.
  • The feature fusion sub-network may include:
  • a feature fusion unit, which combines the auxiliary feature map and the noise feature map and outputs a modulation feature map. It comprises several auxiliary-feature modulation modules (CFM ResBlock), an auxiliary-feature modulation section (CFM) and a convolutional layer connected in sequence, where each CFM ResBlock and the CFM take the auxiliary feature map and the output of the previous layer as input, the first CFM ResBlock takes the noise feature map and the auxiliary feature map as input, and the convolutional layer takes the CFM output as input and outputs the modulation feature map;
  • an output unit, which fuses the noise feature map output by the feature extraction unit with the modulation feature map output by the modulation unit; that is, its input is the feature map obtained by superimposing the noise feature map and the modulation feature map, and its output is the denoised rendering.
  • The auxiliary-feature modulation module CFM ResBlock comprises an auxiliary-feature modulation section CFM, convolutional layers, activation layers and a superposition operation.
  • The CFM modulates the auxiliary features with the previously output features; that is, its input comprises the auxiliary feature map and the output features of the previous layer.
  • The superposition operation adds the input of the CFM ResBlock to the output of its final convolutional layer.
  • For example, the CFM ResBlock comprises, connected in sequence, CFM, Conv k3n64s1, ReLU, CFM, Conv k3n64s1 and the superposition operation ⊕, where the CFM input comprises the auxiliary feature map and the previous layer's output features, and ⊕ adds the CFM ResBlock input to the output of the final Conv k3n64s1.
  • The auxiliary-feature modulation section CFM comprises convolutional layers, a dot-product operation and a superposition operation:
  • the convolutional layers take the auxiliary feature map as input;
  • the dot product multiplies the output of the convolutional layers with the output of the previous layer element-wise;
  • the superposition adds the output of the convolutional layers to the dot-product result, outputting the feature map.
  • For example, CFM comprises Conv k1n32s1, Leaky ReLU, Conv k1n64s1, the dot product ⊙ and the superposition ⊕, where Conv k1n32s1, Leaky ReLU and Conv k1n64s1 are connected in sequence,
  • the input of Conv k1n32s1 is the auxiliary feature map,
  • the dot product ⊙ multiplies the output of the previous layer with the output γ of Conv k1n64s1, and
  • the superposition ⊕ adds the dot-product result to the output β of Conv k1n64s1.
  • The fusion unit comprises convolutional and activation layers and fuses the noise feature map output by the feature extraction unit with the modulation feature map output by the modulation unit, outputting a denoised feature map.
  • For example, the fusion unit comprises Conv k3n64s1, ReLU, Conv k3n3s1 and ReLU connected in sequence.
  • Critic Net is a network composed of convolutional layers, batch normalization (BN), activation layers and fully connected layers.
  • For example, Critic Net includes, connected in sequence, Conv, Leaky ReLU, several consecutive extraction units, a fully connected layer Dense(100), Leaky ReLU and a fully connected layer Dense(1), where an extraction unit comprises consecutive Conv, BN and Leaky ReLU, and the 100 in Dense(100) indicates an output dimension of 100.
  • The training sample set is used for adversarial training of the generative adversarial network, optimizing the network parameters.
  • The role of Denoising Net is to denoise the noisy rendering and generate the denoised rendering, with the aim of making Critic Net unable to distinguish the denoised rendering from the target rendering; the role of Critic Net is to distinguish the visual quality of the two as well as possible.
  • The whole training is an adversarial process in which the capabilities of Denoising Net and Critic Net improve simultaneously.
  • When parameter tuning ends, the Denoising Net determined by the parameters is extracted as the Monte Carlo rendering denoising model.
  • The model can denoise noisy Monte Carlo renderings; while achieving a good denoising effect on low-frequency details, it also markedly improves the preservation of high-frequency details, yielding visually more realistic renderings.
  • The generative adversarial network constructed above can also be trained with different training samples to obtain Monte Carlo rendering denoising models that handle other input images.
  • Monte Carlo rendering improves on traditional backward ray tracing and is still essentially based on the ray-tracing principle. Therefore, according to the material at the first intersection between a path-traced ray and an object, the rendering engine's pipeline can be split into a diffuse-path pipeline and a specular-path pipeline.
  • Rendering with the diffuse-path and specular-path pipelines separately yields the noisy Monte Carlo renderings P_d and P_s.
  • On this basis, one obtains a Monte Carlo rendering denoising model M_d that denoises P_d and a Monte Carlo rendering denoising model M_s that denoises P_s.
  • The Monte Carlo rendering P_d produced by the diffuse-path pipeline serves as the noisy rendering P_d (that is, Noisy Diffuse); the noisy rendering P_d, the auxiliary features generated together with it, and the target rendering corresponding to P_d form the training samples for adversarially training the above generative adversarial network.
  • When adversarial training is complete, the denoising network Denoising Net and the auxiliary-feature fusion network Encoder Net are extracted as the Monte Carlo rendering denoising model M_d.
  • Likewise, the Monte Carlo rendering P_s produced by the specular-path pipeline serves as the noisy rendering P_s (that is, Noisy Specular); the noisy rendering P_s, its auxiliary features, and the target rendering corresponding to P_s form the training samples for adversarial training.
  • After adversarial training, Denoising Net and Encoder Net are extracted as the Monte Carlo rendering denoising model M_s.
  • Another embodiment provides a Monte Carlo rendering denoising method, as shown in Figure 3, comprising the following steps:
  • the rendering pipeline of the rendering engine is split into a diffuse-path pipeline and a specular-path pipeline;
  • the auxiliary features corresponding to P_d and P_s include, but are not limited to, the normal map (Normal Buffer), depth map (Depth Buffer) and material texture map (Albedo Buffer);
  • the denoising models M_d and M_s are constructed by the method described above and are not described again here.
  • Because it uses the denoising models M_d and M_s, the method achieves at a lower sampling rate the rendering quality otherwise attainable only at a high sampling rate, while denoising takes only on the order of one second, far less than the rendering time required for heavy sampling (on the order of hundreds to thousands of seconds), greatly saving rendering time and computing cost, reducing server usage, lowering the industrial cost of the whole rendering service, and saving resources.
  • Another embodiment provides a denoising device for Monte Carlo renderings, comprising a computer memory, a computer processor, and a computer program stored in the memory and executable on the processor.
  • The above Monte Carlo rendering denoising models M_s and M_d are stored in the computer memory;
  • the rendering pipeline of the rendering engine is split into a diffuse-path pipeline and a specular-path pipeline;
  • the two pipelines render separately to obtain the low-sampling-rate Monte Carlo renderings P_d and P_s, generating their corresponding auxiliary features at the same time;
  • the denoised renderings P_d' and P_s' are merged to obtain the final denoised rendering.
  • Because it uses the denoising models M_d and M_s, the device likewise achieves at a lower sampling rate the rendering quality of a high sampling rate, with denoising on the order of one second rather than the hundreds to thousands of seconds required for heavy sampling, saving rendering time, computing cost and server resources.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Image Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A Monte Carlo rendering denoising model based on a generative adversarial network and its construction method, comprising: building training samples; constructing a generative adversarial network comprising a denoising network and a discriminator network, wherein the denoising network denoises the input noisy rendering and auxiliary features and outputs a denoised rendering, and the discriminator network classifies the input denoised rendering and the target rendering corresponding to the noisy rendering and outputs the classification result; and tuning the network parameters of the generative adversarial network with the training samples, the denoising network determined by the parameters serving as the Monte Carlo rendering denoising model. A denoising method and device for Monte Carlo renderings are also disclosed, which can denoise noisy Monte Carlo renderings.

Description

A Monte Carlo rendering denoising model, method and device based on a generative adversarial network
Technical Field
The present invention belongs to the field of image denoising, and in particular relates to a Monte Carlo rendering denoising model, method and device based on a generative adversarial network.
Background Art
Rendering techniques based on Monte Carlo integration (Monte Carlo simulation) consume a great deal of time and computing resources, because the variance convergence of the rendered image requires a large number of samples. To save computing resources and reduce rendering time, a lower sampling rate is generally used to produce a noisy rendering, and a denoising technique is then applied to it to obtain a noise-free rendering with better visual quality.
At present, the more advanced denoising techniques for Monte Carlo renderings are mostly based on deep learning. The most common approach denoises the Monte Carlo rendering with a convolutional neural network: the L1-norm or L2-norm loss between the Monte Carlo rendering and the target noise-free image is taken as the regression objective, the convolutional neural network is trained, and the trained model can then denoise Monte Carlo renderings.
Disney's "Bako S, Vogels T, McWilliams B, et al. Kernel-predicting convolutional networks for denoising Monte Carlo renderings[J]. ACM Transactions on Graphics (TOG), 2017, 36(4): 97." and Nvidia's "Chaitanya C R A, Kaplanyan A S, Schied C, et al. Interactive reconstruction of Monte Carlo image sequences using a recurrent denoising autoencoder[J]. ACM Transactions on Graphics (TOG), 2017, 36(4): 98." take a pixel-level loss as the optimization objective, which can hardly describe real human visual perception accurately. Even when this objective is optimized to a high standard, the high-frequency details tend to come out relatively blurry or poorly reconstructed, so the denoised Monte Carlo rendering lacks realism in the details, and regions rich in high-frequency detail may even look dirty. For example, after denoising an indoor rendering, regions with much high-frequency detail, such as ceiling corners and skirting boards, look rather dirty.
Therefore, a denoising technique for Monte Carlo renderings is urgently needed that both achieves a good denoising effect on low-frequency details and preserves high-frequency details well.
Summary of the Invention
The purpose of the present invention is to provide a Monte Carlo rendering denoising model based on a generative adversarial network and a method for establishing it. The established model can denoise noisy Monte Carlo renderings; while achieving a good denoising effect on low-frequency details, it also markedly improves the preservation of high-frequency details, yielding visually more realistic renderings.
Another purpose of the present invention is to provide a denoising method and device for Monte Carlo renderings. Using the Monte Carlo rendering denoising model constructed above, the method and device can denoise Monte Carlo renderings; while achieving a good denoising effect on low-frequency details, they also markedly improve the preservation of high-frequency details, yielding visually more realistic renderings.
To achieve the above purposes, the following technical solutions are provided:
A first embodiment provides a method for constructing a Monte Carlo rendering denoising model based on a generative adversarial network, comprising the following steps:
obtaining a noisy Monte Carlo rendering as the noisy rendering, obtaining the auxiliary features generated with the noisy rendering, and taking the noisy rendering with its corresponding auxiliary features, together with the target rendering corresponding to the noisy rendering, as one training sample;
constructing a generative adversarial network comprising a denoising network and a discriminator network, wherein the denoising network denoises the input noisy rendering and auxiliary features and outputs a denoised rendering, and the discriminator network classifies the input denoised rendering and the target rendering corresponding to the noisy rendering and outputs the classification result;
tuning the network parameters of the generative adversarial network with the training samples; after tuning, taking the denoising network determined by the parameters as the Monte Carlo rendering denoising model.
A second embodiment provides a Monte Carlo rendering denoising model based on a generative adversarial network, constructed by the method of the first embodiment.
Preferably, the model is a Monte Carlo rendering denoising model M_d, trained on samples consisting of the Monte Carlo rendering P_d produced by the diffuse-path rendering pipeline, the auxiliary features generated with P_d, and the target rendering corresponding to P_d;
or the model is a Monte Carlo rendering denoising model M_s, trained on samples consisting of the Monte Carlo rendering P_s produced by the specular-path rendering pipeline, the auxiliary features generated with P_s, and the target rendering corresponding to P_s.
A third embodiment provides a Monte Carlo rendering denoising method, comprising the following steps:
splitting the rendering pipeline of the rendering engine into a diffuse-path pipeline and a specular-path pipeline according to the material at the first intersection between a path-traced ray and an object;
rendering separately with the diffuse-path and specular-path pipelines to obtain the noisy Monte Carlo renderings P_d and P_s, while generating the auxiliary features corresponding to P_d and P_s;
feeding the Monte Carlo rendering P_d and its corresponding auxiliary features into the Monte Carlo rendering denoising model M_d to obtain the denoised rendering P_d';
feeding the Monte Carlo rendering P_s and its corresponding auxiliary features into the Monte Carlo rendering denoising model M_s to obtain the denoised rendering P_s';
merging the denoised renderings P_d' and P_s' to obtain the final denoised rendering.
A fourth embodiment provides a denoising device for Monte Carlo renderings, comprising a computer memory, a computer processor, and a computer program stored in the computer memory and executable on the computer processor, the computer memory storing the Monte Carlo rendering denoising models M_s and M_d.
When executing the computer program, the computer processor implements the following steps:
splitting the rendering pipeline of the rendering engine into a diffuse-path pipeline and a specular-path pipeline according to the material at the first intersection between a path-traced ray and an object;
rendering separately with the diffuse-path and specular-path pipelines to obtain the low-sampling-rate Monte Carlo renderings P_d and P_s, while generating the auxiliary features corresponding to P_d and P_s;
invoking the Monte Carlo rendering denoising model M_d to denoise the Monte Carlo rendering P_d and its corresponding auxiliary features, obtaining the denoised rendering P_d';
invoking the Monte Carlo rendering denoising model M_s to denoise the Monte Carlo rendering P_s and its corresponding auxiliary features, obtaining the denoised rendering P_s';
merging the denoised renderings P_d' and P_s' to obtain the final denoised rendering.
The beneficial effects of the present invention are as follows:
The Monte Carlo rendering denoising model has stronger denoising capability, and the denoised rendering it produces gives human viewers a visually better denoising result.
Because the denoising method and device use the Monte Carlo rendering denoising model, they achieve at a lower sampling rate the rendering quality otherwise attainable only at a high sampling rate, while the denoising time is only on the order of one second, far less than the rendering time required for heavy sampling (on the order of hundreds to thousands of seconds). This greatly saves rendering time and computing cost, reduces server usage, lowers the industrial cost of the whole rendering service, and saves resources.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a schematic diagram of the structure of the generative adversarial network;
Figure 2 is a schematic diagram of the training process of the generative adversarial network;
Figure 3 is a schematic flowchart of the Monte Carlo rendering denoising method.
Detailed Description
To make the purposes, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit its scope of protection.
When a model is Monte Carlo rendered at a low sampling rate, the resulting rendering often contains a lot of noise. To remove this noise, the following embodiments provide a generative-adversarial-network-based Monte Carlo rendering denoising model and a method for establishing it, a denoising method that uses the model, and a denoising device that invokes the model.
One embodiment provides a method for establishing a Monte Carlo rendering denoising model based on a generative adversarial network, as shown in Figures 1 and 2, which comprises the following steps.
Building the training sample set
First, the model is Monte Carlo rendered at a low sampling rate to obtain a noisy Monte Carlo rendering, which serves as the noisy rendering. Then the same model is Monte Carlo rendered at a high sampling rate to obtain a Monte Carlo rendering with very little noise, which serves as the target rendering. Of course, the noisy rendering may also be denoised in other ways to obtain a target rendering whose image quality meets the requirements; how the target rendering is obtained is not limited here.
The goal of the Monte Carlo rendering denoising model constructed in this embodiment is to denoise the input noisy rendering and output a denoised rendering whose image quality reaches that of the target rendering.
To improve the denoising capability of the model, the present invention also adds other auxiliary features as model inputs, so that when denoising, the model can jointly exploit the features of the noisy rendering and the auxiliary features, repeatedly extracting the feature points that improve image quality to form the denoised rendering. Therefore, when the model is Monte Carlo rendered at a low sampling rate and the noisy rendering is obtained, the corresponding auxiliary features are extracted as well; these auxiliary features (Auxiliary Feature) include, but are not limited to, the normal map (Normal Buffer), depth map (Depth Buffer) and material texture map (Albedo Buffer).
Accordingly, the noisy rendering with its corresponding auxiliary features, together with the target rendering corresponding to the noisy rendering, forms one training sample, and such samples constitute the training sample set; a sketch of one sample follows below.
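As an illustration only, the following is a minimal sketch of how one training sample described above could be assembled; PyTorch and CHW float tensors are assumptions here, since the patent does not prescribe a framework or a tensor layout:

```python
import torch

def make_sample(noisy_rgb, normal, depth, albedo, target_rgb):
    """Assemble one training sample: the noisy render plus its auxiliary
    buffers as network input, and the (near-)noise-free render as target.
    All arguments are float tensors of shape (C, H, W); depth is (1, H, W)."""
    aux = torch.cat([normal, depth, albedo], dim=0)  # auxiliary features, 7 channels
    return {"noisy": noisy_rgb, "aux": aux, "target": target_rgb}
```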
Constructing the generative adversarial network
Denoising the noisy rendering with a plain convolutional neural network yields denoised renderings that lack realism in the details. To better preserve high-frequency details during denoising, this embodiment constructs the Monte Carlo rendering denoising model through adversarial learning. Specifically, the constructed generative adversarial network comprises the denoising network Denoising Net and the discriminator network Critic Net, where Denoising Net denoises the input noisy rendering and auxiliary features and outputs the denoised rendering, and Critic Net classifies the input denoised rendering and the target rendering corresponding to the noisy rendering and outputs the classification result.
Specifically, the denoising network comprises:
an auxiliary-feature extraction sub-network: a convolutional neural network with at least one convolutional layer, which fuses the input auxiliary features and outputs an auxiliary feature map;
a rendering feature extraction sub-network: a convolutional neural network with at least one convolutional layer, which extracts features of the noisy rendering and outputs a noise feature map;
a feature fusion sub-network: a neural network that adopts the residual idea and uses convolutional layers to fuse the auxiliary feature map and the noise feature map.
The auxiliary-feature extraction sub-network Encoder Net may be a convolutional neural network in which at least two convolutional layers (Conv) and activation layers (ReLU) are connected in sequence. For example, Encoder Net may be the convolutional neural network shown in Figure 1(c), which consists of Conv k3n128s1, Leaky ReLU, Conv k1n128s1, Leaky ReLU, Conv k1n128s1, Leaky ReLU, Conv k1n128s1, Leaky ReLU and Conv k1n32s1 connected in sequence, where Conv k3n128s1 denotes a convolutional layer with a 3×3 kernel, 128 channels and stride 1; the other convolutional layers are read analogously and are not repeated here. A sketch of this stack follows below.
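A minimal sketch of the layer stack just listed, assuming PyTorch; the 7-channel input (normal 3 + depth 1 + albedo 3) and the LeakyReLU slope are assumptions, as the patent does not state them:

```python
import torch.nn as nn

class EncoderNet(nn.Module):
    """Auxiliary-feature extraction sub-network of Figure 1(c):
    Conv k3n128s1 -> LeakyReLU -> 3 x (Conv k1n128s1 -> LeakyReLU) -> Conv k1n32s1."""
    def __init__(self, in_ch=7):
        super().__init__()
        layers = [nn.Conv2d(in_ch, 128, kernel_size=3, stride=1, padding=1),
                  nn.LeakyReLU(0.2)]
        for _ in range(3):
            layers += [nn.Conv2d(128, 128, kernel_size=1, stride=1), nn.LeakyReLU(0.2)]
        layers += [nn.Conv2d(128, 32, kernel_size=1, stride=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, aux):
        return self.body(aux)  # 32-channel auxiliary feature map
```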
Specifically, the feature fusion sub-network may comprise:
a feature fusion unit, which combines the auxiliary feature map and the noise feature map and outputs a modulation feature map. It comprises several auxiliary-feature modulation modules (CFM ResBlock), an auxiliary-feature modulation section (CFM) and a convolutional layer connected in sequence, where each CFM ResBlock and the CFM take the auxiliary feature map and the output of the previous layer as input, the first CFM ResBlock takes the noise feature map and the auxiliary feature map as input, and the convolutional layer takes the output of the CFM as input and outputs the modulation feature map;
an output unit, which fuses the noise feature map output by the feature extraction unit with the modulation feature map output by the modulation unit; that is, its input is the feature map obtained by superimposing the noise feature map and the modulation feature map, and its output is the denoised rendering.
Specifically, the auxiliary-feature modulation module CFM ResBlock comprises an auxiliary-feature modulation section CFM, convolutional layers, activation layers and a superposition operation, where the CFM modulates the auxiliary features with the previously output features (that is, the input of the CFM comprises the auxiliary feature map and the output features of the previous layer), and the superposition operation adds the input of the CFM ResBlock to the output of the final convolutional layer.
For example, as shown in Figure 1(b), the CFM ResBlock comprises, connected in sequence, CFM, Conv k3n64s1, ReLU, CFM, Conv k3n64s1 and the superposition operation ⊕, where the input of the CFM comprises the auxiliary feature map and the output features of the previous layer, and ⊕ adds the input of the CFM ResBlock to the output of Conv k3n64s1.
The auxiliary-feature modulation section CFM comprises convolutional layers, a dot-product operation and a superposition operation, where the convolutional layers take the auxiliary feature map as input, the dot product multiplies the output of the convolutional layers with the output of the previous layer element-wise, and the superposition adds the output of the convolutional layers to the dot-product result, outputting the feature map.
For example, as shown in Figure 1(b), CFM comprises Conv k1n32s1, Leaky ReLU, Conv k1n64s1, the dot product ⊙ and the superposition ⊕, where Conv k1n32s1, Leaky ReLU and Conv k1n64s1 are connected in sequence, the input of Conv k1n32s1 is the auxiliary feature map, ⊙ multiplies the output of the previous layer with the output γ of Conv k1n64s1, and ⊕ adds the dot-product result to the output β of Conv k1n64s1. A sketch of the CFM and the CFM ResBlock follows below.
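A minimal sketch of the CFM and the CFM ResBlock described above, assuming PyTorch; reading γ and β as two 1×1-conv heads on a shared Conv k1n32s1 + Leaky ReLU trunk (FiLM-style modulation) is an interpretation of the text, not something the patent states verbatim:

```python
import torch.nn as nn

class CFM(nn.Module):
    """Auxiliary-feature modulation section: out = x ⊙ γ ⊕ β, where γ and β
    are computed from the auxiliary feature map by 1x1 convolutions."""
    def __init__(self, aux_ch=32, feat_ch=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(aux_ch, 32, 1), nn.LeakyReLU(0.2))
        self.to_gamma = nn.Conv2d(32, feat_ch, 1)  # scale γ
        self.to_beta = nn.Conv2d(32, feat_ch, 1)   # shift β

    def forward(self, x, aux):
        h = self.trunk(aux)
        return x * self.to_gamma(h) + self.to_beta(h)

class CFMResBlock(nn.Module):
    """CFM -> Conv k3n64s1 -> ReLU -> CFM -> Conv k3n64s1, with a residual
    skip adding the block input to the final convolution's output."""
    def __init__(self, ch=64, aux_ch=32):
        super().__init__()
        self.cfm1 = CFM(aux_ch, ch)
        self.cfm2 = CFM(aux_ch, ch)
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x, aux):
        h = self.relu(self.conv1(self.cfm1(x, aux)))
        h = self.conv2(self.cfm2(h, aux))
        return x + h  # superposition ⊕ with the block input
```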
Specifically, the fusion unit comprises convolutional layers and activation layers, and fuses the noise feature map output by the feature extraction unit with the modulation feature map output by the modulation unit, outputting a denoised feature map. For example, as shown in Figure 1(a), the fusion unit comprises Conv k3n64s1, ReLU, Conv k3n3s1 and ReLU connected in sequence.
The discriminator network Critic Net is a network composed of convolutional layers, batch normalization (BN), activation layers and fully connected layers. For example, as shown in Figure 1(d), Critic Net comprises, connected in sequence, Conv, Leaky ReLU, several consecutive extraction units, a fully connected layer Dense(100), Leaky ReLU and a fully connected layer Dense(1), where an extraction unit comprises consecutive Conv, BN and Leaky ReLU, and the 100 in Dense(100) indicates an output dimension of 100. A sketch of such a critic follows below.
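A minimal sketch of such a critic, assuming PyTorch; kernel sizes, strides, channel widths, the number of extraction units, and the global pooling before the Dense layers are illustrative assumptions, since the text of Figure 1(d) does not fix them:

```python
import torch.nn as nn

class CriticNet(nn.Module):
    """Conv -> LeakyReLU -> n x (Conv, BN, LeakyReLU) -> Dense(100) ->
    LeakyReLU -> Dense(1), as in Figure 1(d)."""
    def __init__(self, in_ch=3, base=64, n_units=4):
        super().__init__()
        feats = [nn.Conv2d(in_ch, base, 3, stride=2, padding=1), nn.LeakyReLU(0.2)]
        ch = base
        for _ in range(n_units):  # extraction units: Conv, BN, Leaky ReLU
            feats += [nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1),
                      nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2)]
            ch *= 2
        self.features = nn.Sequential(*feats)
        self.pool = nn.AdaptiveAvgPool2d(1)  # assumed: makes the Dense input size-independent
        self.head = nn.Sequential(nn.Linear(ch, 100), nn.LeakyReLU(0.2),
                                  nn.Linear(100, 1))

    def forward(self, img):
        h = self.pool(self.features(img)).flatten(1)
        return self.head(h)  # scalar realness score per image
```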
Training the generative adversarial network
After the generative adversarial network is constructed, the training sample set is used for adversarial training, optimizing the parameters of the generative adversarial network. The role of Denoising Net is to denoise the noisy rendering and generate the denoised rendering, with the aim of making Critic Net unable to tell the denoised rendering from the target rendering; the role of Critic Net is to distinguish the visual quality of the denoised rendering and the target rendering as well as possible. During training, the difference between Critic Net's predicted output and the true label is back-propagated to update the parameters of the generative adversarial network, realizing adversarial training; the whole adversarial process improves the capabilities of Denoising Net and Critic Net simultaneously. A sketch of one training step follows below.
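A minimal sketch of one such adversarial training step, assuming PyTorch and a binary cross-entropy adversarial loss; the patent only says that the difference between the critic's prediction and the true label is back-propagated, without fixing a specific loss, and the auxiliary L1 term on the generator is common practice shown here as an assumption:

```python
import torch
import torch.nn.functional as F

def train_step(denoiser, critic, opt_g, opt_c, noisy, aux, target):
    # 1) Critic update: target renderings are labelled real, denoised ones fake.
    fake = denoiser(noisy, aux).detach()
    real_score, fake_score = critic(target), critic(fake)
    loss_c = (F.binary_cross_entropy_with_logits(real_score, torch.ones_like(real_score))
              + F.binary_cross_entropy_with_logits(fake_score, torch.zeros_like(fake_score)))
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # 2) Denoiser update: try to make the critic label its output as real.
    fake = denoiser(noisy, aux)
    fake_score = critic(fake)
    loss_g = (F.binary_cross_entropy_with_logits(fake_score, torch.ones_like(fake_score))
              + F.l1_loss(fake, target))  # assumed auxiliary pixel loss
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_c.item(), loss_g.item()
```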
When parameter tuning ends, the Denoising Net determined by the parameters is extracted as the Monte Carlo rendering denoising model.
This Monte Carlo rendering denoising model can denoise noisy Monte Carlo renderings; while achieving a good denoising effect on low-frequency details, it also markedly improves the preservation of high-frequency details, yielding visually more realistic renderings.
On the basis of the model constructed above, the generative adversarial network can also be trained with different training samples to obtain Monte Carlo rendering denoising models capable of handling other input images.
As is well known, Monte Carlo rendering improves on traditional backward ray tracing and is still essentially based on the ray-tracing principle. Therefore, according to the material at the first intersection between a path-traced ray and an object, the rendering engine's pipeline can be split into a diffuse-path pipeline and a specular-path pipeline; rendering with each pipeline separately yields the noisy Monte Carlo renderings P_d and P_s.
On this basis, one can obtain a Monte Carlo rendering denoising model M_d that denoises P_d and a Monte Carlo rendering denoising model M_s that denoises P_s.
Specifically, the Monte Carlo rendering P_d produced by the diffuse-path pipeline serves as the noisy rendering P_d (that is, Noisy Diffuse); the noisy rendering P_d, the auxiliary features generated with it, and the target rendering corresponding to P_d form the training samples for adversarial training of the above generative adversarial network. When adversarial training ends, the denoising network Denoising Net and the auxiliary-feature fusion network Encoder Net are extracted as the Monte Carlo rendering denoising model M_d.
Likewise, the Monte Carlo rendering P_s produced by the specular-path pipeline serves as the noisy rendering P_s (that is, Noisy Specular); the noisy rendering P_s, its auxiliary features, and the target rendering corresponding to P_s form the training samples for adversarial training, after which Denoising Net and Encoder Net are extracted as the Monte Carlo rendering denoising model M_s.
Another embodiment provides a Monte Carlo rendering denoising method, as shown in Figure 3, comprising the following steps (a pipeline sketch follows after the method description):
S101: splitting the rendering pipeline of the rendering engine into a diffuse-path pipeline and a specular-path pipeline according to the material at the first intersection between a path-traced ray and an object;
S102: rendering separately with the diffuse-path and specular-path pipelines to obtain the noisy Monte Carlo renderings P_d and P_s, while generating the auxiliary features corresponding to P_d and P_s;
S103: feeding the Monte Carlo rendering P_d and its corresponding auxiliary features into the above Monte Carlo rendering denoising model M_d to obtain the denoised rendering P_d';
S104: feeding the Monte Carlo rendering P_s and its corresponding auxiliary features into the above Monte Carlo rendering denoising model M_s to obtain the denoised rendering P_s';
S105: merging the denoised renderings P_d' and P_s' to obtain the final denoised rendering.
In this denoising method, the auxiliary features corresponding to the Monte Carlo renderings P_d and P_s include, but are not limited to, the normal map (Normal Buffer), depth map (Depth Buffer) and material texture map (Albedo Buffer).
The Monte Carlo rendering denoising models M_d and M_s are constructed by the construction method described above and are not described again here.
Because this denoising method uses the Monte Carlo rendering denoising models M_d and M_s, it achieves at a lower sampling rate the rendering quality otherwise attainable only at a high sampling rate, while denoising takes only on the order of one second, far less than the rendering time required for heavy sampling (on the order of hundreds to thousands of seconds). This greatly saves rendering time and computing cost, reduces server usage, lowers the industrial cost of the whole rendering service, and saves resources.
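A minimal end-to-end sketch of steps S101-S105; the engine interface render_diffuse/render_specular is hypothetical, since the patent does not name a rendering API, and fusing by simple addition assumes the diffuse/specular split is additive in radiance:

```python
import torch

@torch.no_grad()
def denoise_render(engine, scene, model_d, model_s):
    p_d, aux_d = engine.render_diffuse(scene)   # S101/S102: low-sample diffuse pass
    p_s, aux_s = engine.render_specular(scene)  # S101/S102: low-sample specular pass
    p_d_out = model_d(p_d, aux_d)               # S103: M_d denoises the diffuse pass
    p_s_out = model_s(p_s, aux_s)               # S104: M_s denoises the specular pass
    return p_d_out + p_s_out                    # S105: merge into the final rendering
```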
Another embodiment provides a denoising device for Monte Carlo renderings, comprising a computer memory, a computer processor, and a computer program stored in the computer memory and executable on the computer processor, the computer memory storing the above Monte Carlo rendering denoising models M_s and M_d.
When executing the computer program, the computer processor implements the following steps:
splitting the rendering pipeline of the rendering engine into a diffuse-path pipeline and a specular-path pipeline according to the material at the first intersection between a path-traced ray and an object;
rendering separately with the diffuse-path and specular-path pipelines to obtain the low-sampling-rate Monte Carlo renderings P_d and P_s, while generating the auxiliary features corresponding to P_d and P_s;
invoking the Monte Carlo rendering denoising model M_d to denoise the Monte Carlo rendering P_d and its corresponding auxiliary features, obtaining the denoised rendering P_d';
invoking the Monte Carlo rendering denoising model M_s to denoise the Monte Carlo rendering P_s and its corresponding auxiliary features, obtaining the denoised rendering P_s';
merging the denoised renderings P_d' and P_s' to obtain the final denoised rendering.
Because the denoising device uses the Monte Carlo rendering denoising models M_d and M_s, it achieves at a lower sampling rate the rendering quality otherwise attainable only at a high sampling rate, while denoising takes only on the order of one second, far less than the rendering time required for heavy sampling (on the order of hundreds to thousands of seconds). This greatly saves rendering time and computing cost, reduces server usage, lowers the industrial cost of the whole rendering service, and saves resources.
The specific embodiments described above explain the technical solutions and beneficial effects of the present invention in detail. It should be understood that they are only the most preferred embodiments of the present invention and do not limit it; any modification, supplement or equivalent replacement made within the principles of the present invention shall fall within its scope of protection.

Claims (8)

  1. A method for constructing a Monte Carlo rendering denoising model based on a generative adversarial network, comprising the following steps:
    obtaining a noisy Monte Carlo rendering as the noisy rendering, obtaining the auxiliary features generated with the noisy rendering, and taking the noisy rendering with its corresponding auxiliary features, together with the target rendering corresponding to the noisy rendering, as one training sample;
    constructing a generative adversarial network, the generative adversarial network comprising
    a denoising network and a discriminator network, wherein the denoising network denoises the input noisy rendering and auxiliary features and outputs a denoised rendering, and the discriminator network classifies the input denoised rendering and the target rendering corresponding to the noisy rendering and outputs the classification result;
    tuning the network parameters of the generative adversarial network with the training samples; after tuning, taking the denoising network determined by the network parameters as the Monte Carlo rendering denoising model.
  2. The method for constructing a Monte Carlo rendering denoising model based on a generative adversarial network according to claim 1, wherein the denoising network comprises:
    an auxiliary-feature extraction sub-network, which is a convolutional neural network comprising at least one convolutional layer and is used to fuse the input auxiliary features and output an auxiliary feature map;
    a rendering feature extraction sub-network, which is a convolutional neural network comprising at least one convolutional layer and is used to extract features of the noisy rendering and output a noise feature map;
    a feature fusion sub-network, which is a neural network that adopts the residual idea and uses convolutional layers to fuse the auxiliary feature map and the noise feature map.
  3. The method for constructing a Monte Carlo rendering denoising model based on a generative adversarial network according to claim 2, wherein the feature fusion sub-network comprises:
    a feature fusion unit, which combines the auxiliary feature map and the noise feature map and outputs a modulation feature map, and which comprises several auxiliary-feature modulation modules (CFM ResBlock), an auxiliary-feature modulation section (CFM) and a convolutional layer connected in sequence, wherein each CFM ResBlock and the CFM take the auxiliary feature map and the output of the previous layer as input, the first CFM ResBlock takes the noise feature map and the auxiliary feature map as input, and the convolutional layer takes the output of the CFM as input and outputs the modulation feature map;
    an output unit, which fuses the noise feature map output by the feature extraction unit with the modulation feature map output by the modulation unit, i.e., whose input is the feature map obtained by superimposing the noise feature map and the modulation feature map and whose output is the denoised rendering.
  4. The method for constructing a Monte Carlo rendering denoising model based on a generative adversarial network according to claim 1, wherein the discriminator network is a network composed of convolutional layers, batch normalization (BN), activation layers and fully connected layers.
  5. A Monte Carlo rendering denoising model based on a generative adversarial network, wherein the Monte Carlo rendering denoising model is constructed by the construction method according to any one of claims 1 to 4.
  6. The Monte Carlo rendering denoising model based on a generative adversarial network according to claim 5, wherein:
    the Monte Carlo rendering denoising model is a Monte Carlo rendering denoising model M_d, trained on samples consisting of the Monte Carlo rendering P_d produced by the diffuse-path rendering pipeline, the auxiliary features generated with P_d, and the target rendering corresponding to P_d;
    or the Monte Carlo rendering denoising model is a Monte Carlo rendering denoising model M_s, trained on samples consisting of the Monte Carlo rendering P_s produced by the specular-path rendering pipeline, the auxiliary features generated with P_s, and the target rendering corresponding to P_s.
  7. A Monte Carlo rendering denoising method, comprising the following steps:
    splitting the rendering pipeline of the rendering engine into a diffuse-path pipeline and a specular-path pipeline according to the material at the first intersection between a path-traced ray and an object;
    rendering separately with the diffuse-path and specular-path pipelines to obtain the noisy Monte Carlo renderings P_d and P_s, while generating the auxiliary features corresponding to P_d and P_s;
    feeding the Monte Carlo rendering P_d and its corresponding auxiliary features into the Monte Carlo rendering denoising model M_d according to claim 6 to obtain the denoised rendering P_d';
    feeding the Monte Carlo rendering P_s and its corresponding auxiliary features into the Monte Carlo rendering denoising model M_s according to claim 6 to obtain the denoised rendering P_s';
    merging the denoised renderings P_d' and P_s' to obtain the final denoised rendering.
  8. A denoising device for Monte Carlo renderings, comprising a computer memory, a computer processor, and a computer program stored in the computer memory and executable on the computer processor, wherein:
    the computer memory stores the Monte Carlo rendering denoising models M_s and M_d according to claim 6;
    when executing the computer program, the computer processor implements the following steps:
    splitting the rendering pipeline of the rendering engine into a diffuse-path pipeline and a specular-path pipeline according to the material at the first intersection between a path-traced ray and an object;
    rendering separately with the diffuse-path and specular-path pipelines to obtain the low-sampling-rate Monte Carlo renderings P_d and P_s, while generating the auxiliary features corresponding to P_d and P_s;
    invoking the Monte Carlo rendering denoising model M_d to denoise the Monte Carlo rendering P_d and its corresponding auxiliary features, obtaining the denoised rendering P_d';
    invoking the Monte Carlo rendering denoising model M_s to denoise the Monte Carlo rendering P_s and its corresponding auxiliary features, obtaining the denoised rendering P_s';
    merging the denoised renderings P_d' and P_s' to obtain the final denoised rendering.
PCT/CN2020/094759 2019-09-17 2020-06-05 A Monte Carlo rendering denoising model, method and device based on a generative adversarial network WO2021051893A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/631,397 US20220335574A1 (en) 2019-09-17 2020-06-05 A monte carlo rendering image denoising model, method and device based on generative adversarial network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910876687.8 2019-09-17
CN201910876687.8A CN110728636A (zh) 2019-09-17 2020-01-24 A Monte Carlo rendering denoising model, method and device based on a generative adversarial network

Publications (1)

Publication Number Publication Date
WO2021051893A1 (zh) 2021-03-25

Family ID: 69219064

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/094759 WO2021051893A1 (zh) 2019-09-17 2021-03-25 A Monte Carlo rendering denoising model, method and device based on a generative adversarial network

Country Status (3)

Country Link
US (1) US20220335574A1 (zh)
CN (1) CN110728636A (zh)
WO (1) WO2021051893A1 (zh)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728636A (zh) 2019-09-17 2020-01-24 杭州群核信息技术有限公司 A Monte Carlo rendering denoising model, method and device based on a generative adversarial network
US11887279B2 (en) * 2020-08-25 2024-01-30 Sharif University Of Technology Machine learning-based denoising of an image
CN113628126B (zh) * 2021-06-29 2022-03-01 光线云(杭州)科技有限公司 Real-time Monte Carlo path-tracing denoising method, device and computer equipment based on importance feature map sharing
US20230035541A1 (en) * 2021-07-28 2023-02-02 Oracle International Corporation Optimizing a prognostic-surveillance system to achieve a user-selectable functional objective
US20230169176A1 (en) * 2021-11-28 2023-06-01 International Business Machines Corporation Graph exploration framework for adversarial example generation
CN114331895A (zh) * 2021-12-30 2022-04-12 电子科技大学 A Monte Carlo rendering denoising method based on a generative adversarial network
CN115983352B (zh) * 2023-02-14 2023-06-16 北京科技大学 Data generation method and device based on radiance fields and generative adversarial networks

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859147A (zh) * 2019-03-01 2019-06-07 武汉大学 A real-image denoising method based on generative adversarial network noise modeling
CN110148088A (zh) * 2018-03-14 2019-08-20 北京邮电大学 Image processing method, image deraining method, apparatus, terminal and medium
CN110223254A (zh) * 2019-06-10 2019-09-10 大连民族大学 An image denoising method based on an adversarial generative network
CN110728636A (zh) * 2019-09-17 2020-01-24 杭州群核信息技术有限公司 A Monte Carlo rendering denoising model, method and device based on a generative adversarial network

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101907081B1 (ko) * 2011-08-22 2018-10-11 삼성전자주식회사 Method for separating objects in a three-dimensional point cloud
US20160321523A1 (en) * 2015-04-30 2016-11-03 The Regents Of The University Of California Using machine learning to filter monte carlo noise from images
US10572979B2 (en) * 2017-04-06 2020-02-25 Pixar Denoising Monte Carlo renderings using machine learning with importance sampling
US10475165B2 (en) * 2017-04-06 2019-11-12 Disney Enterprises, Inc. Kernel-predicting convolutional neural networks for denoising
US11557022B2 (en) * 2017-07-27 2023-01-17 Nvidia Corporation Neural network system with temporal feedback for denoising of rendered sequences
CN108765319B (zh) * 2018-05-09 2020-08-14 大连理工大学 An image denoising method based on a generative adversarial network
CN109740283A (zh) * 2019-01-17 2019-05-10 清华大学 Autonomous multi-agent adversarial simulation method and system
CN109872288B (zh) * 2019-01-31 2023-05-23 深圳大学 Network training method, apparatus, terminal and storage medium for image denoising


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436111A (zh) * 2021-07-21 2021-09-24 西北工业大学 A hyperspectral remote sensing image denoising method based on network architecture search
CN113436111B (zh) * 2021-07-21 2024-01-09 西北工业大学 A hyperspectral remote sensing image denoising method based on network architecture search
CN114742931A (zh) * 2022-04-28 2022-07-12 北京字跳网络技术有限公司 Method and apparatus for rendering an image, electronic device, and storage medium
CN118115634A (zh) * 2024-03-18 2024-05-31 海南渔人映画文化传媒有限公司 A digital animation rendering method

Also Published As

Publication number Publication date
CN110728636A (zh) 2020-01-24
US20220335574A1 (en) 2022-10-20

Similar Documents

Publication Publication Date Title
WO2021051893A1 (zh) A Monte Carlo rendering denoising model, method and device based on a generative adversarial network
CN108198154B (zh) Image denoising method, apparatus, device and storage medium
JP6961139B2 (ja) Image processing system for reducing an image using a perceptual reduction method
CN109214990A (zh) A deep convolutional neural network image denoising method based on the Inception model
US8285076B2 (en) Methods and apparatus for visual sub-band decomposition of signals
KR20200132682A (ko) Image optimization method, apparatus, device and storage medium
Ali et al. Comparametric image compositing: Computationally efficient high dynamic range imaging
Liu et al. Learning hadamard-product-propagation for image dehazing and beyond
CN111612891A (zh) Model generation method, point cloud data processing method, apparatus, device and medium
Bachl et al. City-GAN: Learning architectural styles using a custom Conditional GAN architecture
JP2009508234A (ja) Combined 2D/3D rendering
CN116645305A (zh) Low-light image enhancement method based on multi-attention mechanisms and Retinex
Khan et al. A deep hybrid few shot divide and glow method for ill-light image enhancement
CN115018968A (zh) Image rendering method and apparatus, storage medium, and electronic device
DE102022100517A1 (de) Using intrinsic functions for shadow denoising in ray-tracing applications
CN106709888A (zh) A high-dynamic-range image generation method based on a human visual model
Panetta et al. Novel multi-color transfer algorithms and quality measure
Xu et al. Artistic color virtual reality implementation based on similarity image restoration
CN112383366B (zh) Spectrum monitoring method and apparatus for digital fluorescence spectra, and storage medium
Tang et al. Feature comparison and analysis for new challenging research fields of image quality assessment
Titarenko et al. Study of the ability of neural networks to extract and use semantic information when they are trained to reconstruct noisy images
CN113223128B (zh) Method and apparatus for generating an image
Bae et al. Non-iterative tone mapping with high efficiency and robustness
Zhang et al. Fast Mesh Reconstruction from Single View Based on GCN and Topology Modification.
Fan et al. Bidirectional image denoising with blurred image feature

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20864376

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20864376

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.09.2022)
