CN116228984A - A Volumetric Cloud Modeling and Rendering Method Based on Meteorological Data


Info

Publication number
CN116228984A
Authority
CN
China
Prior art keywords
cloud
volumetric
noise
clouds
modeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310236581.8A
Other languages
Chinese (zh)
Inventor
林晓颖
李辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202310236581.8A
Publication of CN116228984A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The present invention proposes a volumetric cloud modeling and rendering method based on meteorological data. A volumetric cloud data structure is constructed with five data fields: cloud base height, cloud top height, coverage, cloud base type and cloud top type. First, a basic cloud profile is built from the cloud data structure, the depth of the volumetric cloud is computed, a ray marching algorithm is used, the result is blended with the rendered scene, and the density of clouds in different regions is controlled entirely by noise parameters. Perlin noise and Worley noise of different frequencies are superimposed and saved as three-dimensional textures for modeling the volumetric clouds: the shape noise texture uses fractal Perlin-Worley noise to generate the basic shape of the clouds, and the detail noise texture uses fractal Perlin noise to erode the edges of the cloud shape and add detail. The method achieves highly realistic volumetric clouds when the viewpoint is stationary, as well as cloud-penetration interaction with the volumetric clouds when the viewpoint is moving. The technical solution not only reflects cloud formations driven by real data, but also obtains a highly realistic volumetric cloud effect at a relatively low rendering cost.

Figure 202310236581

Description

A Volumetric Cloud Modeling and Rendering Method Based on Meteorological Data

Technical Field

The invention relates to the technical field of volumetric cloud modeling and rendering, and in particular to a volumetric cloud modeling and rendering method based on meteorological data.

Background Art

Modeling a volumetric cloud means constructing its density field, i.e. being able to compute the cloud density at any point in space. The current mainstream approach is to store the cloud density information in 3D textures; a 3D texture can directly store density values, distance-field information, or baked lighting information. The specific technique varies with the requirements.

This section mainly describes real-time sky volumetric cloud modeling. The current mainstream approach on PC and consoles is procedural modeling based on noise textures, combined with art assets to produce richer shapes and effects. There are many ways to use noise textures and art assets; the methods shared for Horizon Zero Dawn, Red Dead Redemption 2 and the Frostbite engine are all worth referring to. UE4 even exposes the modeling entirely through the Material Graph to artists and technical artists. Although that approach is very flexible, its performance is difficult to control and optimize, so after much deliberation it was not adopted here.

The application numbered CN202111589721.7 discloses "a three-dimensional cloud scene modeling and visualization method based on ground-based cloud images", which is also an increasingly mature technique. In that method, the CPU extracts cumulus image regions from a two-dimensional ground-based cloud image and computes the cloud base height and the cloud thickness as parameters for three-dimensional cumulus modeling. From these two parameters, the positions of the centers of all voxels constituting the cumulus model are determined and passed to the GPU as a vertex array; the GPU geometry shader stage draws all cumulus voxels from the vertex array, and the fragment shader stage then determines the cloud density of each voxel from its distance to the cloud center and computes the voxel color from that density, thereby integrating three-dimensional cloud scene modeling and visualization. Using the voxel as the smallest geometric unit, that invention builds a three-dimensional cumulus model on the GPU and visualizes it in a three-dimensional scene, improving the efficiency of three-dimensional cumulus modeling while supporting smooth roaming in three-dimensional geographic scenes. However, that method still has the following defects in actual use:

The existing technique cannot establish a basis for constructing volumetric clouds and therefore cannot build the density field of a volumetric cloud; the rendering accuracy is limited, and the resulting volumetric clouds are not very realistic.

Summary of the Invention

The purpose of the present invention is to provide a volumetric cloud modeling and rendering method based on meteorological data, so as to solve the problems raised in the background art: the inability to establish a basis for constructing volumetric clouds and hence to build their density field, the limited rendering accuracy, and the poor realism of the resulting volumetric clouds.

To achieve the above object, the present invention provides the following technical solution: a volumetric cloud modeling and rendering method based on meteorological data, which constructs a volumetric cloud data structure containing five data fields: cloud base height, cloud top height, coverage, cloud base type and cloud top type. First, a basic cloud profile is built from the cloud data structure, the depth of the volumetric cloud is computed, a ray marching algorithm is used, the result is blended with the rendered scene, and the density of clouds in different regions is controlled entirely by noise parameters;

Perlin noise and Worley noise of different frequencies are superimposed and saved as three-dimensional textures, which are used for modeling the volumetric clouds;

The shape noise texture uses fractal Perlin-Worley noise to generate the basic shape of the clouds; the detail noise texture uses fractal Perlin noise to erode the edges of the cloud shape and add detail;

The self-shadowing of the volumetric clouds is analyzed and produced.

Preferably, the basic cloud profile is constructed from the volumetric cloud data structure, and three-dimensional noise is then used to erode this profile and add detail to the cloud layer, yielding the density field of the volumetric cloud.

Preferably, the depth value of each pixel of the scene to be displayed and the depth value of the volumetric cloud are obtained and compared to decide whether each pixel is occluded by the cloud layer; if a pixel is not occluded, its original color is displayed directly.

Preferably, a ray marching algorithm is used: the intersection of the ray emitted from the virtual camera with the cloud surface is taken as the starting point; if the viewpoint is inside the cloud, a structured sampling method is used and the viewpoint position is taken as the starting point. Marching proceeds from the starting point along the ray direction, and the density at each sample point is obtained from the density field.

Preferably, the illumination model is solved from the self-occlusion shadow and the cloud density at the sample points to obtain the final transparency and color of the volumetric cloud, which is then blended with the scene to be rendered.

Preferably, the density of clouds in different regions is controlled entirely by noise parameters; if the artists need different types of clouds in the scene at the same time, or need to control the cloud density freely, a Cloud Map is used.

Preferably, for cumulonimbus clouds with relatively complex shapes, a Cloud LUT is combined to transition between Cloud Type values of 0.5 and 1.0, that is, the noise is given different coverage and erosion according to height: clouds at greater heights have low coverage and strong noise erosion, while clouds at lower heights have high coverage and weak noise erosion.
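
As an illustrative, non-limiting sketch of this height-dependent control, the Python fragment below remaps coverage and erosion by the normalized height inside the cloud layer. The function name and the linear ramp coefficients are assumptions made only for illustration; they are not the contents of the Cloud LUT described above.

```python
def height_profile(h):
    """Hypothetical height ramps: h is the normalized height inside the cloud layer,
    0.0 at the cloud base and 1.0 at the cloud top."""
    coverage_scale = 1.0 - 0.7 * h   # high clouds: low coverage; low clouds: high coverage
    erosion_scale = 0.2 + 0.8 * h    # high clouds: strong erosion; low clouds: weak erosion
    return coverage_scale, erosion_scale
```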

Preferably, there are two main approaches to volumetric cloud self-shadowing:

The first approach is the more common one: from each sample point, a second ray march is performed toward the light source (on PC/console) to compute the transmittance. With a sufficient number of steps the result is good, but the cost is obviously high;

The second approach is volumetric shadow mapping, such as UE4's Beer's Shadow Map, or the somewhat more complex Transmittance Function Mapping method (so far a similar method has only been seen in Final Fantasy). The principle of Transmittance Function Mapping is to approximate the transmittance function with a series of orthogonal basis functions.

Compared with the prior art, the beneficial effects of the present invention are:

During volumetric cloud modeling, the basic shape of the cloud layer is constructed from the cloud coverage data and the cloud type data, and several noises of different frequencies are superimposed to add detail, thereby constructing the density field of the volumetric cloud. During volumetric cloud rendering, a ray marching method is used to compute the accumulated density and the light occlusion for each pixel, the illumination model is solved to obtain the final color and transparency of the volumetric cloud, and the result is blended with the scene. Different sampling methods are used for different usage scenarios, achieving highly realistic volumetric clouds when the viewpoint is stationary as well as cloud-penetration interaction with the volumetric clouds when the viewpoint is moving. The technical solution not only reflects cloud formations driven by real data, but also obtains a highly realistic volumetric cloud effect at a relatively low rendering cost.

Brief Description of the Drawings

Fig. 1 is a flow chart of the volumetric cloud construction and rendering method of the present invention.

Fig. 2 is a height distribution diagram of the present invention.

Fig. 3 shows the low-frequency Perlin-Worley noise and the Worley noise of successively increasing frequency of the present invention.

Fig. 4 shows volumetric cloud detail added using three-dimensional noise in the present invention.

Fig. 5 is a schematic diagram of structured sampling in the present invention.

Detailed Description of the Embodiments

The technical solution of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them.

The components of the embodiments of the invention, as generally described and illustrated in the figures herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the invention provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention.

Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

In the description of the present invention, it should be noted that terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience and simplicity of description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the invention. In addition, the terms "first", "second" and "third" are used for descriptive purposes only and should not be construed as indicating or implying relative importance.

Referring to Figs. 1-5, the present invention provides a technical solution: a volumetric cloud data structure is constructed, containing five data fields: cloud base height, cloud top height, coverage, cloud base type and cloud top type. First, a basic cloud profile is built from the cloud data structure, the depth of the volumetric cloud is computed, a ray marching algorithm is used, the result is blended with the rendered scene, and the density of clouds in different regions is controlled entirely by noise parameters. Perlin noise and Worley noise of different frequencies are superimposed and saved as three-dimensional textures for modeling the volumetric clouds; the shape noise texture uses fractal Perlin-Worley noise to generate the basic shape of the clouds, and the detail noise texture uses fractal Perlin noise to erode the edges of the cloud shape and add detail;

For the analysis and production of volumetric cloud self-shadowing, the basic cloud profile is constructed from the volumetric cloud data structure, and three-dimensional noise is then used to erode this profile and add detail, yielding the density field of the volumetric cloud.

The depth value of each pixel of the scene to be displayed and the depth value of the volumetric cloud are obtained and compared to decide whether each pixel is occluded by the cloud layer; if a pixel is not occluded, its original color is displayed directly. A ray marching algorithm is used: the intersection of the ray emitted from the virtual camera with the cloud surface is taken as the starting point; if the viewpoint is inside the cloud, a structured sampling method is used and the viewpoint position is taken as the starting point. Marching proceeds from the starting point along the ray direction, and the density of each sample point is obtained from the density field.
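
A minimal sketch of this per-pixel decision is given below, assuming a y-up coordinate system and a simple horizontal slab between the cloud base and cloud top heights; the function names and the slab intersection are illustrative assumptions, not the exact data structures of the method.

```python
def pixel_is_occluded(scene_depth, cloud_depth):
    # The pixel shows the cloud only if the cloud lies in front of the scene geometry.
    return cloud_depth < scene_depth

def march_start(cam_pos, ray_dir, cloud_base, cloud_top):
    """Ray-march starting point: the ray/cloud-layer intersection when the camera is
    outside the layer, or the camera position itself when it is inside the layer."""
    y = cam_pos[1]
    if cloud_base <= y <= cloud_top:        # viewpoint inside the cloud layer
        return cam_pos
    if ray_dir[1] == 0.0:                   # ray parallel to the layer: no hit
        return None
    target = cloud_base if y < cloud_base else cloud_top
    t = (target - y) / ray_dir[1]           # distance to the nearer slab plane
    if t < 0.0:                             # layer is behind the camera
        return None
    return [cam_pos[i] + t * ray_dir[i] for i in range(3)]
```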

The illumination model is solved from the self-occlusion shadow and the cloud density at the sample points to obtain the final transparency and color of the volumetric cloud, which is blended with the scene to be rendered. The density of clouds in different regions is controlled entirely by noise parameters. If the artists need different types of clouds in the scene at the same time, or need to control the cloud density freely, a Cloud Map is used. The Cloud Map can be painted directly by artists, but since the effect cannot then be observed in real time, which is inconvenient, a brush tool was made to make painting easier for artists, similar to UE4's Blueprint Painted Clouds.

For cumulonimbus clouds with relatively complex shapes, a Cloud LUT is combined to transition between Cloud Type values of 0.5 and 1.0, that is, the noise is given different coverage and erosion according to height: clouds at greater heights have low coverage and strong noise erosion, while clouds at lower heights have high coverage and weak noise erosion;

The Erosion value stored in the G channel of the Cloud LUT is in fact not only used for the detail noise; it also affects the computation of the shape noise. For details, refer to the implementation in the Unity HDRP source code.

There are two main approaches to volumetric cloud self-shadowing. The first is the more common one: from each sample point, a second ray march is performed toward the light source (on PC/console) to compute the transmittance; with enough steps the result is good, but the cost is obviously high. The second approach is volumetric shadow mapping, such as UE4's Beer's Shadow Map, or the somewhat more complex Transmittance Function Mapping method (so far a similar method has only been seen in Final Fantasy). The principle of Transmittance Function Mapping is to approximate the transmittance function with a series of orthogonal basis functions; Fourier functions, DCT functions and Haar wavelets have all been used as basis functions. Compared with Beer's Shadow Map, methods like TFM approximate the transmittance curve more accurately in some cases and therefore render volumetric clouds better.
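
An illustrative sketch of the first approach (a secondary march toward the light combined with Beer's law) is given below; the `density` callback, the step count and the step length are assumptions for illustration.

```python
import math

def light_transmittance(sample_pos, light_dir, density, steps=6, step_len=100.0):
    """March from a sample point toward the light source, accumulate optical depth,
    and convert it to a transmittance value with Beer's law."""
    optical_depth = 0.0
    p = list(sample_pos)
    for _ in range(steps):
        p = [p[i] + light_dir[i] * step_len for i in range(3)]
        optical_depth += density(p) * step_len
    return math.exp(-optical_depth)
```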

Volumetric cloud modeling process: a volumetric cloud weather map is obtained from real meteorological data and stored as a three-channel two-dimensional texture, where the R channel stores the volumetric cloud coverage, the G channel stores the cloud base type and the B channel stores the cloud top type. The volumetric cloud coverage represents the probability p_horizontal of a volumetric cloud appearing in the horizontal plane. The cloud type value ranges from 0.0 to 1.0; according to the cloud type, the probability p_vertical of a volumetric cloud appearing in the vertical direction is read from the vertical distribution diagram. From these two probabilities, the existence probability field p_profile of the volumetric cloud is computed by the following formula; this probability field describes the rough outline of the volumetric cloud.

p_profile = p_horizontal · p_vertical
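
An illustrative sketch of this step is given below; it assumes the weather map has already been sampled into per-pixel coverage and cloud type values, and that `vertical_profile` is a lookup into the vertical distribution diagram. All names are assumptions made for illustration.

```python
def cloud_profile(coverage, cloud_type, height_norm, vertical_profile):
    """p_profile = p_horizontal * p_vertical.

    coverage         -- R channel of the weather map (horizontal occurrence probability)
    cloud_type       -- cloud type value in 0.0..1.0 selecting the vertical distribution
    height_norm      -- normalized height of the query point inside the cloud layer
    vertical_profile -- callable (cloud_type, height_norm) -> p_vertical
    """
    p_horizontal = coverage
    p_vertical = vertical_profile(cloud_type, height_norm)
    return p_horizontal * p_vertical
```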

Using the Fractal Brownian Motion (FBM) method, several noises with successively increasing frequency and successively decreasing amplitude are superimposed, and two three-dimensional noise textures are generated offline. The FBM calculation formula is as follows:

fbm(p) = Σ (i = 0 to n-1) amplitude_i · noise(frequency_i · p)

where n is the number of superimposed noise octaves, amplitude is the amplitude of each superimposed noise, frequency is the frequency of the superimposed noise, and noise() is the noise function used. The FBM method can also produce a dynamic cloud effect: domain warping is simply applied when the noise texture is read, and the domain-warped noise value is computed with the following formula:

Density(p) = fbm(p + fbm(p + fbm(p)))
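
The two formulas above can be transcribed directly as the following sketch, with a generic `noise3(p)` stand-in for the noise function; the octave count, lacunarity and gain are illustrative assumptions, and an actual implementation would sample the precomputed Perlin/Worley textures instead.

```python
def fbm(p, noise3, octaves=5, lacunarity=2.0, gain=0.5):
    """Sum of octaves: amplitude halves and frequency doubles at every octave."""
    amplitude, frequency, total = 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * noise3([frequency * c for c in p])
        amplitude *= gain
        frequency *= lacunarity
    return total

def warped_density(p, noise3):
    """Density(p) = fbm(p + fbm(p + fbm(p))): domain warping used for the dynamic cloud effect."""
    inner = fbm(p, noise3)
    middle = fbm([c + inner for c in p], noise3)
    return fbm([c + middle for c in p], noise3)
```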

Procedurally generated noise is pseudo-random and is usually generated in advance and saved as an image. Volumetric cloud modeling uses two three-dimensional noise textures. The first texture has a size of 128×128×128×4: the R channel stores low-frequency Perlin-Worley noise, and the G, B and A channels store Worley noise of successively increasing frequency.

The low-frequency Perlin-Worley noise and the Worley noise of successively increasing frequency have a resolution of 128³. The second three-dimensional noise texture has a size of 32×32×32×3; its three channels store Worley noise of higher frequency, used to add more detail to the volumetric clouds. This high-frequency Worley noise has a resolution of 32³;

Finally, the density field of the volumetric cloud is calculated by the following formula:

Figure BDA0004122542720000072

where noise is the noise value after domain warping;

Rendering process: for the selection of sample points, the ray marching method is used to generate a ray from the virtual camera position toward the volumetric cloud. Starting from the camera position, the ray advances with a given step size; the position of each sample point is computed, the cloud density at that point is sampled, the cloud density is accumulated, the brightness of each sample point is computed, and the color and brightness of the volumetric cloud fragment pointed to by this ray is updated.

Pixels farther from the camera have less influence on the final appearance of the volumetric cloud, so the volumetric cloud ray marching uses variable-step-size sampling.

When the virtual camera is on the ground, the intersection of the marching ray with the bottom of the cloud layer is computed and used as the starting point for variable-step-size sampling. Marching every pixel with the same step size would give the volumetric clouds obvious banding artifacts; to prevent this, and to speed up rendering, blue noise is used to add a jitter value to the step size;

When the virtual camera is inside the cloud, the camera position is used as the starting point of the ray march, with variable-step-size sampling. While inside the cloud the virtual camera is usually moving, and changes of the viewpoint cause rapid changes of the sample points, leading to temporal aliasing of the volumetric cloud. Blue-noise jittered sampling can alleviate this aliasing, but the improvement is limited; the structured sampling method solves the problem by computing the initial offset of the ray march from the viewpoint position, so that when the viewpoint moves, only the sample points nearest to and farthest from the viewpoint change. The nearby sample points are relatively dense, so their changes have a negligible effect on the volumetric cloud result, and the farthest sample points carry little weight in the density accumulation of a ray march, so their influence on the result is also almost negligible.
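
The sketch below combines the sampling rules described above: a variable step length, a blue-noise jitter of the step, and a structured initial offset so that the sample positions stay stable while the viewpoint moves inside the cloud. The growth factor, the jitter scale and the way the offset is derived are assumptions for illustration.

```python
import math

def march_cloud(start, ray_dir, density, blue_noise, cam_dist_along_ray=0.0,
                base_step=50.0, growth=1.05, max_steps=128):
    """Accumulate cloud optical depth along one ray with variable-length steps.

    blue_noise         -- per-pixel value in [0, 1) used to jitter the step position
    cam_dist_along_ray -- distance of the camera along the ray, used for the structured
                          offset: the first sample snaps to a multiple of base_step so
                          only the nearest and farthest samples change as the viewpoint moves
    """
    t = math.ceil(cam_dist_along_ray / base_step) * base_step - cam_dist_along_ray
    step = base_step
    optical_depth = 0.0
    for _ in range(max_steps):
        jittered_t = t + blue_noise * step            # blue-noise jitter against banding
        p = [start[i] + ray_dir[i] * jittered_t for i in range(3)]
        optical_depth += density(p) * step
        t += step
        step *= growth                                # longer steps farther from the camera
    return optical_depth
```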

If the cloud density at a sample point is greater than 0, a step-wise march toward the light source is performed from that sample point; the cloud density along the light direction is accumulated, and the illumination model is solved to compute the brightness of the sample point.

Solving the illumination model:

The illumination model of the volumetric cloud includes the light-and-dark effects caused by multiple scattering and refraction within the cloud layer, the bright-edge effect when looking toward the sun, and the dark-edge effect when looking away from the sun.

For multiple scattering and refraction within the cloud layer, Beer's law is used to describe the relationship between the transparency of the volumetric cloud and the optical thickness.

cloud_transmittance = e^(-Cloud_depth)

Cloud_depth = Σ_i Density(p_i) · Δs_i

where Cloud_depth is the density accumulated by the ray march up to the current sample point, and Density(p) returns the density value of the sample point located at p.

The bright-edge effect seen when looking toward the sun is produced by sunlight scattering as it passes through the clouds. The phase function describes the relationship between the scattering intensity at each angle and the direction of the incident light, and can be used to simulate light scattering within the cloud layer. Mie scattering describes this scattering relationship fairly accurately but is expensive to compute, so cloud rendering usually approximates Mie scattering with the Henyey-Greenstein (HG) phase function. To compensate for the larger approximation error of the HG phase function in back-scattering, the two-term HG (TTHG) phase function is used instead, and its parameters are optimized with a particle swarm algorithm.

phase_HG(θ, g) = (1 - g²) / (4π · (1 + g² - 2g·cosθ)^(3/2))

phase_TTHG(θ, g_α, g_β, α) = α · phase_HG(θ, g_α) + (1 - α) · phase_HG(θ, g_β)

where g_α and g_β are the asymmetry factors of forward scattering and back scattering respectively, and α is the weight;
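
The two phase functions above transcribe directly into the following sketch; the asymmetry factors and weight used in the example call are illustrative values, not the parameters obtained from the particle swarm optimization.

```python
import math

def phase_hg(cos_theta, g):
    """Henyey-Greenstein phase function."""
    return (1.0 - g * g) / (4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def phase_tthg(cos_theta, g_forward, g_backward, alpha):
    """Two-term HG: weighted blend of a forward lobe and a backward lobe."""
    return alpha * phase_hg(cos_theta, g_forward) + (1.0 - alpha) * phase_hg(cos_theta, g_backward)

# Example: a strong forward lobe with a weak backward lobe (illustrative parameters).
p = phase_tthg(cos_theta=0.9, g_forward=0.8, g_backward=-0.3, alpha=0.7)
```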

When the cloud layer is observed facing away from the sun, the clouds show a dark-edge effect that cannot be reproduced by the two methods above alone. When viewing the volumetric cloud away from the sun, the Beer's-Powder function is used instead of the Beer function.

E = 2.0 · e^(-Cloud_depth) · (1.0 - e^(-2·Cloud_depth))

To reduce the difference between this result and the original normalized function, the Beer's-Powder function multiplies the result by 2.
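
The Beer's-Powder term above is a one-liner; `cloud_depth` is assumed to be the optical depth accumulated by the ray march up to the current sample.

```python
import math

def beers_powder(cloud_depth):
    """E = 2.0 * exp(-cloud_depth) * (1.0 - exp(-2.0 * cloud_depth)); the factor 2 keeps
    the product close to the plain Beer term after normalization."""
    return 2.0 * math.exp(-cloud_depth) * (1.0 - math.exp(-2.0 * cloud_depth))
```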

The final illumination model can be expressed by the following formula:

E = 2.0 · e^(-Cloud_depth) · (1.0 - e^(-2·Cloud_depth))

From the depth information of the volumetric cloud outline and the depth information of the scene to be rendered, the pixels occluded by the volumetric cloud are found. For each of these pixels, the ray march described above is performed, the color and transparency of the volumetric cloud at the pixel are computed, and the result is finally blended with the scene to be rendered.

Figure BDA0004122542720000101
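
Since the blending formula itself is only given as a figure in the original, the sketch below shows one common assumption for the final per-pixel combination: the cloud color is added to the scene color weighted by the remaining cloud transmittance.

```python
def composite(scene_color, cloud_color, cloud_transmittance):
    """Blend the volumetric cloud over the scene for one occluded pixel;
    cloud_transmittance is in [0, 1], where 1 means the cloud is fully transparent."""
    return [cloud_color[i] + cloud_transmittance * scene_color[i] for i in range(3)]
```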

Content not described in detail in this specification belongs to the prior art known to those skilled in the art. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art can still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some of the technical features. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (8)

1. A volumetric cloud modeling and rendering method based on meteorological data, comprising: constructing a volumetric cloud data structure containing five data fields: cloud base height, cloud top height, coverage, cloud base type and cloud top type; first building a basic cloud profile from the cloud data structure, computing the depth of the volumetric cloud, using a ray marching algorithm, blending with the rendered scene, and controlling the density of clouds in different regions entirely by noise parameters;
superimposing Perlin noise and Worley noise of different frequencies and saving them as three-dimensional textures used for modeling the volumetric clouds;
the shape noise texture using fractal Perlin-Worley noise to generate the basic shape of the clouds, and the detail noise texture using fractal Perlin noise to erode the edges of the cloud shape and add detail;
and analyzing and producing the self-shadowing of the volumetric clouds.

2. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, characterized in that: the basic cloud profile is constructed from the volumetric cloud data structure, and three-dimensional noise is then used to erode this profile and add detail to the cloud layer, yielding the density field of the volumetric cloud.

3. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, characterized in that: the depth value of each pixel of the scene to be displayed and the depth value of the volumetric cloud are obtained and compared to decide whether each pixel is occluded by the cloud layer; if a pixel is not occluded, its original color is displayed directly.

4. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, characterized in that: a ray marching algorithm is used, and the intersection of the ray emitted from the virtual camera with the cloud surface is taken as the starting point; if the viewpoint is inside the cloud, a structured sampling method is used and the viewpoint position is taken as the starting point; marching proceeds from the starting point along the ray direction, and the density of each sample point is obtained from the density field; for each sample point with density greater than 0, a further ray march is performed toward the light direction to compute the self-occlusion shadow of the volumetric cloud.

5. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, characterized in that: the illumination model is solved from the self-occlusion shadow and the cloud density at the sample points to obtain the final transparency and color of the volumetric cloud, which is blended with the scene to be rendered.
6. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, characterized in that: the density of clouds in different regions is controlled entirely by noise parameters; if the artists need different types of clouds in the scene at the same time, or need to control the cloud density freely, a Cloud Map is used; the Cloud Map can be painted directly by artists, but since the effect cannot then be observed in real time, which is inconvenient, a brush tool was made to make painting easier for artists, similar to UE4's Blueprint Painted Clouds.

7. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, characterized in that: for cumulonimbus clouds with relatively complex shapes, a Cloud LUT is combined to transition between Cloud Type values of 0.5 and 1.0, that is, the noise is given different coverage and erosion according to height: clouds at greater heights have low coverage and strong noise erosion, while clouds at lower heights have high coverage and weak noise erosion;
the Erosion value stored in the G channel of the Cloud LUT is in fact not only used for the detail noise, it also affects the computation of the shape noise; for details, refer to the implementation in the Unity HDRP source code.

8. The volumetric cloud modeling and rendering method based on meteorological data according to claim 1, characterized in that: there are two main approaches to the self-shadowing of the volumetric cloud:
the first, more common approach is to perform a second ray march from each sample point toward the light source (on PC/console) to compute the transmittance; with enough steps the result is good, but the cost is obviously high;
the second approach is volumetric shadow mapping, such as UE4's Beer's Shadow Map, or the somewhat more complex Transmittance Function Mapping method (so far a similar method has only been seen in Final Fantasy); the principle of Transmittance Function Mapping is to approximate the transmittance function with a series of orthogonal basis functions; Fourier functions, DCT functions and Haar wavelets have been used as basis functions; compared with Beer's Shadow Map, methods like TFM approximate the transmittance curve more accurately in some cases and therefore render volumetric clouds better.
CN202310236581.8A 2023-03-13 2023-03-13 A Volumetric Cloud Modeling and Rendering Method Based on Meteorological Data Pending CN116228984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310236581.8A CN116228984A (en) 2023-03-13 2023-03-13 A Volumetric Cloud Modeling and Rendering Method Based on Meteorological Data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310236581.8A CN116228984A (en) 2023-03-13 2023-03-13 A Volumetric Cloud Modeling and Rendering Method Based on Meteorological Data

Publications (1)

Publication Number Publication Date
CN116228984A true CN116228984A (en) 2023-06-06

Family

ID=86590875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310236581.8A Pending CN116228984A (en) 2023-03-13 2023-03-13 A Volumetric Cloud Modeling and Rendering Method Based on Meteorological Data

Country Status (1)

Country Link
CN (1) CN116228984A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117649481A (en) * 2023-12-06 2024-03-05 中国人民解放军军事科学院战争研究院 Volume cloud ground shadow rendering method, device and storage medium
CN117523026A (en) * 2024-01-08 2024-02-06 北京理工大学 Cloud and fog image simulation methods, systems, media and terminals for infrared remote sensing imaging
CN117523026B (en) * 2024-01-08 2024-03-29 北京理工大学 Cloud and fog image simulation methods, systems, media and terminals for infrared remote sensing imaging
CN117710557A (en) * 2024-02-05 2024-03-15 杭州经纬信息技术股份有限公司 Method, device, equipment and medium for constructing realistic volume cloud
CN117710557B (en) * 2024-02-05 2024-05-03 杭州经纬信息技术股份有限公司 Method, device, equipment and medium for constructing realistic volume cloud

Similar Documents

Publication Publication Date Title
CN116228984A (en) A Volumetric Cloud Modeling and Rendering Method Based on Meteorological Data
CN111508052B (en) Rendering method and device of three-dimensional grid body
Behrendt et al. Realistic real-time rendering of landscapes using billboard clouds
Baatz et al. Nerf‐tex: Neural reflectance field textures
WO2017206325A1 (en) Calculation method and apparatus for global illumination
US20080143720A1 (en) Method for rendering global illumination on a graphics processing unit
CN102768765A (en) Real-time soft shadow rendering method for point light sources
Bruneton et al. Real‐time realistic rendering and lighting of forests
CN104091363A (en) Real-time size cloud computing method based on screen space
US11380044B2 (en) Methods and systems for volumetric reconstruction based on a confidence field
Dachsbacher Interactive terrain rendering: towards realism with procedural models and graphics hardware
Simons et al. An Interactive Information Visualization Approach to Physically-Based Rendering.
Boulanger Real-time realistic rendering of nature scenes with dynamic lighting
Xu et al. Rendering and modeling of stratus cloud using weather forecast data
CN116194960A (en) Direct volume rendering device
Congote et al. Volume ray casting in WebGL
CN119006683B (en) A real-time rendering method and system for terrain shading images of a custom area
CN118096985B (en) Real-time rendering method and device for virtual forest scene
Kolivand et al. ReLiShaft: realistic real-time light shaft generation taking sky illumination into account
Favorskaya et al. Large scene rendering
Yuan et al. Near-Surface Atmospheric Scattering Rendering Method
Bhattacharjee et al. Real-time painterly rendering of terrains
Seibt et al. Multidimensional Image Morphing-Fast Image-based Rendering of Open 3D and VR Environments
Tandianus et al. Real-time rendering of approximate caustics under environment illumination
Wang et al. Real‐time rendering of sky scene considering scattering and refraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination