WO2017167084A1 - Global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery - Google Patents

Global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery

Info

Publication number
WO2017167084A1
WO2017167084A1 (PCT/CN2017/077634)
Authority
WO
WIPO (PCT)
Prior art keywords
light source
column
virtual
sampling
visual
Prior art date
Application number
PCT/CN2017/077634
Other languages
English (en)
French (fr)
Inventor
鲍虎军 (Hujun Bao)
王锐 (Rui Wang)
霍宇驰 (Yuchi Huo)
Original Assignee
浙江大学 (Zhejiang University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 (Zhejiang University)
Publication of WO2017167084A1 publication Critical patent/WO2017167084A1/zh

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 — 3D [Three Dimensional] image rendering
    • G06T15/02 — Non-photorealistic rendering
    • G06T15/10 — Geometric effects
    • G06T15/20 — Perspective computation
    • G06T15/205 — Image-based rendering

Definitions

  • The present invention relates to the field of image technology, and in particular to a global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery.
  • Global illumination is a very important research field in computer graphics. By simulating illumination in nature, it captures lighting effects such as soft shadows and indirect illumination produced by the multiple bounces, refraction, and reflection of light in real environments; these effects greatly enhance the realism of rendered images. The technique is widely used in the rendering of films, animation, and 3D models.
  • Global illumination can be implemented in several ways, for example radiosity, ray tracing, ambient occlusion, and photon mapping.
  • Many-light methods are one important family of global illumination techniques. They generate a large number of virtual light sources in the scene, including virtual point lights (Virtual Point Light, VPL) and virtual ray lights (Virtual Ray Light, VRL).
  • The global illumination effect is obtained by computing, for each visual sampler, the degree to which it is illuminated by these virtual light sources; a visual sampler is either a shading point or an eye ray.
  • Wald et al. proposed the light cuts method within the many-light framework: the virtual point lights are organized into a hierarchy, and a cut through the hierarchy tree is used to represent all virtual point lights, reducing the amount of computation and speeding up rendering.
  • The present invention provides a global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery. It uses an adaptive matrix recovery technique and can be combined with the virtual ray light (VRL) method to render scenes containing participating media, offering higher generality and faster rendering.
  • A global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery includes the following steps:
  • The visual samplers include the camera's shading points on the geometric mesh surfaces of the scene to be drawn and eye-ray segments in the media of the scene to be drawn; the virtual light sources include virtual point lights and virtual ray lights.
  • (3-1) taking the columns of the light transfer matrix that correspond to the leaf nodes contained in the current node of the light tree as the column sampling set, and performing sparse column sampling on this set to obtain sampled columns;
  • The required inputs are: the geometric mesh of the target scene (i.e., the scene to be drawn, consisting of triangle patches) and the normal vector of each mesh face; the media information in the scene (i.e., a spatial grid containing media information); a series of virtual light sources, including each light source's material information, position information (position and direction within the target scene), and energy information; and the spatial information of the camera (its position and orientation).
  • The spatial acceleration structure of the geometric mesh is built with the SBVH method (spatial splits in bounding volume hierarchies). Using this acceleration structure and the camera's position information, ray tracing determines the camera's visual samplers in the geometric mesh and the sampling information of each visual sampler.
  • All elements of the light transfer matrix constructed in step (2) are unknown; equivalently, every element of the constructed matrix is initially empty.
  • Each column corresponds to one virtual light source, which may be a virtual point light that lands on a geometric surface or a virtual ray light that passes through a medium such as smoke.
  • The elements of the light transfer matrix fall into four categories (the corresponding formulas (1)-(4) are given in the Description below):
  • (a) The contribution of a virtual point light to a shading point, where x denotes the shading point, y the virtual point light, V(x, y) the generalized visibility term between the two points, G(x, y) the geometry term between the two points, L the intensity of the virtual point light y, and f(x, y) the material term of the shading point x toward (i.e., facing) the virtual point light y.
  • (b) The contribution of a virtual point light to an eye-ray segment, where x = a + td is a point on the segment, u is the length of the segment, d its direction, a its start position, and y, V(x, y), G(x, y), L, and f(x, y) are as in (a).
  • (c) The contribution of a virtual ray light to a shading point, where y = b + si is a point on the virtual ray light, v is the length of the virtual ray light, i its direction, b its start point, and x denotes the shading point.
  • (d) The contribution of a virtual ray light to an eye-ray segment, where x = a + td is a point on the eye-ray segment of length u, y = b + si a point on the virtual ray light of length v, and L(y) denotes the intensity of the light at y.
  • Step (1) preferably also includes determining the sampling information of each visual sampler. For a shading point, the sampling information includes its position, material, and corresponding pixel mark; for an eye-ray segment, it includes the segment's position, medium, and corresponding pixel mark. Here, a shading point's position comprises its position and normal vector, and an eye-ray segment's position comprises its start position, direction, and length.
  • Step (1) preferably further includes clustering the shading points and the eye-ray segments separately according to the positions of the visual samplers. Correspondingly, step (2) constructs a separate light transfer matrix for each class of visual sampler, and step (3) performs column-wise sparse matrix recovery on each light transfer matrix.
  • In the rendering method of the present invention, a light tree of the scene to be drawn is built using the many-light virtual light source model, the visual samplers are clustered, and each class of visual sampler is processed separately: according to the clustering result, a light transfer matrix is constructed for each class. This classification effectively reduces the rank of each light transfer matrix and thus the amount of computation (it lowers the sampling rate required for sparse sampling).
  • Clustering the visual samplers and building one matrix per class greatly reduces the rank of each light transfer matrix, which reduces computation and improves rendering efficiency. Preferably, the light transfer matrices of the different classes are recovered in parallel.
  • When sparse column sampling is performed on the column sampling set in step (3-1), the number of sampled columns is 10 to 100.
  • In step (3-2), 10% to 20% of the elements of each sampled column are selected as reference elements.
  • In step (3-3), the error ε of the current node is calculated from all of the recovered sampled columns (the formula is given in the Description below).
  • The error threshold preset in step (3-3) is 0.0001 to 0.01.
  • The visual samplers in the present invention include the camera's shading points on the geometric mesh surfaces of the scene to be drawn and eye-ray segments in the media of the scene, and the virtual light sources include virtual point lights and virtual ray lights; this enables the rendering method to draw scenes containing participating media.
  • For each light transfer matrix, some columns are selected adaptively, some elements of the selected columns are sampled at random and their values computed, the matrix is then recovered sparsely from these elements, and finally the illumination value (i.e., contribution) received by each visual sampler is computed.
  • When rendering a scene with this embodiment, the following inputs are required: the geometric mesh of the target scene (i.e., the scene to be drawn, consisting of triangle patches) and the normal vector of each mesh face; the media information in the scene (a spatial grid containing media information); a series of virtual light sources, including each light source's material information, position information (position and direction within the target scene), and energy information; and the spatial information of the camera (its position and orientation).
  • The visual samplers include the camera's shading points on the geometric mesh surfaces of the scene and eye-ray segments in the scene's media; the virtual light sources include virtual point lights and virtual ray lights.
  • The SBVH method (spatial splits in bounding volume hierarchies) is used to build the spatial acceleration structure of the geometric mesh; with this structure and the camera's position information, ray tracing determines the camera's visual samplers in the mesh and each sampler's sampling information.
  • The sampling information includes the sampler's position (the geometric position of a shading point, or the midpoint of an eye-ray segment), direction (the normal vector of a shading point, or the average medium-particle direction of an eye-ray segment), material, and corresponding pixel mark. For a shading point this means its position (including the normal vector), material, and pixel mark; for an eye-ray segment, its position (start position, direction, and length), medium, and pixel mark.
  • The point lights and ray lights are determined from the illumination information of the scene to be drawn. The light source information of each virtual point light includes position, material, and energy information; the light source information of each virtual ray light includes position, direction, medium, and energy information.
  • The visual samplers are classified by clustering, as follows: the shading points and the eye-ray segments are clustered separately according to the samplers' positions, using K-means.
  • The distance function for each iteration of the clustering jointly considers position and normal (its reconstructed form is given in the Description below).
  • Each cluster has a size of 512 to 1024.
  • The constant α takes a value of 0.5 to 1 and controls the relative importance of distance and angle during each iteration of the clustering.
  • The clustering of the visual samplers strongly affects the final rendering result, so a more conservative but higher-quality traditional clustering method is adopted; larger clusters increase the stability and accuracy of the matrix recovery.
  • (3-1) taking the columns of the light transfer matrix that correspond to the leaf nodes of the current node of the light tree as the column sampling set, and performing sparse column sampling on this set to obtain sampled columns;
  • The corresponding light tree is built with the method disclosed in "A Matrix Sampling-and-Recovery Approach for Many-Lights Rendering" (Huo, Wang, Jin, Liu, & Bao, 2015).
  • The height of the light tree is 32 to 64.
  • Separate light trees are built for the virtual point lights and the virtual ray lights: a VPL light tree is built from the position, material, and energy information of all virtual point lights, and a VRL light tree is built from the position, direction, medium, and energy information of all virtual ray lights.
  • Light transfer matrices can then be constructed separately for the VPL light tree and the VRL light tree; this embodiment does not construct a single light transfer matrix covering both the virtual point lights and the virtual ray lights.
  • Step (3) is performed separately on each constructed light transfer matrix, completing the corresponding column-wise sparse matrix recovery.
  • To accelerate the computation of a light transfer matrix's contribution, a subset of the matrix's columns is sparsely and randomly sampled, and the Monte Carlo method is used to estimate the contribution of the whole sub light transfer matrix; a cut through the light tree is then selected to approximate the contribution of the entire tree.
  • Here l_k denotes a column of the sub light transfer matrix, pdf(l_k) the probability of sampling that column, and K the total number of sampled columns; E is the vector obtained by summing the columns of a tree node, and each of its rows is the contribution the corresponding visual sampler receives from that tree node. Accumulating these brightness values into the samplers' corresponding pixels produces the final picture.
  • Because approximating a tree node's contribution from a subset of columns introduces error, the error of each node is computed and a suitable cut is found dynamically for each sub light transfer matrix.
  • The overall flow of the algorithm is as follows. For each light transfer matrix, the root node of the light tree is first pushed into a priority queue. The program then repeatedly pops the node with the largest error from the queue and splits it into its two child nodes (the left and right children); if a newly generated child's error is below a given error upper-bound parameter (i.e., the error threshold), the child's contribution is accumulated into the picture's pixels, otherwise the child is pushed into the priority queue for further subdivision.
  • In step (3-3) of this embodiment, the error ε of the current node is calculated from all of the recovered sampled columns, as given in the Description below.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Image Generation (AREA)

Abstract

Provided is a global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery, comprising: determining the visual samplers, the virtual light sources, and the light source information of each virtual light source for the scene to be drawn, the visual samplers including the camera's shading points on the geometric mesh surfaces of the scene and eye-ray segments in the scene's media, and the virtual light sources including virtual point lights and virtual ray lights; constructing the light transfer matrix of the scene; building the corresponding light trees from the light source information and performing column-wise sparse matrix recovery of the light transfer matrix according to the light trees; and, for each pixel, computing the weighted sum of the illumination values of that pixel's visual samplers and taking the weighted sum as the pixel's brightness value.

Description

Global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery

TECHNICAL FIELD

The present invention relates to the field of image technology, and in particular to a global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery.

BACKGROUND

Global illumination is a very important research field in computer graphics. By simulating illumination in nature, it captures lighting effects such as soft shadows and indirect illumination produced by the multiple bounces, refraction, and reflection of light in real environments; these effects greatly enhance the realism of rendered images. The technique is widely used in the rendering of films, animation, and 3D models. Global illumination can be implemented in several ways, for example radiosity, ray tracing, ambient occlusion, and photon mapping.

Many-light methods are one important family of global illumination techniques. They generate a large number of virtual light sources in the scene, including virtual point lights (Virtual Point Light, VPL) and virtual ray lights (Virtual Ray Light, VRL), and obtain the global illumination effect by computing, for each visual sampler, the degree to which it is illuminated by these virtual light sources; a visual sampler is either a shading point or an eye ray. By reducing the complex multi-bounce transport problem to the problem of samplers being directly lit by virtual light sources, many-light methods provide a unified mathematical framework for global illumination with great flexibility, allowing the complexity of the algorithm to be tuned to actual needs.

To further improve rendering speed and interactivity, Wald et al. proposed the light cuts method within the many-light framework: the virtual point lights are organized into a hierarchy, and a cut through the hierarchy tree is used to represent all virtual point lights, reducing the amount of computation and speeding up rendering.

Novák et al. invented the virtual ray light (Virtual Ray Light, VRL) method, which uses virtual ray lights to model the light-energy contribution of participating media (such as fog), extending the range of applications of many-light techniques.

In recent years, with continuous refinement of light cuts by different researchers, the many-light framework has become one of the most efficient approaches to global illumination. However, light cuts still requires a large amount of computation: on average, the contributions of hundreds to thousands of virtual light sources must be computed for each visual sample, which severely limits rendering speed and interactivity. Efficiency therefore remains the main bottleneck restricting its application and development.

To address this problem, Chinese patent applications CN103971397A and CN105335995A disclose a many-light rendering method based on the light cuts technique that uses a global sparse matrix recovery method to accelerate many-light rendering and improve efficiency. However, constrained by the light cuts technique, that rendering method is applicable only to scenes without participating media.

SUMMARY OF THE INVENTION

Addressing the deficiencies of the prior art, the present invention provides a global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery. It uses an adaptive matrix recovery technique and can be combined with the virtual ray light (Virtual Ray Light, VRL) method to render scenes containing participating media, offering higher generality and fast rendering.
A global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery includes the following steps:

(1) Determine the visual samplers, the virtual light sources, and the light source information of each virtual light source for the scene to be drawn. The visual samplers include the camera's shading points on the geometric mesh surfaces of the scene and eye-ray segments in the scene's media; the virtual light sources include virtual point lights and virtual ray lights.

(2) Construct the light transfer matrix of the scene to be drawn, whose rows correspond one-to-one to the visual samplers and whose columns correspond one-to-one to the virtual light sources.

(3) Build the corresponding light tree from the light source information of the virtual light sources, take the root node of the light tree as the current node, and perform column-wise sparse matrix recovery of the light transfer matrix through the following steps:

(3-1) Take the columns of the light transfer matrix that correspond to the leaf nodes contained in the current node of the light tree as the column sampling set, and perform sparse column sampling on this set to obtain sampled columns;

(3-2) For each sampled column, select several elements as reference elements, compute the value of each reference element, and then use these values to sparsely recover the column;

(3-3) Compute the error of the current node from all of the recovered sampled columns. If the error is below the preset error threshold, compute the illumination value contributed by the current node's virtual light sources to each visual sampler; otherwise, return to steps (3-1) to (3-3) for the left and right child nodes of the current node.

(4) Using each visual sampler's corresponding pixel mark, determine the visual samplers corresponding to each pixel, compute the weighted sum of the illumination values of that pixel's visual samplers, and take the weighted sum as the pixel's brightness value.

When looping over steps (3-1) to (3-3), a leaf node may be reached; in that case the loop terminates at the leaf node regardless of the error.
The global illumination rendering method of the present invention first requires the following inputs:

The geometric mesh of the target scene (i.e., the scene to be drawn, consisting of triangle patches) and the normal vector of each mesh face; the media information in the scene (i.e., a spatial grid containing media information); a series of virtual light sources, including each light source's material information, position information (position and direction within the target scene), and energy information; and the spatial information of the camera (its position and orientation).

In the present invention, the SBVH method (spatial splits in bounding volume hierarchies, SBVH) is used to build the spatial acceleration structure of the geometric mesh; using this acceleration structure and the camera's position information, ray tracing determines the camera's visual samplers in the geometric mesh and the sampling information of each visual sampler. All elements of the light transfer matrix constructed in step (2) are unknown; equivalently, every element of the constructed matrix is initially empty. Each column corresponds to one virtual light source, which may be a virtual point light that lands on a geometric surface or a virtual ray light that passes through a medium (such as smoke). The elements of the light transfer matrix fall into four categories, as follows:
(a) The contribution of a virtual point light to a shading point:

$$V(x,y)\,G(x,y)\,f(x,y)\,L \qquad (1)$$

where x denotes the shading point, y the virtual point light, V(x, y) the generalized visibility term between the two points, G(x, y) the geometry term between the two points, L the intensity of the virtual point light y, and f(x, y) the material term of the shading point x toward (i.e., facing) the virtual point light y.

(b) The contribution of a virtual point light to an eye-ray segment:

$$\int_0^u V(x,y)\,G(x,y)\,f(x,y)\,L\;\mathrm{d}t,\qquad x=a+t\,d \qquad (2)$$

where x = a + td denotes a point on the eye-ray segment, u the length of the segment, d its direction, a its start position, y the virtual point light, V(x, y) the generalized visibility term between the two points, G(x, y) the geometry term between the two points, L the intensity of the light y, and f(x, y) the material term of x toward y;

(c) The contribution of a virtual ray light to a shading point:

$$\int_0^v V(x,y)\,G(x,y)\,f(x,y)\,L\;\mathrm{d}s,\qquad y=b+s\,i \qquad (3)$$

where y = b + si denotes a point on the virtual ray light, v the length of the virtual ray light, i its direction, b its start point, x the shading point, V(x, y) the generalized visibility term between the two points, G(x, y) the geometry term between the two points, L the intensity of the light y, and f(x, y) the material term of x toward y.

(d) The contribution of a virtual ray light to an eye-ray segment:

$$\int_0^u\!\!\int_0^v V(x,y)\,G(x,y)\,f(x,y)\,L(y)\;\mathrm{d}s\,\mathrm{d}t,\qquad x=a+t\,d,\;\; y=b+s\,i \qquad (4)$$

where x = a + td denotes a point on the eye-ray segment, u the length of the segment, d its direction, a its start point, y = b + si a point on the virtual ray light, v the length of the virtual ray light, i its direction, b its start point, V(x, y) the generalized visibility term between the two points, G(x, y) the geometry term between the two points, L(y) the intensity of the light at y, and f(x, y) the material term of x toward y.
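For illustration only, a short Python sketch of how these four element types could be evaluated numerically is given below. The visibility term V, geometry term G, and material term f are stand-in callables (assumptions, not taken from the patent), and the integrals in (2)-(4) are approximated by simple midpoint quadrature; a production renderer would instead ray-trace V and importance-sample the integrals.

```python
import numpy as np

# Stand-in terms (assumptions for illustration): in a real renderer V would be
# a ray-traced visibility query, G the usual geometry term, f the BRDF/phase term.
def V(x, y):  return 1.0                                        # visibility in [0, 1]
def G(x, y):  return 1.0 / max(np.sum((x - y) ** 2), 1e-6)      # inverse-square falloff
def f(x, y):  return 1.0 / np.pi                                # Lambertian-like material term

def vpl_to_point(x, y, L):
    """(a) virtual point light y -> shading point x: V * G * f * L."""
    return V(x, y) * G(x, y) * f(x, y) * L

def vpl_to_ray(a, d, u, y, L, n=16):
    """(b) virtual point light y -> eye-ray segment x(t) = a + t*d, t in [0, u]."""
    ts = (np.arange(n) + 0.5) / n * u                           # midpoint rule
    return sum(vpl_to_point(a + t * d, y, L) for t in ts) * (u / n)

def vrl_to_point(x, b, i, v, L, n=16):
    """(c) virtual ray light y(s) = b + s*i, s in [0, v] -> shading point x."""
    ss = (np.arange(n) + 0.5) / n * v
    return sum(vpl_to_point(x, b + s * i, L(b + s * i)) for s in ss) * (v / n)

def vrl_to_ray(a, d, u, b, i, v, L, n=8):
    """(d) virtual ray light -> eye-ray segment: double integral over t and s."""
    ts = (np.arange(n) + 0.5) / n * u
    return sum(vrl_to_point(a + t * d, b, i, v, L, n) for t in ts) * (u / n)
```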
Preferably, step (1) further includes determining the sampling information of each visual sampler:

For a shading point, the sampling information includes its position, material, and corresponding pixel mark; for an eye-ray segment, the sampling information includes the segment's position, medium, and corresponding pixel mark. Here, a shading point's position comprises its position and normal vector, and an eye-ray segment's position comprises its start position, direction, and length.

Step (1) also includes clustering the shading points and the eye-ray segments separately according to the positions of the visual samplers. Correspondingly, step (2) constructs a separate light transfer matrix for each class of visual sampler, and step (3) performs column-wise sparse matrix recovery on each light transfer matrix.
In the global illumination rendering method of the present invention, a light tree of the geometric scene to be drawn is built using the many-light virtual light source model, the visual samplers are clustered, and each class of visual sampler is processed separately: according to the clustering result, a light transfer matrix is constructed for each class. This classification effectively reduces the rank of each light transfer matrix and thus the amount of computation (it lowers the sampling rate required for sparse sampling).

Clustering the visual samplers and constructing one light transfer matrix per class greatly reduces the rank of each matrix, which reduces computation and improves rendering efficiency. Preferably, the light transfer matrices corresponding to the different classes are recovered in parallel.
When sparse column sampling is performed on the column sampling set in step (3-1), the number of sampled columns is 10 to 100.

In step (3-2), 10% to 20% of the elements of each sampled column are selected as reference elements.
In step (3-3), the error ε of the current node is calculated from all of the recovered sampled columns:

$$\varepsilon=\frac{1}{\|E\|}\sqrt{\frac{1}{K(K-1)}\sum_{k=1}^{K}\left\|\frac{l_k}{\mathrm{pdf}(l_k)}-E\right\|^{2}}$$
where l_k is a sampled column, pdf(l_k) is the probability of choosing l_k as a sampled column during sparse column sampling of the light transfer matrix, K is the total number of sampled columns, and E is the contribution value of the current node, computed as:

$$E=\frac{1}{K}\sum_{k=1}^{K}\frac{l_k}{\mathrm{pdf}(l_k)}$$
The error threshold preset in step (3-3) is 0.0001 to 0.01.
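To make the preferred error computation concrete, the sketch below evaluates E and ε for a node from its recovered sample columns. Since the published equations are image placeholders, the relative standard-error form used for ε here is an assumption chosen to be consistent with the Monte Carlo estimator E.

```python
import numpy as np

def node_contribution_and_error(columns, pdfs):
    """Monte Carlo estimate of a light-tree node's contribution vector E and
    its relative error eps, from K recovered sample columns l_k drawn with
    probabilities pdf(l_k).  The error form is an assumption: the standard
    error of the estimator, normalized by ||E||."""
    K = len(columns)
    samples = np.stack([c / p for c, p in zip(columns, pdfs)])  # l_k / pdf(l_k)
    E = samples.mean(axis=0)                                    # contribution estimate
    if K > 1:
        se = np.sqrt(np.sum((samples - E) ** 2) / (K * (K - 1)))
        eps = se / (np.linalg.norm(E) + 1e-12)
    else:
        eps = float("inf")                                      # spread not estimable from one column
    return E, eps
```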
The visual samplers of the present invention include the camera's shading points on the geometric mesh surfaces of the scene to be drawn and eye-ray segments in the media of the scene, and the virtual light sources include virtual point lights and virtual ray lights; this enables the rendering method to draw scenes containing participating media. For each light transfer matrix, some columns are selected adaptively, some elements of the selected columns are sampled at random and their values computed, the matrix is then sparsely recovered from these elements, and the brightness value (i.e., contribution) of each visual sampler is computed.
DETAILED DESCRIPTION OF EMBODIMENTS

The present invention is described in detail below with reference to a specific embodiment.

When rendering a scene with the global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery of this embodiment, the following inputs are first required:

The geometric mesh of the target scene (i.e., the scene to be drawn, consisting of triangle patches) and the normal vector of each mesh face; the media information in the scene (i.e., a spatial grid containing media information); a series of virtual light sources, including each light source's material information, position information (position and direction within the target scene), and energy information; and the spatial information of the camera (its position and orientation).

Rendering then proceeds through the following steps:

(1) Determine the visual samplers, the virtual light sources, and the light source information of each virtual light source for the scene to be drawn. The visual samplers include the camera's shading points on the geometric mesh surfaces of the scene and eye-ray segments in the scene's media; the virtual light sources include virtual point lights and virtual ray lights.

The SBVH method (spatial splits in bounding volume hierarchies, SBVH) is used to build the spatial acceleration structure of the geometric mesh; using this acceleration structure and the camera's position information, ray tracing determines the camera's visual samplers in the geometric mesh and the sampling information of each visual sampler. The sampling information includes the sampler's position (the geometric position of a shading point, or the midpoint of an eye-ray segment), direction (the normal vector of a shading point, or the average medium-particle direction of an eye-ray segment), material, and corresponding pixel mark. For a shading point, the sampling information includes its position, material, and pixel mark; for an eye-ray segment, the segment's position, medium, and pixel mark. Here a shading point's position comprises its position and normal vector, and an eye-ray segment's position comprises its start position, direction, and length.
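A minimal sketch of generating the two kinds of visual samplers might look as follows; the `trace` callable stands in for the SBVH-accelerated ray tracer, and its return convention (hit distance, surface normal, material, medium interval) is an assumption made for illustration.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ShadingPoint:              # sampler on a mesh surface
    position: np.ndarray
    normal: np.ndarray
    material: int
    pixel: tuple                 # corresponding pixel mark

@dataclass
class EyeRaySegment:             # sampler inside a participating medium
    origin: np.ndarray           # start position a
    direction: np.ndarray        # direction d
    length: float                # length u
    medium: int
    pixel: tuple

def make_samplers(camera_rays, trace):
    """camera_rays yields (pixel, (origin, direction)); trace(o, d) is a
    stand-in returning (hit_distance, normal, material, medium_interval)."""
    points, segments = [], []
    for pixel, (o, d) in camera_rays:
        t_hit, n, mat, medium_interval = trace(o, d)
        points.append(ShadingPoint(o + t_hit * d, n, mat, pixel))
        if medium_interval is not None:              # the ray crosses a medium
            t0, t1, med = medium_interval
            segments.append(EyeRaySegment(o + t0 * d, d, t1 - t0, med, pixel))
    return points, segments
```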
The point lights and ray lights are determined from the illumination information of the scene to be drawn. The light source information of each virtual point light includes position, material, and energy information; the light source information of each virtual ray light includes position, direction, medium, and energy information.

A light tree is built from the position, material, and energy information of the virtual point lights, and another from the position, direction, medium, and energy information of the virtual ray lights.
(2) Construct the light transfer matrix of the scene to be drawn, whose rows correspond one-to-one to the visual samplers and whose columns correspond one-to-one to the virtual light sources.

To improve rendering speed, this embodiment first classifies all visual samplers and then, according to the classification result, constructs a corresponding light transfer matrix for each class.

This embodiment classifies the visual samplers by clustering, as follows:
The shading points and the eye-ray segments are clustered separately according to the samplers' positions; this embodiment uses K-means. The distance function for each iteration of the clustering jointly penalizes positional and angular deviation:

$$D(i,k)=\left\|x_i-x_k\right\|+\alpha\left(1-\vec{n}_i\cdot\vec{n}_k\right)$$

where α is a constant, x_k is the position mean of the k-th class of visual samplers, n̄_k is the direction mean of the k-th class, k = 1, 2, …, K, with K the total number of classes obtained in an iteration; x_i is the position of the visual sample currently being clustered, n̄_i is its normal vector, i = 1, 2, …, I, with I the total number of visual samplers in the k-th class.
Each cluster has a size of 512 to 1024.

The constant α takes a value of 0.5 to 1 and controls the relative importance of distance and angle during each iteration of the clustering.

The total number of classes obtained in each iteration is determined by the clustering direction. For top-down clustering, the first iteration yields K = 2 classes and the second K = 4; continuing recursively, the l-th iteration yields K = 2^l classes, l = 1, 2, …, L, where L is the total number of iterations, determined by the actual situation.

The clustering of the visual samplers strongly affects the final rendering result, so a more conservative but higher-quality traditional clustering method is adopted. The distance function used in clustering considers both position and normal, and larger clusters increase the stability and accuracy of the matrix recovery. A minimal clustering sketch follows.
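The sketch below implements the clustering just described. The combined distance D = ||x_i − x_k|| + α(1 − n_i · n_k) mirrors the reconstructed distance function above (itself an assumption), and plain Lloyd-style K-means with a fixed class count is used instead of the top-down splitting variant.

```python
import numpy as np

def kmeans_samplers(P, N, k, alpha=0.75, iters=20, seed=0):
    """K-means over visual samplers with the combined distance
    D = ||x_i - x_k|| + alpha * (1 - n_i . n_k), alpha in [0.5, 1].
    P: (I, 3) positions; N: (I, 3) unit normals / directions."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(len(P), size=k, replace=False)
    cp, cn = P[centers], N[centers]                   # class position/direction means
    for _ in range(iters):
        dist = (np.linalg.norm(P[:, None] - cp[None], axis=2)
                + alpha * (1.0 - N @ cn.T))           # (I, k) combined distances
        label = dist.argmin(axis=1)
        for j in range(k):                            # update the class means
            if np.any(label == j):
                cp[j] = P[label == j].mean(axis=0)
                nj = N[label == j].mean(axis=0)
                cn[j] = nj / (np.linalg.norm(nj) + 1e-12)
    return label
```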
(3) Build the corresponding light trees from the light source information of the virtual light sources, take the root node of a light tree as the current node, and perform column-wise sparse matrix recovery of the light transfer matrix through the following steps:

(3-1) Take the columns of the light transfer matrix that correspond to the leaf nodes of the current node of the light tree as the column sampling set, and perform sparse column sampling on this set to obtain sampled columns;

(3-2) For each sampled column, select several elements as reference elements, compute the value of each reference element, and then use these values to sparsely recover the column;

(3-3) Compute the error of the current node from all of the recovered sampled columns. If the error is below the preset error threshold, compute the illumination value contributed by the current node's virtual light sources to each visual sampler; otherwise, return to steps (3-1) to (3-3) for the left and right child nodes of the current node.
Based on each virtual light source's position information (the geometric position of a virtual point light, or the midpoint of a virtual ray light), material information, and energy information, the corresponding light tree is built using the method disclosed in "A Matrix Sampling-and-Recovery Approach for Many-Lights Rendering" (Huo, Wang, Jin, Liu, & Bao, 2015). In this embodiment the height of the light tree is 32 to 64.

In this embodiment, separate light trees are built for the virtual point lights and the virtual ray lights: a VPL light tree is built from the position, material, and energy information of all virtual point lights, and a VRL light tree from the position, direction, medium, and energy information of all virtual ray lights. Correspondingly, in step (2) light transfer matrices can be constructed separately for the VPL light tree and the VRL light tree; this embodiment does not construct a single light transfer matrix covering both the virtual point lights and the virtual ray lights.

Because sparse recovery is used, the number of matrix elements that must actually be computed is greatly reduced, so a deeper and more precise light tree structure can be built to support millions of virtual light sources for high-quality scene rendering.
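As a simplified stand-in for the light-tree construction of Huo et al. 2015 (which also accounts for material and energy information), the following sketch builds a binary light tree by median split over representative light positions; the dictionary node layout is an assumption that the later sketches reuse.

```python
import numpy as np

def build_light_tree(lights, depth=0, max_leaf=1, max_depth=64):
    """Minimal binary light tree: median split along the widest axis of the
    lights' representative positions.  `lights` is an (n, 3+) array whose
    first three columns are positions (for a VRL, e.g., its midpoint)."""
    node = {"lights": lights, "left": None, "right": None}
    if len(lights) <= max_leaf or depth >= max_depth:
        return node                                   # leaf node
    pos = lights[:, :3]
    axis = np.argmax(pos.max(axis=0) - pos.min(axis=0))
    order = np.argsort(pos[:, axis])                  # split at the median light
    half = len(lights) // 2
    node["left"] = build_light_tree(lights[order[:half]], depth + 1, max_leaf, max_depth)
    node["right"] = build_light_tree(lights[order[half:]], depth + 1, max_leaf, max_depth)
    return node
```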
Step (3) is performed separately on each constructed light transfer matrix, completing the corresponding column-wise sparse matrix recovery.

To accelerate the computation of a light transfer matrix's contribution, a subset of the matrix's columns is sparsely and randomly sampled, and the Monte Carlo method is used to estimate the contribution of the whole sub light transfer matrix; a cut through the light tree is then selected to approximate the contribution of the entire tree.
For any node of the light tree, its contribution is computed as follows:

$$E=\frac{1}{K}\sum_{k=1}^{K}\frac{l_k}{\mathrm{pdf}(l_k)}$$

where l_k denotes a column of the sub light transfer matrix, pdf(l_k) the probability of sampling that column, and K the total number of sampled columns. E is the vector obtained by summing the columns of a tree node; each of its rows is the contribution the corresponding visual sampler receives from this tree node, and accumulating these brightness values into the samplers' corresponding pixels produces the final picture.
To compute the value of a column l_k, a fixed proportion of the rows of that column is sparsely and randomly sampled (10% in this embodiment), and the method of "On the Power of Adaptivity in Matrix Completion and Approximation" (Krishnamurthy & Singh, 2014) is then used to recover the entire column from these sparse elements, thereby reducing the number of samples required. A least-squares sketch of this recovery is shown below.
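The sketch assumes, as in subspace-based completion methods, that an orthonormal basis for the column space has already been estimated from a few fully evaluated columns of the same sub light transfer matrix; it is a simplification of the Krishnamurthy & Singh procedure, not a reproduction of it.

```python
import numpy as np

def recover_column(basis, rows, values):
    """Recover a full matrix column from a ~10% subset of its entries, assuming
    the column lies (approximately) in the span of `basis`.
    basis: (m, r) orthonormal column-space estimate; rows: sampled row indices;
    values: the computed entries of the column at those rows."""
    coeffs, *_ = np.linalg.lstsq(basis[rows], values, rcond=None)
    return basis @ coeffs                             # full-length recovered column
```

In the adaptive setting, the basis would be enlarged whenever a new column's sampled entries are poorly explained by the current subspace.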
Because approximating a tree node's contribution from a subset of columns introduces error, the error of each node is computed and a suitable cut is found dynamically for each sub light transfer matrix. The overall flow of the algorithm is as follows: for each light transfer matrix, the root node of the light tree is first pushed into a priority queue. The program then repeatedly pops the node with the largest error from the queue and splits it into its two child nodes (the left and right children); if a newly generated child's error is below a user-given error upper-bound parameter (i.e., the error threshold), the child's contribution is accumulated into the picture's pixels, otherwise the child is pushed into the priority queue for further subdivision.
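Finally, a sketch of the priority-queue refinement loop described above, reusing the node layout from the light-tree sketch. `evaluate(node)` is assumed to carry out steps (3-1) to (3-3) for the node — sampling and recovering columns, then returning the contribution vector E and error ε — and `accumulate(E)` stands in for adding contributions to the picture's pixels.

```python
import heapq

def adaptive_cut(root, evaluate, accumulate, eps_max=0.001):
    """Refine a cut through the light tree for one light transfer matrix:
    pop the node with the largest error, split it, accept children whose
    error is below eps_max, and push the rest for further subdivision."""
    heap = [(0.0, 0, root)]                           # seed the queue with the root
    uid = 1                                           # tie-breaker for heap ordering
    while heap:
        _, _, node = heapq.heappop(heap)              # node with the largest error
        if node["left"] is None:                      # leaf: accept regardless of error
            E, _ = evaluate(node)
            accumulate(E)
            continue
        for child in (node["left"], node["right"]):   # split into the two children
            E, eps = evaluate(child)
            is_leaf = child["left"] is None
            if eps < eps_max or is_leaf:              # below threshold (or leaf reached):
                accumulate(E)                         # add contribution to the pixels
            else:
                heapq.heappush(heap, (-eps, uid, child))
                uid += 1
```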
In step (3-3) of this embodiment, the error ε of the current node is calculated from all of the recovered sampled columns:

$$\varepsilon=\frac{1}{\|E\|}\sqrt{\frac{1}{K(K-1)}\sum_{k=1}^{K}\left\|\frac{l_k}{\mathrm{pdf}(l_k)}-E\right\|^{2}}$$

where l_k is a sampled column, pdf(l_k) is the probability of choosing l_k as a sampled column during sparse column sampling of the light transfer matrix, K is the total number of sampled columns, and E is the contribution value of the current node, computed as:

$$E=\frac{1}{K}\sum_{k=1}^{K}\frac{l_k}{\mathrm{pdf}(l_k)}$$
(4) Using each visual sampler's corresponding pixel mark, determine the visual samplers corresponding to each pixel of the geometric mesh, compute the weighted sum of the illumination values of that pixel's visual samplers, and take the weighted sum as the pixel's brightness value.

The specific embodiment described above explains the technical solution and benefits of the present invention in detail. It should be understood that the above is only the most preferred embodiment of the present invention and is not intended to limit the invention; any modification, supplement, or equivalent substitution made within the principles of the present invention shall fall within its protection scope.

Claims (7)

  1. A global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery, characterized by comprising the following steps:
    (1) determining the visual samplers, the virtual light sources, and the light source information of each virtual light source for the scene to be drawn; the visual samplers include the camera's shading points on the geometric mesh surfaces of the scene to be drawn and eye-ray segments in the media of the scene to be drawn, and the virtual light sources include virtual point lights and virtual ray lights;
    (2) constructing the light transfer matrix of the scene to be drawn, the rows of which correspond one-to-one to the visual samplers and the columns one-to-one to the virtual light sources;
    (3) building the corresponding light tree from the light source information of the virtual light sources, taking the root node of the light tree as the current node, and performing column-wise sparse matrix recovery of the light transfer matrix through the following steps:
    (3-1) taking the columns of the light transfer matrix that correspond to the leaf nodes contained in the current node of the light tree as the column sampling set, and performing sparse column sampling on this set to obtain sampled columns;
    (3-2) for each sampled column, selecting several elements as reference elements, computing the value of each reference element, and then using the reference element values to sparsely recover the column;
    (3-3) computing the error of the current node from all of the recovered sampled columns; if the error is below the preset error threshold, computing the illumination value contributed by the current node's virtual light sources to each visual sampler;
    otherwise, returning to steps (3-1) to (3-3) for the left and right child nodes of the current node;
    (4) using each visual sampler's corresponding pixel mark, determining the visual samplers corresponding to each pixel, computing the weighted sum of the illumination values of that pixel's visual samplers, and taking the weighted sum as the pixel's brightness value.
  2. The global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery according to claim 1, characterized in that step (1) further comprises determining the sampling information of each visual sampler;
    for a shading point, the sampling information includes the shading point's position, material, and corresponding pixel mark; for an eye-ray segment, the sampling information includes the segment's position, medium, and corresponding pixel mark, wherein a shading point's position comprises its position and normal vector, and an eye-ray segment's position comprises its start position, direction, and length.
  3. The global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery according to claim 2, characterized in that step (1) further comprises clustering the shading points and the eye-ray segments separately according to the positions of the visual samplers; correspondingly, in step (2) a separate light transfer matrix is constructed for each class of visual sampler, and in step (3) column-wise sparse matrix recovery is performed on each light transfer matrix.
  4. The global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery according to claim 3, characterized in that, when sparse column sampling is performed on the column sampling set in step (3-1), the number of sampled columns is 10 to 100.
  5. The global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery according to claim 4, characterized in that in step (3-2) 10% to 20% of the elements of each sampled column are selected as reference elements.
  6. The global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery according to any one of claims 1 to 5, characterized in that in step (3-3) the error ε of the current node is calculated from all of the recovered sampled columns:
    $$\varepsilon=\frac{1}{\|E\|}\sqrt{\frac{1}{K(K-1)}\sum_{k=1}^{K}\left\|\frac{l_k}{\mathrm{pdf}(l_k)}-E\right\|^{2}}$$
    where l_k is a sampled column, pdf(l_k) is the probability of choosing l_k as a sampled column during sparse column sampling of the light transfer matrix, K is the total number of sampled columns, and E is the contribution value of the current node, computed as:
    $$E=\frac{1}{K}\sum_{k=1}^{K}\frac{l_k}{\mathrm{pdf}(l_k)}$$
  7. The global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery according to claim 6, characterized in that the error threshold preset in step (3-3) is 0.0001 to 0.01.
PCT/CN2017/077634 2016-03-29 2017-03-22 Global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery WO2017167084A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610188547.8 2016-03-29
CN201610188547.8A CN105825545B (zh) 2016-03-29 Global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery

Publications (1)

Publication Number Publication Date
WO2017167084A1 (zh)

Family

ID=56525350

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/077634 WO2017167084A1 (zh) 2016-03-29 2017-03-22 Global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery

Country Status (2)

Country Link
CN (1) CN105825545B (zh)
WO (1) WO2017167084A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109448098A (zh) * 2018-09-29 2019-03-08 北京航空航天大学 Method for reconstructing virtual scene light sources from a single night-scene image of a building
CN109493413A (zh) * 2018-11-05 2019-03-19 长春理工大学 Method for rendering global illumination effects in 3D scenes based on adaptive virtual point light sampling
CN111145341A (zh) * 2019-12-27 2020-05-12 陕西职业技术学院 Single-light-source illumination-consistency rendering method for virtual-real fusion
CN111583371A (zh) * 2020-04-30 2020-08-25 山东大学 Neural-network-based multiple-scattering rendering method and system for participating media
CN115082611A (zh) * 2022-08-18 2022-09-20 腾讯科技(深圳)有限公司 Illumination rendering method, apparatus, device, and medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825545B (zh) 2016-03-29 2018-06-19 浙江大学 Global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery
US10395624B2 (en) * 2017-11-21 2019-08-27 Nvidia Corporation Adjusting an angular sampling rate during rendering utilizing gaze information
CN109509246B (zh) * 2018-03-25 2022-08-02 哈尔滨工程大学 Photon map clustering method based on adaptive eye-ray partitioning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090295805A1 (en) * 2008-06-02 2009-12-03 Samsung Electronics Co., Ltd. Hierarchical based 3D image processor, method, and medium
CN103971397A (zh) * 2014-04-16 2014-08-06 浙江大学 Global illumination rendering method based on virtual point lights and sparse matrix recovery
CN105825545A (zh) * 2016-03-29 2016-08-03 浙江大学 Global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101458823B (zh) * 2008-12-19 2011-08-31 北京航空航天大学 Method for real-time illumination rendering in a virtual stage environment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090295805A1 (en) * 2008-06-02 2009-12-03 Samsung Electronics Co., Ltd. Hierarchical based 3D image processor, method, and medium
CN103971397A (zh) * 2014-04-16 2014-08-06 浙江大学 Global illumination rendering method based on virtual point lights and sparse matrix recovery
CN105825545A (zh) * 2016-03-29 2016-08-03 浙江大学 Global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109448098A (zh) * 2018-09-29 2019-03-08 北京航空航天大学 Method for reconstructing virtual scene light sources from a single night-scene image of a building
CN109448098B (zh) * 2018-09-29 2023-01-24 北京航空航天大学 Method for reconstructing virtual scene light sources from a single night-scene image of a building
CN109493413A (zh) * 2018-11-05 2019-03-19 长春理工大学 Method for rendering global illumination effects in 3D scenes based on adaptive virtual point light sampling
CN109493413B (zh) * 2018-11-05 2022-10-21 长春理工大学 Method for rendering global illumination effects in 3D scenes based on adaptive virtual point light sampling
CN111145341A (zh) * 2019-12-27 2020-05-12 陕西职业技术学院 Single-light-source illumination-consistency rendering method for virtual-real fusion
CN111145341B (zh) * 2019-12-27 2023-04-28 陕西职业技术学院 Single-light-source illumination-consistency rendering method for virtual-real fusion
CN111583371A (zh) * 2020-04-30 2020-08-25 山东大学 Neural-network-based multiple-scattering rendering method and system for participating media
CN111583371B (zh) * 2020-04-30 2023-11-24 山东大学 Neural-network-based multiple-scattering rendering method and system for participating media
CN115082611A (zh) * 2022-08-18 2022-09-20 腾讯科技(深圳)有限公司 Illumination rendering method, apparatus, device, and medium
CN115082611B (zh) * 2022-08-18 2022-11-11 腾讯科技(深圳)有限公司 Illumination rendering method, apparatus, device, and medium

Also Published As

Publication number Publication date
CN105825545A (zh) 2016-08-03
CN105825545B (zh) 2018-06-19

Similar Documents

Publication Publication Date Title
WO2017167084A1 (zh) Global illumination rendering method based on virtual light sources and adaptive sparse matrix recovery
US11538216B2 (en) Dynamically estimating light-source-specific parameters for digital images using a neural network
CN107368845B (zh) Faster R-CNN object detection method based on optimized candidate regions
CN105976378B (zh) Graph-model-based salient object detection method
CN102096941B (zh) Illumination consistency method for virtual-real fusion environments
CN113052835B (zh) Medicine-box detection method and detection system based on fusion of 3D point clouds and image data
CN111695494A (zh) 3D point cloud data classification method based on multi-view convolutional pooling
CN109509248B (zh) Neural-network-based photon mapping rendering method and system
CN105488844B (zh) Method for displaying real-time shadows of massive models in 3D scenes
WO2008157453A2 (en) Interactive relighting with dynamic reflectance
CN110827295A (zh) 3D semantic segmentation method coupling voxel models with color information
CN109978036A (zh) Training method for an object detection deep learning model and object detection method
US11189060B2 (en) Generating procedural materials from digital images
CN111814626A (zh) Dynamic gesture recognition method and system based on a self-attention mechanism
CN113160392B (zh) Deep-neural-network-based 3D reconstruction method for optical building targets
CN113436308A (zh) Dynamic rendering method for 3D ambient air quality
CN108764250A (zh) Method for extracting intrinsic images using convolutional neural networks
CN103971397B (zh) Global illumination rendering method based on virtual point lights and sparse matrix recovery
US10169910B2 (en) Efficient rendering of heterogeneous polydisperse granular media
CN103729873A (zh) Content-aware environment light sampling method
CN114359269A (zh) Neural-network-based virtual food-box defect generation method and system
CN103679818A (zh) Real-time scene rendering method based on virtual area light sources
CN113989631A (zh) Convolutional-neural-network-based compression method for infrared image object detection networks
WO2024021363A1 (zh) Dynamic rendering method and device based on implicit light transport functions
Kim et al. Fast animation of lightning using an adaptive mesh

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17773114

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17773114

Country of ref document: EP

Kind code of ref document: A1