WO2017186019A1 - A Real-Time Rendering Method Based on an Energy-Error Budget - Google Patents

A Real-Time Rendering Method Based on an Energy-Error Budget

Info

Publication number
WO2017186019A1
Authority
WO
WIPO (PCT)
Prior art keywords
line
camera
error
energy consumption
subspace
Prior art date
Application number
PCT/CN2017/080859
Other languages
English (en)
French (fr)
Inventor
王锐
鲍虎军
胡天磊
于博文
Original Assignee
浙江大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 filed Critical 浙江大学
Priority to US16/097,529 priority Critical patent/US10733791B2/en
Publication of WO2017186019A1 publication Critical patent/WO2017186019A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/28Indexing scheme for image data processing or generation, in general involving image processing hardware

Definitions

  • The present invention relates to the field of image technology, and in particular to an optimal real-time rendering framework based on an energy or error budget.
  • Rendering is the process of converting a 3D geometric model into a graphical image.
  • Rendering an animated scene is very time-consuming, because an animation is typically made up of thousands of frames. As demands on visual quality rise, the resolution, and hence the pixel count, of each frame keeps growing, and a single image may take several hours to render.
  • The present invention proposes a real-time rendering method based on an energy-error budget, which finds the optimal rendering parameters under the given budget. It can significantly reduce the energy demand of the rendering process while preserving image quality, extending battery life, or reduce rendering error while meeting an energy requirement.
  • A real-time rendering method based on an energy-error budget comprises the following steps:
  • during the adaptive partitioning, after each split and for each resulting position subspace, the error and energy consumption of rendering the 3D scene with several preset groups of rendering parameters are measured in each view subspace of the camera at each vertex of the bounding volume enclosing the position subspace, and the Pareto line of the corresponding vertex and view subspace is built from these errors and energies;
  • the viewpoint information includes the position and direction of the camera, and the budget condition is an energy budget or an error budget.
  • The Pareto line of the present invention is an energy-error Pareto line: it corresponds to several groups of energy-error-optimal rendering parameters, meaning that the energy and error obtained with these parameters are Pareto-optimal in the space formed by the energies and errors obtained with the other rendering parameters.
  • Each group of rendering parameters in the present invention includes the value of at least one rendering variable that controls real-time rendering quality; when a group contains more than one variable, it may be represented by a vector.
  • The energy consumption of each group of rendering parameters is the energy required to render the scene with that group.
  • The camera view space that the user is allowed to browse at each vertex is divided by direction to obtain the corresponding view subspaces.
  • Preferably the division is into six directions: a three-dimensional rectangular coordinate system is constructed at the vertex, and the space is split along the positive and negative directions of each coordinate axis.
  • Preferably, the energy consumption threshold is 0.5 W and the error threshold is 0.005, ensuring that the final result achieves the lowest energy consumption while preserving image quality. In practical applications, the energy and error thresholds can also be set according to the application requirements.
  • Here c0 and c1 denote the camera viewpoint information corresponding to the two Pareto lines, each determined by the camera's current position and direction. N is the number of rendering parameters on the first Pareto line and s0j is its j-th rendering parameter; e(c0, s0j) is the error when the camera viewpoint is c0 and the rendering parameters are set to s0j, e(c1, s0j) is the error at viewpoint c1 with the same parameters, and their difference is the error difference between the two viewpoints for s0j; p(c1, s0j) and p(c0, s0j) are the corresponding energy consumptions, and their difference is the energy difference for s0j.
  • M is the number of rendering parameters on the second Pareto line and s1j is its j-th rendering parameter; e(c1, s1j) and e(c0, s1j) are the errors at viewpoints c1 and c0 with parameters s1j, and their difference is the error difference for s1j; p(c1, s1j) and p(c0, s1j) are the corresponding energy consumptions, and their difference is the energy difference for s1j.
  • For each position subspace obtained by a split, the Pareto line on each view subspace at each vertex of the enclosing bounding volume is computed as follows:
  • if the vertex is a vertex of the bounding volume of the position subspace obtained at the previous subdivision level, it is not recomputed;
  • if the vertex lies on an edge of the previous-level bounding volume whose two endpoint Pareto lines satisfy the convergence condition, the Pareto line of either endpoint of that edge is taken as the Pareto line of the vertex;
  • otherwise, the Pareto line of the vertex is calculated, preferably with a genetic algorithm, as follows:
  • u ≮ u′ denotes (e(c, u′) ≤ e(c, u) or p(c, u′) < p(c, u)) and (p(c, u′) ≤ p(c, u) or e(c, u′) < e(c, u)), where c is the viewpoint information, e(c, s) is the rendering error when the camera viewpoint is c and the rendering-parameter vector is s, and p(c, s) is the rendering energy consumption under the same viewpoint and parameters.
  • The maximum number of iterations is the average, over all rendering variables in the parameter vector, of the number of values each variable can take.
  • The target Pareto line is found in the spatial hierarchy from the current camera position and direction as follows: the hierarchy is searched with the current viewpoint to find the position subspace containing the camera position and the nearest vertex of its enclosing bounding volume; among that vertex's view subspaces, the one containing the current camera direction is determined, and its Pareto line is taken as the target Pareto line.
  • Under an energy budget, the target Pareto line is searched for the rendering parameters with the smallest error among those satisfying the budget, which become the optimal rendering parameters;
  • under an error budget, it is searched for the rendering parameters with the least energy consumption among those satisfying the budget.
  • Within a preset transition time, the current optimal rendering parameters are interpolated with the most recent optimal parameters of the real-time rendering process that differ from the current ones, and the resulting final rendering parameters are used for real-time rendering.
  • s_optimal is the final rendering parameter; s_old is the most recent differing optimal parameter in the real-time rendering process; s_new is the currently obtained optimal parameter; t is the rendering time using the current optimal parameters; T is the preset transition time; and the square brackets indicate rounding down.
  • The optimal rendering parameters are interpolated to avoid excessive fluctuation of the parameters, ensuring consistency of the rendering and reducing image popping caused by parameter changes.
  • For the 3D scene to be rendered, the present invention can complete step (1) in advance: the whole process is divided into precomputation and real-time rendering, and the octree built by precomputation is used at render time, which effectively improves rendering efficiency.
  • Unless otherwise stated, the error metric of the present invention is the perception-based SSIM, and the energy measurements are obtained through the API provided by the NVML library.
  • The method has a wide range of applications and is not limited to one specific program: the rendering method of the present invention has been applied to an OpenGL rendering framework and the Unreal Engine 4 commercial game engine, and can be extended to other platforms, such as desktop PCs and mobile devices;
  • using the precomputed data, the rendering program can quickly find the optimal rendering settings within the budget.
  • During the adaptive partitioning, after each split and for each resulting position subspace, the error and energy consumption of rendering the 3D scene with several preset groups of rendering parameters are measured in each view subspace of the camera at each vertex of the bounding volume enclosing the position subspace, and the Pareto line of the corresponding vertex and view subspace is built from these errors and energies.
  • Each group of rendering parameters includes the value of at least one rendering variable that controls real-time rendering quality; a group may be represented by a vector.
  • The energy consumption of each group is the energy required to render the scene with that group; the error of each group is the difference between the image obtained by rendering the 3D scene with it and the image obtained with the highest-quality group.
  • Since the image-quality ordering of the preset groups of rendering parameters is known, the group producing the highest-quality image is chosen directly as the highest-quality rendering parameters.
  • When partitioning, the position space is split first to obtain the position subspaces; then, at each vertex of the bounding volume enclosing a position subspace, the camera view space that the user may browse at that vertex is divided to obtain the view subspaces.
  • The view space at each vertex is divided by direction: a three-dimensional rectangular coordinate system is constructed at the vertex, and the space is split along the positive and negative directions of each coordinate axis.
  • The maximum number of iterations is 50, the energy consumption threshold is 0.5 W, and the error threshold is 0.005.
  • c0 and c1 denote the camera viewpoint information corresponding to the two Pareto lines, between which the error distance and the energy distance are defined from half-distance functions.
  • Each viewpoint is determined by the camera's current position and direction. N is the number of rendering parameters on the first Pareto line and s0j is its j-th rendering parameter; e(c0, s0j) and e(c1, s0j) are the errors at viewpoints c0 and c1 with parameters s0j, and their difference is the error difference for s0j; p(c1, s0j) and p(c0, s0j) are the corresponding energy consumptions, and their difference is the energy difference for s0j.
  • M is the number of rendering parameters on the second Pareto line and s1j is its j-th rendering parameter; e(c1, s1j) and e(c0, s1j) are the errors at viewpoints c1 and c0 with parameters s1j, and their difference is the error difference for s1j; p(c1, s1j) and p(c0, s1j) are the corresponding energy consumptions, and their difference is the energy difference for s1j.
  • The Pareto line on each view subspace at each vertex of the bounding volume enclosing a position subspace is computed as follows:
  • if the vertex is a vertex of the bounding volume of the position subspace obtained at the previous subdivision level, it is not recomputed;
  • if the vertex lies on an edge of the previous-level bounding volume whose two endpoint Pareto lines satisfy the convergence condition, the Pareto line of either endpoint is taken as the Pareto line of the vertex;
  • otherwise, the Pareto line of the vertex is computed with the genetic algorithm: several rendering-parameter vectors are first randomly initialized to build the initial parameter set, the following steps are iterated until the maximum number of iterations is reached, and the Pareto line is built from the parameter vectors in the set obtained in the last iteration:
  • u ≮ u′ denotes (e(c, u′) ≤ e(c, u) or p(c, u′) < p(c, u)) and (p(c, u′) ≤ p(c, u) or e(c, u′) < e(c, u)), where c is the viewpoint information, e(c, s) is the rendering error when the camera viewpoint is c and the parameter vector is s, and p(c, s) is the corresponding rendering energy consumption.
  • An octree describes the adaptive spatial partition, giving the spatial hierarchy of position subspaces and view subspaces, and all Pareto lines are recorded.
  • Each node in the octree corresponds to the subspace obtained at the corresponding subdivision level, and the node information of each node includes the Pareto lines in the six directions when the camera is located at each vertex of the corresponding subspace.
  • The viewpoint information of the current camera includes the position and direction of the camera, and the budget condition is an energy budget or an error budget.
  • The target Pareto line is found in the spatial hierarchy from the viewpoint information of the current camera, and the current optimal rendering parameters meeting the budget are determined as follows:
  • under an energy budget, the target Pareto line is searched for the rendering parameters with the smallest error among those satisfying the budget;
  • under an error budget, it is searched for the parameters with the least energy consumption among those satisfying the budget.
  • Within the preset transition time, the current optimal rendering parameters and the most recent optimal parameters of the real-time rendering process that differ from the current ones are interpolated to obtain the final rendering parameters for real-time rendering, according to the interpolation formula, in which:
  • s_optimal is the final rendering parameter; s_old is the most recent differing optimal parameter in the real-time rendering process; s_new is the currently obtained optimal parameter; t is the rendering time using the currently obtained optimal parameters (i.e., the interval between obtaining those parameters and the interpolation moment); T is the preset transition time (2 s in this embodiment); and the brackets indicate rounding down.
  • The table shows simulation results (average energy consumption and average error) for rendering the scene with the method of this embodiment and with existing settings: the highest image quality sets all parameters in the Unreal Engine 4 scalability settings to 3, medium-high quality sets them to 2, medium-low quality to 1, and the lowest quality to 0; the optimized parameters are those dynamically selected by the invention. The method of the present invention achieves a good balance between error and energy consumption: in this scene the optimized method needs only 7.87 W, saving 50.4% of the energy compared with the highest image quality, while the error is an order of magnitude smaller than the result under the lowest-quality parameters.
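The Pareto-optimality relation behind these definitions (u ≮ u′: a setting survives only if no other setting is at least as good in both error and energy and strictly better in one) can be sketched as follows; the (error, power) measurements are hypothetical values, not taken from the patent:

```python
def dominates(a, b):
    """True if point `a` Pareto-dominates `b`: `a` is no worse in both
    error and power, and strictly better in at least one of them."""
    (ea, pa), (eb, pb) = a, b
    return ea <= eb and pa <= pb and (ea < eb or pa < pb)

def pareto_front(points):
    """Keep only the non-dominated (error, power) pairs."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (error, power-in-watts) measurements for four parameter sets.
measured = [(0.010, 12.0), (0.020, 8.0), (0.015, 10.0), (0.020, 9.0)]
front = pareto_front(measured)  # (0.020, 9.0) is dominated by (0.020, 8.0)
```

The surviving points are exactly the energy-error Pareto line for that viewpoint: lowering error forces higher power and vice versa.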

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)
  • Studio Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a real-time rendering method based on an energy-error budget, comprising: adaptively partitioning the position space and view space of the camera that the user is allowed to browse in the 3D scene to be rendered, to determine the spatial hierarchy of the scene; during the adaptive partitioning, after each split and for each resulting position subspace, measuring the error and energy consumption of rendering the 3D scene with several preset groups of rendering parameters in each view subspace at each vertex of the bounding volume enclosing the position subspace, and building the Pareto line of the corresponding vertex and view subspace from these errors and energies; and, according to the current camera viewpoint, searching the spatial hierarchy for the target Pareto line to determine a group of rendering parameters satisfying the budget condition as the optimal rendering parameters for rendering. The invention saves a large amount of energy while guaranteeing rendering quality, extending battery life.

Description

A Real-Time Rendering Method Based on an Energy-Error Budget

Technical Field
The present invention relates to the field of image technology, and in particular to an optimal real-time rendering framework based on an energy or error budget.

Background
Rendering is the process of converting a three-dimensional geometric model into a graphical image. Rendering an animated scene is very time-consuming, because an animation typically consists of thousands of frames. As demands on visual quality keep rising, the resolution, and hence the pixel count, of each frame keeps growing, and a single image may take several hours to render.
With the progress of modern graphics hardware acceleration, the rapid development of the GPU has greatly improved the speed and quality of computer graphics processing and promoted the rapid development of application fields related to computer graphics. At the same time, the high speed and parallelism of the GPU rendering pipeline provide a good platform for general-purpose GPU computing.
In recent years, as GPUs have spread to battery-powered mobile devices, many applications requiring real-time rendering have appeared on such devices. The computation these applications need drains the battery quickly, which significantly affects battery lifetime and, more importantly, greatly shortens the device's standby time, limiting its users.
Against this background, rendering under an energy budget has become a practical requirement, and reducing the energy demand of rendering-related applications is one of the future challenges of computer graphics. No general solution currently exists, and many possibilities in this area remain unexplored. Among strategies for reducing the energy demand of graphics applications running on battery-powered devices, reducing the amount of computation in the rendering pipeline has proven effective; however, most existing schemes have a limited scope and usually apply only to one specific application.
In addition, most existing methods rely on specific graphics-pipeline designs and hardware implementations to optimize resources and reduce energy consumption. Because they are realized in the underlying hardware, they require changes to the hardware structure of existing devices and are costly.
Furthermore, some other methods reduce energy consumption by adaptively adjusting the brightness of the display, but they cannot guarantee image quality and do not actually solve the problem.

Summary of the Invention
To address the shortcomings of the prior art, the present invention proposes a real-time rendering method based on an energy-error budget. It finds the optimal rendering parameters under the given budget, and can significantly reduce the energy demand of the rendering process while preserving image quality, extending battery life, or reduce rendering error while meeting an energy requirement.
A real-time rendering method based on an energy-error budget comprises the following steps:
(1) Adaptively partition the position space and view space of the camera that the user is allowed to browse in the 3D scene to be rendered, obtaining several position subspaces and view subspaces with a spatial hierarchical relationship, and determine the spatial hierarchy of the scene from this relationship.
During the adaptive partitioning of the camera position space, after each split and for each resulting position subspace, measure the error and energy consumption of rendering the 3D scene with several preset groups of rendering parameters in each view subspace at each vertex of the bounding volume enclosing that position subspace, and build the Pareto line of the corresponding vertex and view subspace from these errors and energies.
(2) According to the current camera viewpoint, search the spatial hierarchy for the target Pareto line, and use it to determine a group of rendering parameters satisfying the budget condition as the optimal rendering parameters for rendering the scene.
The viewpoint information comprises the position and direction of the camera; the budget condition is an energy budget or an error budget.
The Pareto line of the invention is an energy-error Pareto line. It corresponds to several groups of energy-error-optimal rendering parameters: the energy and error obtained with these parameters are Pareto-optimal in the space formed by the energies and errors obtained with the other rendering parameters.
Each group of rendering parameters contains the value of at least one rendering variable that controls real-time rendering quality. When a group contains more than one rendering variable, it can be represented as a vector.
The image-quality ordering of the several preset groups of rendering parameters used when measuring error and energy at the bounding-volume vertices is known in advance. The more groups are used, the more precise the final rendering parameters, but at a large cost in time, so the number of groups should be set according to the application.
In the invention, the energy consumption of a group of rendering parameters is the energy required to render the scene with that group.
Further preferably, the error of a group of rendering parameters is the difference between the image obtained by rendering the 3D scene with that group and the image obtained with the highest-quality group; since the quality ordering of the preset groups is known, the group producing the highest-quality image is simply chosen as the highest-quality parameters.
When adaptively partitioning the position space and view space of the camera, the position space is split first to obtain the position subspaces; then, at each vertex of the bounding volume enclosing a position subspace, the camera view space that the user is allowed to browse at that vertex is divided to obtain the view subspaces.
Preferably, the view space at each vertex is divided by direction. Further preferably, it is divided into six directions: a three-dimensional rectangular coordinate system is constructed at the vertex, and the space is split along the positive and negative directions of each coordinate axis.
During the adaptive partitioning, before each split, the current position subspace to be split is examined using the Pareto lines at the vertices of its enclosing bounding volume:
(a) if, for every edge of the bounding volume enclosing the current position subspace, the two Pareto lines of its two vertices in every view subspace satisfy the convergence condition, subdivision of the position subspace stops; the convergence condition is that the energy distance between the two Pareto lines is below the energy threshold or the error distance is below the error threshold;
(b) otherwise, the position subspace is split further.
Preferably, the energy threshold is 0.5 W and the error threshold is 0.005, ensuring that the final result achieves the lowest energy consumption while preserving image quality. In practice, the energy and error thresholds can also be set according to the application requirements.
The energy distance and error distance between two Pareto lines Λ0 and Λ1 are defined as:

    d_e(Λ0, Λ1) = max( h_e(Λ0, Λ1), h_e(Λ1, Λ0) )
    d_p(Λ0, Λ1) = max( h_p(Λ0, Λ1), h_p(Λ1, Λ0) )

where c0 and c1 denote the camera viewpoints corresponding to Λ0 and Λ1; d_e(Λ0, Λ1) is the error distance between Λ0 and Λ1; d_p(Λ0, Λ1) is the energy distance between Λ0 and Λ1; h_e(Λ0, Λ1) is the half-distance function of the error from Λ0 to Λ1; h_e(Λ1, Λ0) is the half-distance function of the error from Λ1 to Λ0; h_p(Λ0, Λ1) is the half-distance of the energy from Λ0 to Λ1; and h_p(Λ1, Λ0) is the half-distance function of the energy from Λ1 to Λ0. They are computed respectively according to the following formulas:

    h_e(Λ0, Λ1) = (1/N) Σ_{j=1..N} | e(c0, s0j) − e(c1, s0j) |
    h_p(Λ0, Λ1) = (1/N) Σ_{j=1..N} | p(c0, s0j) − p(c1, s0j) |
    h_e(Λ1, Λ0) = (1/M) Σ_{j=1..M} | e(c1, s1j) − e(c0, s1j) |
    h_p(Λ1, Λ0) = (1/M) Σ_{j=1..M} | p(c1, s1j) − p(c0, s1j) |

where c0 and c1 denote the camera viewpoint information corresponding to Λ0 and Λ1, each determined by the camera's current position and direction; N is the number of rendering parameters on Λ0 and s0j is the j-th rendering parameter on Λ0; e(c0, s0j) is the error when the camera viewpoint is c0 and the rendering parameters are set to s0j, e(c1, s0j) is the error when the viewpoint is c1 and the parameters are s0j, and |e(c0, s0j) − e(c1, s0j)| is the difference between these two errors; p(c1, s0j) is the energy consumption when the viewpoint is c1 and the parameters are s0j, p(c0, s0j) is the energy when the viewpoint is c0 and the parameters are s0j, and |p(c1, s0j) − p(c0, s0j)| is the difference between these two energies.
M is the number of rendering parameters on Λ1 and s1j is the j-th rendering parameter on Λ1; e(c1, s1j) is the error when the viewpoint is c1 and the parameters are s1j, e(c0, s1j) is the error when the viewpoint is c0 and the parameters are s1j, and |e(c1, s1j) − e(c0, s1j)| is the difference between them; p(c1, s1j) is the energy when the viewpoint is c1 and the parameters are s1j, p(c0, s1j) is the energy when the viewpoint is c0 and the parameters are s1j, and |p(c1, s1j) − p(c0, s1j)| is the difference between them.
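The half-distance and distance definitions above can be sketched in a few lines. The original formulas are rendered as images in the source; the aggregation used below (averaging the per-parameter differences and taking the larger of the two half-distances) is an assumption inferred from the textual definitions, and the error model `e` is purely hypothetical:

```python
def half_distance(params, f, c_from, c_to):
    """Average |f(c_from, s) - f(c_to, s)| over one Pareto line's parameter
    sets, where f is either the error e(c, s) or the energy p(c, s)."""
    return sum(abs(f(c_from, s) - f(c_to, s)) for s in params) / len(params)

def front_distance(params0, params1, f, c0, c1):
    """Symmetrized, Hausdorff-style distance between two Pareto lines:
    the larger of the two directed half-distances."""
    return max(half_distance(params0, f, c0, c1),
               half_distance(params1, f, c1, c0))

# Hypothetical error model: error grows with the parameter index and shifts
# slightly between the two viewpoints c0 and c1.
e = lambda c, s: 0.01 * s + (0.002 if c == "c1" else 0.0)
d_e = front_distance([1, 2, 3], [2, 3], e, "c0", "c1")
```

With this toy model every parameter set shifts by 0.002 between the two viewpoints, so the error distance is 0.002; a convergence test would compare this value against the 0.005 error threshold.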
For each position subspace produced by a split, the Pareto line on each view subspace at each vertex of the enclosing bounding volume is computed as follows:
if the vertex is a vertex of the bounding volume of the position subspace obtained at the previous subdivision level, it is not recomputed;
if the vertex lies on an edge of the previous-level bounding volume whose two endpoint Pareto lines satisfy the convergence condition, the Pareto line of either endpoint of that edge is taken as the Pareto line of the vertex;
otherwise, the Pareto line of the vertex is computed.
To obtain the Pareto lines efficiently, the vertex Pareto line is preferably computed with a genetic algorithm, as follows:
first, several rendering-parameter vectors are randomly initialized to build an initial parameter set; the following steps are then iterated until the maximum number of iterations is reached, and the Pareto line is built from the rendering-parameter vectors in the set obtained in the last iteration:
breed and mutate the rendering-parameter vectors in the current set to produce new vectors, forming a candidate set;
for any two parameter combinations u and u′ in the candidate set, keep u′ if u ≮ u′ holds, otherwise eliminate u′; the set obtained at the end of this iteration becomes the initial set of the next iteration,
where u ≮ u′ denotes (e(c, u′) ≤ e(c, u) or p(c, u′) < p(c, u)) and (p(c, u′) ≤ p(c, u) or e(c, u′) < e(c, u)), c is the viewpoint information, e(c, s) is the rendering error when the camera viewpoint is c and the rendering-parameter vector is s, and p(c, s) is the rendering energy consumption under the same viewpoint and parameters.
Preferably, the maximum number of iterations is the average, over all rendering variables in the parameter vector, of the number of values each variable can take.
The target Pareto line is found in the spatial hierarchy from the current camera position and direction as follows:
search the spatial hierarchy with the current camera viewpoint to find the position subspace containing the camera position and the vertex of its enclosing bounding volume nearest to that position; among the view subspaces of that nearest vertex, determine the view subspace containing the current camera direction, and take the Pareto line of that view subspace as the target Pareto line. The current optimal rendering parameters satisfying the budget are then determined as follows:
if the preset budget condition is an energy budget, search the target Pareto line for the rendering parameters with the smallest error among those satisfying the energy budget, and take them as the optimal rendering parameters;
if the preset budget condition is an error budget, search the target Pareto line for the rendering parameters with the smallest energy consumption among those satisfying the error budget, and take them as the optimal rendering parameters.
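The two budget queries can be sketched as one small lookup over the points of a Pareto line; the line contents and the `"high"/"mid"/"low"` labels are hypothetical:

```python
def best_under_budget(front, budget, mode="power"):
    """front: list of (error, power, params) points on one Pareto line.
    mode "power": minimize error subject to power <= budget;
    mode "error": minimize power subject to error <= budget.
    Returns None if no point satisfies the budget."""
    if mode == "power":
        ok = [pt for pt in front if pt[1] <= budget]
        return min(ok, key=lambda pt: pt[0]) if ok else None
    ok = [pt for pt in front if pt[0] <= budget]
    return min(ok, key=lambda pt: pt[1]) if ok else None

# Hypothetical Pareto line: (error, power in watts, parameter-set label).
line = [(0.005, 15.0, "high"), (0.012, 9.0, "mid"), (0.030, 6.0, "low")]
```

Because the points are already Pareto-optimal, both queries reduce to filtering by the budgeted quantity and taking the minimum of the other.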
After the optimal rendering parameters are obtained, within a preset transition time the currently obtained optimal parameters are further interpolated with the most recent optimal parameters of the real-time rendering process that differ from the current ones, yielding the final rendering parameters used for real-time rendering.
The interpolation is performed according to the following formula:
Figure PCTCN2017080859-appb-000042
where s_optimal is the final rendering parameter; s_old is the most recent optimal parameter in the real-time rendering process that differs from the current one; s_new is the currently obtained optimal parameter; t is the rendering time using the currently obtained optimal parameters; T is the preset transition time; and the square brackets denote rounding down.
In the invention, the optimal rendering parameters are interpolated to avoid excessive fluctuation of the parameters, ensuring the consistency of the rendering and reducing image popping caused by parameter changes.
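The interpolation formula itself is reproduced only as an image in the source. One plausible reading, consistent with the rounding-down brackets and the stated goal of damping parameter fluctuation, quantizes the blend from the old to the new setting into a few discrete jumps over the transition window T; the step count used here is an assumed illustration, not taken from the patent:

```python
import math

def transition(s_old, s_new, t, T, steps=4):
    """Blend from the previous optimal setting to the new one over the
    transition window T, in `steps` discrete jumps. The floor call is the
    assumed role of the patent's rounding-down brackets: it snaps the
    blend fraction to a small number of levels instead of changing the
    parameters every frame."""
    if t >= T:
        return s_new
    frac = math.floor(steps * t / T) / steps
    return s_old + (s_new - s_old) * frac
```

With T = 2 s (the embodiment's value) and four steps, the parameters change at most four times during the transition rather than continuously.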
For the 3D scene to be rendered, the present invention can complete step (1) in advance: the whole process is divided into precomputation and real-time rendering, and the octree built by precomputation is used during real-time rendering, which effectively improves rendering efficiency.
Unless otherwise stated, the error metric used by the invention is the perception-based SSIM, and the energy measurement data are obtained through the API provided by the NVML library.
Compared with the prior art, the beneficial effects of the invention are as follows:
the range of application is wide and not limited to one specific program; the rendering method of the invention can be applied to an OpenGL rendering framework and the Unreal Engine 4 commercial game engine, and can be extended to other platforms such as desktop PCs and mobile devices;
it saves a large amount of energy while guaranteeing rendering quality, extending battery life; using the data generated by precomputation, the rendering program can very quickly find the optimal rendering settings within the budget at run time.
Detailed Description
The invention is described in detail below with reference to a specific embodiment.
The real-time rendering method of this embodiment, based on an energy-error budget, comprises the following steps:
(1) Adaptively partition the position space and view space of the camera that the user is allowed to browse in the 3D scene to be rendered, obtaining several position subspaces and view subspaces with a spatial hierarchical relationship, and determine the spatial hierarchy of the scene from this relationship.
During the adaptive partitioning of the camera position space, after each split and for each resulting position subspace, measure the error and energy consumption of rendering the 3D scene with several preset groups of rendering parameters in each view subspace at each vertex of the bounding volume enclosing that position subspace, and build the Pareto line of the corresponding vertex and view subspace from these errors and energies.
Each group of rendering parameters contains the value of at least one rendering variable that controls real-time rendering quality; when a group contains more than one rendering variable, it can be represented as a vector.
The image-quality ordering of the preset groups used when measuring error and energy at each bounding-volume vertex is known. The more groups are used, the more precise the final rendering parameters, but at a large cost in time, so the number of groups is set according to the application.
In this embodiment, the energy consumption of a group of rendering parameters is the energy required to render the scene with that group; the error of a group is the difference between the image obtained by rendering the 3D scene with it and the image obtained with the highest-quality group; since the quality ordering of the preset groups is known, the group producing the highest-quality image is chosen directly as the highest-quality parameters.
When adaptively partitioning the position space and view space of the camera, the position space is split first to obtain the position subspaces; then, at each vertex of the bounding volume enclosing a position subspace, the camera view space the user is allowed to browse at that vertex is divided to obtain the view subspaces.
In this embodiment, the view space at each vertex is divided by direction: a three-dimensional rectangular coordinate system is constructed at the vertex, and the space is split along the positive and negative directions of each coordinate axis.
During the adaptive partitioning, before each split, the current position subspace to be split is examined using the Pareto lines at the vertices of its enclosing bounding volume:
(a) if, for every edge of the bounding volume enclosing the current position subspace, the two Pareto lines of its two vertices in every view subspace satisfy the convergence condition, subdivision of the position subspace stops; the convergence condition is that the energy distance between the two Pareto lines is below the energy threshold or the error distance is below the error threshold;
(b) otherwise, the position subspace is split further.
In this embodiment, the maximum number of iterations is 50, the energy threshold is 0.5 W, and the error threshold is 0.005.
The energy distance and error distance between two Pareto lines Λ0 and Λ1 are defined as:

    d_e(Λ0, Λ1) = max( h_e(Λ0, Λ1), h_e(Λ1, Λ0) )
    d_p(Λ0, Λ1) = max( h_p(Λ0, Λ1), h_p(Λ1, Λ0) )

where c0 and c1 denote the camera viewpoints corresponding to Λ0 and Λ1; d_e(Λ0, Λ1) is the error distance between Λ0 and Λ1; d_p(Λ0, Λ1) is the energy distance between Λ0 and Λ1; h_e(Λ0, Λ1) is the half-distance function of the error from Λ0 to Λ1; h_e(Λ1, Λ0) is the half-distance function of the error from Λ1 to Λ0; h_p(Λ0, Λ1) is the half-distance of the energy from Λ0 to Λ1; and h_p(Λ1, Λ0) is the half-distance function of the energy from Λ1 to Λ0. They are computed respectively according to the following formulas:

    h_e(Λ0, Λ1) = (1/N) Σ_{j=1..N} | e(c0, s0j) − e(c1, s0j) |
    h_p(Λ0, Λ1) = (1/N) Σ_{j=1..N} | p(c0, s0j) − p(c1, s0j) |
    h_e(Λ1, Λ0) = (1/M) Σ_{j=1..M} | e(c1, s1j) − e(c0, s1j) |
    h_p(Λ1, Λ0) = (1/M) Σ_{j=1..M} | p(c1, s1j) − p(c0, s1j) |

where c0 and c1 denote the camera viewpoint information corresponding to Λ0 and Λ1, each determined by the camera's current position and direction; N is the number of rendering parameters on Λ0 and s0j is the j-th rendering parameter on Λ0; e(c0, s0j) is the error when the camera viewpoint is c0 and the rendering parameters are set to s0j, e(c1, s0j) is the error when the viewpoint is c1 and the parameters are s0j, and |e(c0, s0j) − e(c1, s0j)| is the difference between these two errors; p(c1, s0j) is the energy consumption when the viewpoint is c1 and the parameters are s0j, p(c0, s0j) is the energy when the viewpoint is c0 and the parameters are s0j, and |p(c1, s0j) − p(c0, s0j)| is the difference between these two energies.
M is the number of rendering parameters on Λ1 and s1j is the j-th rendering parameter on Λ1; e(c1, s1j) is the error when the viewpoint is c1 and the parameters are s1j, e(c0, s1j) is the error when the viewpoint is c0 and the parameters are s1j, and |e(c1, s1j) − e(c0, s1j)| is the difference between them; p(c1, s1j) is the energy when the viewpoint is c1 and the parameters are s1j, p(c0, s1j) is the energy when the viewpoint is c0 and the parameters are s1j, and |p(c1, s1j) − p(c0, s1j)| is the difference between them.
For each position subspace produced by a split, the Pareto line on each view subspace at each vertex of the enclosing bounding volume is computed as follows:
if the vertex is a vertex of the bounding volume of the position subspace obtained at the previous subdivision level, it is not recomputed;
if the vertex lies on an edge of the previous-level bounding volume whose two endpoint Pareto lines satisfy the convergence condition, the Pareto line of either endpoint of that edge is taken as the Pareto line of the vertex;
otherwise, the Pareto line of the vertex is computed.
In this embodiment the vertex Pareto line is computed with a genetic algorithm: several rendering-parameter vectors are first randomly initialized to build an initial parameter set; the following steps are then iterated until the maximum number of iterations is reached, and the Pareto line is built from the rendering-parameter vectors in the set obtained in the last iteration:
breed and mutate the rendering-parameter vectors in the current set to produce new vectors, forming a candidate set;
for any two parameter combinations u and u′ in the candidate set, keep u′ if u ≮ u′ holds, otherwise eliminate u′; the set obtained at the end of this iteration becomes the initial set of the next iteration,
where u ≮ u′ denotes (e(c, u′) ≤ e(c, u) or p(c, u′) < p(c, u)) and (p(c, u′) ≤ p(c, u) or e(c, u′) < e(c, u)), c is the viewpoint information, e(c, s) is the rendering error when the camera viewpoint is c and the rendering-parameter vector is s, and p(c, s) is the rendering energy consumption under the same viewpoint and parameters.
In this embodiment, an octree describes the adaptive spatial partition, giving the spatial hierarchy of the position subspaces and view subspaces, and all Pareto lines are recorded.
Each node in the octree corresponds one-to-one to a subspace obtained at the corresponding subdivision level, and the node information of each node includes the Pareto lines in the six directions when the camera is located at each vertex of the corresponding subspace.
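A node of this octree can be sketched as a small data structure; the field names and the flat vertex/direction indexing are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OctreeNode:
    """One position subspace. Each of the 8 bounding-box vertices stores
    6 Pareto lines, one per axis-aligned view direction; a Pareto line is
    a list of (error, power, params) tuples."""
    bounds: tuple                 # (min_xyz, max_xyz) of the subspace
    pareto: List[List[list]]      # pareto[vertex][direction] -> line
    children: List[Optional["OctreeNode"]] = field(
        default_factory=lambda: [None] * 8)

    def is_leaf(self) -> bool:
        return all(c is None for c in self.children)

# A root covering the unit cube, with empty Pareto lines to be filled
# by the precomputation pass.
root = OctreeNode(bounds=((0, 0, 0), (1, 1, 1)),
                  pareto=[[[] for _ in range(6)] for _ in range(8)])
```

At run time the search descends from the root to the leaf containing the camera position, picks the nearest of its 8 vertices, and indexes the direction slot matching the camera's view.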
(2) According to the current camera viewpoint, search the spatial hierarchy (i.e., the octree) for the target Pareto line, and use it to determine a group of rendering parameters satisfying the budget condition as the optimal rendering parameters for rendering the scene.
The current camera viewpoint comprises the position and direction of the camera; the budget condition is an energy budget or an error budget.
The target Pareto line is found in the spatial hierarchy from the current camera viewpoint as follows:
search the spatial hierarchy with the current viewpoint to find the position subspace containing the camera position and the vertex of its enclosing bounding volume nearest to that position; among the view subspaces of that nearest vertex, determine the view subspace containing the current camera direction, and take the Pareto line of that view subspace as the target Pareto line.
The current optimal rendering parameters satisfying the budget are determined as follows:
if the preset budget condition is an energy budget, search the target Pareto line for the rendering parameters with the smallest error among those satisfying the energy budget, and take them as the optimal rendering parameters;
if the preset budget condition is an error budget, search the target Pareto line for the rendering parameters with the smallest energy consumption among those satisfying the error budget, and take them as the optimal rendering parameters.
In this embodiment, to further ensure continuity of the rendering, after the optimal rendering parameters are obtained they are interpolated, within a preset transition time, with the most recent optimal parameters of the real-time rendering process that differ from the current ones, yielding the final rendering parameters used for real-time rendering. The interpolation is performed according to the following formula:
Figure PCTCN2017080859-appb-000084
where s_optimal is the final rendering parameter; s_old is the most recent optimal parameter in the real-time rendering process that differs from the current one; s_new is the currently obtained optimal parameter; t is the rendering time using the currently obtained optimal parameters (i.e., the interval between the moment these parameters were obtained and the interpolation moment); T is the preset transition time (2 s in this embodiment); and the square brackets denote rounding down.
The table shows simulation results (average energy consumption and average error) for rendering the scene with the method of this embodiment and with existing settings: the highest image quality sets all parameters in the Unreal Engine 4 scalability settings to 3, medium-high quality sets them to 2, medium-low quality to 1, and the lowest quality to 0; the optimized parameters are those dynamically selected by the invention. It can be seen that the method of the invention achieves a good balance between error and energy consumption: in this scene the optimized method needs only 7.87 W, saving 50.4% of the energy compared with the highest image quality, while the error is an order of magnitude smaller than the result under the lowest-quality parameters.
Table 1
Figure PCTCN2017080859-appb-000085
The above is only a preferred embodiment of the invention; the scope of protection of the invention is not limited to this embodiment, and any technical solution based on the principles of the invention falls within that scope. Improvements and refinements made by those skilled in the art without departing from the principles of the invention shall also be regarded as within the scope of protection of the invention.

Claims (12)

  1. A real-time rendering method based on an energy consumption-error budget, comprising the following steps:
    (1) adaptively subdividing the camera position space and view space in which a user is allowed to navigate in a three-dimensional scene to be rendered, obtaining a number of hierarchically related position subspaces and viewing-direction subspaces, and determining the spatial hierarchy of the scene to be rendered from the hierarchical relations of the position subspaces and viewing-direction subspaces;
    during the adaptive subdivision of the camera position space, after each subdivision step and for each resulting position subspace, obtaining the error and energy consumption of rendering the scene with several groups of preset rendering parameters in each viewing-direction subspace at each vertex of the bounding volume enclosing that position subspace, and constructing from the errors and energy consumptions the Pareto line of the corresponding vertex and viewing-direction subspace;
    (2) according to the current camera viewpoint information, searching said spatial hierarchy for a target Pareto line, and using the target Pareto line to determine a group of rendering parameters satisfying a budget condition as the optimal rendering parameters for rendering the scene to be rendered;
    said viewpoint information comprising the camera position and direction, and said budget condition being an energy budget or an error budget.
  2. The real-time rendering method based on an energy consumption-error budget according to claim 1, wherein the error associated with each group of rendering parameters is the difference between the image obtained by rendering the three-dimensional scene with that group of parameters and the image obtained by rendering it with the highest-quality rendering parameters;
    the relative image quality produced by each group of the several preset groups of rendering parameters being known.
  3. The real-time rendering method based on an energy consumption-error budget according to claim 1, wherein, when adaptively subdividing the camera position space and view space in which the user is allowed to navigate in the scene to be rendered, the position space is subdivided first to obtain the corresponding position subspaces, and then, at each vertex of the bounding volume enclosing a position subspace, the camera view space in which the user is allowed to navigate at that vertex is partitioned to obtain the corresponding viewing-direction subspaces.
  4. The real-time rendering method based on an energy consumption-error budget according to claim 3, wherein the camera view space in which the user is allowed to navigate at each vertex is partitioned by direction to obtain the corresponding viewing-direction subspaces.
  5. The real-time rendering method based on an energy consumption-error budget according to any one of claims 1 to 4, wherein, during the adaptive subdivision of the camera position space and view space in which the user is allowed to navigate in the scene to be rendered, before each subdivision step the following is done for the position subspace currently to be subdivided, based on the Pareto lines at the vertices of its enclosing bounding volume:
    (a) if, for every edge of the bounding volume enclosing the current position subspace, the two Pareto lines of its two endpoint vertices in every view space satisfy the convergence condition, subdivision of that position subspace stops; said convergence condition being that the energy distance between the two Pareto lines is smaller than an energy threshold or the error distance is smaller than a distance threshold;
    (b) otherwise, subdivision of that position subspace continues.
  6. The real-time rendering method based on an energy consumption-error budget according to claim 5, wherein the energy distance and the error distance between two Pareto lines Γ0 and Γ1 are defined as follows:
    d_e(Γ0, Γ1) = max{h_e(Γ0, Γ1), h_e(Γ1, Γ0)}
    d_p(Γ0, Γ1) = max{h_p(Γ0, Γ1), h_p(Γ1, Γ0)}
    where c0 and c1 denote the camera viewpoint information corresponding to the Pareto lines Γ0 and Γ1 respectively; d_e(Γ0, Γ1) is the error distance between Γ0 and Γ1; d_p(Γ0, Γ1) is the energy distance between Γ0 and Γ1; h_e(Γ0, Γ1) is the error half-distance function from Γ0 to Γ1; h_e(Γ1, Γ0) is the error half-distance function from Γ1 to Γ0; h_p(Γ0, Γ1) is the energy half-distance from Γ0 to Γ1; and h_p(Γ1, Γ0) is the energy half-distance function from Γ1 to Γ0;
    computed respectively by the following formulas:
    h_e(Γ0, Γ1) = (1/N) Σ_{j=1..N} |e(c0, s_0j) − e(c1, s_0j)|
    h_p(Γ0, Γ1) = (1/N) Σ_{j=1..N} |p(c1, s_0j) − p(c0, s_0j)|
    h_e(Γ1, Γ0) = (1/M) Σ_{j=1..M} |e(c1, s_1j) − e(c0, s_1j)|
    h_p(Γ1, Γ0) = (1/M) Σ_{j=1..M} |p(c1, s_1j) − p(c0, s_1j)|
    where c0 and c1 denote the camera viewpoint information corresponding to Γ0 and Γ1 respectively, the viewpoint information being determined by the camera's current position and direction; N is the number of rendering-parameter settings on Γ0 and s_0j is the j-th rendering-parameter setting on Γ0; e(c0, s_0j) is the error when the camera viewpoint information is c0 and the rendering parameters are set to s_0j, e(c1, s_0j) is the error when the camera viewpoint information is c1 with the same setting s_0j, and e(c0, s_0j) − e(c1, s_0j) is the difference between the two;
    p(c1, s_0j) is the energy consumption when the camera viewpoint information is c1 and the rendering parameters are set to s_0j, p(c0, s_0j) is the energy consumption when the camera viewpoint information is c0 with the same setting, and p(c1, s_0j) − p(c0, s_0j) is the difference between the two;
    M is the number of rendering-parameter settings on Γ1 and s_1j is the j-th rendering-parameter setting on Γ1; e(c1, s_1j) is the error when the camera viewpoint information is c1 and the rendering parameters are set to s_1j, e(c0, s_1j) is the error when the camera viewpoint information is c0 with the same setting, and e(c1, s_1j) − e(c0, s_1j) is the difference between the two;
    p(c1, s_1j) is the energy consumption when the camera viewpoint information is c1 and the rendering parameters are set to s_1j, p(c0, s_1j) is the energy consumption when the camera viewpoint information is c0 with the same setting, and p(c1, s_1j) − p(c0, s_1j) is the difference between the two.
  7. The real-time rendering method based on an energy consumption-error budget according to claim 6, wherein, for each position subspace produced by a subdivision step, the Pareto line for each viewing-direction subspace at each vertex of the bounding volume enclosing that position subspace is computed as follows:
    if the vertex is a vertex of the bounding volume enclosing a position subspace of the previous subdivision level, it is not computed;
    if the vertex lies on an edge of the previous level's bounding volume whose two endpoint Pareto lines satisfy the convergence condition, the Pareto line of either endpoint of that edge is taken as the Pareto line of the vertex;
    otherwise, the Pareto line of the vertex is computed.
  8. The real-time rendering method based on an energy consumption-error budget according to claim 7, wherein, when computing the Pareto line of a vertex with a genetic algorithm: a number of rendering-parameter vectors are first randomly initialized to form the initial parameter set; the following steps are then iterated until the maximum number of iterations is reached, and the Pareto line is built from the rendering-parameter vectors in the set produced by the last iteration:
    breeding and mutating the rendering-parameter vectors in the initial parameter set to produce new rendering-parameter vectors, forming a candidate set;
    for any two parameter combinations u and u′ in the candidate set, keeping u′ if u ≮ u′ holds and otherwise discarding u′, ending the current iteration, and taking the combinations retained in this iteration as the initial set for the next iteration,
    u ≮ u′ meaning (e(c,u′) ≤ e(c,u) or p(c,u′) < p(c,u)) and (p(c,u′) ≤ p(c,u) or e(c,u′) < e(c,u)), where c is the viewpoint information, e(c,s) is the rendering error when the camera viewpoint information is c and the rendering-parameter vector is s, and p(c,s) is the rendering energy consumption under the same conditions.
  9. The real-time rendering method based on an energy consumption-error budget according to claim 8, wherein the target Pareto line is found in said spatial hierarchy from the current camera position and direction as follows:
    searching the spatial hierarchy using the current camera viewpoint information to find the position subspace containing the camera position and, on the bounding volume enclosing that subspace, the vertex closest to that position; determining, among the viewing-direction subspaces of that nearest vertex, the one containing the current camera direction; and taking the Pareto line associated with that viewing-direction subspace as the target Pareto line.
  10. The real-time rendering method based on an energy consumption-error budget according to claim 9, wherein the current optimal rendering parameters satisfying the budget condition are determined as follows:
    if the preset budget condition is an energy budget, searching the target Pareto line for the rendering parameters with the smallest error among those meeting the energy budget as the optimal rendering parameters;
    if the preset budget condition is an error budget, searching the target Pareto line for the rendering parameters with the smallest energy consumption among those meeting the error budget as the optimal rendering parameters.
  11. The real-time rendering method based on an energy consumption-error budget according to claim 10, wherein, after the optimal rendering parameters are obtained, within a preset transition time the currently obtained optimal rendering parameters are further interpolated with the most recent earlier optimal rendering parameters in the real-time rendering process that differ from the current ones, yielding the final rendering parameters used for real-time rendering.
  12. The real-time rendering method based on an energy consumption-error budget according to claim 11, wherein the interpolation follows the formula:
    s_optimal = [s_old + (t/T)(s_new − s_old)]
    where s_optimal is the final rendering parameters, s_old is the most recent earlier optimal rendering parameters in the real-time rendering process that differ from the current ones, s_new is the currently obtained optimal rendering parameters, t is the time for which the currently obtained optimal parameters have been used for rendering, T is the preset transition time, and the square brackets denote rounding down.
PCT/CN2017/080859 2016-04-28 2017-04-18 Real-time rendering method based on energy consumption-error budget WO2017186019A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/097,529 US10733791B2 (en) 2016-04-28 2017-04-18 Real-time rendering method based on energy consumption-error precomputation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610278727.5A 2016-04-28 2016-04-28 Real-time rendering method based on energy consumption-error budget CN105976306B (zh)
CN201610278727.5 2016-04-28

Publications (1)

Publication Number Publication Date
WO2017186019A1 true WO2017186019A1 (zh) 2017-11-02

Family

ID=56994557

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/080859 WO2017186019A1 (zh) Real-time rendering method based on energy consumption-error budget

Country Status (3)

Country Link
US (1) US10733791B2 (zh)
CN (1) CN105976306B (zh)
WO (1) WO2017186019A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976306B (zh) * 2016-04-28 2019-06-04 浙江大学 Real-time rendering method based on energy consumption-error budget
CN109191555B (zh) * 2018-07-12 2020-05-08 浙江大学 Real-time rendering method based on energy consumption-error prediction and budgeting
CN112150631B (zh) * 2020-09-23 2021-09-21 浙江大学 Neural-network-based real-time energy-optimized rendering method and apparatus
CN114418917B (zh) * 2022-03-11 2022-06-21 腾讯科技(深圳)有限公司 Data processing method, apparatus, device, and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
US20050110789A1 (en) * 2003-11-20 2005-05-26 Microsoft Corporation Dynamic 2D imposters of 3D graphic objects
CN101183276A (zh) * 2007-12-13 2008-05-21 上海交通大学 Interactive system based on camera-projector technology
CN101369345A (zh) * 2008-09-08 2009-02-18 北京航空航天大学 Rendering-state-based optimization method for the drawing order of multi-attribute objects
CN102622776A (zh) * 2011-01-31 2012-08-01 微软公司 Three-dimensional environment reconstruction
CN105976306A (zh) * 2016-04-28 2016-09-28 浙江大学 Real-time rendering method based on energy consumption-error budget

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN103914124B (zh) * 2014-04-04 2016-08-17 浙江工商大学 Energy-saving color mapping method for three-dimensional scenes


Also Published As

Publication number Publication date
CN105976306A (zh) 2016-09-28
US10733791B2 (en) 2020-08-04
US20190147641A1 (en) 2019-05-16
CN105976306B (zh) 2019-06-04


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17788671

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17788671

Country of ref document: EP

Kind code of ref document: A1