WO2022000953A1 - A radial blur-based fluff rendering method, device and storage medium - Google Patents

A radial blur-based fluff rendering method, device and storage medium

Info

Publication number
WO2022000953A1
WO2022000953A1 (application PCT/CN2020/130324)
Authority
WO
WIPO (PCT)
Prior art keywords
fluff
rendered
space
rendering
attribute information
Prior art date
Application number
PCT/CN2020/130324
Other languages
English (en)
French (fr)
Inventor
陈志强
Original Assignee
完美世界(北京)软件科技发展有限公司
Application filed by 完美世界(北京)软件科技发展有限公司
Publication of WO2022000953A1 publication Critical patent/WO2022000953A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G06T 15/005 General purpose rendering architectures
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2012 Colour editing, changing, or manipulating; Use of colour codes

Definitions

  • the present invention relates to the technical field of scene rendering, and in particular, to a radial blur-based fluff rendering method, device and storage medium.
  • a fluff scheme commonly used in the prior art is the FurShell scheme, which splits the fluff into multiple layers for rendering; to improve the rendering quality, the number of model renderings must be increased, that is, the same model must be rendered multiple times, so the rendering efficiency is low.
  • the present invention is proposed to provide a radial blur-based fluff rendering method, device and storage medium that overcome the above problems or at least partially solve or alleviate the above problems.
  • a radial blur-based fluff rendering method, including: acquiring fluff-rendering-related resources of an object to be rendered; analyzing, according to those resources, the fluff direction in the screen space of the object to be rendered and other fluff attribute information, caching the other fluff attribute information in a first buffer and the screen-space fluff direction in a second buffer; performing lighting calculation based on the other fluff attribute information cached in the first buffer to obtain scene color information of the object to be rendered; and performing radial blurring on the object to be rendered based on the scene color information and the screen-space fluff direction cached in the second buffer, to obtain fluff pixel values of the object to be rendered.
  • a radial blur-based fluff rendering device including:
  • an acquisition module adapted to acquire fluff-rendering-related resources of an object to be rendered;
  • an analysis module adapted to analyze, according to the fluff-rendering-related resources, the fluff direction in the screen space of the object to be rendered and other fluff attribute information, cache the other fluff attribute information in the first buffer, and cache the screen-space fluff direction in the second buffer;
  • a calculation module adapted to perform lighting calculation based on the other fluff attribute information cached in the first buffer to obtain scene color information of the object to be rendered;
  • a processing module adapted to perform radial blurring on the object to be rendered based on the scene color information and the screen-space fluff direction cached in the second buffer, to obtain fluff pixel values of the object to be rendered.
  • a computer program comprising computer-readable code which, when run on a server, causes the server to execute the radial blur-based fluff rendering method according to any one of the above embodiments.
  • a computer storage medium in which a computer program for a server to execute the radial blur-based fluff rendering method according to any one of the above embodiments is stored.
  • after the screen-space fluff direction and other fluff attribute information of the object to be rendered are obtained by analysis and cached in separate buffers, lighting calculation can be performed based on the cached other fluff attribute information to obtain the scene color information of the object to be rendered; radial blurring is then performed on the object to be rendered based on the scene color information and the cached screen-space fluff direction, yielding the fluff pixel values of the object to be rendered.
  • embodiments of the present invention can thus add, in the base pass of the deferred rendering process, an output of the screen-space fluff direction derived from the fluff-rendering-related resources of the object to be rendered, cache that direction in a dedicated second buffer, and, by adding a render pass, radially blur the object to be rendered according to the screen-space fluff direction in the second buffer and the scene color information, so that a realistic fluff effect can be rendered quickly and conveniently.
  • the rendering process does not need to render the object to be rendered multiple times; in particular, when rendering fluff for multiple objects under the same camera view in a game scene, it can effectively improve fluff rendering efficiency.
  • FIG. 1 shows a schematic flowchart of a radial blur-based fluff rendering method according to an embodiment of the present invention
  • FIG. 2 shows a schematic diagram of a process of analyzing the fluff direction in the screen space of an object to be rendered according to an embodiment of the present invention
  • FIG. 3 shows a schematic diagram of a process of performing radial blur processing on an object to be rendered according to an embodiment of the present invention
  • FIG. 4 shows a schematic structural diagram of a fluff rendering device based on radial blurring according to an embodiment of the present invention
  • Figure 5 shows a block diagram of a server for performing the method according to the present invention.
  • Figure 6 shows a memory unit for holding or carrying program code implementing the method according to the invention.
  • FIG. 1 shows a schematic flowchart of a radial blur-based fluff rendering method according to an embodiment of the present invention. Referring to FIG. 1, the method includes steps S102 to S108.
  • Step S102 acquiring fluff rendering related resources of the object to be rendered.
  • the objects to be rendered in this embodiment of the present invention may include multiple objects to be rendered that need to be rendered with fluff in the game scene, that is, rendering of multiple objects to be rendered.
  • Step S104: analyze, according to the fluff-rendering-related resources, the fluff direction in the screen space of the object to be rendered and other fluff attribute information, cache the other fluff attribute information in the first buffer, and cache the screen-space fluff direction in the second buffer.
  • the other fluff attribute information in this embodiment of the present invention may include the base color, roughness, metalness, world-space normal direction, etc. of the object to be rendered, which is not specifically limited in the embodiments of the present invention.
  • the screen-space fluff direction cached in the second buffer is actually the fluff direction in tangent space after a spatial transformation into screen space, which is described in detail in the following embodiments.
  • Step S106 performing lighting calculation based on other fluff attribute information cached in the first buffer to obtain scene color information of the object to be rendered.
  • Step S108 Perform radial blurring on the object to be rendered based on the scene color information and the fluff direction in the screen space buffered by the second buffer to obtain fluff pixel values of the object to be rendered.
  • embodiments of the present invention can add, in the BasePass (base pass) of the deferred rendering (deferred shading) process, an output of the screen-space fluff direction derived from the fluff-rendering-related resources of the object to be rendered, cache the screen-space fluff direction in a dedicated second buffer, and, by adding a render pass, radially blur the object to be rendered according to the screen-space fluff direction in the second buffer and the scene color information, so that a realistic fluff effect of the object to be rendered is produced quickly and conveniently; the rendering process does not require rendering the object multiple times, in particular when rendering fluff for multiple objects under the same camera view in a game scene, which effectively improves fluff rendering efficiency.
  • the fluff-rendering-related resources of the object to be rendered may include a model base color map resource (Albedo), a fluff noise map resource (FurNoise), a fluff direction map resource in model tangent space (FurDirection), a model tangent-space normal map resource (Normal), a model roughness-metalness map resource (RoughnessMetallic), a model resource containing normal and tangent data, and so on.
  • the RG channels of the fluff direction map in model tangent space record the fluff direction, and the B channel records the fluff length, which can mark the fluff region of the object to be rendered.
  • the fluff noise map can be a black-and-white noise map used to produce the light-and-dark shading of the fluff on the object to be rendered.
  • of course, the fluff-rendering-related resources may also include other resources, which are not specifically limited in the embodiments of the present invention.
  • the screen-space fluff direction of the object to be rendered obtained by analysis is cached in the second buffer (a GBuffer), and the other fluff attribute information is cached in the first buffer (the BasePass GBuffer); the GBuffer here may also be called a geometry buffer.
  • the other fluff attribute information cached in the first buffer can be used for lighting calculation, and the screen-space fluff direction cached in the second buffer can be used for the subsequent radial blurring of the object to be rendered.
  • the process of analyzing the fluff direction in the screen space of the object to be rendered according to the fluff rendering related resources may include steps S1041 to S1044 .
  • Step S1041 extract the RG channel value and the B channel value from the model tangent space normal map resource.
  • Step S1042 calculating the fluff direction in the tangent space based on the extracted RG channel value.
  • the RG channels of the fluff direction map in the model's tangent space record the fluff direction, so the tangent-space fluff direction can be computed effectively from the RG channel values; it has three components XYZ, where XY = RG*2 - 1 and Z = sqrt(1 - XY*XY).
  • Step S1043 Multiply the fluff direction in the tangent space, the extracted B channel value and the preset fluff length coefficient to obtain the real fluff direction in the tangent space.
  • the preset pile length coefficient in the embodiment of the present invention may be set by the user, and the specific value of the pile length coefficient is not limited in the embodiment of the present invention.
  • Step S1044 Convert the real fluff direction in the tangent space to the fluff direction in the screen space.
  • the fluff direction in screen space may be referred to as the fluff direction in NDC (Normalized Device Coordinates) space; converting the real tangent-space fluff direction to screen space requires several spatial transformations so that the direction is limited to the range -1 to 1, which simplifies the subsequent radial blurring.
  • a TBN matrix composed of the three tangent-space vectors, that is, the tangent direction, the bitangent direction and the normal direction, can be used to convert the real fluff direction in tangent space to the real fluff direction in world space.
  • the real fluff direction in the world space can be multiplied by the view matrix and the projection matrix to obtain the real fluff direction in the projection space.
  • perspective division is used to convert the real fluff direction in projection space to the fluff direction in screen space.
  • the fluff direction finally converted to screen space is limited to a range of -1 to 1.
  • when step S104 is performed to analyze other fluff attribute information of the object to be rendered according to the fluff-rendering-related resources, if the other fluff attribute information is the base color of the object to be rendered, the base color information can first be extracted from the model base color map resource, and the noise color information can be extracted from the fluff noise map resource; the base color information and the noise color information are then mixed, for example multiplied together, to obtain the base color (BaseColor) of the object to be rendered.
  • when step S104 is performed to analyze other fluff attribute information of the object to be rendered according to the fluff-rendering-related resources, if the other fluff attribute information includes the world-space normal direction, the tangent-space normal direction can first be extracted from the model tangent-space normal map.
  • the light sources provided in a game scene are located in world space, so correct lighting calculation requires converting the tangent-space normal direction to the world-space normal direction; the tangent-space normal direction is therefore converted to the world-space normal direction.
  • if the other fluff attribute information includes the roughness and metalness of the object to be rendered, they can be obtained directly by parsing the model roughness-metalness map resource.
  • the process of performing radial blurring on the object to be rendered based on the scene color (SceneColor) information and the fluff direction in the screen space includes steps S1081 to S1083.
  • Step S1081 determining the radial blur direction according to the fluff direction in the screen space.
  • the current direction of the fluff on the screen can be obtained by sampling from the direction of the fluff in the buffered screen space, and used as the radial blur direction.
  • Step S1082: starting from any pixel of the current screen, step along the radial blur direction at the preset distance interval for the preset number of sampling times, obtaining that number of sampling points.
  • the preset number of sampling times, that is, the preset number of radial-blur samples, can be preset by the user; the specific value is not limited in the embodiments of the present invention.
  • the preset distance interval may be the product of a preset mirror blur distance coefficient and the fluff length in screen space.
  • the preset mirror blur distance coefficient may also be preset by the user, and its specific value is not limited in the embodiments of the present invention.
  • Step S1083 Acquire scene color information corresponding to the sampling point, calculate an average value of the acquired scene color information, and determine the fluff pixel value of the object to be rendered according to the average value.
  • since there are the preset number of sampling points, the same number of scene colors are acquired.
  • the acquired colors are averaged, and the fluff pixel value of the object to be rendered is determined according to the average value.
  • the resulting fluff pixel value reflects the radial blurring of the fluff: the pixel values of the sampled colors are processed by the radial blur method to obtain radially blurred pixel values, and once the pixels are rendered, the new fluff effect is displayed on the screen. Thus, embodiments of the present invention can render the fluff effect of the object to be rendered effectively and quickly.
  • an embodiment of the present invention further provides a fluff rendering device based on radial blur.
  • FIG. 4 shows a schematic structural diagram of a fluff rendering device based on radial blurring according to an embodiment of the present invention.
  • the radial blur-based fluff rendering device includes an acquisition module 410, an analysis module 420, a calculation module 430 and a processing module 440.
  • the acquisition module 410 is adapted to acquire fluff-rendering-related resources of the object to be rendered;
  • the analysis module 420 is adapted to analyze, according to the fluff-rendering-related resources, the fluff direction in the screen space of the object to be rendered and other fluff attribute information, cache the other fluff attribute information in the first buffer, and cache the screen-space fluff direction in the second buffer;
  • the calculation module 430 is adapted to perform lighting calculation based on other fluff attribute information cached in the first buffer to obtain scene color information of the object to be rendered;
  • the processing module 440 is adapted to perform radial blurring on the object to be rendered based on the scene color information and the fluff direction in the screen space buffered by the second buffer to obtain fluff pixel values of the object to be rendered.
  • the fluff-rendering-related resources of the object to be rendered include at least one of: a model base color map resource, a fluff noise map resource, a fluff direction map resource in model tangent space, a model tangent-space normal map resource, a model roughness-metalness map resource, and a model resource containing normal and tangent data.
  • the analysis module 420 is further adapted to: extract the RG channel values and the B channel value from the model tangent-space normal map resource; compute the tangent-space fluff direction from the extracted RG channel values; multiply the tangent-space fluff direction, the extracted B channel value and the preset fluff length coefficient to obtain the real tangent-space fluff direction; and convert the real tangent-space fluff direction to the screen-space fluff direction.
  • the analysis module 420 is further adapted to: after converting the real tangent-space fluff direction to the real world-space fluff direction, convert the real world-space fluff direction to the real projection-space fluff direction; and use perspective division to convert the real projection-space fluff direction to the screen-space fluff direction.
  • the other fluff attribute information includes the base color of the object to be rendered;
  • the analysis module 420 is further adapted to: extract the base color information from the model base color map resource and the noise color information from the fluff noise map resource; and mix the base color information and the noise color information to obtain the base color of the object to be rendered.
  • the other fluff attribute information further includes the world-space normal direction;
  • the analysis module 420 is further adapted to: extract the tangent-space normal direction from the model tangent-space normal map, and convert the tangent-space normal direction to the world-space normal direction.
  • the other fluff attribute information further includes the roughness and metalness of the object to be rendered, and the analysis module 420 is further adapted to obtain the roughness and metalness of the object to be rendered by parsing the model roughness-metalness map resource.
  • the processing module 440 is further adapted to: determine the radial blur direction according to the screen-space fluff direction; starting from any pixel of the current screen, step along the radial blur direction at the preset distance interval for the preset number of sampling times to obtain that number of sampling points; and acquire the scene color information corresponding to the sampling points, average the acquired scene color information, and determine the fluff pixel value of the object to be rendered according to the average.
  • the preset distance interval includes: a product of a preset mirror image blur distance coefficient and a fluff length in screen space.
  • Various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the radial blur-based fluff rendering apparatus according to embodiments of the present invention.
  • the present invention can also be implemented as apparatus or apparatus programs (eg, computer programs and computer program products) for performing part or all of the methods described herein.
  • Such a program implementing the present invention may be stored on a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from Internet sites, or provided on carrier signals, or in any other form.
  • FIG. 5 shows a server, such as an application server, that can implement the radial blur-based fluff rendering method according to the present invention.
  • the server traditionally includes a processor 510 and a computer program product or computer storage medium in the form of memory 520 .
  • the memory 520 may be electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the memory 520 has storage space 530 for program code 531 for performing any of the method steps in the above-described methods.
  • the storage space 530 for program codes may include various program codes 531 for implementing various steps in the above methods, respectively. These program codes can be read from or written to one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks. Such computer program products are typically portable or fixed storage units as described with reference to FIG. 6 .
  • the storage unit may have storage segments, storage spaces, etc. arranged similarly to the storage 520 in the server of FIG. 5 .
  • the program code may, for example, be compressed in a suitable form.
  • the storage unit includes computer-readable code 531', i.e. code readable by a processor such as the processor 510, which, when executed by a server, causes the server to perform the various steps of the methods described above.
  • references herein to "one embodiment,” “an embodiment,” or “one or more embodiments” means that a particular feature, structure, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Also, please note that instances of the phrase “in one embodiment” herein are not necessarily all referring to the same embodiment.


Abstract

A radial blur-based fluff rendering method, device and storage medium. The method includes: acquiring fluff-rendering-related resources of an object to be rendered (S102); analyzing, according to the fluff-rendering-related resources, the fluff direction in the screen space of the object to be rendered and other fluff attribute information, caching the other fluff attribute information in a first buffer, and caching the screen-space fluff direction in a second buffer (S104); performing lighting calculation based on the other fluff attribute information cached in the first buffer to obtain scene color information of the object to be rendered (S106); and performing radial blurring on the object to be rendered based on the scene color information and the screen-space fluff direction cached in the second buffer, to obtain the rendered fluff effect (S108). The method can thus render a realistic fluff effect for the object to be rendered quickly and conveniently, without rendering the object multiple times, which effectively improves fluff rendering efficiency.

Description

A radial blur-based fluff rendering method, device and storage medium
This application claims priority to the Chinese patent application No. 202010631548.1, entitled "A radial blur-based fluff rendering method, device and storage medium", filed with the Chinese Patent Office on July 3, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of scene rendering, and in particular to a radial blur-based fluff rendering method, device and storage medium.
Background
With the rapid development of computer graphics technology, virtual objects are used more and more in the production of films, games and animation. To make virtual objects look realistic, several key techniques are needed, such as skeletal animation, facial expression and fluff simulation. A fluff scheme commonly adopted in the prior art is the FurShell scheme, which splits the fluff into multiple layers for rendering; to improve the rendering quality, the number of model renderings must be increased, that is, the same model must be rendered multiple times, so the rendering efficiency is low.
Summary of the Invention
In view of the above problems, the present invention is proposed in order to provide a radial blur-based fluff rendering method, device and storage medium that overcome, or at least partially solve or alleviate, the above problems.
According to one aspect of the embodiments of the present invention, a radial blur-based fluff rendering method is provided, including:
acquiring fluff-rendering-related resources of an object to be rendered;
analyzing, according to the fluff-rendering-related resources, the fluff direction in the screen space of the object to be rendered and other fluff attribute information, caching the other fluff attribute information in a first buffer, and caching the screen-space fluff direction in a second buffer;
performing lighting calculation based on the other fluff attribute information cached in the first buffer to obtain scene color information of the object to be rendered;
performing radial blurring on the object to be rendered based on the scene color information and the screen-space fluff direction cached in the second buffer, to obtain fluff pixel values of the object to be rendered.
According to another aspect of the embodiments of the present invention, a radial blur-based fluff rendering device is further provided, including:
an acquisition module adapted to acquire fluff-rendering-related resources of an object to be rendered;
an analysis module adapted to analyze, according to the fluff-rendering-related resources, the fluff direction in the screen space of the object to be rendered and other fluff attribute information, cache the other fluff attribute information in a first buffer, and cache the screen-space fluff direction in a second buffer;
a calculation module adapted to perform lighting calculation based on the other fluff attribute information cached in the first buffer to obtain scene color information of the object to be rendered;
a processing module adapted to perform radial blurring on the object to be rendered based on the scene color information and the screen-space fluff direction cached in the second buffer, to obtain fluff pixel values of the object to be rendered.
According to yet another aspect of the embodiments of the present invention, a computer program is further provided, comprising computer-readable code which, when run on a server, causes the server to execute the radial blur-based fluff rendering method according to any one of the above embodiments.
According to still another aspect of the embodiments of the present invention, a computer storage medium is further provided, in which a computer program for a server to execute the radial blur-based fluff rendering method according to any one of the above embodiments is stored.
The beneficial effects of the present invention are as follows:
After the screen-space fluff direction and other fluff attribute information of the object to be rendered are obtained by analysis from the fluff-rendering-related resources and cached in separate buffers, lighting calculation can be performed based on the cached other fluff attribute information to obtain scene color information of the object to be rendered; radial blurring is then performed on the object to be rendered based on the scene color information and the cached screen-space fluff direction, yielding the fluff pixel values of the object to be rendered. Embodiments of the present invention can thus add, in the base pass of the deferred rendering process, an output of the screen-space fluff direction derived from the fluff-rendering-related resources of the object to be rendered, cache that direction in a dedicated second buffer, and, by adding a render pass, radially blur the object to be rendered according to the screen-space fluff direction in the second buffer and the scene color information, so that a realistic fluff effect of the object to be rendered is produced quickly and conveniently. The rendering process does not require rendering the object multiple times, in particular when rendering fluff for multiple objects under the same camera view in a game scene, which effectively improves fluff rendering efficiency.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented according to the contents of the specification, and to make the above and other objects, features and advantages of the present invention more apparent and understandable, specific embodiments of the present invention are set out below.
Brief Description of the Drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art from reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference symbols denote the same components. In the drawings:
FIG. 1 shows a schematic flowchart of a radial blur-based fluff rendering method according to an embodiment of the present invention;
FIG. 2 shows a schematic diagram of the process of analyzing the screen-space fluff direction of an object to be rendered according to an embodiment of the present invention;
FIG. 3 shows a schematic diagram of the process of radially blurring an object to be rendered according to an embodiment of the present invention;
FIG. 4 shows a schematic structural diagram of a radial blur-based fluff rendering device according to an embodiment of the present invention;
FIG. 5 shows a block diagram of a server for performing the method according to the present invention; and
FIG. 6 shows a storage unit for holding or carrying program code implementing the method according to the present invention.
Detailed Description
The present invention is further described below with reference to the drawings and specific embodiments.
To solve the above technical problems, an embodiment of the present invention provides a radial blur-based fluff rendering method. FIG. 1 shows a schematic flowchart of a radial blur-based fluff rendering method according to an embodiment of the present invention. Referring to FIG. 1, the method includes steps S102 to S108.
Step S102: acquire fluff-rendering-related resources of an object to be rendered.
The object to be rendered in this embodiment of the present invention may include multiple objects in a game scene that need fluff rendering, that is, multiple objects to be rendered are rendered.
Step S104: analyze, according to the fluff-rendering-related resources, the fluff direction in the screen space of the object to be rendered and other fluff attribute information, cache the other fluff attribute information in a first buffer, and cache the screen-space fluff direction in a second buffer.
The other fluff attribute information in this embodiment of the present invention may include the base color, roughness, metalness, world-space normal direction, etc. of the object to be rendered, which is not specifically limited in the embodiments of the present invention. Moreover, the screen-space fluff direction cached in the second buffer is actually the fluff direction in tangent space after a spatial transformation into screen space, which is described in detail in the following embodiments.
Step S106: perform lighting calculation based on the other fluff attribute information cached in the first buffer to obtain scene color information of the object to be rendered.
Step S108: perform radial blurring on the object to be rendered based on the scene color information and the screen-space fluff direction cached in the second buffer, to obtain fluff pixel values of the object to be rendered.
In this embodiment of the present invention, an output of the screen-space fluff direction, derived from the fluff-rendering-related resources of the object to be rendered, can be added in the BasePass (base pass) of the deferred rendering (deferred shading) process, and the screen-space fluff direction is cached in a dedicated second buffer; by adding a render pass, the object to be rendered is radially blurred according to the screen-space fluff direction in the second buffer and the scene color information, so that a realistic fluff effect of the object to be rendered is produced quickly and conveniently. The rendering process does not require rendering the object multiple times, in particular when rendering fluff for multiple objects under the same camera view in a game scene, which effectively improves fluff rendering efficiency.
Referring to step S102 above, in one embodiment of the present invention, the fluff-rendering-related resources of the object to be rendered may include a model base color map resource (Albedo), a fluff noise map resource (FurNoise), a fluff direction map resource in model tangent space (FurDirection), a model tangent-space normal map resource (Normal), a model roughness-metalness map resource (RoughnessMetallic), a model resource containing normal and tangent data, and so on. The RG channels of the fluff direction map in model tangent space record the fluff direction, and the B channel records the fluff length, which can mark the fluff region of the object to be rendered. The fluff noise map can be a black-and-white noise map used to produce the light-and-dark shading of the fluff on the object to be rendered. Of course, the fluff-rendering-related resources may also include other resources, which are not specifically limited in this embodiment of the present invention.
Referring to step S104 above, in this embodiment of the present invention, the screen-space fluff direction of the object to be rendered obtained by analysis is cached in the second buffer (a GBuffer), and the other fluff attribute information is cached in the first buffer (the BasePass GBuffer); the GBuffer here may also be called a geometry buffer. The other fluff attribute information cached in the first buffer can be used for lighting calculation, and the screen-space fluff direction cached in the second buffer can be used for the subsequent radial blurring of the object to be rendered.
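The two buffers described above can be pictured as a small data structure. The sketch below is an illustrative Python/NumPy layout only; the field names (base_color, fluff_dir_ss, etc.) are assumptions for exposition, and a real engine would allocate these as GPU render targets rather than CPU arrays:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class FluffGBuffers:
    """Illustrative layout of the two geometry buffers used by the method.

    The first buffer holds the attributes consumed by the lighting pass;
    the second is the dedicated buffer added for the screen-space fluff
    direction. All field names are assumptions for illustration only.
    """
    width: int
    height: int
    # First buffer (BasePass GBuffer): inputs to the lighting calculation.
    base_color: np.ndarray = field(init=False)
    roughness_metallic: np.ndarray = field(init=False)
    world_normal: np.ndarray = field(init=False)
    # Second buffer: screen-space fluff direction for the radial blur pass.
    fluff_dir_ss: np.ndarray = field(init=False)

    def __post_init__(self):
        shape3 = (self.height, self.width, 3)
        self.base_color = np.zeros(shape3)
        self.roughness_metallic = np.zeros((self.height, self.width, 2))
        self.world_normal = np.zeros(shape3)
        # Each component of the cached direction is limited to [-1, 1].
        self.fluff_dir_ss = np.zeros(shape3)
```

The split mirrors the design choice in the text: lighting reads only the first buffer, while the added render pass reads only the second, so neither pass needs to re-render the model.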
In one embodiment of the present invention, referring to FIG. 2, the process of analyzing the screen-space fluff direction of the object to be rendered according to the fluff-rendering-related resources may include steps S1041 to S1044.
Step S1041: extract the RG channel values and the B channel value from the model tangent-space normal map resource.
Step S1042: compute the tangent-space fluff direction from the extracted RG channel values.
In this embodiment of the present invention, the RG channels of the fluff direction map in model tangent space record the fluff direction, so the tangent-space fluff direction can be computed effectively from the RG channel values; it has three components XYZ, where XY = RG*2 - 1 and Z = sqrt(1 - XY*XY).
Step S1043: multiply the tangent-space fluff direction, the extracted B channel value and the preset fluff length coefficient to obtain the real tangent-space fluff direction.
The preset fluff length coefficient in this embodiment of the present invention may be set by the user, and its specific value is not limited in the embodiments of the present invention.
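Steps S1041 to S1043 amount to a small amount of per-pixel arithmetic. The following is a minimal NumPy sketch of the decode, assuming the map channels have already been sampled into floats in [0, 1]; the function name and arguments are hypothetical:

```python
import numpy as np

def decode_fluff_direction(rg, b, length_coeff):
    """Decode the real tangent-space fluff direction (steps S1041-S1043).

    rg           -- RG channel values sampled from the map, each in [0, 1]
    b            -- B channel value (encoded fluff length), in [0, 1]
    length_coeff -- the user-set preset fluff length coefficient
    """
    rg = np.asarray(rg, dtype=np.float64)
    # XY = RG*2 - 1 remaps [0, 1] channel values to [-1, 1] components.
    xy = rg * 2.0 - 1.0
    # Z = sqrt(1 - XY.XY) reconstructs the third component of a unit vector.
    z = np.sqrt(max(0.0, 1.0 - float(xy @ xy)))
    direction = np.array([xy[0], xy[1], z])
    # Step S1043: scale by the encoded length and the preset coefficient.
    return direction * b * length_coeff

# A neutral RG value of 0.5 decodes to a direction along +Z.
d = decode_fluff_direction([0.5, 0.5], 1.0, 1.0)
```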
Step S1044: convert the real tangent-space fluff direction to the screen-space fluff direction.
In one embodiment of the present invention, the screen-space fluff direction may be called the fluff direction in NDC (Normalized Device Coordinates) space. When performing step S1044 to convert the real tangent-space fluff direction to the screen-space fluff direction, the real tangent-space fluff direction must undergo several spatial transformations so that it is limited to the range -1 to 1, which simplifies the subsequent radial blurring.
First, the real tangent-space fluff direction is converted to the real world-space fluff direction. In this embodiment, a TBN matrix composed of the three tangent-space vectors, that is, the tangent direction, the bitangent direction and the normal direction, can be used for this conversion. Then, the real world-space fluff direction is converted to the real projection-space fluff direction; in this embodiment, the real world-space fluff direction can be multiplied by the view matrix and the projection matrix to obtain the real projection-space fluff direction. Finally, perspective division is used to convert the real projection-space fluff direction to the screen-space fluff direction. The direction finally converted to screen space is limited to the range -1 to 1.
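The chain of transformations in step S1044 can be sketched as follows. This is an illustrative NumPy version only: the TBN basis and the view/projection matrices are placeholder arguments, a real renderer performs this per vertex or per pixel on the GPU, and the handling of directions with w = 0 is an assumption of this sketch:

```python
import numpy as np

def tangent_dir_to_screen(dir_ts, tangent, bitangent, normal, view, proj):
    """Carry a tangent-space fluff direction into screen (NDC) space (S1044)."""
    # TBN matrix: columns are the tangent, bitangent and normal vectors,
    # so it maps tangent-space vectors into world space.
    tbn = np.column_stack([tangent, bitangent, normal])
    dir_ws = tbn @ dir_ts
    # Treat the direction as a homogeneous vector (w = 0 keeps it
    # translation-free) and multiply by the view and projection matrices.
    dir_clip = proj @ view @ np.append(dir_ws, 0.0)
    # Perspective division maps projection space to NDC; guard against
    # w == 0, which can occur for pure directions.
    w = dir_clip[3] if abs(dir_clip[3]) > 1e-8 else 1.0
    ndc = dir_clip[:3] / w
    # The method limits the final screen-space direction to [-1, 1].
    return np.clip(ndc, -1.0, 1.0)
```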
In one embodiment of the present invention, when performing step S104 to analyze other fluff attribute information of the object to be rendered according to the fluff-rendering-related resources, if the other fluff attribute information is the base color of the object to be rendered, the base color information can first be extracted from the model base color map resource, and the noise color information can be extracted from the fluff noise map resource. The base color information and the noise color information are then mixed to obtain the base color (BaseColor) of the object to be rendered; for example, the base color information and the noise color information can be multiplied together.
In another embodiment of the present invention, when performing step S104 to analyze other fluff attribute information of the object to be rendered according to the fluff-rendering-related resources, if the other fluff attribute information includes the world-space normal direction, the tangent-space normal direction can first be extracted from the model tangent-space normal map. The light sources provided in a game scene are usually located in world space, so correct lighting calculation requires converting the tangent-space normal direction to the world-space normal direction; the tangent-space normal direction is therefore converted to the world-space normal direction.
In another embodiment of the present invention, if the other fluff attribute information includes the roughness and metalness of the object to be rendered, when performing step S104 to analyze the other fluff attribute information according to the fluff-rendering-related resources, the roughness and metalness of the object to be rendered can be obtained directly by parsing the model roughness-metalness map resource.
Referring to step S108 above, in one embodiment of the present invention, referring to FIG. 3, the process of radially blurring the object to be rendered based on the scene color (SceneColor) information and the screen-space fluff direction includes steps S1081 to S1083.
Step S1081: determine the radial blur direction according to the screen-space fluff direction.
In this embodiment of the present invention, the current on-screen direction of the fluff can be obtained by sampling the cached screen-space fluff direction and used as the radial blur direction.
Step S1082: starting from any pixel of the current screen, step along the radial blur direction at the preset distance interval for the preset number of sampling times, obtaining that number of sampling points.
In this step, the preset number of sampling times, that is, the preset number of radial-blur samples, can be preset by the user; the specific value is not limited in the embodiments of the present invention.
In one embodiment of the present invention, the preset distance interval may be the product of a preset mirror blur distance coefficient and the fluff length in screen space. The preset mirror blur distance coefficient can also be preset by the user, and its specific value is not limited in the embodiments of the present invention.
Step S1083: acquire the scene color information corresponding to the sampling points, average the acquired scene color information, and determine the fluff pixel value of the object to be rendered according to the average.
In this embodiment of the present invention, since there are the preset number of sampling points, the same number of scene colors are acquired. The acquired colors are averaged, and the fluff pixel value of the object to be rendered is determined according to the average; the resulting fluff pixel value reflects the radial blurring of the fluff. That is, in this embodiment the pixel values of the sampled colors are processed by the radial blur method to obtain radially blurred pixel values, and once the pixels are rendered, the new fluff effect is displayed on the screen. Thus, embodiments of the present invention can render the fluff effect of the object to be rendered effectively and quickly.
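Steps S1081 to S1083 can be sketched as a simple sampling loop over the scene color. This Python/NumPy version is illustrative only (CPU-side, one pixel at a time); the function and parameter names are hypothetical, and clamping samples to the screen edge is an assumption of this sketch rather than something stated in the text:

```python
import numpy as np

def radial_blur_fluff(scene_color, fluff_dir, pixel, num_samples, step):
    """Radially blur one pixel along the fluff direction (steps S1081-S1083).

    scene_color -- H x W x 3 array of lit scene colors
    fluff_dir   -- 2D screen-space fluff direction used as the blur direction
    pixel       -- (x, y) starting pixel
    num_samples -- preset number of radial-blur samples
    step        -- preset distance interval in pixels (the text derives it
                   from a distance coefficient times the screen-space
                   fluff length)
    """
    h, w = scene_color.shape[:2]
    samples = []
    for i in range(num_samples):
        # Step along the blur direction from the starting pixel.
        x = int(round(pixel[0] + fluff_dir[0] * step * i))
        y = int(round(pixel[1] + fluff_dir[1] * step * i))
        # Clamp so every sample lands on a valid pixel (sketch assumption).
        x = min(max(x, 0), w - 1)
        y = min(max(y, 0), h - 1)
        samples.append(scene_color[y, x])
    # Step S1083: the fluff pixel value is the average of the sampled colors.
    return np.mean(samples, axis=0)
```

On a uniform scene color the average leaves the pixel unchanged; along a color gradient it smears the scene color in the fluff direction, which is the blur effect the text describes.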
Based on the same inventive concept, an embodiment of the present invention further provides a radial blur-based fluff rendering device. FIG. 4 shows a schematic structural diagram of a radial blur-based fluff rendering device according to an embodiment of the present invention. Referring to FIG. 4, the device includes an acquisition module 410, an analysis module 420, a calculation module 430 and a processing module 440.
The acquisition module 410 is adapted to acquire fluff-rendering-related resources of an object to be rendered;
the analysis module 420 is adapted to analyze, according to the fluff-rendering-related resources, the fluff direction in the screen space of the object to be rendered and other fluff attribute information, cache the other fluff attribute information in a first buffer, and cache the screen-space fluff direction in a second buffer;
the calculation module 430 is adapted to perform lighting calculation based on the other fluff attribute information cached in the first buffer to obtain scene color information of the object to be rendered;
the processing module 440 is adapted to perform radial blurring on the object to be rendered based on the scene color information and the screen-space fluff direction cached in the second buffer, to obtain fluff pixel values of the object to be rendered.
In one embodiment of the present invention, the fluff-rendering-related resources of the object to be rendered include at least one of: a model base color map resource, a fluff noise map resource, a fluff direction map resource in model tangent space, a model tangent-space normal map resource, a model roughness-metalness map resource, and a model resource containing normal and tangent data.
In one embodiment of the present invention, the analysis module 420 is further adapted to: extract the RG channel values and the B channel value from the model tangent-space normal map resource; compute the tangent-space fluff direction from the extracted RG channel values; multiply the tangent-space fluff direction, the extracted B channel value and the preset fluff length coefficient to obtain the real tangent-space fluff direction; and convert the real tangent-space fluff direction to the screen-space fluff direction.
In one embodiment of the present invention, the analysis module 420 is further adapted to: after converting the real tangent-space fluff direction to the real world-space fluff direction, convert the real world-space fluff direction to the real projection-space fluff direction; and use perspective division to convert the real projection-space fluff direction to the screen-space fluff direction.
In one embodiment of the present invention, the other fluff attribute information includes the base color of the object to be rendered, and the analysis module 420 is further adapted to: extract the base color information from the model base color map resource and the noise color information from the fluff noise map resource; and mix the base color information and the noise color information to obtain the base color of the object to be rendered.
In one embodiment of the present invention, the other fluff attribute information further includes the world-space normal direction, and the analysis module 420 is further adapted to: extract the tangent-space normal direction from the model tangent-space normal map, and convert the tangent-space normal direction to the world-space normal direction.
In one embodiment of the present invention, the other fluff attribute information further includes the roughness and metalness of the object to be rendered, and the analysis module 420 is further adapted to: obtain the roughness and metalness of the object to be rendered by parsing the model roughness-metalness map resource.
In one embodiment of the present invention, the processing module 440 is further adapted to: determine the radial blur direction according to the screen-space fluff direction; starting from any pixel of the current screen, step along the radial blur direction at the preset distance interval for the preset number of sampling times to obtain that number of sampling points; and acquire the scene color information corresponding to the sampling points, average the acquired scene color information, and determine the fluff pixel value of the object to be rendered according to the average.
In one embodiment of the present invention, the preset distance interval includes: the product of a preset mirror blur distance coefficient and the fluff length in screen space.
本发明的各个部件实施例可以以硬件实现,或者以在一个或者多个处理器上运行的软件模块实现,或者以它们的组合实现。本领域的技术人员应当理解,可以在实践中使用微处理器或者数字信号处理器 (DSP)来实现根据本发明实施例的基于径向模糊的绒毛渲染装置中的一些或者全部部件的一些或者全部功能。本发明还可以实现为用于执行这里所描述的方法的一部分或者全部的设备或者装置程序(例如,计算机程序和计算机程序产品)。这样的实现本发明的程序可以存储在计算机可读介质上,或者可以具有一个或者多个信号的形式。这样的信号可以从因特网网站上下载得到,或者在载体信号上提供,或者以任何其他形式提供。
例如,图5示出了可以实现根据本发明的基于径向模糊的绒毛渲染方法的服务器,例如应用服务器。该服务器传统上包括处理器510和以存储器520形式的计算机程序产品或者计算机存储介质。存储器520可以是诸如闪存、EEPROM(电可擦除可编程只读存储器)、EPROM、硬盘或者ROM之类的电子存储器。存储器520具有用于执行上述方法中的任何方法步骤的程序代码531的存储空间530。例如,用于程序代码的存储空间530可以包括分别用于实现上面的方法中的各种步骤的各个程序代码531。这些程序代码可以从一个或者多个计算机程序产品中读出或者写入到这一个或者多个计算机程序产品中。这些计算机程序产品包括诸如硬盘,紧致盘(CD)、存储卡或者软盘之类的程序代码载体。这样的计算机程序产品通常为如参考图6所述的便携式或者固定存储单元。该存储单元可以具有与图5的服务器中的存储器520类似布置的存储段、存储空间等。程序代码可以例如以适当形式进行压缩。通常,存储单元包括计算机可读代码531’,即可以由例如诸如510之类的处理器读取的代码,这些代码当由服务器运行时,导致该服务器执行上面所描述的方法中的各个步骤。
Reference herein to "one embodiment", "an embodiment" or "one or more embodiments" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Furthermore, note that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.
In the specification provided herein, numerous specific details are set forth. It will be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure an understanding of this specification.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
Furthermore, it should be noted that the language used in this specification has been selected principally for purposes of readability and instruction, and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is illustrative rather than restrictive of the scope of the invention, which is defined by the appended claims.

Claims (12)

  1. A radial-blur-based fluff rendering method, comprising:
    acquiring fluff rendering resources of an object to be rendered;
    analyzing, on the basis of the fluff rendering resources, a screen-space fluff direction and other fluff attribute information of the object to be rendered, caching the other fluff attribute information in a first buffer, and caching the screen-space fluff direction in a second buffer;
    performing lighting computation on the basis of the other fluff attribute information cached in the first buffer to obtain scene color information of the object to be rendered;
    performing radial blurring on the object to be rendered on the basis of the scene color information and the screen-space fluff direction cached in the second buffer, to obtain fluff pixel values of the object to be rendered.
  2. The method according to claim 1, wherein the fluff rendering resources of the object to be rendered comprise:
    at least one of a model base-color texture resource, a fluff noise texture resource, a model tangent-space fluff-direction texture resource, a model tangent-space normal texture resource, a model roughness-metallic texture resource, and a model resource containing normal and tangent data.
  3. The method according to claim 2, wherein analyzing the screen-space fluff direction of the object to be rendered on the basis of the fluff rendering resources comprises:
    extracting RG channel values and a B channel value from the model tangent-space normal texture resource;
    computing the tangent-space fluff direction on the basis of the extracted RG channel values;
    multiplying the tangent-space fluff direction by the extracted B channel value and a preset fluff length coefficient to obtain a true tangent-space fluff direction;
    converting the true tangent-space fluff direction into the screen-space fluff direction.
  4. The method according to claim 3, wherein converting the true tangent-space fluff direction into the screen-space fluff direction comprises:
    converting the true tangent-space fluff direction into a true world-space fluff direction, and then converting the true world-space fluff direction into a true projection-space fluff direction;
    converting the true projection-space fluff direction into the screen-space fluff direction by means of perspective division.
  5. The method according to any one of claims 2-4, wherein the other fluff attribute information comprises a base color of the object to be rendered, and analyzing the other fluff attribute information of the object to be rendered on the basis of the fluff rendering resources comprises:
    extracting base color information from the model base-color texture resource, and extracting noise color information from the fluff noise texture resource;
    blending the base color information with the noise color information to obtain the base color of the object to be rendered.
  6. The method according to any one of claims 2-4, wherein the other fluff attribute information further comprises a world-space normal direction, and analyzing the other fluff attribute information of the object to be rendered on the basis of the fluff rendering resources comprises:
    extracting a tangent-space normal direction from the model tangent-space normal texture, and converting the tangent-space normal direction into the world-space normal direction.
  7. The method according to any one of claims 2-4, wherein the other fluff attribute information further comprises roughness and metallic values of the object to be rendered, and analyzing the other fluff attribute information of the object to be rendered on the basis of the fluff rendering resources comprises:
    parsing the roughness and metallic values of the object to be rendered from the model roughness-metallic texture resource.
  8. The method according to any one of claims 1-4, wherein performing radial blurring on the object to be rendered on the basis of the scene color information and the screen-space fluff direction cached in the second buffer, to obtain fluff pixel values of the object to be rendered, comprises:
    determining a radial blur direction on the basis of the screen-space fluff direction;
    starting from any pixel of the current screen, stepping along the radial blur direction at a preset distance interval for a preset number of sampling times to obtain the preset number of sampling points;
    acquiring scene color information corresponding to the sampling points, averaging the acquired scene color information, and determining the fluff pixel value of the object to be rendered on the basis of the average.
  9. The method according to claim 8, wherein
    the preset distance interval comprises: the product of a preset radial-blur distance coefficient and the screen-space fluff length.
  10. A radial-blur-based fluff rendering apparatus, comprising:
    an acquisition module, adapted to acquire fluff rendering resources of an object to be rendered;
    an analysis module, adapted to analyze, on the basis of the fluff rendering resources, a screen-space fluff direction and other fluff attribute information of the object to be rendered, to cache the other fluff attribute information in a first buffer, and to cache the screen-space fluff direction in a second buffer;
    a computation module, adapted to perform lighting computation on the basis of the other fluff attribute information cached in the first buffer to obtain scene color information of the object to be rendered;
    a processing module, adapted to perform radial blurring on the object to be rendered on the basis of the scene color information and the screen-space fluff direction cached in the second buffer, to obtain fluff pixel values of the object to be rendered.
  11. A computer program, comprising computer-readable code which, when run on a server, causes the server to perform the radial-blur-based fluff rendering method according to any one of claims 1-9.
  12. A computer storage medium storing the computer program according to claim 11.
PCT/CN2020/130324 2020-07-03 2020-11-20 Radial blur-based fluff rendering method and apparatus, and storage medium WO2022000953A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010631548.1 2020-07-03
CN202010631548.1A CN111862290B (zh) 2020-07-03 2020-07-03 Radial blur-based fluff rendering method and apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2022000953A1 true WO2022000953A1 (zh) 2022-01-06

Family

ID=73152111

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/130324 WO2022000953A1 (zh) 2020-07-03 2020-11-20 Radial blur-based fluff rendering method and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN111862290B (zh)
WO (1) WO2022000953A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693856A (zh) * 2022-05-30 2022-07-01 腾讯科技(深圳)有限公司 Object generation method and apparatus, computer device, and storage medium

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN111862290B (zh) 2020-07-03 2021-05-11 完美世界(北京)软件科技发展有限公司 Radial blur-based fluff rendering method and apparatus, and storage medium
CN113836705A (zh) * 2021-09-06 2021-12-24 网易(杭州)网络有限公司 Lighting data processing method and apparatus, storage medium, and electronic apparatus
CN116883567A (zh) * 2023-07-07 2023-10-13 上海散爆信息技术有限公司 Fluff rendering method and apparatus

Citations (5)

Publication number Priority date Publication date Assignee Title
US5758046A (en) * 1995-12-01 1998-05-26 Lucas Digital, Ltd. Method and apparatus for creating lifelike digital representations of hair and other fine-grained images
CN106575445A (zh) * 2014-09-24 2017-04-19 英特尔公司 Fur avatar animation
CN110060321A (zh) * 2018-10-15 2019-07-26 叠境数字科技(上海)有限公司 Fast real-time hair rendering method based on real materials
CN110648386A (zh) * 2019-07-23 2020-01-03 完美世界(北京)软件科技发展有限公司 Anti-aliasing method and system for graphics primitives
CN111862290A (zh) * 2020-07-03 2020-10-30 完美世界(北京)软件科技发展有限公司 Radial blur-based fluff rendering method and apparatus, and storage medium

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN102682420A (zh) * 2012-03-31 2012-09-19 北京百舜华年文化传播有限公司 Method and apparatus for converting real-person images into cartoon-style images
CN102982575B (zh) * 2012-11-29 2015-05-06 杭州挪云科技有限公司 Ray-tracing-based hair rendering method
WO2014138925A1 (en) * 2013-03-15 2014-09-18 Interaxon Inc. Wearable computing apparatus and method
CN104268922B (zh) * 2014-09-03 2017-06-06 广州博冠信息科技有限公司 Image rendering method and image rendering apparatus
CN104574479B (zh) * 2015-01-07 2017-08-25 北京春天影视科技有限公司 Fast generation method for a single bird feather in three-dimensional animation
CN107204036B (zh) * 2016-03-16 2019-01-08 腾讯科技(深圳)有限公司 Method and apparatus for generating hair images
CN108510500B (zh) * 2018-05-14 2021-02-26 深圳市云之梦科技有限公司 Hair-layer processing method and system for a virtual character image based on facial skin color detection
CN110136238B (zh) * 2019-04-02 2023-06-23 杭州小影创新科技股份有限公司 AR painting method combining a physical lighting model


Non-Patent Citations (2)

Title
GOD OF ZHE FU: "How to calculate the B channel from the RG channel of the normal map?", ZHIHU, CN, pages 1 - 6, XP009533288, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/42668577> *
SHAOHUI JIAO ; BIN SHENG ; HANQIU SUN ; ENHUA WU: "Furry stylized texel-rendering in images and videos", INFORMATION, COMMUNICATIONS AND SIGNAL PROCESSING, 2009. ICICS 2009. 7TH INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 8 December 2009 (2009-12-08), Piscataway, NJ, USA , pages 1 - 5, XP031618112, ISBN: 978-1-4244-4656-8 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN114693856A (zh) * 2022-05-30 2022-07-01 腾讯科技(深圳)有限公司 Object generation method and apparatus, computer device, and storage medium
CN114693856B (zh) * 2022-05-30 2022-09-09 腾讯科技(深圳)有限公司 Object generation method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN111862290A (zh) 2020-10-30
CN111862290B (zh) 2021-05-11

Similar Documents

Publication Publication Date Title
WO2022000953A1 (zh) Radial blur-based fluff rendering method and apparatus, and storage medium
CN107154063B (zh) Shape setting method and apparatus for an image display area
US9928637B1 (en) Managing rendering targets for graphics processing units
KR101373020B1 (ko) Method and system for generating animation art effects in static images
CN109255767B (zh) Image processing method and apparatus
US9626733B2 (en) Data-processing apparatus and operation method thereof
US10930059B2 (en) Method and apparatus for processing virtual object lighting inserted into a 3D real scene
US8854392B2 (en) Circular scratch shader
CN106447756B (zh) Method and system for generating a user-customized computer-generated animation
US10733793B2 (en) Indexed value blending for use in image rendering
CN112581632B (zh) Housing resource data processing method and apparatus
US8824778B2 (en) Systems and methods for depth map generation
CN110363837B (zh) Method and apparatus for processing texture images in a game, electronic device, and storage medium
WO2016178807A1 (en) Painterly picture generation
CN112102145B (zh) Image processing method and apparatus
KR20180080618A (ko) Augmented-reality-based realistic rendering method and apparatus
CN108776963B (zh) Reverse image authentication method and system
US11037311B2 (en) Method and apparatus for augmenting data in monitoring video
Jingtao et al. End-to-end deep neural network for illumination consistency and global illumination
CN114286163B (zh) Sequence-image recording method, apparatus, device, and storage medium
Xiong et al. The Application of Graphics and Images in Digital Technology Environment
US20240233287A9 (en) Display method and apparatus and electronic device
WO2023181904A1 (ja) Information processing device, information processing method, and recording medium
Tang et al. Research on 3D Rendering Effect under Multi-strategy
CN117909003A (zh) System and method for recreating the physical instant-film printing experience for digital photos

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 20942820; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 20942820; Country of ref document: EP; Kind code of ref document: A1