CN115063517A - Flash effect rendering method and device in game, storage medium and electronic equipment - Google Patents

Flash effect rendering method and device in game, storage medium and electronic equipment

Info

Publication number
CN115063517A
CN115063517A (application CN202210637233.7A)
Authority
CN
China
Prior art keywords
gradient
target
vector
probability
distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210637233.7A
Other languages
Chinese (zh)
Inventor
金紫凤
陈家挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210637233.7A priority Critical patent/CN115063517A/en
Publication of CN115063517A publication Critical patent/CN115063517A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60: Methods for processing data by generating or executing the game program
    • A63F 2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)

Abstract

The disclosure relates to the technical field of computers, and in particular to a flash effect rendering method and device in a game, a storage medium, and an electronic device. The method includes: calculating the gradient of a pixel point according to a macroscopic normal vector, an observation vector, and an illumination vector of the pixel point in a model to be rendered, wherein the gradient is the projection of a micro-surface normal in world space onto a two-dimensional tangent space; determining a target LOD level to which the pixel point belongs based on the texture coordinates of the pixel point and the target particle granularity required by the flash effect, wherein the target LOD level is configured with a gradient probability distribution map; sampling the gradient probability distribution map using the target gradient, and determining the target distribution probability of the pixel point according to the sampling result; and performing illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, and rendering the flash effect on the model to be rendered according to the obtained illumination result. The present disclosure improves the rendering efficiency and quality of the flash effect.

Description

Flash effect rendering method and device in game, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for rendering flash effect in a game, a computer storage medium, and an electronic device.
Background
With the development of computer and Internet technologies, online games have become increasingly popular. Because online games present scenes resembling real life, users can experience the visual impact of games in an immersive manner, which gives games strong appeal and realism. Real life is full of glittering materials, such as frosted metal surfaces, automobile paint, the fine sparkle of cosmetics, crystal-grain decorations on cloth, the microcrystalline glitter of snow and sand, and glistening water surfaces, and rendering these flash effects in games has become a non-trivial task in game production.
In the related art, the flash effect is rendered by having an art worker produce a high-precision normal map, but such maps are costly to produce and cannot be reused. In another related technology, the flash effect is generated using a hand-authored noise map, but different noise maps must be provided for different rendering requirements, which increases production cost; moreover, authoring such maps is difficult, must be done by experienced art personnel, and cannot yield a rendering result that conforms to real physical phenomena.
It is to be noted that the information disclosed in the background section above is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to a method and an apparatus for rendering a flash effect in a game, a computer storage medium, and an electronic device, so as to reduce the rendering cost and difficulty of the flash effect in the game at least to a certain extent and improve the rendering quality of the flash effect.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, there is provided a flash effect rendering method in a game, including: calculating the gradient of a pixel point according to a macro normal vector, an observation vector and an illumination vector of the pixel point in a model to be rendered, wherein the gradient is the projection of a micro surface normal in a world space on a two-dimensional tangent space, and the macro normal vector, the observation vector and the illumination vector are vectors in the world space; determining a target LOD level to which the pixel point belongs based on texture coordinates of the pixel point and target particle granularity required by a flash effect, wherein the target LOD level is configured with a slope probability distribution map, and the slope probability distribution map comprises distribution probabilities of different slope values in different coordinate directions; sampling the gradient probability distribution map by using a target gradient, and determining a target distribution probability of the pixel points according to a sampling result, wherein the target distribution probability is used for representing the probability that the projection direction is a first target direction, the target gradient is the gradient of the pixel points corresponding to the texels, and the target gradient is obtained based on the gradient of the pixel points; and performing illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector and the half-angle vector, and performing flash effect rendering on the model to be rendered according to the obtained illumination result.
In an exemplary embodiment of the present disclosure, the calculating, according to a macroscopic normal vector, an observation vector, and an illumination vector of a pixel point in a model to be rendered, a gradient of the pixel point includes: constructing a tangent space according to the macroscopic normal vector, and converting the observation vector and the illumination vector into the tangent space; calculating the vector sum of the observation vector and the illumination vector in the tangent space, and carrying out standardization processing on the vector sum to obtain a half-angle vector; the gradient is derived based on the half angle vector.
In an exemplary embodiment of the present disclosure, the deriving the gradient based on the half-angle vector includes: dividing the first component and the second component of the half-angle vector by the third component of the half-angle vector to convert the half-angle vector from a three-dimensional vector to a two-dimensional plane vector; and taking the two-dimensional plane vector as the gradient.
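The half-angle vector and slope derivation above can be sketched as follows; the helper-vector choice used to build the tangent frame is an illustrative assumption, not part of the disclosed method:

```python
import numpy as np

def slope_from_vectors(macro_normal, view_vec, light_vec):
    """Sketch: derive the pixel slope (projection of the micro-surface
    normal onto the 2D tangent plane) from world-space vectors."""
    n = macro_normal / np.linalg.norm(macro_normal)
    # Build an orthonormal tangent frame around the macroscopic normal
    # (the helper-vector choice here is an assumption).
    helper = np.array([0.0, 1.0, 0.0]) if abs(n[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    t = np.cross(helper, n); t /= np.linalg.norm(t)
    b = np.cross(n, t)
    to_tangent = np.stack([t, b, n])          # rows transform world -> tangent space
    v_t = to_tangent @ (view_vec / np.linalg.norm(view_vec))
    l_t = to_tangent @ (light_vec / np.linalg.norm(light_vec))
    h = v_t + l_t                             # vector sum of observation and illumination
    h /= np.linalg.norm(h)                    # normalized: the half-angle vector
    # Divide the first two components by the third to convert the
    # 3D half-angle vector into a 2D plane vector, i.e. the slope.
    return h[:2] / h[2]
```

For a surface viewed and lit straight along its normal, the half vector coincides with the normal and the slope is zero.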
In an exemplary embodiment of the present disclosure, the determining a target LOD level to which the pixel point belongs based on the texture coordinates of the pixel point and the target particle granularity required for the flash effect includes: differentiating the texture coordinates uv with respect to the screen-space coordinates xy to obtain a differential result, and scaling the differential result according to the value of the target particle granularity to obtain the texture coordinate differentials; calculating the lengths of the vectors corresponding to the texture coordinate differentials to obtain a first length value and a second length value, wherein the first length value is larger than the second length value; and determining the target LOD level by combining the second length value and a preset LOD level number threshold.
In an exemplary embodiment of the present disclosure, the target LOD level is determined by combining the second length value and a preset LOD level number threshold, with the following formula:
L=max(0,NLEVELS-1+log2(minorLength))
wherein L is the target LOD level, NLEVELS is the LOD level number threshold, and minorLength is the second length value.
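The level formula can be evaluated directly, as sketched below; the default NLEVELS value of 16 is an assumed example, not a value fixed by the disclosure:

```python
import math

def target_lod_level(minor_length, nlevels=16):
    """L = max(0, NLEVELS - 1 + log2(minorLength)).
    minor_length is the second (smaller) length value of the
    texture-coordinate differentials; nlevels is the preset
    LOD level number threshold (16 is an assumed default)."""
    return max(0.0, nlevels - 1 + math.log2(minor_length))
```

A footprint covering a whole texel row (minorLength = 1) lands on the coarsest level, and very small footprints clamp to level 0.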
In an exemplary embodiment of the disclosure, before the sampling the gradient probability distribution map by using the gradient value of the target gradient and determining the target distribution probability of the pixel point according to the sampling result, the method further includes: according to the texture coordinate of the pixel point and the texture coordinate differential, constructing an elliptical area of the pixel point in a texture space; and traversing the texels in the elliptical area, and rotating and scaling the gradient of the pixel point aiming at each texel to obtain the target gradient of each texel.
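The elliptical-area construction can be sketched with an EWA-style implicit ellipse test derived from the texture-coordinate differentials; the implicit-conic form and the texel-center convention below are assumptions, not details given by the disclosure:

```python
import numpy as np

def texels_in_ellipse(uv, duv_dx, duv_dy, tex_size):
    """Sketch: collect integer texel coordinates inside the elliptical
    pixel footprint spanned by the texture-coordinate differentials
    duv_dx = (du/dx, dv/dx) and duv_dy = (du/dy, dv/dy)."""
    # Implicit conic A*du^2 + B*du*dv + C*dv^2 <= F describing the footprint.
    A = duv_dx[1] ** 2 + duv_dy[1] ** 2
    B = -2.0 * (duv_dx[0] * duv_dx[1] + duv_dy[0] * duv_dy[1])
    C = duv_dx[0] ** 2 + duv_dy[0] ** 2
    F = (duv_dx[0] * duv_dy[1] - duv_dx[1] * duv_dy[0]) ** 2
    center = np.asarray(uv) * tex_size
    radius = max(np.linalg.norm(duv_dx), np.linalg.norm(duv_dy)) * tex_size
    texels = []
    for ty in range(int(center[1] - radius), int(center[1] + radius) + 2):
        for tx in range(int(center[0] - radius), int(center[0] + radius) + 2):
            du = (tx + 0.5) / tex_size - uv[0]   # texel center offset in uv space
            dv = (ty + 0.5) / tex_size - uv[1]
            if A * du * du + B * du * dv + C * dv * dv <= F:
                texels.append((tx, ty))
    return texels
```

For axis-aligned, equal differentials the conic degenerates to a circle, which is a convenient sanity check.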
In an exemplary embodiment of the present disclosure, traversing texels in the elliptical area, and performing rotation and scaling processing on a slope of the pixel point for each texel to obtain a target slope of each texel includes: acquiring a random rotation angle corresponding to each texel, and rotating the gradient of the pixel point according to the random rotation angle to obtain a rotation gradient; and calculating a gradient scaling factor by combining preset roughness and a roughness scaling factor, and scaling the rotation gradient according to the gradient scaling factor to generate the target gradient of each texel.
In an exemplary embodiment of the present disclosure, the obtaining a random rotation angle corresponding to the texel, and rotating the slope of the pixel point according to the random rotation angle to obtain a rotation slope includes: converting a texel index of the texel into a consistency index according to the target LOD level, wherein the consistency index indicates consistency of random rotation angles corresponding to the texel on different LOD levels; generating a random numerical value based on the consistency index, and taking the random numerical value as the random rotation angle; and rotating the gradient of the pixel point according to the random rotation angle to obtain a rotation gradient.
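The consistency index and random rotation can be sketched as below; the spatial hash is an illustrative stand-in for whatever deterministic hash a shader would use, and is an assumption:

```python
import math

def random_rotation_angle(texel_index, lod_level):
    """Sketch: map a texel index to a per-texel random angle that stays
    consistent across LOD levels. Shifting the index by the LOD level
    makes texels that merge at a coarser level share one consistency
    index, hence one angle."""
    cx = texel_index[0] >> lod_level
    cy = texel_index[1] >> lod_level
    h = (cx * 73856093) ^ (cy * 19349663)   # simple spatial hash (assumed)
    return (h % 360) * math.pi / 180.0      # random angle in radians

def rotate_slope(slope, angle):
    """Rotate a 2D slope by the given angle to obtain the rotation slope."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * slope[0] - s * slope[1], s * slope[0] + c * slope[1])
```

Texels (4, 4) and (5, 5) collapse to the same coarse texel at level 2, so they receive the same angle.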
In an exemplary embodiment of the present disclosure, the slope scaling factor is calculated according to the following formula in combination with a preset roughness and roughness scaling factor:
S_α = (α_x · α / α_dict, α_y · α / α_dict)

wherein S_α is the gradient scaling factor, α_dict is the preset roughness adopted when generating the gradient probability distribution map, α is obtained from the roughness map corresponding to the model to be rendered, and α_x and α_y are the required roughness scaling factors of the rotation gradient in the respective decomposition directions.
In an exemplary embodiment of the present disclosure, the sampling the gradient probability distribution map by using a gradient value of a target gradient, and determining a target distribution probability of the pixel point according to a sampling result includes: sampling the slope probability distribution map by using the slope value of the target slope, and calculating the joint distribution probability of the pixel point corresponding to the texel according to the sampling result; determining the Gaussian weight corresponding to the texel according to the distance between the texel and the texture coordinate of the pixel point; and calculating the target distribution probability based on the joint distribution probability and the Gaussian weight corresponding to the texel.
In an exemplary embodiment of the disclosure, the sampling the slope probability distribution map by using the slope value of the target slope, and calculating the joint distribution probability of the texel corresponding to the pixel point according to the sampling result include: carrying out orthogonal decomposition on the target gradient to obtain a first gradient value and a second gradient value; sampling the gradient distribution map by using the first gradient value to obtain a first distribution probability; sampling the gradient distribution map by using the second gradient value to obtain a second distribution probability; and performing fusion calculation on the first distribution probability and the second distribution probability to generate the joint distribution probability.
In an exemplary embodiment of the disclosure, the fusion calculation is performed on the first distribution probability and the second distribution probability by using the following formula, so as to generate the joint distribution probability:
P_3 = (α_dict² / (α_x · α_y · α²)) · P_1 · P_2

wherein P_3 is the joint distribution probability, P_1 is the first distribution probability, P_2 is the second distribution probability, α_dict is the preset roughness adopted when generating the gradient probability distribution map, α is obtained from the roughness map corresponding to the model to be rendered, and α_x and α_y are the required roughness scaling multiples of the target gradient in the respective decomposition directions.
In an exemplary embodiment of the present disclosure, the calculating the target distribution probability based on the joint distribution probability and the gaussian weight corresponding to the texel includes: and carrying out weighted average on the joint probability distribution corresponding to the texels in the elliptical area based on the Gaussian weight to obtain the target distribution probability.
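The Gaussian-weighted combination over the elliptical area can be sketched as below; the falloff sigma is an assumed parameter, not a value given by the disclosure:

```python
import math

def target_distribution_probability(samples, sigma=0.5):
    """Sketch: weighted average of per-texel joint distribution
    probabilities over the elliptical footprint. `samples` is a list of
    (joint_probability, distance_to_pixel) pairs; each texel's weight is
    a Gaussian of its distance to the pixel's texture coordinates."""
    weights = [math.exp(-d * d / (2 * sigma * sigma)) for _, d in samples]
    total = sum(weights)
    return sum(p * w for (p, _), w in zip(samples, weights)) / total
```

When every texel carries the same joint probability, the weighted average simply returns that probability.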
In an exemplary embodiment of the present disclosure, the method further comprises: calculating the target distribution probability of the pixel points on the adjacent LOD levels of the target LOD levels; according to the target distribution probability on the adjacent LOD levels and the target distribution probability on the target LOD level, interpolating the target distribution probability on the target LOD level by adopting the following interpolation formula to obtain the interpolated target distribution probability:
P = P_L(1 - (L - [L])) + P_(L+1)(L - [L])

wherein P is the interpolated target distribution probability, L is the target LOD level, [L] denotes the integer part of L, L+1 is the adjacent LOD level, P_L is the target distribution probability on the target LOD level, and P_(L+1) is the target distribution probability on the adjacent LOD level;
and updating the target distribution probability of the pixel points on the target LOD level by using the interpolated target distribution probability.
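The interpolation formula above amounts to a linear blend weighted by the fractional part of the LOD level:

```python
import math

def interpolate_lod_probability(p_l, p_l1, lod):
    """P = P_L * (1 - (L - [L])) + P_(L+1) * (L - [L]),
    where [L] is the integer part of the (fractional) target LOD level."""
    f = lod - math.floor(lod)
    return p_l * (1 - f) + p_l1 * f
```

Halfway between two levels, the result is the midpoint of the two probabilities.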
In an exemplary embodiment of the present disclosure, before the calculating, according to a macro normal vector, an observation vector, and an illumination vector of a pixel point in a model to be rendered, a gradient of the pixel point, the method further includes: generating gradient probability distribution maps corresponding to different LOD levels. The process of generating the gradient probability distribution map of the Nth LOD level includes: generating a first Gaussian distribution function by taking a preset roughness as a parameter; obtaining 2^N numerical values using the first Gaussian distribution function; generating 2^N second Gaussian distribution functions by taking each numerical value as a mean and using a preset target standard deviation; fitting and superposing the 2^N second Gaussian distribution functions to generate a target Gaussian distribution function serving as the probability density of the target gradient; and taking the variable values of the target gradient probability density as gradient values, calculating the distribution probability corresponding to each gradient value using the target gradient probability density, and storing the gradient values in correspondence with the distribution probabilities; wherein N is an integer greater than or equal to 0.
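The per-level map generation can be sketched as below; all numeric defaults (roughness, target standard deviation, bin count, slope range, seed) are illustrative assumptions:

```python
import numpy as np

def build_slope_distribution(level_n, preset_roughness=0.5, target_std=0.02,
                             num_bins=256, seed=0):
    """Sketch: draw 2^N means from a first Gaussian parameterized by the
    preset roughness, place a narrow second Gaussian at each mean, and
    superpose them into one discretized slope probability density."""
    rng = np.random.default_rng(seed)
    means = rng.normal(0.0, preset_roughness, size=2 ** level_n)  # first Gaussian
    slopes = np.linspace(-2.0, 2.0, num_bins)                     # slope values
    pdf = np.zeros(num_bins)
    for m in means:                                               # superpose 2^N Gaussians
        pdf += np.exp(-(slopes - m) ** 2 / (2 * target_std ** 2))
    pdf /= pdf.sum() * (slopes[1] - slopes[0])                    # normalize to a density
    return slopes, pdf           # stored as (gradient value, distribution probability) pairs
```

Coarser levels use larger N, so their mixtures contain more spikes and the per-spike contribution averages out toward a smooth distribution.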
In an exemplary embodiment of the present disclosure, the performing, by combining the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, illumination calculation on the model to be rendered, and performing flash effect rendering on the model to be rendered according to an obtained illumination result includes: calculating the normal distribution probability of the pixel points based on the projection value of the micro-surface normal on the macro normal vector and the target distribution probability, wherein the normal distribution probability represents the probability that the orientation of the micro surface is a second target direction; calculating first illumination data by using the Fresnel equations in combination with the half-angle vector, the observation vector, and a preset surface base reflectivity, wherein the first illumination data represents the ratio of reflected light to total light; calculating second illumination data by adopting a geometric shadowing function according to the observation vector, the illumination vector, and the half-angle vector, wherein the second illumination data represents the amount of light blocked by the surface; calculating fused illumination data by adopting a bidirectional reflectance distribution function in combination with the normal distribution probability, the first illumination data, and the second illumination data; and multiplying the fused illumination data by a preset incident light intensity to obtain the illumination result.
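The final shading step can be sketched with a Cook-Torrance-style combination; Schlick's Fresnel approximation and the Smith-style geometry remapping constant below are assumptions standing in for the disclosure's Fresnel equations and geometric shadowing function:

```python
import math

def glint_lighting(p_normal, n_dot_v, n_dot_l, v_dot_h,
                   f0=0.04, light_intensity=1.0):
    """Sketch: combine the normal distribution probability with Fresnel
    and geometry terms in a microfacet BRDF, then scale by the preset
    incident light intensity. f0 is the preset base surface reflectivity."""
    fresnel = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5   # ratio of reflected light
    k = 0.125                                          # assumed geometry remapping constant
    g1 = lambda x: x / (x * (1 - k) + k)
    geometry = g1(n_dot_v) * g1(n_dot_l)               # surface self-shadowing/masking
    specular = (p_normal * fresnel * geometry) / max(4.0 * n_dot_v * n_dot_l, 1e-6)
    return specular * light_intensity * n_dot_l        # fused data x incident intensity
```

At normal incidence (all cosines 1) the geometry term is 1 and the result reduces to p_normal * f0 / 4.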
According to an aspect of the present disclosure, there is provided a flash effect rendering apparatus in a game, including: a gradient calculation module, configured to calculate the gradient of a pixel point according to a macro normal vector, an observation vector, and an illumination vector of the pixel point in the model to be rendered, wherein the gradient is the projection of a micro-surface normal in world space onto a two-dimensional tangent space, and the macro normal vector, the observation vector, and the illumination vector are vectors in world space; a level determining module, configured to determine a target LOD level to which the pixel point belongs based on texture coordinates of the pixel point and the target particle granularity required by the flash effect, wherein the target LOD level is configured with a gradient probability distribution map, and the gradient probability distribution map includes distribution probabilities of different gradient values in different coordinate directions; a sampling module, configured to sample the gradient probability distribution map using a target gradient and determine the target distribution probability of the pixel point according to the sampling result, wherein the target distribution probability represents the probability that the projection direction is a first target direction, the target gradient is the gradient of the pixel point corresponding to a texel, and the target gradient is obtained based on the gradient of the pixel point; and a rendering module, configured to perform illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, and to render the flash effect on the model to be rendered according to the obtained illumination result.
According to an aspect of the present disclosure, there is provided a computer storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the method of any one of the above.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any one of the above via execution of the executable instructions.
According to the flash effect rendering method in a game in the exemplary embodiments of the disclosure, the gradient of a pixel point is calculated from its macroscopic normal vector, observation vector, and illumination vector; the target LOD level to which the pixel point belongs is determined based on the texture coordinates of the pixel point and the target particle granularity required for the flash; the gradient probability distribution map of the target LOD level is sampled using the gradient value of the target gradient corresponding to the pixel point, and the target distribution probability of the pixel point is determined from the sampling result; finally, illumination calculation is performed by combining the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, so that the flash effect is rendered on the model to be rendered according to the illumination result. On the one hand, an art worker does not need to produce a normal map carrying flash effect detail information; only a conventional macroscopic normal needs to be provided, which is converted into the gradient of the pixel point so that the gradient probability distribution map can be sampled according to the gradient value of the resulting target gradient, reducing production cost and difficulty. On the other hand, corresponding gradient probability distribution maps are configured for different LOD levels, and the degree of particle definition differs between levels; sampling at different target LOD levels yields different target distribution probabilities for the pixel points, so that the final flash effect is well structured and better conforms to real physical phenomena. On yet another hand, the corresponding LOD levels are sampled according to the gradient values of different pixel points, which avoids identical target distribution probabilities and thus repetition in the final illumination result; the method can adapt to different camera changes and generate different flash effects, obtaining a higher-quality flash rendering effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 shows a flow diagram of a flash effect rendering method in a game according to an exemplary embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a comparison of target particle sizes according to an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a flow chart for determining a target LOD level to which a pixel belongs according to an exemplary embodiment of the disclosure;
FIG. 4 illustrates a graph of model rendering effects after an elliptical weight averaging method and a trilinear filtering method are employed according to an exemplary embodiment of the disclosure;
FIG. 5 illustrates a rendering effect graph with and without filtering according to an example embodiment of the present disclosure;
FIG. 6 illustrates a flow diagram for illumination computation of a model to be rendered in conjunction with a target distribution probability, an observation vector, an illumination vector, and a half angle vector, according to an exemplary embodiment of the present disclosure;
FIG. 7 shows a flowchart for calculating the slope of a pixel point according to the macroscopic normal vector, the observation vector, and the illumination vector of the pixel point in the model to be rendered according to an exemplary embodiment of the present disclosure;
FIG. 8 shows a flow diagram of one implementation of filtering pixel points, according to an example embodiment of the present disclosure;
FIG. 9 shows a flowchart for obtaining a target grade according to an example embodiment of the present disclosure;
FIG. 10 shows a flowchart for determining a target distribution probability for a pixel point according to an example embodiment of the present disclosure;
FIG. 11 illustrates a flowchart of generating a slope probability distribution map for a LOD level of the Nth tier according to an exemplary embodiment of the present disclosure;
Fig. 12 illustrates second Gaussian distribution function graphs corresponding to LOD levels 0 through 15 according to an exemplary embodiment of the present disclosure;
FIG. 13 illustrates a comparison of rendering results before and after filtering a slope according to an exemplary embodiment of the present disclosure;
fig. 14 illustrates a comparison graph of rendering results before and after adjusting the degree of attenuation of the flashlight effect according to an exemplary embodiment of the present disclosure;
fig. 15 illustrates a rendering result comparison graph using different luminance indexes according to an exemplary embodiment of the present disclosure;
FIG. 16 shows a rendering structure diagram of various flash effects according to an example embodiment of the present disclosure;
fig. 17 illustrates a structural diagram of a flash effect rendering apparatus in a game according to an exemplary embodiment of the present disclosure;
FIG. 18 shows a schematic diagram of a storage medium according to an example embodiment of the present disclosure; and
fig. 19 shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. The exemplary embodiments, however, may be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of exemplary embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus a detailed description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known structures, methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in the form of software, in one or more modules combining software and hardware, or in different networks and/or processor devices and/or microcontroller devices.
In the related art, in order to make game scenes resemble real life, flash effect rendering is also performed on materials having a flash effect in the game scene. The flash effect refers to the granular, micro-facet specular reflection on the surface of a model. In one related technology, flash effect rendering is performed by creating a high-definition normal map, but such a map has an excessively large resolution, occupies too much memory, and is not reusable; and creating high-definition normal maps, i.e., normal maps containing rich flash effect information, for different LOD (Levels of Detail) levels is costly. In another related technology, a noise map is authored to generate the flash effect, but different noise maps must be provided for different rendering requirements, which increases production cost; authoring them is difficult and places high demands on the skills of art personnel; and because the size of a noise map is limited, visible repetition easily occurs, making it hard to adapt to camera changes and degrading the quality of the flash effect.
Based on this, in an exemplary embodiment of the present disclosure, there is first provided a flash effect rendering method in a game. Referring to fig. 1, a flash effect rendering method in a game according to an embodiment of the present disclosure may include the following steps S110 to S140:
Step S110: calculate the gradient of a pixel point according to the macroscopic normal vector, the observation vector, and the illumination vector of the pixel point in the model to be rendered.
In exemplary embodiments of the present disclosure, according to micro-surface theory, a smooth or rough macroscopic surface is composed of many micro-surfaces, and the orientation of each micro-surface is determined by its micro-surface normal. The gradient (slope) of a pixel point is calculated from its macroscopic normal vector, observation vector, and illumination vector; this gradient is the projection of the world-space micro-surface normal onto the two-dimensional tangent space, so once the gradient is obtained, the micro-surface normal can be recovered. Note that the macroscopic normal vector here is not a high-precision normal containing flash-effect information supplied by artists, but an ordinary smooth-surface macroscopic normal vector. The observation vector is the vector between the world coordinates of the pixel point and the position coordinates of the virtual camera, and the illumination vector indicates the direction of sunlight in the scene.
In embodiments of the present disclosure, based on the material file of the model to be rendered, a CPU (Central Processing Unit) reads the geometry corresponding to the material file, organizes the result into vertices, and submits them to a GPU (Graphics Processing Unit). The GPU transforms the vertices through a vertex shader from model space to world space, observation space, clip space, and screen space in sequence. The output of the vertex shader is then rasterized into pixel points in screen space, which are shaded by a pixel shader; these are the pixel points of the model to be rendered in the embodiments of the present disclosure. Each pixel point carries shading information, including but not limited to a macroscopic normal vector, an illumination vector, an observation vector, light color, light intensity, texture coordinates, preset roughness, metalness, basic color information, and the like.
The material files of the model to be rendered include, but are not limited to, files pre-made in three-dimensional modeling software, such as the model's geometry file, base color map, roughness map, and metalness map. Model space, also called object space or local space: each model has its own model space, which rotates with the model. World space is the global coordinate system; in the Unity engine, a model with no parent node is located directly in world space. Observation space is the camera space, which differs from the other spaces in using a right-handed coordinate system. Clip space: a vertex is transformed from observation space into clip space by multiplication with the projection matrix, which essentially performs scaling and translation operations on each component (x, y, z) of the vertex. Screen space: the view frustum in clip space is projected onto the screen to determine the two-dimensional position coordinates of the pixel points. Any space transformation method in the related art is suitable for transforming vertices from model space through world, observation, clip, and screen space in sequence in the embodiments of the present disclosure, and details are not repeated here.
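The transform chain described above can be sketched on the CPU as follows. This is a minimal illustration using NumPy; the matrix conventions (right-handed view space, symmetric frustum, the particular viewport mapping) are choices made for the example, not taken from the patent:

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """Projection matrix: scales and translates the (x, y, z) components,
    mapping the right-handed view frustum into clip space."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def vertex_to_screen(p_model, m_model, m_view, m_proj, width, height):
    """Model -> world -> observation -> clip -> screen, as the vertex shader
    plus viewport transform would do for one vertex."""
    p = np.append(p_model, 1.0)        # homogeneous coordinate
    p_world = m_model @ p
    p_view = m_view @ p_world
    p_clip = m_proj @ p_view
    ndc = p_clip[:3] / p_clip[3]       # perspective divide
    x = (ndc[0] * 0.5 + 0.5) * width   # two-dimensional screen position
    y = (ndc[1] * 0.5 + 0.5) * height
    return np.array([x, y])
```

A point on the view axis lands at the screen centre, which gives a quick sanity check of the chain.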
It should be noted that the macroscopic normal vector, the observation vector, and the illumination vector of the embodiment of the present disclosure are vectors in the world space.
With the embodiments of the present disclosure, there is no need to author high-precision normal maps containing flash effects: the smooth-surface macroscopic normal vector, combined with the observation vector and the illumination vector, is converted into a gradient representing the projection of the world-space micro-surface normal onto the two-dimensional tangent space, so that the micro-surface normal vector can be obtained later, reducing production cost.
Step S120: determine the target LOD level to which the pixel point belongs, based on the texture coordinates of the pixel point and the target particle granularity required by the flash effect.
In an exemplary embodiment of the present disclosure, LOD (Levels of Detail) technology refers to determining the resource allocation for rendering an object model according to the position and importance of the model's nodes in the display environment, for example by reducing the face count and detail of unimportant object models.
Each target LOD level is configured with a gradient probability distribution map, which stores the distribution probabilities of different gradient values in different coordinate directions; the gradient values are obtained by orthogonally decomposing the gradient, for example into gradient values in the x direction and the y direction. The target particle granularity is determined by the actual rendering scene, and its value can be adjusted to the actual rendering requirement: the higher the LOD level, the larger the target particle granularity value, and the lower the LOD level, the smaller the value. For example, glitter particles required on a pond's water surface are larger, while those required on a metal surface are smaller; fig. 2 schematically compares target particle granularities.
In an exemplary embodiment of the present disclosure, an implementation manner for determining a target LOD level to which a pixel point belongs is provided. Determining the target LOD level to which the pixel belongs based on the texture coordinates of the pixel and the target particle size required for the flash effect may include steps S310 to S330:
Step S310: obtain the differential of the texture coordinate uv with respect to the screen-space coordinate xy, and scale the differential result by the value of the target particle granularity to obtain the texture coordinate differential:

$\left(A\frac{\partial u}{\partial x},\ A\frac{\partial v}{\partial x}\right),\quad \left(A\frac{\partial u}{\partial y},\ A\frac{\partial v}{\partial y}\right)$

where A is the value of the target particle granularity.
In an exemplary embodiment of the present disclosure, the texture coordinates are the uv coordinates written at the vertices of the material file for map sampling. The differential of the texture coordinate uv with respect to the screen-space coordinates xy measures how many texture units the coordinate moves per screen pixel; it can be obtained with the HLSL (High Level Shading Language) built-in functions ddx and ddy, or the GLSL (OpenGL Shading Language) built-in functions dFdx and dFdy:

$\left(\frac{\partial u}{\partial x},\ \frac{\partial v}{\partial x}\right),\quad \left(\frac{\partial u}{\partial y},\ \frac{\partial v}{\partial y}\right)$

Further, the differential result is scaled by the value of the target particle granularity, which acts as a coefficient that enlarges or reduces the differential result. The larger the differential result, the larger the movement span of the corresponding texture coordinate, and the higher the mipmap (texture mapping chain) level that needs to be sampled.

Scaling the differential result by the target particle granularity value therefore changes the texture coordinate differential, which changes the LOD level determined from it and thereby adjusts the particle granularity displayed by the flash effect.
Step S320: calculate the lengths of the vectors corresponding to the texture coordinate differential to obtain a first length value and a second length value, where the first length value is greater than the second length value.

In an exemplary embodiment of the present disclosure, the lengths of the two vectors of the texture coordinate differential obtained in step S310 may be calculated with a length function to obtain the first length value and the second length value. The ratio of the first length value to the second length value can be kept below a set threshold, to prevent an overly large ratio from increasing the number of subsequent samples taken along the direction whose long axis is the first length value, which would reduce the running efficiency of the program.
Step S330: determine the target LOD level by combining the second length value with a preset LOD level count threshold.
In an exemplary embodiment of the disclosure, the target LOD level is determined by combining the second length value and a preset LOD level number threshold value, using the following formula:
L=max(0,NLEVELS-1+log2(minorLength)) (1)
where L is the target LOD level, NLEVELS is the LOD level count threshold, and minorLength is the second length value. The LOD level count threshold may be set according to the actual rendering requirement, for example to 14, 16, or 18; the embodiments of the present disclosure do not specifically limit it.
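Steps S310 to S330 can be sketched end to end as follows. This is a hedged Python approximation in which the uv differentials, which a GPU would supply via ddx/ddy (HLSL) or dFdx/dFdy (GLSL), are passed in as plain tuples:

```python
import math

def target_lod_level(duv_dx, duv_dy, a, nlevels):
    """Scale the uv screen-space differentials by the target particle
    granularity A (step S310), take the lengths of the two scaled vectors as
    the footprint's major/minor axes (step S320), and map the minor length to
    a LOD level via formula (1): L = max(0, NLEVELS - 1 + log2(minorLength))
    (step S330)."""
    vx = (a * duv_dx[0], a * duv_dx[1])
    vy = (a * duv_dy[0], a * duv_dy[1])
    len_x = math.hypot(vx[0], vx[1])
    len_y = math.hypot(vy[0], vy[1])
    major, minor = max(len_x, len_y), min(len_x, len_y)
    # A full implementation would also clamp major/minor to a threshold here
    # to bound the number of samples taken along the major axis (step S320).
    level = max(0.0, nlevels - 1 + math.log2(minor))
    return major, minor, level
```

For a pixel whose footprint spans 1/512 of the texture along its short axis, with A = 1 and NLEVELS = 14, this yields level 4.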
Through these exemplary embodiments, the target LOD level required by a pixel point is determined from its texture coordinates and the target particle granularity required by the flash effect, so that subsequent map sampling is performed at that LOD level. Different pixel points thus sample maps at different LOD levels, which avoids map repetition across pixel points, improves the rendering diversity of the flash effect, and adapts to lens (camera) changes.
Step S130: sample the gradient probability distribution map using the target gradient, and determine the target distribution probability of the pixel point from the sampling result.
In exemplary embodiments of the present disclosure, the target distribution probability (also called the target gradient distribution probability) characterizes the orientation of the projection of the micro-surface normal onto the two-dimensional tangent space; it is the probability of the first target direction. The target gradient is the gradient of the texel corresponding to the pixel point and is obtained from the pixel's gradient. In embodiments of the present disclosure, the gradient probabilities of texel points are filtered with an elliptical weighted average method to obtain the gradient probability of the pixel point; other filtering methods, such as trilinear filtering, may also be chosen according to actual requirements. However, trilinear filtering is isotropic and ignores differences between axes, so it easily blurs when the view direction approaches perpendicular to the normal. To increase anisotropy, embodiments of the present disclosure therefore preferably filter texel gradient probabilities with the elliptical weighted average method. Fig. 4 compares model renderings using the elliptical weighted average method and trilinear filtering; the rendering corresponding to the elliptical weighted average method (right) is clearer. Fig. 5 compares renderings with and without filtering; the result filtered with the elliptical weighted average method (right) has less noise than the unfiltered rendering.
Based on the target LOD level determined in step S120, the gradient probability distribution map configured for that level is sampled using the gradient values of the target gradient, and the target distribution probability of the pixel point is then determined from the sampling result.
According to these exemplary embodiments, pre-configured gradient probability distribution maps are sampled with the gradient values of the target gradient, avoiding the authoring of high-precision normal maps or noise maps containing the flash effect, which reduces development cost and difficulty; moreover, the pre-configured gradient probability distribution maps corresponding to different LOD levels are reusable, improving rendering efficiency.
Step S140: perform illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, and render the flash effect of the model to be rendered according to the resulting illumination.
In exemplary embodiments of the present disclosure, the illumination calculation includes, but is not limited to, calculating the normal distribution probability, the ratio of reflected light to total light, geometric shadowing, and the like, and may include steps S610 to S650:
step S610, calculating the normal distribution probability of the pixel points based on the projection value of the micro surface normal on the macro normal vector and the target distribution probability. The normal distribution probability is used for representing the probability that the orientation of the micro surface is the second target direction, the ratio of the target distribution probability to the 4 th power of the projection value of the micro surface normal on the macroscopic normal can be obtained, the ratio is used as the normal distribution probability of the pixel point, and the normal distribution probability represents the probability that the orientation of the micro surface normal is consistent with the orientation of the half-angle vector. It should be noted that, the obtaining of the normal distribution probability is to obtain the probability that the orientations of the micro-surface normal and the half-angle vector are consistent, so that the ratio of the target distribution probability to the 4 th power of the projection value of the half-angle vector on the macro normal may also be obtained here, and the ratio is used as the normal distribution probability of the pixel point.
Step S620: calculate first illumination data using the Fresnel equation (Schlick approximation), combining the half-angle vector, the observation vector, and a preset base surface reflectivity.

The first illumination data indicates the ratio of reflected light to total light, and is calculated from the half-angle vector, the observation vector, and the preset base surface reflectivity by the following formula:

$F_{schlick}(h, v, F_0) = F_0 + (1 - F_0)(1 - (h \cdot v))^5$  (2)

where $F_{schlick}$ is the first illumination data, $F_0$ is the preset base surface reflectivity, h is the half-angle vector, and v is the observation vector. $F_0$ may be calculated from the surface color and metalness of the model to be rendered, or obtained by consulting material references; the present disclosure does not specifically limit this.
Step S630, calculating second illumination data by using a geometric masking function according to the observation vector, the illumination vector, and the half-angle vector.
The second illumination data represents the amount of light blocked by the surface; it is calculated from the observation vector, the illumination vector, and the half-angle vector using the following v-cavity geometric shadowing function:

$G = \min\left(1,\ \frac{2(n \cdot h)(n \cdot v)}{v \cdot h},\ \frac{2(n \cdot h)(n \cdot l)}{v \cdot h}\right)$  (3)

where G is the second illumination data, n is the macroscopic normal vector, h is the half-angle vector, v is the observation vector, and l is the illumination vector.
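A sketch of this shadowing term, assuming all vectors are unit length and given as 3-tuples:

```python
def v_cavity_shadowing(n, h, v, l):
    """Formula (3): the v-cavity geometric shadowing-masking term
    G = min(1, 2(n.h)(n.v)/(v.h), 2(n.h)(n.l)/(v.h)); the two fractions
    cover masking toward the viewer and shadowing toward the light."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    nh, nv, nl, vh = dot(n, h), dot(n, v), dot(n, l), dot(v, h)
    return min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)
```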
Step S640: calculate fused illumination data using a bidirectional reflectance distribution function, combining the normal distribution probability, the first illumination data, and the second illumination data.

The values of the normal distribution probability, the first illumination data, and the second illumination data are substituted into the following BRDF (Bidirectional Reflectance Distribution Function):

$f = \frac{D\,F\,G}{4\,(n \cdot v)(n \cdot l')}$  (4)

where f is the fused illumination data, D is the value of the normal distribution probability, F is the value of the first illumination data, G is the value of the second illumination data, n is the macroscopic normal vector, v is the observation vector, and l' is the incident direction.
If only one light source, such as sunlight, exists in the scene, the incident direction of the sunlight is substituted into formula (4) to obtain the illumination data; if at least two light sources exist, such as sunlight and lamps (point light sources), each is substituted into formula (4) in turn to obtain the corresponding illumination data.
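Steps S640 and S650 can be sketched together; the dot products are assumed precomputed, and the scalar form is illustrative (a shader would evaluate this per color channel):

```python
def cook_torrance_specular(d, f, g, n_dot_v, n_dot_l):
    """Formula (4): fused illumination data f = D*F*G / (4 (n.v)(n.l')),
    with D from step S610, F from formula (2), G from formula (3)."""
    return d * f * g / (4.0 * n_dot_v * n_dot_l)

def illumination_result(fused, incident_intensity):
    """Step S650: reflected intensity = fused illumination data multiplied
    by the preset incident light intensity (sun intensity times base color)."""
    return fused * incident_intensity
```

With multiple light sources, `cook_torrance_specular` is evaluated once per incident direction and the results are accumulated.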
Step S650: multiply the fused illumination data by the preset incident light intensity to obtain the illumination result.
The preset incident light intensity may be obtained by multiplying the sunlight intensity by the base color of the model to be rendered. The base color comes from the material file of the model and is sampled from a base color map prepared in advance by artists. The fused illumination data indicates how irradiance from a given incident direction affects radiance in a given outgoing direction; multiplying it by the incident light intensity yields the reflected light intensity, which reflects the rendering requirement of the flash effect.
In an exemplary embodiment of the present disclosure, an implementation manner for calculating a gradient of a pixel point is provided. Calculating the gradient of the pixel point according to the macro normal vector, the observation vector and the illumination vector of the pixel point in the model to be rendered may include steps S710 to S730:
step S710, a tangent space is constructed according to the macroscopic normal vector, and the observation vector and the illumination vector are converted into the tangent space.
In exemplary embodiments of the present disclosure, the tangent space is the space spanned by the normal, the tangent, and the secondary tangent (bitangent); its three axes may be calculated from the macroscopic normal vector using a LookAt algorithm, with the macroscopic normal vector as the z axis. The observation vector and the illumination vector are then converted into the tangent space.
Specifically, calculating the three axes of the tangent space from the macroscopic normal vector with the LookAt algorithm includes: determining the vertical-direction vector (0, 0, 1); taking the cross product of this vertical vector and the macroscopic normal vector as the tangent; taking the cross product of the tangent and the macroscopic normal vector as the secondary tangent; and forming the z, x, and y axes of the tangent space from the macroscopic normal vector, the secondary tangent, and the tangent, respectively.
The process of converting the observation vector into a tangential space may include: and respectively calculating the projection of the observation vector on three axes of the tangent space to form a three-dimensional vector of the tangent space.
Accordingly, the process of converting the illumination vector to the tangential space is similar, and the details of the disclosure are omitted here.
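A NumPy sketch of the frame construction and conversion described above. The fallback "up" vector for a near-vertical normal is an added assumption; the patent does not spell out that degenerate case:

```python
import numpy as np

def build_tangent_frame(n):
    """Step S710, LookAt-style: tangent = up x n, secondary tangent =
    tangent x n, with the macroscopic normal n as the z axis."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    up = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(up, n)) > 0.999:   # n (anti)parallel to up: cross product degenerates
        up = np.array([0.0, 1.0, 0.0])
    t = np.cross(up, n)
    t = t / np.linalg.norm(t)
    b = np.cross(t, n)
    return t, b, n                    # tangent (x), secondary tangent (y), normal (z)

def to_tangent_space(vec, frame):
    """Project a world-space vector onto the three tangent-space axes to
    form its tangent-space coordinates (applies to both the observation
    vector and the illumination vector)."""
    t, b, n = frame
    v = np.asarray(vec, dtype=float)
    return np.array([np.dot(v, t), np.dot(v, b), np.dot(v, n)])
```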
Step S720: calculate the vector sum of the observation vector and the illumination vector in the tangent space, and normalize the sum to obtain the half-angle vector.
The method comprises the steps of firstly obtaining a vector sum of an observation vector and an illumination vector in a tangent space, then carrying out standardization processing on the vector sum, and taking the vector after the standardization processing as a half-angle vector.
In step S730, the gradient is obtained based on the half-angle vector.
In an exemplary embodiment of the present disclosure, the half angle vector may be converted into a gradient. Wherein deriving the slope based on the half angle vector may be: converting the half-angle vector from a three-dimensional vector into a two-dimensional plane vector, and taking the obtained two-dimensional plane vector as a slope.
For example, if the half-angle vector is (x, y, z), dividing the x value and the y value by the z value gives

$\left(\frac{x}{z},\ \frac{y}{z}\right)$

as the gradient of the pixel point.
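Steps S720 and S730 together can be sketched as follows, with the tangent-space observation and illumination vectors given as 3-tuples:

```python
import math

def half_vector_slope(v_t, l_t):
    """Sum the tangent-space observation and illumination vectors, normalize
    to get the half-angle vector (x, y, z) (step S720), then project it to
    the 2D slope (x/z, y/z) (step S730)."""
    h = [a + b for a, b in zip(v_t, l_t)]
    norm = math.sqrt(sum(c * c for c in h))
    x, y, z = (c / norm for c in h)
    return (x / z, y / z)
```

Note the normalization cancels in the ratio x/z, but is kept because the half-angle vector itself is also used elsewhere in the illumination calculation.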
Through these exemplary embodiments, the macroscopic normal vector of the model to be rendered is converted into the gradient of the pixel point, avoiding the authoring of high-precision normal maps containing the flash effect; an ordinary smooth-surface map suffices, reducing production difficulty. Furthermore, sampling the gradient probability distribution map with the obtained gradient values allows the map to be reused, improving rendering efficiency.
In an exemplary embodiment of the present disclosure, an implementation of filtering pixel points is also provided. Before the gradient probability distribution map is sampled with the gradient values of the target gradient and the target distribution probability of the pixel point is determined from the sampling result, steps S810 and S820 may be performed:
Step S810: construct the elliptical region of the pixel point in texture space according to the texture coordinates of the pixel point and the texture coordinate differential.
In an exemplary embodiment of the present disclosure, the first length value and the second length value, obtained from the lengths of the vectors of the texture coordinate differential, may serve respectively as the major axis and the minor axis of an ellipse; the rotation angle of the ellipse is also calculated, and the elliptical region in texture space is then constructed from the major axis, the minor axis, and the rotation angle.
If the texture coordinates of the current pixel point are (u, v), the bounding box of the elliptical region has left boundary $u - \sqrt{A^2\cos^2 C + B^2\sin^2 C}$, right boundary $u + \sqrt{A^2\cos^2 C + B^2\sin^2 C}$, upper boundary $v + \sqrt{A^2\sin^2 C + B^2\cos^2 C}$, and lower boundary $v - \sqrt{A^2\sin^2 C + B^2\cos^2 C}$, where A is the major axis, B is the minor axis, and C is the rotation angle. Texels outside the elliptical region are filtered out.
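The bounding box can be sketched with the standard formula for the axis-aligned bounds of a rotated ellipse, with A, B, C as the major axis, minor axis, and rotation angle:

```python
import math

def ellipse_bounding_box(u, v, a, b, c):
    """Axis-aligned bounding box of an ellipse centred at (u, v) with major
    axis a, minor axis b, rotated by angle c; returns (left, right, lower,
    upper) in texture space."""
    half_w = math.sqrt((a * math.cos(c)) ** 2 + (b * math.sin(c)) ** 2)
    half_h = math.sqrt((a * math.sin(c)) ** 2 + (b * math.cos(c)) ** 2)
    return u - half_w, u + half_w, v - half_h, v + half_h
```

With zero rotation the box reduces to u ± A horizontally and v ± B vertically, as expected.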
Step S820: traverse the texels in the elliptical region and, for each texel, rotate and scale the gradient of the pixel point to obtain the target gradient of that texel.
In an exemplary embodiment of the present disclosure, acquiring the target gradient of each texel may include steps S910 and S920:
Step S910: for each texel, obtain the random rotation angle corresponding to the texel, and rotate the gradient of the pixel point by that angle to obtain the rotated gradient.
In an exemplary embodiment of the present disclosure, the texel index of a texel may first be converted into a consistency index according to the target LOD level. The texel index is the lookup index of a texel region; the higher the LOD level, the fewer the texels. For example, at LOD level 0 there are n texels in the u or v direction, with texel indices 0, 1, 2, 3, ..., n-1; at LOD level L there are $n/2^L$ texels in the u or v direction, with texel indices $0, 1, \ldots, n/2^L - 1$.

The consistency index is used to keep the random rotation angles of texels consistent across LOD levels, that is, to ensure that when the LOD level changes, the gradients corresponding to texels covering the same region keep the same random rotation angle. The consistency index is obtained by multiplying the texel index by $2^L$; for example, at LOD level L the consistency indices are $0,\ 2^L,\ 2 \cdot 2^L,\ 3 \cdot 2^L,\ \ldots,\ n - 2^L$.

Further, after the consistency indices of a texel in the horizontal and vertical directions (x and y directions) are obtained, the two indices are fused with a preset polynomial into a single value, which is used as a random seed. The preset polynomial may be linear, for example $Y = c_1 Y_1 + c_2 Y_2$, where Y is the resulting value, $Y_1$ and $Y_2$ are the consistency indices, and $c_1$ and $c_2$ are their coefficients; the embodiments of the present disclosure do not specifically limit the form of the polynomial, which may be set according to the actual rendering scene.

After the random seed is obtained, it may be mapped by a hash function to a value used as the random rotation angle, and the gradient of the pixel point is rotated by this angle to obtain the rotated gradient. The hash function, for example an iq (Inigo Quilez) hash, may be chosen according to actual requirements; the present disclosure does not specially limit it. It should be noted that the initial gradient of the pixel corresponding to a texel is the gradient of the pixel point itself, which is why the pixel point's gradient is rotated by the random rotation angle.
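A sketch of the consistency index and the seeded rotation. The polynomial coefficients and the sin-fract hash below are illustrative stand-ins for the polynomial and iq-style hash that the text leaves unspecified:

```python
import math

def consistency_index(texel_index, lod_level):
    """A texel index i at LOD level L maps to i * 2^L, so texels covering
    the same surface region share a consistency index across LOD levels."""
    return texel_index * (2 ** lod_level)

def random_rotation_angle(ix, iy, c1=127.1, c2=311.7):
    """Fuse the x/y consistency indices with a linear polynomial into a
    random seed, then hash the seed to an angle in [0, 2*pi). Coefficients
    and hash are assumptions, not the patent's exact choices."""
    seed = c1 * ix + c2 * iy
    frac = (math.sin(seed) * 43758.5453) % 1.0
    return frac * 2.0 * math.pi

def rotate_gradient(grad, angle):
    """Rotate the pixel point's 2D gradient by the random rotation angle."""
    x, y = grad
    ca, sa = math.cos(angle), math.sin(angle)
    return (ca * x - sa * y, sa * x + ca * y)
```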
Rotating the gradient by a random angle makes the result of each texel's sampling of the gradient probability distribution map according to the target gradient differ, which enriches the subsequent illumination results based on the target distribution probability and increases rendering diversity.
Step S920: calculate the gradient scaling factor by combining the preset roughness and the preset roughness scaling factors, and scale the rotated gradient by the gradient scaling factor to generate the target gradient of each texel.

The gradient scaling factor may be calculated from the preset roughness and the roughness scaling factors according to the following formula:
$S_{\alpha} = \left(\frac{\alpha_{dict}}{\alpha\,\alpha_x},\ \frac{\alpha_{dict}}{\alpha\,\alpha_y}\right)$  (5)

where $S_{\alpha}$ is the gradient scaling factor, $\alpha_{dict}$ is the preset roughness used when generating the gradient probability distribution map, $\alpha$ is obtained from the roughness map corresponding to the model to be rendered, and $\alpha_x$ and $\alpha_y$ are the roughness scaling factors required for the rotated gradient in the two decomposition directions.
Further, the rotated gradient is scaled by the scaling factor. Based on formula (5), the roughness maps already provided by artists can be reused, so artists do not need to author separate roughness maps for different directions; adjusting the roughness scaling factors suffices, which improves rendering efficiency.
In an exemplary embodiment of the present disclosure, an implementation for determining the joint distribution probability of pixel points is provided. Sampling the gradient probability distribution map with the gradient values of the target gradient and determining the target distribution probability of the pixel point from the sampling result may include steps S1010 to S1040:
Step S1010: orthogonally decompose the target gradient to obtain a first gradient value and a second gradient value.

The target gradient is decomposed orthogonally, for example into the orthogonal x and y directions.
Step S1020, sampling the gradient distribution map by using the first gradient value to obtain a first distribution probability.
And sampling the gradient distribution map by using the first gradient value to obtain a first distribution probability corresponding to the first gradient value.
And step S1030, sampling the gradient distribution map by using the second gradient value to obtain a second distribution probability.
And sampling the gradient distribution map by using the second gradient value to obtain a second distribution probability corresponding to the second gradient value.
Step S1040, performing fusion calculation on the first distribution probability and the second distribution probability to generate a joint distribution probability of texels corresponding to the pixel point.
In an exemplary embodiment of the present disclosure, the first distribution probability and the second distribution probability are subjected to fusion calculation by using the following formula, so as to generate a joint distribution probability of the texel corresponding to the pixel point:
$P_3 = \frac{\alpha_{dict}^2}{\alpha^2\,\alpha_x\,\alpha_y}\,P_1 P_2$  (6)

where $P_3$ is the joint distribution probability, $P_1$ is the first distribution probability, $P_2$ is the second distribution probability, $\alpha_{dict}$ is the preset roughness used when generating the gradient probability distribution map, $\alpha$ is obtained from the roughness map corresponding to the model to be rendered, and $\alpha_x$ and $\alpha_y$ are the roughness scaling factors required for the target gradient in the two decomposition directions.
Further, the target distribution probability can be calculated based on the joint distribution probability of the texels corresponding to the pixel points and the Gaussian weight corresponding to the texels.
In an exemplary embodiment of the present disclosure, the gaussian weight corresponding to the texel is obtained, and may be: and determining the Gaussian weight corresponding to the texel according to the distance between the texel and the texture coordinate of the pixel point.
Specifically, based on the determined elliptical region, if the texture coordinates of the current pixel point are (u, v), the bounding box of the elliptical region has left boundary $u - \sqrt{A^2\cos^2 C + B^2\sin^2 C}$, right boundary $u + \sqrt{A^2\cos^2 C + B^2\sin^2 C}$, upper boundary $v + \sqrt{A^2\sin^2 C + B^2\cos^2 C}$, and lower boundary $v - \sqrt{A^2\sin^2 C + B^2\cos^2 C}$.
The bounding box is traversed from its left boundary to its right boundary and, within each column, from its upper boundary to its lower boundary. For each texel, the distance from its texture coordinates to the center of the elliptical region is calculated, and the ellipse equation together with this distance determines whether the texel point lies inside the region. If it does, a weight is computed from the texel's distance to the ellipse center using a standard Gaussian function, and the running sum of weights is recorded (so that the weights can be normalized to sum to 1) for the subsequent weighted average.
Based on the Gaussian weights, the joint distribution probabilities corresponding to the texels in the elliptical region are weighted-averaged to obtain the target distribution probability of the pixel point.
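The bounding-box traversal and Gaussian weighting described above can be sketched in Python as follows (a minimal illustration; the function name, the normalized elliptical-distance test, and the sigma parameter are assumptions, not part of the disclosure):

```python
import math

def gaussian_weighted_average(center_uv, radii, probs, sigma=1.0):
    """Weighted-average the joint distribution probabilities of the texels
    inside an elliptical footprint, with a Gaussian falloff by distance.

    center_uv : (u, v) texture coordinate of the pixel point (texel units)
    radii     : (ru, rv) extents of the ellipse along the two texture axes
    probs     : dict mapping integer texel coords (x, y) -> joint probability
    """
    u, v = center_uv
    ru, rv = radii
    total_w = 0.0
    acc = 0.0
    # Traverse the bounding box from left to right and top to bottom.
    for x in range(int(math.floor(u - ru)), int(math.ceil(u + ru)) + 1):
        for y in range(int(math.floor(v - rv)), int(math.ceil(v + rv)) + 1):
            # Normalized elliptical distance: <= 1 means inside the ellipse.
            d2 = ((x - u) / ru) ** 2 + ((y - v) / rv) ** 2
            if d2 <= 1.0:
                w = math.exp(-d2 / (2.0 * sigma * sigma))  # Gaussian weight
                acc += w * probs.get((x, y), 0.0)
                total_w += w
    # Dividing by the recorded weight sum is equivalent to normalizing
    # the weights so that they sum to 1 before averaging.
    return acc / total_w if total_w > 0.0 else 0.0
```

Dividing the accumulated value by the recorded weight sum at the end normalizes the weights to 1, as the weighted-averaging step requires.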
In an exemplary embodiment of the disclosure, the gradient is a two-dimensional vector. If a probability distribution map were pre-stored for the two-dimensional gradient directly, the time and space complexity would be O(n^2); if instead the two slope values obtained by decomposing the gradient are used to sample the slope distribution map separately and the two sampling results are fused, the time and space complexity is reduced to O(n) by this sample-twice-and-fuse approach, reducing the amount of computation. For example, if there are n gradient values, the gradient probability density (gradient probability distribution map) only needs to be calculated n times for them, whereas the gradients obtained by combining two gradient values number n^2, so directly precomputing the gradient probability density for the two-dimensional gradient would require n^2 operations.
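The storage saving argued above can be illustrated with a short Python sketch (the table contents and the multiplicative fusion of the two samples are placeholders; the actual fusion follows the joint-distribution formula given earlier in the disclosure):

```python
def make_1d_table(n):
    # Hypothetical 1-D slope table of n entries; a real table comes from
    # the precomputed slope probability distribution map.
    return [1.0 / n] * n

def joint_probability(table, ix, iy):
    # Sample the same 1-D table once per decomposed slope axis and fuse
    # the two sampling results (multiplication shown as an illustration).
    return table[ix] * table[iy]

n = 256
table = make_1d_table(n)             # O(n) storage: n entries
p = joint_probability(table, 3, 7)   # two 1-D lookups, not an n*n table
```

Storing one table of n entries and sampling it twice thus replaces an n-by-n two-dimensional table, which is the O(n^2)-to-O(n) reduction the paragraph describes.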
In an exemplary embodiment of the present disclosure, an implementation manner of interpolating the target distribution probability is further provided, so as to update the target distribution probability of the pixel point on the target LOD level with the interpolated target distribution probability. The implementation may include:
calculating the target distribution probability of the pixel point on the LOD level adjacent to the target LOD level;
according to the target distribution probability on the adjacent LOD level and the target distribution probability on the target LOD level, interpolating the target distribution probability on the target LOD level with the following interpolation formula to obtain the interpolated target distribution probability:
P = P_L(1 - (L - [L])) + P_{L+1}(L - [L])
wherein P is the interpolated target distribution probability, L is the target LOD level, [L] is L rounded down to an integer, L+1 is the adjacent LOD level, P_L is the target distribution probability on the target LOD level, and P_{L+1} is the target distribution probability on the adjacent LOD level;
furthermore, the target distribution probability of the pixel point on the target LOD level is updated with the interpolated target distribution probability, so that the illumination calculation is performed on the model to be rendered by combining the updated target distribution probability, the observation vector, the illumination vector and the half-angle vector, and flash effect rendering is performed on the model to be rendered according to the obtained illumination result.
It should be noted that, the process of calculating the target distribution probability of the pixel point on the LOD level adjacent to the target LOD level is the same as the process of calculating the target distribution probability corresponding to the pixel point on the target LOD level in the embodiment of the present disclosure, and details are not repeated here.
By interpolating the target distribution probability, the present disclosure makes the target distribution probability transition smoothly between LOD levels, avoiding obvious seams in the rendering result caused by abrupt transitions between different LOD levels.
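The interpolation formula is straightforward to implement; a minimal Python sketch (the function name is illustrative):

```python
import math

def interpolate_target_probability(lod, p_level, p_next):
    """P = P_L * (1 - (L - floor(L))) + P_{L+1} * (L - floor(L)).

    lod     : fractional target LOD level L
    p_level : target distribution probability P_L on level floor(L)
    p_next  : target distribution probability P_{L+1} on the adjacent level
    """
    frac = lod - math.floor(lod)  # fractional part L - [L]
    return p_level * (1.0 - frac) + p_next * frac
```

When L is an integer the fractional part is zero and the level's own probability is returned unchanged, which is what makes the transition between levels seamless.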
In an exemplary embodiment of the present disclosure, an implementation of generating a gradient probability distribution map is also provided. The process of generating the gradient probability distribution map of the LOD hierarchy of the nth layer may include steps S1110 to S1150:
in step S1110, a first gaussian distribution function is generated with a preset roughness as a parameter.
Wherein the first Gaussian distribution function is generated with 0 as the mean and a standard deviation derived from the preset roughness (the standard-deviation formula is reproduced only as an image in the source).
In step S1120, 2^N numerical values are obtained using the first Gaussian distribution function.
The first Gaussian distribution function, with 0 as the mean and the standard deviation given above, is used to obtain 2^N numerical values, where N is the level of the target LOD. For example, if the target LOD level is 0, 1 numerical value is obtained; if the target LOD level is 1, 2 numerical values are obtained; and so on, the numerical values corresponding to each LOD level are obtained.
Step S1130, based on a preset target standard deviation, generating 2^N second Gaussian distribution functions using the numerical values from step S1120 as mean values.
In an exemplary embodiment of the present disclosure, based on a preset target standard deviation, the values obtained in step S1120 are used as means to generate the second Gaussian distribution functions corresponding to each LOD level. For example, if the target LOD level is 0, 1 value is obtained, and 1 second Gaussian distribution function is generated with that value as the mean and the preset target standard deviation as a parameter; if the target LOD level is 1, a first value and a second value are obtained, and 2 second Gaussian distribution functions are generated, one with the first value as the mean and one with the second value as the mean, each with the preset target standard deviation as a parameter; and so on, generating the second Gaussian distribution functions corresponding to each LOD level. Fig. 12 shows second Gaussian distribution function graphs corresponding to levels 0 to 15 according to an embodiment of the disclosure.
Step S1140, fitting and superposing the 2^N second Gaussian distribution functions to generate a target Gaussian distribution function, which serves as the target gradient probability density.
In an exemplary embodiment of the present disclosure, the 2^N second Gaussian distribution functions obtained for each LOD level are fitted and superposed to serve as the target gradient probability density. For example, if the target LOD level is 1, the two obtained second Gaussian distribution functions are fitted; if the target LOD level is 2, the four obtained second Gaussian distribution functions are fitted; and so on, obtaining the target gradient probability density corresponding to each LOD level.
Step S1150, taking the variable values of the target gradient probability density as gradient values, calculating the distribution probability corresponding to each gradient value with the target gradient probability density, and storing the gradient values and the distribution probabilities correspondingly.
In the exemplary embodiment of the present disclosure, the variable values of the target gradient probability density are used as gradient values, the target gradient probability density is used to calculate the distribution probability corresponding to each gradient value, and the gradient values and distribution probabilities are stored correspondingly; that is, the distribution probabilities of different gradient values in different coordinate directions are stored, meaning the obtained target gradient probability density is saved in a picture format. The variable range can be determined according to the actual rendering scene, for example as an interval whose bounds depend on the preset roughness α_dict adopted when generating the gradient probability distribution map (the exact range is reproduced only as an image in the source). The number of variable samples within this range can likewise be determined according to the actual rendering scene; the sample count determines the resolution of the gradient probability distribution map, and more samples yield a higher resolution. In the disclosed embodiments, the increment between successive variable values may be derived from the variable range and the sample count (the exact increment formula is reproduced only as an image in the source).
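Steps S1110 to S1150 can be sketched as follows in Python (the standard-deviation values, the averaging used to keep the superposition a probability density, and the tabulation range are assumptions, since the exact formulas appear only as images in the source):

```python
import math
import random

def gaussian_pdf(x, mean, std):
    # Density of a one-dimensional Gaussian at x.
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2.0 * math.pi))

def build_slope_table(level, first_std, target_std, x_min, x_max, samples, seed=0):
    """Sketch of steps S1110-S1150 for one LOD level."""
    rng = random.Random(seed)
    n = 2 ** level
    # S1110-S1120: draw 2^N values from a zero-mean first Gaussian.
    means = [rng.gauss(0.0, first_std) for _ in range(n)]

    # S1130-S1140: superpose 2^N second Gaussians centered on those values
    # (averaged here so the result stays a probability density -- an assumption).
    def density(x):
        return sum(gaussian_pdf(x, m, target_std) for m in means) / n

    # S1150: tabulate (gradient value, distribution probability) pairs
    # over the chosen variable range at the chosen resolution.
    step = (x_max - x_min) / (samples - 1)
    return [(x_min + i * step, density(x_min + i * step)) for i in range(samples)]
```

The sample count passed in plays the role of the map resolution discussed above: more samples give a finer tabulation of the target gradient probability density.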
It should be noted that, in the embodiment of the present disclosure, in order to increase the diversity of the rendering result, the number of the gradient probability distribution maps may also be increased, which is not particularly limited in the embodiment of the present disclosure.
Through this embodiment of the disclosure, the slope probability distribution map corresponding to each LOD level is generated and is reusable: the slope probability distribution map can be sampled according to the slope value of the target slope to obtain the target distribution probability of the pixel point for use in the illumination calculation of the model to be rendered. This not only improves the rendering result and removes the need for art personnel to make a normal map containing the flash effect, but also, because sampling is based on the LOD level to which the pixel point belongs, increases the diversity of the sampling results and hence of the rendering results.
In an exemplary embodiment of the present disclosure, an implementation of adjusting the flash effect is also provided. The embodiment of the present disclosure may perform at least one of the following operations to change the flash effect of the model to be rendered:
1) Taking the density parameter of the displayed flash-effect particles as a filtering threshold, filtering the texels according to the comparison between the random seed corresponding to each texel's consistency index and the filtering threshold, and taking the slopes corresponding to the remaining texels as target slopes, so as to adjust the display density of the flash effect and avoid hard boundaries in the rendering result. As shown in fig. 13, after the slopes are filtered, the rendering result in the right drawing has no hard boundary.
2) Changing the target LOD level corresponding to the pixel point to change the display grain size of the flash effect. The higher the target LOD level, the larger the display grain size of the sparkle effect; thus the display grain size of the sparkle effect is changed by adjusting the number of the target LOD level, as shown in fig. 2.
3) Adjusting the vector distance between the world coordinates of the pixel point and the position coordinates of the virtual camera, so as to change the degree to which the flash effect attenuates with that distance. In the embodiment of the present disclosure, the relationship between brightness and distance is a negative exponential function, with the attenuation coefficient as the coefficient of the negative exponent. For example, rhinestone decorations on clothes remain visible at a longer distance, so their attenuation coefficient is smaller, while automotive paint particles need to be observed up close, so their attenuation coefficient is larger. As shown in the right diagram of fig. 14, after the attenuation of the flash effect with the distance between the pixel point's world coordinates and the virtual camera's position is adjusted, the rendering result resembles a real-life flash effect.
4) Adjusting the micro-surface parameters of the model to be rendered and using them as coefficients of the target LOD level obtained in step S330, thereby changing the LOD level to which the pixel point belongs and, in turn, the display intensity of the flash effect. For example, when the target LOD level exceeds 17 layers, the larger the micro-surface density, the weaker the flash effect and the closer the specular reflection is to that of a smooth surface, which suits transparent plastic materials such as raincoats; conversely, a small micro-surface density gives a strong sense of fine, broken particles, so the glitter effect is more obvious, which suits materials that produce sharp brilliance, such as frosted metal and gemstones.
5) Adjusting the brightness of the model to be rendered with an exponential function in which the obtained illumination result is the base and the brightness parameter is the exponent. Fig. 15 shows a comparison using different brightness exponents: the left graph shows the rendering result with a brightness exponent of 0.01, and the right graph the rendering result with a brightness exponent of 1.2.
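Operations 3) and 5) above are simple scalar mappings; a hedged Python sketch (function names and signatures are illustrative, not part of the disclosure):

```python
import math

def flash_attenuation(world_pos, camera_pos, attenuation_coeff):
    # Brightness decays as a negative exponential of the distance between
    # the pixel's world coordinates and the virtual camera's position;
    # the attenuation coefficient is the coefficient of the negative exponent.
    dist = math.dist(world_pos, camera_pos)
    return math.exp(-attenuation_coeff * dist)

def adjust_brightness(illumination, brightness_exponent):
    # Exponential brightness adjustment: the illumination result is the
    # base and the brightness parameter is the exponent.
    return illumination ** brightness_exponent
```

A smaller attenuation coefficient keeps rhinestone-like glints visible at distance, while a brightness exponent below 1 brightens illumination values in (0, 1) and an exponent above 1 darkens them.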
In summary, the method for rendering a flash effect in a game according to the exemplary embodiments of the present disclosure does not require art personnel to make a high-definition normal map carrying the flash effect; only a general smooth-surface normal needs to be provided and converted into the gradients of the pixel points, so that the gradient probability distribution map can be sampled according to the gradient values, reducing production cost and difficulty. Corresponding gradient probability distribution maps are configured for different LOD levels, and the degree of particle definition differs between levels, so sampling on different target LOD levels yields different target distribution probabilities for the pixel points; the final flicker effect is therefore well layered and better matches the real physical phenomenon. On the other hand, because the corresponding LOD level is sampled according to the gradient values of different pixel points, identical target distribution probabilities, and hence repetition in the final illumination results, are avoided; the method can adapt to different lens changes and produce different flash effects, obtaining a higher-quality flash rendering result. In addition, through various parameter adjustments, the in-game flash effect rendering method of the embodiments of the present disclosure can render a variety of flash effects; fig. 16 shows a frosted metal surface, a clothes surface, and the like obtained with this method, which has universality for rendering scenes with various flash effects, not all of which are listed here.
In an exemplary embodiment of the present disclosure, there is also provided a flash effect rendering apparatus in a game. Referring to fig. 17, the in-game flash effect rendering apparatus 1700 may include:
a gradient calculation module 1710, configured to calculate a gradient of a pixel point according to a macro normal vector, an observation vector, and an illumination vector of the pixel point in a model to be rendered, where the gradient is a two-dimensional vector that is used to characterize a position feature of the pixel point in the model to be rendered, and the macro normal vector, the observation vector, and the illumination vector are vectors in a world space;
a level determining module 1720, configured to determine, based on the texture coordinates of the pixel points and a target particle size required for a flash effect, a target LOD level to which the pixel points belong, where the target LOD level is configured with a slope probability distribution map, where the slope probability distribution map includes distribution probabilities of different slope values in different coordinate directions;
a sampling module 1730, configured to sample the gradient probability distribution map by using a target gradient, and determine a target distribution probability of the pixel according to a sampling result, where the target distribution probability is used to represent a probability that the orientation of the projection is a first target direction, the target gradient is a gradient of a texel corresponding to the pixel, and the target gradient is obtained based on the gradient of the pixel;
a rendering module 1740, configured to perform illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, and perform flash effect rendering on the model to be rendered according to an obtained illumination result.
Since each functional module of the flash effect rendering device in the game according to the exemplary embodiment of the present disclosure is the same as that in the embodiment of the flash effect rendering method in the game, it is not described herein again.
It should be noted that although in the above detailed description several modules or units of the flash effect rendering device in a game are mentioned, this division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
In addition, in the exemplary embodiments of the present disclosure, a computer storage medium capable of implementing the above method is also provided. On which a program product capable of implementing the above-described method of the present specification is stored. In some possible embodiments, aspects of the present disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the present disclosure described in the "exemplary methods" section above of this specification, when the program product is run on the terminal device.
Referring to fig. 18, a program product 1800 for implementing the above method according to an exemplary embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not so limited, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided. As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
An electronic device 1900 according to such an embodiment of the disclosure is described below with reference to fig. 19. The electronic device 1900 shown in fig. 19 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 19, electronic device 1900 takes the form of a general-purpose computing device. The components of the electronic device 1900 may include, but are not limited to: the at least one processing unit 1910, the at least one memory unit 1920, a bus 1930 connecting different system components (including the memory unit 1920 and the processing unit 1910), and a display unit 1940.
Wherein the storage unit stores program code that is executable by the processing unit 1910 to cause the processing unit 1910 to perform steps according to various exemplary embodiments of the present disclosure described in the "exemplary methods" section above of the present specification.
The storage 1920 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)1921 and/or a cache memory unit 1922, and may further include a read-only memory unit (ROM) 1923.
The storage unit 1920 may also include a program/utility 1924 having a set (at least one) of program modules 1925, such program modules 1925 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1930 can be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1900 may also communicate with one or more external devices 2000 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1900, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1900 to communicate with one or more other computing devices. Such communication can occur via an input/output (I/O) interface 1950. Also, electronic device 1900 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via network adapter 1960. As shown, the network adapter 1960 communicates with the other modules of the electronic device 1900 via a bus 1930. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with electronic device 1900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (19)

1. A method of flash effect rendering in a game, comprising:
calculating the gradient of a pixel point according to a macro normal vector, an observation vector and an illumination vector of the pixel point in a model to be rendered, wherein the gradient is the projection of a micro surface normal in a world space on a two-dimensional tangent space, and the macro normal vector, the observation vector and the illumination vector are vectors in the world space;
determining a target LOD level to which the pixel point belongs based on texture coordinates of the pixel point and target particle granularity required by a flash effect, wherein the target LOD level is configured with a slope probability distribution map, and the slope probability distribution map comprises distribution probabilities of different slope values in different coordinate directions;
sampling the gradient probability distribution map by using a target gradient, and determining a target distribution probability of the pixel point according to a sampling result, wherein the target distribution probability is used for representing the probability that the projection direction is a first target direction, the target gradient is the gradient of the texel corresponding to the pixel point, and the target gradient is obtained based on the gradient of the pixel point;
and performing illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector and the half-angle vector, and performing flash effect rendering on the model to be rendered according to the obtained illumination result.
2. The method of claim 1, wherein the calculating the gradient of the pixel point according to a macro normal vector, an observation vector and an illumination vector of the pixel point in the model to be rendered comprises:
constructing a tangent space according to the macroscopic normal vector, and converting the observation vector and the illumination vector into the tangent space;
calculating the vector sum of the observation vector and the illumination vector in the tangent space, and carrying out standardization processing on the vector sum to obtain a half-angle vector;
the gradient is derived based on the half angle vector.
3. The method of claim 2, wherein the deriving the slope based on the half angle vector comprises:
dividing the first component and the second component of the half-angle vector by the third component of the half-angle vector to convert the half-angle vector from a three-dimensional vector to a two-dimensional plane vector;
and taking the two-dimensional plane vector as the gradient.
4. The method of claim 1, wherein determining the target LOD level to which the pixel belongs based on the texture coordinates of the pixel and the target grain size required for the sparkle effect comprises:
obtaining the differential of the texture coordinate uv to the screen space coordinate xy to obtain a differential result, and carrying out scaling processing on the differential result according to the value of the target particle granularity to obtain texture coordinate differential;
calculating the length of the corresponding vector of the texture coordinate differential to obtain a first length value and a second length value, wherein the first length value is larger than the second length value;
and determining the target LOD level by combining the second length value and a preset LOD level number threshold.
5. The method of claim 4, wherein the target LOD level is determined using the following formula in combination with the second length value and a preset LOD level number threshold:
L=max(0,NLEVELS-1+log2(minorLength))
wherein L is the target LOD level, NLEVELS is the LOD level number threshold, and minorLength is the second length value.
6. The method of claim 4, wherein before the sampling the gradient probability distribution map using the gradient value of the target gradient and determining the target distribution probability of the pixel point according to the sampling result, the method further comprises:
according to the texture coordinate of the pixel point and the texture coordinate differential, constructing an elliptical area of the pixel point in a texture space;
and traversing the texels in the elliptical area, and rotating and scaling the gradient of the pixel point aiming at each texel to obtain the target gradient of each texel.
7. The method of claim 6, wherein traversing the texels in the elliptical area, and for each texel, rotating and scaling the gradient of the pixel to obtain a target gradient of each texel comprises:
acquiring a random rotation angle corresponding to each texel, and rotating the gradient of the pixel point according to the random rotation angle to obtain a rotation gradient;
and calculating a gradient scaling factor by combining preset roughness and a roughness scaling factor, and scaling the rotation gradient according to the gradient scaling factor to generate the target gradient of each texel.
8. The method according to claim 7, wherein the obtaining of the random rotation angle corresponding to the texel and the rotating of the gradient of the pixel point according to the random rotation angle to obtain the rotation gradient comprises:
converting a texel index of the texel into a consistency index according to the target LOD level, wherein the consistency index indicates consistency of random rotation angles corresponding to the texel on different LOD levels;
generating a random numerical value based on the consistency index, and taking the random numerical value as the random rotation angle;
and rotating the gradient of the pixel point according to the random rotation angle to obtain a rotation gradient.
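Claims 7 and 8 describe a per-texel rotation angle that is deterministic and consistent across LOD levels. A minimal sketch, assuming the consistency index maps a texel index back to its finest-level anchor; the hash constants are arbitrary illustrative choices:

```python
import math

def consistency_index(texel_u, texel_v, lod):
    # Map a texel at LOD `lod` to its level-0 anchor texel, so the same
    # surface region yields the same index on every level.
    return (texel_u << lod, texel_v << lod)

def random_angle(index, seed=0x9E3779B9):
    # Small integer hash -> pseudo-random angle in [0, 2*pi].
    h = ((index[0] * 73856093) ^ (index[1] * 19349663) ^ seed) & 0xFFFFFFFF
    return (h / 0xFFFFFFFF) * 2.0 * math.pi

def rotate_gradient(grad, angle):
    # 2D rotation of the pixel's gradient to obtain the rotation gradient.
    c, s = math.cos(angle), math.sin(angle)
    return (c * grad[0] - s * grad[1], s * grad[0] + c * grad[1])
```

Because the angle is a pure function of the consistency index, a texel receives the same rotation on every frame and every LOD level.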
9. The method of claim 7, wherein the gradient scaling factor is calculated by combining the preset roughness and the roughness scaling factors according to the following formula:
[formula shown as image FDA0003680913210000031]
wherein S_α is the gradient scaling factor, α_dict is the preset roughness adopted when generating the gradient probability distribution map, α is obtained from the roughness map corresponding to the model to be rendered, and α_x and α_y are respectively the roughness scaling factors required by the rotation gradient in the different decomposition directions.
10. The method of claim 6, wherein the sampling the gradient probability distribution map using the gradient value of the target gradient and determining the target distribution probability of the pixel point according to the sampling result comprises:
sampling the gradient probability distribution map by using the gradient value of the target gradient, and calculating the joint distribution probability of the pixel point corresponding to the texel according to the sampling result;
determining the Gaussian weight corresponding to the texel according to the distance between the texel and the texture coordinate of the pixel point;
and calculating the target distribution probability based on the joint distribution probability and the Gaussian weight corresponding to the texel.
11. The method of claim 10, wherein the sampling the gradient probability distribution map using the gradient value of the target gradient and calculating the joint distribution probability of the pixel point to the texel according to the sampling result comprises:
carrying out orthogonal decomposition on the target gradient to obtain a first gradient value and a second gradient value;
sampling the gradient distribution map by using the first gradient value to obtain a first distribution probability;
sampling the gradient distribution map by using the second gradient value to obtain a second distribution probability;
and performing fusion calculation on the first distribution probability and the second distribution probability to generate the joint distribution probability.
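The per-axis sampling and fusion of claim 11 can be sketched as follows, assuming the two decomposition directions are treated as independent so the fusion is a simple product of the two 1D probabilities (the patent's exact fusion formula is given only as an image):

```python
import math

def gaussian_pdf(x, sigma):
    # Zero-mean 1D Gaussian density, standing in for one lookup into the
    # tabulated gradient probability distribution map.
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def joint_probability(target_gradient, pdf):
    """Orthogonally decompose the 2D target gradient, sample a 1D
    distribution per axis, and fuse the two results by a product."""
    gx, gy = target_gradient
    p1 = pdf(gx)  # first distribution probability
    p2 = pdf(gy)  # second distribution probability
    return p1 * p2
```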
12. The method of claim 11, wherein the fusion of the first distribution probability and the second distribution probability is performed by using the following formula to generate the joint distribution probability:
[formula shown as image FDA0003680913210000041]
wherein P_3 is the joint distribution probability, P_1 is the first distribution probability, P_2 is the second distribution probability, α_dict is the preset roughness adopted when generating the gradient probability distribution map, α is obtained from the roughness map corresponding to the model to be rendered, and α_x and α_y are respectively the roughness scaling factors required by the target gradient in the different decomposition directions.
13. The method of claim 11, wherein the calculating the target distribution probability based on the joint distribution probability and the gaussian weight corresponding to the texel comprises:
and performing a weighted average on the joint distribution probabilities corresponding to the texels in the elliptical area based on the Gaussian weights to obtain the target distribution probability.
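A minimal sketch of the Gaussian-weighted average in claim 13; the falloff width sigma and the (distance, probability) sample representation are assumptions of this sketch:

```python
import math

def target_probability(samples, sigma):
    """Gaussian-weighted average of per-texel joint distribution
    probabilities. `samples` holds (distance_to_pixel_uv, joint_prob)
    pairs; `sigma` controls the Gaussian falloff width."""
    num = den = 0.0
    for dist, p in samples:
        w = math.exp(-dist * dist / (2.0 * sigma * sigma))
        num += w * p
        den += w
    return num / den if den > 0.0 else 0.0
```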
14. The method according to any one of claims 10 to 13, further comprising:
calculating the target distribution probability of the pixel point on the LOD level adjacent to the target LOD level;
interpolating the target distribution probability on the target LOD level by adopting the following interpolation formula, according to the target distribution probability on the adjacent LOD level and the target distribution probability on the target LOD level, to obtain the interpolated target distribution probability:
P = P_L(1 - (L - [L])) + P_{L+1}(L - [L])
wherein P is the interpolated target distribution probability, L is the target LOD level, [L] is L rounded down to an integer, L+1 is the adjacent LOD level, P_L is the target distribution probability on the target LOD level, and P_{L+1} is the target distribution probability on the adjacent LOD level;
and updating the target distribution probability of the pixel points on the target LOD level by using the interpolated target distribution probability.
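The interpolation of claim 14 is a linear blend between the two nearest integer LOD levels; a direct transcription with illustrative names:

```python
import math

def interpolate_lod(level, p_floor, p_ceil):
    """P = P_L * (1 - f) + P_{L+1} * f, with f = L - floor(L): blend the
    target distribution probability between the target LOD level and the
    adjacent LOD level."""
    f = level - math.floor(level)
    return p_floor * (1.0 - f) + p_ceil * f
```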
15. The method according to any one of claims 1 to 13, wherein before the calculating the gradient of the pixel point according to the macro normal vector, the observation vector and the illumination vector of the pixel point in the model to be rendered, the method further comprises:
generating gradient probability distribution maps corresponding to different LOD levels;
the process of generating the gradient probability distribution map for the N-th LOD level comprises the following steps:
generating a first Gaussian distribution function by taking preset roughness as a parameter;
obtaining 2^N values using the first Gaussian distribution function;
generating 2^N second Gaussian distribution functions by taking the values as means, based on a preset target standard deviation;
fitting and superposing the 2^N second Gaussian distribution functions to generate a target Gaussian distribution function, which serves as the probability density of the target gradient;
taking the variable values of the target gradient probability density as gradient values, calculating the distribution probability corresponding to each gradient value by using the target gradient probability density, and storing the gradient values and the distribution probabilities correspondingly;
wherein N is an integer greater than or equal to 0.
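A hedged sketch of the dictionary generation in claim 15: 2^N means are drawn from a first Gaussian parameterised by the preset roughness, 2^N narrow second Gaussians centred on those means are superposed, and the result is tabulated. The bin count, gradient range, and equal mixture weights are assumptions of this sketch:

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def build_gradient_distribution(n_level, alpha_dict, target_sigma,
                                num_bins=64, gradient_range=1.0, seed=1):
    """Tabulate the level-N gradient probability distribution map as
    (gradient value, distribution probability) pairs."""
    rng = random.Random(seed)
    count = 2 ** n_level
    # 2^N values drawn from the first Gaussian (preset roughness as sigma).
    means = [rng.gauss(0.0, alpha_dict) for _ in range(count)]
    table = []
    for i in range(num_bins):
        g = -gradient_range + 2.0 * gradient_range * i / (num_bins - 1)
        # Equal-weight superposition of the 2^N second Gaussians,
        # approximating the fitted target Gaussian distribution function.
        p = sum(gaussian_pdf(g, m, target_sigma) for m in means) / count
        table.append((g, p))
    return table
```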
16. The method according to any one of claims 1 to 13, wherein the performing illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector and the half-angle vector, and performing flash effect data rendering on the model to be rendered according to the obtained illumination result comprises:
calculating the normal distribution probability of the pixel points based on the projection value of the micro surface normal on the macro normal vector and the target distribution probability, wherein the normal distribution probability is used for representing the probability that the orientation of the micro surface is a second target direction;
calculating first illumination data by using the Fresnel equation in combination with the half-angle vector, the observation vector and a preset surface base reflectivity, wherein the first illumination data represents the ratio of reflected light to total light;
calculating second illumination data by adopting a geometric shadowing function according to the observation vector, the illumination vector and the half-angle vector, wherein the second illumination data represents the amount of light blocked by the surface;
calculating fusion illumination data by adopting a bidirectional reflection distribution function in combination with the normal distribution probability, the first illumination data and the second illumination data;
and calculating the product of the fused illumination data and a preset incident light intensity to obtain the illumination result.
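Claim 16 follows the standard microfacet (Cook-Torrance) specular pipeline: a normal distribution term D, a Fresnel term F, and a geometric shadowing term G, combined by a bidirectional reflection distribution function and multiplied by the incident intensity. A sketch using the common Schlick approximations, which the patent does not necessarily fix as its exact forms:

```python
import math

def fresnel_schlick(h_dot_v, f0):
    # Schlick approximation of the Fresnel equation: ratio of reflected
    # light to total light, from the half-angle/observation cosine and
    # the preset surface base reflectivity f0.
    return f0 + (1.0 - f0) * (1.0 - h_dot_v) ** 5

def geometry_smith(n_dot_v, n_dot_l, k):
    # Smith-style geometric shadowing: how much light the micro-surface
    # blocks along the view and light directions.
    g_v = n_dot_v / (n_dot_v * (1.0 - k) + k)
    g_l = n_dot_l / (n_dot_l * (1.0 - k) + k)
    return g_v * g_l

def specular_brdf(d, n_dot_v, n_dot_l, h_dot_v, f0=0.04, k=0.125):
    # Cook-Torrance style combination D*F*G / (4 (n.v)(n.l)); `d` is the
    # normal distribution probability derived from the target gradient
    # distribution probability.
    f = fresnel_schlick(h_dot_v, f0)
    g = geometry_smith(n_dot_v, n_dot_l, k)
    return d * f * g / max(4.0 * n_dot_v * n_dot_l, 1e-6)

def lit_color(brdf_value, incident_intensity, n_dot_l):
    # Illumination result: fused BRDF value times the preset incident
    # light intensity (with the usual cosine factor).
    return brdf_value * incident_intensity * n_dot_l
```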
17. An in-game flash effect rendering apparatus, comprising:
the gradient calculation module is used for calculating the gradient of the pixel point according to a macro normal vector, an observation vector and an illumination vector of the pixel point in the model to be rendered, wherein the gradient is the projection of a micro surface normal in a world space on a two-dimensional tangent space, and the macro normal vector, the observation vector and the illumination vector are vectors in the world space;
the level determining module is used for determining a target LOD level to which the pixel point belongs based on the texture coordinate of the pixel point and the target particle granularity required by the flash effect, wherein the target LOD level is configured with a gradient probability distribution map, and the gradient probability distribution map comprises the distribution probabilities of different gradient values in different coordinate directions;
the sampling module is used for sampling the gradient probability distribution map by using a target gradient and determining the target distribution probability of the pixel points according to a sampling result, wherein the target distribution probability is used for representing the probability that the projection direction is a first target direction, the target gradient is the gradient of the texels corresponding to the pixel points, and the target gradient is obtained based on the gradient of the pixel points;
and the rendering module is used for carrying out illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector and the half-angle vector and carrying out flash effect rendering on the model to be rendered according to the obtained illumination result.
18. A storage medium having stored thereon a computer program which, when executed by a processor, implements a method according to any one of claims 1 to 16.
19. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 16 via execution of the executable instructions.
CN202210637233.7A 2022-06-07 2022-06-07 Flash effect rendering method and device in game, storage medium and electronic equipment Pending CN115063517A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210637233.7A CN115063517A (en) 2022-06-07 2022-06-07 Flash effect rendering method and device in game, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115063517A true CN115063517A (en) 2022-09-16

Family

ID=83199664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210637233.7A Pending CN115063517A (en) 2022-06-07 2022-06-07 Flash effect rendering method and device in game, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115063517A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6525740B1 (en) * 1999-03-18 2003-02-25 Evans & Sutherland Computer Corporation System and method for antialiasing bump texture and bump mapping
CN103077497A (en) * 2011-10-26 2013-05-01 中国移动通信集团公司 Method and device for zooming image in level-of-detail model
US20150287230A1 (en) * 2014-04-05 2015-10-08 Sony Computer Entertainment America Llc Gradient adjustment for texture mapping for multiple render targets with resolution that varies by screen location
CN111311723A (en) * 2020-01-22 2020-06-19 腾讯科技(深圳)有限公司 Pixel point identification and illumination rendering method and device, electronic equipment and storage medium
CN111583381A (en) * 2020-05-06 2020-08-25 网易(杭州)网络有限公司 Rendering method and device of game resource map and electronic equipment
CN112419466A (en) * 2020-11-20 2021-02-26 苏州幻塔网络科技有限公司 In-game object surface highlight rendering method, device, equipment and storage medium
CN112734896A (en) * 2021-01-08 2021-04-30 网易(杭州)网络有限公司 Environment shielding rendering method and device, storage medium and electronic equipment
CN113144611A (en) * 2021-03-16 2021-07-23 网易(杭州)网络有限公司 Scene rendering method and device, computer storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHAO Wei; LIU Chang; JIA Jinyuan: "Lightmap-Based Collaborative Cloud Rendering System for Web3D Global Illumination", Journal of System Simulation, no. 04, 31 December 2020 (2020-12-31), pages 125 - 135 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116168131A (en) * 2022-12-09 2023-05-26 北京百度网讯科技有限公司 Cloth rendering method and device, electronic equipment and storage medium
CN116168131B (en) * 2022-12-09 2023-11-21 北京百度网讯科技有限公司 Cloth rendering method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11069124B2 (en) Systems and methods for reducing rendering latency
JP4643271B2 (en) Visible surface determination system and method for computer graphics using interval analysis
US11138782B2 (en) Systems and methods for rendering optical distortion effects
WO2017206325A1 (en) Calculation method and apparatus for global illumination
US10924727B2 (en) High-performance light field display simulator
US10553012B2 (en) Systems and methods for rendering foveated effects
US10699467B2 (en) Computer-graphics based on hierarchical ray casting
US20140002458A1 (en) Efficient rendering of volumetric elements
US11138800B1 (en) Optimizations to reduce multi-channel ray casting for color sampling
US20100033482A1 (en) Interactive Relighting of Dynamic Refractive Objects
TW200919376A (en) Real-time luminosity dependent subdivision
Uchida et al. Noise-robust transparent visualization of large-scale point clouds acquired by laser scanning
CN114820906A (en) Image rendering method and device, electronic equipment and storage medium
US20230230311A1 (en) Rendering Method and Apparatus, and Device
Okura et al. Mixed-reality world exploration using image-based rendering
CN104157000B (en) The computational methods of model surface normal
WO2008014384A2 (en) Real-time scenery and animation
CN115063517A (en) Flash effect rendering method and device in game, storage medium and electronic equipment
Frasson et al. Efficient screen-space rendering of vector features on virtual terrains
US20210241502A1 (en) Method for improved handling of texture data for texturing and other image processing tasks
JP2002526843A (en) Energy propagation modeling device
US6677947B2 (en) Incremental frustum-cache acceleration of line integrals for volume rendering
Callieri et al. A realtime immersive application with realistic lighting: The Parthenon
WO2024037116A1 (en) Three-dimensional model rendering method and apparatus, electronic device and storage medium
WO2024124370A1 (en) Model construction method and apparatus, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination