CN115063517B - Flash effect rendering method and device in game, storage medium and electronic equipment - Google Patents



Publication number
CN115063517B
CN115063517B (application CN202210637233.7A)
Authority
CN
China
Prior art keywords: gradient, target, vector, probability, distribution
Prior art date
Legal status
Active
Application number
CN202210637233.7A
Other languages
Chinese (zh)
Other versions
CN115063517A
Inventor
金紫凤
陈家挺
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210637233.7A
Publication of CN115063517A
Application granted
Publication of CN115063517B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 15/00 — 3D [Three Dimensional] image rendering (G — Physics; G06 — Computing; Calculating or Counting; G06T — Image data processing or generation, in general)
    • G06T 15/005 — General purpose rendering architectures
    • G06T 15/04 — Texture mapping
    • G06T 15/50 — Lighting effects
    • A63F 13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions (A — Human Necessities; A63 — Sports; Games; Amusements; A63F — Card, board, or roulette games; indoor games using small moving playing bodies; video games; games not otherwise provided for)
    • A63F 13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 2300/66 — Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science
  • Computer Graphics
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Multimedia
  • Image Generation

Abstract

The disclosure relates to the field of computer technology, and in particular to a flash effect rendering method and apparatus in a game, a storage medium, and an electronic device. The method includes the following steps: calculating the gradient of a pixel point according to the macroscopic normal vector, observation vector, and illumination vector of the pixel point in a model to be rendered, where the gradient is the projection of a micro-surface normal in world space onto a two-dimensional tangent space; determining the target LOD level to which the pixel point belongs based on the texture coordinates of the pixel point and the target particle size required by the flash effect, where the target LOD level is configured with a gradient probability distribution map; sampling the gradient probability distribution map using a target gradient, and determining the target distribution probability of the pixel point according to the sampling result; and performing illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, and rendering the flash effect on the model to be rendered according to the obtained illumination result. The present disclosure improves the rendering efficiency and quality of flash effects.

Description

Flash effect rendering method and device in game, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technology, and more particularly, to a flash effect rendering method in a game, a flash effect rendering apparatus in a game, a computer storage medium, and an electronic device.
Background
With the development of computer and Internet technology, online games have become increasingly popular. Because online games present scenes resembling real life, users can experience the visual impact of the game in such scenes, which strengthens engagement and the sense of realism. Real life is full of flashing materials: frosted metal surfaces, automotive paint, the fine sparkle of cosmetics, glittering ornaments on cloth, microcrystalline glints on snow and sand, and shimmering water surfaces. Rendering these flash effects in games has therefore become a substantial part of game production.
In the related art, the flash effect is rendered by having artists author a high-definition normal map, but such a map is costly to produce and not reusable. In another related art, the flash effect is generated using a hand-authored noise map, but different noise maps must be provided for different rendering requirements, which increases rendering cost; authoring such maps is difficult and requires experienced artists, and the result does not conform to real physical phenomena.
It should be noted that the information in the above background section is provided only to enhance understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art already known to those of ordinary skill in the art.
Disclosure of Invention
The purpose of the present disclosure is to provide a method and apparatus for rendering a flash effect in a game, a computer storage medium, and an electronic device, so as to reduce, at least to a certain extent, the cost and difficulty of rendering the flash effect in a game, and to improve the rendering quality of the flash effect.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to one aspect of the present disclosure, there is provided a flash effect rendering method in a game, including: calculating the gradient of a pixel point according to the macroscopic normal vector, observation vector, and illumination vector of the pixel point in a model to be rendered, where the gradient is the projection of a micro-surface normal in world space onto a two-dimensional tangent space, and the macroscopic normal vector, the observation vector, and the illumination vector are vectors in world space; determining the target LOD level to which the pixel point belongs based on the texture coordinates of the pixel point and the target particle size required by the flash effect, where the target LOD level is configured with a gradient probability distribution map containing the distribution probabilities of different gradient values in different coordinate directions; sampling the gradient probability distribution map using a target gradient, and determining the target distribution probability of the pixel point according to the sampling result, where the target distribution probability represents the probability that the projection is oriented in a first target direction, the target gradient is the gradient of the pixel point at a corresponding texel, and the target gradient is obtained based on the gradient of the pixel point; and performing illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, and performing flash effect rendering on the model to be rendered according to the obtained illumination result.
In an exemplary embodiment of the disclosure, the calculating the gradient of the pixel point according to the macroscopic normal vector, observation vector, and illumination vector of the pixel point in the model to be rendered includes: constructing a tangent space according to the macroscopic normal vector, and converting the observation vector and the illumination vector into the tangent space; computing the vector sum of the observation vector and the illumination vector in the tangent space, and normalizing the vector sum to obtain the half-angle vector; and obtaining the gradient based on the half-angle vector.
In an exemplary embodiment of the present disclosure, the deriving the gradient based on the half-angle vector includes: dividing the first and second components of the half-angle vector by its third component, so as to convert the half-angle vector from a three-dimensional vector into a two-dimensional plane vector; and taking the two-dimensional plane vector as the gradient.
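As a concrete illustration of the two embodiments above, the following is a minimal Python sketch (not the patent's shader code; the tangent-frame construction and helper names are assumptions) of building the half-angle vector in tangent space and projecting it to a two-dimensional gradient:

```python
import math

def slope_from_vectors(normal, view, light):
    """Project the half-angle vector onto the 2D tangent plane.

    All inputs are unit 3-vectors (tuples) in world space; `normal` is the
    macroscopic normal. Returns the gradient (x, y): the half-angle vector's
    first two tangent-space components divided by its third.
    """
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    def normalize(v):
        n = math.sqrt(dot(v, v))
        return (v[0]/n, v[1]/n, v[2]/n)

    # Build an orthonormal tangent frame around the macroscopic normal
    # (any consistent frame works for this sketch).
    helper = (0.0, 1.0, 0.0) if abs(normal[0]) > 0.9 else (1.0, 0.0, 0.0)
    tangent = normalize(cross(helper, normal))
    bitangent = cross(normal, tangent)

    def to_tangent(v):
        return (dot(v, tangent), dot(v, bitangent), dot(v, normal))

    # Half-angle vector: normalized sum of view and light vectors.
    vt, lt = to_tangent(view), to_tangent(light)
    h = normalize((vt[0] + lt[0], vt[1] + lt[1], vt[2] + lt[2]))

    # Gradient: divide the first two components by the third.
    return (h[0] / h[2], h[1] / h[2])
```

For a surface viewed and lit straight along the normal, the half-angle vector coincides with the normal and the gradient is (0, 0), i.e. the micro-surface faces straight up in tangent space.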
In an exemplary embodiment of the present disclosure, the determining, based on the texture coordinates of the pixel point and the target particle size required for the sparkle effect, the target LOD level to which the pixel point belongs includes: computing the derivatives of the texture coordinates uv with respect to the screen-space coordinates xy to obtain a differentiation result, and scaling the differentiation result according to the value of the target particle size to obtain the texture coordinate differentials; calculating the lengths of the vectors corresponding to the texture coordinate differentials to obtain a first length value and a second length value, where the first length value is larger than the second length value; and determining the target LOD level by combining the second length value and a preset threshold on the number of LOD levels.
In an exemplary embodiment of the present disclosure, the target LOD level is determined using the following formula in combination with the second length value and a preset LOD level number threshold:
L=max(0,NLEVELS-1+log2(minorLength))
Wherein, L is the target LOD level, NLEVELS is the LOD level-count threshold, and minorLength is the second length value.
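A hedged Python sketch of this LOD selection, directly applying the formula above; it assumes the uv derivatives have already been scaled by the target particle size, and `nlevels` stands in for the preset NLEVELS threshold:

```python
import math

def target_lod(duv_dx, duv_dy, nlevels=16):
    """Select the LOD level from screen-space uv derivatives.

    duv_dx / duv_dy are the (du, dv) derivatives of the texture
    coordinates with respect to screen x and y (what ddx/ddy return in a
    shader), assumed already scaled by the target particle size.
    """
    len_x = math.hypot(duv_dx[0], duv_dx[1])
    len_y = math.hypot(duv_dy[0], duv_dy[1])
    # First (major) and second (minor) length values of the pixel footprint;
    # the major axis would define the elliptical region used elsewhere.
    major_length, minor_length = max(len_x, len_y), min(len_x, len_y)
    # L = max(0, NLEVELS - 1 + log2(minorLength))
    return max(0.0, nlevels - 1 + math.log2(minor_length))
```

With the defaults, a minor footprint length of 2^-15 maps to level 0 (finest detail) and larger footprints map to coarser levels.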
In an exemplary embodiment of the present disclosure, before the gradient probability distribution map is sampled using the gradient value of the target gradient and the target distribution probability of the pixel point is determined according to the sampling result, the method further includes: constructing an elliptical region for the pixel point in texture space according to the texture coordinates of the pixel point and the texture coordinate differentials; and traversing the texels in the elliptical region and, for each texel, rotating and scaling the gradient of the pixel point to obtain the target gradient of that texel.
In an exemplary embodiment of the present disclosure, the traversing the texels in the elliptical area, for each texel, performing rotation and scaling processing on the gradient of the pixel point to obtain a target gradient of each texel, including: for each texel, acquiring a random rotation angle corresponding to the texel, and rotating the gradient of the pixel point according to the random rotation angle to obtain a rotation gradient; and calculating a gradient scaling multiple by combining preset roughness and a roughness scaling multiple, and scaling the rotating gradient according to the gradient scaling multiple to generate a target gradient of each texel.
In an exemplary embodiment of the disclosure, the obtaining a random rotation angle corresponding to the texel, and rotating the gradient of the pixel point according to the random rotation angle to obtain a rotation gradient includes: converting a texel index of the texel into a consistency index according to the target LOD hierarchy, wherein the consistency index is used for indicating consistency of the random rotation angle corresponding to the texel on different LOD hierarchies; generating a random value based on the consistency index, and taking the random value as the random rotation angle; and rotating the gradient of the pixel point according to the random rotation angle to obtain a rotation gradient.
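A plausible Python sketch of the per-texel rotation step above. The patent does not specify how the consistency index or the random value are computed, so mapping each texel to its level-0 position and seeding a PRNG with it is an assumption that merely satisfies the stated requirement (the same angle at every LOD level):

```python
import math
import random

def rotated_slope(slope, texel_index, lod_level):
    """Rotate a pixel's gradient by a texel-specific random angle.

    The texel index is first mapped to a consistency index that is stable
    across LOD levels, so the same surface point flashes the same way at
    every level of detail.
    """
    ix, iy = texel_index
    # Texel (ix, iy) at level L covers texel (ix << L, iy << L) at level 0:
    # an assumed, but consistent, index mapping.
    consistency_index = (ix << lod_level, iy << lod_level)

    # Deterministic pseudo-random angle derived from the consistency index.
    rng = random.Random(hash(consistency_index))
    angle = rng.uniform(0.0, 2.0 * math.pi)

    c, s = math.cos(angle), math.sin(angle)
    x, y = slope
    return (c * x - s * y, s * x + c * y)
```

Rotation preserves the gradient's length; only its direction is randomized per texel, and texels that cover the same level-0 position receive the same angle.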
In one exemplary embodiment of the present disclosure, the gradient scaling factor is calculated by combining the preset roughness and the roughness scaling factors, where S_α is the gradient scaling factor, α_dict is the preset roughness adopted when the gradient probability distribution map is generated, α is obtained from the roughness map corresponding to the model to be rendered, and α_x and α_y are respectively the roughness scaling factors required by the rotation gradient in the different decomposition directions. (The formula itself appears as an image in the original publication and is not reproduced here.)
In an exemplary embodiment of the disclosure, the sampling the gradient probability distribution map using the gradient value of the target gradient, and determining the target distribution probability of the pixel point according to the sampling result includes: sampling the gradient probability distribution map by using a gradient value of a target gradient, and calculating the joint distribution probability of the pixels corresponding to the texels according to a sampling result; determining Gaussian weights corresponding to the texels according to the distances between the texels and the texture coordinates of the pixel points; and calculating the target distribution probability based on the joint distribution probability and the Gaussian weight corresponding to the texel.
In an exemplary embodiment of the present disclosure, the sampling the gradient probability distribution map using the gradient value of the target gradient, and calculating the joint distribution probability of the pixel point at the corresponding texel according to the sampling result, includes: performing orthogonal decomposition on the target gradient to obtain a first gradient value and a second gradient value; sampling the gradient probability distribution map using the first gradient value to obtain a first distribution probability; sampling the gradient probability distribution map using the second gradient value to obtain a second distribution probability; and performing a fusion calculation on the first distribution probability and the second distribution probability to generate the joint distribution probability.
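The fusion formula in the embodiment below is given only as an image in the source, so the sketch here assumes a product-form fusion with a roughness-rescaling Jacobian, as is common in dictionary-based glint BRDFs; `sample_1d` is a hypothetical stand-in for sampling the one-dimensional gradient probability distribution map:

```python
import math

def joint_probability(slope, sample_1d, alpha_dict, alpha_x, alpha_y):
    """Fuse two 1D distribution samples into a joint probability.

    The target gradient `slope` is orthogonally decomposed into x/y
    components, each rescaled to the dictionary roughness alpha_dict, and
    the two sampled probabilities are multiplied together with the
    corresponding change-of-variables factor. This product form is an
    assumption, not the patent's own (unreproduced) formula.
    """
    sx, sy = slope
    p1 = sample_1d(sx * alpha_dict / alpha_x)  # first distribution probability
    p2 = sample_1d(sy * alpha_dict / alpha_y)  # second distribution probability
    return p1 * p2 * (alpha_dict ** 2) / (alpha_x * alpha_y)
```

With a standard-normal density as the 1D distribution and all roughness factors equal to 1, the joint probability at the origin reduces to the 2D standard-normal peak 1/(2π).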
In an exemplary embodiment of the disclosure, a fusion calculation is performed on the first distribution probability and the second distribution probability to generate the joint distribution probability, where P_3 is the joint distribution probability, P_1 is the first distribution probability, P_2 is the second distribution probability, α_dict is the preset roughness adopted when generating the gradient probability distribution map, α is obtained from the roughness map corresponding to the model to be rendered, and α_x and α_y are respectively the roughness scaling factors required by the target gradient in the different decomposition directions. (The fusion formula appears as an image in the original publication and is not reproduced here.)
In an exemplary embodiment of the disclosure, the calculating the target distribution probability based on the joint distribution probability and the Gaussian weight corresponding to the texel includes: performing a weighted average, based on the Gaussian weights, of the joint distribution probabilities corresponding to the texels in the elliptical region, to obtain the target distribution probability.
In an exemplary embodiment of the present disclosure, the method further includes: calculating the target distribution probability of the pixel point on the LOD level adjacent to the target LOD level; and, according to the target distribution probability on the adjacent LOD level and the target distribution probability on the target LOD level, interpolating the target distribution probability on the target LOD level using the following interpolation formula to obtain the interpolated target distribution probability:
P = P_L(1 - (L - ⌊L⌋)) + P_{L+1}(L - ⌊L⌋)
Wherein P is the interpolated target distribution probability, L is the target LOD level (⌊L⌋ denotes its integer part), L+1 is the adjacent LOD level, P_L is the target distribution probability on the target LOD level, and P_{L+1} is the target distribution probability on the adjacent LOD level;
And updating the target distribution probability of the pixel point on the target LOD level by using the interpolated target distribution probability.
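The interpolation above is a direct linear blend driven by the fractional part of the continuous LOD level; a minimal Python sketch:

```python
import math

def interpolate_lod_probability(level, p_at_level, p_at_next):
    """Blend target distribution probabilities across adjacent LOD levels.

    Implements P = P_L * (1 - (L - floor(L))) + P_{L+1} * (L - floor(L)):
    the fractional part of the continuous LOD level L weights the
    adjacent (coarser) level, avoiding popping as the level changes.
    """
    frac = level - math.floor(level)
    return p_at_level * (1.0 - frac) + p_at_next * frac
```

At an integer level the result is exactly the probability of that level; a quarter of the way to the next level, the blend is 75%/25%.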
In an exemplary embodiment of the disclosure, before the calculating of the gradient of the pixel point according to the macroscopic normal vector, observation vector, and illumination vector of the pixel point in the model to be rendered, the method further includes: generating gradient probability distribution maps corresponding to different LOD levels. The process of generating the gradient probability distribution map of the Nth LOD level includes: generating a first Gaussian distribution function with a preset roughness as a parameter; obtaining 2^N numerical values using the first Gaussian distribution function; generating 2^N second Gaussian distribution functions, each taking one of the values as its mean, based on a preset target standard deviation; fitting and superposing the 2^N second Gaussian distribution functions to generate a target Gaussian distribution function serving as the target gradient probability density; and taking the variable values of the target gradient probability density as gradient values, obtaining the distribution probability corresponding to each gradient value from the target gradient probability density, and storing the gradient values and distribution probabilities correspondingly; where N is an integer not less than 0.
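The level-generation steps above can be sketched in Python as a Gaussian mixture. The exact roughness-to-standard-deviation mapping and the normalisation are assumptions for this sketch; the patent only specifies drawing 2^N values from the first Gaussian and superposing 2^N narrow Gaussians centred on them:

```python
import math
import random

def make_level_distribution(n, alpha_dict, target_std, seed=0):
    """Build the level-N gradient probability density as a Gaussian mixture.

    Draws 2^n means from a zero-mean Gaussian whose width comes from the
    preset roughness alpha_dict, centres a narrow Gaussian (std =
    target_std) on each mean, and averages them. Returns a callable
    density p(x) that could be tabulated into the distribution map.
    """
    rng = random.Random(seed)  # fixed seed: the map must be reproducible
    means = [rng.gauss(0.0, alpha_dict) for _ in range(2 ** n)]

    def density(x):
        norm = 1.0 / (target_std * math.sqrt(2.0 * math.pi))
        return sum(
            norm * math.exp(-0.5 * ((x - m) / target_std) ** 2) for m in means
        ) / len(means)

    return density
```

Averaging normalised Gaussians keeps the mixture a valid probability density (it integrates to 1); as N grows, the mixture approaches the smooth first Gaussian, which matches the coarse-level behaviour described in the disclosure.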
In an exemplary embodiment of the disclosure, the performing illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, and performing flash effect rendering on the model to be rendered according to the obtained illumination result, includes: calculating the normal distribution probability of the pixel point based on the projection value of the micro-surface normal on the macroscopic normal vector and the target distribution probability, where the normal distribution probability represents the probability that the micro-surface is oriented in a second target direction; calculating first illumination data using a Fresnel equation in combination with the half-angle vector, the observation vector, and a preset surface base reflectivity, where the first illumination data represents the ratio of reflected light to total light; calculating second illumination data using a geometric shadowing function according to the observation vector, the illumination vector, and the half-angle vector, where the second illumination data represents the amount of light blocked by the surface; calculating fused illumination data using a bidirectional reflectance distribution function in combination with the normal distribution probability, the first illumination data, and the second illumination data; and taking the product of the fused illumination data and a preset incident light intensity to obtain the illumination result.
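The combination step above follows the usual Cook-Torrance structure. The patent names only "Fresnel equation" and "geometric shadowing function", so the Schlick approximations used below are standard stand-ins, not the patent's specified forms:

```python
import math

def glint_brdf(p_normal, n_dot_v, n_dot_l, v_dot_h, f0=0.04, k=0.125):
    """Combine the terms of a Cook-Torrance-style specular BRDF.

    p_normal plays the role of the normal distribution term D (derived
    from the target distribution probability); F is Fresnel-Schlick with
    base reflectivity f0, and G a Schlick-GGX-style geometry term with
    roughness remapping k. Returns D * F * G / (4 * NdotV * NdotL).
    """
    # Fresnel-Schlick: ratio of reflected light to total light.
    fresnel = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

    def g1(n_dot_x):  # Schlick-GGX masking for a single direction
        return n_dot_x / (n_dot_x * (1.0 - k) + k)

    # Geometry term: fraction of micro-surfaces not blocked by others.
    geometry = g1(n_dot_v) * g1(n_dot_l)

    return p_normal * fresnel * geometry / (4.0 * n_dot_v * n_dot_l)
```

Multiplying the result by the preset incident light intensity (and the light color) then gives the illumination result used for the flash effect. At normal incidence (all cosines 1) the geometry term is 1 and the value reduces to D * f0 / 4.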
According to an aspect of the present disclosure, there is provided a flash effect rendering apparatus in a game, including: a gradient calculation module, configured to calculate the gradient of a pixel point according to the macroscopic normal vector, observation vector, and illumination vector of the pixel point in a model to be rendered, where the gradient is the projection of a micro-surface normal in world space onto a two-dimensional tangent space, and the macroscopic normal vector, the observation vector, and the illumination vector are vectors in world space; a level determining module, configured to determine the target LOD level to which the pixel point belongs based on the texture coordinates of the pixel point and the target particle size required by the flash effect, where the target LOD level is configured with a gradient probability distribution map containing the distribution probabilities of different gradient values in different coordinate directions; a sampling module, configured to sample the gradient probability distribution map using a target gradient and determine the target distribution probability of the pixel point according to the sampling result, where the target distribution probability represents the probability that the projection is oriented in a first target direction, the target gradient is the gradient of the pixel point at a corresponding texel, and the target gradient is obtained based on the gradient of the pixel point; and a rendering module, configured to perform illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, and to perform flash effect rendering on the model to be rendered according to the obtained illumination result.
According to one aspect of the present disclosure, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any of the above.
According to one aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any of the above via execution of the executable instructions.
According to the flash effect rendering method in a game of the exemplary embodiments of the present disclosure, the gradient of a pixel point is calculated from its macroscopic normal vector, observation vector, and illumination vector; the target LOD level to which the pixel point belongs is determined based on the texture coordinates of the pixel point and the target particle size required by the flash effect; the gradient probability distribution map of the target LOD level is sampled using the gradient value of the target gradient corresponding to the pixel point, and the target distribution probability of the pixel point is determined from the sampling result; finally, illumination calculation is performed by combining the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, and the flash effect is rendered on the model to be rendered according to the illumination result. On the one hand, no artist is required to produce a normal map carrying flash-effect detail information; only a conventional macroscopic normal needs to be provided, which is converted into the gradient of the pixel point so that the gradient probability distribution map can be sampled with the gradient value of the resulting target gradient, reducing production cost and difficulty. On the other hand, corresponding gradient probability distribution maps are configured for different LOD levels; since particle definition differs between LOD levels, sampling different target LOD levels yields different target distribution probabilities for the pixel point, so the final flash effect has clear levels of detail and better conforms to real physical phenomena. Furthermore, because the corresponding LOD level is sampled according to the gradient values of different pixel points, identical target distribution probabilities, and hence repetition in the final illumination result, are avoided; the method can adapt to different camera changes and produce different flash effects, yielding a higher-quality flash rendering result.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
FIG. 1 illustrates a flowchart of a flash effect rendering method in a game according to an exemplary embodiment of the present disclosure;
FIG. 2 illustrates a schematic comparison of target particle sizes in accordance with an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a flowchart for determining a target LOD level to which a pixel belongs in accordance with an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a model rendering effect diagram after an elliptical weighted average method and a tri-linear filtering method are employed in accordance with an exemplary embodiment of the present disclosure;
FIG. 5 illustrates a rendering effect diagram with and without filtering according to an exemplary embodiment of the present disclosure;
FIG. 6 illustrates a flowchart of illumination calculation for a model to be rendered in conjunction with a target distribution probability, an observation vector, an illumination vector, and a half angle vector, according to an example embodiment of the present disclosure;
FIG. 7 illustrates a flowchart for calculating a slope of a pixel point from a macroscopic normal vector, an observation vector, and an illumination vector of the pixel point in a model to be rendered according to an exemplary embodiment of the present disclosure;
FIG. 8 illustrates a flowchart of one implementation of filtering pixels according to an exemplary embodiment of the present disclosure;
FIG. 9 illustrates a flowchart for obtaining a target gradient according to an exemplary embodiment of the present disclosure;
FIG. 10 illustrates a flowchart for determining a target distribution probability for a pixel point according to an exemplary embodiment of the present disclosure;
FIG. 11 illustrates a flowchart of generating a slope probability distribution map of an LOD level of an N-th layer in accordance with an exemplary embodiment of the present disclosure;
FIG. 12 illustrates a second Gaussian distribution function plot corresponding from level 0 to level 15 in accordance with an exemplary embodiment of the disclosure;
FIG. 13 illustrates a comparison of rendering results before and after gradient filtering according to an exemplary embodiment of the present disclosure;
FIG. 14 illustrates a comparison of rendering results before and after adjusting the degree of attenuation of a flash effect according to an exemplary embodiment of the present disclosure;
FIG. 15 illustrates a comparison of rendering results using different brightness indexes according to an exemplary embodiment of the present disclosure;
FIG. 16 illustrates a rendering structure diagram of various flash effects according to an exemplary embodiment of the present disclosure;
Fig. 17 illustrates a structural schematic diagram of a flash effect rendering apparatus in a game according to an exemplary embodiment of the present disclosure;
FIG. 18 illustrates a schematic diagram of a storage medium according to an exemplary embodiment of the present disclosure; and
Fig. 19 shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the exemplary embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus detailed descriptions thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the disclosed aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known structures, methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In the related art, in order to obtain game scenes resembling real life, materials with a flashing appearance in the scene must also be rendered with a flash effect. The flash effect refers to granular specular reflection from the micro-facets of a model's surface. In one related art, flash effect rendering is performed by producing a high-definition normal map, but the map's resolution is so high that it occupies excessive memory and is not reusable; and if high-definition normal maps containing rich flash-effect information are produced for each LOD (Level of Detail) level, the production cost is high. In another related art, the flash effect is generated by authoring and sampling a noise map, but different noise maps are needed for different rendering requirements, which increases production cost; authoring them is difficult and demands highly capable art staff; and because the size of a noise map is limited, visible repetition easily occurs, it is difficult to adapt to camera changes, and the quality of the flash effect suffers.
Based on this, the exemplary embodiments of the present disclosure first provide a method for rendering a flash effect in a game. Referring to fig. 1, the flash effect rendering method of an embodiment of the present disclosure may include the following steps S110 to S140:
Step S110, calculating the gradient of the pixel point according to the macroscopic normal vector, the observation vector and the illumination vector of the pixel point in the model to be rendered.
In exemplary embodiments of the present disclosure, according to micro-surface (microfacet) theory, a smooth or rough macroscopic surface is composed of many micro-surfaces, each oriented by its own micro-surface normal. The gradient of a pixel point is calculated from the macroscopic normal vector, the observation vector and the illumination vector of the pixel point, where the gradient is the projection of the micro-surface normal in world space onto the two-dimensional tangent space; once the gradient of the pixel point is obtained, the micro-surface normal can be recovered from it. The macroscopic normal vector of the pixel point is not a high-precision, artist-provided normal vector containing flash-effect information, but an ordinary smooth-surface macroscopic normal vector. The observation vector is the vector from the world coordinates of the pixel point to the position coordinates of the virtual camera, and the illumination vector indicates the direction of sunlight in the scene.
In the embodiment of the disclosure, based on the material file of the model to be rendered, a CPU (Central Processing Unit) reads the geometry corresponding to the material file, organizes the result into vertices, and submits the vertices to a GPU (Graphics Processing Unit). The GPU converts the vertices through a vertex shader from model space to world space, observation space, clipping space and screen space in sequence; the output of the vertex shader is then rasterized, scattering the vertices into pixels in screen space — the pixel points of the model to be rendered in the embodiment of the disclosure — which are processed by a pixel shader. Each pixel carries shading information including, but not limited to, a macroscopic normal vector, an illumination vector, an observation vector, light color, light intensity, texture coordinates, a preset roughness, a metallicity, and basic color information.
The material files of the model to be rendered include, but are not limited to, the geometry file of the model and the basic color map, roughness map, metallicity map and the like prefabricated in three-dimensional modeling software. Model space, also called object space or local space, is specific to each model and rotates along with it. World space is the global coordinate system; in the Unity engine, a model without a parent node is located directly in world space. Observation space is the camera space and, unlike the other spaces, is a right-handed coordinate system. A vertex is converted from observation space to clipping space by multiplying it by a projection matrix, which in essence scales and translates each component (x, y, z) of the vertex. Finally, the view frustum in clipping space is projected into screen space, determining the two-dimensional position coordinates of the pixel points. Spatial conversion methods in the related art are applicable to converting the vertices from model space through world, observation, clipping and screen space in sequence in the embodiments of the disclosure, and are not described in detail here.
The macroscopic normal vector, the observation vector, and the illumination vector in the embodiments of the present disclosure are vectors in world space.
According to the exemplary embodiments of the disclosure, no high-precision normal map containing flash-effect information needs to be produced: by combining the observation vector and the illumination vector, the ordinary smooth-surface macroscopic normal vector is converted into a gradient representing the projection of the micro-surface normal in world space onto the two-dimensional tangent space, from which the micro-surface normal vector can be obtained later, thereby reducing production cost.
Step S120, determining the target LOD level to which the pixel belongs based on the texture coordinates of the pixel and the target particle size required by the sparkling effect.
In an exemplary embodiment of the present disclosure, LOD (Level of Detail) refers to allocating rendering resources to an object model according to the position and importance of its nodes in the display environment, reducing the face count and detail level of unimportant object models; the lower the LOD level, the clearer the detail and the more distinct the particles of the flash effect.
Each target LOD level is provided with a gradient probability distribution map, which contains the distribution probabilities of different gradient values in different coordinate directions, the gradient values being obtained by orthogonal decomposition of the gradient — for example, decomposing the gradient into gradient values in the x direction and the y direction. The target particle size is determined by the actual rendering scene, and its value can be adjusted to the actual rendering requirement: the higher the LOD level, the larger the target particle size, and the lower the LOD level, the smaller the target particle size. For example, the target particle size desired for glints on a water surface is larger, while that desired for a metal surface is smaller, as shown in fig. 2, a schematic comparison of target particle sizes.
In an exemplary embodiment of the present disclosure, an implementation of determining a target LOD level to which a pixel point belongs is provided. Determining the target LOD level to which the pixel belongs based on the texture coordinates of the pixel and the target grain size required for the sparkle effect may include steps S310 to S330:
Step S310, obtaining the differentials of the texture coordinates uv with respect to the screen space coordinates xy, and scaling the result by the value of the target particle size to obtain the texture coordinate differentials, namely A·(∂uv/∂x) and A·(∂uv/∂y), where A is the value of the target particle size.
In an exemplary embodiment of the present disclosure, the texture coordinates are the uv values written into the vertices in the material file for sampling maps. The differentials of the texture coordinates uv with respect to the screen space coordinates xy — that is, how many texture units one screen unit covers — are obtained through the HLSL (High-Level Shader Language) built-in functions ddx and ddy, or the GLSL (OpenGL Shading Language) built-in functions dFdx and dFdy; in other words, the differentials ∂uv/∂x = ddx(uv) and ∂uv/∂y = ddy(uv) are determined.
Further, the differential result is scaled by the value of the target particle size, the value serving as a coefficient that amplifies or reduces the differential result; the larger the differential result, the larger the span of the corresponding texture coordinate movement, and the higher the mipmap (texture mapping) level that needs to be sampled.
Scaling the differential result by the value of the target particle size changes the texture coordinate differentials and thus the LOD level determined from them, thereby adjusting the displayed particle size of the flash effect.
Step S320, calculating the length of the texture coordinate differential corresponding vector to obtain a first length value and a second length value, wherein the first length value is larger than the second length value.
In an exemplary embodiment of the present disclosure, the lengths of the vectors corresponding to the texture coordinate differentials obtained in step S310 may be calculated using a length function, giving the first length value and the second length value. The ratio of the first length value to the second length value can be capped at a set threshold, preventing the subsequent number of samples along the major axis (the direction of the first length value) from growing so large that it degrades the running efficiency of the program.
Step S330, determining the target LOD level by combining the second length value and a preset LOD level number threshold.
In an exemplary embodiment of the present disclosure, the target LOD level is determined using the following formula in combination with the second length value and a preset LOD level number threshold:
L=max(0,NLEVELS-1+log2(minorLength)) (1)
Wherein L is the target LOD level, NLEVELS is the LOD level number threshold, and minorLength is the second length value. The LOD level number threshold may be set according to actual rendering requirements, for example 14, 16 or 18, which is not particularly limited by the embodiments of the present disclosure.
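Steps S310 to S330 can be sketched in Python as follows; the function name and test values are illustrative, not from the patent, and the GPU built-ins ddx/ddy are replaced by precomputed differential tuples.

```python
import math

def target_lod_level(duv_dx, duv_dy, grain_size, n_levels):
    """Steps S310-S330 sketch: scale the texture-coordinate differentials
    by the target grain size A, take the shorter vector's length as
    minorLength, and map it to an LOD level with formula (1)."""
    # Scale both differential vectors by the grain-size value A.
    sx = (grain_size * duv_dx[0], grain_size * duv_dx[1])
    sy = (grain_size * duv_dy[0], grain_size * duv_dy[1])
    len_x = math.hypot(*sx)
    len_y = math.hypot(*sy)
    minor_length = min(len_x, len_y)   # second length value (the major is max)
    # Formula (1): L = max(0, NLEVELS - 1 + log2(minorLength))
    return max(0.0, n_levels - 1 + math.log2(minor_length))
```

Doubling the grain size here raises the computed LOD level by one, which matches the statement that a larger target particle size moves sampling to a higher (coarser) level.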
According to the exemplary embodiment of the disclosure, the target LOD level of a pixel point is determined from its texture coordinates and the target particle size required by the flash effect, so that map sampling is later performed at the target LOD level; since different pixel points sample at different LOD levels, repeated sampling patterns are avoided, the rendering diversity of the flash effect is improved, and the effect adapts to camera changes.
Step S130, sampling the gradient probability distribution map by using the target gradient, and determining the target distribution probability of the pixel point according to the sampling result.
In an exemplary embodiment of the present disclosure, a target distribution probability (also referred to as a target gradient distribution probability) is used to characterize the probability that the projection of the micro-surface normal in world space onto the two-dimensional tangent space points in a first target direction. The target gradient is the gradient of the texel corresponding to the pixel point, and is obtained from the gradient of the pixel point. In the embodiment of the disclosure, the gradient probabilities of the texels are filtered using the elliptical weighted average (EWA) method; of course, other filtering methods, such as trilinear filtering, may also be selected according to actual requirements. Since trilinear filtering is isotropic and does not account for the difference between axes, blurring easily occurs when the line-of-sight direction approaches perpendicular to the normal; therefore, to add anisotropy, the embodiment of the disclosure prefers the elliptical weighted average method for filtering the gradient probabilities of the texels. Fig. 4 compares model rendering results using the elliptical weighted average method and trilinear filtering according to an embodiment of the disclosure; the rendering result of the elliptical weighted average method in the right image is relatively clear. As can be seen from the comparison of rendering with and without filtering shown in fig. 5, the right image, filtered with the elliptical weighted average method, has less noise than the unfiltered result.
Based on the target LOD level determined in step S120, the gradient probability distribution map configured for that level is sampled using the gradient values of the target gradient, and the target distribution probability of the pixel point is determined from the sampling result.
According to the exemplary embodiment of the disclosure, sampling the preconfigured gradient probability distribution map with the gradient values of the target gradient avoids producing a high-precision normal map or noise map containing the flash effect, reducing development cost and difficulty; moreover, the preconfigured gradient probability distribution maps corresponding to different LOD levels are reusable, improving rendering efficiency.
And step S140, carrying out illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector and the half-angle vector, and carrying out flash effect rendering on the model to be rendered according to the obtained illumination result.
In exemplary embodiments of the present disclosure, illumination calculations include, but are not limited to, calculating normal distribution probabilities, ratios of reflected rays to total rays, geometric shading, and the like. Steps S610 to S650 may be included:
In step S610, the normal distribution probability of the pixel point is calculated from the target distribution probability and the projection of the micro-surface normal onto the macroscopic normal vector. The normal distribution probability characterizes the probability that the micro-surface is oriented in a second target direction, i.e. that the micro-surface normal coincides with the half-angle vector: the ratio of the target distribution probability to the 4th power of the projection of the micro-surface normal onto the macroscopic normal can be taken as the normal distribution probability of the pixel point. It should be noted that, since what is sought is the probability that the micro-surface normal coincides with the half-angle vector, the ratio of the target distribution probability to the 4th power of the projection of the half-angle vector onto the macroscopic normal may equivalently be used as the normal distribution probability of the pixel point.
Step S620, calculating first illumination data by using a Fresnel equation in combination with the half angle vector, the observation vector and the preset surface basic reflectivity.
Wherein the first illumination data characterizes a ratio of reflected light to total light. Combining the half angle vector, the observation vector and the preset surface basic reflectivity, calculating first illumination data by adopting the following formula:
F_schlick(h, v, F0) = F0 + (1 − F0)(1 − (h·v))^5 (2)
Wherein F_schlick is the first illumination data, F0 is the preset surface basic reflectivity, h is the half-angle vector, and v is the observation vector. F0 can be calculated from the surface color and metallicity of the model to be rendered, or obtained from related material references, which is not particularly limited in the embodiments of the disclosure.
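Formula (2) is the Schlick approximation of the Fresnel equation and can be sketched in Python as follows (the function name and test vectors are illustrative; F0 is given per color channel):

```python
def fresnel_schlick(h, v, f0):
    """Formula (2): F = F0 + (1 - F0) * (1 - h.v)^5, per channel.
    h and v are unit vectors; f0 is an RGB tuple of base reflectivity."""
    h_dot_v = sum(a * b for a, b in zip(h, v))
    return tuple(c + (1.0 - c) * (1.0 - h_dot_v) ** 5 for c in f0)
```

At normal incidence (h·v = 1) the reflectivity is exactly F0; at grazing angles (h·v → 0) it rises to 1, which is the behavior the flash highlight relies on.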
In step S630, the second illumination data is calculated by using the geometric shading function according to the observation vector, the illumination vector and the half angle vector.
The second illumination data characterizes the amount of light blocked by the surface, and is calculated from the observation vector, the illumination vector and the half-angle vector using the following v-cavity geometric shadowing function:
G = min(1, 2(n·h)(n·v)/(v·h), 2(n·h)(n·l)/(v·h)) (3)
Wherein G is the second illumination data, n is the macroscopic normal vector, h is the half-angle vector, v is the observation vector, and l is the illumination vector.
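The v-cavity geometric term can be sketched in Python as follows. The source's formula is garbled, so this uses the standard Cook-Torrance v-cavity form min(1, 2(n·h)(n·v)/(v·h), 2(n·h)(n·l)/(v·h)) as an assumption; names and test vectors are illustrative.

```python
def g_vcavity(n, h, v, l):
    """V-cavity shadowing-masking (standard Cook-Torrance form, assumed):
    G = min(1, 2(n.h)(n.v)/(v.h), 2(n.h)(n.l)/(v.h)).
    All arguments are unit vectors; v.h is assumed non-zero."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    nh, nv, nl, vh = dot(n, h), dot(n, v), dot(n, l), dot(v, h)
    return min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)
```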
Step S640, calculating fusion illumination data by adopting a bidirectional reflection distribution function in combination with the normal distribution probability, the first illumination data and the second illumination data.
The fused illumination data is calculated by substituting the values of the normal distribution probability, the first illumination data and the second illumination data into the BRDF (Bidirectional Reflectance Distribution Function) formula:
f = D·F·G / (4(n·v)(n·l′)) (4)
Wherein f is the fused illumination data, D is the value of the normal distribution probability, F is the value of the first illumination data, G is the value of the second illumination data, n is the macroscopic normal vector, v is the observation vector, and l′ is the incident direction.
It should be noted that, if only one light source, such as sunlight, exists in the scene, its incident direction is substituted into formula (4) to obtain the illumination data; if at least two light sources exist, such as sunlight and a lamp (a point light source), their incident directions are substituted into formula (4) respectively to obtain at least two corresponding sets of illumination data.
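The combination in formula (4) can be sketched in Python as follows. The source's formula is garbled, so the standard Cook-Torrance denominator 4(n·v)(n·l′) is assumed; the function name and test vectors are illustrative.

```python
def brdf_specular(d, f, g, n, v, l_in):
    """Formula (4) sketch (standard Cook-Torrance form, assumed):
    f_r = D * F * G / (4 (n.v)(n.l')), per color channel of F.
    d: normal distribution probability, f: RGB Fresnel term,
    g: geometric term, n/v/l_in: unit vectors."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = 4.0 * dot(n, v) * dot(n, l_in)
    return tuple(d * fc * g / denom for fc in f)
```

With several light sources, this is simply evaluated once per incident direction l′, matching the note above.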
Step S650, multiplying the fusion illumination data with the preset incident light intensity to obtain an illumination result.
The preset incident light intensity may be the product of the sunlight intensity and the basic color of the model to be rendered. The basic color is obtained from the material file of the model to be rendered by sampling the basic color map provided in advance by the artist. The fused illumination data describes how irradiance arriving from a given incident direction affects the radiance leaving in a given outgoing direction; multiplying it by the preset incident light intensity therefore yields the reflected light intensity — the illumination result — which reflects the rendering requirement of the flash effect.
In an exemplary embodiment of the present disclosure, an implementation of calculating a gradient of a pixel point is provided. According to the macroscopic normal vector, the observation vector and the illumination vector of the pixel point in the model to be rendered, calculating the gradient of the pixel point may include steps S710 to S730:
Step S710, constructing a tangent space according to the macroscopic normal vector, and converting the observation vector and the illumination vector into the tangent space.
In an exemplary embodiment of the present disclosure, the tangent space is a space composed of the normal, the tangent and the bitangent, and the LookAt algorithm may be used to calculate the three axes of the tangent space from the macroscopic normal vector. The macroscopic normal vector is taken as the z axis, and the observation vector and the illumination vector are converted into the tangent space.
Specifically, the process of calculating the three axes of the tangent space from the macroscopic normal vector using the LookAt algorithm includes: determining the vertical-direction vector (0, 1, 0); taking the cross product of the vertical-direction vector and the macroscopic normal vector as the tangent; taking the cross product of the tangent and the macroscopic normal vector as the bitangent; and taking the macroscopic normal vector, the tangent and the bitangent as the z, x and y axes of the tangent space respectively.
The process of converting the observation vector into tangent space may include: and respectively calculating projections of the observation vector on three axes of the tangent space to form a three-dimensional vector of the tangent space.
Accordingly, the process of converting the illumination vector into the tangential space is similar, and this disclosure will not be repeated.
Step S720, the vector sum of the observation vector and the illumination vector in the tangent space is obtained, and the vector sum is normalized to obtain a half-angle vector.
The vector sum of the observation vector and the illumination vector in the tangent space is obtained, then the vector sum is normalized, and the normalized vector is used as a half-angle vector.
In step S730, a gradient is obtained based on the half angle vector.
In an exemplary embodiment of the present disclosure, the half angle vector may be converted to a gradient. Wherein, deriving the gradient based on the half angle vector may be: the half angle vector is converted from a three-dimensional vector to a two-dimensional plane vector, and the obtained two-dimensional plane vector is taken as a gradient.
For example, if the half-angle vector is (x, y, z), dividing the value of x and the value of y respectively by the value of z gives (x/z, y/z) as the gradient of the pixel point.
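Steps S710 to S730 can be sketched end-to-end in Python as follows (function names and test vectors are illustrative; the construction degenerates when the normal is parallel to the vertical vector, which a real shader would have to guard against):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pixel_gradient(normal, view, light):
    """Steps S710-S730 sketch: build a tangent space from the macroscopic
    normal (LookAt-style, order as described in step S710), convert the
    view and light vectors into it, normalize their sum into the half-angle
    vector, and project it to the 2D gradient (x/z, y/z)."""
    n = normalize(normal)                     # z axis
    t = normalize(cross((0.0, 1.0, 0.0), n))  # tangent, x axis
    b = cross(t, n)                           # bitangent, y axis
    to_tangent = lambda v: (dot(v, t), dot(v, b), dot(v, n))
    v_t = to_tangent(normalize(view))
    l_t = to_tangent(normalize(light))
    h = normalize(tuple(a + c for a, c in zip(v_t, l_t)))  # half-angle vector
    return (h[0] / h[2], h[1] / h[2])         # gradient of the pixel point
```

When view and light both coincide with the normal, the half-angle vector is the normal itself and the gradient is (0, 0), i.e. a perfectly aligned micro-surface.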
According to the embodiments of the disclosure, converting the macroscopic normal vector of the model to be rendered into the gradient of the pixel point avoids producing a high-precision normal map containing the flash effect — only an ordinary smooth-surface map is needed — reducing production difficulty; and sampling the gradient probability distribution map with the gradient values of the obtained gradient makes the gradient probability distribution map reusable, improving rendering efficiency.
In an exemplary embodiment of the present disclosure, an implementation of filtering pixels is also provided. Step S810 and step S820 may also be performed before sampling the gradient probability distribution map using the gradient value of the target gradient and determining the target distribution probability of the pixel point according to the sampling result:
Step S810, constructing an elliptical region of the pixel point in the texture space according to the texture coordinates and the texture coordinate differentiation of the pixel point.
In an exemplary embodiment of the present disclosure, first, the first length value and the second length value, obtained by calculating the lengths of the vectors corresponding to the texture coordinate differentials, may be taken as the major axis and the minor axis of an ellipse respectively, and the rotation angle of the ellipse calculated; the elliptical region in texture space is then constructed from the major axis, the minor axis and the rotation angle.
If the texture coordinates of the current pixel point are (u, v), the left boundary of the bounding box corresponding to the elliptical region is u − √(A²cos²C + B²sin²C), the right boundary is u + √(A²cos²C + B²sin²C), the upper boundary is v + √(A²sin²C + B²cos²C), and the lower boundary is v − √(A²sin²C + B²cos²C), where A is the major axis, B is the minor axis, and C is the rotation angle; texels outside the elliptical region are filtered out.
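The boundary expressions are garbled in the source; the sketch below assumes the standard axis-aligned bounding box of a rotated ellipse, which is consistent with the A/B/C definitions given. Function name and test values are illustrative.

```python
import math

def ellipse_bounding_box(u, v, a, b, c):
    """Axis-aligned bounding box of an ellipse centred at (u, v) with
    semi-axes a (major) and b (minor), rotated by angle c in radians.
    Half-extents (standard result, assumed):
      horizontal: sqrt(a^2 cos^2 c + b^2 sin^2 c)
      vertical:   sqrt(a^2 sin^2 c + b^2 cos^2 c)
    Returns (left, right, lower, upper)."""
    half_w = math.sqrt((a * math.cos(c)) ** 2 + (b * math.sin(c)) ** 2)
    half_h = math.sqrt((a * math.sin(c)) ** 2 + (b * math.cos(c)) ** 2)
    return (u - half_w, u + half_w, v - half_h, v + half_h)
```

Only texels inside this box need to be tested against the ellipse equation, which bounds the traversal cost of the EWA filter.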
Step S820, traversing texels in the elliptical area, and rotating and scaling the gradient of the pixel point for each texel to obtain the target gradient of each texel.
In an exemplary embodiment of the present disclosure, acquiring the target gradient of each texel may include step S910 and step S920:
Step S910, for each texel, obtaining a random rotation angle corresponding to the texel, and rotating the gradient of the pixel point according to the random rotation angle to obtain a rotation gradient.
In an exemplary embodiment of the present disclosure, the texel index of a texel may first be converted to a consistency index according to the target LOD level. The texel index is the lookup index of the texel region; the higher the LOD level, the fewer the texels. For example, when the LOD level is 0, there are n texels in the u or v direction and the corresponding texel indices are 0, 1, 2, 3, …, n−1; when the LOD level is L, there are n/2^L texels in the u or v direction and the corresponding texel indices are 0, 1, 2, …, n/2^L−1. The consistency index is used to keep the random rotation angles of corresponding texels consistent across LOD levels, that is, to ensure that when the LOD level changes, the gradients of texels in the same region keep the same random rotation angle.
The consistency index is calculated from the texel index by multiplying the texel index by 2^L. For example, when the LOD level is L, the consistency indices are 0, 2^L, 2·2^L, 3·2^L, 4·2^L, …, n−2^L.
Further, after obtaining the consistency indexes of a texel in the transverse and longitudinal directions (the x and y directions), the two consistency indexes are fused with a preset polynomial to obtain a value used as a random seed. The preset polynomial may be linear, for example Y = c1·Y1 + c2·Y2, where Y is the resulting value, Y1 and Y2 are the two consistency indexes, and c1 and c2 are their respective coefficients.
After the random seed is obtained, a hash function can map it to a value used as the random rotation angle, and the gradient of the pixel point is rotated by that angle to obtain the rotation gradient. The hash function may be selected according to actual requirements, for example an IQ (Inigo Quilez) hash, which is not limited by this disclosure. It should be noted that the initial gradient of the pixel corresponding to the texel is the gradient of the pixel point, which is why the gradient of the pixel point is what gets rotated by the random rotation angle.
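Step S910 can be sketched in Python as follows. The polynomial coefficients and the sine-based hash below are illustrative stand-ins (an IQ-style hash, as the text permits any hash), not the patent's exact choices.

```python
import math

def rotate_gradient(gradient, texel_xy, lod_level):
    """Step S910 sketch: derive a per-texel random rotation angle from
    consistency indices and rotate the pixel's gradient by it."""
    # Consistency indices: texel index times 2^L, so the same surface
    # region keeps the same seed across LOD levels.
    ix = texel_xy[0] * (2 ** lod_level)
    iy = texel_xy[1] * (2 ** lod_level)
    seed = 3.0 * ix + 7.0 * iy  # linear polynomial c1*Y1 + c2*Y2 (coefficients assumed)
    # IQ-style hash: fract(sin(seed) * large constant), mapped to [0, 2*pi).
    angle = (math.sin(seed) * 43758.5453) % 1.0 * 2.0 * math.pi
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    x, y = gradient
    return (x * cos_a - y * sin_a, x * sin_a + y * cos_a)
```

A pure rotation preserves the gradient's length, so only the orientation of the sampled distribution varies from texel to texel.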
Rotating the gradient by a random angle makes each texel's result of sampling the gradient probability distribution map different, enriching the subsequent illumination results based on the target distribution probability and the diversity of the rendering.
Step S920, calculating a gradient scaling multiple by combining the preset roughness and the roughness scaling multiple, and scaling the rotating gradient according to the gradient scaling multiple to generate the target gradient of each texel.
Wherein, according to the preset roughness and the roughness scaling multiples, the gradient scaling multiple can be calculated according to the following formula:
S_α = α_dict/(α·α_x) in the x direction, S_α = α_dict/(α·α_y) in the y direction (5)
Wherein S_α is the gradient scaling multiple, α_dict is the preset roughness adopted when generating the gradient probability distribution map, α is obtained from the roughness map corresponding to the model to be rendered, and α_x and α_y are the roughness scaling multiples required by the rotation gradient in the respective decomposition directions.
Further, the rotation gradient is scaled by the gradient scaling multiple. Based on formula (5), the roughness map α provided by the artist can be reused, so artists do not need to make separate roughness maps for different directions — only the roughness scaling multiples need adjusting — improving the efficiency of the rendering work.
In an exemplary embodiment of the present disclosure, an implementation of determining joint distribution probabilities for pixel points is provided. Sampling the gradient probability distribution map using the gradient value of the target gradient, and determining the target distribution probability of the pixel point according to the sampling result may include steps S1010 to S1040:
In step S1010, the target gradient is orthogonally decomposed to obtain a first gradient value and a second gradient value.
Wherein the target gradient is orthogonally decomposed, for example into the orthogonal x and y directions.
In step S1020, the gradient probability distribution map is sampled using the first gradient value to obtain a first distribution probability corresponding to the first gradient value.
In step S1030, the gradient probability distribution map is sampled using the second gradient value to obtain a second distribution probability corresponding to the second gradient value.
Step S1040, fusion calculation is carried out on the first distribution probability and the second distribution probability, and joint distribution probability of pixels corresponding to the texels is generated.
In an exemplary embodiment of the present disclosure, the first distribution probability and the second distribution probability are fused with the following formula to generate the joint distribution probability of the pixel point corresponding to the texel:
P3 = P1·P2·(α_dict/(α·α_x))·(α_dict/(α·α_y))
wherein P3 is the joint distribution probability, P1 is the first distribution probability, P2 is the second distribution probability, α_dict is the preset roughness adopted when generating the gradient probability distribution map, α is obtained from the roughness map corresponding to the model to be rendered, and α_x and α_y are the roughness scaling multiples required by the target gradient in the respective decomposition directions.
Further, the target distribution probability may be calculated based on the joint distribution probability of the texel corresponding to the pixel point and the gaussian weight corresponding to the texel.
In an exemplary embodiment of the present disclosure, the obtaining a gaussian weight corresponding to texels may be: and determining Gaussian weights corresponding to the texels according to the distances between the texels and the texture coordinates of the pixel points.
Specifically, based on the determined elliptical region, if the texture coordinates of the current pixel point are (u, v), the left boundary of the bounding box corresponding to the elliptical region is u − √(A²cos²C + B²sin²C), the right boundary is u + √(A²cos²C + B²sin²C), the upper boundary is v + √(A²sin²C + B²cos²C), and the lower boundary is v − √(A²sin²C + B²cos²C). Traversing from the left boundary to the right boundary and from the upper boundary to the lower boundary of the bounding box, the distance from each texel's texture coordinates to the center of the elliptical region is calculated; whether the texel lies inside the elliptical region is judged from the ellipse formula and the distance; and if it does, its weight is computed from the distance to the center using a standard Gaussian function, the sum of the weights being recorded (and ensured to be 1) for the weighted average.
A weighted average of the joint distribution probabilities of the texels in the elliptical region, using the Gaussian weights, then gives the target distribution probability of the pixel point.
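The Gaussian-weighted averaging step can be sketched in Python as follows; the input representation (precomputed distances and joint probabilities for in-ellipse texels) and the unit-variance Gaussian are illustrative assumptions.

```python
import math

def ewa_filter(samples):
    """EWA averaging sketch: `samples` is a list of
    (distance_to_ellipse_centre, joint_probability) pairs for texels
    already determined to lie inside the ellipse. A Gaussian of the
    distance gives each weight; weights are normalised to sum to 1."""
    weights = [math.exp(-0.5 * d * d) for d, _ in samples]
    total = sum(weights)
    return sum(w * p for w, (_, p) in zip(weights, samples)) / total
```

Normalising by the recorded weight sum is what makes the result a proper weighted average, independent of how many texels fall inside the ellipse.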
In the exemplary embodiment of the disclosure, the gradient is a two-dimensional vector. If a gradient probability distribution map were pre-stored for the full two-dimensional gradient, the time and space complexity would both be O(n²); by instead decomposing the gradient into two gradient values, sampling the gradient distribution map with each, and fusing the two sampling results, the time and space complexity are both O(n), reducing the amount of computation. For example, with n possible gradient values per axis, only n operations per axis are needed to compute the gradient probability densities (the gradient probability distribution map), while the two gradient values combine into n² possible two-dimensional gradients; computing the probability density of the two-dimensional gradient directly from the start would require n² operations.
In an exemplary embodiment of the present disclosure, an implementation of interpolating the target distribution probability is further provided, so as to update the target distribution probability of the pixel point on the target LOD level with the interpolated target distribution probability, which may include:
calculating target distribution probability of the pixel points on adjacent LOD levels of the target LOD levels;
According to the target distribution probability on the adjacent LOD level and the target distribution probability on the target LOD level, the target distribution probability on the target LOD level is interpolated by adopting the following interpolation formula to obtain the interpolated target distribution probability:
P = P_L × (1 − (L − [L])) + P_{L+1} × (L − [L])

wherein P is the target distribution probability after interpolation, L is the target LOD level, [L] is L rounded down, L+1 is the adjacent LOD level, P_L is the target distribution probability on the target LOD level, and P_{L+1} is the target distribution probability on the adjacent LOD level;
further, the target distribution probability on the target LOD level is updated with the interpolated target distribution probability, so that illumination calculation is performed on the model to be rendered by combining the updated target distribution probability, the observation vector, the illumination vector, and the half-angle vector, and flash effect rendering is performed on the model to be rendered according to the obtained illumination result.
It should be noted that, the process of calculating the target distribution probability of the pixel point on the adjacent LOD level of the target LOD level is the same as the process of calculating the target distribution probability of the pixel point corresponding to the target LOD level in the above embodiment of the present disclosure, and will not be described herein.
Through this interpolation of the target distribution probability, the target distribution probability transitions smoothly between LOD levels, avoiding obvious seams in the rendering result caused by hard transitions between different LOD levels.
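The interpolation formula above can be expressed directly in code (a minimal sketch; `p_level` and `p_next` stand for P_L and P_{L+1}, and `lod` is the fractional target LOD level L):

```python
import math

def interpolate_target_probability(p_level, p_next, lod):
    """P = P_L * (1 - (L - [L])) + P_{L+1} * (L - [L]), where [L] = floor(L)."""
    frac = lod - math.floor(lod)
    return p_level * (1.0 - frac) + p_next * frac
```

For example, at L = 3.25 with P_L = 0.8 and P_{L+1} = 0.4, the interpolated probability is 0.8 × 0.75 + 0.4 × 0.25 = 0.7, and an integer L returns P_L unchanged.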
In an exemplary embodiment of the present disclosure, an implementation of generating a gradient probability distribution map is also provided. The process of generating the gradient probability distribution map of the N-th LOD level may include steps S1110 to S1150:
In step S1110, a first Gaussian distribution function is generated with the preset roughness as a parameter.

Here, the first Gaussian distribution function is generated by taking the preset roughness as the standard deviation and 0 as the mean.
In step S1120, 2^N values are obtained by using the first Gaussian distribution function.

That is, 2^N values are obtained from the first Gaussian distribution function whose standard deviation is the preset roughness and whose mean is 0, where N is the target LOD level. For example, if the target LOD level is 0, 1 value is obtained; if the target LOD level is 1, 2 values are obtained; and so on, the values corresponding to each LOD level are obtained respectively.
In step S1130, based on the preset target standard deviation, 2^N second Gaussian distribution functions are generated by taking the values obtained in step S1120 as the means.
In an exemplary embodiment of the present disclosure, based on the preset target standard deviation, the values obtained in step S1120 are used as means to generate the second Gaussian distribution functions corresponding to each LOD level. For example, if the target LOD level is 0, 1 value is obtained; taking it as the mean and the preset target standard deviation as a parameter, 1 second Gaussian distribution function is generated. If the target LOD level is 1, a first value and a second value are obtained; taking the first value as the mean and the preset target standard deviation as a parameter, one second Gaussian distribution function is generated, and taking the second value as the mean and the preset target standard deviation as a parameter, another is generated, giving 2 second Gaussian distribution functions. By analogy, the second Gaussian distribution functions corresponding to each LOD level are generated. Fig. 12 shows a diagram of the second Gaussian distribution functions corresponding to levels 0 to 15 according to an embodiment of the present disclosure.
In step S1140, the 2^N second Gaussian distribution functions are fitted and superposed to generate a target Gaussian distribution function serving as the target gradient probability density.
In an exemplary embodiment of the present disclosure, for each LOD level, the obtained 2^N second Gaussian distribution functions are fitted and superposed as the target gradient probability density. For example, if the target LOD level is 1, the obtained two second Gaussian distribution functions are fitted; if the target LOD level is 2, the obtained four second Gaussian distribution functions are fitted; and so on, the target gradient probability density corresponding to each LOD level is obtained.
In step S1150, the variable value of the target gradient probability density is used as the gradient value, the target gradient probability density is used to obtain the distribution probability corresponding to each gradient value, and the gradient value and the distribution probability are stored correspondingly.
In an exemplary embodiment of the disclosure, the variable values of the target gradient probability density are taken as gradient values, the distribution probability corresponding to each gradient value is obtained from the target gradient probability density, and the gradient values and distribution probabilities are stored correspondingly; that is, the distribution probabilities of different gradient values in different coordinate directions are stored, with the obtained target gradient probability density saved in a picture format. The variation range of the variable may be determined based on the actual rendering scene, where α_dict is the preset roughness adopted when the gradient probability distribution map is generated. The number of samples of the variable within the variation range may also be determined according to the actual rendering scene; the sampling number determines the resolution of the gradient probability distribution map, and the more samples there are, the higher the resolution. In embodiments of the present disclosure, the interval between sampled variable values within the variation range may also vary.
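Steps S1110 to S1150 can be sketched end to end as follows. This is an illustrative Python sketch: the roughness, target standard deviation, variation range, and sample count are all assumed values, the superposition is a plain average of the second Gaussians, and the per-level seeding is a hypothetical way to make the map reproducible.

```python
import math
import random

def gaussian(x, mean, sigma):
    # probability density of a normal distribution
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def build_distribution_map(level, preset_roughness=0.5, target_sigma=0.05,
                           lo=-2.0, hi=2.0, samples=64):
    """Tabulate the target gradient probability density for one LOD level."""
    # S1110/S1120: draw 2^N values from the first Gaussian
    # (mean 0, standard deviation = preset roughness)
    rng = random.Random(level)  # fixed seed so the map is reusable
    means = [rng.gauss(0.0, preset_roughness) for _ in range(2 ** level)]

    # S1130/S1140: one second Gaussian per drawn value (mean = drawn value,
    # standard deviation = preset target standard deviation), superposed
    def target_density(x):
        return sum(gaussian(x, m, target_sigma) for m in means) / len(means)

    # S1150: store density values over the variable's variation range;
    # `samples` controls the resolution of the stored map
    step = (hi - lo) / (samples - 1)
    return {round(lo + i * step, 6): target_density(lo + i * step)
            for i in range(samples)}
```

In a real pipeline the returned table would be baked into a texture; here a dictionary keyed by gradient value stands in for the picture-format storage.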
It should be noted that, in the embodiment of the present disclosure, in order to increase the diversity of rendering results, the number of gradient probability distribution maps may also be increased, which is not limited in particular in the embodiment of the present disclosure.
According to the embodiments of the disclosure, a gradient probability distribution map is generated for each LOD level, achieving reusability: the gradient probability distribution map can be sampled with the gradient value of the target gradient to obtain the target distribution probability of the pixel point, which is then used in the illumination calculation for the model to be rendered. This improves rendering performance and requires no art staff to produce a normal map containing the flashing effect; moreover, since sampling is performed based on the LOD level to which the pixel point belongs, the diversity of sampling results, and hence of rendering results, is increased.
In an exemplary embodiment of the present disclosure, an implementation of adjusting a flash effect is also provided. Embodiments of the present disclosure may perform at least one of the following operations to change the flash effect of the model to be rendered:
1) Taking the density parameter of the display particles of the flashing effect as a filtering threshold, the texels are filtered according to the comparison between the random seeds corresponding to the texels' consistency indexes and the filtering threshold, and the gradients corresponding to the remaining texels are used as target gradients, so as to adjust the display density of the flashing effect and avoid hard boundaries in the rendering result. As shown in fig. 13, by filtering the gradients, the rendering result in the right graph has no hard boundaries.
2) Changing the target LOD level corresponding to the pixel point changes the displayed grain granularity of the sparkle effect: the higher the target LOD level, the larger the displayed grain size of the sparkle effect, so the grain size is changed by adjusting the number of the target LOD level, as shown in fig. 2.
3) The vector distance between the world coordinates of the pixel point and the position coordinates of the virtual camera is adjusted so that the attenuation of the flash effect changes with that distance. In real life, a flash effect generally needs to be observed at close range, and its brightness weakens as the distance grows, so the distance-attenuation coefficient needs to be adjusted. In the embodiment of the present disclosure, brightness and distance follow a negative exponential relationship, with the attenuation coefficient used as the coefficient of the negative exponent. For example, rhinestones on clothing are still visible at a longer distance, so their attenuation coefficient is small, whereas the sparkle particles in car paint need to be observed up close, so their attenuation coefficient is large. As shown in fig. 14, after the attenuation of the flash effect is adjusted to change with the vector distance between the world coordinates of the pixel point and the position coordinates of the virtual camera, the rendering result resembles a real-life flash effect.
4) The micro-surface parameter of the model to be rendered is adjusted and used as a coefficient of the target LOD level obtained in step S330, thereby changing the LOD level to which the pixel point belongs and, in turn, the display intensity of the flashing effect. For example, when the target LOD level exceeds 17 layers, the larger the micro-surface density, the weaker the displayed flashing effect and the closer it comes to the specular reflection of a smooth surface, which is suitable for transparent plastic materials such as raincoats; conversely, when the micro-surface density is small, the fine particles have a strong sense of fragmentation and the flashing effect is more obvious, which is suitable for materials that produce sharp highlights, such as frosted metals and gemstones.
5) The obtained illumination result is taken as the base of an exponential function and the brightness parameter as its exponent, so as to adjust the brightness of the model to be rendered. As shown in fig. 15, the left graph is the rendering result with a brightness exponent of 0.01, and the right graph with a brightness exponent of 1.2.
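Adjustments 3) and 5) can be sketched as follows. The helper names are hypothetical; the negative-exponential distance falloff and the brightness exponent follow the description above, while all parameter values are illustrative.

```python
import math

def distance_attenuation(illumination, distance, attenuation_coefficient):
    # brightness falls off as a negative exponential of the view distance;
    # a small coefficient (e.g. rhinestones on clothing) stays visible farther
    # than a large one (e.g. car-paint sparkle)
    return illumination * math.exp(-attenuation_coefficient * distance)

def brightness_adjust(illumination, brightness_exponent):
    # illumination result as the base, brightness parameter as the exponent
    return illumination ** brightness_exponent
```

With an illumination value in (0, 1), an exponent below 1 brightens the result and an exponent above 1 darkens it, matching the contrast between the two renderings in fig. 15.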
In summary, the in-game flash effect rendering method according to the exemplary embodiments of the present disclosure requires no artist to produce a high-definition normal map carrying the flash effect; only an ordinary smooth-surface normal needs to be provided, which is converted into the gradient of the pixel point so that the gradient probability distribution map can be sampled with the gradient value, reducing production cost and difficulty. Corresponding gradient probability distribution maps are configured for different LOD levels; since the particle definition differs between LOD levels, sampling at different target LOD levels yields different target distribution probabilities for the pixel points, so the final sparkle effect has clear levels of detail and better matches the real physical phenomenon. On the other hand, because the corresponding LOD level is sampled according to the gradient values of different pixel points, obtaining identical target distribution probabilities, and hence repetition in the final illumination result, is avoided; the method can adapt to different lens changes and produce different flash effects, yielding a higher-quality flash rendering result. In addition, through adjustment of various parameters, a variety of flash effects can be rendered by the in-game flash effect rendering method of the embodiments of the present disclosure. Fig. 16 shows, respectively, frosted metal surfaces, garment surfaces, and the like obtained by this method; for rendering scenes with various flash effects, the method of the embodiments of the present disclosure is universal, and the effects are not listed one by one here.
In an exemplary embodiment of the present disclosure, there is also provided a flash effect rendering apparatus in a game. Referring to fig. 17, the flash effect rendering apparatus 1700 in the game may include:
The gradient calculation module 1710 is configured to calculate a gradient of a pixel point in a model to be rendered according to a macroscopic normal vector, an observation vector and an illumination vector of the pixel point, where the gradient is a two-dimensional vector used for representing a position feature of the pixel point in the model to be rendered, and the macroscopic normal vector, the observation vector and the illumination vector are vectors in world space;
A level determining module 1720, configured to determine, based on texture coordinates of the pixel point and a target particle size required for the sparkle effect, a target LOD level to which the pixel point belongs, where the target LOD level is configured with a gradient probability distribution map, and the gradient probability distribution map includes distribution probabilities of different gradient values in different coordinate directions;
The sampling module 1730 is configured to sample the gradient probability distribution map with a target gradient, and determine a target distribution probability of the pixel point according to a sampling result, where the target distribution probability is used to represent a probability that the projection is oriented in a first target direction, the target gradient is a gradient of a texel corresponding to the pixel point, and the target gradient is obtained based on the gradient of the pixel point;
The rendering module 1740 is configured to perform illumination calculation on the model to be rendered in combination with the target distribution probability, the observation vector, the illumination vector, and the half-angle vector, and perform flash effect rendering on the model to be rendered according to the obtained illumination result.
Since each functional module of the flash effect rendering device in the game according to the exemplary embodiment of the present disclosure is the same as that in the above-described embodiment of the flash effect rendering method in the game, a detailed description thereof will be omitted.
It should be noted that although several modules or units of the flash effect rendering device in the game are mentioned in the above detailed description, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, in exemplary embodiments of the present disclosure, a computer storage medium capable of implementing the above-described method is also provided, on which a program product capable of implementing the method described above in this specification is stored. In some possible embodiments, the various aspects of the present disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
With reference to fig. 18, a program product 1800 for implementing the above-described method, which may employ a portable compact disc read-only memory (CD-ROM) and include program code, and which may be run on a terminal device, such as a personal computer, is described according to an exemplary embodiment of the present disclosure. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided. Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.) or an embodiment combining hardware and software aspects may be referred to herein as a "circuit," module "or" system.
An electronic device 1900 according to such an embodiment of the disclosure is described below with reference to fig. 19. The electronic device 1900 shown in fig. 19 is merely an example and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure.
As shown in FIG. 19, the electronic device 1900 may be in the form of a general purpose computing device. Components of the electronic device 1900 may include, but are not limited to: the at least one processing unit 1910, the at least one storage unit 1920, a bus 1930 connecting the various system components (including the storage unit 1920 and the processing unit 1910), and a display unit 1940.
Wherein the storage unit stores program code that is executable by the processing unit 1910 such that the processing unit 1910 performs steps according to various exemplary embodiments of the present disclosure described in the above-described "exemplary methods" section of the present specification.
The storage unit 1920 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 1921 and/or cache memory 1922, and may further include Read Only Memory (ROM) 1923.
The storage unit 1920 may also include a program/utility 1924 having a set (at least one) of program modules 1925, such program modules 1925 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1930 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
Electronic device 1900 may also communicate with one or more external devices 2000 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with electronic device 1900, and/or with any devices (e.g., routers, modems, etc.) that enable electronic device 1900 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1950. Also, electronic device 1900 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 1960. As shown, network adapter 1960 communicates with other modules of electronic device 1900 via bus 1930. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 1900, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (19)

1. A method for rendering a sparkling effect in a game, comprising:
Calculating the gradient of a pixel point according to a macroscopic normal vector, an observation vector and an illumination vector of the pixel point in a model to be rendered, wherein the gradient is the projection of a micro surface normal in world space on a two-dimensional tangential space, and the macroscopic normal vector, the observation vector and the illumination vector are vectors in the world space;
Determining a target LOD level to which the pixel point belongs based on texture coordinates of the pixel point and target particle sizes required by the flashing effect, wherein the target LOD level is configured with a gradient probability distribution map, and the gradient probability distribution map comprises distribution probabilities of different gradient values in different coordinate directions;
sampling the gradient probability distribution map by using a target gradient, and determining a target distribution probability of the pixel point according to a sampling result, wherein the target distribution probability is used for representing the probability that the projection is oriented in a first target direction, the target gradient is the gradient of the texel corresponding to the pixel point, and the target gradient is obtained based on the gradient of the pixel point;
And carrying out illumination calculation on the model to be rendered by combining the target distribution probability, the observation vector, the illumination vector and the half-angle vector, and carrying out flash effect rendering on the model to be rendered according to the obtained illumination result.
2. The method of claim 1, wherein calculating the gradient of the pixel point according to the macroscopic normal vector, the observation vector, and the illumination vector of the pixel point in the model to be rendered comprises:
Constructing a tangent space according to the macroscopic normal vector, and converting the observation vector and the illumination vector into the tangent space;
Solving a vector sum of the observation vector and the illumination vector in the tangent space, and carrying out standardization processing on the vector sum to obtain a half-angle vector;
The gradient is obtained based on the half angle vector.
3. The method of claim 2, wherein the deriving the grade based on the half angle vector comprises:
Dividing the half-angle vector by a third component of the half-angle vector using the first and second components of the half-angle vector, respectively, to convert the half-angle vector from a three-dimensional vector to a two-dimensional plane vector;
and taking the two-dimensional plane vector as the gradient.
4. The method of claim 1, wherein the determining the target LOD level to which the pixel belongs based on the texture coordinates of the pixel and the target particle size required for the sparkle effect comprises:
Obtaining the differential of the texture coordinates uv with respect to the screen-space coordinates xy to obtain a differentiation result, and scaling the differentiation result according to the value of the target particle size to obtain the texture coordinate differentiation;
calculating the length of the texture coordinate differential corresponding vector to obtain a first length value and a second length value, wherein the first length value is larger than the second length value;
And combining the second length value and a preset LOD level number threshold value to determine the target LOD level.
5. The method of claim 4, wherein the target LOD level is determined using the following formula in combination with the second length value and a preset LOD level number threshold:
L = max(0, NLEVELS - 1 + log2(minorLength))

wherein L is the target LOD level, NLEVELS is the LOD level number threshold, and minorLength is the second length value.
6. The method of claim 4, wherein prior to sampling the slope probability distribution map with the slope value of the target slope and determining the target distribution probability for the pixel based on the sampling result, the method further comprises:
Constructing an elliptical region of the pixel point in a texture space according to the texture coordinates of the pixel point and the texture coordinate differentiation;
traversing the texels in the elliptic region, and carrying out rotation and scaling processing on the gradient of the pixel point aiming at each texel to obtain the target gradient of each texel.
7. The method of claim 6, wherein traversing texels in the elliptical region, for each texel, rotating and scaling the slope of the pixel point to obtain a target slope for each texel, comprises:
for each texel, acquiring a random rotation angle corresponding to the texel, and rotating the gradient of the pixel point according to the random rotation angle to obtain a rotation gradient;
and calculating a gradient scaling multiple by combining preset roughness and a roughness scaling multiple, and scaling the rotating gradient according to the gradient scaling multiple to generate a target gradient of each texel.
8. The method of claim 7, wherein the obtaining the random rotation angle corresponding to the texel and rotating the gradient of the pixel according to the random rotation angle to obtain a rotation gradient includes:
Converting a texel index of the texel into a consistency index according to the target LOD hierarchy, wherein the consistency index is used for indicating consistency of the random rotation angle corresponding to the texel on different LOD hierarchies;
Generating a random value based on the consistency index, and taking the random value as the random rotation angle;
And rotating the gradient of the pixel point according to the random rotation angle to obtain a rotation gradient.
9. The method of claim 7, wherein the slope scaling factor is calculated in combination with a predetermined roughness and roughness scaling factor according to the following formula:
wherein S_α is the gradient scaling multiple, α_dict is the preset roughness adopted when the gradient probability distribution map is generated, α is obtained from a roughness map corresponding to the model to be rendered, and α_x and α_y are respectively the roughness scaling multiples required by the rotation gradient in different decomposition directions.
10. The method of claim 6, wherein sampling the gradient probability distribution map using the gradient value of the target gradient and determining the target distribution probability of the pixel point according to the sampling result comprises:
sampling the gradient probability distribution map using the gradient value of the target gradient, and calculating, according to the sampling result, the joint distribution probability of the pixel point corresponding to the texel;
determining a Gaussian weight corresponding to the texel according to the distance between the texel and the texture coordinates of the pixel point;
and calculating the target distribution probability based on the joint distribution probability and the Gaussian weight corresponding to the texel.
11. The method of claim 10, wherein sampling the gradient probability distribution map using the gradient value of the target gradient and calculating the joint distribution probability of the pixel point corresponding to the texel according to the sampling result comprises:
performing orthogonal decomposition on the target gradient to obtain a first gradient value and a second gradient value;
sampling the gradient probability distribution map using the first gradient value to obtain a first distribution probability;
sampling the gradient probability distribution map using the second gradient value to obtain a second distribution probability;
and performing fusion calculation on the first distribution probability and the second distribution probability to generate the joint distribution probability.
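A minimal sketch of the decompose-and-sample flow of claim 11, assuming the distribution map is stored as a discrete 1D table of gradient values and probabilities. The nearest-bin lookup and the use of a plain product as the fusion step are placeholder assumptions; the patent's actual fusion formula (referenced in claim 12) is not reproduced in this text.

```python
import numpy as np

def sample_distribution(gradients, density, value):
    # Look up the stored probability at the nearest tabulated gradient value.
    idx = int(np.argmin(np.abs(gradients - value)))
    return float(density[idx])

def joint_distribution_probability(gradients, density, target_gradient):
    # Orthogonally decompose the target gradient into its two tangent-space
    # components, sample each against the 1D map, then fuse the results.
    # A plain product stands in for the patent's fusion formula.
    gx, gy = target_gradient
    p1 = sample_distribution(gradients, density, gx)
    p2 = sample_distribution(gradients, density, gy)
    return p1 * p2
```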
12. The method of claim 11, wherein the first distribution probability and the second distribution probability are fused using the following formula to generate the joint distribution probability:
wherein P_3 is the joint distribution probability, P_1 is the first distribution probability, P_2 is the second distribution probability, α_dict is the preset roughness adopted when generating the gradient probability distribution map, α is obtained from a roughness map corresponding to the model to be rendered, and α_x and α_y are the roughness scaling factors required by the target gradient in the respective decomposition directions.
13. The method of claim 11, wherein calculating the target distribution probability based on the joint distribution probability and the Gaussian weight corresponding to the texel comprises:
performing, based on the Gaussian weights, a weighted average of the joint distribution probabilities corresponding to the texels in the elliptical region to obtain the target distribution probability.
14. The method according to any one of claims 10 to 13, further comprising:
calculating the target distribution probability of the pixel point on an adjacent LOD level of the target LOD level;
interpolating the target distribution probability on the target LOD level, according to the target distribution probability on the adjacent LOD level and the target distribution probability on the target LOD level, using the following interpolation formula to obtain an interpolated target distribution probability:
P = P_L (1 - (L - [L])) + P_{L+1} (L - [L])
wherein P is the interpolated target distribution probability, L is the target LOD level, L+1 is the adjacent LOD level, [L] is L rounded down, P_L is the target distribution probability on the target LOD level, and P_{L+1} is the target distribution probability on the adjacent LOD level;
and updating the target distribution probability of the pixel point on the target LOD level with the interpolated target distribution probability.
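The interpolation formula of claim 14 is a straightforward linear blend between the probabilities at the two nearest LOD levels, driven by the fractional part of the LOD value:

```python
import math

def interpolate_lod_probability(p_l, p_l1, lod):
    # P = P_L * (1 - (L - [L])) + P_{L+1} * (L - [L]),
    # where [L] is L rounded down, so (L - [L]) is the fractional LOD.
    frac = lod - math.floor(lod)
    return p_l * (1.0 - frac) + p_l1 * frac
```

At an integer LOD the result equals P_L exactly, so glints transition smoothly rather than popping when the rendered level of detail changes.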
15. The method according to any one of claims 1 to 13, wherein before calculating the gradient of a pixel point in the model to be rendered according to the macroscopic normal vector, the observation vector and the illumination vector of the pixel point, the method further comprises:
generating gradient probability distribution maps corresponding to different LOD levels;
wherein generating the gradient probability distribution map for the N-th LOD level comprises:
generating a first Gaussian distribution function taking a preset roughness as a parameter;
obtaining 2^N numerical values using the first Gaussian distribution function;
generating 2^N second Gaussian distribution functions taking the numerical values as means, based on a preset target standard deviation;
fitting and superposing the 2^N second Gaussian distribution functions to generate a target Gaussian distribution function serving as a target gradient probability density;
taking the variable values of the target gradient probability density as gradient values, obtaining from the target gradient probability density the distribution probability corresponding to each gradient value, and storing the gradient values in correspondence with the distribution probabilities;
wherein N is an integer not less than 0.
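The map-generation procedure of claim 15 can be sketched as a two-stage Gaussian mixture: draw 2^N means from a broad Gaussian parameterised by the preset roughness, place a narrow Gaussian of the target standard deviation on each mean, and superpose. The bin count, gradient range, and default parameter values below are illustrative assumptions.

```python
import numpy as np

def gradient_distribution_map(n, alpha_dict=0.5, target_std=0.02,
                              num_bins=256, grad_range=2.0, seed=0):
    # 1) Draw 2^n means from the first Gaussian (zero mean, std alpha_dict).
    # 2) Place a second Gaussian (std target_std) on each mean.
    # 3) Superpose and normalise the mixture over a discrete gradient axis,
    #    giving the stored (gradient value, distribution probability) table.
    rng = np.random.default_rng(seed)
    means = rng.normal(0.0, alpha_dict, size=2 ** n)
    gradients = np.linspace(-grad_range, grad_range, num_bins)
    density = np.zeros_like(gradients)
    for m in means:
        density += np.exp(-0.5 * ((gradients - m) / target_std) ** 2)
    density /= np.trapz(density, gradients)  # normalise to a probability density
    return gradients, density
```

Lower levels (small N) contain few spikes and produce sparse, discrete glints; higher levels contain many spikes and converge toward the smooth roughness-α_dict distribution, which is what makes the LOD blending of claim 14 visually continuous.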
16. The method according to any one of claims 1 to 13, wherein performing illumination calculation on the model to be rendered in combination with the target distribution probability, the observation vector, the illumination vector and the half-angle vector, and performing flash effect rendering on the model to be rendered according to the obtained illumination result, comprises:
calculating a normal distribution probability of the pixel point based on the projection value of the micro-surface normal on the macroscopic normal vector and the target distribution probability, wherein the normal distribution probability represents the probability that the micro-surface is oriented in a second target direction;
calculating first illumination data using a Fresnel equation in combination with the half-angle vector, the observation vector and a preset base surface reflectivity, wherein the first illumination data represents the ratio of reflected light to total light;
calculating second illumination data using a geometric shadowing function according to the observation vector, the illumination vector and the half-angle vector, wherein the second illumination data represents the amount of light blocked by the surface;
calculating fused illumination data using a bidirectional reflectance distribution function in combination with the normal distribution probability, the first illumination data and the second illumination data;
and multiplying the fused illumination data by a preset incident light intensity to obtain the illumination result.
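Claim 16 assembles a Cook-Torrance-style specular term from the normal distribution probability D, a Fresnel term F, and a geometry term G. The sketch below uses the common Schlick and Smith approximations as stand-ins; the patent names a Fresnel equation and a geometric shadowing function but does not commit to these particular forms, and the constants `f0` and `k` are assumed defaults.

```python
import numpy as np

def fresnel_schlick(h, v, f0=0.04):
    # Schlick approximation of the Fresnel equation: ratio of reflected
    # to total light. f0 is the preset base surface reflectivity.
    return f0 + (1.0 - f0) * (1.0 - max(float(np.dot(h, v)), 0.0)) ** 5

def geometry_smith(n, v, l, k=0.2):
    # Smith shadowing-masking term built from two Schlick-GGX factors;
    # k is an assumed remapped-roughness constant.
    def g1(w):
        ndw = max(float(np.dot(n, w)), 0.0)
        return ndw / (ndw * (1.0 - k) + k)
    return g1(v) * g1(l)

def specular_lighting(n, v, l, d, f0=0.04, light_intensity=1.0):
    # Fuse the normal distribution probability d (from the gradient map),
    # the Fresnel term and the geometry term in a Cook-Torrance BRDF,
    # then multiply by the preset incident intensity.
    h = (v + l) / np.linalg.norm(v + l)  # half-angle vector
    f = fresnel_schlick(h, v, f0)
    g = geometry_smith(n, v, l)
    denom = 4.0 * max(float(np.dot(n, v)), 1e-4) * max(float(np.dot(n, l)), 1e-4)
    return light_intensity * d * f * g / denom
```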
17. A flash effect rendering apparatus in a game, comprising:
a gradient calculation module, configured to calculate the gradient of a pixel point according to a macroscopic normal vector, an observation vector and an illumination vector of the pixel point in the model to be rendered, wherein the gradient is the projection of a micro-surface normal in world space onto a two-dimensional tangent space, and the macroscopic normal vector, the observation vector and the illumination vector are vectors in world space;
a level determining module, configured to determine a target LOD level to which the pixel point belongs based on the texture coordinates of the pixel point and a target particle size required by the flash effect, wherein the target LOD level is configured with a gradient probability distribution map comprising distribution probabilities of different gradient values in different coordinate directions;
a sampling module, configured to sample the gradient probability distribution map using a target gradient and to determine a target distribution probability of the pixel point according to the sampling result, wherein the target distribution probability represents the probability that the projection direction is a first target direction, the target gradient is the gradient of the texel corresponding to the pixel point, and the target gradient is obtained based on the gradient of the pixel point;
and a rendering module, configured to perform illumination calculation on the model to be rendered in combination with the target distribution probability, the observation vector, the illumination vector and the half-angle vector, and to perform flash effect rendering on the model to be rendered according to the obtained illumination result.
18. A storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any of claims 1 to 16.
19. An electronic device, comprising:
A processor; and
A memory for storing executable instructions of the processor;
Wherein the processor is configured to perform the method of any one of claims 1 to 16 via execution of the executable instructions.
CN202210637233.7A 2022-06-07 2022-06-07 Flash effect rendering method and device in game, storage medium and electronic equipment Active CN115063517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210637233.7A CN115063517B (en) 2022-06-07 2022-06-07 Flash effect rendering method and device in game, storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN115063517A CN115063517A (en) 2022-09-16
CN115063517B true CN115063517B (en) 2024-09-20

Family

ID=83199664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210637233.7A Active CN115063517B (en) 2022-06-07 2022-06-07 Flash effect rendering method and device in game, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115063517B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116168131B (en) * 2022-12-09 2023-11-21 北京百度网讯科技有限公司 Cloth rendering method and device, electronic equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103077497A (en) * 2011-10-26 2013-05-01 中国移动通信集团公司 Method and device for zooming image in level-of-detail model
CN111311723A (en) * 2020-01-22 2020-06-19 腾讯科技(深圳)有限公司 Pixel point identification and illumination rendering method and device, electronic equipment and storage medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US6525740B1 (en) * 1999-03-18 2003-02-25 Evans & Sutherland Computer Corporation System and method for antialiasing bump texture and bump mapping
US9652882B2 (en) * 2014-04-05 2017-05-16 Sony Interactive Entertainment America Llc Gradient adjustment for texture mapping for multiple render targets with resolution that varies by screen location
CN111583381B (en) * 2020-05-06 2024-03-01 网易(杭州)网络有限公司 Game resource map rendering method and device and electronic equipment
CN112419466A (en) * 2020-11-20 2021-02-26 苏州幻塔网络科技有限公司 In-game object surface highlight rendering method, device, equipment and storage medium
CN112734896B (en) * 2021-01-08 2024-04-26 网易(杭州)网络有限公司 Environment shielding rendering method and device, storage medium and electronic equipment
CN113144611B (en) * 2021-03-16 2024-05-28 网易(杭州)网络有限公司 Scene rendering method and device, computer storage medium and electronic equipment

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN103077497A (en) * 2011-10-26 2013-05-01 中国移动通信集团公司 Method and device for zooming image in level-of-detail model
CN111311723A (en) * 2020-01-22 2020-06-19 腾讯科技(深圳)有限公司 Pixel point identification and illumination rendering method and device, electronic equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant