CN114445538A - Real-time rendering method and device of target object, electronic equipment and storage medium - Google Patents

Real-time rendering method and device of target object, electronic equipment and storage medium

Info

Publication number
CN114445538A
CN114445538A (application CN202111663207.3A)
Authority
CN
China
Prior art keywords
parameter
texture
parameters
target
weather
Prior art date
Legal status
Pending
Application number
CN202111663207.3A
Other languages
Chinese (zh)
Inventor
樊伟富
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202111663207.3A priority Critical patent/CN114445538A/en
Publication of CN114445538A publication Critical patent/CN114445538A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 15/04 Texture mapping
    • G06T 15/06 Ray-tracing
    • G06T 15/50 Lighting effects
    • G06T 15/55 Radiosity

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The application provides a real-time rendering method and device for a target object, an electronic device and a storage medium. The method includes the following steps: determining a plurality of pre-baking parameter sets according to a plurality of pre-acquired environmental parameters; baking the plurality of pre-baking parameter sets with a baking component to determine a plurality of texture maps; in response to matching, among the plurality of environmental parameters, a target environmental parameter corresponding to the current scene, determining an illumination result according to the texture map corresponding to the target environmental parameter; and rendering the target object in the current scene in real time according to the illumination result. A three-dimensional cloud lighting effect and weather transition effect that change with the day-night cycle are then rendered in real time from the illumination result, so that day-night light and shadow tracking and weather-driven cloud changes are obtained at minimal system-resource cost, with performance almost equivalent to a static cloud texture. Because no Raymarch technique is used, texture cache misses are avoided, as are the increased power consumption, device heating and unsustainable frame rate that they would cause.

Description

Real-time rendering method and device of target object, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for rendering a target object in real time, an electronic device, and a storage medium.
Background
In the related art, virtual scenes include realistic dynamic environmental effects such as day-night cycles and weather changes. Because Raymarch (also called ray marching) is required, real-time volumetric clouds usually cause very high texture cache misses on mobile devices, which in turn leads to high power consumption, device heating, unstable frame rates and high cost, and it cannot be guaranteed that the cloud rendering system adapts to changes of the day-night and weather systems while still rendering the volumetric clouds in real time.
Disclosure of Invention
In view of the above, an object of the present application is to provide a method and an apparatus for real-time rendering of a target object, an electronic device, and a storage medium.
Based on the above object, in a first aspect, the present application provides a method for real-time rendering of a target object, including:
determining a plurality of pre-baking parameter sets according to a plurality of pre-acquired environmental parameters;
baking the plurality of sets of pre-bake parameters with a bake assembly to determine a plurality of texture maps;
responding to a target environment parameter corresponding to the current scene matched in the plurality of environment parameters, and determining an illumination result according to a texture map corresponding to the target environment parameter;
and rendering the target object in the current scene in real time according to the illumination result.
In one possible implementation form of the method,
the environmental parameters comprise: a time parameter and a weather parameter;
the determining a plurality of pre-baking parameter sets according to a plurality of pre-acquired environmental parameters further includes:
acquiring a plurality of time parameters and a plurality of weather parameters;
mixing each time parameter and each weather parameter respectively to determine a plurality of texture sets;
acquiring an illumination equation;
and determining a pre-baking parameter set corresponding to each texture set according to the illumination equation to obtain a plurality of pre-baking parameter sets.
In one possible implementation form of the method,
the obtaining of the illumination equation further comprises:
acquiring the variation of the light energy;
integrating the light energy variation to determine the light intensity received by the vision camera;
determining a first scattering contribution of the direct light according to the scattering attenuation amount from the light to the target point, the phase function, the scattering coefficient and the scattering attenuation amount from the target point to the visual camera position;
determining a second scattering contribution of the ambient light according to the indirect ambient light intensity of the sky box and the scattering attenuation amount of the volume cloud to the target point;
an illumination equation is determined from the first and second scattering contributions.
In one possible implementation, the set of pre-baking parameters includes: scattering parameters and transmittance parameters;
determining, according to the illumination equation, a set of pre-baking parameters corresponding to each texture set to obtain a plurality of sets of pre-baking parameters, further including:
analyzing each texture set according to the illumination equation to determine the scattering parameter and the penetration parameter corresponding to each texture set;
and determining a plurality of pre-baking parameter sets according to the scattering parameter and the penetration parameter corresponding to each texture set.
In one possible implementation, the baking the plurality of sets of pre-baking parameters with the baking component to determine a plurality of texture maps further includes:
obtaining a scaling factor of the scattering parameter in each texture map;
and adjusting the scattering parameters according to the scaling factors to convert each texture map into a true color map.
In one possible implementation, the texture map includes: a plurality of storage channels;
the baking the plurality of sets of pre-baking parameters with a baking assembly to determine a plurality of texture maps, then further comprising:
grouping all the texture maps according to the weather parameters to determine a plurality of texture map groups; the weather parameters corresponding to each texture map in each texture map group are the same;
sequencing each texture map in each texture map group according to the time parameter in a time sequence;
for each texture map in each texture map group, storing the pre-baking parameter set corresponding to the texture map in any one of the storage channels, and storing the pre-baking parameter set corresponding to the texture map of the next time slot in the sequence in any one of the remaining storage channels.
In one possible implementation, the target environment parameter includes: a target weather parameter and a target time parameter;
the determining the illumination result according to the texture map corresponding to the target environment parameter further includes:
determining whether a weather parameter corresponding to the target weather parameter exists in all the weather parameters;
in response to a weather parameter corresponding to the target weather parameter being present, selecting all of the texture maps corresponding to the target weather parameter to determine a first set of candidate texture maps;
determining whether a time parameter corresponding to the target time parameter exists in the first set of candidate texture tiles;
and in response to the existence of the time parameter corresponding to the target time parameter, determining the illumination result according to the texture map corresponding to the target time parameter.
In a possible implementation manner, the determining whether there is a weather parameter corresponding to the target weather parameter in all the weather parameters further includes:
in response to the fact that no weather parameter corresponding to the target weather parameter exists, determining two weather parameters to be mixed in all the weather parameters according to the target weather parameter;
respectively determining a second candidate texture mapping group and a third candidate texture mapping group according to all the texture mapping corresponding to the two weather parameters to be mixed;
determining whether a time parameter corresponding to the target time parameter exists in the second and third candidate texture tile groups;
in response to there being a time parameter corresponding to the target time parameter, blending all texture maps corresponding to the target time parameter to determine the illumination result.
In one possible implementation, the determining whether the same temporal parameter as the target temporal parameter exists in the first candidate texture tile group further includes:
in response to the absence of the time parameter which is the same as the target time parameter, determining two time parameters corresponding to a time interval containing the target time parameter;
selecting texture maps corresponding to the two time parameters from the first candidate texture map set;
and determining the illumination result according to the texture map.
In one possible implementation, the determining whether the same temporal parameter as the target temporal parameter exists in the second candidate texture tile group and the third candidate texture tile group further includes:
in response to the absence of the time parameter which is the same as the target time parameter, determining two time parameters corresponding to a time interval containing the target time parameter;
selecting all texture maps corresponding to the two time parameters in the second candidate texture map group and the third candidate texture map group, respectively;
and determining the illumination result according to all the texture maps.
In a second aspect, the present application provides an apparatus for real-time rendering of a target object, comprising:
a first determining module configured to determine a plurality of pre-baking parameter sets according to a plurality of environmental parameters acquired in advance;
a second determination module configured to bake the plurality of sets of pre-bake parameters with a bake assembly to determine a plurality of texture maps;
a third determining module, configured to determine, in response to matching a target environment parameter corresponding to a current scene among the plurality of environment parameters, an illumination result according to a texture map corresponding to the target environment parameter;
and the rendering module is configured to render the target object in the current scene in real time according to the illumination result.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method for real-time rendering of target objects according to the first aspect when executing the program.
In a fourth aspect, the present application provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of real-time rendering of target objects according to the first aspect.
From the above description, according to the real-time rendering method and device for a target object, the electronic device and the storage medium provided by the present application, a plurality of pre-baking parameter sets are determined according to a plurality of pre-acquired environmental parameters, a volumetric cloud baking component is implemented in combination with a real-time volumetric cloud modeling algorithm, and the plurality of pre-baking parameter sets are baked with the baking component to determine a plurality of texture maps, so that cloud lighting data is pre-baked for various weather conditions and times of day and night. In response to matching, among the plurality of environmental parameters, the target environmental parameter corresponding to the current scene, the illumination result can be determined according to the texture maps corresponding to the target environmental parameter, and a three-dimensional cloud lighting effect and weather transition effect that change with the day-night cycle are then rendered in real time from the illumination result, so that day-night light and shadow tracking and weather-driven cloud changes are obtained at extremely low system-resource cost, with performance almost equivalent to a static cloud texture. Because no Raymarch technique is used, texture cache misses are avoided, as are the increased power consumption, device heating and unsustainable frame rate that they would cause.
Drawings
In order to more clearly illustrate the technical solutions of the present application or of the related art, the drawings needed for describing the embodiments or the related art are briefly introduced below. The drawings in the following description are only embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 illustrates an exemplary flowchart of a method for rendering a target object in real time according to an embodiment of the present application.
FIG. 2 shows a schematic view of light rays from a particular direction at a microscopic angle through a medium according to an embodiment of the present application.
Fig. 3 shows a schematic view of a camera viewing light through a cloud layer from a particular direction according to an embodiment of the application.
FIG. 4 shows a pixel deviation diagram of a texture map after compression according to an embodiment of the present application.
Fig. 5 shows a schematic diagram of a scene with volumetric clouds at 9 points in multiple clouds within a virtual scene according to an embodiment of the application.
Fig. 6 shows a schematic view of a scene with a volume cloud at 13 points on a sunny day within a virtual scene according to an embodiment of the application.
Fig. 7 illustrates an exemplary structural diagram of an apparatus for real-time rendering of a target object according to an embodiment of the present application.
Fig. 8 shows an exemplary structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings in combination with specific embodiments.
It should be noted that technical terms or scientific terms used in the embodiments of the present application should have a general meaning as understood by those having ordinary skill in the art to which the present application belongs, unless otherwise defined. The use of "first," "second," and similar terms in the embodiments of the present application do not denote any order, quantity, or importance, but rather the terms are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
As described in the background section, in the related art, virtual scenes include realistic dynamic environmental effects such as day-night cycles and weather changes. Because Raymarch (also called ray marching) is required, real-time volumetric clouds usually cause very high texture cache misses on mobile devices, which in turn leads to high power consumption, device heating, unstable frame rates and high cost, and it cannot be guaranteed that the cloud rendering system adapts to changes of the day-night and weather systems while still rendering the volumetric clouds in real time.
That is, this kind of target-object rendering is a common sky-rendering scheme on console/host platforms, and the importance of sky rendering to the scene-environment experience is self-evident. However, the performance of mobile devices is currently not sufficient to stably support rendering approaches such as Raymarch.
In the related art, taking a volumetric cloud as an example, the Raymarch method essentially obtains the T term (transmittance) and S term (scattering) by numerical integration through the varying cloud density. For a specific camera direction, the positions where the ray enters and exits the cloud layer can be determined, and that segment is divided into N unequal sampling intervals. For each interval, a noise texture is sampled at its center and the current density is computed in combination with the cloud parameters; assuming the density is constant within such a small interval, the T term and S term of the interval can be obtained by analytic integration. The T and S terms of the intervals along the camera direction are then accumulated step by step to obtain the final T and S terms, and thus the final illumination value.
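For reference, the following is a minimal sketch of this ray-marching accumulation, assuming a simple exponential extinction model; the density function, the segment count and the lighting inputs are illustrative assumptions, not the implementation used in the related art or in the present application.

```python
import math

def raymarch_cloud(entry_t, exit_t, density_at, sun_radiance, phase, mu_s, mu_t, n_steps=64):
    """Accumulate the transmittance (T term) and scattering (S term) along one camera ray.

    density_at(t): cloud density at parametric distance t along the ray (illustrative helper).
    mu_s / mu_t: scattering / extinction coefficients per unit density.
    Sunlight attenuation toward each sample is omitted here for brevity.
    """
    step = (exit_t - entry_t) / n_steps
    T = 1.0   # transmittance from the camera to the current segment
    S = 0.0   # accumulated in-scattered radiance
    t = entry_t + 0.5 * step
    for _ in range(n_steps):
        rho = density_at(t)
        extinction = mu_t * rho
        seg_T = math.exp(-extinction * step)   # density assumed constant inside the segment
        if extinction > 1e-6:
            # analytic integral of in-scattering over this segment
            seg_S = sun_radiance * phase * mu_s * rho * (1.0 - seg_T) / extinction
        else:
            seg_S = 0.0
        S += T * seg_S
        T *= seg_T
        t += step
    return T, S
```

Because this loop runs per pixel per frame and samples noise textures at every step, it is the source of the texture-cache pressure described above.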
In the course of research, the applicant found that cloud rendering has a very important influence on the overall environment. Considering the growth of mobile-platform GPU computing power, the applicant developed a volumetric cloud rendering system based on horizon modeling and applied various optimizations so that the volumetric clouds could be rendered in real time on mobile devices (such as an iPhone X) in about 1.5 ms. However, with the diversification of content in virtual scenes and the progress of adaptation to Android platforms, real-time volumetric cloud rendering faces more rendering pressure on mobile devices, so the time budget that the whole rendering pipeline reserves for cloud rendering must be greatly compressed. Under this premise, it must still be ensured that the optimized cloud rendering system can adapt to changes of the day-night and weather systems, which is an urgent problem to be solved.
Therefore, according to the real-time rendering method and device for a target object, the electronic device and the storage medium provided by the present application, a plurality of pre-baking parameter sets are determined according to a plurality of pre-acquired environmental parameters, a volumetric cloud baking component is implemented in combination with a real-time volumetric cloud modeling algorithm, and the plurality of pre-baking parameter sets are baked with the baking component to determine a plurality of texture maps, so that cloud lighting data is pre-baked for various weather conditions and times of day and night. In response to matching, among the plurality of environmental parameters, the target environmental parameter corresponding to the current scene, the illumination result can be determined according to the texture maps corresponding to the target environmental parameter, and a three-dimensional cloud lighting effect and weather transition effect that change with the day-night cycle are then rendered in real time from the illumination result, so that day-night light and shadow tracking and weather-driven cloud changes are obtained at extremely low system-resource cost, with performance almost equivalent to a static cloud texture. Because no Raymarch technique is used, texture cache misses are avoided, as are the increased power consumption, device heating and unsustainable frame rate that they would cause.
The following describes a real-time rendering method of a target object provided in the embodiments of the present application with specific embodiments.
Fig. 1 illustrates an exemplary flowchart of a method for rendering a target object in real time according to an embodiment of the present application.
Referring to fig. 1, a method for real-time rendering of a target object provided in an embodiment of the present application specifically includes the following steps:
s102: and determining a plurality of pre-baking parameter sets according to a plurality of environmental parameters acquired in advance.
S104: baking the plurality of sets of prebake parameters with a bake component to determine a plurality of texture maps.
S106: and responding to the target environment parameter matched with the current scene in the plurality of environment parameters, and determining an illumination result according to the texture map corresponding to the target environment parameter.
S108: and rendering the target object in the current scene in real time according to the illumination result.
In the following embodiments of the present application, for convenience of explanation, the target object is described by taking a volume cloud as an example, and it should be noted that the target object includes, but is not limited to, a volume cloud, a volume fog, and the like.
FIG. 2 shows a schematic view of light rays from a particular direction at a microscopic angle through a medium according to an embodiment of the present application.
In some embodiments, since the cloud is a participating medium, the interaction of light with the medium can first be divided into three roles: absorption, emission and scattering. Referring to fig. 2, from a microscopic point of view, the change in light energy can be obtained for a light ray in a specific direction passing through a small distance of the medium.
Before step S102, an illumination equation may be obtained based on volumetric cloud illumination theory, so that the pre-baking parameter sets can later be determined from it. Specifically, referring to fig. 2, the change in light energy over a small segment dx of the medium is obtained; it may be written as

dL(x, ω) = [ −(α + σ)·L(x, ω) + σ·L_dir(x, ω) + σ·∫_Ω p(ω, ω′)·L(x, ω′) dω′ ] dx

where α denotes the absorption coefficient, σ denotes the scattering coefficient, L(x, ω) represents the light energy before the dx segment of the medium, L_dir(x, ω) represents the scattering contribution of the direct-light portion in the direction ω, and the last term represents the scattering contribution of the light surrounding the dx segment into the direction ω, i.e. the phase-function-weighted integral of the light energy over the sphere.
Fig. 3 shows a schematic view of a camera viewing light through a cloud layer from a particular direction according to an embodiment of the application.
Referring to fig. 3, consider light that is scattered and captured by the camera along a specific viewing direction; that is, any light ray travelling along the direction from b to a is scattered at some point x between b and a and enters the camera. Here a is the position of the camera, b is the position on the surface of a solid object, and x is the position where the light is scattered.
The change in light energy is further integrated to determine the light intensity received by the camera; the expression is

L(a, ω) = T(b, a)·L_b + ∫_b^a T(x, a)·σ(x)·L_s(x, ω) dx,  with T(x, a) = e^(−τ(x, a)),

where L(a, ω) indicates the intensity of light received by the camera in a particular direction, a indicates the camera position, ω indicates the light direction, L_b denotes the intensity of light reflected or emitted by the object surface at point b, τ(b, a) denotes the integral of the sum of absorption and scattering, i.e. the optical thickness, and e^(−τ(x, a)) denotes the transmittance T(x, a). The variables a, b and x can be understood in the integral expressions either as absolute spatial positions or as one-dimensional coordinates along the ray direction.
This can be decomposed further to obtain

T(t_1, t_2) = e^( −∫_{t_1}^{t_2} (α(t) + σ(t)) dt ),

where t_1 and t_2 are only formal variables of the integral expression with no special meaning; the formula expresses the ratio of the energy after attenuation to the energy before attenuation when light has travelled from t_1 to t_2.
One can also obtain

L_s(x, ω) = ∫_Ω p(ω, ω′)·L(x, ω′) dΩ′,

where ω represents a direction and is a multidimensional variable, so its notation inside the integral differs from that of an ordinary one-dimensional variable. dΩ′ represents integration of the direction ω′ over the entire sphere, i.e. the formula expresses the sum of the scattering of light from all directions at x; L(x, ω′) indicates the light intensity arriving from direction ω′ at the point at distance x from the camera along ω; σ(x) is the scattering coefficient, i.e. the proportion of the light that is scattered; and the phase function p(ω, ω′) indicates how much of the light from the ω′ direction is scattered into the ω direction and captured by the camera.
Still further, the phase function may be used to calculate the single-scattering contribution of the direct light, i.e. the first scattering contribution; for example, a Henyey-Greenstein form may be used, denoted as

p(θ) = (1 − g²) / ( 4π·(1 + g² − 2g·cos θ)^(3/2) ),

where g represents the anisotropy coefficient, a larger g meaning a larger difference of the scattering distribution between directions, and θ represents the angle between the ω direction and the ω′ direction.
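For illustration, a minimal sketch of such a phase function follows, assuming the Henyey-Greenstein form named above; the parameter names are illustrative.

```python
import math

def hg_phase(cos_theta: float, g: float) -> float:
    """Henyey-Greenstein phase function.

    cos_theta: cosine of the angle between the view direction and the light direction.
    g: anisotropy coefficient in (-1, 1); larger |g| gives stronger forward/backward scattering.
    """
    g2 = g * g
    denom = (1.0 + g2 - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g2) / (4.0 * math.pi * denom)
```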
In the real-time volumetric cloud computation, in order to simplify the calculation, only the energy propagation of single scattering is computed. The scattering contribution of the direct light needs to take into account the scattering attenuation of the sunlight down to the target point x, the phase function, the scattering coefficient, and the scattering attenuation from the target point x to the camera position a. Meanwhile, the contribution of the sky light to the cloud layer can also be considered; to simplify the calculation, SH_0 (the zeroth-order sky-box baking coefficient) can be used as the average of the contributions from different directions, i.e. the scattering contribution of the ambient light, also called the second scattering contribution. Here the phase-function part of the sky light may be ignored, and the scattering attenuation from where the sky light enters the cloud layer down to the target point x may be approximated by a constant coefficient k.
Further, under the single-scattering assumption the in-scattering term σ(x)·L_s(x, ω) can be further expressed as

σ(x)·L_s(x, ω) ≈ σ(x)·[ p(θ)·T_sun(x)·L_sun + k·SH_0 ],

where T_sun(x) is the scattering attenuation of the sunlight down to the target point x, L_sun is the sunlight intensity, and k·SH_0 is the approximated sky-light (ambient) contribution.

Secondly, for the cloud medium it can ideally be assumed that the absorption coefficient and the scattering coefficient are linearly related to the medium density ρ(t), from which it can be deduced that

σ(x) = μ_σ·ρ(x)  and  α(x) = μ_α·ρ(x),

where ρ(t) represents the density distribution function of the medium, the argument t (or x) is a spatial coordinate point, and μ_σ and μ_α are constant coefficients. With this, the scattering contribution of SH_0 can be simplified to

L_amb = k·SH_0·μ_σ·∫_b^a T(x, a)·ρ(x) dx.

Still further, a change of integration variable may be used, letting t = τ(x, a), so that

∫_b^a T(x, a)·ρ(x) dx = (1/(μ_α + μ_σ))·∫_0^{τ(b, a)} e^(−t) dt = (1 − T(b, a)) / (μ_α + μ_σ).

Up to this point, the variables in the SH_0 scattering contribution relate only to the transmittance. The scattering contribution of the direct light can then be split and simplified in the same way, with the remaining integral part denoted S(a, b), where S(a, b) is called the scattering parameter, so that an illumination equation is determined from the scattering contribution of the direct light and the scattering contribution of the ambient light; it may be expressed as

L(a, ω) = T(b, a)·L_b + L_sun·p(θ)·μ_σ·S(a, b) + k·SH_0·(μ_σ/(μ_α + μ_σ))·(1 − T(b, a)).
Through this further simplification, it can be concluded that, once the T term (i.e. the transmittance parameter) and the S term (i.e. the scattering parameter) are known, the cloud illumination data for a specific camera direction can be computed very cheaply.
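The following is a minimal sketch of how the illumination result might be evaluated at runtime from the pre-baked T and S terms, following the simplified equation above and reusing the hg_phase helper from the earlier sketch; the parameter names and the exact combination of terms are assumptions for illustration, not the patent's literal formula.

```python
def shade_cloud(T, S, sun_radiance, cos_theta, g, sh0, mu_s, mu_a, k):
    """Evaluate the cloud radiance for one view direction from pre-baked terms.

    T: pre-baked transmittance term for this view direction.
    S: pre-baked scattering term for this view direction.
    cos_theta: cosine of the sun/view angle; the phase term is evaluated in real time
               so the cloud highlight always follows the current sun direction.
    sh0: zeroth-order sky-box coefficient; k approximates the ambient attenuation.
    """
    direct = sun_radiance * hg_phase(cos_theta, g) * mu_s * S
    ambient = k * sh0 * (mu_s / (mu_a + mu_s)) * (1.0 - T)
    # Whatever lies behind the cloud (sky or geometry) would additionally be
    # attenuated by T before being added to this result.
    return direct + ambient
```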
In some embodiments, because the computation is performed for a specific camera direction when rendering the volumetric cloud, the pre-baked cloud data needs to cover the whole upper hemisphere, with roughly the same definition in every direction after mapping; the conversion between the view direction (ViewDir) and UV coordinates can therefore be implemented in the Shader component.
FIG. 4 shows a schematic diagram of a spherical mapping code according to an embodiment of the present application.
Referring to fig. 4, and following the code shown there, the camera direction may be described by two angles, a horizontal (azimuth) angle and a zenith angle. For any point of the two-dimensional plane, the horizontal angle is given by the angle between the line from the point to the center and the x-axis, and the size of the zenith angle in radians is expressed by the distance from the point to the center, thereby converting the two-dimensional coordinates into a three-dimensional ray direction.
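For illustration, a minimal sketch of such a mapping follows, assuming the azimuth comes from the in-plane angle to the x-axis and the zenith angle is proportional to the distance from the texture center; the function names and exact scaling are assumptions, not the actual code of fig. 4.

```python
import math

def uv_to_direction(u: float, v: float):
    """Map a point of the unit square [0, 1]^2 to a view direction on the upper hemisphere."""
    x = u * 2.0 - 1.0                             # move the origin to the texture center
    y = v * 2.0 - 1.0
    azimuth = math.atan2(y, x)                    # horizontal angle: angle to the x-axis
    zenith = min(math.hypot(x, y), 1.0) * (math.pi / 2.0)  # radius encodes the zenith angle
    return (math.sin(zenith) * math.cos(azimuth),
            math.sin(zenith) * math.sin(azimuth),
            math.cos(zenith))                     # z points up

def direction_to_uv(dx: float, dy: float, dz: float):
    """Inverse mapping: a normalized view direction with dz >= 0 back to UV."""
    zenith = math.acos(max(min(dz, 1.0), -1.0))
    radius = zenith / (math.pi / 2.0)
    azimuth = math.atan2(dy, dx)
    return (radius * math.cos(azimuth) * 0.5 + 0.5,
            radius * math.sin(azimuth) * 0.5 + 0.5)
```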
For step S102, the environmental parameters may include time parameters and weather parameters. For example, M time parameters and N weather parameters are obtained from the weather-system and day-night-system data of the virtual scene, where the time parameter controls the directional light and thus the light propagation path, and the weather parameter mainly affects the modeled coverage of the volumetric cloud. After obtaining the plurality of time parameters and the plurality of weather parameters, each time parameter is combined (mixed) with each weather parameter, so that, for example, M × N different texture sets are obtained. Each texture set is analysed with the illumination equation obtained in the previous step to determine the scattering parameter and the transmittance parameter corresponding to that texture set; the scattering parameter and transmittance parameter of each texture set determine its pre-baking parameter set, and thus the pre-baking parameter sets of all texture sets are determined.
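For illustration, a minimal sketch of this enumeration follows; solve_illumination is a hypothetical helper standing in for the cloud modeling and illumination-equation evaluation of one time/weather combination.

```python
from itertools import product

def build_prebake_sets(time_params, weather_params, solve_illumination):
    """Combine every time parameter with every weather parameter (M x N sets)
    and derive the scattering (S) and transmittance (T) parameters for each.

    solve_illumination(time, weather) is an assumed callable that applies the
    illumination equation to the cloud field of that combination and returns
    (scattering, transmittance) data for the baked texels.
    """
    prebake_sets = []
    for t, w in product(time_params, weather_params):
        scattering, transmittance = solve_illumination(t, w)
        prebake_sets.append({
            "time": t,
            "weather": w,
            "scattering": scattering,
            "transmittance": transmittance,
        })
    return prebake_sets
```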
With respect to step S104, a cloud-texture baking process with parallel per-pixel operations on the GPU may be implemented using a ComputeShader. The baking material/ComputePass is submitted to the hardware for computation; after synchronization the texture data is read back to the CPU through the driver API and finally saved to a file in dds format. In this way the baking component bakes the pre-baking parameter sets of all texture sets to determine texture maps along the two dimensions of time parameter and weather parameter. For example, with 5 weather parameters and 10 time parameters, 50 texture maps are finally obtained.
In some embodiments, the present application avoids HDR maps (High Dynamic Range maps) because floating-point maps consume more bandwidth. To avoid the precision problem caused by the scattering parameter in the pre-baking parameter set being floating-point data, after the baking component bakes the pre-baking parameter set to determine the texture map, a scaling factor for the scattering parameter can be set for each texture map. During the actual baking an HDR map is used, the global distribution of the scattering parameter over all pixels of the read-back image is analysed statistically, the scaling factor of the scattering parameter is then adjusted automatically, and finally each texture map is converted into a 16-bit true color map (i.e. an R8G8 map). Each storage channel of the R8G8 map represents a value between 0 and 1 in 256 steps, i.e. a numerical precision of 1/256. If the data to be baked were distributed in a small interval there would be an obvious precision problem, so the distribution range of the data is measured and a coefficient is used to remap the data fully onto the range 0 to 1, reducing visual artifacts caused by limited precision. Specifically, the scaling factor may be pre-multiplied at baking time and divided out again at rendering time, which solves most precision problems.
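For illustration, a minimal sketch of this scale-then-quantize step follows, assuming the scale is simply taken from the per-map maximum; it is not the patent's exact statistics pass.

```python
import numpy as np

def quantize_with_scale(scattering: np.ndarray):
    """Pre-multiply a per-map scaling factor so the scattering data fills 0..1,
    then quantize to 8 bits (one channel of an R8G8 map).

    Returns the quantized bytes and the scale to divide back out at render time.
    """
    peak = float(scattering.max())
    scale = 1.0 / peak if peak > 0.0 else 1.0       # remap the data onto [0, 1]
    mapped = np.clip(scattering * scale, 0.0, 1.0)
    quantized = np.round(mapped * 255.0).astype(np.uint8)   # ~1/256-step precision
    return quantized, scale

def dequantize(quantized: np.ndarray, scale: float) -> np.ndarray:
    """Inverse step performed at render time: divide the scale back out."""
    return quantized.astype(np.float32) / 255.0 / scale
```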
To further reduce bandwidth consumption, in some embodiments, the texture map includes multiple memory channels, which may typically be 4 memory channels. In the baking system, there are changes of two dimensions of time parameter and weather parameter, which means that when the volume cloud in the scene actually needed to be rendered is between two pre-acquired weather parameters, that is, in transitional weather, and the time of the current scene is between two pre-acquired time parameters, four texture maps are needed to be used for sampling and mixing, so as to obtain the effect of the volume cloud in the current scene. For example, when the weather is a transition weather between a fine weather and a rainy weather, and the time parameter obtained in advance is only 9 points and 10 points, the time of the current scene is 9: 30 hours, acquiring a first texture map with a weather parameter of sunny days and a time parameter of 9 points; a second texture map with weather parameters of sunny days and time parameters of 10 points; a third texture mapping with weather parameters of rainy days and time parameters of 9 points; and a fourth texture map with the weather parameter of rainy days and the time parameter of 10 points. Sampling and mixing are carried out through the four texture maps, so that 9 required in the current scene is obtained: 30 volume cloud effect in sunny and rainy weather.
In order to reduce bandwidth consumption, all texture maps can be grouped according to the weather parameter, so that a plurality of texture map groups are determined, where the weather parameter corresponding to every texture map in a group is the same. That is, the maps are classified by weather parameter: all texture maps whose weather parameter is sunny form one group, all texture maps whose weather parameter is rainy form another group, and so on. Then the texture maps inside each group are ordered chronologically by their time parameter. Taking the sunny-day group as an example, the maps are ordered from 0 o'clock to 24 o'clock: the texture map with time parameter 0 is the first, the map with time parameter 1 is the second, and so on, until the map with time parameter 24 is the twenty-fifth.
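A minimal sketch of this grouping and time-ordering step follows, assuming each baked map carries its weather and time parameters; the data structure is illustrative.

```python
from collections import defaultdict

def group_and_sort_maps(texture_maps):
    """Group baked texture maps by weather parameter, then sort each group by time.

    texture_maps: iterable of dicts with at least 'weather' and 'time' keys.
    Returns {weather: [maps ordered from 0 o'clock to 24 o'clock]}.
    """
    groups = defaultdict(list)
    for tex in texture_maps:
        groups[tex["weather"]].append(tex)
    for weather in groups:
        groups[weather].sort(key=lambda tex: tex["time"])
    return dict(groups)
```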
For each texture map in each texture map group, the scattering parameter corresponding to the map is stored in any one of the 4 storage channels and the transmittance parameter in any one of the remaining 3 storage channels. Next, the scattering parameter and the transmittance parameter corresponding to the texture map of the next time slot are stored in the remaining two storage channels. For example, two storage channels of the sunny-day texture map at 0 o'clock store that map's own scattering and transmittance parameters, and the remaining two channels store the scattering and transmittance parameters of the sunny-day texture map at 1 o'clock. After the two time slots are combined in this way, the combined texture map is a 32-bit true color map, and the pre-baking parameter sets of two texture maps can be fetched from a single texture map, which saves bandwidth.
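For illustration, a minimal sketch of this channel packing follows; the specific channel assignment (R/G for the current slot, B/A for the next) is an assumption.

```python
import numpy as np

def pack_adjacent_times(maps_for_weather):
    """Pack each texture map together with the next time slot of the same weather
    into one 4-channel (32-bit true color) map.

    maps_for_weather: list of dicts with 2-D arrays 'scattering' and 'transmittance',
    already sorted by time parameter (see the grouping sketch above).
    """
    packed = []
    for i, current in enumerate(maps_for_weather):
        nxt = maps_for_weather[(i + 1) % len(maps_for_weather)]  # wrap 24h back to 0h
        packed.append(np.stack([current["scattering"],        # R: this slot's S term
                                current["transmittance"],     # G: this slot's T term
                                nxt["scattering"],             # B: next slot's S term
                                nxt["transmittance"]],         # A: next slot's T term
                               axis=-1))
    return packed
```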
FIG. 4 shows a pixel deviation diagram of a texture map after compression according to an embodiment of the present application.
Referring to fig. 4, an ASTC compression mode may be selected for the mobile terminal. Considering that there is no correlation between the channels of the texture, the mask compression mode of ASTC is used, that is, a mode that weights all storage channels equally. Tests show that an astcenc 4x4 mask configuration produces cloud textures of good quality, and the compressed texture is only 8 bits per pixel (bpp).
For step S106, the target environment parameters in the current scene may be obtained, and it is determined whether the environment parameters corresponding to the target environment parameters can be matched in the plurality of environment parameters determined in the preceding step, for example, the environment parameters identical to the target environment parameters are found in the plurality of environment parameters, and if yes, the illumination result is determined according to the texture map corresponding to the environment parameters.
In some embodiments, the target environmental parameters may include a target weather parameter and a target time parameter. If each texture mapping still only stores the scattering parameter and the penetration rate parameter corresponding to the texture mapping, whether the weather parameter with the same target weather parameter exists in all the weather parameters or not can be determined, and if the weather parameter with the same target weather parameter exists, all the texture mappings corresponding to the weather parameter are directly selected, so that a candidate texture mapping group is determined. And then determining whether the time parameter with the same target time parameter exists in the candidate texture mapping group, and if so, determining an illumination result directly according to the texture mapping corresponding to the time parameter. For example, the target weather parameter is sunny day, the target time parameter is 8 points, all texture maps with the weather parameters being sunny day are found, the texture maps with the time parameter of 8 points are found, after the texture maps are found, the texture maps with the 8 points on sunny day are determined, and the illumination result is directly determined according to the texture maps.
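For illustration, a minimal sketch of this exact-match lookup follows, reusing the grouped structure from the earlier sketch; it is purely illustrative.

```python
def find_exact_map(groups, target_weather, target_time):
    """Return the texture map whose weather and time both match exactly, or None."""
    candidates = groups.get(target_weather)
    if candidates is None:
        return None          # transitional weather: fall back to blending two weather groups
    for tex in candidates:
        if tex["time"] == target_time:
            return tex       # illumination result is determined directly from this map
    return None              # time between two baked slots: fall back to time blending
```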
Fig. 5 shows a schematic diagram of a scene with volumetric clouds at 9 points in multiple clouds within a virtual scene according to an embodiment of the application.
In some embodiments, referring to fig. 5, each texture map may store not only the scattering parameter and the transmittance parameter corresponding to that map, but also the scattering parameter and transmittance parameter of the map for the next time slot of the same weather parameter. In that case it is likewise determined whether a weather parameter identical to the target weather parameter exists among all weather parameters; if so, all texture maps corresponding to that weather parameter are selected directly, thereby determining a candidate texture map group. It is then determined whether a time parameter identical to the target time parameter exists in the candidate texture map group, and if so, the illumination result is determined directly from the texture map corresponding to that time parameter. For example, if the target weather parameter is cloudy and the target time parameter is 9 o'clock, all texture maps whose weather parameter is cloudy are found, the map whose time parameter is 9 o'clock is found among them, the cloudy 9 o'clock texture map is thus determined, and the illumination result is determined directly from it. The illumination result is obtained by substituting the pre-baking parameter set of the finally determined texture map into the illumination equation, and the volumetric cloud in the current scene is then rendered in real time according to the illumination result in step S108.
Fig. 6 shows a schematic view of a scene with a volume cloud at 13 points on a sunny day within a virtual scene according to an embodiment of the application.
Referring to fig. 6, if the target time parameter is 13 pm at this time, it can be seen that the volume cloud is also changed according to the change of the time parameter under the same weather parameter.
In some embodiments, if only the scattering parameter and the penetration parameter corresponding to the texture map are still stored in each texture map, if a weather parameter identical to the target weather parameter is not found, that is, the target weather parameter is transitional weather, two weather parameters to be mixed are determined in all the weather parameters according to the target weather parameter, and further, two candidate texture map groups a and B are respectively determined in all the texture maps corresponding to the two weather parameters with mixing. And determining whether a time parameter with the same target parameter exists in the candidate texture mapping groups A and B, if so, directly acquiring texture mapping corresponding to the time parameter in the candidate texture mapping groups A and B, and mixing the two texture mapping to determine an illumination result. For example, if the target weather parameter is transition weather in sunny days and rainy days, and the target time parameter is 8 points, texture maps with weather parameters in sunny days and rainy days are respectively found, then texture maps at 8 points are found in the texture maps in sunny days, texture maps at 8 points are found in the texture maps in rainy days, and the 8-point texture maps in sunny days and the 8-point texture maps in rainy days are mixed, so that the illumination result is determined. And the illumination result is determined by substituting the pre-baking parameter group for finally determining the texture mapping into the illumination equation, and then the volume cloud in the current scene is rendered in real time according to the illumination result in step S108.
It should be noted that, when at least two texture maps need to be mixed, each texture map may be given a different weight value, for example, when the cloud density in the transition weather is close to the cloud density in the sunny day, when 8-point texture maps in the sunny day and 8-point texture maps in the rainy day are mixed, the weight occupied by the 8-point texture maps in the sunny day is greater than that occupied by the 8-point texture maps in the rainy day, so that after the two maps are mixed, the mixed result is closer to the transition weather in which the cloud density is close to the sunny day.
In some embodiments, each texture map may store not only the scattering parameter and the transmittance parameter corresponding to that map, but also the scattering parameter and transmittance parameter of the map for the next time slot of the same weather parameter. Similarly, if no weather parameter identical to the target weather parameter is found, i.e. the target weather parameter is a transitional weather, two weather parameters to be mixed are determined among all weather parameters according to the target weather parameter, and two candidate texture map groups A and B are determined from all texture maps corresponding to those two weather parameters. It is then determined whether a time parameter identical to the target time parameter exists in the candidate texture map groups A and B; if so, the texture maps corresponding to that time parameter are obtained directly from groups A and B and the two texture maps are blended to determine the illumination result. For example, if the target weather parameter is a transition between sunny and rainy weather and the target time parameter is 8 o'clock, texture maps with the weather parameters sunny and rainy are found respectively, the 8 o'clock map is found among the sunny maps and the 8 o'clock map among the rainy maps, and the sunny 8 o'clock map and the rainy 8 o'clock map are blended to determine the illumination result. The illumination result is obtained by substituting the pre-baking parameter sets of the finally determined texture maps into the illumination equation, and the volumetric cloud in the current scene is then rendered in real time according to the illumination result in step S108.
In some embodiments, if each texture map still only stores the scattering parameter and the penetration parameter corresponding to the texture map, after the weather parameter same as the target weather parameter can be found, the time parameter same as the target time parameter does not exist, two time parameters closest to the target time parameter need to be found, the texture maps corresponding to the two time parameters are obtained, and the two texture maps are mixed to determine the illumination result. For example, the target weather parameter is a clear day, and after all texture maps with weather parameters of the clear day are found, the target time parameter is 8: and 30, two texture maps with time parameters of 8 points and 9 points need to be found in the texture map of the sunny day, the 8-point texture map of the sunny day and the 9-point texture map of the sunny day are mixed, and the illumination result is determined. And the illumination result is determined by substituting the pre-baking parameter group for finally determining the texture mapping into the illumination equation, and then the volume cloud in the current scene is rendered in real time according to the illumination result in step S108.
It will be appreciated that when at least two texture maps need to be blended, each texture map may be given a different weight value, for example, when the target time parameter is 8: and when the time is 30 hours, mixing the 8-point texture mapping on the sunny day and the 9-point texture mapping on the sunny day by 50 percent respectively. If the target time parameter is 8: 10, the weight of the 8-point texture map in sunny day should be 5/6, and the weight of the 9-point texture map in sunny day should be 1/6, and then the two are mixed.
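For illustration, a minimal sketch of these time-interpolation weights follows; the helper names are assumptions, and the maps may be plain numbers or numpy arrays of pre-baked parameters.

```python
def time_blend_weights(target_time: float, t0: float, t1: float):
    """Linear weights for two baked time slots t0 < t1 bracketing target_time.

    E.g. a target of 8.5 between 8 and 9 gives (0.5, 0.5);
    a target of 8 + 10/60 gives (5/6, 1/6), matching the example above.
    """
    f = (target_time - t0) / (t1 - t0)
    return 1.0 - f, f

def blend_maps(map_t0, map_t1, target_time, t0, t1):
    """Blend two texture maps of pre-baked parameters over time."""
    w0, w1 = time_blend_weights(target_time, t0, t1)
    return w0 * map_t0 + w1 * map_t1
```

The same weighting idea applies over the weather dimension in the transitional-weather cases described above, giving a bilinear blend over up to four maps.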
In some embodiments, if each texture map stores not only the scattering parameter and the penetration parameter corresponding to the texture map, but also the scattering parameter and the penetration parameter corresponding to the texture map at the next moment of the same weather parameter. After the weather parameter identical to the target weather parameter can be found, if the time parameter identical to the target time parameter does not exist, two time parameters corresponding to a time interval containing the target time parameter can be determined, texture maps corresponding to the two time parameters are selected from a candidate texture map group corresponding to the weather parameter identical to the target weather parameter, and an illumination result is determined according to the texture maps. For example, the target weather parameter is a clear day, and after all texture maps with weather parameters of the clear day are found, the target time parameter is 8: 30, it is necessary to find a texture map storing the set of pre-baking parameters of 8 points on a sunny day and a texture map storing the set of pre-baking parameters of 9 points on a sunny day at the same time, and the illumination result can be determined only according to the texture map. And the illumination result is determined by substituting the pre-baking parameter group for finally determining the texture mapping into the illumination equation, and then the volume cloud in the current scene is rendered in real time according to the illumination result in step S108.
In some embodiments, if each texture map still only stores the scattering parameter and the penetration parameter corresponding to the texture map, neither the weather parameter same as the target weather parameter nor the time parameter same as the target time parameter is found, all the texture maps corresponding to two weather parameters that can obtain the target weather parameter after mixing need to be found, all the texture maps corresponding to two time parameters that can obtain the target time parameter after mixing are found in the texture maps, and then the texture maps are mixed to determine the illumination result. For example, the target weather parameter is a transition weather of a sunny day and a rainy day, and the target time parameter is 8: and 30, finding 8-point texture maps in sunny days, 9-point texture maps in sunny days, 8-point texture maps in rainy days and 9-point texture maps in rainy days, and mixing the four texture maps according to the set weight so as to determine the illumination result. And the illumination result is determined by substituting the pre-baking parameter group for finally determining the texture mapping into the illumination equation, and then the volume cloud in the current scene is rendered in real time according to the illumination result in step S108.
In some embodiments, if each texture map stores not only the scattering parameter and the penetration parameter corresponding to the texture map, but also the scattering parameter and the penetration parameter corresponding to the texture map at the next moment of the same weather parameter. And under the condition that the weather parameters same as the target weather parameters do not exist, determining two weather parameters to be mixed in all the weather parameters according to the target weather parameters, and respectively determining two candidate texture mapping groups A and B in all the texture mapping corresponding to the two weather parameters to be mixed. And under the condition that the candidate texture map groups A and B do not have the time parameter same as the target time parameter, two time parameters corresponding to the time interval containing the target time parameter can be determined, then all the texture maps corresponding to the two time parameters are selected from the two candidate texture map groups A and B respectively, and the illumination result is determined according to all the texture maps. For example, the target weather parameter is a transition weather of a sunny day and a rainy day, and the target time parameter is 8: and 30, finding two weather parameters which can be mixed to obtain transitional weather, namely sunny weather and rainy weather, determining the texture maps of all the moments in sunny weather as a candidate texture map group A, and determining the texture maps of all the moments in rainy weather as a candidate texture map group B. And further, respectively finding a texture map which simultaneously stores the pre-baking parameter group of 8 points on the sunny day and a texture map which stores the pre-baking parameter group of 9 points on the sunny day from the candidate texture map groups A and B to find two texture maps, and mixing the two texture maps according to different set weights so as to determine the illumination result. And the illumination result is determined by substituting the pre-baking parameter group for finally determining the texture mapping into the illumination equation, and then the volume cloud in the current scene is rendered in real time according to the illumination result in step S108.
It should be noted that, in step S108, during real-time rendering the phase value calculated from the current sun direction may be applied to the scattering of all cloud layers. Compared with computing the phase function at baking time and letting it participate in the blending, this avoids the visual problem of aliasing in the blended highlight portion of the clouds. The sun direction differs between the illumination data of different baking time slots, and the angle in the phase function is the angle between the sunlight and the view direction; the direct consequence of the phase function is that the scattering highlight of the cloud is very noticeable when looking toward the sun. Blending the highlight portions of two maps would therefore produce two aliased highlights, so in the illumination equation obtained by the present application this part of the calculation is moved to the real-time stage, and the cloud highlight always follows the sun direction.
From the above description, according to the real-time rendering method and device for a target object, the electronic device and the storage medium provided by the present application, a plurality of pre-baking parameter sets are determined according to a plurality of pre-acquired environmental parameters, a volumetric cloud baking component is implemented in combination with a real-time volumetric cloud modeling algorithm, and the plurality of pre-baking parameter sets are baked with the baking component to determine a plurality of texture maps, so that cloud lighting data is pre-baked for various weather conditions and times of day and night. In response to matching, among the plurality of environmental parameters, the target environmental parameter corresponding to the current scene, the illumination result can be determined according to the texture map corresponding to the target environmental parameter, and a three-dimensional cloud lighting effect and weather transition effect that change with the day-night cycle are then rendered in real time from the illumination result, so that day-night light and shadow tracking and weather-driven cloud changes are obtained at extremely low system-resource cost, with performance almost equivalent to a static cloud texture. Because no Raymarch technique is used, texture cache misses are avoided, as are the increased power consumption, device heating and unsustainable frame rate that they would cause.
It should be noted that the method of the embodiment of the present application may be executed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene and completed by the mutual cooperation of a plurality of devices. In such a distributed scenario, one of the multiple devices may only perform one or more steps of the method of the embodiment, and the multiple devices interact with each other to complete the method.
It should be noted that the above describes some embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Fig. 7 illustrates an exemplary structural diagram of an apparatus for real-time rendering of a target object according to an embodiment of the present application.
Based on the same inventive concept, corresponding to the method of any embodiment, the application also provides a real-time rendering device of the target object.
Referring to fig. 7, the apparatus for real-time rendering of the target object includes: a first determining module, a second determining module, a third determining module and a rendering module; wherein:
a first determining module configured to determine a plurality of pre-baking parameter sets according to a plurality of environmental parameters acquired in advance;
a second determination module configured to bake the plurality of sets of pre-bake parameters with a bake assembly to determine a plurality of texture maps;
a third determining module, configured to determine, in response to matching a target environment parameter corresponding to a current scene among the plurality of environment parameters, an illumination result according to a texture map corresponding to the target environment parameter;
and the rendering module is configured to render the target object in the current scene in real time according to the illumination result.
In one possible implementation, the environment parameter includes: a time parameter and a weather parameter;
the first determination module is further configured to:
acquiring a plurality of time parameters and a plurality of weather parameters;
mixing each time parameter and each weather parameter respectively to determine a plurality of texture sets;
acquiring an illumination equation;
and determining a pre-baking parameter set corresponding to each texture set according to the illumination equation to obtain a plurality of pre-baking parameter sets, as illustrated by the sketch below.
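A minimal sketch of the enumeration performed by the first determining module, assuming each time parameter is combined with each weather parameter to yield one texture set per combination; the example parameter values and names are illustrative assumptions.

```python
from itertools import product

# Example parameters; the application does not fix particular values.
time_params = ["06:00", "08:00", "12:00", "18:00", "22:00"]
weather_params = ["sunny", "cloudy", "rainy"]

# One texture set (one bake target) per (time parameter, weather parameter) pair.
texture_sets = [{"time": t, "weather": w} for t, w in product(time_params, weather_params)]

# Each texture set is later evaluated against the illumination equation by the
# baking component to obtain its pre-baking parameter set (not shown here).
```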
In one possible implementation manner, the apparatus further includes: an illumination equation determination module;
the illumination equation determination module is further configured to:
acquiring the variation of the light energy;
integrating the light energy variation to determine the light intensity received by the vision camera;
determining a first scattering contribution of the direct light according to the scattering attenuation amount from the sunlight to the target point, the phase function, the scattering coefficient and the scattering attenuation amount from the target point to the visual camera position;
determining a second scattering contribution of the ambient light according to the indirect ambient light intensity of the sky box and the scattering attenuation amount of the volume cloud to the target point;
and determining the illumination equation according to the first scattering contribution and the second scattering contribution, as illustrated by the sketch below.
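The sketch below assembles the two scattering contributions described above under a single-scattering assumption. The Beer-Lambert exponential form of the attenuation, the constant sky-box ambient term and all names are assumptions for illustration rather than the application's exact equation.

```python
import math

def transmittance(optical_depth: float) -> float:
    """Scattering attenuation along a path, modeled here as Beer-Lambert extinction."""
    return math.exp(-optical_depth)

def direct_contribution(depth_sun_to_point: float, phase: float,
                        scatter_coeff: float, depth_point_to_camera: float) -> float:
    """First scattering contribution: sunlight -> target point -> camera."""
    return (transmittance(depth_sun_to_point) * phase * scatter_coeff
            * transmittance(depth_point_to_camera))

def ambient_contribution(sky_ambient: float, depth_cloud_to_point: float) -> float:
    """Second scattering contribution: indirect sky-box light reaching the target point."""
    return sky_ambient * transmittance(depth_cloud_to_point)

def illumination(depth_sun: float, phase: float, scatter_coeff: float,
                 depth_cam: float, sky_ambient: float, depth_cloud: float) -> float:
    """Illumination received by the camera from one target point inside the cloud."""
    return (direct_contribution(depth_sun, phase, scatter_coeff, depth_cam)
            + ambient_contribution(sky_ambient, depth_cloud))
```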
In one possible implementation, the set of pre-baking parameters includes: a scattering parameter and a transmittance parameter;
the device, still include: a fourth determination module;
the fourth determination module is further configured to:
analyzing each texture set according to the illumination equation to determine the scattering parameter and the transmittance parameter corresponding to each texture set;
and determining a plurality of pre-baking parameter sets according to the scattering parameter and the transmittance parameter corresponding to each texture set.
In one possible implementation manner, the apparatus further includes: a conversion module;
the conversion module is further configured to:
obtaining a scaling factor of the scattering parameter in each texture map;
and adjusting the scattering parameters according to the scaling factors, so that each texture map is converted into a true color map, as illustrated by the sketch below.
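A minimal sketch of the conversion described above, assuming the scattering parameters are high-dynamic-range values that are compressed into the 0-255 range of a true color map and that the per-map scaling factor is kept so the values can be restored at run time; the encoding scheme and names are illustrative assumptions.

```python
def encode_true_color(scatter_values):
    """Scale scattering parameters into the 0-255 range of a true color map."""
    scale = max(scatter_values) or 1.0          # per-map scaling factor
    encoded = [round(255 * v / scale) for v in scatter_values]
    return encoded, scale

def decode_true_color(encoded, scale):
    """Restore the scattering parameters from the true color map at run time."""
    return [scale * e / 255.0 for e in encoded]

pixels, scale = encode_true_color([0.02, 1.7, 3.4])
restored = decode_true_color(pixels, scale)     # approximately the original values
```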
In one possible implementation, the texture map includes: a plurality of storage channels;
the device, still include: an optimization module;
the optimization module is further configured to:
grouping all the texture maps according to the weather parameters to determine a plurality of texture map groups; wherein the weather parameters corresponding to each texture map in each texture map group are the same;
sequencing each texture map in each texture map group according to the time parameter in a time sequence;
for each texture map in each texture map group,
storing the pre-baking parameter set corresponding to the texture map in any one of the storage channels,
and storing the pre-baking parameter set corresponding to the next texture map in the time sequence in any one of the remaining storage channels, as illustrated by the sketch below.
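A minimal sketch of the channel packing described above, assuming an RGBA texture whose R and G channels hold the current moment's parameters and whose B and A channels hold the next moment's parameters; the channel assignment and names are illustrative assumptions.

```python
def pack_group(per_time_params):
    """per_time_params: (scattering, transmittance) pairs sorted by time parameter."""
    packed = []
    for i, (scatter, transmit) in enumerate(per_time_params):
        nxt = per_time_params[(i + 1) % len(per_time_params)]  # wrap around the day
        packed.append({
            "R": scatter, "G": transmit,   # this moment's pre-baking parameters
            "B": nxt[0],  "A": nxt[1],     # next moment in the time sequence
        })
    return packed

# Example: three bakes of the same weather group, e.g. 8:00, 9:00 and 10:00.
sunny_group = [(0.8, 0.3), (0.7, 0.4), (0.5, 0.6)]
texture_channels = pack_group(sunny_group)
```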
In one possible implementation, the target environment parameter includes: a target weather parameter and a target time parameter;
the third determination module is further configured to:
determining whether a weather parameter corresponding to the target weather parameter exists in all the weather parameters;
in response to a weather parameter corresponding to the target weather parameter being present, selecting all of the texture maps corresponding to the weather parameter to determine a first candidate texture map group;
determining whether a time parameter corresponding to the target time parameter exists in the first candidate texture map group;
and in response to the existence of the time parameter corresponding to the target time parameter, determining the illumination result according to the texture map corresponding to the time parameter.
In one possible implementation, the third determining module is further configured to:
in response to the fact that no weather parameter corresponding to the target weather parameter exists, determining two weather parameters to be mixed in all the weather parameters according to the target weather parameter;
respectively determining a second candidate texture map group and a third candidate texture map group according to all the texture maps corresponding to the two weather parameters to be mixed;
determining whether a time parameter corresponding to the target time parameter exists in the second candidate texture map group and the third candidate texture map group;
in response to there being a time parameter corresponding to the target time parameter, blending all texture maps corresponding to the time parameter to determine the illumination result.
In one possible implementation, the third determining module is further configured to:
in response to the absence of the time parameter corresponding to the target time parameter, determining two time parameters corresponding to a time interval containing the target time parameter;
selecting the texture maps corresponding to the two time parameters from the first candidate texture map group;
and determining the illumination result according to the texture map.
In one possible implementation, the third determining module is further configured to:
in response to the absence of the time parameter corresponding to the target time parameter, determining two time parameters corresponding to a time interval containing the target time parameter;
selecting all texture maps corresponding to the two time parameters in the second candidate texture map group and the third candidate texture map group, respectively;
and determining the illumination result according to all the texture maps.
For convenience of description, the above apparatus is described as being divided into various modules by function, which are described separately. Of course, when the present application is implemented, the functionality of the various modules may be implemented in one or more pieces of software and/or hardware.
The apparatus of the foregoing embodiment is used to implement the real-time rendering method for the corresponding target object in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Fig. 8 shows an exemplary structural diagram of an electronic device provided in an embodiment of the present application.
Based on the same inventive concept, corresponding to the method of any embodiment described above, the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the program, the processor implements the method for rendering the target object in real time according to any embodiment described above. Fig. 8 is a schematic diagram illustrating a more specific hardware structure of an electronic device according to this embodiment, where the electronic device may include: a processor 810, a memory 820, an input/output interface 830, a communication interface 840, and a bus 850. Wherein processor 810, memory 820, input/output interface 830, and communication interface 840 are communicatively coupled to each other within the device via bus 850.
The processor 810 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present specification.
The Memory 820 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 820 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 820 and called to be executed by the processor 810.
The input/output interface 830 is used for connecting an input/output module to input and output information. The input/output module may be configured as a component within the device (not shown) or may be external to the device to provide corresponding functionality. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 840 is used for connecting a communication module (not shown in the figure) to realize communication interaction between the device and other devices. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 850 includes a pathway for communicating information between various components of the device, such as processor 810, memory 820, input/output interface 830, and communication interface 840.
It should be noted that although the above-mentioned device only shows the processor 810, the memory 820, the input/output interface 830, the communication interface 840 and the bus 850, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
The electronic device of the foregoing embodiment is used to implement the real-time rendering method of the corresponding target object in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same inventive concept, corresponding to any of the above-described embodiment methods, the present application also provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the real-time rendering method of a target object according to any of the above embodiments.
Computer-readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.
The computer instructions stored in the storage medium of the foregoing embodiment are used to enable the computer to execute the real-time rendering method of the target object according to any of the foregoing embodiments, and have the beneficial effects of corresponding method embodiments, which are not described herein again.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples; within the context of the present application, features from the above embodiments or from different embodiments may also be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the present application as described above, which are not provided in detail for the sake of brevity.
In addition, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown in the provided figures, for simplicity of illustration and discussion and so as not to obscure the embodiments of the application. Furthermore, devices may be shown in block diagram form in order to avoid obscuring embodiments of the application, and this also takes into account the fact that specifics with respect to the implementation of such block diagram devices are highly dependent upon the platform within which the embodiments of the application are to be implemented (i.e., such specifics should be well within the purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that the embodiments of the application can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
The present embodiments are intended to embrace all such alternatives, modifications and variations that fall within the broad scope of the appended claims. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present application are intended to be included within the scope of the present application.

Claims (13)

1. A method for real-time rendering of a target object, comprising:
determining a plurality of pre-baking parameter sets according to a plurality of pre-acquired environmental parameters;
baking the plurality of sets of pre-bake parameters with a bake assembly to determine a plurality of texture maps;
responding to a target environment parameter corresponding to the current scene matched in the plurality of environment parameters, and determining an illumination result according to a texture map corresponding to the target environment parameter;
and rendering the target object in the current scene in real time according to the illumination result.
2. The method of claim 1, wherein the environmental parameters comprise: a time parameter and a weather parameter;
the determining a plurality of pre-baking parameter sets according to a plurality of pre-acquired environmental parameters further includes:
acquiring a plurality of time parameters and a plurality of weather parameters;
mixing each time parameter and each weather parameter respectively to determine a plurality of texture sets;
acquiring an illumination equation;
and determining a pre-baking parameter set corresponding to each texture set according to the illumination equation to obtain a plurality of pre-baking parameter sets.
3. The method of claim 2, wherein the obtaining the illumination equation further comprises:
acquiring the variation of the light energy;
integrating the light energy variation to determine the light intensity received by the vision camera;
determining a first scattering contribution of the direct light according to the scattering attenuation amount from the sunlight to the target point, the phase function, the scattering coefficient and the scattering attenuation amount from the target point to the visual camera position;
determining a second scattering contribution of the ambient light according to the indirect ambient light intensity of the sky box and the scattering attenuation amount of the volume cloud to the target point;
an illumination equation is determined from the first and second scattering contributions.
4. The method of claim 2, wherein the pre-baking parameter set comprises: a scattering parameter and a transmittance parameter;
determining, according to the illumination equation, a pre-baking parameter set corresponding to each texture set to obtain a plurality of pre-baking parameter sets, further including:
analyzing each texture set according to the illumination equation to determine the scattering parameter and the transmittance parameter corresponding to each texture set;
and determining a plurality of pre-baking parameter sets according to the scattering parameter and the transmittance parameter corresponding to each texture set.
5. The method of claim 4, wherein said baking the plurality of sets of pre-bake parameters with a bake assembly to determine a plurality of texture maps, further comprises:
obtaining a scaling factor of the scattering parameter in each texture map;
and adjusting the scattering parameters according to the scaling factors to convert each texture map into a true color map.
6. The method of claim 3, wherein the texture map comprises: a plurality of storage channels;
the baking the plurality of sets of pre-baking parameters with a baking assembly to determine a plurality of texture maps, then further comprising:
grouping all the texture maps according to the weather parameters to determine a plurality of texture map groups; wherein the weather parameters corresponding to each texture map in each texture map group are the same;
sequencing each texture map in each texture map group according to the time parameter in a time sequence;
for each texture map in each texture map group,
storing the pre-baking parameter set corresponding to the texture map in any one of the storage channels,
and storing the pre-baking parameter set corresponding to the next texture map in the time sequence in any one of the remaining storage channels.
7. The method of claim 6, wherein the target environmental parameter comprises: a target weather parameter and a target time parameter;
the determining the illumination result according to the texture map corresponding to the target environment parameter further includes:
determining whether a weather parameter corresponding to the target weather parameter exists in all the weather parameters;
in response to a weather parameter corresponding to the target weather parameter being present, selecting all of the texture maps corresponding to the weather parameter to determine a first candidate texture map group;
determining whether a time parameter corresponding to the target time parameter exists in the first candidate texture map group;
and in response to the existence of the time parameter corresponding to the target time parameter, determining the illumination result according to the texture map corresponding to the target time parameter.
8. The method of claim 7, wherein determining whether a weather parameter corresponding to the target weather parameter exists among all of the weather parameters further comprises:
in response to the fact that no weather parameter corresponding to the target weather parameter exists, determining two weather parameters to be mixed in all the weather parameters according to the target weather parameter;
respectively determining a second candidate texture map group and a third candidate texture map group according to all the texture maps corresponding to the two weather parameters to be mixed;
determining whether a time parameter corresponding to the target time parameter exists in the second candidate texture map group and the third candidate texture map group;
in response to there being a time parameter corresponding to the target time parameter, blending all texture maps corresponding to the target time parameter to determine the illumination result.
9. The method of claim 7, wherein the determining whether a time parameter corresponding to the target time parameter exists in the first candidate texture map group further comprises:
in response to the absence of a time parameter corresponding to the target time parameter, determining two time parameters corresponding to a time interval containing the target time parameter;
selecting the texture maps corresponding to the two time parameters from the first candidate texture map group;
and determining the illumination result according to the texture map.
10. The method of claim 8, wherein the determining whether a time parameter corresponding to the target time parameter exists in the second candidate texture map group and the third candidate texture map group further comprises:
in response to the absence of a time parameter corresponding to the target time parameter, determining two time parameters corresponding to a time interval containing the target time parameter;
selecting all texture maps corresponding to the two time parameters in the second candidate texture map group and the third candidate texture map group, respectively;
and determining the illumination result according to all the texture maps.
11. An apparatus for real-time rendering of a target object, comprising:
a first determining module configured to determine a plurality of pre-baking parameter sets according to a plurality of environmental parameters acquired in advance;
a second determination module configured to bake the plurality of sets of pre-bake parameters with a bake assembly to determine a plurality of texture maps;
a third determining module, configured to determine, in response to matching a target environment parameter corresponding to a current scene among the plurality of environment parameters, an illumination result according to a texture map corresponding to the target environment parameter;
and the rendering module is configured to render the target object in the current scene in real time according to the illumination result.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 10 when executing the program.
13. A non-transitory computer readable storage medium storing computer instructions for causing a computer to implement the method of any one of claims 1 to 10.
CN202111663207.3A 2021-12-31 2021-12-31 Real-time rendering method and device of target object, electronic equipment and storage medium Pending CN114445538A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111663207.3A CN114445538A (en) 2021-12-31 2021-12-31 Real-time rendering method and device of target object, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114445538A true CN114445538A (en) 2022-05-06

Family

ID=81365797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111663207.3A Pending CN114445538A (en) 2021-12-31 2021-12-31 Real-time rendering method and device of target object, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114445538A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination