CN114419240B - Illumination rendering method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN114419240B
Authority
CN
China
Prior art keywords
illumination
area
information
target
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210337007.7A
Other languages
Chinese (zh)
Other versions
CN114419240A (en)
Inventor
曹舜
魏楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210337007.7A priority Critical patent/CN114419240B/en
Publication of CN114419240A publication Critical patent/CN114419240A/en
Application granted granted Critical
Publication of CN114419240B publication Critical patent/CN114419240B/en
Priority to PCT/CN2023/075162 priority patent/WO2023185262A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The application relates to an illumination rendering method, an illumination rendering apparatus, a computer device and a storage medium, which can be applied to the field of games. The method includes: acquiring illumination influence information of a plurality of illumination areas of a virtual scene on the surface of a target object in the virtual scene; for the pixel points on the surface of the target object, determining the illumination influence degree of each illumination area on the pixel points based on the illumination influence information; determining illumination rendering information for the pixel points based on the illumination influence degree of each illumination area on the pixel points; and performing illumination rendering based on the illumination rendering information of the pixel points. The method can enhance the rendering effect.

Description

Illumination rendering method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an illumination rendering method and apparatus, a computer device, and a storage medium.
Background
With the development of internet technology, the popularization of personal terminals and the reduction of data traffic costs, illumination rendering technology is increasingly widely applied: various virtual scenes can be rendered with it, for example, the pictures in a game.
In a conventional illumination rendering method, when an object in a virtual scene is subjected to illumination rendering, only the illumination within a small range around the object is used. Relying solely on nearby illumination has inherent limitations: illumination jumps appear at boundary transitions, and the rendering effect is poor.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a lighting rendering method, apparatus, computer device, computer readable storage medium and computer program product capable of enhancing rendering effect.
In one aspect, the present application provides a lighting rendering method. The method comprises the following steps: acquiring illumination influence information of a plurality of illumination areas of a virtual scene on the surface of a target object in the virtual scene; for the pixel points on the surface of the target object, determining the illumination influence degree of each illumination area on the pixel points based on the illumination influence information; determining illumination rendering information aiming at the pixel points based on the illumination influence degree of each illumination area on the pixel points; and performing illumination rendering based on the illumination rendering information of the pixel points.
On the other hand, the application also provides an illumination rendering device. The device comprises: the information acquisition module is used for acquiring illumination influence information of a plurality of illumination areas of a virtual scene on the surface of a target object in the virtual scene; the degree determining module is used for determining the illumination influence degree of each illumination area on the pixel points on the surface of the target object based on the illumination influence information; the information determining module is used for determining illumination rendering information aiming at the pixel points based on the illumination influence degree of each illumination area on the pixel points; and the illumination rendering module is used for performing illumination rendering based on the illumination rendering information of the pixel points.
In some embodiments, the information acquisition module is further configured to: for the surface point of the target object, determining illumination areas respectively hit by a plurality of rays taking the surface point as a starting point; counting the number of rays hitting each illumination area to obtain the illumination influence degree of each illumination area on the surface point; and obtaining illumination influence information of the surface of the target object based on the illumination influence degree of each illumination area on the plurality of surface points of the target object.
In some embodiments, the information acquisition module is further configured to: for each ray starting at the surface point, determining an intersection point if the ray intersects a virtual object in the plurality of illumination regions; and determining the illumination area to which the intersection point belongs as the illumination area hit by the ray.
In some embodiments, the plurality of illumination areas comprises an outdoor area, and the illumination rendering apparatus is further configured to: determine the outdoor area as the illumination area hit by the ray in the case that the ray does not intersect any virtual object in the plurality of illumination areas.
In some embodiments, each of the illumination areas is provided with an area mask, and the information obtaining module is further configured to: for each surface point, establishing a corresponding relation between the area mask of each illumination area and the illumination influence degree of the illumination area on the surface point to obtain the area influence degree information of the surface point; and obtaining illumination influence information of the surface of the target object based on the region influence degree information of each surface point.
In some embodiments, the extent determination module is further configured to: selecting surface points with the distance smaller than a preset distance from the surface points based on the distance between the pixel points and the surface points to obtain matching points corresponding to the pixel points; acquiring the illumination influence degree of each illumination area on the matching point from the illumination influence information; and determining the illumination influence degree of each illumination area on the pixel point based on the acquired illumination influence degree.
In some embodiments, the information determination module is further to: determining illumination areas with illumination influence degrees larger than an influence degree threshold value from the illumination areas to obtain target illumination areas corresponding to the pixel points; and determining illumination rendering information of the pixel points based on the illumination influence degree of each target illumination area on the pixel points and the illumination information of each target illumination area.
In some embodiments, each of the illumination areas is provided with an area mask, a plurality of illumination probes are arranged in the virtual scene, and the target illumination area comprises a plurality of sub-areas; the illumination information of the target illumination area comprises illumination information of a subarea; the information acquisition module is further configured to: determining an area mask matched by each of the illumination probes; for a sub-area in the target illumination area, based on an area mask matched with the illumination probe, selecting the illumination probe with the matched area mask consistent with the area mask of the target illumination area from the illumination probes with an overlapping relation with the sub-area to obtain a target illumination probe corresponding to the sub-area; and obtaining the illumination information of the sub-region based on the illumination conversion information of the target illumination probe corresponding to the sub-region.
In some embodiments, the illumination conversion information of the target illumination probe characterizes a conversion relationship between the illumination parameter information of the target illumination probe and the illumination parameter information of the ambient light; the information acquisition module is further configured to: aiming at a target illumination probe corresponding to the subregion, calculating to obtain illumination parameter information of the target illumination probe based on illumination parameter information of the environment light and illumination conversion information of the target illumination probe; and obtaining illumination information corresponding to the sub-region based on the illumination parameter information of each target illumination probe corresponding to the sub-region.
In some embodiments, the information acquisition module is further configured to: for each target illumination probe of said sub-area, determining a weight of said target illumination probe based on a distance between said target illumination probe and said sub-area; and weighting and calculating the illumination parameter information of each target illumination probe by using the weight of each target illumination probe to obtain illumination information corresponding to the subarea.
In some embodiments, the information determination module is further to: for each target illumination area, determining sub-areas meeting a distance approaching condition from the sub-areas of the target illumination area based on the distance between the pixel point and the sub-areas in the target illumination area to obtain target sub-areas; and determining illumination rendering information of the pixel points based on the illumination influence degree of each target illumination area on the pixel points and illumination information of target sub-areas in each target illumination area.
On the other hand, the application also provides computer equipment. The computer device comprises a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above illumination rendering method when executing the computer program.
In another aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements the steps of the above illumination rendering method.
In another aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the above illumination rendering method.
The illumination rendering method, the illumination rendering device, the computer equipment, the storage medium and the computer program product are used for acquiring illumination influence information of a plurality of illumination areas of a virtual scene on the surface of a target object in the virtual scene, determining the illumination influence degree of each illumination area on a pixel point based on the illumination influence information for the pixel point on the surface of the target object, determining the illumination rendering information for the pixel point based on the illumination influence degree of each illumination area on the pixel point, and performing illumination rendering based on the illumination rendering information of the pixel point. Because the illumination rendering information of the pixel point is determined based on the illumination influence degree of each illumination area on the pixel point, the illumination influence of each illumination area on the pixel point is fully considered by the illumination rendering information, the illumination rendering information is more reasonable, and the rendering effect achieved by illumination rendering based on the illumination rendering information of the pixel point is improved.
Drawings
FIG. 1 is a diagram of an application environment of a lighting rendering method in some embodiments;
FIG. 2 is a flow diagram illustrating a method for lighting rendering in some embodiments;
FIG. 3 is a schematic view of an illumination area in some embodiments;
FIG. 4 is a schematic diagram of illumination jump in some embodiments;
FIG. 5 is a schematic diagram of determining lighting information in some embodiments;
FIG. 6 is a schematic illustration of an illumination probe generated in some embodiments;
FIG. 7 is a schematic diagram of determining illumination conversion information in some embodiments;
FIG. 8 is a schematic diagram of determining lighting information in some embodiments;
FIG. 9 is a rendering effect comparison graph in some embodiments;
FIG. 10 is a flow diagram illustrating a method of lighting rendering in some embodiments;
FIG. 11 is an architectural diagram of a lighting rendering method in some embodiments;
FIG. 12 is a rendering effect comparison graph in some embodiments;
FIG. 13 is a block diagram of the architecture of a lighting rendering device in some embodiments;
FIG. 14 is a diagram of the internal structure of a computer device in some embodiments;
FIG. 15 is a diagram of the internal structure of a computer device in some embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to illustrate the present application and are not intended to limit it.
The illumination rendering method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104, or may be placed on the cloud or other server.
Specifically, the terminal 102 may obtain scene data corresponding to the virtual scene from the server 104, where the scene data may include a light source and a virtual object. The terminal 102 obtains illumination influence information of a plurality of illumination areas of the virtual scene on the surface of a target object in the virtual scene in the process of displaying the virtual scene or running an application program including the virtual scene, determines the illumination influence degree of each illumination area on a pixel point based on the illumination influence information for the pixel point on the surface of the target object, determines illumination rendering information for the pixel point based on the illumination influence degree of each illumination area on the pixel point, and performs illumination rendering based on the illumination rendering information of the pixel point.
The terminal 102 may be, but not limited to, various desktop computers, notebook computers, smart phones, tablet computers, internet of things devices and portable wearable devices, and the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart car-mounted devices, and the like. The portable wearable device can be a smart watch, a smart bracelet, a head-mounted device, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
In some embodiments, as shown in fig. 2, there is provided a lighting rendering method, which may be executed by a terminal or a server, and may also be executed by both the terminal and the server, and which is exemplified by applying the method to the terminal 102 in fig. 1, including the following steps:
step 202, acquiring illumination influence information of a plurality of illumination areas of the virtual scene on the surface of a target object in the virtual scene.
The virtual scene refers to a digital scene constructed by a computer through digital communication technology, and includes, but is not limited to, at least one of a two-dimensional virtual scene or a three-dimensional virtual scene. The virtual scene may be, for example, a scene in a game, a VR (Virtual Reality) scene, or a scene in an animation.
The virtual scene may include a plurality of illumination areas, each illumination area being a spatial area in the virtual scene. Through the illumination information in each illumination area, the illumination condition of the virtual object in the virtual scene can be determined, that is, based on the illumination information in the illumination area, the illumination condition of the pixel point on the surface of the virtual object can be determined, so that the information required for rendering the pixel point is determined. When the illumination area includes a plurality of sub-areas, the illumination information of the illumination area includes illumination information of the respective sub-areas. The illumination area may be previously marked in the virtual scene. The manner of marking the illumination areas in the virtual scene may be flexible, for example, the illumination areas may be marked according to rooms in the virtual scene, for example, an area in which a room is located may be marked as an illumination area, and an area outside the room may be marked as an illumination area. The illumination areas may include an indoor area, which may be, for example, an illumination area marked for an area in which a room is located, and an outdoor area, which may be, for example, an illumination area marked for an area outside the room. An area mask may be set for each illumination area, and the area mask is used to uniquely identify the illumination area, for example, there are 4 illumination areas, 3 indoor areas and one outdoor area, the area masks of the 4 illumination areas are 0,1,2 and 3, respectively, and the area mask of the outdoor area is 0. The developer can set each illumination area in the editor in a visual mode, for example, different rooms are marked by using cuboids, each cuboid represents one illumination area, channels to which the illumination areas belong can be set, one area mask corresponds to one channel, the channels to which the illumination areas belong are used for uniquely identifying the illumination areas, and different illumination areas correspond to different channels. As shown in fig. 3, (a) shows a plurality of rooms in a virtual scene, and (b) shows a divided illumination area. The illumination area may also be referred to as an indoor area enclosure (indoor Volume), which is a 3D (three-dimensional) geometric body for marking an indoor area range of a room in a virtual scene, and the geometric body includes, but is not limited to, at least one of a rectangular parallelepiped, a triangular pyramid, a trapezoid, a cone, a cylinder, or a sphere.
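To make the area marking above concrete, the following minimal Python sketch models an illumination area as an axis-aligned cuboid carrying an area mask; the class name, fields and the particular room coordinates are assumptions chosen for illustration only, not the editor's actual data model.

    from dataclasses import dataclass

    @dataclass
    class IlluminationArea:
        # One marked region of the virtual scene (e.g. a room), identified by an area mask.
        area_mask: int        # unique channel identifier; 0 is used for the outdoor area in the text above
        min_corner: tuple     # (x, y, z) minimum corner of the cuboid
        max_corner: tuple     # (x, y, z) maximum corner of the cuboid

        def contains(self, point):
            # True if a 3D point lies inside this cuboid region.
            return all(lo <= c <= hi for c, lo, hi in zip(point, self.min_corner, self.max_corner))

    # Example: three rooms marked as cuboids with masks 1, 2 and 3; mask 0 is reserved for the outdoor area.
    rooms = [
        IlluminationArea(1, (0.0, 0.0, 0.0), (5.0, 3.0, 4.0)),
        IlluminationArea(2, (5.0, 0.0, 0.0), (9.0, 3.0, 4.0)),
        IlluminationArea(3, (0.0, 0.0, 4.0), (9.0, 3.0, 8.0)),
    ]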
The target object is a virtual object in a virtual scene, the virtual object refers to things in the virtual scene, the virtual object includes but is not limited to at least one of a virtual character, an animal, furniture, a building, and the like, and of course, the virtual object may also be part of furniture or a building, for example, a desktop of a table. The target object may be a virtual object in an indoor area, or may be a virtual object in an outdoor area.
The illumination influence information of the surface of the target object includes region influence degree information corresponding to each of a plurality of surface points of the target object, the surface points being points on the surface of the target object. The region influence degree information corresponding to a surface point comprises the illumination influence degree of each illumination area on the surface point. The region influence degree information may include a correspondence between an area mask and an illumination influence degree; the illumination influence degree of an illumination area on the surface point reflects the contribution of that illumination area to the illumination brightness of the surface point, and the greater the illumination influence degree, the greater the contribution to the illumination brightness of the surface point. For example, suppose there are four illumination areas with area masks S0, S1, S2 and S3, and the region influence degree information of surface point A of the target object is (S0=0, S1=0, S2=0.4, S3=0.6); this indicates that illumination area S3 contributes most to the illumination brightness of surface point A, that is, the illumination brightness of surface point A is mainly produced by light from illumination area S3.
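The region influence degree information of a single surface point can be pictured as a mapping from area mask to normalized influence degree, as in this assumed Python representation of the example above (the dictionary layout is illustrative, not the patent's storage format):

    # Region influence degree information of surface point A from the example above:
    # keys are area masks, values are normalized illumination influence degrees.
    region_influence_A = {"S0": 0.0, "S1": 0.0, "S2": 0.4, "S3": 0.6}

    # Because the degrees are ratios of ray counts, they are non-negative and sum to 1.
    assert abs(sum(region_influence_A.values()) - 1.0) < 1e-6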
The illumination influence degree of the illumination area on the surface point can be determined according to the number of rays entering the illumination area from the plurality of rays starting from the surface point, the illumination influence degree of the illumination area on the surface point and the number of rays entering the illumination area form a positive correlation relationship, and the greater the number of rays entering the illumination area, the greater the illumination influence degree of the illumination area on the surface point. The positive correlation refers to: under the condition that other conditions are not changed, the changing directions of the two variables are the same, and when one variable changes from large to small, the other variable also changes from large to small. It is understood that a positive correlation herein means that the direction of change is consistent, but does not require that when one variable changes at all, another variable must also change. For example, it may be set that the variable b is 100 when the variable a is 10 to 20, and the variable b is 120 when the variable a is 20 to 30. Thus, the change directions of a and b are both such that when a is larger, b is also larger. But b may be unchanged in the range of 10 to 20 a.
Specifically, the terminal may respond to a rendering instruction for the virtual scene, acquire illumination influence information corresponding to a surface of a target object in the virtual scene, and perform illumination rendering on the target object based on the illumination influence information of the surface of the target object to obtain a rendered virtual scene picture. For example, a pixel point on the surface of the target object is subjected to illumination rendering. The pixel point is the central position of the pixel. Pixels are the basic elements that make up an image. A pixel in an image is a small, square-shaped area within the image. Each pixel point on the surface of the target object is the central point of one pixel in a virtual scene picture to be rendered. The coordinates of the pixel point are expressed by three-dimensional coordinates, for example, the coordinates of the pixel point are (X, Y, Z), and X, Y, and Z respectively represent the coordinates of the pixel point on the X axis, the Y axis, and the Z axis of the three-dimensional coordinate system. In some embodiments, the lighting influence information of the target object surface may be generated by a Central Processing Unit (CPU), and the lighting influence information of the target object surface is used for performing lighting rendering on the target object, and a Graphics Processing Unit (GPU) may perform lighting rendering on the target object based on the lighting influence information of the target object surface that is generated in advance by the central processing unit during the lighting rendering on the target object, so as to improve rendering (rendering) performance of the GPU.
In some embodiments, the lighting influence information of the surface of the target object is a lighting map (Light map) of the surface of the target object. The Illumination map is a picture in which Illumination information is generated in advance by using a Global Illumination (GI) algorithm for a static target object in a virtual scene, and is used for representing an Illumination visual effect of the static object. Global Illumination (GI) is a process in which a light source irradiates an object, then scatters the light into a plurality of light rays, and irradiates the object again after a series of reactions such as reflection and refraction with other objects in the scene, and the light energy thus circulated is transmitted. Global illumination includes Precomputed semi-dynamic GI (PRT). The pre-computed semi-dynamic GI is the global illumination information calculated in a static scene by pre-computing the coefficient information of the transfer function of the illumination irradiance at a point on the surface of the object or illumination probe, and then using the dynamic illumination in combination with the coefficients of the pre-computed transfer function. Pre-computing the semi-dynamic GI may achieve a global illumination effect when objects of the scene are unchanged but illumination changes (e.g., a corresponding change in global illumination of the scene when day and night changes).
And 204, determining the illumination influence degree of each illumination area on the pixel point based on the illumination influence information for the pixel point on the surface of the target object.
Each pixel point on the surface of the target object is the central point of one pixel in a virtual scene picture to be rendered. The coordinates of the pixel point in the three-dimensional space can be represented by (X, Y, Z), and X, Y, Z respectively represent the coordinates of the pixel point on the X-axis, the Y-axis, and the Z-axis of the three-dimensional coordinate system. The coordinates of the pixel points in the two-dimensional space can be represented by (x, y).
Specifically, the illumination influence information of the surface of the target object includes area influence degree information corresponding to a plurality of surface points of the target object, and the area influence degree information corresponding to a surface point includes the illumination influence degree of each illumination area on that surface point, that is, it includes a correspondence between area masks and illumination influence degrees. A pixel point on the surface of the target object may coincide with one of the surface points for which area influence degree information exists, or may be a different point.
In some embodiments, the terminal may select a surface point matched with the pixel point from the surface points of the target object having region influence degree information, and determine the selected surface point as a matching point corresponding to the pixel point. Specifically, the terminal may select, from the surface points, a surface point whose distance from the pixel point is smaller than the first distance threshold as a matching point corresponding to the pixel point. The first distance threshold may be preset. The number of matching points corresponding to a pixel point can be one or more, where "more" means at least two. The terminal can determine the region influence degree information of the pixel point based on the region influence degree information of the matching points corresponding to the pixel point, where the region influence degree information comprises the illumination influence degree of each illumination area on the pixel point. For example, the terminal may perform a weighted calculation on the region influence degree information of the matching points to obtain the region influence degree information of the pixel point, where the weight used in the weighted calculation has a negative correlation with the distance between the matching point and the pixel point: the smaller the distance, the larger the weight.
Performing a weighted calculation on the region influence degree information of the matching points means performing a weighted calculation on the illumination influence degrees of the same illumination area across the matching points. For example, suppose the matching points corresponding to a pixel point are surface point A and surface point B, and the distance from surface point A to the pixel point equals the distance from surface point B to the pixel point, so both weights are 0.5. If the region influence degree information of surface point A is (S0=0, S1=0.1, S2=0.4, S3=0.5) and that of surface point B is (S0=0, S1=0.1, S2=0.6, S3=0.3), the weighted calculation is performed per illumination area: for illumination area S2, averaging 0.4 and 0.6 gives 0.5 as the influence degree of S2 on the pixel point, so the region influence degree information of the pixel point is (S0=0, S1=0.1, S2=0.5, S3=0.4), where 0, 0.1, 0.5 and 0.4 are the illumination influence degrees of the illumination areas S0, S1, S2 and S3 on the pixel point, in that order.
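A minimal sketch of this weighted blending, assuming inverse-distance weights normalized to sum to 1 (the exact weighting scheme beyond "closer means larger weight" is not fixed by the text):

    def blend_region_influence(matching_points, eps=1e-6):
        # matching_points: list of (distance_to_pixel, region_influence_dict) pairs.
        # Returns the blended region influence degree information of the pixel point.
        weights = [1.0 / (d + eps) for d, _ in matching_points]   # closer point -> larger weight
        total = sum(weights)
        blended = {}
        for w, (_, info) in zip(weights, matching_points):
            for mask, degree in info.items():
                blended[mask] = blended.get(mask, 0.0) + (w / total) * degree
        return blended

    # Surface points A and B at equal distance from the pixel point (so equal weights of 0.5):
    pixel_info = blend_region_influence([
        (1.0, {"S0": 0.0, "S1": 0.1, "S2": 0.4, "S3": 0.5}),
        (1.0, {"S0": 0.0, "S1": 0.1, "S2": 0.6, "S3": 0.3}),
    ])
    # pixel_info is approximately {"S0": 0.0, "S1": 0.1, "S2": 0.5, "S3": 0.4}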
The negative correlation relationship refers to: under the condition that other conditions are not changed, the changing directions of the two variables are opposite, and when one variable is changed from large to small, the other variable is changed from small to large. It is understood that the negative correlation herein means that the direction of change is reversed, but it is not required that when one variable changes at all, the other variable must also change.
In some embodiments, when the surface point includes the pixel point, a matching point corresponding to the pixel point may be determined as the pixel point itself, and the region influence degree information of the matching point is determined as the region influence degree information of the pixel point. When each surface point does not include the pixel point, the terminal can select the surface point, the distance between which and the pixel point is less than the first distance threshold value, from each surface point with the region influence degree information as the matching point corresponding to the pixel point.
And step 206, determining illumination rendering information aiming at the pixel points based on the illumination influence degree of each illumination area on the pixel points.
The illumination rendering information is the information required for performing illumination rendering on the pixel point, and may include illumination intensity. The illumination area comprises a plurality of sub-areas, each of which can represent one light source; the illumination parameter information of a sub-area is the parameter information of the light source represented by that sub-area and is used for determining the illumination information of the sub-area. The light source may be represented as a linear combination of one or more basis functions, and the parameter information of the light source may include the weights corresponding to these basis functions; that is, the illumination parameter information of a sub-area may include the weights of one or more basis functions (a plurality means at least two). Weighting the basis functions with these weights yields the illumination information of the sub-area.
Specifically, the terminal may obtain illumination information of each illumination area, the illumination information of the illumination area includes illumination information of each sub-area in the illumination area, and illumination rendering information for the pixel point is determined based on the illumination influence degree of each illumination area on the pixel point and the illumination information of each illumination area. For example, the terminal may select a sub-region matched with the pixel point from each illumination region to obtain a target sub-region. For example, for each illumination area, a sub-area of the illumination area, in which the distance from the pixel point is smaller than the second distance threshold, may be determined as the target sub-area. The sub-regions may be of any three-dimensional geometry, for example, spherical or cubic. The sub-region has a center position, and a distance between the pixel point and the sub-region may be represented by a distance between the pixel point and the center position of the sub-region. The second distance threshold may be preset or set as desired.
In some embodiments, the terminal may calculate, based on the illumination parameter information of the target sub-region and the basis function, illumination information of the target sub-region, determine, based on the illumination information of each target sub-region, illumination rendering information for the pixel point, where the illumination information of the sub-region may include illumination intensities in multiple directions in a space, and the terminal may determine, based on the illumination information of the target sub-region, an illumination intensity of the target sub-region to the pixel point, to obtain the illumination rendering information of the pixel point.
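To illustrate the basis-function reconstruction described above, the toy sketch below uses an assumed four-term basis (a constant plus three direction-dependent terms); the real basis functions and parameter layout are not specified by the text, and the names here are illustrative only.

    def basis_functions(direction):
        # Assumed toy basis: one constant term and three direction-dependent terms.
        x, y, z = direction
        return [1.0, x, y, z]

    def subregion_intensity(weights, direction):
        # Illumination intensity of a sub-region toward a given unit direction,
        # reconstructed as the weighted sum of the basis functions.
        return sum(w * b for w, b in zip(weights, basis_functions(direction)))

    # The illumination parameter information of a sub-region is its list of basis-function weights.
    subregion_weights = [0.8, 0.1, 0.3, -0.05]
    intensity_toward_pixel = subregion_intensity(subregion_weights, (0.0, 1.0, 0.0))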
In some embodiments, the terminal may select, from each illumination area, an illumination area having an illumination influence degree greater than an influence degree threshold based on the illumination influence degree, determine the selected illumination area as a target illumination area, and determine illumination rendering information for the pixel points based on the illumination influence degree of the target illumination area on the pixel points and illumination information of the target illumination area. The influence degree threshold may be preset, for example, the influence degree threshold is 0. Therefore, the illumination areas are screened before the illumination rendering information is determined, and the illumination areas which have no influence or little influence on the pixel points are filtered, so that the calculation amount is reduced, the computer resources are saved, and the rendering efficiency is improved. For each target illumination area, the terminal may determine a sub-area, in the target illumination area, in which the distance between the terminal and the pixel point is smaller than the second distance threshold, as a target sub-area. The illumination rendering information aiming at the pixel points is determined based on the illumination information of each target subregion, the illumination information of the target subregion can comprise illumination intensity towards multiple directions in a space, and the terminal can determine the illumination intensity of the target subregion towards the pixel points based on the illumination information of the target subregion to obtain the illumination rendering information of the pixel points.
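A short sketch of the screening step described above: areas whose influence degree does not exceed the threshold are discarded before any illumination is evaluated. The final influence-weighted sum is an assumed way of combining the per-area contributions, used here only to round out the example.

    def select_target_areas(pixel_influence, threshold=0.0):
        # Keep only illumination areas whose influence degree on the pixel exceeds the threshold.
        return {mask: deg for mask, deg in pixel_influence.items() if deg > threshold}

    def pixel_lighting(pixel_influence, area_intensity, threshold=0.0):
        # area_intensity: assumed per-area illumination intensity toward the pixel,
        # e.g. evaluated from the target sub-region of each target illumination area.
        targets = select_target_areas(pixel_influence, threshold)
        return sum(deg * area_intensity[mask] for mask, deg in targets.items())

    lighting = pixel_lighting({"S0": 0.0, "S1": 0.1, "S2": 0.5, "S3": 0.4},
                              {"S0": 0.2, "S1": 1.0, "S2": 0.7, "S3": 0.9})
    # Area S0 is filtered out (influence 0); the rest are weighted by their influence degrees.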
And step 208, performing illumination rendering based on the illumination rendering information of the pixel points.
Illumination rendering refers to performing the illumination calculation of the rendering process on the pixel points of the virtual scene picture to be rendered so that the final pixels exhibit an illumination effect, that is, determining the color values of the pixel points in the rendered virtual scene picture.
Specifically, after the terminal obtains the illumination rendering information of the pixel points, the illumination rendering can be performed based on the illumination rendering information of the pixel points, that is, the color values of the pixel points are calculated, for example, the illumination rendering information of the pixel points can include incident light rays from a plurality of target sub-regions, the terminal can determine the color values of the pixel points based on the illumination intensity of each incident light ray, and generate a virtual scene picture based on the color values of each pixel point.
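Purely as an illustration of determining a color value from incident light, the sketch below accumulates the incident rays with a simple Lambert diffuse term; this particular shading model is an assumption, not the shading prescribed by the text.

    def pixel_color(normal, albedo, incident_rays):
        # incident_rays: list of (light_direction, intensity) pairs, where light_direction is a
        # unit vector from the surface point toward the contributing target sub-region.
        color = [0.0, 0.0, 0.0]
        for direction, intensity in incident_rays:
            # Lambert term: cosine between surface normal and light direction, clamped at zero.
            n_dot_l = max(0.0, sum(n * d for n, d in zip(normal, direction)))
            for i in range(3):
                color[i] += albedo[i] * intensity * n_dot_l
        return tuple(color)

    rgb = pixel_color((0.0, 1.0, 0.0), (0.8, 0.7, 0.6),
                      [((0.0, 1.0, 0.0), 1.2), ((0.7, 0.7, 0.0), 0.5)])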
In the illumination rendering method, illumination influence information of a plurality of illumination areas of a virtual scene on the surface of a target object in the virtual scene is acquired, for pixel points on the surface of the target object, the illumination influence degree of each illumination area on the pixel points is determined based on the illumination influence information, the illumination rendering information for the pixel points is determined based on the illumination influence degree of each illumination area on the pixel points, and illumination rendering is performed based on the illumination rendering information of the pixel points. Because the illumination rendering information of the pixel points is determined based on the illumination influence degree of each illumination area on the pixel points, the illumination influence of each illumination area on the pixel points is fully considered by the illumination rendering information, the illumination rendering information is more reasonable, and the rendering effect achieved by illumination rendering based on the illumination rendering information of the pixel points is improved.
Current illumination rendering techniques can produce unnatural results at boundary transitions, namely illumination jumps. As shown in fig. 4, a jump in color occurs on the gate. The illumination rendering method provided by the application fully considers the illumination influence of each illumination area on the pixel points, so the illumination rendering information is more reasonable, illumination jumps at boundary transitions are reduced, the rendered picture is more natural, and the rendering effect is improved.
In some embodiments, obtaining lighting impact information of a plurality of lighting regions of a virtual scene on a surface of a target object in the virtual scene comprises: for the surface point of the target object, determining illumination areas respectively hit by a plurality of rays taking the surface point as a starting point; counting the number of rays hitting each illumination area to obtain the illumination influence degree of each illumination area on the surface point; and obtaining illumination influence information of the surface of the target object based on the illumination influence degree of each illumination area on a plurality of surface points on the surface of the target object.
The target object has a plurality of surface points, and each surface point may have a normal. A plurality of rays start from each surface point of the target object, and the angle between a ray and the normal of the surface point may be equal to or smaller than a preset angle, for example 90 degrees. A ray starting from a surface point of the target object may intersect a virtual object in one of the illumination areas, or may intersect no virtual object in any illumination area. Each ray starting from a surface point of the target object hits at least one illumination area; for example, each ray hits one and only one illumination area.
Specifically, the plurality of illumination areas may include an outdoor area and at least one indoor area. For a ray starting from a surface point of the target object, the terminal can judge whether the ray intersects any virtual object in the illumination areas; when it is determined that the ray intersects no virtual object in any illumination area, the ray is determined to hit the outdoor area. When it is determined that the ray intersects a virtual object, the intersection point is determined, and the illumination area to which the intersection point belongs is determined as the illumination area hit by the ray; for example, when the intersection point belongs to illumination area S1, the ray hits illumination area S1. The intersection point here is the intersection point closest to the starting point along the ray direction, that is, the point the ray collides with first. For example, if a ray extends infinitely and intersects virtual object R1 and then virtual object R2 along its direction, only the intersection with virtual object R1 is considered, since the ray is in fact blocked by R1.
In some embodiments, for each illumination area, the number of rays starting from the surface point of the target object that hit the illumination area is positively correlated with the illumination influence degree of the illumination area on the surface point. For example, the terminal may count the number of rays hitting the illumination area and determine that count as the illumination influence degree of the illumination area on the surface point, or may normalize the count by dividing it by the total number of rays and determine the normalized result as the illumination influence degree of the illumination area on the surface point. For each surface point of the target object, the total number of rays refers to the number of all rays starting from that surface point. For example, suppose 10 rays start from surface point A and there are 4 illumination areas S0, S1, S2 and S3, with 3 of the 10 rays hitting S0, 5 hitting S1, 2 hitting S2 and 0 hitting S3; then illumination area S1 has the largest illumination influence degree on the point. Since the total number of rays is 10, normalizing 3, 5, 2 and 0 gives 0.3, 0.5, 0.2 and 0, so the illumination influence degrees of illumination areas S0, S1, S2 and S3 on surface point A are 0.3, 0.5, 0.2 and 0, respectively.
In some embodiments, the identifier of an illumination area may be referred to as the mask of the illumination area, and the mask may be set flexibly, for example as a number such as 0, 1 or 2. The target object is an object in the virtual environment, and the illumination influence information of the surface of the target object can be called an object mask. The object mask is computed by emitting rays in all directions from points on the object surface: if a ray hits the surface of an object, the mask of the area to which the hit point (i.e. the intersection point) belongs is determined and recorded; after the area masks of all ray hit points have been recorded, the number of rays corresponding to each mask is counted, and the proportion of that number among all rays is the mask information of the surface point in that channel. Rays that do not hit any object are classified as belonging to the outdoor area. The object mask thus bakes out how strongly a surface point is influenced by different areas. For example, a floor spanning the door frame between two rooms is influenced by the illumination of both rooms at once, so the seam problem at area transitions (i.e. the illumination jump problem) can be softened.
For example, the object mask may be determined using the following pseudo code:
for each object                                        // object refers to the target object
    for each surface point on object                   // a point on the surface of the target object
        for each ray sent from surface point to scene  // scene refers to the virtual scene; the ray starts at the surface point
            if ray hits a game object                  // game object refers to a virtual object
                check which region the hit point is in // region refers to an illumination area; the hit point is the intersection point
                ObjectMask[mask_channel]++             // ObjectMask stores, per area mask (mask_channel), the number of rays hitting that illumination area
            else
                ObjectMask[outdoor_channel]++          // rays that hit nothing are counted toward the outdoor area
        for each channel                               // channel has the same meaning as mask_channel, i.e. the area mask
            ObjectMask[channel] /= total_ray_num       // normalize by the total number of rays emitted from the surface point
The normalized ObjectMask of a surface point is its area influence degree information, storing the illumination influence degree of each illumination area on that point; the ObjectMasks of all surface points together form the object mask of the target object.
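For readers who prefer executable code, the following Python sketch mirrors the pseudocode above; the sample_rays, trace_first_hit and area_of_point callables are assumed stand-ins for the engine's actual ray sampling, ray casting and region lookup.

    from collections import Counter

    OUTDOOR_MASK = 0  # per the text above, the outdoor area uses area mask 0

    def bake_object_mask(surface_points, sample_rays, trace_first_hit, area_of_point):
        # Returns, for each surface point, a dict mapping area mask -> illumination influence degree.
        #   sample_rays(point): ray directions cast from the point (within the normal hemisphere)
        #   trace_first_hit(point, direction): nearest intersection with a virtual object, or None
        #   area_of_point(hit_point): area mask of the illumination area the hit point belongs to
        object_mask = {}
        for point in surface_points:
            counts = Counter()
            rays = sample_rays(point)
            for direction in rays:
                hit = trace_first_hit(point, direction)
                # Rays that hit nothing are classified as outdoor, as described above.
                counts[area_of_point(hit) if hit is not None else OUTDOOR_MASK] += 1
            object_mask[point] = {mask: n / len(rays) for mask, n in counts.items()}
        return object_mask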
In some embodiments, the terminal obtains the illumination influence degrees of each illumination area on a plurality of surface points of the target object, and obtains the illumination influence information on the surface of the target object based on the illumination influence degrees, for example, for each surface point on the surface of the target object, the illumination influence degree of each illumination area on the surface point may be composed into area influence degree information corresponding to the surface point, the area influence degree information corresponding to the surface points on the surface of the target object is composed into the illumination influence information on the surface of the target object, that is, the illumination influence information on the surface of the target object, including the area influence degree information corresponding to the surface points of the target object.
In some embodiments, the terminal may determine the number of rays starting from the surface point, resulting in the total number of rays; and based on the total number of the rays, performing normalization processing on the number of the rays hitting each illumination area to obtain the illumination influence degree of each illumination area on the surface point. Specifically, the total number of rays refers to the number of rays starting from the surface point. For each illumination area, the terminal may calculate a ratio of the number of rays hitting the illumination area to the total number of rays, and determine the ratio as the illumination influence degree of the illumination area on the surface point. The number of rays hitting each illumination area is normalized, so that the calculation amount of the illumination influence degree is reduced, and the calculation efficiency is improved.
In this embodiment, for a surface point of the target object, the illumination areas hit by multiple rays starting from the surface point are determined, the number of rays hitting each illumination area is counted to obtain the illumination influence degree of each illumination area on the surface point, and the illumination influence information of the surface of the target object is obtained based on the illumination influence degrees of the illumination areas on the multiple surface points of the target object. In this way, the illumination influence degree of each illumination area on a surface point of the target object is accurately determined by the rays starting from that surface point, which further improves the accuracy of the illumination influence information of the surface of the target object.
In some embodiments, determining the illumination regions hit by the plurality of rays respectively starting from the surface point comprises: for each ray that starts at a surface point, determining an intersection point if the ray intersects a virtual object in the plurality of illumination regions; and determining the illumination area to which the intersection point belongs as the illumination area hit by the ray.
Specifically, for a ray starting from a surface point of the target object, the terminal may determine whether the ray intersects a virtual object in any of the illumination areas; when it is determined that the ray intersects a virtual object, the intersection point is determined, and the illumination area to which the intersection point belongs is determined as the illumination area hit by the ray. For example, when the intersection point belongs to illumination area S1, the ray hits illumination area S1. The intersection point here is the intersection point closest to the starting point along the ray direction, that is, the point the ray collides with first. For example, if a ray extends infinitely and intersects virtual object R1 and then virtual object R2 along its direction, only the intersection with R1 is considered, since the ray is in fact blocked by R1.
In this embodiment, for each ray with the surface point as the starting point, in the case that the ray intersects with a virtual object in the plurality of illumination areas, the intersection point is determined, and the illumination area to which the intersection point belongs is determined as the illumination area hit by the ray, so that the illumination area hit by the ray is determined through the intersection point between the ray and the virtual object, and accuracy and efficiency of determining the illumination area hit by the ray are improved.
In some embodiments, the plurality of lighting areas includes an outdoor area, and the lighting rendering method further includes: and under the condition that the ray does not intersect all virtual objects in the plurality of illumination areas, determining the outdoor area as the illumination area hit by the ray.
Wherein the rays may not intersect virtual objects in the respective illumination areas, in which case the rays enter the outdoor area. For example, when the target object is a virtual object in an indoor space, the ray on the surface point a of the target object directly enters the outdoor area without intersecting the virtual object in each indoor space.
Specifically, the terminal determines that the outdoor area is the illumination area hit by the ray when determining that the ray does not intersect with the virtual object of each of the plurality of illumination areas.
In this embodiment, under the condition that it is determined that the ray does not intersect with the virtual object in each of the plurality of illumination areas, it is determined that the outdoor area is the illumination area hit by the ray, so that the illumination area hit by the ray is accurately determined.
In some embodiments, each illumination area is provided with an area mask, and obtaining illumination influence information of the surface of the target object based on illumination influence degrees of each illumination area on a plurality of surface points of the target object respectively includes: for each surface point, establishing a corresponding relation between the area mask of each illumination area and the illumination influence degree of the illumination area on the surface point to obtain the area influence degree information of the surface point; and obtaining illumination influence information of the surface of the target object based on the region influence degree information of each surface point.
The area mask is used to uniquely identify the illumination areas, for example, there are 4 illumination areas, 3 indoor areas and one outdoor area, the area masks of the 4 illumination areas are 0,1,2 and 3, respectively, and the area mask of the outdoor area is 0.
The region influence degree information may include a correspondence between the region mask and the illumination influence degree. For example, the plurality of illumination areas are four illumination areas, the area masks of the four illumination areas are S0, S1, S2, and S3, respectively, and the area influence degree information of the surface point a of the target object is (S0 =0, S1=0, S2=0.4, S3= 0.6). The illumination of the target object surface influences the information about the degree of influence of the area corresponding to each of the plurality of surface points of the target object. The lighting impact information of the target object surface may be pre-generated prior to rendering.
Specifically, the terminal may store the area influence degree information corresponding to each of the plurality of surface points of the target object in a unified manner to obtain the illumination influence information of the surface of the target object, and for example, may store the area influence degree information corresponding to each of the plurality of surface points in an illumination map manner to obtain the illumination influence information of the surface of the target object.
In this embodiment, a correspondence is established between the area mask of each illumination area and the illumination influence degree of that illumination area on the surface point to obtain the area influence degree information of the surface point, so that the illumination influence degree of each illumination area on the surface point can be looked up from the area influence degree information by its area mask, which improves the efficiency of determining the illumination influence degree.
In some embodiments, determining the degree of illumination influence of each illumination region on the pixel point based on the illumination influence information comprises: selecting surface points with the distance smaller than a preset distance from the surface points based on the distance between the pixel points and the surface points to obtain matching points corresponding to the pixel points; acquiring the illumination influence degree of each illumination area on the matching point from the illumination influence information; and determining the illumination influence degree of each illumination area on the pixel points based on the acquired illumination influence degrees.
The matching points are surface points of which the distance between each surface point of the target object and the pixel point is smaller than a preset distance. The matching points may be one or more.
Specifically, the illumination influence information includes area influence degree information corresponding to each surface point, and the area influence degree information corresponding to the surface point includes illumination influence degrees of each illumination area on the surface point. The terminal can acquire the region influence degree information corresponding to the matching points from the illumination influence information, so that the illumination influence degree of each illumination region on the matching points is obtained.
In some embodiments, when there is one matching point, the terminal may determine the illumination influence degree of each illumination area on the matching point as the illumination influence degree of that illumination area on the pixel point. When there are multiple matching points, the terminal may perform a weighted calculation, per illumination area, on the illumination influence degrees in the area influence degree information of the matching points to obtain the illumination influence degree of each illumination area on the pixel point. For example, suppose the matching points corresponding to the pixel point are surface point A and surface point B, and the distance from surface point A to the pixel point is the same as the distance from surface point B to the pixel point, so that both weighting weights are 0.5. If the area influence degree information of surface point A is (S0=0, S1=0.1, S2=0.4, S3=0.5) and that of surface point B is (S0=0, S1=0.1, S2=0.6, S3=0.3), the weighted calculation is performed with the illumination area as the dimension; for illumination area S2, for instance, averaging 0.4 and 0.6 gives 0.5, so the area influence degree information of the pixel point is (S0=0, S1=0.1, S2=0.5, S3=0.4), as sketched below.
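By way of illustration only, the weighting above can be sketched as follows; the data structures and names (a mapping from area mask to influence degree per surface point) are hypothetical and do not limit the method:

    # Hypothetical sketch: area influence degree information of two matching points,
    # keyed by area mask, blended with equal weights of 0.5 each.
    influence_a = {"S0": 0.0, "S1": 0.1, "S2": 0.4, "S3": 0.5}
    influence_b = {"S0": 0.0, "S1": 0.1, "S2": 0.6, "S3": 0.3}

    def blend_influence(infos, weights):
        # Weighted sum per illumination area (i.e. per area mask).
        masks = infos[0].keys()
        return {m: sum(w * info[m] for w, info in zip(weights, infos)) for m in masks}

    pixel_influence = blend_influence([influence_a, influence_b], [0.5, 0.5])
    # -> {"S0": 0.0, "S1": 0.1, "S2": 0.5, "S3": 0.4} (up to floating point error)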
In this embodiment, since the distance between a matching point and the pixel point is smaller than the preset distance, the matching point lies close to the pixel point, so the illumination influence degrees of the matching point can reasonably be mapped onto the pixel point. Obtaining the illumination influence degree of each illumination area on the matching points from the illumination influence information and determining the illumination influence degree of each illumination area on the pixel point from these values therefore yields an accurate illumination influence degree for the pixel point.
In some embodiments, determining the illumination rendering information for the pixel point based on the illumination influence degree of each illumination area on the pixel point comprises: determining the illumination areas with the illumination influence degrees larger than the influence degree threshold value from the illumination areas to obtain target illumination areas corresponding to the pixel points; and determining illumination rendering information of the pixel points based on the illumination influence degree of each target illumination area on the pixel points and the illumination information of each target illumination area.
The target illumination area is an illumination area with the illumination influence degree larger than the influence degree threshold in each illumination area. The threshold value of the degree of influence may be preset, and may be 0, for example. The target illumination area may be one or more, and a plurality means at least two.
The target illumination area includes a plurality of sub-areas, and the illumination information of the target illumination area may include the illumination information of the respective sub-areas in the target illumination area. The illumination information of a sub-area is determined based on the illumination parameter information of the sub-area. A sub-area may represent a light source, and the illumination parameter information of the sub-area is the parameter information of the light source represented by the sub-area. The light source may be represented as a linear combination of one or more basis functions, so the parameter information of the light source, and hence the illumination parameter information of the sub-area, may include the weights corresponding to the one or more basis functions, where a plurality means at least two. Weighting the basis functions by these weights yields the illumination information (irradiance) of the sub-area, which represents the illumination information of the light source represented by the sub-area. A basis function is an element of a particular basis of a function space; in a function space, every continuous function can be represented as a linear combination of basis functions. The illumination information may be represented by spherical harmonics (Spherical Harmonic Functions), for example low-order (2nd or 3rd order) spherical harmonics, and the basis functions may be the spherical harmonic basis functions. Spherical harmonics are the angular portion of the solution of Laplace's equation in spherical coordinates. As shown in fig. 5, the spherical harmonic expansion corresponding to the illumination information is
L_env(ω) = Σ_i l_i · y_i(ω),
where L_env(ω) denotes the illumination information, l_i is the weight of the light source on each spherical harmonic basis function, and y_i is each spherical harmonic basis function. The l in fig. 5 is the weight vector formed by the individual l_i. Each weight l_i can be calculated from the raw information of the light source and the spherical harmonic basis, for example using the formula
l_i = ∫ L_env(ω) · y_i(ω) dω,
where ω is an angle indicating the direction of the light ray.
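As an illustrative, non-authoritative sketch of this projection, the weights l_i can be estimated numerically with Monte Carlo integration over the sphere; the light_radiance function and the list of basis functions are assumptions introduced only for this example:

    import math, random

    def sample_sphere():
        # Uniform random direction on the unit sphere.
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        return (r * math.cos(phi), r * math.sin(phi), z)

    def project_to_sh(light_radiance, sh_basis, num_samples=4096):
        # l_i = integral of L_env(w) * y_i(w) over the sphere, estimated as the
        # average sample value times the sphere's solid angle 4*pi.
        coeffs = [0.0] * len(sh_basis)
        for _ in range(num_samples):
            w = sample_sphere()
            radiance = light_radiance(w)       # hypothetical light source query
            for i, y in enumerate(sh_basis):   # sh_basis: list of y_i callables
                coeffs[i] += radiance * y(w)
        return [4.0 * math.pi * c / num_samples for c in coeffs]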
Specifically, the terminal may determine, as a target sub-area, a sub-area in the target illumination area whose distance from the pixel point is smaller than the second distance threshold. The terminal may then determine the illumination rendering information for the pixel point based on the illumination information of each target sub-area; the illumination information of a target sub-area may include the illumination intensity toward multiple directions in space, and the terminal may determine the illumination intensity of the target sub-area toward the pixel point based on the illumination information of the target sub-area, thereby obtaining the illumination rendering information of the pixel point.
In some embodiments, for each illumination area, the terminal may store the illumination parameter information corresponding to each subspace of the illumination area in a unified manner, for example using a 3D (3-dimensional) texture, so that each illumination area corresponds to one 3D texture and the illumination parameter information corresponding to each subspace of the illumination area is stored as a texel of that 3D texture. Such a 3D texture may also be referred to as an Irradiance Volume. The irradiance volume uses the 3D texture to store the illumination information of the illumination probes within a specified 3-dimensional area, and during the illumination calculation of an object the illumination information of the 8 illumination probes around the object's position can be obtained directly through 3D interpolation. The central processing unit may generate the 3D texture of an illumination area in advance, and when performing illumination rendering the terminal may obtain the illumination parameter information of a subspace from the pre-generated 3D texture of the illumination area, as sketched below.
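The following minimal sketch illustrates such a per-area 3D texture lookup with trilinear interpolation of the 8 surrounding texels; the array layout and function names are assumptions for illustration, not the actual engine storage:

    import numpy as np

    # Hypothetical layout: one 3D texture per illumination area; each texel holds
    # the SH weights (illumination parameter information) of one subspace.
    # volume has shape (X, Y, Z, num_sh_coeffs).
    def sample_irradiance_volume(volume, uvw):
        # uvw: normalized position inside the illumination area, in [0, 1]^3.
        dims = np.array(volume.shape[:3]) - 1
        pos = np.clip(np.array(uvw) * dims, 0, dims)
        base = np.floor(pos).astype(int)
        frac = pos - base
        result = np.zeros(volume.shape[3])
        # Trilinear blend of the 8 surrounding texels (i.e. 8 illumination probes).
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    corner = np.minimum(base + (dx, dy, dz), dims)
                    w = ((frac[0] if dx else 1 - frac[0])
                         * (frac[1] if dy else 1 - frac[1])
                         * (frac[2] if dz else 1 - frac[2]))
                    result += w * volume[corner[0], corner[1], corner[2]]
        return result  # interpolated SH weights at the queried position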
In this embodiment, the illumination areas whose illumination influence degree is greater than the influence degree threshold are determined from the illumination areas to obtain the target illumination areas corresponding to the pixel point. The illumination areas can thus be filtered based on the illumination influence degree first, so that illumination areas that have no influence on the pixel point (i.e., whose illumination influence degree equals 0) are filtered out, which reduces the amount of calculation and improves rendering efficiency.
In some embodiments, each illumination area is provided with an area mask, a plurality of illumination probes are arranged in the virtual scene, and the target illumination area comprises a plurality of sub-areas; the illumination parameter information of the target illumination area comprises illumination information of a subarea; the steps of obtaining the illumination information of the subarea are as follows: determining an area mask matched with each illumination probe; for the sub-region in the target illumination region, based on the region mask matched with the illumination probe, selecting the illumination probe with the matched region mask consistent with the region mask of the target illumination region from the illumination probes with the overlapping relation with the sub-region to obtain the target illumination probe corresponding to the sub-region; and obtaining illumination information of the subareas based on the illumination conversion information of the target illumination probes corresponding to the subareas.
Here, the illumination probe (Light Probe) is an illumination detector placed in the virtual scene in advance; its shape may be at least one of a sphere, a polyhedron, and the like, for example a cube. The illumination probe may also be referred to as a light probe. Information about light passing through the empty space of the scene can be captured and used by the illumination probe. Similar to the illumination map, the illumination probe stores baked illumination information of the virtual scene, including pre-computed illumination conversion information. The difference is that the illumination map stores information about light hitting the surfaces of static objects in the virtual scene, while the illumination probe stores information about light passing through empty space in the scene. The position of the illumination probe and the stored illumination conversion information are preset.
The target illumination area includes a plurality of sub-areas, and the shape of the sub-areas may be arbitrary, including but not limited to at least one of tetrahedrons, hexahedrons, or spheres. The illumination parameter information corresponding to the sub-region is determined based on the illumination conversion information of the illumination probe. The illumination conversion information is a conversion relation between illumination parameter information of the ambient light in the virtual scene and illumination parameter information of the illumination probe, and the illumination parameter information of the illumination probe can be obtained by using the illumination conversion information of the illumination probe and the illumination parameter information of the ambient light. For example, the illumination parameter information of the illumination probe is obtained by multiplying the illumination conversion information of the illumination probe by the illumination parameter information of the ambient light.
The overlapping relationship means that two spatial regions include a common region; for example, if the spatial region represented by the sub-area shares a common region with the spatial region represented by the illumination probe, the sub-area has an overlapping relationship with that illumination probe.
Specifically, for each sub-region in the target illumination region, the terminal may obtain, from each illumination probe, an illumination probe that has an overlapping relationship with the sub-region and belongs to the target illumination region, to obtain a target illumination probe corresponding to the sub-region, where the target illumination probe corresponding to each sub-region may be one or more.
In some embodiments, the virtual scene may include ambient light, and the ambient light may be represented as a linear combination of a plurality of basis functions; that is, the ambient light corresponds to illumination parameter information, which includes the weights of the respective basis functions constituting the ambient light. A plurality of illumination probes are arranged in the virtual scene, and each illumination probe has illumination conversion information. The illumination conversion information is the conversion relationship between the illumination parameter information of the ambient light in the virtual scene and the illumination parameter information of the illumination probe, and the illumination parameter information of the illumination probe can be obtained using the illumination conversion information of the illumination probe and the illumination parameter information of the ambient light.
In some embodiments, the terminal may determine the area masks matched by the illumination probe, and establish a correspondence between the illumination probe and the area masks, for example, the matched area masks may be stored in the illumination probe. Specifically, the terminal may determine whether the position of the illumination probe belongs to the illumination region, and when it is determined that the position of the illumination probe belongs to the illumination region, determine the area mask of the illumination region as the area mask matched with the illumination probe. For example, if there are 4 illumination regions, the region masks are S0, S1, S2, and S3, respectively, and the illumination probe P is located in the illumination region S1, the region mask of the illumination region S1, i.e., S1, is determined as the region mask matched with the illumination probe P. The illumination area may be referred to as a baking area. For example, the area mask that the illumination probe matches can be determined using the following pseudo code:
for probe in probes:                           # iterate over the illumination probes
    for region in regions:                     # iterate over the illumination areas
        if region.contains(probe.position):    # the probe position belongs to this illumination area
            probe.mask = region.mask           # take the area mask as the mask matched by the probe
In some embodiments, for each sub-area, the terminal may obtain the illumination conversion information of the target illumination probes corresponding to the sub-area and the illumination parameter information of the ambient light, and multiply the illumination conversion information of each target illumination probe with the illumination parameter information of the ambient light to obtain the illumination parameter information of that target illumination probe. After obtaining the illumination parameter information of each target illumination probe of the sub-area, the terminal may calculate the illumination parameter information of the sub-area based on the illumination parameter information of these target illumination probes. Specifically, the terminal may perform a weighted calculation on the illumination parameter information of the target illumination probes of the sub-area and take the result as the illumination parameter information of the sub-area, where the weight may be determined based on the distance between a target illumination probe and the sub-area, a larger distance giving a smaller weight. In other words, the illumination parameter information of the sub-area is obtained by interpolating the illumination parameter information of the illumination probes found based on the position of the sub-area, as sketched below.
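One possible form of this distance-based weighting is inverse-distance interpolation, sketched below under assumed data structures (each probe carrying a list of precomputed SH parameter coefficients; all names are hypothetical):

    import math

    def subregion_sh_params(subregion_center, target_probes, eps=1e-6):
        # Inverse-distance weighting: the larger the distance to the sub-area,
        # the smaller the weight of the target illumination probe.
        weights = [1.0 / (math.dist(subregion_center, p.position) + eps)
                   for p in target_probes]
        total = sum(weights)
        num_coeffs = len(target_probes[0].sh_params)
        params = [0.0] * num_coeffs
        for w, probe in zip(weights, target_probes):
            for i in range(num_coeffs):
                params[i] += (w / total) * probe.sh_params[i]
        return params  # interpolated illumination parameter information of the sub-area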
In some embodiments, the terminal may determine the center position of each illumination probe in the virtual scene based on the surface points of the virtual objects in the virtual scene. Specifically, for a virtual object in the virtual scene, the basic geometric shapes in the three-dimensional mesh model corresponding to the virtual object are rasterized, the points within each basic geometric shape and their normal directions are determined, and the position at a preset distance from each point along its normal direction is taken as the center position of an illumination probe. A three-dimensional mesh model (3D mesh) is a polygonal mesh consisting of a series of geometric shapes used to simulate the surface of a virtual object; for example, during game development, the surface of each virtual object is simulated by one or more three-dimensional mesh models. In the embodiment of the present application, the three-dimensional mesh model of the virtual object is a polygonal mesh that simulates the surface of the virtual object. The basic geometric shape is the smallest shape constituting the three-dimensional mesh model and may be, for example, a triangle. For example, each triangle of the object surface (the mesh triangles) may be subdivided, i.e., rasterized, and each rasterized point is offset by a certain amount along the normal direction, so that a number of points near the object surface are generated and the illumination data is stored at these points, i.e., these points are taken as the center positions of the illumination probes. As shown in fig. 6, which illustrates a virtual room in a virtual scene, the black filled circles in the room represent the generated illumination probes.
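A simplified sketch of this placement step is given below; rasterize_triangle is a hypothetical rasterizer yielding (point, normal) pairs and mesh is an assumed mesh structure, used only for illustration:

    def place_probes(mesh, offset_distance):
        # Offset each rasterized surface point along its normal by a preset distance;
        # the resulting positions serve as the center positions of the illumination probes.
        probe_centers = []
        for triangle in mesh.triangles:                         # basic geometric shapes of the 3D mesh
            for point, normal in rasterize_triangle(triangle):  # hypothetical rasterizer
                center = tuple(p + offset_distance * n for p, n in zip(point, normal))
                probe_centers.append(center)
        return probe_centers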
In some embodiments, the illumination conversion information of an illumination probe may be determined based on a ray tracing algorithm. The illumination conversion information may also be referred to as illumination transmission data or illumination transmission process parameters. In particular, after the position of each illumination probe is determined, the illumination conversion information may be generated for each probe using a ray tracing algorithm. For example, rays can be emitted from the center of the illumination probe toward the surroundings; a ray keeps bouncing when it encounters an object until it no longer hits any object, the direction of the last bounce and the direction of the first ray are recorded, and the correspondence between the position of the illumination probe and these directions is recorded, thereby obtaining the illumination conversion information.
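As a rough, non-authoritative sketch of this ray-tracing step, the directions recorded per probe might be gathered as follows; scene.intersect and reflect are hypothetical helpers introduced only for this example:

    def trace_probe(scene, probe_center, directions, max_bounces=8):
        # For each initial direction, bounce until no object is hit, and record
        # the pairing of the first ray direction with the last bounce direction.
        records = []
        for first_dir in directions:
            origin, direction = probe_center, first_dir
            for _ in range(max_bounces):
                hit = scene.intersect(origin, direction)    # hypothetical intersection query
                if hit is None:
                    break
                origin = hit.position
                direction = reflect(direction, hit.normal)  # hypothetical reflection helper
            records.append((first_dir, direction))          # (first ray, last bounce direction)
        return records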
In some embodiments, the illumination information (Irradiance) of the illumination probe may be represented in spherical harmonics. The terminal may calculate the illumination conversion information using the formula in fig. 7, which can be written as
T_p,ij = ∫ y_i^p(ω) · ỹ_j(ω) dω,
where T_p,ij represents the illumination conversion information, p represents an illumination probe, ω represents the angle, and y_i^p(ω) represents the basis function y_i of the light source as it reaches the illumination probe p, either directly or after reflection off any number of objects. ỹ_j represents the basis function in the spherical harmonics corresponding to the illumination probe, which can be understood as the emission direction of the ambient light, while y_i, y_{i-1}, y_{i+1} represent the corresponding basis functions of the ambient light, i.e., the incident directions of the ambient light.
In some embodiments, the terminal may perform real-time illumination reconstruction for the illumination probe according to the illumination parameter information of the light source (ambient light) and of the illumination probe. The illumination conversion information of the illumination probe may be an illumination transmission process parameter calculated from the visibility of the rays and the ray geometric angle projection weight information. For example, the illumination information of the illumination probe can be calculated using the formula
L_o(x, ω_o) = ∫_Ω L_i(x, ω_i) · T(x, ω_i, ω_o) dω_i,
where L_o represents the outgoing illumination information, i.e., the result of the final illumination reconstruction; L_i represents the incident light (light source information); T represents the illumination transfer function (illumination parameter information); Ω represents the set of individual lighting directions; x represents a position; ω_i indicates the direction of the incident light; and ω_o indicates the outgoing illumination direction. As shown in fig. 8, fig. 8 (a) shows the visibility of a ray, fig. 8 (b) shows the geometric angle projection weight information of a ray, fig. 8 (c) shows the illumination conversion information, and fig. 8 (d) shows the (ambient light) light source information together with the result of the final illumination reconstruction. The visibility of a ray is the visibility function between two points, a binary function in which 0 means invisible and 1 means visible. The ray geometric angle projection weight information is the dot product between the normal direction at a point and the ray direction at that point.
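When the incident light and the transfer function are both expanded on the same spherical harmonic basis, the integral above reduces, for a given point and outgoing direction, to a dot product of their coefficient vectors. A minimal sketch under that assumption (variable names are illustrative only):

    def reconstruct_outgoing_light(light_coeffs, transfer_coeffs):
        # L_o ≈ sum_i l_i * t_i: dot product of the ambient-light SH weights with
        # the precomputed transfer (illumination conversion) coefficients.
        return sum(l * t for l, t in zip(light_coeffs, transfer_coeffs))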
In this embodiment, the area mask matched with each illumination probe is determined. For a sub-area in the target illumination area, based on the area masks matched with the illumination probes, the illumination probes whose matched area mask is consistent with the area mask of the target illumination area are selected from the illumination probes that have an overlapping relationship with the sub-area, thereby obtaining the target illumination probes corresponding to the sub-area, and the illumination information of the sub-area is obtained based on the illumination conversion information of these target illumination probes. In this way, the illumination probes belonging to the target illumination area are accurately determined based on the matched area masks, and the illumination parameter information of the sub-area is determined only from illumination probes that both overlap the sub-area and belong to the target illumination area, so that it is not affected by other illumination areas. This reduces the light leakage phenomenon and makes the rendering effect more realistic.
Here, light leakage refers to the phenomenon in which light that should be blocked passes through the occluder and illuminates the occluded area. As shown in fig. 9, (a) in fig. 9 is a rendering effect diagram with light leakage, and (b) in fig. 9 is a rendering effect diagram without light leakage; it can be seen that (b) is more realistic than (a). In (a), light blocked by the eave passes through the eave and illuminates the area below it, so the area below the eave in (a) is brighter than in (b); normally, since the light is blocked, the area below the eave should be darker.
In some embodiments, the illumination conversion information of the target illumination probe characterizes a conversion relationship between the illumination parameter information of the target illumination probe and the illumination parameter information of the ambient light; obtaining the illumination information of the subarea based on the illumination conversion information of the target illumination probes corresponding to the subarea comprises: aiming at a target illumination probe corresponding to a subregion, calculating to obtain illumination parameter information of the target illumination probe based on illumination parameter information of ambient light and illumination conversion information of the target illumination probe; and obtaining illumination information corresponding to the sub-region based on the illumination parameter information of each target illumination probe corresponding to the sub-region.
Wherein the illumination information of the ambient light of the virtual scene is a linear combination of the plurality of basis functions. The ambient light includes, for example, sunlight and the like. The illumination parameter information of the ambient light includes weights of the basis functions corresponding to the illumination information of the ambient light. And the illumination parameter information of the target illumination probe comprises the weight of each basis function corresponding to the illumination information of the target illumination probe. Each basis function corresponding to the illumination information of the ambient light may be the same as or different from each basis function corresponding to the illumination information of the target illumination probe.
Specifically, the terminal may multiply the illumination parameter information of the ambient light by the illumination conversion information of the target illumination probe to obtain the illumination parameter information of the target illumination probe.
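Viewing the illumination conversion information as a matrix that maps the ambient-light SH weights to the probe's SH weights, this multiplication can be sketched as follows (a pure-Python illustration under that assumption, with hypothetical names):

    def probe_sh_params(conversion_matrix, ambient_sh_params):
        # Each row of the conversion matrix yields one SH weight of the probe:
        # probe_i = sum_j conversion[i][j] * ambient_j.
        return [sum(row[j] * ambient_sh_params[j] for j in range(len(ambient_sh_params)))
                for row in conversion_matrix]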
In some embodiments, the terminal may perform weighted calculation on the illumination parameter information of each target illumination probe corresponding to the sub-region to obtain the illumination parameter information corresponding to the sub-region, and the smaller the distance between the target illumination probe and the sub-region is, the larger the weight of the illumination parameter information of the target illumination probe is. The illumination parameter information corresponding to the sub-region comprises weights corresponding to one or more basis functions respectively, the number of the weights is at least two, and the terminal can utilize the weights of the basis functions in the illumination parameter information of the sub-region to perform linear combination on the basis functions to obtain the illumination information of the sub-region.
In this embodiment, because the illumination conversion information of the target illumination probe represents the conversion relationship between the illumination parameter information of the target illumination probe and the illumination parameter information of the ambient light, the illumination parameter information of the target illumination probe can be quickly obtained based on the illumination conversion information, the illumination information corresponding to the sub-region is obtained based on the illumination parameter information of each target illumination probe corresponding to the sub-region, and the efficiency of obtaining the illumination information is improved.
In some embodiments, obtaining the illumination information corresponding to the sub-region based on the illumination parameter information of each target illumination probe corresponding to the sub-region includes: for each target illumination probe of the sub-region, determining the weight of the target illumination probe based on the distance between the target illumination probe and the sub-region; and weighting and calculating the illumination parameter information of each target illumination probe by using the weight of each target illumination probe to obtain illumination information corresponding to the subarea.
Wherein the distance between the target illumination probe and the sub-region is in a negative correlation with the weight of the target illumination probe.
Specifically, the terminal may perform weighted calculation on the illumination parameter information of each target illumination probe by using the weight of each target illumination probe to obtain the illumination parameter information of the sub-region, where the illumination parameter information of the sub-region includes weights corresponding to one or more basis functions, and a plurality of the weights refers to at least two basis functions, and the terminal may perform linear combination on the basis functions by using the weights of the basis functions in the illumination parameter information of the sub-region to obtain the illumination information of the sub-region.
In this embodiment, since a closer distance implies more similar illumination conditions, the weight of each target illumination probe is determined based on the distance between the target illumination probe and the sub-area, and the illumination parameter information of the target illumination probes is weighted with these weights to obtain the illumination information corresponding to the sub-area, thereby improving the accuracy of the illumination information.
In some embodiments, determining the illumination rendering information of the pixel point based on the illumination influence degree of each target illumination area on the pixel point and the illumination information of each target illumination area includes: for each target illumination area, determining sub-areas meeting a distance approaching condition from the sub-areas of the target illumination area based on the distance between the pixel point and the sub-areas in the target illumination area to obtain target sub-areas; and determining illumination rendering information of the pixel points based on the illumination influence degree of each target illumination area on the pixel points and the illumination information of the target sub-area in each target illumination area.
Wherein the distance closeness condition includes the distance being less than a second distance threshold. The second distance threshold may be preset or set as desired.
Specifically, for each target illumination area, the terminal may calculate the distance between the pixel point and each sub-area in the target illumination area, and when it is determined that the distance is smaller than the second distance threshold, determine the sub-area as a target sub-area. One or more target sub-areas may be screened from each target illumination area, where a plurality refers to at least two.
In some embodiments, the terminal may determine the irradiation information of each target sub-area at the pixel point, where the irradiation information includes the illumination intensity. The terminal may then use the illumination influence degree of each target illumination area on the pixel point to perform a weighted calculation on the irradiation information contributed by each target sub-area at the pixel point, thereby determining the illumination rendering information of the pixel point. For example, the terminal may take the illumination influence degree of a target illumination area on the pixel point as the weighting weight of the target sub-areas in that target illumination area, weight the irradiation information of each target sub-area at the pixel point with these weights, and so determine the illumination rendering information of the pixel point, as sketched below.
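The sketch below combines the selection of target sub-areas by distance with the influence-degree weighting described above; all names (region.mask, sub.irradiance_toward and so on) are hypothetical and serve only as an illustration of the idea, not as the claimed implementation:

    import math

    def pixel_render_info(pixel_pos, target_regions, influence, distance_threshold):
        # influence: mapping from area mask to the illumination influence degree
        # of that illumination area on the pixel point.
        total = 0.0
        for region in target_regions:
            weight = influence[region.mask]            # weighting weight of this area's sub-areas
            for sub in region.subregions:
                if math.dist(pixel_pos, sub.center) < distance_threshold:   # distance closeness condition
                    total += weight * sub.irradiance_toward(pixel_pos)      # hypothetical irradiance query
        return total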
In this embodiment, because the sub-region satisfying the distance approaching condition has a large influence on the illumination of the pixel point, for each target illumination region, the sub-region satisfying the distance approaching condition is determined from each sub-region of the target illumination region based on the distance between the pixel point and the sub-region in the target illumination region to obtain the target sub-region, and the illumination rendering information of the pixel point is determined based on the degree of influence on the pixel point by each target illumination region and the illumination information of the target sub-region in each target illumination region, so that the accuracy of the illumination rendering information is improved.
The present application further provides an embodiment of an image rendering method, and specifically, as shown in fig. 10, the image rendering method includes:
step 1002, for a surface point of a target object, determining illumination areas respectively hit by multiple rays with the surface point as a starting point, and counting the number of rays hitting each illumination area to obtain the illumination influence degree of each illumination area on the surface point.
The illumination areas are areas in the virtual scene, and the target object is an object in the virtual scene. The illumination areas may be generated automatically using a cluster analysis method.
And 1004, for each surface point, establishing a corresponding relation between the area mask of each illumination area and the illumination influence degree of the illumination area on the surface point to obtain area influence degree information of the surface point, and obtaining the illumination influence information of the surface of the target object based on the area influence degree information of each surface point.
Step 1006, the illumination area includes a plurality of sub-areas, an area mask matched with each illumination probe is determined, and an illumination probe having a matching area mask identical to the area mask of the target illumination area is selected from the illumination probes having an overlapping relationship with the sub-areas, so as to obtain a target illumination probe corresponding to the sub-area.
Step 1008, determining illumination parameter information of the sub-region based on the illumination conversion information of the target illumination probe corresponding to the sub-region, and determining illumination information of the sub-region based on the illumination parameter information of the sub-region.
The steps of generating the illumination probes and determining their illumination conversion information may form an offline processing stage. As shown in fig. 11, the illumination rendering scheme of the present application includes an offline processing stage, a central processing stage, and a graphics processing stage. The offline processing stage includes a probe placing stage and a probe baking stage; the probe placing stage is used to determine the positions of the illumination probes, and the probe baking stage is used to determine the illumination conversion information of the illumination probes. The illumination parameter information of the sub-areas may be generated by the central processor during the global volume texture stage, and the illumination parameter information of each sub-area of an illumination area may be stored using a 3D texture, i.e., the global volume texture stage in fig. 11 generates the 3D texture. The probe spherical harmonic update stage in fig. 11 may comprise the step of generating the illumination parameter information of the illumination probes, and the sparse probe re-illumination stage may comprise the step of generating the illumination information of the illumination probes.
Step 1010, for the pixel points on the surface of the target object, determining the illumination influence degree of each illumination area on the pixel points based on the illumination influence information of the surface of the target object.
Step 1012, determining sub-regions satisfying the distance approaching condition from each sub-region of the illumination regions to obtain target sub-regions, and determining illumination rendering information of the pixel points based on the illumination influence degree of each illumination region on the pixel points and the illumination information of the target sub-regions in each illumination region.
Wherein, the determining of the target sub-region may be performed by the central processor, for example, in fig. 11, the target sub-region corresponding to the pixel point is determined in the object volume selection stage.
And 1014, performing illumination rendering based on the illumination rendering information of the pixel points.
Wherein, as shown in fig. 11, the lighting rendering may be performed by a graphics processor.
By setting areas and area masks and producing mask information for both the objects and the illumination probes, the interpolation technique for illumination probes not only solves the light leakage problem of illumination probes but also produces a natural transition effect at area boundaries, and can reach a performance of 60 fps (Frames Per Second) on mobile platforms. The illumination rendering method provided by the application can be applied to a semi-dynamic GI technique, provides a global illumination solution for game studios, and can be implemented with a Baking Engine and a rendering engine (UE PRT render).
In some embodiments, the actual effect of the illumination rendering method proposed by the present application is tested. Fig. 12 shows a comparison before and after handling light leakage: fig. 12 (a) shows the effect with unhandled light leakage, where the wall on the right and the wall in the middle of the picture have a serious light leakage problem, and fig. 12 (b) shows the correctly displayed illumination effect when the light leakage problem is solved by the illumination rendering method provided by the present application. It can be seen that (b) in fig. 12 is more natural than (a) in fig. 12.
The illumination rendering method provided by the application can be applied to any scene requiring illumination rendering, including but not limited to at least one of a game scene, an animation scene, a virtual reality scene, and the like. Applied to game, animation, or Virtual Reality (VR) scenes, the illumination rendering method can solve the problems of light leakage and illumination jumping, making the rendering effect more natural.
In a game scene, the illumination rendering method provided by the application can be used to render a game picture. Specifically, the game includes a virtual scene, and the virtual scene includes a plurality of illumination areas, each provided with an area mask that uniquely identifies the illumination area. Illumination probes are generated in the virtual scene, and the area mask matched with each illumination probe is determined. Each illumination area includes a plurality of sub-areas; an illumination parameter information set of the illumination area is generated from the illumination probes around its sub-areas, the set including the illumination parameter information corresponding to each sub-area in the illumination area, and the illumination information of a sub-area is generated based on its illumination parameter information. The virtual scene includes virtual objects. For a target object to be rendered in the virtual scene, the illumination influence information of the plurality of illumination areas of the virtual scene on the surface of the target object can be acquired. For a pixel point on the surface of the target object and for each illumination area, a sub-area satisfying the distance closeness condition is determined from the sub-areas of the illumination area based on the distance between the pixel point and the sub-areas, so as to obtain the target sub-areas. The illumination rendering information of the pixel point is determined based on the illumination influence degree of each target illumination area on the pixel point and the illumination information of the target sub-areas in each target illumination area, and illumination rendering is performed based on the illumination rendering information of the pixel point, so that the game picture is rendered.
In an animation scene, the illumination rendering method provided by the application can be used to render an animation picture. Specifically, the animation includes a virtual scene, and the virtual scene includes a plurality of illumination areas, each provided with an area mask that uniquely identifies the illumination area. Illumination probes are generated in the virtual scene, and the area mask matched with each illumination probe is determined. Each illumination area includes a plurality of sub-areas; an illumination parameter information set of the illumination area is generated from the illumination probes around its sub-areas, the set including the illumination parameter information corresponding to each sub-area in the illumination area, and the illumination information of a sub-area is generated based on its illumination parameter information. The virtual scene includes virtual objects. For a target object to be rendered in the virtual scene, the illumination influence information of the plurality of illumination areas of the virtual scene on the surface of the target object can be acquired. For a pixel point on the surface of the target object and for each illumination area, a sub-area satisfying the distance closeness condition is determined from the sub-areas of the illumination area based on the distance between the pixel point and the sub-areas, so as to obtain the target sub-areas. The illumination rendering information of the pixel point is determined based on the illumination influence degree of each target illumination area on the pixel point and the illumination information of the target sub-areas in each target illumination area, and illumination rendering is performed based on the illumination rendering information of the pixel point, so that the animation picture is rendered.
It should be understood that, although the steps in the flowcharts involved in the above embodiments are shown in the sequence indicated by the arrows, these steps are not necessarily executed in that sequence. Unless explicitly stated otherwise, the execution order of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts involved in the above embodiments may include multiple steps or stages, which are not necessarily performed at the same time but may be performed at different times; the execution order of these steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the steps or stages in other steps.
Based on the same inventive concept, the embodiment of the present application further provides an illumination rendering apparatus for implementing the illumination rendering method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the illumination rendering device provided below can refer to the limitations on the illumination rendering method in the foregoing, and details are not described herein again.
In some embodiments, as shown in fig. 13, there is provided a lighting rendering apparatus including: an information obtaining module 1302, a degree determining module 1304, an information determining module 1306, and a lighting rendering module 1308, wherein:
an information obtaining module 1302, configured to obtain illumination influence information of a plurality of illumination areas of a virtual scene on a surface of a target object in the virtual scene.
And a degree determining module 1304, configured to determine, for a pixel point on the surface of the target object, an illumination influence degree of each illumination area on the pixel point based on the illumination influence information.
The information determining module 1306 is configured to determine illumination rendering information for the pixel point based on an illumination influence degree of each illumination area on the pixel point.
And an illumination rendering module 1308, configured to perform illumination rendering based on the illumination rendering information of the pixel point.
In some embodiments, the information acquisition module is further configured to: determining illumination areas respectively hit by a plurality of rays taking the surface point as a starting point for the surface point of the target object; counting the number of rays hitting each illumination area to obtain the illumination influence degree of each illumination area on the surface point; and obtaining illumination influence information of the surface of the target object based on the illumination influence degree of each illumination area on a plurality of surface points of the target object.
In some embodiments, the information acquisition module is further configured to: for each ray with the surface point as a starting point, determining an intersection point under the condition that the ray intersects with the virtual objects in the plurality of illumination areas; and determining the illumination area to which the intersection point belongs as the illumination area hit by the ray.
In some embodiments, the plurality of lighting zones comprises an outdoor zone, the lighting rendering apparatus is further to: and under the condition that the ray does not intersect all virtual objects in the plurality of illumination areas, determining the outdoor area as the illumination area hit by the ray.
In some embodiments, each illumination area is provided with an area mask, and the information obtaining module is further configured to: for each surface point, establishing a corresponding relation between the area mask of each illumination area and the illumination influence degree of the illumination area on the surface point to obtain the area influence degree information of the surface point; and obtaining illumination influence information of the surface of the target object based on the region influence degree information of each surface point.
In some embodiments, the extent determination module is further to: selecting surface points with the distance smaller than a preset distance from the surface points based on the distance between the pixel points and the surface points to obtain matching points corresponding to the pixel points; acquiring the illumination influence degree of each illumination area on the matching point from the illumination influence information; and determining the illumination influence degree of each illumination area on the pixel points based on the acquired illumination influence degrees.
In some embodiments, the information determination module is further to: determining the illumination areas with the illumination influence degrees larger than the influence degree threshold value from the illumination areas to obtain target illumination areas corresponding to the pixel points; and determining illumination rendering information of the pixel points based on the illumination influence degree of each target illumination area on the pixel points and the illumination information of each target illumination area.
In some embodiments, each illumination area is provided with an area mask, a plurality of illumination probes are arranged in the virtual scene, and the target illumination area comprises a plurality of sub-areas; the illumination information of the target illumination area comprises the illumination information of the subarea; the information acquisition module is further configured to: determining an area mask matched with each illumination probe; for the sub-region in the target illumination region, based on the region mask matched with the illumination probe, selecting the illumination probe with the matched region mask consistent with the region mask of the target illumination region from the illumination probes with the overlapping relation with the sub-region to obtain the target illumination probe corresponding to the sub-region; and obtaining the illumination information of the subarea based on the illumination conversion information of the target illumination probe corresponding to the subarea.
In some embodiments, the illumination conversion information of the target illumination probe characterizes a conversion relationship between the illumination parameter information of the target illumination probe and the illumination parameter information of the ambient light; the information acquisition module is further configured to: aiming at a target illumination probe corresponding to a subregion, calculating to obtain illumination parameter information of the target illumination probe based on illumination parameter information of ambient light and illumination conversion information of the target illumination probe; and obtaining illumination information corresponding to the sub-region based on the illumination parameter information of each target illumination probe corresponding to the sub-region.
In some embodiments, the information acquisition module is further configured to: for each target illumination probe of the sub-region, determining a weight of the target illumination probe based on a distance between the target illumination probe and the sub-region; and performing weighted calculation on the illumination parameter information of each target illumination probe by using the weight of each target illumination probe to obtain illumination information corresponding to the subarea.
In some embodiments, the information determination module is further to: for each target illumination area, determining sub-areas meeting a distance approaching condition from the sub-areas of the target illumination area based on the distance between the pixel point and the sub-areas in the target illumination area to obtain target sub-areas; and determining illumination rendering information of the pixel points based on the illumination influence degree of each target illumination area on the pixel points and the illumination information of the target sub-area in each target illumination area.
The modules in the illumination rendering device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In some embodiments, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 14. The computer device includes a processor, a memory, an Input/Output interface (I/O for short), and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The database of the computer device is used for storing data involved in the illumination rendering method. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a lighting rendering method.
In some embodiments, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 15. The computer apparatus includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for communicating with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a lighting rendering method. The display unit of the computer equipment is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device, the display screen can be a liquid crystal display screen or an electronic ink display screen, the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on a shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configurations shown in fig. 14 and 15 are block diagrams of only some of the configurations relevant to the present application, and do not constitute a limitation on the computing devices to which the present application may be applied, and a particular computing device may include more or less components than those shown, or some of the components may be combined, or have a different arrangement of components.
In some embodiments, a computer device is provided, comprising a memory in which a computer program is stored and a processor, which when executing the computer program, implements the steps in the above-described illumination rendering method.
In some embodiments, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when executed by a processor, implements the steps in the above-described illumination rendering method.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps in the above-described lighting rendering method.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant countries and regions.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include a Read-Only Memory (ROM), a magnetic tape, a floppy disk, a flash Memory, an optical Memory, a high-density embedded nonvolatile Memory, a resistive Random Access Memory (ReRAM), a Magnetic Random Access Memory (MRAM), a Ferroelectric Random Access Memory (FRAM), a Phase Change Memory (PCM), a graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases referred to in various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing based data processing logic devices, etc., without limitation.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as the combined features do not contradict each other.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (24)

1. A method of lighting rendering, the method comprising:
for a surface point of a target object in a virtual scene, determining the illumination areas respectively hit by a plurality of rays taking the surface point as a starting point; the illumination areas belong to the virtual scene;
counting the number of rays hitting each illumination area to obtain the illumination influence degree of each illumination area on the surface point;
obtaining illumination influence information of the surface of the target object based on the illumination influence degree of each illumination area on a plurality of surface points of the target object;
for a pixel point on the surface of the target object, selecting, based on the distances between the pixel point and the surface points, a surface point whose distance to the pixel point is smaller than a preset distance, to obtain a matching point corresponding to the pixel point;
acquiring the illumination influence degree of each illumination area on the matching point from the illumination influence information;
determining the illumination influence degree of each illumination area on the pixel point based on the obtained illumination influence degree;
determining illumination rendering information for the pixel point based on the illumination influence degree of each illumination area on the pixel point;
and performing illumination rendering based on the illumination rendering information of the pixel point.
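The following Python sketch (editorial illustration only, not part of the claims) shows one way the ray-counting and matching-point steps of claim 1 could look in code. The helper `cast_ray`, the representation of points as 3-tuples, and the ray budget of 128 are assumptions made for this example, not details taken from the patent.

```python
import math
import random
from collections import Counter

def illumination_influence(surface_point, num_areas, cast_ray, num_rays=128):
    """Count how many random rays from `surface_point` land in each illumination area.

    `cast_ray(origin, direction)` is assumed to return the index of the illumination
    area hit by the ray (falling back to the outdoor area, see claims 3-4). The
    normalized hit counts serve as the illumination influence degrees (claim 2).
    """
    hits = Counter()
    for _ in range(num_rays):
        # Uniform random direction on the unit sphere.
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        direction = (r * math.cos(phi), r * math.sin(phi), z)
        hits[cast_ray(surface_point, direction)] += 1
    return [hits[area] / num_rays for area in range(num_areas)]

def matching_points(pixel_pos, surface_points, max_distance):
    """Surface points closer to the pixel than `max_distance` are its matching points."""
    return [p for p in surface_points if math.dist(pixel_pos, p) < max_distance]
```

Normalizing the per-area hit counts by the ray budget yields influence degrees that grow with the number of rays entering an area, in line with claim 2; this normalization is a design choice of the sketch.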
2. The method of claim 1, wherein the illumination influence degree of an illumination area on the surface point is positively correlated with the number of rays entering the illumination area.
3. The method of claim 1, wherein the determining of the illumination areas hit by the plurality of rays starting from the surface point comprises:
for each ray with the surface point as a starting point, determining an intersection point if the ray intersects with a virtual object in the plurality of illumination areas;
and determining the illumination area to which the intersection point belongs as the illumination area hit by the ray.
4. The method of claim 3, wherein the plurality of illumination areas comprises an outdoor area, the method further comprising:
determining the outdoor area as the illumination area hit by the ray in a case that the ray does not intersect any virtual object in the plurality of illumination areas.
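A minimal sketch of the per-ray lookup described in claims 3 and 4, assuming a hypothetical `intersect(origin, direction, obj)` helper that returns the nearest intersection point or None, an `area_of_point` mapping from a 3D point to its area index, and an illustrative `OUTDOOR_AREA` index; none of these are defined by the patent.

```python
OUTDOOR_AREA = 0  # hypothetical index reserved for the outdoor illumination area

def area_hit_by_ray(origin, direction, virtual_objects, area_of_point, intersect):
    """Return the illumination area hit by one ray."""
    closest_hit, closest_d2 = None, float("inf")
    for obj in virtual_objects:
        hit = intersect(origin, direction, obj)
        if hit is not None:
            d2 = sum((h - o) ** 2 for h, o in zip(hit, origin))  # squared distance to the hit
            if d2 < closest_d2:
                closest_hit, closest_d2 = hit, d2
    if closest_hit is None:
        return OUTDOOR_AREA            # no virtual object intersected (claim 4)
    return area_of_point(closest_hit)  # area containing the intersection point (claim 3)
```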
5. The method according to claim 1, wherein each of the illumination areas is provided with an area mask, and the obtaining of the illumination influence information of the surface of the target object based on the illumination influence degree of each of the illumination areas on the plurality of surface points of the target object comprises:
for each surface point, establishing a correspondence between the area mask of each illumination area and the illumination influence degree of the illumination area on the surface point, to obtain the area influence degree information of the surface point;
and obtaining the illumination influence information of the surface of the target object based on the area influence degree information of each surface point.
6. The method according to claim 5, wherein the obtaining of the illumination influence information of the surface of the target object based on the area influence degree information of each of the surface points comprises:
and forming the illumination influence information of the surface of the target object by using the area influence degree information of each surface point.
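A small sketch of the data layout suggested by claims 5 and 6, assuming plain Python dictionaries; the patent does not prescribe this representation, and the function and variable names are illustrative.

```python
def area_influence_info(area_masks, influence_degrees):
    """Pair each illumination area's mask with its influence degree on one surface point."""
    return dict(zip(area_masks, influence_degrees))

def surface_influence_info(surface_points, area_masks, influence_per_point):
    """Collect the per-point records into the illumination influence information of the surface."""
    return {point: area_influence_info(area_masks, influence_per_point[point])
            for point in surface_points}
```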
7. The method of claim 1, wherein the determining of the illumination rendering information for the pixel point based on the illumination influence degree of each of the illumination areas on the pixel point comprises:
determining, from the illumination areas, the illumination areas whose illumination influence degree on the pixel point is greater than an influence degree threshold, to obtain target illumination areas corresponding to the pixel point;
and determining the illumination rendering information of the pixel point based on the illumination influence degree of each target illumination area on the pixel point and the illumination information of each target illumination area.
8. The method of claim 7, wherein each of the illumination areas is provided with an area mask, a plurality of illumination probes are provided in the virtual scene, and the target illumination area comprises a plurality of sub-areas; the illumination information of the target illumination area comprises illumination information of the sub-areas; the illumination information of a sub-area is obtained as follows:
determining the area mask matched by each of the illumination probes;
for a sub-area in the target illumination area, selecting, from the illumination probes that have an overlapping relationship with the sub-area and based on the area masks matched by the illumination probes, the illumination probes whose matched area mask is consistent with the area mask of the target illumination area, to obtain target illumination probes corresponding to the sub-area;
and obtaining the illumination information of the sub-area based on the illumination conversion information of the target illumination probes corresponding to the sub-area.
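An illustrative sketch of the probe selection in claim 8, assuming a hypothetical `Probe` record and an `overlaps(probe, subarea)` test; neither the record layout nor the overlap test is specified by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Probe:
    position: tuple          # world-space position of the illumination probe
    matched_mask: int        # area mask the probe was matched to
    conversion_info: list = field(default_factory=list)  # illumination conversion info (claim 9)

def target_probes_for_subarea(probes, subarea, target_area_mask, overlaps):
    """Keep the probes that overlap the sub-area and whose matched mask equals the target area's mask."""
    return [p for p in probes
            if overlaps(p, subarea) and p.matched_mask == target_area_mask]
```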
9. The method according to claim 8, wherein the illumination conversion information of a target illumination probe characterizes a conversion relationship between the illumination parameter information of the target illumination probe and the illumination parameter information of ambient light;
the obtaining of the illumination information of the sub-area based on the illumination conversion information of the target illumination probes corresponding to the sub-area comprises:
for each target illumination probe corresponding to the sub-area, calculating the illumination parameter information of the target illumination probe based on the illumination parameter information of the ambient light and the illumination conversion information of the target illumination probe;
and obtaining the illumination information corresponding to the sub-area based on the illumination parameter information of each target illumination probe corresponding to the sub-area.
10. The method according to claim 9, wherein the obtaining of the illumination information corresponding to the sub-area based on the illumination parameter information of each target illumination probe corresponding to the sub-area comprises:
for each target illumination probe of the sub-area, determining a weight of the target illumination probe based on the distance between the target illumination probe and the sub-area;
and performing a weighted calculation on the illumination parameter information of each target illumination probe by using the weight of each target illumination probe, to obtain the illumination information corresponding to the sub-area.
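A hedged sketch of claims 9 and 10, under two simplifying assumptions made only for this example: the illumination conversion information is a per-channel scale applied to the ambient-light parameters, and the probe weights fall off with the inverse distance to the sub-area centre.

```python
import math

def probe_parameters(ambient_params, conversion_info):
    """Apply the probe's conversion relationship to the ambient-light parameter information."""
    return [a * c for a, c in zip(ambient_params, conversion_info)]

def subarea_illumination(probes, subarea_centre, ambient_params):
    """Distance-weighted blend of the target probes' illumination parameters (claim 10).

    `probes` is a list of (position, conversion_info) pairs for the sub-area's target probes.
    """
    if not probes:
        return list(ambient_params)  # fallback when no target probe matches the sub-area
    weights = [1.0 / (math.dist(pos, subarea_centre) + 1e-6) for pos, _ in probes]
    total = sum(weights)
    per_probe = [probe_parameters(ambient_params, conv) for _, conv in probes]
    return [sum(w * p[i] for w, p in zip(weights, per_probe)) / total
            for i in range(len(ambient_params))]
```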
11. The method of claim 7, wherein the determining of the illumination rendering information of the pixel point based on the illumination influence degree of each target illumination area on the pixel point and the illumination information of each target illumination area comprises:
for each target illumination area, determining, from the sub-areas of the target illumination area and based on the distances between the pixel point and the sub-areas in the target illumination area, the sub-areas meeting a distance proximity condition, to obtain target sub-areas;
and determining the illumination rendering information of the pixel point based on the illumination influence degree of each target illumination area on the pixel point and the illumination information of the target sub-areas in each target illumination area.
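A minimal sketch combining claims 7 and 11: areas whose influence on the pixel is below a threshold are discarded, the sub-area closest to the pixel is taken as the target sub-area, and the sub-area illumination is blended with the per-area influence degrees as weights. The dictionary layout, the threshold value, and the normalization at the end are assumptions of this example.

```python
import math

def render_info_for_pixel(pixel_pos, influence_by_area, subareas_by_area,
                          illumination_by_subarea, threshold=0.05):
    """
    influence_by_area:       {area_id: influence degree of the area on the pixel}
    subareas_by_area:        {area_id: [(subarea_id, centre_position), ...]}
    illumination_by_subarea: {subarea_id: [r, g, b] illumination information}
    """
    blended = [0.0, 0.0, 0.0]
    total_influence = 0.0
    for area_id, influence in influence_by_area.items():
        if influence <= threshold:      # claim 7: keep only the target illumination areas
            continue
        # Claim 11: the target sub-area is the sub-area closest to the pixel.
        subarea_id, _ = min(subareas_by_area[area_id],
                            key=lambda s: math.dist(pixel_pos, s[1]))
        light = illumination_by_subarea[subarea_id]
        blended = [b + influence * c for b, c in zip(blended, light)]
        total_influence += influence
    # Normalize by the total influence so the blend stays in a sensible range.
    return [b / total_influence for b in blended] if total_influence > 0 else blended
```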
12. An illumination rendering apparatus, characterized in that the apparatus comprises:
the information acquisition module is used for, for a surface point of a target object in a virtual scene, determining the illumination areas respectively hit by a plurality of rays taking the surface point as a starting point; the illumination areas belong to the virtual scene; counting the number of rays hitting each illumination area to obtain the illumination influence degree of each illumination area on the surface point; and obtaining the illumination influence information of the surface of the target object based on the illumination influence degrees of each illumination area on a plurality of surface points of the target object;
the degree determining module is used for, for a pixel point on the surface of the target object, selecting, based on the distances between the pixel point and the surface points, a surface point whose distance to the pixel point is smaller than a preset distance, to obtain a matching point corresponding to the pixel point; acquiring the illumination influence degree of each illumination area on the matching point from the illumination influence information; and determining the illumination influence degree of each illumination area on the pixel point based on the acquired illumination influence degree;
the information determining module is used for determining illumination rendering information for the pixel point based on the illumination influence degree of each illumination area on the pixel point;
and the illumination rendering module is used for performing illumination rendering based on the illumination rendering information of the pixel point.
13. The apparatus of claim 12, wherein the illumination influence degree of an illumination area on the surface point is positively correlated with the number of rays entering the illumination area.
14. The apparatus of claim 12, wherein the information obtaining module is further configured to:
for each ray starting at the surface point, determining an intersection point if the ray intersects a virtual object in the plurality of illumination regions;
and determining the illumination area to which the intersection point belongs as the illumination area hit by the ray.
15. The apparatus of claim 14, wherein the plurality of illumination areas comprises an outdoor area, the apparatus further to:
determining the outdoor area as the illumination area hit by the ray in a case that the ray does not intersect any virtual object in the plurality of illumination areas.
16. The apparatus according to claim 12, wherein each of the illumination areas is provided with an area mask, and the information obtaining module is further configured to:
for each surface point, establishing a corresponding relation between the area mask of each illumination area and the illumination influence degree of the illumination area on the surface point to obtain the area influence degree information of the surface point;
and obtaining the illumination influence information of the surface of the target object based on the area influence degree information of each surface point.
17. The apparatus of claim 16, wherein the information obtaining module is further configured to:
and forming the illumination influence information of the surface of the target object by using the area influence degree information of each surface point.
18. The apparatus of claim 12, wherein the information determining module is further configured to:
determining, from the illumination areas, the illumination areas whose illumination influence degree on the pixel point is greater than an influence degree threshold, to obtain target illumination areas corresponding to the pixel point;
and determining the illumination rendering information of the pixel point based on the illumination influence degree of each target illumination area on the pixel point and the illumination information of each target illumination area.
19. The apparatus according to claim 18, wherein each of the illumination areas is provided with an area mask, a plurality of illumination probes are provided in the virtual scene, and the target illumination area comprises a plurality of sub-areas; the illumination information of the target illumination area comprises illumination information of the sub-areas; the apparatus is further configured to:
determining an area mask matched by each of the illumination probes;
for a sub-area in the target illumination area, selecting, from the illumination probes that have an overlapping relationship with the sub-area and based on the area masks matched by the illumination probes, the illumination probes whose matched area mask is consistent with the area mask of the target illumination area, to obtain target illumination probes corresponding to the sub-area;
and obtaining the illumination information of the sub-area based on the illumination conversion information of the target illumination probes corresponding to the sub-area.
20. The apparatus according to claim 19, wherein the illumination conversion information of a target illumination probe characterizes a conversion relationship between the illumination parameter information of the target illumination probe and the illumination parameter information of ambient light; the apparatus is further configured to:
for each target illumination probe corresponding to the sub-area, calculating the illumination parameter information of the target illumination probe based on the illumination parameter information of the ambient light and the illumination conversion information of the target illumination probe;
and obtaining the illumination information corresponding to the sub-area based on the illumination parameter information of each target illumination probe corresponding to the sub-area.
21. The apparatus of claim 20, wherein the apparatus is further configured to:
for each target illumination probe of the sub-area, determining a weight of the target illumination probe based on the distance between the target illumination probe and the sub-area;
and performing a weighted calculation on the illumination parameter information of each target illumination probe by using the weight of each target illumination probe, to obtain the illumination information corresponding to the sub-area.
22. The apparatus of claim 18, wherein the information determining module is further configured to:
for each target illumination area, determining, from the sub-areas of the target illumination area and based on the distances between the pixel point and the sub-areas in the target illumination area, the sub-areas meeting a distance proximity condition, to obtain target sub-areas;
and determining the illumination rendering information of the pixel point based on the illumination influence degree of each target illumination area on the pixel point and the illumination information of the target sub-areas in each target illumination area.
23. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 11 when executing the computer program.
24. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 11.
CN202210337007.7A 2022-04-01 2022-04-01 Illumination rendering method and device, computer equipment and storage medium Active CN114419240B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210337007.7A CN114419240B (en) 2022-04-01 2022-04-01 Illumination rendering method and device, computer equipment and storage medium
PCT/CN2023/075162 WO2023185262A1 (en) 2022-04-01 2023-02-09 Illumination rendering method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210337007.7A CN114419240B (en) 2022-04-01 2022-04-01 Illumination rendering method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114419240A CN114419240A (en) 2022-04-29
CN114419240B true CN114419240B (en) 2022-06-17

Family

ID=81263113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210337007.7A Active CN114419240B (en) 2022-04-01 2022-04-01 Illumination rendering method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114419240B (en)
WO (1) WO2023185262A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419240B (en) * 2022-04-01 2022-06-17 腾讯科技(深圳)有限公司 Illumination rendering method and device, computer equipment and storage medium
CN115330640B (en) * 2022-10-11 2023-01-10 腾讯科技(深圳)有限公司 Illumination mapping noise reduction method, device, equipment and medium
CN116503520A (en) * 2022-10-20 2023-07-28 腾讯科技(深圳)有限公司 Illumination control method, device, computer equipment and storage medium
CN116206006A (en) * 2023-03-02 2023-06-02 达瓦未来(北京)影像科技有限公司 Card style direct illumination effect rendering method based on UE rendering engine
CN116030180B (en) * 2023-03-30 2023-06-09 北京渲光科技有限公司 Irradiance cache illumination calculation method and device, storage medium and computer equipment
CN117788677A (en) * 2023-12-29 2024-03-29 摩尔线程智能科技(上海)有限责任公司 Global illumination determining method, device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2817497A1 (en) * 2013-01-31 2014-07-31 Dirtt Environmental Solutions, Ltd. Method and system for efficient modeling of specular reflection
CN105335996A (en) * 2014-06-30 2016-02-17 北京畅游天下网络技术有限公司 Light irradiation effect calculation method and device
CN106981098A (en) * 2016-01-12 2017-07-25 西门子医疗有限公司 The visual angle of virtual scene component is represented
CN108236783A (en) * 2018-01-09 2018-07-03 网易(杭州)网络有限公司 The method, apparatus of illumination simulation, terminal device and storage medium in scene of game
CN111462343A (en) * 2020-03-31 2020-07-28 腾讯科技(深圳)有限公司 Data processing method and device, electronic equipment and storage medium
CN114119849A (en) * 2022-01-24 2022-03-01 阿里巴巴(中国)有限公司 Three-dimensional scene rendering method, device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5731566B2 (en) * 2013-04-23 2015-06-10 株式会社スクウェア・エニックス Information processing apparatus, control method, and recording medium
US9779541B2 (en) * 2015-05-22 2017-10-03 Disney Enterprises, Inc. Virtual object discrimination for fast global illumination rendering
US11534688B2 (en) * 2018-04-02 2022-12-27 Take-Two Interactive Software, Inc. Method and apparatus for enhanced graphics rendering in a video game environment
CN109173263B (en) * 2018-08-31 2021-08-24 腾讯科技(深圳)有限公司 Image data processing method and device
CN111862344B (en) * 2020-07-17 2024-03-08 抖音视界有限公司 Image processing method, apparatus and storage medium
CN112927341A (en) * 2021-04-02 2021-06-08 腾讯科技(深圳)有限公司 Illumination rendering method and device, computer equipment and storage medium
CN114419240B (en) * 2022-04-01 2022-06-17 腾讯科技(深圳)有限公司 Illumination rendering method and device, computer equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2817497A1 (en) * 2013-01-31 2014-07-31 Dirtt Environmental Solutions, Ltd. Method and system for efficient modeling of specular reflection
CN105335996A (en) * 2014-06-30 2016-02-17 北京畅游天下网络技术有限公司 Light irradiation effect calculation method and device
CN106981098A (en) * 2016-01-12 2017-07-25 西门子医疗有限公司 The visual angle of virtual scene component is represented
CN108236783A (en) * 2018-01-09 2018-07-03 网易(杭州)网络有限公司 The method, apparatus of illumination simulation, terminal device and storage medium in scene of game
CN111462343A (en) * 2020-03-31 2020-07-28 腾讯科技(深圳)有限公司 Data processing method and device, electronic equipment and storage medium
CN114119849A (en) * 2022-01-24 2022-03-01 阿里巴巴(中国)有限公司 Three-dimensional scene rendering method, device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Philipp Lensing et al. Instant indirect illumination for dynamic mixed reality scenes. 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2013. *
韩鲁锋. Physically-based subsurface scattering rendering method. China Master's Theses Full-text Database, Information Science and Technology, 2016, No. 02. *

Also Published As

Publication number Publication date
CN114419240A (en) 2022-04-29
WO2023185262A1 (en) 2023-10-05

Similar Documents

Publication Publication Date Title
CN114419240B (en) Illumination rendering method and device, computer equipment and storage medium
US20230076326A1 (en) Illumination rendering method and apparatus, computer device, and storage medium
CN111968215B (en) Volume light rendering method and device, electronic equipment and storage medium
CN107077756B (en) Three-dimensional object visualization method, visualization apparatus, and computer-readable storage medium
CN111369655B (en) Rendering method, rendering device and terminal equipment
US7212207B2 (en) Method and apparatus for real-time global illumination incorporating stream processor based hybrid ray tracing
CN113674389B (en) Scene rendering method and device, electronic equipment and storage medium
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
CN114119853B (en) Image rendering method, device, equipment and medium
CN112712582B (en) Dynamic global illumination method, electronic device and computer readable storage medium
CA3199390A1 (en) Systems and methods for rendering virtual objects using editable light-source parameter estimation
CN116704102A (en) Automatic light distribution method based on point cloud scene and electronic equipment
CN115239784A (en) Point cloud generation method and device, computer equipment and storage medium
WO2024148898A1 (en) Image denoising method and apparatus, and computer device and storage medium
CN112819940B (en) Rendering method and device and electronic equipment
CN116012520B (en) Shadow rendering method, shadow rendering device, computer equipment and storage medium
CN115984440A (en) Object rendering method and device, computer equipment and storage medium
CN112473135B (en) Real-time illumination simulation method, device and equipment for mobile game and storage medium
CN115359172A (en) Rendering method and related device
CN116740255A (en) Rendering processing method, device, equipment and medium
CN117274473B (en) Multiple scattering real-time rendering method and device and electronic equipment
Cowan et al. Interactive rate acoustical occlusion/diffraction modeling for 2D virtual environments & games
CN116824082B (en) Virtual terrain rendering method, device, equipment, storage medium and program product
CN116977535B (en) Real-time ray tracing method and device, storage medium and electronic equipment
Andrade et al. An unstructured lumigraph based approach to the SVBRDF estimation problem

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40070915

Country of ref document: HK