CN114549730A - Light source sampling weight determination method for multi-light source scene rendering and related equipment

Light source sampling weight determination method for multi-light source scene rendering and related equipment

Info

Publication number
CN114549730A
Authority
CN
China
Prior art keywords
rendering
light source
scene
rendered
target
Prior art date
Legal status
Pending
Application number
CN202011360803.XA
Other languages
Chinese (zh)
Inventor
周鹏
舒思超
徐维超
马杨军
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011360803.XA priority Critical patent/CN114549730A/en
Priority to PCT/CN2021/131989 priority patent/WO2022111400A1/en
Publication of CN114549730A publication Critical patent/CN114549730A/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 — 3D [Three Dimensional] image rendering
    • G06T 15/06 — Ray-tracing
    • G06T 15/10 — Geometric effects
    • G06T 15/20 — Perspective computation
    • G06T 15/205 — Image-based rendering
    • G06T 15/50 — Lighting effects
    • G06T 15/55 — Radiosity

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The embodiments of this application disclose a light source sampling weight determination method for multi-light-source scene rendering and related equipment. The method comprises the following steps: acquiring a scene to be rendered in a multi-light-source scene at a target rendering visual angle, and acquiring a plurality of light sources arranged in the multi-light-source scene; performing single-light-source ray tracing rendering on the scene to be rendered at the target rendering visual angle to obtain, for each light source, a first rendered image corresponding to the scene to be rendered under that light source's individual illumination; and determining, according to the irradiation information in each first rendered image, a light source sampling weight corresponding to each light source at the target rendering visual angle, where the light source sampling weights are used for multi-light-source ray tracing rendering of the scene to be rendered. Implementing the embodiments of this application can increase the convergence rate of multi-light-source scene rendering.

Description

Light source sampling weight determination method for multi-light-source scene rendering and related equipment
Technical Field
The application relates to the technical field of image processing, and in particular to a light source sampling weight determination method for multi-light-source scene rendering and related equipment.
Background
Rendering is the process of generating an image from a model by software, and is widely applied in fields such as games, movies, and animation. Rendering may include ray tracing rendering, a rendering algorithm in three-dimensional computer graphics that traces rendering rays emitted from a virtual rendering camera, computes the propagation of those rays through the rendering scene, and finally presents the mathematical model of the rendering scene as an image. To achieve a realistic rendering effect, a plurality of light sources are generally arranged in a rendering scene. Each light source is assigned a light source sampling weight, which determines whether that light source is selected as an effective light source to participate in the ray tracing rendering computation.
In the prior art, the light source sampling weights of the plurality of light sources are determined randomly, that is, every light source has the same sampling weight. Because all light sources are sampled with equal probability, the convergence rate of multi-light-source scene rendering is low.
Disclosure of Invention
The application provides a light source sampling weight determination method for multi-light-source scene rendering and related equipment, which can accelerate the convergence rate of multi-light-source scene rendering.
In a first aspect, an embodiment of the present application discloses a light source sampling weight determining method for multi-light source scene rendering. The method comprises the following steps:
acquiring a scene to be rendered in a multi-light-source scene at a target rendering visual angle, and acquiring a plurality of light sources arranged in the multi-light-source scene;
performing ray tracing rendering of a single light source on the scene to be rendered under a target rendering visual angle to obtain each first rendering image corresponding to the scene to be rendered under the independent illumination of each light source;
and determining light source sampling weights corresponding to the light sources under the target rendering visual angle according to the irradiation information in the first rendering images, wherein the light source sampling weights are used for performing ray tracing rendering of multiple light sources on the scene to be rendered.
The light source sampling weight corresponding to each light source is determined from the irradiation information of the first rendered image obtained after single-light-source ray tracing rendering of the scene to be rendered under that light source. A larger light source sampling weight increases the probability that the corresponding light source is selected during multi-light-source ray tracing rendering. By implementing the embodiment of the application, a more suitable light source can be selected in the multi-light-source ray tracing rendering process, thereby increasing the convergence rate of multi-light-source scene rendering.
With reference to the first aspect, in a first possible implementation manner, the method further includes: and under the target rendering visual angle, performing multi-light-source ray tracing rendering on the scene to be rendered based on the light source sampling weight corresponding to each light source under the target rendering visual angle to obtain a second rendering image of the scene to be rendered under the illumination of the multi-light source. By implementing the embodiment of the application, based on the light source sampling weight of each light source, the probability that the light source corresponding to the larger light source sampling weight is selected in the ray tracing rendering process is increased, and the convergence rate of the rendering of a multi-light source scene can be increased.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, the resolution of each of the first rendering images is smaller than the resolution of the second rendering image. By implementing the embodiment of the application, the first rendering image is set to be the image with the smaller resolution in the single light source ray tracing rendering process, the light source sampling weight of each light source is calculated, the calculation amount can be reduced, and the rendering speed is improved.
With reference to any one of the foregoing possible implementation manners of the first aspect, in a third possible implementation manner, the ray tracing rendering includes a bidirectional path tracing rendering. By implementing the method and the device, a plurality of light rays can be generated quickly, and the convergence speed is further accelerated.
With reference to the first aspect, in a fourth possible implementation manner, the irradiation information in each of the first rendered images is determined based on luminance information corresponding to each of the first rendered images, where the luminance information of each of the first rendered images is obtained by converting based on color information in each of the first rendered images.
With reference to any one of the foregoing possible implementation manners of the first aspect, in a fifth possible implementation manner, the plurality of light sources include a first light source; the scene to be rendered comprises a first intersection point and at least one second intersection point, the first intersection point is an intersection point where a first rendering ray and a first object in the scene to be rendered intersect on the surface of the first object, and the second intersection point is an intersection point where a reflected ray of the first intersection point intersects a second object in the scene to be rendered on the surface of the second object and/or an intersection point where rays emitted by the light sources intersect each third object in the scene to be rendered on the surface of each third object respectively; the first rendering ray is sent out to the scene to be rendered by a virtual rendering camera, and the virtual rendering camera is used for determining the target rendering visual angle;
the method for performing ray tracing rendering of a single light source on the scene to be rendered at the target rendering visual angle to obtain each first rendering image corresponding to the scene to be rendered under the independent illumination of each light source specifically comprises the following steps:
under the target rendering visual angle, performing illumination sampling on the first light source based on the first intersection point to obtain a first radiance of the first intersection point projected on the first rendered image by the first rendering light under the single illumination of the first light source;
under the target rendering visual angle, performing illumination sampling on the first light source based on each second intersection point to obtain each second radiance of each second intersection point projected on the first rendering image by the first rendering light under the single illumination of the first light source;
obtaining a first irradiance of the first intersection point projected on the first rendered image by the first rendering ray under the single illumination of the first light source according to the sum of the first radiance and the second radiances;
determining rendering display information of the first intersection point under the independent illumination of the first light source based on the first irradiance and the object attribute of the object where the first intersection point is located;
and acquiring a corresponding first rendering image of the scene to be rendered under the independent illumination of the first light source based on rendering display information of all the first intersection points in the scene to be rendered under the illumination of the first light source.
With reference to the first possible implementation manner of the first aspect, in a sixth possible implementation manner, the scene to be rendered includes a target intersection point, where the target intersection point is an intersection point where a target rendering ray and any object in the scene to be rendered intersect on a surface of the any object; the target rendering ray is sent out to the scene to be rendered by a virtual rendering camera, and the virtual rendering camera is used for determining the target rendering visual angle;
the above-mentioned performing, at the target rendering angle, ray tracing rendering of multiple light sources on the scene to be rendered based on the light source sampling weights corresponding to the light sources at the target rendering angle, and obtaining a second rendered image of the scene to be rendered under the illumination of the multiple light sources is specifically implemented as:
under the target rendering visual angle, performing multiple illumination sampling on each light source based on the target intersection point, wherein each illumination sampling corresponds to selecting each target light source from the multiple light sources, and each target light source is used for performing ray tracing rendering of multiple light sources on the target intersection point so as to obtain a second irradiance of the target intersection point projected on the second rendered image by the target rendering ray under each illumination sampling; wherein the probability of selecting each of the target light sources is proportional to the light source sampling weight of each of the light sources at the target rendering view angle;
determining rendering display information of the target intersection point under the multi-light-source illumination according to the second irradiance of the target intersection point under the multiple illumination samples and the object attribute of the object where the target intersection point is located;
and acquiring the second rendering image according to rendering display information of all target intersection points in the scene to be rendered under the illumination of multiple light sources.
With reference to the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner, the light source sampling weight of each target light source at the target rendering view angle is greater than a preset threshold. According to the embodiment of the application, the sampling number of the light sources can be reduced by setting the light source sampling weight threshold, so that the calculation amount is reduced.
In a second aspect, an embodiment of the present application provides an apparatus for determining light source sampling weights for multi-light source scene rendering, where the apparatus includes:
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring a scene to be rendered in a multi-light-source scene at a target rendering visual angle and acquiring a plurality of light sources arranged in the multi-light-source scene;
the preprocessing module is used for performing ray tracing rendering of a single light source on the scene to be rendered acquired by the acquisition module under the target rendering visual angle to obtain each first rendering image corresponding to the scene to be rendered under the independent illumination of each light source;
and a determining module, configured to determine, according to the irradiation information in each first rendered image obtained by the preprocessing module, a light source sampling weight corresponding to each light source at the target rendering viewing angle, where the light source sampling weight is used to perform ray tracing rendering of multiple light sources on the scene to be rendered.
In a third aspect, an embodiment of the present application provides an electronic device, including: the system comprises a processor and a memory, wherein the processor and the memory are connected through a bus system;
the memory is used for storing instructions;
the processor is configured to invoke the instructions stored in the memory, to perform the first aspect, or to perform the method steps in any one of the possible implementations in combination with the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product comprising computer readable instructions for performing the first aspect described above or for performing the method steps in any one of the possible implementations in combination with the first aspect, when the computer readable instructions are executed by one or more processors.
In a fifth aspect, embodiments of the present application provide a computer storage medium comprising computer-readable instructions that, when executed by one or more processors, perform the first aspect or perform the method steps in any one of the possible implementations in combination with the first aspect.
It should be understood that the implementations and advantages of the various aspects described above in this application are mutually referenced.
Drawings
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of a light source sampling weight determination method for multi-light-source scene rendering according to an embodiment of the present application;
Fig. 3 is a light ray diagram of bidirectional path tracing rendering;
Figs. 4-5 are schematic diagrams of human-computer interaction provided by embodiments of the present application;
Figs. 6-8 are schematic diagrams of rendering result graphs provided by embodiments of the present application;
Figs. 9-14 are rendering detail diagrams of rendering result graphs provided by embodiments of the present application;
Fig. 15 is a schematic structural diagram of a light source sampling weight determination apparatus for multi-light-source scene rendering according to an embodiment of the present application.
Detailed Description
The following describes embodiments of the present application in further detail with reference to the accompanying drawings.
The light source sampling weight determination method for multi-light-source scene rendering provided in the embodiment of the present application may be executed by an electronic device. The electronic device may be a mobile terminal (e.g., a smartphone, a notebook computer, a tablet, etc.), an in-vehicle device, an Internet-of-Things device, or another device capable of rendering a scene. The electronic device may run an Android system, an iOS system, a Windows system, or another operating system.
Illustratively, the specific structure of the electronic device may be as shown in fig. 1, and the specific structure of the electronic device will be described in detail below with reference to fig. 1.
In some possible implementations, the electronic device 10 may include a processor 101 and a memory 102. The processor 101 may include a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU). Further, the electronic device 10 may also include a display 103. Wherein the CPU, GPU, memory 102, display 103, etc. in the electronic device 10 may establish a communication connection via a bus system (not shown in the figure).
Optionally, the CPU and the GPU may be located on the same chip, or may be separate chips.
The roles of the processor 101 and the memory 102 will be briefly described below.
The processor 101 is configured to run an operating system 1012 and application programs 1011. The application 1011 may be a graphics-class application such as a game, a video player, or 3D modeling software. The application programs 1011 generate instruction streams for rendering scenes through the system interface provided by the operating system 1012 and the system drivers, such as user-mode drivers and/or kernel-mode drivers, that the operating system 1012 provides. Further, the processor 101 may also generate rendering data for rendering the scene. The processor 101 generates a render target through a rendering pipeline, and the render target is cached and displayed on the display 103. In other words, the processor may be configured to execute the light source sampling weight determination method for multi-light-source scene rendering provided in the embodiment of the present application.
Illustratively, the processor 101 may include, but is not limited to, an application processor, one or more microprocessors, Digital Signal Processors (DSPs), Microcontrollers (MCUs), or artificial intelligence processors, among others.
In some possible implementations, the operating system 1012 and the application 1011 are run by a CPU, the rendering pipeline is run in a GPU, and the GPU takes the instruction stream generated by the CPU and generates a rendering target through the rendering pipeline, caches the rendering target, and displays it on the display 103.
The display 103 is used to display various rendered images generated by the electronic device 10, which may be a Graphical User Interface (GUI) of an operating system or image data processed by the processor 101. Alternatively, the display 103 may include any suitable type of display screen. Such as a Liquid Crystal Display (LCD) or a plasma display, or an organic light-emitting diode (OLED) display.
The memory 102 is used to store instructions and rendering data. The memory 102 may be a double data rate synchronous dynamic random access memory (DDR SDRAM) or other type of cache.
It should be noted that the rendering pipeline is a series of operations that the processor sequentially performs in the process of rendering the scene, such as ray tracing rendering provided by the embodiments of the present application. The rendering pipeline may run in the CPU and/or the GPU, in other words, the light source sampling weight determination method for multi-light source scene rendering provided by the embodiment of the present application may be implemented in the CPU and/or the GPU.
Referring to fig. 2, fig. 2 is a schematic flow chart of a light source sampling weight determination method for multi-light-source scene rendering according to an embodiment of the present disclosure. As shown in fig. 2, the specific implementation steps of the embodiment of the present application are as follows:
s201, the processor acquires a scene to be rendered in the multi-light-source scene according to a target rendering visual angle, and acquires a plurality of light sources arranged in the multi-light-source scene.
The target rendering perspective is determined by a position of the virtual rendering camera in the multi-light source scene. In a specific implementation, the processor may obtain the target rendering view angle according to a position of the virtual rendering camera and performance parameters (e.g., a focal length, etc.) of the virtual rendering camera. The target rendering visual angle determines a scene to be rendered, and the scenes to be rendered corresponding to different target rendering visual angles are different. It should be noted that the virtual rendering camera may be understood as a device that renders an image and ultimately presents the image, such as a display or the like. The virtual rendering camera may perform functions including, but not limited to, determining the target rendering perspective, and may also determine a rendering focal length to determine a field of view of a scene to be rendered, and the like.
In some possible implementations, the multi-light source scene may be embodied as a 3D model in a rendering engine.
In a multi-light source scene, a plurality of light sources are provided, and acquiring the plurality of light sources may be understood as acquiring light source attributes of the light sources, such as light source position, luminous intensity, luminous color, light source shape, and the like. For example, identifying the light source position by the (x, y, z) three-dimensional coordinates, the light source attributes of the acquired plurality of light sources may be generated into a light source list, as shown in table 1.
Table 1
(Table 1 is reproduced in the original publication as an image; it lists each acquired light source together with its light source attributes, such as light source position, luminous intensity, luminous color, and light source shape.)
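Illustratively, such a light source list can be sketched as a simple data structure. The following Python sketch is a non-limiting illustration; the field names and example values are assumptions for illustration only and are not part of the patent disclosure:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LightSource:
    # Hypothetical record mirroring the attributes named in the text;
    # the patent does not prescribe field names or units.
    light_id: int
    position: Tuple[float, float, float]  # (x, y, z) coordinates of the light source
    intensity: float                      # luminous intensity
    color: Tuple[float, float, float]     # luminous color as RGB
    shape: str                            # light source shape, e.g. "point", "area"

# A light source list in the spirit of Table 1: one entry per light source.
light_list: List[LightSource] = [
    LightSource(1, (0.0, 2.5, 0.0), 10.0, (1.0, 1.0, 1.0), "point"),
    LightSource(2, (3.0, 2.5, 1.0), 25.0, (1.0, 0.9, 0.8), "area"),
    LightSource(3, (-2.0, 1.0, 4.0), 5.0, (0.8, 0.8, 1.0), "point"),
]
```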
S202, under the target rendering visual angle, the processor executes ray tracing rendering of a single light source on the scene to be rendered to obtain each first rendering image corresponding to the scene to be rendered under the independent illumination of each light source.
The ray tracing rendering of the single light source represents that only one light source in a multi-light source scene is in an on state, in other words, only the light source attribute of one light source participates in the operation of the ray tracing rendering.
It should be noted that ray tracing rendering includes path tracing rendering, and path tracing rendering includes unidirectional path tracing rendering and bidirectional path tracing rendering. In the embodiment of the application, either unidirectional or bidirectional path tracing rendering can obtain the first rendered images corresponding to the scene to be rendered under the individual illumination of each light source. Bidirectional path tracing rendering emits rendering rays from both the virtual rendering camera and each light source, so that a plurality of rays can be generated quickly, which further increases the convergence rate. The convergence rate in the embodiment of the present application covers both rendering speed and rendering effect; for example, a fast convergence rate may be understood as rendering a scene more realistically in the same time, or taking less time to achieve the same rendering effect.
For better understanding of the ray tracing rendering, the following describes a process of the bidirectional path tracing rendering with reference to fig. 3 by taking the example that the ray tracing rendering is implemented as the bidirectional path tracing rendering.
Referring to fig. 3, fig. 3 is a light ray diagram of bidirectional path tracing rendering. As shown in fig. 3, the plurality of light sources includes a light source 1, a light source 2, a light source 3, and the like. The scene to be rendered includes a first intersection point i and at least one second intersection point (e.g., a second intersection point j and a second intersection point k). The first intersection point i is the point where the first rendering ray A1 intersects the surface of the first object 1 in the scene to be rendered. The second intersection point j is the point where the reflected ray r from the first intersection point i intersects the surface of the second object 2 in the scene to be rendered. It is understood that, in bidirectional path tracing rendering, the second intersection point j may also be the point where a light ray emitted by the light source 1 intersects the surface of the second object 2; in this case, the second object 2 may be regarded as a third object, i.e., the second object and the third object may be the same object. The second intersection point k is the point where a light ray emitted by the light source 1 intersects the surface of the third object 3 in the scene to be rendered.
The first rendering ray a1 is issued by the virtual rendering camera to the scene to be rendered, it being understood that the virtual rendering camera may issue a plurality of rendering rays to the scene to be rendered, and the issuing of the first rendering ray a1 is exemplified in fig. 3 and should not be construed as limiting the present application.
For example, the resolution of the first rendered image is set to 1 × 1, the number of samples per pixel x is set to 1 sample per pixel (SPP), and the first light source (i.e., light source 1) is in the on state.
Under the target rendering visual angle, the processor performs illumination sampling on the light source 1 based on the first intersection point i to obtain a first radiance that the first intersection point i, projected by the first rendering ray A1 onto the first rendered image, contributes under the individual illumination of the light source 1.
In a specific implementation, whether the ray connecting the light source 1 and the first intersection point i is visible is determined from the position information of the first intersection point i and the light source position of the light source 1. For example, an intersection test may be performed between the light source position of the light source 1 and the position of the first intersection point i; if the connecting segment does not intersect any other object in the scene to be rendered, such as the second object 2 or the third object 3, the ray is visible. The light source 1 can then directly illuminate the first intersection point i, and the first radiance L1 that the first rendering ray A1 projects from the first intersection point i onto the first rendered image under the individual illumination of the light source 1 can be expressed as:
L1 = bsdf1 · cos α1 · Li1(x, wi1)    (Equation 1)
where bsdf1 is the bidirectional scattering distribution function determined by the material of the first object 1; α1 is the angle between the direct ray and the normal at the first intersection point i; and Li1 represents the incident radiance of that ray.
Further, the first intersection point i may reflect the radiation of the light source 1: if the reflection from the first intersection point i travels along the ray r to the second object 2, it intersects the surface of the second object 2 at the second intersection point j. Illumination sampling of the light source 1 is performed based on the second intersection point j, i.e., an intersection test is performed between the position of the second intersection point j and the light source position of the light source 1; if the connecting segment does not intersect any other object in the scene to be rendered, the ray is visible. The light source 1 can then illuminate the first intersection point i along the path light source 1 → j → i, and the second radiance L2 that the second intersection point j, projected by the first rendering ray A1 onto the first rendered image, contributes under the individual illumination of the light source 1 can be expressed as:
L2 = bsdf1 · cos β1 · bsdf2 · cos α2 · Li2(x, wi2) / pdf1    (Equation 2)
where bsdf2 is the bidirectional scattering distribution function determined by the material of the second object 2; α2 is the angle between the ray from the light source 1 to j and the normal at the second intersection point j; β1 is the angle between the ray r and the normal at the first intersection point i; Li2 represents the incident radiance of the ray from the light source 1; and pdf1 is a first probability density function related to β1.
In some possible embodiments, based on bidirectional path tracing, the light source 1 may also emit a light ray toward the third object 3, which intersects the surface of the third object 3 at the second intersection point k. The virtual rendering camera is then sampled based on the second intersection point k. It should be noted that, because the light path is reversible, illumination sampling may be understood either as sampling a light source from a rendering ray emitted by the virtual rendering camera, or as sampling the virtual rendering camera from a light ray emitted by a light source. Sampling the virtual rendering camera from the second intersection point k through the first rendering ray A1 means performing an intersection test between the position of the second intersection point k and the position of the first intersection point i; if the connecting segment does not intersect any other object in the scene to be rendered, the connecting ray is visible. The light source 1 can then illuminate the first intersection point i along the path light source 1 → k → i, and the second radiance L3 that the second intersection point k, projected by the first rendering ray A1 onto the first rendered image, contributes under the individual illumination of the light source 1 can be expressed as:
L3 = bsdf1 · cos β2 · bsdf3 · cos α3 · Li3(x, wi3) / pdf2    (Equation 3)
where bsdf3 is the bidirectional scattering distribution function determined by the material of the third object 3; α3 is the angle between the light ray from the light source 1 to k and the normal at the second intersection point k; β2 is the angle between the connecting ray from k to i and the normal at the first intersection point i; Li3 represents the incident radiance of the light ray from the light source 1; and pdf2 is a second probability density function related to β2.
Further, an intersection test is performed between the position of the second intersection point k and the position of the second intersection point j; if the connecting segment does not intersect any other object in the scene to be rendered, the connecting ray is visible. The light source 1 can then illuminate the first intersection point i along the path light source 1 → k → j → i, and the second radiance L4 that the second intersection points k and j, projected by the first rendering ray A1 onto the first rendered image, contribute under the individual illumination of the light source 1 can be expressed as:
L4 = bsdf1 · cos β1 · bsdf2 · cos β3 · bsdf3 · cos α3 · Li3(x, wi3) / pdf3    (Equation 4)
where β3 is the angle between the connecting ray from k to j and the normal at the second intersection point j, and pdf3 is a third probability density function related to β3.
According to the first radiance L1 and the respective second radiances, e.g., L2, L3, and L4, the first irradiance L that the first rendering ray A1 projects from the first intersection point i onto the first rendered image under the individual illumination of the light source 1 (not yet considering the object's own appearance) is obtained as:
L = L1 + L2 + L3 + L4    (Equation 5)
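Illustratively, Equations 1-5 can be sketched in code as a sum of per-path contributions, each being a product of BSDF and cosine factors along the connecting path, times the incident radiance, divided by the sampling probability density. The function names and numeric values below are assumptions for illustration only:

```python
def path_contribution(bsdf_factors, cos_factors, incident_radiance, pdf=1.0):
    # One term in the spirit of Equations 1-4: the product of the BSDF value
    # and cosine term at each vertex along the connecting path, times the
    # incident radiance Li of the light ray, divided by the probability
    # density of the sampled connection (pdf = 1 for the direct term, Eq. 1).
    value = incident_radiance / pdf
    for bsdf, cos_theta in zip(bsdf_factors, cos_factors):
        value *= bsdf * cos_theta
    return value

# Equation 5: the first irradiance L at intersection i is the sum of the
# direct contribution L1 and the indirect contributions L2, L3, L4.
L1 = path_contribution([0.8], [0.9], incident_radiance=5.0)   # cf. Equation 1
L2 = path_contribution([0.8, 0.6], [0.7, 0.9], 5.0, pdf=0.5)  # cf. Equation 2
L = L1 + L2  # plus L3, L4, ... for the remaining connections
```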
Based on the first irradiance L and the object properties (such as color, material, angle, etc.) of the object (i.e. the first object 1) where the first intersection point i is located, the rendering display information of the first intersection point i under the individual illumination of the light source 1 is determined. In a specific implementation, the rendering display information includes color information (i.e., RGB representation), and the first irradiance L is multiplied by the RGB representation of the first object 1, so as to obtain the RGB representation of the first intersection point i under the single illumination of the light source 1.
Since the preset resolution of the first rendered image is 1 × 1 and the sampling number is 1SPP, the RGB representation of the first intersection point i under the single illumination of the light source 1 is the RGB representation of the first rendered image, and the first rendered image can be obtained through the RGB representation of the first rendered image.
It should be noted that if the number of samples of the first rendered image is preset to be greater than 1 SPP, then to obtain the color information of a pixel x, the virtual rendering camera sends out a plurality of rendering rays to the scene to be rendered, the number of rendering rays being equal to the number of samples. For example, if the number of samples of the first rendered image is preset to 10 SPP, the virtual camera sends out 10 rendering rays to the scene to be rendered, generating 10 first intersection points with objects in the scene. The irradiance of each first intersection point is calculated in the same way as the irradiance of the first intersection point i; the irradiance of each first intersection point in the scene to be rendered is multiplied by the object attributes of the object on which that intersection point lies, yielding the rendering display information of all first intersection points in the scene to be rendered under the illumination of the light source 1. As before, the rendering display information includes color information (i.e., an RGB representation); the RGB representations of the plurality of first intersection points under the individual illumination of the light source 1 are added and averaged to obtain the RGB representation of the pixel x.
In some possible embodiments, if the preset resolution of the first rendered image is 1 × 1, the RGB representation of the pixel x is the RGB representation of the first rendered image, and the first rendered image can be obtained through the RGB representation of the first rendered image.
Optionally, in some possible embodiments, if the preset resolution of the first rendered image is greater than 1 × 1, that is, the first rendered image includes a plurality of pixels x, the RGB representations of the plurality of pixels x may be added to serve as the RGB representation of the first rendered image. Alternatively, the RGB representations of the plurality of pixels x may be added and averaged to obtain the RGB representation of the first rendered image. The first rendered image may be derived from an RGB representation of the first rendered image.
It can be understood that the above, in which only the light source 1 is turned on, exemplarily describes the single-light-source ray tracing rendering process. In the embodiment of the present application, each light source in the scene to be rendered may be turned on in turn, with only one light source on at a time; for example, only the light source 2 is turned on, and all light sources other than the light source 2 in the multi-light-source scene are turned off. The single-light-source ray tracing rendering of the light source 2 is implemented in the same manner as described for the light source 1 alone, so that a first rendered image corresponding to the scene to be rendered under the individual illumination of each light source can be obtained. At this point, the processor can acquire image information of each first rendered image, such as color information (i.e., an RGB representation) and luminance information (i.e., a YUV representation).
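Illustratively, the per-light preprocessing of step S202 can be summarized as the loop below. render_single_light() is a hypothetical placeholder for the single-light-source (e.g., bidirectional path tracing) renderer and is not defined by the patent; the other names are likewise assumptions:

```python
def render_first_images(scene, lights, resolution=(1, 1), spp=1):
    # Step S202 sketch: turn on one light source at a time and render the
    # scene to be rendered under its individual illumination, producing one
    # low-resolution "first rendered image" per light source.
    first_images = []
    for light in lights:
        # Only `light` is on; all other light sources in the scene are off.
        image = render_single_light(scene, light, resolution, spp)
        first_images.append(image)  # RGB data, averaged over the spp samples
    return first_images
```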
S203, the processor determines the corresponding light source sampling weight of each light source under the target rendering visual angle according to the irradiation information in each first rendering image.
In some possible embodiments, the irradiation information in each first rendered image is determined based on the luminance information corresponding to that first rendered image, where the luminance information of each first rendered image is obtained by conversion from the color information in that image. For example, the processor may obtain the RGB representation of each first rendered image, convert it into a YUV representation, and use the luminance information (i.e., the Y value) of each first rendered image as the light source sampling weight corresponding to each light source at the target rendering visual angle. The luminance information of the first rendered images corresponds one-to-one with the light sources, and the luminance information of each first rendered image serves as the light source sampling weight of the corresponding light source. Further, the light source sampling weights may be normalized, i.e., converted into the range of 0 to 1. For example, the light source sampling weights may be associated with the light source attributes in Table 1, resulting in a new light source list as shown in Table 2.
Table 2
(Table 2 is reproduced in the original publication as an image; it extends the light source list of Table 1 by associating each light source with its normalized light source sampling weight.)
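Illustratively, the weight computation of step S203 can be sketched as below. The patent states only that the RGB representation is converted to YUV and the Y value is used; the BT.601 luma coefficients here are a common choice and are an assumption, as are the function and variable names:

```python
def light_sampling_weights(first_images):
    # first_images: one averaged (R, G, B) triple per first rendered image.
    ys = []
    for r, g, b in first_images:
        # RGB -> Y conversion (BT.601 luma coefficients, assumed).
        ys.append(0.299 * r + 0.587 * g + 0.114 * b)
    total = sum(ys)
    if total == 0:
        return [1.0 / len(ys)] * len(ys)  # degenerate case: uniform weights
    # Normalize the per-light luminance into sampling weights in [0, 1].
    return [y / total for y in ys]
```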
The light source sampling weight corresponding to each light source is thus determined from the irradiation information of the first rendered image obtained after single-light-source ray tracing rendering of the scene to be rendered under that light source. A larger light source sampling weight increases the probability that the corresponding light source is selected during multi-light-source ray tracing rendering. By implementing the embodiment of the application, a more suitable light source can be selected in the multi-light-source ray tracing rendering process, thereby increasing the convergence rate of multi-light-source scene rendering.
Further, based on the light source sampling weight corresponding to each light source at the target rendering visual angle obtained in step S203, the processor may perform multi-light-source ray tracing rendering on the scene to be rendered at the target rendering visual angle, so as to obtain a second rendered image of the scene to be rendered under multi-light-source illumination.
In a specific implementation, the plurality of light sources in the multi-light-source scene are all in the on state. The virtual rendering camera sends a target rendering ray into the scene to be rendered; the point where the target rendering ray intersects the surface of any object in the scene to be rendered is the target intersection point. At the target rendering visual angle, illumination sampling of the light sources is performed multiple times based on the target intersection point, and the irradiance of the target intersection point under each illumination sample is acquired. It can be understood that, unlike the single-light-source illumination sampling of the first light source based on the first intersection point, more than one light source is now on, so each illumination sample selects a target light source from the plurality of light sources, and the target light sources selected by different illumination samples are not necessarily the same combination. For example, according to the relationship between the light sources and the light source sampling weights in Table 2, light sources 1 and 2, or light sources 2 and 3, or light sources 1, 2 and 3 may be selected; since the light source sampling weight of the light source 2 is the largest, the probability of selecting the light source 2 is greater than the probability of selecting the light source 1 or 3. In other words, the probability of selecting each target light source is proportional to the light source sampling weight of that light source at the target rendering visual angle. The processor can then obtain, in the same way as the first irradiance was obtained, a second irradiance that the target rendering ray projects from the target intersection point onto the second rendered image under each illumination sample.
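Illustratively, selecting a target light source with probability proportional to its sampling weight can be sketched as follows; the threshold-based pruning described further below is included as an optional parameter, and all names are assumptions rather than the patent's prescribed API:

```python
import random

def sample_target_light(lights, weights, threshold=0.0):
    # Keep only light sources whose sampling weight exceeds the preset
    # threshold (threshold = 0.0 disables pruning). Assumes at least one
    # light source survives the filter.
    kept = [(light, w) for light, w in zip(lights, weights) if w > threshold]
    pool = [light for light, _ in kept]
    pool_weights = [w for _, w in kept]
    # Draw one light source with probability proportional to its weight.
    return random.choices(pool, weights=pool_weights, k=1)[0]
```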
Optionally, in the embodiment of the present application, the multi-light-source ray tracing rendering may be unidirectional path tracing rendering or bidirectional path tracing rendering.
Rendering display information of the target intersection point under multi-light-source illumination is determined according to the second irradiance of the target intersection point under the multiple illumination samples and the object attributes of the object on which the target intersection point lies; the rendering display information may include color information (i.e., an RGB representation). Illustratively, the irradiance values of the target intersection point under the individual illumination samples are added and averaged to give the irradiance of the target intersection point under multi-light-source illumination. This irradiance is multiplied by the RGB representation of the object on which the target intersection point lies, thereby obtaining the RGB representation of the target intersection point under multi-light-source illumination.
The rendering display information (e.g., RGB representations) of all target intersection points in the scene to be rendered under multi-light-source illumination is added and averaged to obtain the RGB representation of the second rendered image, from which the second rendered image can be obtained.
According to the embodiment of the application, based on the light source sampling weight of each light source, the probability of selecting the light source corresponding to the larger light source sampling weight in the ray tracing rendering process is increased, and the convergence rate of the rendering of a multi-light source scene can be increased.
Further, the light source sampling weight of each target light source at the target rendering visual angle is greater than a preset threshold. In other words, the processor filters out light sources by setting a preset threshold; for example, if the preset threshold is 0.3, a light source whose light source sampling weight is not greater than 0.3 (for example, the light source 1 in Table 2) is not within the light source selection range of the ray tracing rendering, i.e., it does not participate in the ray tracing rendering computation. According to the embodiment of the application, setting a light source sampling weight threshold can reduce the number of sampled light sources, thereby reducing the computation load of the processor.
In some possible embodiments, the resolution of each of the first rendered images is less than the resolution of the second rendered image. According to the embodiment of the application, the first rendering image is set to be the image with the smaller resolution before formal rendering, the light source sampling weight of each light source is calculated, the operation amount of a processor can be reduced, and the rendering speed is improved.
Some human-computer interaction embodiments of the present application are described below in conjunction with fig. 4-14.
In some embodiments of the present application, the electronic device may be provided with a rendering preprocessing mode, and the user may turn on the rendering preprocessing mode as needed. The rendering preprocessing mode is started, that is, the light source sampling weight method for implementing the multi-light source scene rendering provided by the embodiment of the present application. After the rendering preprocessing mode is started, the electronic equipment can accelerate the convergence rate of the multi-light-source scene rendering.
Illustratively, FIG. 4 shows a human-computer interaction diagram of one possible way for a user to start the rendering preprocessing mode. As shown in fig. 4, the screen of the electronic device may display a rendering interface of three-dimensional animation software (e.g., Blender), the rendering interface including a multi-light-source scene 40 and a setting interface 41. For example, the multi-light-source scene 40 is a residential building, with light sources provided on each floor. It can be understood that the walls, floors, light sources, and the like of the residential building are set during 3D modeling; the embodiment of the present application renders the model obtained by 3D modeling (i.e., the multi-light-source scene 40) in the software. The electronic device may acquire the placement position of the virtual rendering camera in response to a user's click operation, drag operation, or the like on the virtual rendering camera in the setting interface 41. A target rendering visual angle is determined based on the placement position of the virtual rendering camera and the performance parameters (e.g., focal length) of the virtual rendering camera. The processor acquires a scene to be rendered in the multi-light-source scene 40 at the target rendering visual angle; for example, if the placement position of the virtual rendering camera is a room in the residential building, the screen of the electronic device may display the scene to be rendered 50 shown in fig. 5. In some possible implementations, the setting interface 41 may also include other information, such as the shape information and attributes of objects in the multi-light-source scene, the rendering time, and so on.
The electronic device may also respond to a resolution setting entered by the user in the setting interface 41; the set resolution is the resolution of the second rendered image resulting from the multi-light-source ray tracing rendering of the scene to be rendered.
The setting interface 41 also includes an option to turn on the rendering preprocessing mode. For example, the electronic device may turn on the rendering preprocessing mode in response to the user checking the light source sampling weight preprocessing option.
After the rendering preprocessing mode is started, the electronic device first performs single-light-source ray tracing rendering on the scene 50 to be rendered. Illustratively, the electronic device turns on the light sources one at a time, performs bidirectional path tracing rendering on the scene 50 to be rendered for each, obtains the first rendered image corresponding to the scene 50 to be rendered under the individual illumination of each light source, and determines the light source sampling weight of each light source based on the irradiation information in each first rendered image (for example, the Y value of each first rendered image). After obtaining the light source sampling weights, the electronic device turns on all the light sources in the multi-light-source scene 40 and performs multi-light-source ray tracing rendering, which may be bidirectional path tracing rendering, on the scene 50 to be rendered, obtaining a second rendered image of the scene to be rendered under multi-light-source illumination. Illustratively, the electronic device may respond to a rendering time set by the user, e.g., 60 s, or may perform the multi-light-source ray tracing rendering of the scene 50 to be rendered based on a default rendering time, resulting in the rendering result graph (i.e., the second rendered image) shown in fig. 6.
To show that the light source sampling weight determination method for multi-light-source scene rendering provided by the embodiment of the application can increase the convergence rate of multi-light-source scene rendering, the inventors of the application conducted a comparison test with the rendering preprocessing mode turned off (i.e., without selecting the light source sampling weight option). With the rendering time the same as when the rendering preprocessing mode is on, e.g., also 60 s, all light sources in the multi-light-source scene 40 are turned on, and the electronic device performs multi-light-source ray tracing rendering on the scene 50 to be rendered based on the default light source sampling weights, obtaining the rendering result graph shown in fig. 7. Further, with the rendering preprocessing mode turned off but a sufficiently long rendering time, e.g., 2 h, the electronic device turns on all light sources in the multi-light-source scene 40 and performs multi-light-source ray tracing rendering on the scene 50 to be rendered based on the default light source sampling weights, obtaining the rendering result graph shown in fig. 8. The rendering result graph shown in fig. 8 may be considered close to a real image, i.e., fig. 8 may serve as the reference rendering result graph for the scene to be rendered.
As can be seen from the comparison between the rendering result graphs shown in fig. 6 and fig. 7, by implementing the light source sampling weight determination method for multi-light-source scene rendering provided by the embodiment of the present application, the second rendered image obtained in the same rendering time is more vivid and has less noise. As can be seen from the comparison between the rendering result graphs shown in fig. 6 and fig. 8, by implementing the method, the same rendering effect as the reference rendering result graph can be achieved in a shorter time. That is, implementing the embodiment of the application can increase the convergence rate of multi-light-source scene rendering.
Furthermore, in order to present the rendering effect achieved by implementing the method clearly and intuitively, the application also provides rendering detail diagrams of the rendering result graphs. See figs. 9-14.
It should be noted that fig. 9 and fig. 12 are rendering detail diagrams of the rendering result diagram shown in fig. 6; FIGS. 10 and 13 are rendering detail diagrams of the rendering result graph shown in FIG. 7; fig. 11 and 14 are rendering detail diagrams of the rendering result diagram shown in fig. 8. Fig. 9, 10 and 11 constitute one set of comparison combinations, and fig. 12, 13 and 14 constitute another set of comparison combinations, it being understood that one set of comparison combinations illustrates that all the rendering result maps shown within that combination are rendered for the same detail (i.e. the same object). According to the comparison between the rendering result graphs shown in fig. 9 and fig. 10, and between fig. 12 and fig. 13, it can be more obviously seen that the second rendered image obtained by implementing the embodiment of the present application is more vivid within the same rendering time.
According to fig. 9 and 11, and fig. 12 and 14, it can be seen that the rendering result obtained by implementing the present application does not differ much from the reference rendering result graph; in other words, implementing the embodiment of the application achieves a rendering effect comparable to the reference rendering result graph in a shorter time. That is, implementing the embodiment of the application can increase the convergence rate of multi-light-source scene rendering.
For example, in addition to measuring the convergence rate of multi-light-source scene rendering by the fidelity of the rendering result graph, it can also be measured with quantized data. The quantized data may include, but is not limited to, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), root-mean-square error (RMSE), and the like. With the other rendering conditions the same (e.g., the same rendering time and the same rendering visual angle), the comparison between the quantized data of the embodiment of the present application (i.e., with the rendering preprocessing mode on) and the quantized data without the rendering preprocessing mode can be as shown in Table 3.
Table 3
(Table 3 is reproduced in the original publication as images; it compares the PSNR, SSIM, and RMSE values obtained with the rendering preprocessing mode turned on against those obtained with it turned off.)
It should be noted that a larger PSNR value indicates that the rendered image is closer to the reference rendering result graph; a larger SSIM value likewise indicates that the rendered image is closer to the reference rendering result graph; and a smaller RMSE value indicates a smaller error between the rendered image and the reference rendering result graph. Here, the reference rendered images are fig. 8, fig. 11, and fig. 14, respectively. From the quantized data in Table 3, the rendered image obtained with the rendering preprocessing mode turned on is closer to the reference rendered image than the rendered image obtained without it. That is, implementing the embodiment of the application can increase the convergence rate of multi-light-source scene rendering.
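Illustratively, two of the quantized metrics mentioned above can be computed as in the NumPy-based sketch below (function names are assumptions; SSIM is more involved and is available as, e.g., structural_similarity in scikit-image):

```python
import numpy as np

def rmse(rendered, reference):
    # Root-mean-square error: smaller means less error vs. the reference.
    diff = rendered.astype(np.float64) - reference.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(rendered, reference, max_value=255.0):
    # Peak signal-to-noise ratio in dB: larger means the rendered image
    # is closer to the reference rendering result graph.
    err = rmse(rendered, reference)
    return float("inf") if err == 0.0 else 20.0 * np.log10(max_value / err)
```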
Referring to fig. 15, fig. 15 is a schematic structural diagram of a light source sampling weight determining apparatus for multi-light-source scene rendering according to an embodiment of the present application. As shown in fig. 15, the light source sampling weight determining apparatus 150 for multi-light-source scene rendering includes:
an obtaining module 1500, configured to obtain a scene to be rendered in a multi-light-source scene at a target rendering viewing angle, and obtain multiple light sources set in the multi-light-source scene;
a preprocessing module 1501, configured to perform ray tracing rendering of a single light source, at the target rendering viewing angle, on the scene to be rendered acquired by the obtaining module 1500, so as to obtain each first rendered image corresponding to the scene to be rendered under the independent illumination of each light source;

a determining module 1502, configured to determine, according to the irradiation information in each first rendered image obtained by the preprocessing module 1501, a light source sampling weight corresponding to each light source at the target rendering viewing angle, where the light source sampling weight is used to perform ray tracing rendering of multiple light sources on the scene to be rendered.
Further, the light source sampling weight determining apparatus 150 for multi-light source scene rendering further includes a processing module 1503;
a processing module 1503, configured to perform ray tracing rendering of multiple light sources on a scene to be rendered based on the light source sampling weights corresponding to the light sources determined by the determining module 1502 at the target rendering viewing angle;
the obtaining module 1500 is further configured to obtain a second rendered image of the scene to be rendered under the illumination of multiple light sources.
In some possible embodiments, the determining module 1502 is further configured to determine the irradiation information in each first rendered image based on the brightness information corresponding to that first rendered image, where the brightness information of each first rendered image is obtained by conversion from the color information in that first rendered image.
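As a concrete illustration of this embodiment, the following minimal sketch (Python with numpy is assumed for illustration only, as are the Rec. 709 luma coefficients and all function names; the embodiment itself does not prescribe a particular color-to-brightness conversion) converts each single-light first rendered image to brightness and normalizes the per-light totals into light source sampling weights:

    import numpy as np

    def luminance(rgb: np.ndarray) -> np.ndarray:
        # Convert a linear-RGB image of shape (H, W, 3) to per-pixel
        # luminance; Rec. 709 coefficients are an illustrative choice.
        return rgb @ np.array([0.2126, 0.7152, 0.0722])

    def light_sampling_weights(first_renders: list) -> np.ndarray:
        # One low-resolution single-light render per light source.
        # The total luminance of each render serves as that light's
        # irradiation information; its share of the overall total
        # becomes the light source sampling weight.
        totals = np.array([luminance(img).sum() for img in first_renders])
        return totals / totals.sum()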
In some possible embodiments, the plurality of light sources includes a first light source; the scene to be rendered includes a first intersection point and at least one second intersection point, where the first intersection point is an intersection point at which a first rendering ray intersects a first object in the scene to be rendered on the surface of the first object, and the second intersection point is an intersection point at which a reflected ray of the first intersection point intersects a second object in the scene to be rendered on the surface of the second object and/or an intersection point at which a ray emitted by each light source intersects each third object in the scene to be rendered on the surface of that third object; the first rendering ray is emitted toward the scene to be rendered by a virtual rendering camera, and the virtual rendering camera is used to determine the target rendering viewing angle;

the obtaining module 1500 is further configured to perform illumination sampling on the first light source based on the first intersection point at the target rendering viewing angle, so as to obtain a first radiance that the first rendering ray projects onto the first rendered image at the first intersection point under the independent illumination of the first light source;

the preprocessing module 1501 is further configured to perform illumination sampling on the first light source based on each second intersection point at the target rendering viewing angle, so as to obtain each second radiance that the first rendering ray projects onto the first rendered image at each second intersection point under the independent illumination of the first light source;

the obtaining module 1500 is further configured to obtain, from the sum of the first radiance and each second radiance, a first irradiance that the first rendering ray projects onto the first rendered image at the first intersection point under the independent illumination of the first light source;

the determining module 1502 is further configured to determine rendering display information of the first intersection point under the independent illumination of the first light source based on the first irradiance and the object attribute of the object where the first intersection point is located;

the obtaining module 1500 is further configured to obtain, based on the rendering display information of all first intersection points in the scene to be rendered under the illumination of the first light source, the first rendered image corresponding to the scene to be rendered under the independent illumination of the first light source.
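A minimal sketch of these two steps at a single first intersection point follows (Python/numpy and all names are illustrative assumptions; reducing the object attribute to a diffuse albedo is likewise an assumption standing in for full shading):

    import numpy as np

    def first_irradiance(first_radiance: float, second_radiances) -> float:
        # First irradiance: the sum of the radiance sampled at the first
        # intersection point and the radiances gathered at each second
        # intersection point (reflection / light-path hits).
        return float(first_radiance + np.sum(second_radiances))

    def rendering_display_info(irradiance: float, albedo) -> np.ndarray:
        # Rendering display information: irradiance modulated by a simple
        # object attribute (here, diffuse albedo) of the hit object.
        return irradiance * np.asarray(albedo, dtype=float)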
In some possible embodiments, the scene to be rendered includes a target intersection point, where the target intersection point is an intersection point at which a target rendering ray intersects any object in the scene to be rendered on the surface of that object; the target rendering ray is emitted toward the scene to be rendered by a virtual rendering camera, and the virtual rendering camera is used to determine the target rendering viewing angle;

the processing module 1503 is further configured to perform multiple illumination samplings on the light sources based on the target intersection point at the target rendering viewing angle, where each illumination sampling corresponds to a target light source selected from the plurality of light sources, each target light source is used for performing ray tracing rendering of the multiple light sources on the target intersection point, and the probability of selecting each target light source is proportional to the light source sampling weight of that light source at the target rendering viewing angle;

the obtaining module 1500 is further configured to obtain each second irradiance that the target rendering ray projects onto the second rendered image at the target intersection point under each illumination sampling;

the determining module 1502 is further configured to determine rendering display information of the target intersection point under multi-light-source illumination according to each second irradiance of the target intersection point under the multiple illumination samplings and the object attribute of the object where the target intersection point is located;

the obtaining module 1500 is further configured to obtain the second rendered image according to the rendering display information of all target intersection points in the scene to be rendered under multi-light-source illumination.
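The selection of target light sources with probability proportional to the light source sampling weights may be illustrated by the following sketch (inverse-CDF sampling is one possible choice, not necessarily the one used by the embodiment; radiance_of is a hypothetical callback returning the sampled light's contribution at the target intersection point):

    import numpy as np

    rng = np.random.default_rng()

    def pick_target_lights(weights: np.ndarray, num_samples: int) -> np.ndarray:
        # Inverse-CDF sampling: each draw selects light i with probability
        # proportional to weights[i].
        cdf = np.cumsum(weights) / np.sum(weights)
        return np.searchsorted(cdf, rng.random(num_samples))

    def estimate_irradiance(weights: np.ndarray, radiance_of) -> float:
        # One illumination sample: dividing the selected light's
        # contribution by its selection probability keeps the
        # multi-light estimate unbiased.
        p = weights / np.sum(weights)
        i = int(pick_target_lights(weights, 1)[0])
        return radiance_of(i) / p[i]

Averaging estimate_irradiance over the multiple illumination samplings yields the second irradiance used to determine the rendering display information of the target intersection point.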
It should be noted that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus, and system may be implemented in other ways. The above-described embodiments are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be implemented by program instructions instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes: a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.

Claims (12)

1. A light source sampling weight determination method for multi-light source scene rendering, the method comprising:
acquiring a scene to be rendered in a multi-light-source scene at a target rendering visual angle, and acquiring a plurality of light sources arranged in the multi-light-source scene;
performing ray tracing rendering of a single light source on the scene to be rendered under the target rendering visual angle to obtain each first rendering image corresponding to the scene to be rendered under the independent illumination of each light source;
and determining light source sampling weights corresponding to the light sources under the target rendering visual angle according to the irradiation information in the first rendering images, wherein the light source sampling weights are used for performing ray tracing rendering of multiple light sources on the scene to be rendered.
2. The method of claim 1, further comprising:
and under the target rendering visual angle, performing multi-light-source ray tracing rendering on the scene to be rendered based on the light source sampling weight corresponding to each light source under the target rendering visual angle to obtain a second rendering image of the scene to be rendered under the multi-light-source illumination.
3. The method of claim 2, wherein each of the first rendered images has a resolution less than a resolution of the second rendered image.
4. The method of any of claims 1-3, wherein the ray tracing rendering comprises a bi-directional path tracing rendering.
5. The method of claim 1, wherein the irradiation information in each first rendered image is determined based on the brightness information corresponding to that first rendered image, and the brightness information of each first rendered image is obtained by conversion from the color information in that first rendered image.
6. The method of any of claims 1-5, wherein the plurality of light sources comprises a first light source; the scene to be rendered comprises a first intersection point and at least one second intersection point, the first intersection point is an intersection point where a first rendering ray and a first object in the scene to be rendered intersect on the surface of the first object, and the second intersection point is an intersection point where a reflected ray of the first intersection point intersects a second object in the scene to be rendered on the surface of the second object and/or an intersection point where a ray emitted by each light source intersects each third object in the scene to be rendered on the surface of each third object respectively; the first rendering ray is sent out to the scene to be rendered by a virtual rendering camera, and the virtual rendering camera is used for determining the target rendering visual angle;
the performing ray tracing rendering of a single light source on the scene to be rendered under the target rendering visual angle to obtain each first rendering image corresponding to the scene to be rendered under the independent illumination of each light source comprises:
under the target rendering visual angle, illumination sampling is carried out on the first light source based on the first intersection point, and a first radiance of the first intersection point projected on the first rendering image by the first rendering light under the single illumination of the first light source is obtained;
under the target rendering visual angle, performing illumination sampling on the first light source based on each second intersection point to obtain each second radiance of each second intersection point projected on the first rendering image by the first rendering light under the independent illumination of the first light source;
according to the sum of the first radiance and each second radiance, obtaining a first irradiance of the first intersection point projected on the first rendering image by the first rendering ray under the condition that the first light source is singly illuminated;
determining rendering display information of the first intersection point under the independent illumination of the first light source based on the first irradiance and the object attribute of the object where the first intersection point is located;
and acquiring a corresponding first rendering image of the scene to be rendered under the independent illumination of the first light source based on rendering display information of all first intersection points in the scene to be rendered under the illumination of the first light source.
7. The method of claim 2, wherein the scene to be rendered comprises a target intersection point, and the target intersection point is an intersection point at which a target rendering ray intersects any object in the scene to be rendered on a surface of that object; the target rendering ray is sent out to the scene to be rendered by a virtual rendering camera, and the virtual rendering camera is used for determining the target rendering visual angle;
the performing, at the target rendering view angle, ray tracing rendering of multiple light sources on the scene to be rendered based on the light source sampling weights corresponding to the light sources at the target rendering view angle, and obtaining a second rendering image of the scene to be rendered under illumination of the multiple light sources includes:
under the target rendering visual angle, conducting multiple illumination sampling on each light source based on the target intersection point, wherein each illumination sampling corresponds to each target light source selected from the multiple light sources, and each target light source is used for performing ray tracing rendering of multiple light sources on the target intersection point so as to obtain a second irradiance of the target intersection point, which is projected on the second rendered image by the target rendering ray under each illumination sampling; wherein the probability of selecting the respective target light source is proportional to the light source sampling weight of the respective light source at the target rendering perspective;
determining rendering display information of the target intersection point under the multi-light-source illumination according to each second irradiance of the target intersection point under the multiple illumination samples and the object attribute of the object where the target intersection point is located;
and acquiring the second rendering image according to rendering display information of all target intersection points in the scene to be rendered under the illumination of multiple light sources.
8. The method of claim 7, wherein the light source sampling weight of each target light source at the target rendering view angle is greater than a preset threshold.
9. An apparatus for determining light source sampling weights for multi-light source scene rendering, the apparatus comprising:
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring a scene to be rendered in a multi-light-source scene at a target rendering visual angle and acquiring a plurality of light sources arranged in the multi-light-source scene;
the preprocessing module is used for performing ray tracing rendering of a single light source on the scene to be rendered acquired by the acquisition module under the target rendering visual angle to obtain each first rendering image corresponding to the scene to be rendered under the independent illumination of each light source;
and the determining module is used for determining light source sampling weights corresponding to the light sources under the target rendering visual angle according to the irradiation information in the first rendering images obtained by the preprocessing module, wherein the light source sampling weights are used for performing ray tracing rendering of multiple light sources on the scene to be rendered.
10. An electronic device, characterized in that the electronic device comprises: the system comprises a processor and a memory, wherein the processor is connected with the memory through a bus system;
the memory is to store instructions;
the processor is configured to call instructions stored in the memory to perform the method of any of claims 1-8.
11. A computer program product, comprising computer readable instructions which, when executed by one or more processors, perform the method of any of claims 1-8.
12. A computer storage medium comprising computer readable instructions which, when executed by one or more processors, perform the method of any of claims 1-8.
CN202011360803.XA 2020-11-27 2020-11-27 Light source sampling weight determination method for multi-light source scene rendering and related equipment Pending CN114549730A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011360803.XA CN114549730A (en) 2020-11-27 2020-11-27 Light source sampling weight determination method for multi-light source scene rendering and related equipment
PCT/CN2021/131989 WO2022111400A1 (en) 2020-11-27 2021-11-22 Light source sampling weight determination method for multiple light source scenario rendering, and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011360803.XA CN114549730A (en) 2020-11-27 2020-11-27 Light source sampling weight determination method for multi-light source scene rendering and related equipment

Publications (1)

Publication Number Publication Date
CN114549730A true CN114549730A (en) 2022-05-27

Family

ID=81668044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011360803.XA Pending CN114549730A (en) 2020-11-27 2020-11-27 Light source sampling weight determination method for multi-light source scene rendering and related equipment

Country Status (2)

Country Link
CN (1) CN114549730A (en)
WO (1) WO2022111400A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100339B (en) * 2022-06-15 2023-06-20 北京百度网讯科技有限公司 Image generation method, device, electronic equipment and storage medium
CN115731336B (en) * 2023-01-06 2023-05-16 粤港澳大湾区数字经济研究院(福田) Image rendering method, image rendering model generation method and related devices
CN116524061B (en) * 2023-07-03 2023-09-26 腾讯科技(深圳)有限公司 Image rendering method and related device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8542231B2 (en) * 2009-06-29 2013-09-24 Crytek Gmbh Method, computer graphics image rendering system and computer-readable data storage medium for computing of indirect illumination in a computer graphics image of a scene
CN104200512A (en) * 2014-07-30 2014-12-10 浙江传媒学院 Multiple-light source rendering method based on virtual spherical light sources
CN104732579A (en) * 2015-02-15 2015-06-24 浙江传媒学院 Multi-light-source scene rendering method based on light fragmentation
CN105261059B (en) * 2015-09-18 2017-12-12 浙江大学 A kind of rendering intent based in screen space calculating indirect reference bloom
CN106251393B (en) * 2016-07-14 2018-11-09 山东大学 A kind of gradual Photon Mapping optimization method eliminated based on sample

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024002130A1 (en) * 2022-06-29 2024-01-04 华为技术有限公司 Image rendering method and related device thereof
CN115082611A (en) * 2022-08-18 2022-09-20 腾讯科技(深圳)有限公司 Illumination rendering method, apparatus, device and medium
CN115082611B (en) * 2022-08-18 2022-11-11 腾讯科技(深圳)有限公司 Illumination rendering method, apparatus, device and medium
WO2024037176A1 (en) * 2022-08-18 2024-02-22 腾讯科技(深圳)有限公司 Method and apparatus for rendering virtual scenario, and device and medium
CN116681814A (en) * 2022-09-19 2023-09-01 荣耀终端有限公司 Image rendering method and electronic equipment

Also Published As

Publication number Publication date
WO2022111400A1 (en) 2022-06-02

Similar Documents

Publication Publication Date Title
CN114549730A (en) Light source sampling weight determination method for multi-light source scene rendering and related equipment
US11570372B2 (en) Virtual camera for 3-d modeling applications
US20180018814A1 (en) Reinforcement learning for light transport
US11386613B2 (en) Methods and systems for using dynamic lightmaps to present 3D graphics
WO2021228031A1 (en) Rendering method, apparatus and system
US20100265250A1 (en) Method and system for fast rendering of a three dimensional scene
US20140267412A1 (en) Optical illumination mapping
US9082230B2 (en) Method for estimation of the quantity of light received at a point of a virtual environment
CN113052947B (en) Rendering method, rendering device, electronic equipment and storage medium
CN110930497B (en) Global illumination intersection acceleration method and device and computer storage medium
WO2023142607A1 (en) Image rendering method and apparatus, and device and medium
WO2024021557A1 (en) Reflected illumination determination method and apparatus, global illumination determination method and apparatus, medium, and device
US9659404B2 (en) Normalized diffusion profile for subsurface scattering rendering
Yao et al. Multi‐image based photon tracing for interactive global illumination of dynamic scenes
WO2021034837A1 (en) Ray-tracing with irradiance caches
CA3199390A1 (en) Systems and methods for rendering virtual objects using editable light-source parameter estimation
CN110334027B (en) Game picture testing method and device
US11308684B2 (en) Ray-tracing for auto exposure
CN112967369A (en) Light ray display method and device
EP2428935B1 (en) Method for estimating the scattering of light in a homogeneous medium
WO2022121654A1 (en) Transparency determination method and apparatus, and electronic device and storage medium
US20230090732A1 (en) System and method for real-time ray tracing in a 3d environment
Beqiraj Image-Based Lighting for OpenGL
CN115439595A (en) AR-oriented indoor scene dynamic illumination online estimation method and device
Croubois et al. Fast Image Based Lighting for Mobile Realistic AR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination