CN115690284A - Rendering method, device and storage medium


Info

Publication number: CN115690284A
Application number: CN202111627202.5A
Authority: CN (China)
Prior art keywords: rendering, rendering result, angle, target, area
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 谢坤; 马库斯·斯坦伯格
Assignee (current and original): Huawei Cloud Computing Technologies Co Ltd
Related application: PCT/CN2022/104247 (WO2023005631A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/06: Ray-tracing

Abstract

The application provides a rendering method for a rendering system. The method renders an application that includes at least one model; each model includes a plurality of spatial cache regions, and each spatial cache region includes a plurality of angle regions. The method includes: during rendering of the current frame of the application, determining the angle region in which the intersection of an outgoing ray and the model lies as the target angle region; obtaining an intermediate rendering result of the target angle region pre-computed before the current frame is rendered, where the intermediate rendering result of the target angle region is determined from the pre-rendering results of a plurality of incident rays passing through the target spatial cache region in which the target angle region lies; and computing the rendering result of the pixels in the current view plane from the intermediate rendering result of the target angle region. By pre-computing the incident and outgoing rays, the rendering method reduces both the overall and the real-time computation in the ray-tracing process, effectively improving the efficiency of ray-tracing computation while guaranteeing a high-quality rendering result.

Description

Rendering method, device and storage medium
Technical Field
The present application relates to the field of computers, and in particular, to a rendering method, an apparatus, and a storage medium.
Background
Ray-tracing rendering has long been a fundamental technology in computer graphics, and to date it remains the most important technique for producing high-quality, photorealistic images. However, it requires long computation times to complete the large number of Monte Carlo integration steps that produce the final result. For this reason, the technology has traditionally been applied to offline rendering scenarios such as film and animation. In recent years, however, the industry has been working to apply ray-tracing rendering to real-time scenarios such as games and augmented reality.
Therefore, how to improve the efficiency of real-time ray-tracing rendering while maintaining high image quality is an urgent problem to be solved.
Disclosure of Invention
The application provides a rendering method which can improve rendering efficiency.
A first aspect of the application provides a rendering method for rendering an application. The application includes at least one model, each model includes a plurality of spatial cache regions, and each spatial cache region includes a plurality of angle regions. The method includes: during rendering of the current frame of the application, determining the angle region in which the intersection of an outgoing ray and the model lies as the target angle region; obtaining an intermediate rendering result of the target angle region pre-computed before the current frame is rendered, where the intermediate rendering result of the target angle region is determined from the pre-rendering results of a plurality of incident rays passing through the target spatial cache region in which the target angle region lies; and computing the rendering result of pixels in the current view plane from the intermediate rendering result of the target angle region.
The method pre-renders the target angle region before the rendering time of the current frame to obtain an intermediate rendering result, and obtains the rendering result of the current frame from that intermediate result when the current frame is rendered. This avoids a large amount of real-time ray-tracing computation when rendering the current frame, effectively saving computation time and improving rendering efficiency.
In some possible designs, the method further comprises: and calculating the rendering result of the pixel according to the intermediate rendering result of the target angle area and the corresponding relation between the pixel and the angle area. By establishing the corresponding relation between the angle area and the pixel, after the intermediate rendering result of the angle area is determined, the rendering result of the pixel can be determined. By obtaining the intermediate rendering result of the angle area, a large amount of real-time rendering calculation is avoided, and the rendering efficiency is improved.
In some possible designs, the method further comprises: and calculating a rendering result of a pixel in the current view plane according to the intermediate rendering result of the target angle area and the intermediate rendering result of the angle area set, wherein a plurality of angle areas corresponding to the pixel comprise the target angle area and the angle area set, and the angle area set comprises at least one angle area on the model.
When the rendering result of one pixel is calculated, the RGB value of each light ray passing through the pixel is determined based on the intermediate rendering result, and the efficiency of calculating the rendering result of each pixel is effectively improved.
In some possible designs, the method further comprises: and calculating a rendering result of another pixel in the current view plane according to the intermediate rendering result of the target angle region and the intermediate rendering result of the angle region set, wherein a plurality of angle regions corresponding to the another pixel comprise the target angle region and the angle region set, and the angle region set comprises at least one angle region on the model.
By pre-calculating the rendering result of the target angle area, the rendering result of a plurality of pixels can be calculated and multiplexed, so that the calculation efficiency of real-time rendering is improved.
In some possible designs, the method further comprises: performing ray tracing rendering on the incident rays passing through the target space cache region to obtain a pre-rendering result of the incident rays; and calculating a middle rendering result of the target angle area according to the pre-rendering result of the incident rays.
The intermediate computation result of the outgoing ray is determined from the pre-rendering results of the plurality of incident rays passing through the target spatial cache region. This avoids performing, as the conventional method does, a large amount of ray tracing on the target angle region, and multiple target angle regions in the same spatial cache region can share the pre-rendering results of the incident rays. Therefore, compared with conventional ray tracing, reusing the pre-rendering results significantly improves rendering efficiency when computing the intermediate rendering results.
In some possible designs, the method further comprises: performing a weighted summation of the pre-rendering results of the plurality of incident rays to obtain the intermediate rendering result of the target angle region.
The intermediate computation result of the outgoing ray is thus determined by weighted summation from the pre-rendering results of the incident rays passing through the target spatial cache region. As above, multiple target angle regions in the same spatial cache region can share the pre-rendering results of the incident rays, so rendering efficiency is significantly improved compared with conventional ray tracing.
In some possible designs, the target space buffer area is a hemisphere, and each incident ray is emitted from the center of the sphere of the target space buffer area and points to the target space buffer area.
By approximating the target spatial cache region as a sphere or hemisphere, its center can serve as a virtual viewpoint, enabling pre-computation for the spatial cache region.
In some possible designs, the method further comprises: the intermediate rendering result is stored. When performing the real-time ray tracing rendering, the real-time rendering results of the current frame and the subsequent frame may obtain the required rendering result from the stored intermediate rendering results. The intermediate rendering results of the same space cache region can be multiplexed for multiple times, and the rendering efficiency of the space cache region with the intermediate rendering results in the rendering process is effectively improved.
A second aspect of the application provides a rendering engine comprising a processing unit. The processing unit is configured to: in the process of rendering the current frame of the application, determine the angle region in which the intersection of an outgoing ray and the model lies as the target angle region; obtain an intermediate rendering result of the target angle region pre-computed before the current frame is rendered, where the intermediate rendering result of the target angle region is determined from the pre-rendering results of a plurality of incident rays passing through the target spatial cache region in which the target angle region lies; and compute the rendering result of pixels in the current view plane from the intermediate rendering result of the target angle region.
In some possible designs, the processing unit is further configured to calculate a rendering result of the pixel according to the intermediate rendering result of the target angle region and the corresponding relationship between the pixel and the angle region. By establishing the corresponding relation between the angle area and the pixel, after the intermediate rendering result of the angle area is determined, the rendering result of the pixel can be determined. By obtaining the intermediate rendering result of the angle area, a large amount of real-time rendering calculation is avoided, and the rendering efficiency is improved.
In some possible designs, the processing unit is further configured to calculate a rendering result of a pixel in the current view plane according to an intermediate rendering result of the target angle region and an intermediate rendering result of a set of angle regions, where a plurality of angle regions corresponding to the pixel includes the target angle region and the set of angle regions, and the set of angle regions includes at least one angle region on the model.
In some possible designs, the processing unit is further configured to calculate a rendering result of another pixel in the current view plane according to the intermediate rendering result of the target angle region and an intermediate rendering result of a set of angle regions, where a plurality of angle regions corresponding to the another pixel include the target angle region and the set of angle regions, and the set of angle regions includes at least one angle region on the model.
In some possible designs, the processing unit is further configured to perform ray tracing rendering on the multiple incident rays passing through the target space cache region, and obtain a pre-rendering result of the multiple incident rays; and calculating a middle rendering result of the target angle area according to the pre-rendering result of the incident rays.
In some possible designs, the processing unit is further configured to perform weighted summation on the pre-rendering results of the multiple incident light rays, so as to obtain an intermediate rendering result of the target angle region.
In some possible designs, the target space buffer area is hemispherical, and each incident ray is emitted from the center of the target space buffer area and points to the target space buffer area.
In some possible designs, the rendering engine further comprises a storage unit configured to store the intermediate rendering result.
A third aspect of the present application provides a cluster of computing devices comprising at least one computing device, each computing device comprising a processor and a memory; the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device to cause the computing device to perform the method as provided by the first aspect.
A fourth aspect of the present application provides a computer program product comprising instructions which, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the method as provided by the first aspect.
A fifth aspect of the present application provides a computer-readable storage medium comprising computer program instructions which, when executed by a cluster of computing devices, perform the method as provided by the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments are briefly described below.
Fig. 1 is a schematic diagram of a spatial buffer area and a virtual viewpoint according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an angular resolution provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of an angular region of a three-dimensional model according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an angular region of another three-dimensional model provided in an embodiment of the present application;
fig. 5 is a flowchart of a rendering method according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an incident light according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of an emergent light according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a rendering engine according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a computing device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a computing device cluster according to an embodiment of the present application;
fig. 11 is a schematic diagram of a connection manner of a computing device cluster according to an embodiment of the present application;
fig. 12 is a schematic diagram of a connection manner of a computing device cluster according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
The terms "first" and "second," and the like in the description, claims, and drawings of the present application are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. Such as a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
It should be understood that, in the present application, "at least one" means one or more, "a plurality" means two or more, and "at least two" means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A is present, only B is present, or both A and B are present, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "At least one of the following" or similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, at least one (item) of a, b, or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be singular or plural.
In order to describe the scheme of the present application more clearly, some knowledge related to rendering is introduced below.
Spatial cache region: the smallest surface-constituting unit in a two-dimensional or three-dimensional space. Generally, in rendering, a spatial cache region may be constructed based on any of the centroid of a patch, a pixel of texture/map space, or a point cloud. For example, a spherical/hemispherical spatial cache region may be established with the centroid of a patch as the sphere center; the spatial cache region is used for caching rendering data.
Angle region: the smallest caching unit in a two-dimensional or three-dimensional space. In rendering, the surface of a spatial cache region may be divided into a plurality of angle regions according to the angles formed with the coordinate axes. These angle regions may be arbitrary polygons, usually quadrilaterals; the intersections of their edges are the vertices of the angle regions.
Number of rays traced per angle region (SPAA): the number of rays passing through each angle region, where the angle region is the smallest unit in two-dimensional or three-dimensional space. Generally, the picture seen on a screen is formed by pixels arranged one by one, and each pixel may correspond to one or more angle regions in space. The color of a pixel is computed from the colors (red, green, blue; RGB) of its corresponding angle regions. In ray tracing, the number of rays traced per angle region affects the rendering result: a larger number means more rays are cast from the viewpoint into the model in three-dimensional space, and the more rays projected onto each angle region, the more accurately its rendering result can be computed.
Ray tracing: also known as light-path tracing, a common technique from geometric optics that models the path taken by light by tracing rays that interact with optical surfaces. It is used in the design of optical systems such as camera lenses, microscopes, telescopes, and binoculars. When used for rendering, rays from the eye are traced rather than rays from the light source, and a mathematical model of the composed scene is generated and visualized by this technique. The result is similar to that of ray casting and scan-line rendering, but with better optical effects, for example more accurate simulation of reflection and transmission, so this method is often used when such high-quality results are sought. Specifically, ray tracing first computes the distance, direction, and new position a ray reaches while traveling in a medium, before the ray is absorbed by the medium or changes direction. A new ray is then generated from the new position, and by applying the same processing the complete propagation path of the light in the medium is finally computed. Since the algorithm fully simulates the imaging system, complex pictures can be generated by simulation.
Material: according to refraction and reflection behavior, the materials of models in a space can be divided into transmissive, specular, and diffuse materials. A transmissive material transmits light, such as a water drop or crystal; a specular material produces regular total reflection, such as a mirror or a smooth metal surface; a diffuse material reflects light irregularly, such as rough stone or a tabletop. A space includes a plurality of spatial cache regions, and each spatial cache region includes a plurality of angle regions. Generally, one spatial cache region corresponds to one material, so all angle regions within the same spatial cache region correspond to the same material.
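To make this hierarchy concrete, the following is a minimal data-structure sketch in Python; all class and field names are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass, field
from enum import Enum


class Material(Enum):
    TRANSMISSIVE = 1   # e.g. water drop, crystal
    SPECULAR = 2       # e.g. mirror, smooth metal surface
    DIFFUSE = 3        # e.g. rough stone, tabletop


@dataclass
class AngleRegion:
    theta_index: int                  # index along the polar angle
    phi_index: int                    # index along the azimuthal angle
    rgb: tuple = (0.0, 0.0, 0.0)      # cached intermediate rendering result


@dataclass
class SpatialCacheRegion:
    center: tuple                     # e.g. centroid of a patch, used as sphere center
    material: Material                # one material per spatial cache region
    angle_regions: list = field(default_factory=list)


@dataclass
class Model:
    regions: list = field(default_factory=list)  # spatial cache regions on the surface
```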
Conventional ray tracing needs to perform massive real-time computation on a large number of rays in three-dimensional space, which places high demands on the computing capability of the hardware performing it, such as a central processing unit (CPU) or a graphics processing unit (GPU). It usually takes hours or even days to complete the computation of one high-quality frame. In other words, it is difficult to guarantee computational efficiency while guaranteeing picture quality.
Therefore, to solve this problem, the present application proposes a rendering method that can greatly improve computational efficiency while maintaining high picture quality.
Specifically, the RGB values of the angle region in the space are pre-calculated, and the pre-calculated result is buffered in units of angle regions. The pre-calculation occurs before the real-time calculation, so that when the real-time calculation is performed, after the angle area corresponding to the pixel point is determined (tracked) according to the ray tracing method, the RGB value of the corresponding angle area can be directly obtained from the buffer data without performing the real-time calculation on the RGB value of the angle area. Furthermore, according to the RGB value of the angle area and the corresponding relation between the angle area and the pixel point, the RGB value of the pixel point can be quickly obtained, and therefore the rendering operation is completed.
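A minimal sketch of this precompute-then-look-up flow, building on the data-structure sketch above; the dictionary cache and function names are assumptions for illustration, not an API defined by the patent:

```python
# Pre-computation stage: fill the cache before real-time rendering begins.
cache = {}  # (region id, angle-region index) -> pre-computed RGB

def precompute(model):
    for region in model.regions:
        for ar in region.angle_regions:
            # Placeholder for the patent's pre-rendering of incident rays
            # followed by weighting into an intermediate result per angle region.
            cache[(id(region), (ar.theta_index, ar.phi_index))] = ar.rgb

# Real-time stage: tracing only needs to find the angle region, then look up.
def shade(hit_region, hit_angle_region):
    key = (id(hit_region), (hit_angle_region.theta_index, hit_angle_region.phi_index))
    return cache[key]  # no per-frame Monte Carlo integration needed
```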
Next, using fig. 1 as an example, the concepts of the angle area and the virtual viewpoint in the space and the relationship therebetween will be described.
As shown in fig. 1, the space includes at least a model 100, a virtual viewpoint 102, a virtual view plane 104, and a light source 106. In which the outer surface of the model 100 is divided into a plurality of regions, and fig. 1 shows the division of one of the faces. As shown in fig. 1, an outer surface of the model 100 is divided into 6 regions of varying sizes. The sizes of the respective regions may be the same or different. Generally, these regions are very tiny and may also be referred to as spatial cache regions. As mentioned above, each spatial cache region corresponds to a material. Further, after the spatial cache region is determined, the surface of the spatial cache region may be divided into a plurality of angle regions according to angles. How the division is made will be described below.
The virtual viewpoint 102 is used to simulate the presence of human eyes in the virtual three-dimensional space for perceiving three-dimensional structure. In some possible implementations, the virtual viewpoint 102 may be a binocular viewpoint. Specifically, a binocular or multi-ocular viewpoint acquires two or more images from two or more different viewpoints to reconstruct the 3D structure or depth information of the target model.
The virtual viewing plane 104 is used to simulate the presence of a display screen in a virtual three-dimensional space. As with the display screen, the virtual view plane 104 is divided into a plurality of pixel points. As shown in fig. 1, the virtual view plane 104 includes at least 9 pixels. Each pixel point has a certain corresponding relation with a space cache region contained in the model in the space. That is, each pixel point also has a certain corresponding relationship with the angle region included in the model in space. Further, the RGB value of each pixel point can be determined by calculating the RGB value of the angle region.
It can be seen that the three first rays passing through the black boxed pixel points shown in fig. 1 (the pixel points in the middle of the virtual view plane 104) hit two spatial cache regions when they first contact the model 100. That is, the pixel point corresponds to at least the two hit spatial cache regions. In other words, the RGB values of the two spatial buffer areas can be used to calculate the RGB value of the pixel point.
It should be noted that the same spatial cache region may correspond to a plurality of pixel points, and one pixel point may also correspond to a plurality of spatial cache regions.
After the RGB value of each pixel in the virtual view plane 104 is determined, a frame of rendering result can be obtained. That is, each time ray tracing is performed, a frame of rendering results can be obtained.
Reverse ray tracing is one type of ray tracing. A model in space is visible because its surface refracts/reflects light from the light source into the eye (virtual viewpoint 102). Conventional forward ray tracing performs tracing computation for a large number of rays that never enter the eye (virtual viewpoint 102), so much of that computation is wasted. Reverse ray tracing instead assumes that rays are emitted from the eye (virtual viewpoint 102) and return to the light source after touching the model. This ensures that, as far as possible, only effective rays are computed, improving the computational efficiency of ray tracing.
For example, a plurality of first rays are emitted from the virtual viewpoint 102, and after passing through a certain pixel point in the virtual view plane 104, a model (such as the model 100) touched for the first time and a spatial buffer region (or an angle region) where the touch point is located are determined. Depending on the material of the model, the light tracing is continued until the light source 106 is traced, or the maximum number of traces is reached. The light source 106 may be one or more of the following light sources: point light sources, line light sources, or surface light sources, etc.
The RGB value of each first ray may be determined according to parameters of the light source, parameters of the model, an incident angle, and the like. Further, the RGB value of the pixel point may be determined according to the RGB value of the first light passing through the same pixel point. By analogy, the RGB value of each pixel point in the entire virtual view plane 104 can be obtained, and then a frame rendering result is obtained.
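As an illustration of this combination step (the patent does not prescribe a specific rule here), the RGB value of a pixel could be taken as the arithmetic mean of the first rays traced through it:

```python
def pixel_rgb(ray_rgbs):
    """Combine the RGB values of all first rays traced through one pixel.

    ray_rgbs: list of (r, g, b) tuples, one per first ray through the pixel.
    A simple arithmetic mean is assumed here.
    """
    n = len(ray_rgbs)
    return tuple(sum(channel) / n for channel in zip(*ray_rgbs))

# Example: three first rays through the black-boxed pixel of fig. 1.
print(pixel_rgb([(0.9, 0.2, 0.1), (0.8, 0.3, 0.1), (0.7, 0.2, 0.2)]))
```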
Optionally, the RGB values of the space buffer area may also be determined according to the RGB values of the first light rays contacting the same space buffer area, and the RGB values of the angle area may also be determined according to the RGB values of the first light rays contacting the same angle area.
Specifically, after the RGB values of the first rays are determined, the RGB value of a spatial cache region may be determined from the RGB values of the traced rays hitting that region, for example by averaging them.
Similarly, after determining the RGB values of the first rays, the RGB values of the angle region may also be determined according to the RGB values of the tracking rays in the same angle region.
Next, taking fig. 2 as an example, the concept of angular resolution is described with respect to the spatial buffer area in the model 100 in fig. 1, and the relationship between the spatial buffer area and the angular area is described.
As shown in fig. 2, the model 201 represents a 3D model of a "rabbit", the surface of which is made up of multiple spatial buffer regions. The following description will be given by taking a spatial cache region 202 in the "rabbit" model as an example. As mentioned above, each spatial cache region corresponds to a material, and the material of the spatial cache region 202 is assumed to be a diffuse reflective material.
The spatial cache region 202 is represented as a hemisphere in the model 201. The hemisphere stores the colors of incident rays pointing outward from the hemisphere center (the center of the spatial cache region) in different directions, with the center serving as a virtual viewpoint; the center plays the role of the eye (virtual viewpoint 102) in fig. 1.
Fig. 2 shows a plurality of incident rays emitted from the center. In conventional ray tracing, the more incident rays pass through a spatial cache region, the more accurately its RGB value is computed. In the embodiments provided by this application, the greater the number SPAA of incident rays passing through each angle region, the more rays are projected from the virtual viewpoint of the 3D model into three-dimensional space, and the more accurately the rendering result of each angle region is computed.
As mentioned above, the surface of the 3D model may be divided into a plurality of spatial cache regions, and each spatial cache region may further be divided into a plurality of angle regions according to angle. The division into angle regions is explained below with reference to specific embodiments.
First, the angular resolution indicates the number of angular regions contained in one spatial buffer region. Taking a hemispherical space buffer area as an example, taking 1 degree as a unit, the hemisphere may be divided into 360 × 90 angular areas. That is, the angular resolution of the spatial buffer region is 360 × 90.
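A minimal sketch of this 1-degree division, mapping a direction (θ, φ) on the hemisphere to its angle-region indices on the 360 × 90 grid; the indexing convention is an assumption:

```python
THETA_BINS = 90    # polar angle: 0..90 degrees above the surface
PHI_BINS = 360     # azimuthal angle: 0..360 degrees

def angle_region_index(theta_deg, phi_deg):
    """Map a direction on the hemisphere to its angle-region indices."""
    t = min(int(theta_deg), THETA_BINS - 1)   # clamp theta = 90 into the last bin
    p = int(phi_deg) % PHI_BINS               # wrap the azimuth
    return t, p

# A hemisphere at this resolution holds 360 * 90 = 32400 angle regions.
assert THETA_BINS * PHI_BINS == 32400
print(angle_region_index(45.5, 181.2))  # -> (45, 181)
```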
Referring to fig. 3, fig. 3 is a schematic view illustrating an effect of an angle region of a three-dimensional model according to an embodiment of the present disclosure.
As shown in FIG. 3, taking the three-dimensional model as a sphere as an example, an angle region can be represented as a center point P(r, θ, φ) together with the points in its neighborhood, which form a slightly bulging, approximately square quadrilateral S0 on the surface of the sphere. A three-dimensional orthogonal coordinate system comprising an x-axis, a y-axis, and a z-axis is constructed with the sphere center O as the origin. In the coordinates of the center point P, r is the length of the line segment OP from the center O to the center point P, θ is the angle between OP and the positive z-axis, and φ is the angle between the projection of OP onto the xOy plane and the x-axis. In some embodiments, n center points P_1, P_2, …, P_n may be uniformly arranged on the sphere; a non-center point Q_i belongs to the same angle region as the center point P_i to which its distance is shortest.
It can be seen from this schematic that the finer the division of the three-dimensional orthogonal coordinates, the finer the angle regions formed by the non-center points Q_i and the center points P_i. A plurality of angle regions of the model can therefore be obtained by the angle-region division method of fig. 3.
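A minimal sketch of the nearest-center-point rule, assuming directions are given as unit 3D vectors; the helper name is hypothetical:

```python
def nearest_center(q, centers):
    """Assign direction q to the angle region whose center point is closest.

    q and each entry of centers are unit 3D vectors on the sphere; the
    shortest great-circle distance corresponds to the largest dot product.
    """
    return max(range(len(centers)),
               key=lambda i: sum(a * b for a, b in zip(q, centers[i])))

centers = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
q = (0.1, 0.1, 0.99)
print(nearest_center(q, centers))  # -> 0: q lies in the region of the first center
```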
Referring to fig. 4, fig. 4 is a schematic view illustrating an effect of an angle region of a three-dimensional model according to an embodiment of the present disclosure.
As shown in fig. 4, taking the three-dimensional model as a curved surface model as an example, the angular region can be represented as a square on the curved surface represented by P (u, t). And constructing a two-dimensional orthogonal coordinate system by using a set origin of the curved surface, wherein the coordinate system comprises a u axis and a t axis. u represents a shift amount in one direction of the set origin of the curved surface, t represents a shift amount in the other orthogonal direction, and P (u, t) represents a square composed of four vertices in the (u, t) coordinate system shown in fig. 4.
Similarly, it can be seen from this schematic that the finer the division of the two-dimensional orthogonal coordinates, the finer the squares represented by P(u, t). A plurality of angle regions of the model can therefore also be obtained by the angle-region division method of fig. 4.
It is understood that the shape of the angle region is merely a specific example, and in practical applications, the angle region may have other shapes, and is not limited herein. The size of the angle area may be set as needed, and the size of the angle area may be set smaller as the accuracy requirement for the rendered image is higher.
Turning next to a rendering method 100, FIG. 5 shows a flow chart of the rendering method.
S101: the rendering system 200 calculates a pre-rendering result for each incident light in the space.
Before rendering, the source of each model in space is first described. The model and parameters in the current space are both generated by the rendering application. Optionally, the models may be selected and combined in the model library according to parameters and instructions included in the rendering application to form the content to be rendered in the space (each model in the space).
Next, a space cache region in a space is taken as an example to describe the rendering method provided in the embodiment of the present application.
Fig. 6 shows a hemispherical spatial cache region with point o as the center of the hemisphere. An incident ray is a ray emitted from point o through the surface of the spatial cache region. Part of the incident rays of this spatial cache region are shown as examples in fig. 6.
Before real-time calculation, a certain amount of incident light can be simulated on the space cache region to realize pre-calculation of the space cache region. The distribution of the above-mentioned certain amount of incident light may be set as necessary.
For example, when performing pre-calculation, 10000 incident lights can be simulated on one spatial buffer area, and the 10000 incident lights are uniformly distributed in the whole hemisphere. It should be noted that the distribution of the incident light is independent of the angular resolution.
For another example, when the spatial buffer region is a sphere, the incident light may be uniformly distributed throughout the sphere.
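A minimal sketch of simulating such incident rays, assuming uniform area sampling over the unit hemisphere above the region's surface (the patent sets the distribution as needed rather than fixing one):

```python
import math
import random

def sample_hemisphere(n, seed=0):
    """Generate n directions uniformly distributed over the unit hemisphere.

    For uniform area sampling, cos(theta) (the z component) is drawn
    uniformly from [0, 1] and phi uniformly from [0, 2*pi).
    """
    rng = random.Random(seed)
    dirs = []
    for _ in range(n):
        z = rng.random()                       # cos(theta) ~ U[0, 1]
        phi = 2.0 * math.pi * rng.random()
        s = math.sqrt(max(0.0, 1.0 - z * z))
        dirs.append((s * math.cos(phi), s * math.sin(phi), z))
    return dirs

incident_dirs = sample_hemisphere(10000)   # the 10000 rays of the example above
```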
The intersections of this certain quantity of incident rays with the spatial cache region all lie at the same distance r from point o, while at least one of the angles θ and φ differs between rays. Further, the (r, θ, φ) corresponding to each incident ray can be determined according to the chosen distribution of the angles, thereby simulating the incident rays.
After determining the distribution of the incident light, the pre-rendering result (RGB value) of the certain amount of incident light may be calculated according to a ray tracing method or an inverse ray tracing method. That is, corresponding to fig. 1, point o in fig. 6 is the virtual viewpoint 102, and the hemispherical space buffer area corresponds to one of the space buffer areas in the model 100. Specifically, the pre-rendering result of each incident light is determined according to one or more of the light source in the space, the material of the contacted space buffer area, the contact angle of the contacted space buffer area, and the like.
The above describes, taking one ray in one spatial cache region as an example, how to calculate the pre-rendering result of incident light. Following the same method, the pre-rendering results of the certain quantity of incident rays can further be calculated for each of the plurality of spatial cache regions.
S103: the rendering system 200 stores the pre-rendering results of each incident light in the space.
After the pre-rendering results of the certain quantity of incident rays are computed in S101, they need to be stored. Specifically, the storage may be keyed by intersection point, where the intersection point is the intersection of an incident ray and the spatial cache region.
Optionally, the storage may instead be keyed by the two-dimensional angle, where the two-dimensional angle indicates the two-dimensional array (θ, φ) composed of the two parameters θ and φ of the incident ray.
It should be noted that the above storage operation is described taking one spatial cache region as an example. Following the same method, the pre-rendering results of the certain quantity of incident rays corresponding to each of the plurality of spatial cache regions can also be stored.
S105: the rendering system 200 calculates RGB values of each outgoing light in space.
Note that the pre-computation is not performed in units of angle regions; instead, ray simulation is performed in the hemisphere/sphere corresponding to the spatial cache region according to a chosen distribution rule. That is, the number of incident rays used in the pre-computation may be 10000 regardless of the number of angle regions contained in fig. 6. After the pre-rendering results of the incident rays of each spatial cache region are determined, the RGB value of each outgoing ray can further be obtained.
Fig. 7 shows an exemplary outgoing ray L1, directed from outside the spatial cache region toward the center o. The direction of the outgoing ray L1 is the direction used in reverse ray tracing during real-time computation. Therefore, as long as the RGB value of L1 is stored during pre-computation, any future real-time ray whose outgoing direction coincides with L1 can directly reuse the stored pre-computed RGB value without recomputation.
However, considering that the number of rays corresponding to one spatial cache region can be unbounded, the angle region is used as the storage unit. When an outgoing ray falls into a certain angle region during real-time computation, the RGB value of that angle region is fetched and used as the RGB value of the ray. To obtain the RGB value of an angle region, the multiple outgoing rays it contains need to be pre-computed.
Take the angular region S1 where the outgoing light L1 is located as an example. First, the RGB values of the outgoing light L1 are calculated. After the outgoing light L1 comes into contact with the point o, specular reflection, transmission, or diffuse reflection may occur. Based on the above three reflection/transmission principles, the RGB values of the outgoing light L1 can be obtained by weighting according to the pre-rendering result of a certain amount of incident light acquired in S101, and a specific obtaining method is as follows.
Fig. 7 shows a case where the material of the spatial buffer area is a diffuse reflection material. When the spatial cache region is made of a transparent material, the spatial cache region should be a complete sphere.
Taking the material of the spatial cache region in fig. 7 as a diffuse material as an example: after the outgoing ray L1 is determined, the RGB value of the diffusely reflected light can be obtained based on the bidirectional reflectance distribution function (BRDF) method. Specifically, the RGB values of the incident rays computed in S101 are multiplied by a weight matrix, where the weight matrix is derived according to the BRDF method.
Optionally, the weight matrix may be randomly generated or generated based on an artificial intelligence algorithm.
Optionally, because a diffuse material is characterized in that the angle between the reflected light and the normal direction is independent of the incident angle, the contributions of incident rays at all angles to the RGB value of the outgoing ray can be regarded as approximately the same; correspondingly, the weights of incident rays at all angles can also be regarded as approximately the same. It should be understood that the present application does not limit the method for computing the weights.
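A minimal sketch of the weighted summation for a diffuse region; uniform weights are assumed, matching the approximation above, while the patent leaves the general weight-matrix construction open (BRDF-derived, random, or generated by an artificial-intelligence algorithm):

```python
def outgoing_rgb(incident_rgbs, weights=None):
    """Weighted sum of incident-ray pre-rendering results for one outgoing ray.

    incident_rgbs: list of (r, g, b) pre-rendering results from S101.
    weights: one weight per incident ray; for a diffuse material the
    contributions are approximately equal, so uniform weights are the default.
    """
    n = len(incident_rgbs)
    if weights is None:
        weights = [1.0 / n] * n          # diffuse approximation
    return tuple(sum(w * c[k] for w, c in zip(weights, incident_rgbs))
                 for k in range(3))

print(outgoing_rgb([(0.9, 0.1, 0.1), (0.5, 0.5, 0.5)]))  # -> (0.7, 0.3, 0.3)
```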
Taking the material of the spatial cache region in fig. 7 as a specular material as an example, after the outgoing angle of the outgoing ray L1 is determined, the RGB value of the specularly reflected ray corresponding to L1 can be obtained based on the BRDF method.
Optionally, in this possible implementation, the method for obtaining the weight matrix may also be randomly generated or generated based on an artificial intelligence algorithm.
Taking the material of the spatial buffer area in fig. 7 as an example of transmission, after the exit angle of the exit light L1 is determined, the RGB value of the exit light L1 may be obtained based on a Bidirectional Scattering Distribution Function (BSDF) method.
It should be noted that an incident ray may be specified directly by its two angles relative to the center o, without emitting a ray from outside the spatial cache region. In this implementation no intersection computation needs to be performed, where intersection computation refers to computing the intersection of an incident ray and the spatial cache region. By storing the intermediate rendering results keyed by angle, intersection computation is effectively avoided, further reducing the amount of computation and improving the efficiency of pre-computation.
S107: the rendering system 200 calculates intermediate rendering results for each angular region of each spatial cache region in space.
After the intermediate rendering result of one emergent light is determined, the intermediate rendering results of other emergent lights in the same angle area can be obtained according to the same method.
Further, the intermediate rendering result of the angle region can be obtained by averaging the intermediate rendering results of the multiple outgoing rays in the same angle region. The average may be an arithmetic mean or a weighted mean.
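A small sketch of this reduction step, supporting both the arithmetic and the weighted mean; the function name is hypothetical:

```python
def region_intermediate_rgb(outgoing_rgbs, weights=None):
    """Average the intermediate results of the outgoing rays in one angle region."""
    n = len(outgoing_rgbs)
    if weights is None:                  # arithmetic mean
        weights = [1.0 / n] * n
    else:                                # weighted mean: normalize the weights
        total = sum(weights)
        weights = [w / total for w in weights]
    return tuple(sum(w * c[k] for w, c in zip(weights, outgoing_rgbs))
                 for k in range(3))
```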
Optionally, when computing the intermediate rendering result of each angle region, for an angle region of diffuse material the intermediate rendering result may be determined directly from the intermediate rendering result of a single outgoing ray. This is because, for a diffuse material, the incident angle does not affect the rendering result, so the intermediate rendering result of a diffuse angle region can be regarded as approximately equal to that of one outgoing ray passing through it. Alternatively, the intermediate rendering result of a diffuse spatial cache region can be regarded as approximately equal to that of one outgoing ray passing through the region; that is, the angular resolution of the spatial cache region is 1.
The above describes how to calculate the intermediate rendering result for an angle region on a spatial buffer region. According to the method, the intermediate rendering results of the angle areas of all the spatial cache areas in the space can be further obtained.
S109: the rendering system 200 stores intermediate rendering results for each angular region of each spatial cache region in space.
The intermediate rendering result of each angle region on each space cache region in the space obtained by calculation may be stored in units of the position of the space cache region where each angle region is located.
Optionally, the two-dimensional array (θ, φ) corresponding to the vertex of each angle region may also be used as the storage unit.
S111: the rendering system 200 obtains a real-time rendering result of the emergent light in the real-time calculation.
S101 to S109 pre-calculate and store the RGB values of the spatial buffer area in the space, so that in the real-time ray tracing process, the real-time rendering result can be determined according to the intermediate rendering result obtained by the pre-calculation.
When the light source, the model position and other elements in the space do not change, the intermediate rendering result of the space buffer area obtained through pre-calculation can be considered as the rendering result of emergent light in real-time ray tracing.
It should be noted that step S111 may be considered to be calculation of a rendering result for a certain frame (current frame), and the occurrence time of the above steps S101 to S109 may precede the frame. Alternatively, the occurrence time of steps S101 to S109 may be prior to the rendering result calculation time of the first frame in the space.
The method uses the angle area on the spatial cache area in the space as a unit, and pre-calculates and stores the rendering result of each angle area. Compared with the calculation of real-time ray tracing, the method greatly reduces the calculation amount of real-time ray tracing by putting the calculation of solving a large number of intermediate rendering results before the real-time calculation, and effectively improves the calculation efficiency of the real-time ray tracing.
Next, taking a space buffer area as an example, the difference between the amount of computation required by the rendering method provided by the present embodiment and the amount of computation required by the conventional ray tracing will be described.
Take a spatial cache region in the 3D model of the "rabbit" in fig. 2 as an example. With a hemispherical spatial cache region, conventional real-time ray tracing needs to take a certain number of samples for each angle region based on the angular resolution, where each sample requires a ray to undergo intersection and transmission/reflection computation, and the number of transmission/reflection bounces may be more than one.
Taking the angular resolution as 360 × 90, the number of samples per angle region as 10000, the intersection computation as A, one transmission/reflection computation as B, and the number of transmission/reflection bounces as n, when real-time ray tracing computes one spatial cache region, the required real-time computation M1 is: M1 = 360 × 90 × 10000 × (A + nB), where n is a positive integer greater than or equal to 1.
In the solution provided by this embodiment, the pre-computation stage first computes a certain number of incident rays for the spatial cache region, without regard to the angular resolution; each incident ray requires intersection and transmission/reflection computation, and the number of bounces may be more than one. Then, for each angle region in each spatial cache region, intermediate rendering results of a certain number of outgoing rays are computed; for each outgoing ray this requires one computation of the weight matrix and one multiplication of the weight matrix with the pre-rendering results of the incident rays. In the real-time stage, when a certain number of samples are taken for each angle region based on the angular resolution, the intermediate rendering result of the outgoing ray whose angle matches or is close to the real-time ray (from the pre-computation stage) can be fetched directly, or the intermediate rendering result of the angle region can be fetched, the latter being the average of the intermediate rendering results of the outgoing rays obtained in the pre-computation stage.
Similarly, taking the number of incident rays as 10000, the intersection computation as A, one transmission/reflection computation as B, the number of transmission/reflection bounces as n, the angular resolution as 360 × 90, the number of samples per angle region as 10000, the computation of the weight matrix as C1, and the computation of multiplying the weight matrix by the pre-rendering results of the incident rays as C2, the method provided by this embodiment splits the computation for the same spatial cache region into a pre-computation stage and a real-time stage. The computation M2 of the pre-computation stage is: M2 = 10000 × (A + nB) + 360 × 90 × 10000 × (C1 + C2). The computation M3 of the real-time stage is: M3 = 360 × 90 × 10000 × 1, where the "1" in M3 denotes fetching an intermediate rendering result of an outgoing ray or angle region obtained in the pre-computation.
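For concreteness, the following sketch plugs illustrative magnitudes into M1, M2, and M3. The values of A, B, C1, C2, and n are hypothetical, chosen only to reflect that intersection and bounce work dwarfs the numerical weighting work:

```python
RES = 360 * 90        # angular resolution
SPAA = 10000          # samples per angle region / incident rays per region
A, B, n = 100.0, 20.0, 3   # hypothetical costs: intersection, one bounce, bounce count
C1, C2 = 1.0, 1.0          # hypothetical costs of the weight-matrix steps

M1 = RES * SPAA * (A + n * B)                      # conventional real-time cost
M2 = SPAA * (A + n * B) + RES * SPAA * (C1 + C2)   # pre-computation cost
M3 = RES * SPAA * 1                                # real-time cost: lookups only

print(f"M1 = {M1:.3g}, M2 + M3 = {M2 + M3:.3g}, real-time speedup = {M1 / M3:.0f}x")
# With these numbers M1/M3 = A + n*B = 160, and M1 > M2 + M3, as the text argues.
```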
First, comparing M1 and M3, i.e., the real-time computation of the conventional method versus the scheme provided by this embodiment, M1 is (A + nB) times M3. In other words, the conventional method spends (A + nB) times as long in real-time computation as the scheme provided by this embodiment. In general, when the three-dimensional models are numerous or complex, intersection and transmission/reflection computation is expensive and time-consuming, with intersection computation dominating. Therefore, the scheme provided by this embodiment greatly reduces the real-time computation and improves the efficiency of real-time ray tracing.
Second, compare M1 with (M2 + M3), i.e., the overall computation of the conventional method versus the scheme provided by this embodiment, where M2 + M3 = 10000 × (360 × 90 × (C1 + C2 + 1) + (A + nB)). The computation C1 of the weight matrix and the computation C2 of multiplying the weight matrix by the pre-rendering results of the incident rays are typical numerical computations, far smaller than intersection and transmission/reflection computation. Moreover, the expensive portion (i.e., A + nB) is not multiplied by the angular resolution. Therefore, the scheme provided by this embodiment is also smaller than the conventional method in overall computation.
It should be noted that, even with overall and real-time computation smaller than the conventional method, this embodiment can still guarantee a high-quality rendering result. This is because the solution of this embodiment still samples each angle region in each spatial cache region at high frequency (e.g., 10000 samples), and in rendering a high sampling frequency is an important factor in image quality.
Therefore, according to the scheme, the incident light and the emergent light are pre-calculated, the overall calculation amount and the real-time calculation amount in the ray tracing process are reduced, the efficiency of ray tracing calculation is effectively improved, and the high quality of a rendering result is guaranteed.
The present application further provides a rendering engine 300, as shown in fig. 8, including:
a communication unit 302, configured to acquire the content to be rendered and the related parameters in S101.
A storage unit 304, configured to store the model data and related parameters of each model acquired in S101, and to store the pre-rendering result of each incident ray in S103. The storage unit 304 is also configured to store the RGB values of the outgoing rays in S105, and to store the intermediate rendering results of each angle region of each spatial cache region in S107.
The processing unit 306 is configured to pre-render each model in the space in S101 and calculate the pre-rendering result of each incident ray. In S105, the processing unit 306 performs the operation of calculating the RGB values of the outgoing rays for each spatial cache region. The processing unit 306 is further configured to calculate, in S107, the intermediate rendering result of each angle region of each spatial cache region according to the RGB values of the outgoing rays. In S111, the processing unit 306 also performs the operation of obtaining the real-time rendering results of outgoing rays in the real-time computation based on the intermediate rendering results.
Optionally, the communication unit 302 is further configured to return the obtained intermediate rendering result in S109.
The present application also provides a computing device 400. As shown in fig. 9, the computing device includes: a bus 402, a processor 404, a memory 406, and a communication interface 408. The processor 404, memory 406, and communication interface 408 communicate over a bus 402. Computing device 400 may be a server or a terminal device. It should be understood that the present application is not limited to the number of processors, memories in the computing device 400.
The bus 402 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one line is shown in fig. 9, but this does not mean that there is only one bus or one type of bus. The bus 402 may include a path for transferring information between the components of the computing device 400 (e.g., the memory 406, the processor 404, and the communication interface 408).
The processor 404 may include any one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Microprocessor (MP), or a Digital Signal Processor (DSP).
In some possible implementations, the processor 404 may include one or more graphics processors. The processor 404 is configured to execute instructions stored in the memory 406 to implement the rendering method 100 described above.
In some possible implementations, processor 404 may include one or more central processors and one or more graphics processors. The processor 404 is configured to execute instructions stored in the memory 406 to implement the rendering method 100 described above.
The memory 406 may include volatile memory, such as Random Access Memory (RAM). The memory 406 may also include non-volatile memory, such as Read-Only Memory (ROM), flash memory, a Hard Disk Drive (HDD), or a Solid State Drive (SSD). The memory 406 stores executable program code that the processor 404 executes to implement the rendering method 100 described above. Specifically, the memory 406 stores the instructions used by the rendering engine 300 to execute the rendering method 100.
The communication interface 408 uses a transceiver module, such as, but not limited to, a network interface card or a transceiver, to enable communication between the computing device 400 and other devices or communication networks.
The embodiment of the present application also provides a computing device cluster. As shown in fig. 10, the computing device cluster includes at least one computing device 400. The computing devices included in the computing device cluster may all be terminal devices, may all be cloud servers, or may be partly cloud servers and partly terminal devices.
In the three deployments described above with respect to a cluster of computing devices, the memory 406 of one or more computing devices 400 in the cluster of computing devices may have stored therein the same instructions used by the rendering engine 300 to perform the rendering method 100.
In some possible implementations, one or more computing devices 400 in the cluster of computing devices may also be used to execute portions of the instructions used by the rendering engine 300 to perform the rendering method 100. In other words, a combination of one or more computing devices 400 may collectively execute the instructions used by the rendering engine 300 to perform the rendering method 100.
It is noted that the memory 406 in different computing devices 400 in the cluster of computing devices may store different instructions for performing portions of the functionality of the rendering method 100.
Fig. 11 shows one possible implementation. As shown in fig. 11, two computing devices 400A and 400B are connected via a communication interface 408. The memory in computing device 400A has stored thereon instructions for performing the functions of communication unit 302 and processing unit 306. Memory in computing device 400B has stored thereon instructions for performing the functions of storage unit 304. In other words, the memory 406 of the computing devices 400A and 400B collectively store instructions for the rendering engine 300 to perform the rendering method 100.
The connection manner between the computing devices in the cluster shown in fig. 11 takes into account that the rendering method 100 provided in the present application needs to store a large amount of pre-computed rendering results. Therefore, the storage function is performed by the computing device 400B.
It should be understood that the functionality of computing device 400A shown in fig. 11 may also be performed by multiple computing devices 400. Likewise, the functionality of computing device 400B may be performed by multiple computing devices 400.
In some possible implementations, the computing devices in a computing device cluster may be connected over a network, which may be a wide area network, a local area network, or the like. Fig. 12 shows one possible implementation: two computing devices 400C and 400D are connected via the network, each through its own communication interface. In this implementation, the memory 406 in the computing device 400C stores the instructions for executing the functions of the communication unit 302, while the memory 406 in the computing device 400D stores the instructions for executing the functions of the storage unit 304 and the processing unit 306.
The connection manner shown in fig. 12 takes into account that the rendering method 100 provided in the present application needs to store a large amount of pre-computed rendering results and perform a large amount of ray tracing computation; therefore, the functions implemented by the processing unit 306 and the storage unit 304 are performed by the computing device 400D.
It should be understood that the functionality of computing device 400C shown in fig. 12 may also be performed by multiple computing devices 400. Likewise, the functionality of computing device 400D may be performed by multiple computing devices 400.
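The two deployments of figs. 11 and 12 can be summarized as a mapping from computing devices to the functional units they execute; the following sketch uses an assumed dictionary format purely for illustration:

```python
# Unit-to-device mappings corresponding to figs. 11 and 12; the dictionary
# format and key names are illustrative assumptions, not from this application.
DEPLOYMENT_FIG11 = {
    "computing_device_400A": ["communication_unit_302", "processing_unit_306"],
    "computing_device_400B": ["storage_unit_304"],  # holds the bulky pre-computed results
}

DEPLOYMENT_FIG12 = {
    "computing_device_400C": ["communication_unit_302"],
    "computing_device_400D": ["storage_unit_304", "processing_unit_306"],  # storage + ray tracing
}
```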
The embodiment of the present application also provides a computer-readable storage medium. The computer-readable storage medium may be any available medium that a computing device can store, or a data storage device, such as a data center, that contains one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state drive), among others. The computer-readable storage medium includes instructions that instruct a computing device to perform the rendering method 100 applied to the rendering engine 300 described above.
The embodiment of the present application also provides a computer program product containing instructions. The computer program product may be software or a program product containing instructions that can run on a computing device or be stored in any available medium. When the computer program product runs on at least one computing device, the at least one computing device is caused to perform the rendering method 100 described above.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (17)

1. A rendering method for rendering an application, the application comprising at least one model, each model comprising a plurality of space cache areas, each space cache area comprising a plurality of angle areas, the method comprising:
in the process of rendering the current frame of the application, determining an angle area where an intersection point of the emergent ray and the model is located as a target angle area;
obtaining an intermediate rendering result of the target angle area, which is pre-calculated before the current frame is rendered, wherein the intermediate rendering result of the target angle area is determined according to pre-rendering results of a plurality of incident rays passing through a target space cache area where the target angle area is located;
and calculating the rendering result of the pixels in the current view plane according to the intermediate rendering result of the target angle area.
2. The method according to claim 1, wherein the calculating the rendering result of the pixel in the current view plane according to the intermediate rendering result of the target angle area comprises:
and calculating a rendering result of a pixel in the current view plane according to the intermediate rendering result of the target angle area and the intermediate rendering result of the angle area set, wherein the plurality of angle areas corresponding to the pixel comprise the target angle area and the angle area set, and the angle area set comprises at least one angle area on the model.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
and calculating a rendering result of another pixel in the current view plane according to the intermediate rendering result of the target angle area and the intermediate rendering result of the angle area set, wherein the plurality of angle areas corresponding to the another pixel comprise the target angle area and the angle area set, and the angle area set comprises at least one angle area on the model.
4. The method according to any one of claims 1 to 3, wherein the obtaining an intermediate rendering result of the target angle area pre-calculated before rendering the current frame of the application comprises:
performing ray tracing rendering on the plurality of incident rays passing through the target space cache area to obtain pre-rendering results of the plurality of incident rays;
and calculating the intermediate rendering result of the target angle area according to the pre-rendering results of the plurality of incident rays.
5. The method according to claim 4, wherein the calculating the intermediate rendering result of the target angle area according to the pre-rendering results of the plurality of incident rays comprises:
performing weighted summation on the pre-rendering results of the plurality of incident rays to obtain the intermediate rendering result of the target angle area.
6. The method according to claim 4 or 5, wherein the target space cache area is a hemisphere, and each incident ray is emitted from the sphere center of the target space cache area and directed toward the target space cache area.
7. The method according to any one of claims 1 to 6, further comprising:
and storing the intermediate rendering result.
8. A rendering engine, the engine comprising:
a processing unit, configured to determine, in the process of rendering the current frame of the application, the angle area where the intersection point of the emergent ray and the model is located as a target angle area; obtain an intermediate rendering result of the target angle area pre-calculated before the current frame is rendered, wherein the intermediate rendering result of the target angle area is determined according to pre-rendering results of a plurality of incident rays passing through a target space cache area where the target angle area is located; and calculate the rendering result of the pixels in the current view plane according to the intermediate rendering result of the target angle area.
9. The engine according to claim 8, wherein the processing unit is configured to calculate the rendering result of a pixel in the current view plane according to the intermediate rendering result of the target angle area and an intermediate rendering result of an angle area set, wherein the plurality of angle areas corresponding to the pixel include the target angle area and the angle area set, and the angle area set includes at least one angle area on the model.
10. The engine according to claim 8 or 9, wherein the processing unit is configured to calculate a rendering result of another pixel in the current view plane according to the intermediate rendering result of the target angle area and an intermediate rendering result of an angle area set, wherein the plurality of angle areas corresponding to the another pixel include the target angle area and the angle area set, and the angle area set includes at least one angle area on the model.
11. The engine according to any one of claims 8 to 10, wherein the processing unit is configured to perform ray tracing rendering on the plurality of incident rays passing through the target space cache area to obtain pre-rendering results of the plurality of incident rays, and to calculate the intermediate rendering result of the target angle area according to the pre-rendering results of the plurality of incident rays.
12. The engine according to claim 11, wherein the processing unit is configured to perform weighted summation on the pre-rendering results of the plurality of incident rays to obtain the intermediate rendering result of the target angle area.
13. The engine according to claim 12, wherein the target space cache area is a hemisphere, and each incident ray is emitted from the sphere center of the target space cache area and directed toward the target space cache area.
14. The engine according to any one of claims 8 to 13, further comprising:
a storage unit, configured to store the intermediate rendering result.
15. A cluster of computing devices comprising at least one computing device, each computing device comprising a processor and a memory;
the processor of the at least one computing device is to execute instructions stored in the memory of the at least one computing device to cause the cluster of computing devices to perform the method of any of claims 1 to 6.
16. A computer program product containing instructions which, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the method of any one of claims 1 to 6.
17. A computer-readable storage medium comprising computer program instructions which, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the method of any one of claims 1 to 6.