WO2022063260A1 - Rendering method, apparatus, and device - Google Patents

Rendering method, apparatus, and device

Info

Publication number
WO2022063260A1
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
current
patch
rendering result
target patch
Application number
PCT/CN2021/120584
Other languages
English (en)
French (fr)
Inventor
谢坤
尹青
Original Assignee
Huawei Cloud Computing Technologies Co., Ltd.
Application filed by Huawei Cloud Computing Technologies Co., Ltd.
Priority to EP21871629.8A (published as EP4213102A4)
Publication of WO2022063260A1
Priority to US18/189,677 (published as US20230230311A1)


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/06 - Ray-tracing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 - Finite element generation, e.g. wire-frame surface description, tesselation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality

Definitions

  • the present application relates to the field of graphics rendering, and in particular, to a rendering method, apparatus and device.
  • Ray tracing rendering has long been a foundational technology in computer graphics, and to date it remains the most important technique for producing high-quality, photorealistic images. However, it requires long computation times to complete the large number of Monte Carlo integration calculations needed to generate a final result, so it has traditionally been confined to offline rendering scenarios such as film, television, and animation. With the growth of computer hardware computing power in recent years, rendering businesses with strong real-time requirements (games, virtual reality) have emerged, and the demand for real-time ray tracing rendering has grown ever stronger.
  • the present application provides a rendering method, which can improve rendering efficiency.
  • a first aspect of the present application provides a rendering method for rendering an application, the application including at least one model, each model including a plurality of patches.
  • The method includes: in the process of rendering a current frame of the application, determining a target patch corresponding to a pixel in the current view plane corresponding to the current frame, the target patch being included in the plurality of patches; obtaining a historical rendering result of the target patch obtained during rendering of a historical frame of the application; and calculating a current rendering result of the pixel according to the historical rendering result of the target patch.
  • By reusing the historical rendering result of the target patch, the rendering method reduces the number of rays that must be traced when performing ray tracing rendering on the target patch in the current frame, improving the efficiency of ray tracing rendering.
  • In the process of rendering the current frame, the method calculates the current rendering result of the target patch in the current frame by obtaining the historical rendering result corresponding to the target patch in the historical frame. Further, according to the current rendering result of the target patch, the current rendering result of the pixels in the current view plane is calculated. Performing this procedure for each pixel of the view plane yields the rendering result of the entire view plane, that is, the current frame.
  • the method further includes: performing ray tracing rendering on the target patch to obtain an intermediate rendering result of the target patch.
  • Calculating the current rendering result of the pixel according to the historical rendering result of the target patch includes: calculating the current rendering result of the target patch according to the historical rendering result of the target patch and the intermediate rendering result of the target patch; and calculating the current rendering result of the pixel according to the current rendering result of the target patch.
  • By combining the intermediate rendering result obtained from ray tracing the target patch with its historical rendering result, the rendering method improves the quality of the target patch's current rendering result while the number of tracing rays emitted for the target patch remains unchanged, effectively improving the efficiency of ray tracing rendering.
  • the step of performing ray tracing rendering on the target patch to obtain intermediate rendering results for the target patch may occur before or after obtaining historical rendering results for the patch.
  • this step may also occur simultaneously with the step of obtaining the historical rendering results of the patch.
  • The method further includes: determining that the number of samples corresponding to the historical rendering result of the target patch is higher than a threshold. In this case, calculating the current rendering result of the pixel according to the historical rendering result of the target patch includes: taking the historical rendering result of the target patch as the current rendering result of the target patch, where the current rendering result of the target patch is used to calculate the current rendering result of the pixel.
  • Using the historical rendering result of the target patch directly as its current rendering result avoids performing ray tracing rendering on the target patch and directly reuses the patch's historical rendering result, which effectively improves the overall rendering efficiency of the current view plane.
  • the method further includes: determining that the number of samples corresponding to the historical rendering result of the target patch is not higher than a threshold.
  • In this case, calculating the current rendering result of the pixel according to the historical rendering result of the target patch includes: performing ray tracing rendering on the target patch to obtain an intermediate rendering result of the target patch; calculating the current rendering result of the target patch according to the intermediate rendering result and the historical rendering result of the target patch; and calculating the rendering result of the pixel according to the current rendering result of the target patch.
  • Here, reusing the historical rendering result of the target patch reduces the number of tracing rays emitted for the target patch, which effectively improves the overall rendering efficiency of the current view plane.
  • The rendering result of the target patch acquired in this process is also the intermediate rendering result of the target patch.
  • ray tracing rendering may be performed on the target patch to obtain an intermediate rendering result of the target patch, where the number of tracing rays of the ray tracing rendering performed here is less than a threshold.
  • the method further includes: storing the current rendering result of the target patch.
  • the current rendering result of the target patch can be stored in memory for reuse in the rendering process of subsequent frames.
  • the method further includes: generating the current view plane in a first application, and generating a historical rendering result of the target patch in a second application.
  • the method further includes: generating the historical rendering result of the target patch and the current view plane in the same application.
  • the method further includes: obtaining historical rendering results of the target patch based on ray tracing rendering.
  • A second aspect of the present application provides a rendering engine including a processing unit and a storage unit. The processing unit is configured to, in the process of rendering a current frame of an application: determine a target patch corresponding to a pixel in the current view plane; obtain the historical rendering result of the target patch obtained during rendering of a historical frame of the application; and calculate the current rendering result of the pixel according to the historical rendering result of the target patch, where the application includes at least one model and each model includes a plurality of patches. The storage unit is configured to store the historical rendering results of the target patches obtained during rendering of the historical frames of the application.
  • The processing unit is further configured to: before calculating the current rendering result of the pixel according to the historical rendering result of the target patch, perform ray tracing rendering on the target patch to obtain an intermediate rendering result of the target patch; determine the current rendering result of the target patch according to the historical rendering result of the target patch and the intermediate rendering result of the target patch; and determine the current rendering result of the pixel according to the current rendering result of the target patch.
  • The processing unit is further configured to: determine that the number of samples corresponding to the historical rendering result of the target patch is higher than a threshold; and use the historical rendering result of the target patch as its current rendering result, which is then used to determine the current rendering result of the pixel.
  • The processing unit is further configured to: determine that the number of samples corresponding to the historical rendering result of the target patch is not higher than a threshold; perform ray tracing rendering on the target patch to obtain an intermediate rendering result of the target patch; determine the current rendering result of the target patch according to the intermediate rendering result and the historical rendering result of the target patch; and determine the rendering result of the pixel according to the current rendering result of the target patch.
  • the storage unit is used to store the current rendering result of the target patch.
  • A third aspect of the present application provides a computer program product comprising instructions which, when executed by a computing device cluster, cause the computing device cluster to perform the method provided by the first aspect or any possible design of the first aspect.
  • A fourth aspect of the present application provides a computer-readable storage medium comprising computer program instructions which, when executed by a computing device cluster, cause the computing device cluster to perform the method provided by the first aspect or any possible design of the first aspect.
  • A fifth aspect of the present application provides a computing device cluster including at least one computing device, each computing device including a processor and a memory. The processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device, so as to cause the computing device cluster to perform the method provided by the first aspect or any possible design of the first aspect.
  • In a possible design, the computing device cluster includes one computing device including a processor and a memory. The processor is configured to execute instructions stored in the memory to run the rendering engine provided by the second aspect or any possible design of the second aspect, so as to cause the computing device to perform the method provided by the first aspect or any possible design of the first aspect.
  • the computing device cluster includes at least two computing devices, each computing device including a processor and memory.
  • The processors of the at least two computing devices are configured to execute the instructions stored in the memories of the at least two computing devices to run the rendering engine provided by the second aspect or any possible design of the second aspect, so that the computing device cluster performs the method provided by the first aspect or any possible design of the first aspect.
  • Each computing device runs a part of the units included in the rendering engine.
  • FIG. 1(a) is a schematic diagram of a rendering structure under a single viewpoint provided by an embodiment of the present application.
  • FIG. 1(b) is a schematic diagram of patch division provided by an embodiment of the present application.
  • FIG. 1(c) is a schematic diagram of the correspondence between a pixel and a patch provided by an embodiment of the present application.
  • FIG. 1(d) is a schematic diagram of a pixel projection area provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a rendering structure provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an application scenario including multiple processes provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a multi-view scene structure provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a rendering method provided by an embodiment of the present application.
  • FIG. 6 is a current initial public information table provided by an embodiment of the present application.
  • FIG. 7 is a current initial correspondence table provided by an embodiment of the present application.
  • FIG. 8 is a current public information table provided by an embodiment of the present application.
  • FIG. 9 is a current correspondence table provided by an embodiment of the present application.
  • FIG. 10 is a flowchart of another rendering method provided by an embodiment of the present application.
  • FIG. 11 is a current shared information table provided by an embodiment of the present application.
  • FIG. 12 is another current correspondence table provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a rendering engine provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a computing device cluster provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a connection mode of a computing device cluster provided by an embodiment of the present application.
  • FIG. 17 is another schematic diagram of a connection mode of a computing device cluster provided by an embodiment of the present application.
  • The terms "first" and "second" in the embodiments of the present application are used for description only and should not be understood as indicating or implying relative importance or the number of indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
  • A patch is the smallest planar unit in two-dimensional or three-dimensional space.
  • Before rendering, the models in a space need to be divided into many tiny planes.
  • These planes, also known as patches, can be any polygon, but triangles and quadrilaterals are most commonly used.
  • The intersections of the edges of these patches are the vertices of each patch.
  • Patches can be divided arbitrarily according to information such as the material or color of the model. Note also that each patch has two sides, and usually only one side is visible; therefore, in some cases backface culling needs to be performed on the patches.
  • the number of rays traced per patch (sample per mesh, SPM):
  • the number of rays traced per patch refers to the number of rays that pass through each patch.
  • a patch is the smallest unit in three-dimensional space.
  • the screen we see is composed of pixels arranged one by one, and each pixel corresponds to one or more patches in the space.
  • The color of a pixel is calculated from the colors (red, green, blue, RGB) of its corresponding patches.
  • the number of rays traced per patch can affect the result of the rendering.
  • A larger number of traced rays per patch means that more rays are cast from the viewpoint toward the model in 3D space. The more rays cast on each patch, the more accurate the rendering result computed for that patch.
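  • As an illustration of this relationship, the following minimal Python sketch (with invented names and a stand-in shading function, not the patent's method) treats the patch color as a Monte Carlo average over traced rays; the estimation error shrinks as the SPM grows:

```python
import random

def shade_ray(true_color, noise=0.2):
    # Stand-in for a full ray evaluation: each traced ray returns the
    # patch's true RGB value perturbed by Monte Carlo noise.
    return [c + random.uniform(-noise, noise) for c in true_color]

def estimate_patch_color(true_color, spm):
    # Monte Carlo estimate: average the results of `spm` traced rays.
    samples = [shade_ray(true_color) for _ in range(spm)]
    return [sum(ch) / spm for ch in zip(*samples)]

true_color = [0.8, 0.4, 0.1]
for spm in (1, 16, 256):
    est = estimate_patch_color(true_color, spm)
    err = max(abs(e - t) for e, t in zip(est, true_color))
    print(f"SPM={spm:4d}  max error ~ {err:.3f}")  # error shrinks as SPM grows
```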
  • Rasterization is the process of converting 3D graphics into a raster image on a two-dimensional viewing plane.
  • The rasterization process consists of two parts. The first determines which integer grid areas in window coordinates are occupied by basic primitives; the second assigns a rendering result and a depth value to each such area. Converting the mathematical description of a model and its associated color information into pixels at the corresponding screen positions, together with the colors used to fill those pixels, is called rasterization.
  • Ray tracing is a general technique from geometric optics that traces rays as they interact with optical surfaces to obtain a model of the paths the rays travel. It is used in the design of optical systems such as camera lenses, microscopes, telescopes, and binoculars. When used for rendering, rays are traced backward from the eye rather than from the light source to construct a mathematical model of the scene. The result is similar to that of ray casting and scanline rendering, but with better optical fidelity: for example, reflection and refraction are simulated more accurately. This method is therefore often used when such high-quality results are sought.
  • The ray tracing method first calculates the distance a ray travels in the medium before it is absorbed or changes direction, along with its new direction and new position. A new ray is then generated from this new position, and the same procedure is applied until a complete path of the light through the medium is computed. Since the algorithm fully simulates the imaging system, complex pictures can be simulated.
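  • The following self-contained Python sketch illustrates the path computation described above under toy assumptions (a single ground plane as the scene and a fixed absorption probability; all names are hypothetical):

```python
import random

def intersect_ground(origin, direction, plane_y=0.0):
    # Toy scene query: where does the ray next meet the plane y = plane_y?
    # Returns the contact point, or None if the ray travels away from it.
    if direction[1] >= 0:
        return None
    t = (plane_y - origin[1]) / direction[1]
    return tuple(o + t * d for o, d in zip(origin, direction))

def trace_path(origin, direction, absorb_prob=0.3, max_bounces=8):
    # Compute the ray's path segment by segment: find the next contact
    # point, then either terminate (absorption) or generate a new ray
    # from that position, as described above.
    path = [origin]
    for _ in range(max_bounces):
        hit = intersect_ground(origin, direction)
        if hit is None:
            break  # ray left the toy scene
        path.append(hit)
        if random.random() < absorb_prob:
            break  # absorbed by the medium
        # new diffuse ray from the contact point (random upward direction)
        direction = (random.uniform(-1, 1), random.uniform(0.1, 1.0),
                     random.uniform(-1, 1))
        origin = hit
    return path

print(trace_path(origin=(0.0, 2.0, 0.0), direction=(0.2, -1.0, 0.1)))
```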
  • graphics rendering has gradually become the focus of the industry.
  • There are two main graphics rendering technologies: rasterization and ray tracing.
  • Rasterization can be implemented through ray-casting calculations. However, additional visual effects such as soft shadows, global illumination, and caustics must be modeled separately and processed with other methods. For example, global illumination is approximated with techniques such as light maps and irradiance maps, and soft shadows are approximated with shadow map techniques. This development approach is cumbersome, and the fitted visual effects are often unsatisfactory.
  • Although rasterization rendering can support simultaneous rendering of multiple viewpoints, the implementation requires additional angle transformations during the final perspective transformation, and the accuracy is poor. Therefore, the rendering technology discussed below is mainly ray tracing.
  • the ray tracing mentioned in this article refers to a method of obtaining rendering results by simulating the casting of rays. Specifically, methods such as backward ray tracing, distributed ray tracing, and bidirectional path tracing can be included.
  • Figure 1(a) shows a schematic diagram of a rendering structure under a single viewpoint.
  • the rendering structure includes at least a virtual view point 100 , a virtual view plane 200 , a model 300 and a light source 302 .
  • The virtual viewpoint 100 simulates a person's eye or eyes in space for perceiving three-dimensional structure. Each frame of picture corresponds to one space. According to the number of viewpoints, virtual viewpoints 100 can be divided into monocular, binocular, and multi-view viewpoints. Specifically, a binocular or multi-view viewpoint acquires two or more images from two or more different viewpoints to reconstruct the 3D structure or depth information of the target model.
  • the virtual viewing plane 200 is an analog display screen in space.
  • the construction of the virtual view plane 200 is mainly determined by two factors, the distance from the virtual view point 100 to the virtual view plane 200 and the screen resolution.
  • the distance from the virtual view point 100 to the virtual view plane 200 refers to the vertical distance from the virtual view point 100 to the virtual view plane 200 . Further, the distance can be set as required.
  • the screen resolution refers to the number of pixels contained in the virtual viewing plane 200 .
  • the virtual view plane 200 includes one or more pixels.
  • the virtual view plane 200 includes 9 pixels (3*3).
  • the results obtained through the rendering operation can be used for output.
  • The rendering results of all the pixels in the virtual view plane 200 together constitute one frame of picture. That is, in one ray tracing pass, one virtual view plane 200 corresponds to one frame of picture.
  • Corresponding to the virtual viewing plane is a display screen on the client side for outputting the final result.
  • the screen resolution of the display is not necessarily equal to the screen resolution of the virtual viewing plane.
  • the rendering result on the virtual viewing plane 200 may be output to the display screen at a ratio of 1:1.
  • the rendering result on the virtual viewing plane 200 is output to the display screen according to a certain ratio.
  • the calculation of the specific ratio belongs to the prior art, and details are not repeated here.
  • One or more models 300 may be contained in the space. Which models 300 can be included in the rendering result corresponding to the virtual view plane 200 is determined by the relative position between the corresponding virtual view point 100 and each model 300 .
  • Before rendering operations, the surface of the model usually needs to be divided into multiple patches. The size and shape of each patch may or may not be uniform. The specific method for dividing patches belongs to the prior art and is not described here.
  • FIG. 1( b ) shows the patch division of one face of the model 300 .
  • One face of the model 300 is divided into 6 triangular patches of different sizes.
  • All the vertices in the space include not only the intersections of the faces of the model 300 (e.g., D1, D2, D4, D6), but also the patch vertices within each face (e.g., D0, D3, D5).
  • Figure 1(c) shows a schematic diagram of the correspondence between pixels and patches.
  • The bold block in FIG. 1(c) is the projection of a pixel of the virtual view plane 200 onto the model 300 in FIG. 1(a). It can be seen that this pixel projection area covers part of each of patches 1 to 6.
  • the pixel projection area indicates the area enclosed by the projection of the pixel on the model.
  • a pixel projection area can cover multiple patches or only one patch. Wherein, when one pixel projection area covers only one patch, it may cover the entire area of the patch, or may cover part of the area of the patch.
  • In FIG. 1(d), the projection area of each of several pixels covers part of patch 6; that is, patch 6 covers multiple pixel projection areas at the same time.
  • each model in the space can be divided into multiple polygonal patches, and all the vertices in the space are a collection of vertices of each polygonal patch.
  • the pixel projection area corresponding to one pixel may cover one or more patches, and one patch may also cover the pixel projection area corresponding to one or more pixels.
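  • This many-to-many correspondence can be illustrated with a small Python sketch (the coverage data below is invented for illustration); inverting the pixel-to-patch map answers which pixel projection areas each patch covers:

```python
from collections import defaultdict

# Invented coverage data mirroring FIG. 1(c)/(d): one pixel's projection
# area may cover several patches, and one patch may span several pixels.
pixel_to_patches = {
    (0, 0): [1, 2, 6],
    (0, 1): [1, 6],
    (1, 0): [6],
}

# Invert the map: which pixel projection areas does each patch cover?
patch_to_pixels = defaultdict(list)
for pixel, patches in pixel_to_patches.items():
    for patch in patches:
        patch_to_pixels[patch].append(pixel)

print(dict(patch_to_pixels))  # patch 6 covers three pixel projection areas
```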
  • the light source 302 is a virtual light source set in the space for generating a lighting environment in the space.
  • The light source 302 may be any of the following types: point light source, area light source, line light source, and the like. One or more light sources 302 may be included in the space, and when there are multiple light sources 302, their types may differ.
  • Operations such as the setting of the virtual viewpoint, the setting of the virtual viewing plane, the establishment of the model, and the division of the patches in the above-mentioned space are usually completed before the rendering operation is performed.
  • The above steps may be performed by a rendering engine, such as a video rendering engine or a game rendering engine (for example, Unity or Unreal Engine).
  • the rendering engine can receive the above-mentioned relative positional relationship and related information.
  • The information includes the type and number of virtual viewpoints, the distance from the virtual viewing plane to the virtual viewpoint and the screen resolution, the lighting environment, the relative positional relationship between each model and the virtual viewpoint, the patch division of each model, patch number information, patch material information, and the like.
  • the rendering engine may further execute the rendering method 600 below.
  • Figure 2 shows a ray traced scene graph.
  • the figure includes a virtual viewpoint 100, a virtual viewing plane 200, a model 300, a light source 302, and three rays (a first ray, a second ray, and a third ray).
  • The virtual view plane 200 presents the rendering result in units of pixels, and the rendering result of each pixel is equal to the average of the rendering results of the rays passing through that pixel in this ray tracing pass.
  • the calculation of the rendering result of each ray belongs to the prior art, so it is not repeated here.
  • Every ray of light is emitted from a light source and, after touching one or more patches in space, undergoes one of the following at each contact point: refraction, reflection, or diffuse reflection. The ray then passes through the virtual view plane 200 and finally enters the virtual viewpoint 100, that is, the eyes of the user.
  • each patch has certain color and material characteristics.
  • the material of the patch can be divided into the following three types: transparent material, smooth opaque material and rough opaque material.
  • The interaction of a patch with light can be divided into three types: refraction, reflection, and diffuse reflection. Light is refracted when it touches a transparent material, reflected when it touches an opaque material with a smooth surface, and diffusely reflected when it touches an opaque material with a rough surface.
  • For diffuse reflection, the color of the light leaving the contact point is usually the same from all angles.
  • That is, the same diffusely reflecting point appears the same color when observed from two different virtual viewpoints.
  • At the contact point, the color of the outgoing light is determined by the color of the light source and of the patch where the contact point is located, and the colors of all points on the same patch are the same. The per-point approximation can therefore be extended to tiny units such as patches.
  • the rendering results of each point can be stored on each patch.
  • the rendering results are stored in units of patches, which is beneficial to improve the computational efficiency of ray tracing.
  • The calculation of the rendering result of each patch in ray tracing requires a certain number of rays to be sampled on the patch.
  • The rendering result of each patch can be determined from the colors of these rays. For example, the rendering result stored on the patch can be determined by discarding the rendering results of rays whose sample values deviate excessively, and then averaging the rest.
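  • A minimal Python sketch of such a rule, assuming a median-based outlier test (one possible choice; the text does not fix the exact test):

```python
def patch_result(ray_colors, max_dev=0.25):
    # Discard ray samples that deviate too far from the per-channel median,
    # then average the rest to get the color stored on the patch.
    channels = list(zip(*ray_colors))
    medians = [sorted(ch)[len(ch) // 2] for ch in channels]
    kept = [c for c in ray_colors
            if all(abs(v - m) <= max_dev for v, m in zip(c, medians))]
    kept = kept or ray_colors  # never discard everything
    return [sum(ch) / len(kept) for ch in zip(*kept)]

# Three rays hit the same patch; the third sample is clearly abnormal
# and is removed before averaging:
print(patch_result([(0.80, 0.42, 0.10), (0.78, 0.40, 0.12), (0.10, 0.90, 0.90)]))
```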
  • The virtual view plane 200 includes 9 pixels in total. At least three rays emitted from the virtual viewpoint 100 pass through the middle pixel (the pixel with the bold border): the first ray, the second ray, and the third ray. Taking the first ray as an example, after it touches patch 6 of the model 300 in space, the outgoing ray returns to the light source 302.
  • If the material of the triangular patch 6 enclosed by the vertices D0, D1, and D6 is a transparent material or a smooth opaque material, that is, refraction or reflection occurs, then the rendering result of the first ray is not stored on patch 6.
  • If diffuse reflection occurs instead, the rendering result of the first ray can be stored on patch 6.
  • The first contact point corresponding to the first ray and the third contact point corresponding to the third ray both fall inside patch 6.
  • the rendering results of the first ray and the third ray may be different.
  • the rendering result stored on the patch 6 can be determined by averaging the rendering results of the above two rays.
  • Alternatively, the obviously abnormal one of the two rendering results may be removed, and the rendering result of the other ray used as the rendering result stored on patch 6.
  • Current ray tracing rendering methods are limited by computing power and the design of traditional graphics processing unit (GPU) architectures, and can render only within the viewpoint range of one viewpoint at a time. For example, when multiple users are online and enter the same rendering scene, rendering results cannot be shared between their GPU rendering processes. In fact, for the same rendering scene, a large number of rays can share light paths within the ranges of different user viewpoints. Specifically, the light path distribution, the light intensity distribution, the probability distribution function of the current light distribution, and the light transport matrix of a large number of rendered scenes can be shared and are unbiased.
  • The following provides a rendering method whose results can be shared among multiple viewpoints. Specifically, when multiple viewpoints are in the same space at the same time, the intermediate ray tracing results can be shared among them, and the rendering results are then output accordingly. Further, to improve the quality of the output image, the intermediate rendering result of a patch obtained by ray tracing from multiple viewpoints whose spaces partially contain the same patch may also be used.
  • The multiple processes may belong to one application or to different applications.
  • the scene includes at least two different frames, that is, a historical frame and a current frame.
  • The formation time of the historical frame precedes the formation time of the current frame.
  • one virtual view point corresponds to one virtual view plane
  • the rendering result of one ray tracing on one virtual view plane is one frame of picture.
  • One virtual viewpoint may correspond to one process. While one process generates a frame of picture based on ray tracing, other processes can simultaneously generate their own frames.
  • process 400 may compose content to be rendered 410 by sending information 404 to model library 408 .
  • The model library 408 includes one or more models. Generally speaking, the position of each model in space is fixed. Optionally, the position of each model can also be controlled by the process 400 by sending instructions. It should be noted that, for the model library 408, a light source may also be a model with specific parameters.
  • Parameters and instructions may be included in the information 404 .
  • the parameters include at least the coordinate parameters of the virtual viewpoint and the virtual viewpoint plane in space. Instructions can include modifications and movements of the model.
  • the model library 408 generates an initialization model set, that is, the content to be rendered 410 , by configuring the models.
  • the content to be rendered includes one or more models and model information.
  • the patch rendering result corresponding to a process in the historical frame can be used to determine the rendering result of a process in the current frame.
  • the patch rendering result corresponding to a certain process may include patch rendering results corresponding to one or more frames.
  • the rendering result of the current view plane may be calculated according to the patch rendering result corresponding to at least one historical frame of a certain process.
  • the history frame may include one or more frames.
  • The following uses as an example a case in which the rendering result of a certain process in a historical frame corresponds to one frame of view-plane rendering.
  • Process 400 may generate content to be rendered 410 by sending information 404 to model repository 408 .
  • the rendering engine may generate a patch rendering result 416 according to the content to be rendered 410 . Further, rendering results 420 may be obtained for output.
  • the process 500 may generate the rendering result 516 in a manner similar to the above-described process.
  • When the rendering engine generates the patch rendering result 512, it can obtain it at least according to the patch rendering result 414 in the historical frame.
  • the premise of the above situation is that the content to be rendered 410 and the content to be rendered 508 contain one or more identical patches.
  • the patch rendering result 418 can also be used to calculate the patch rendering result 512 .
  • Process 400 and process 500 correspond to the same virtual viewpoint in different frames. That is, process 400 and process 500 are different processes in the same application, and the main difference lies in their running times.
  • the history frame is the previous frame of the current frame.
  • The picture corresponding to the same viewpoint changes little across several consecutive frames, especially between adjacent frames. Therefore, reusing one or more patch rendering results from one or more historical frames can improve both the quality and the acquisition speed of the patch rendering results in the current frame.
  • the process 400 and the process 500 correspond to different virtual viewpoints in different frames. That is, the process 400 and the process 500 are actually different processes running at different times in the same application.
  • the history frame is the previous frame of the current frame.
  • two different viewpoints can be two players who are physically far away from each other.
  • Even so, the two spaces corresponding to the rendered pictures of these two players have a high probability of containing the same patches in adjacent frames.
  • the number of people online at the same time is usually between 100,000 and 1 million.
  • Most of the players' pictures are concentrated in a few typical scenes, where each scene corresponds to a space containing one or more patches.
  • the process shown in FIG. 3 may be executed on a local device or may be executed in the cloud.
  • the model library and rendering engine have similar deployment/running environments.
  • The local device can be one or more servers, or a terminal device.
  • The process can also run on one or more cloud servers.
  • Model repositories can be deployed on local devices.
  • The local device can be one or more servers, or a terminal device.
  • The model library can also be deployed on a cloud server, since its storage requirements are high.
  • the input of the rendering engine is the content to be rendered, and the output is the rendering result corresponding to the content to be rendered.
  • the rendering result of the patches included in the content to be rendered may also be output.
  • the rendering engine may be a computing device cluster composed of one or more computing devices, may also be a computer program product, or may be a physical device.
  • the above devices or products can be deployed on the local device side.
  • they can also be deployed on the cloud server side.
  • both the process and the model library can be deployed on the local device.
  • the rendering engine can be deployed on the cloud server side.
  • Alternatively, the process can be deployed on the local device, while the rendering engine and the model library are deployed on the cloud server side.
  • the process 400 and the process 500 are actually different processes in different applications.
  • the process can run on the local device, while the model library and rendering engine are more suitable for deployment on the cloud server side.
  • a multi-view scene structure diagram includes at least two virtual viewpoints 100 and 102 , virtual view planes 200 and 202 corresponding to the two virtual viewpoints, a model 300 and a light source 302 .
  • the virtual viewpoint here corresponds to the process in FIG. 3 .
  • one virtual viewpoint corresponds to one process.
  • the model can correspond to the model in the model library in FIG. 3 .
  • FIG. 4 includes two frames, left and right: the historical frame on the left and the current frame on the right.
  • In the current frame, a current initial public information table is established for the one or more models 300 contained in the content to be rendered.
  • the content to be rendered includes a model 300 .
  • The public information table is established in units of the patches in the model, and includes information such as the rendering result of each patch.
  • an initial correspondence table is established for each virtual view plane in the current frame. Taking the virtual view plane 206 corresponding to the virtual view point 106 as an example, a current initial correspondence table is established.
  • the correspondence table is established in units of pixels in the virtual view plane, and also includes information such as the correspondence between each pixel and each patch in the model, and the color of each pixel.
  • Similarly, in the historical frame, a historical initial public information table has been established for the model 300, and a historical initial correspondence table has been established for the virtual view plane 202.
  • the historical public information table has been obtained according to the historical initial public information table and the historical initial corresponding relationship table in the historical frame.
  • the acquisition time of the historical public information table is earlier than the acquisition time of the current public information table, but the acquisition time of the historical public information table is not necessarily earlier than the establishment time of the current initial public information table.
  • the current correspondence table can be obtained according to the historical public information table, and then the rendering result of the virtual view plane 206 can be obtained.
  • Two possible implementations are described below. In the following two possible implementation manners, the current frame and the historical frame adopt the same implementation manner.
  • the current initial public information table is established according to the historical public information table, and then, according to the current initial corresponding relationship table, the patches that need to be ray traced in the current initial public information table are determined. After ray tracing rendering is performed on the patch, the current initial public information table is updated to obtain the current public information table. Further, the current initial correspondence table is updated according to the current public information table to obtain the current correspondence table. Finally, according to the current correspondence table, the rendering result corresponding to the view plane 206 is determined.
  • ray tracing rendering is first performed on the patches in the content to be rendered, and a current initial public information table is established according to the result of ray tracing rendering. Then, the current initial public information table is updated according to the historical public information table to obtain the current public information table. The current initial correspondence table is updated according to the current public information table to obtain the current correspondence table. Finally, according to the current correspondence table, the rendering result corresponding to the view plane 206 is determined.
  • FIG. 5 shows a flowchart of a rendering method, which introduces a rendering method 600 .
  • FIG. 5 shows the rendering flowcharts of two view planes, corresponding respectively to the current view plane and the historical view plane.
  • the current view plane corresponds to the current frame
  • the historical view plane corresponds to the historical frame.
  • the rendering method for the historical view plane is the same as the rendering method for the current view plane, and both are the rendering method 600 . Therefore, the following description is mainly based on the flowchart corresponding to the current view plane.
  • the rendering method may be performed by the rendering engine 800 .
  • The method includes three parts: the preprocessing part, the ray tracing part, and the part for obtaining the current rendering result.
  • the preprocessing part includes S400 to S404.
  • S400: The rendering engine 800 acquires the current content to be rendered and related parameters.
  • the rendering engine 800 obtains the current content to be rendered.
  • The current content to be rendered may be generated based on the processes in FIG. 3.
  • the current content to be rendered can be formed after models are selected and combined in the model library according to the parameters and instructions included in the process. Therefore, the content to be rendered can be obtained from a device or process that can call the model library according to the process information.
  • the content to be rendered includes one or more models and information of each model.
  • For example, the information of the model 300 in FIG. 4 includes the patch division of each model, the patch numbers, and the coordinates of each model and each patch.
  • the relevant parameters include the coordinates of the virtual view point and the virtual view plane, as well as light source parameters and the like.
  • After obtaining the current content to be rendered and the related parameters, the rendering engine 800 can render the current content to be rendered.
  • S402: The rendering engine 800 establishes a current initial public information table in units of the patches in the current content to be rendered.
  • The current initial public information table may be established according to the numbers of the individual patches in the current content to be rendered obtained in S400. Specifically, the current initial public information table includes the number of each patch and the sampling value, rendering result, and material of each patch.
  • The sampling value refers to the number of rays whose first contact with a patch in space, during ray tracing, is this patch.
  • Color representation methods include RGB mode, CMYK mode, and Lab mode, etc.
  • the RGB mode is used as an example below.
  • FIG. 6 shows a current initial public information table, in which the patches are numbered sequentially from 1 to p, where p indicates the number of patches in the space.
  • the sampling value and the stored rendering result need to be initialized.
  • the initial value of the sampling value of each patch may be set to 0.
  • the current initial public information table may also be initialized according to the historical public information table obtained in the historical frame.
  • the specific information on how to obtain the historical public information table will be introduced below.
  • Specifically, the sampling value and the corresponding rendering result of each patch can be queried in the historical public information table. The sampling values and rendering results obtained by the query are then used to update the initial sampling values and rendering results corresponding to each patch in the current initial public information table.
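  • A possible in-memory shape for this table, sketched in Python with hypothetical field names, including the optional initialization from a historical public information table:

```python
from dataclasses import dataclass

@dataclass
class PatchEntry:
    # One row of the public information table of FIG. 6 (field names assumed).
    material: str
    sample_count: int = 0                # sampling value, initialised to 0
    result: tuple = (0.0, 0.0, 0.0)      # stored RGB rendering result

def init_public_table(patch_materials, history=None):
    # Build the current initial public information table; optionally reuse
    # the sampling value and rendering result from the historical table.
    table = {pid: PatchEntry(material=m) for pid, m in patch_materials.items()}
    for pid, hist in (history or {}).items():
        if pid in table:
            table[pid].sample_count = hist.sample_count
            table[pid].result = hist.result
    return table

history = {1: PatchEntry("diffuse", sample_count=64, result=(0.8, 0.4, 0.1))}
table = init_public_table({1: "diffuse", 2: "transparent"}, history)
print(table[1], table[2], sep="\n")
```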
  • It should be noted that step S400 and step S402 do not have a fixed execution sequence.
  • Optionally, step S402 may be performed before step S400, after step S400, or simultaneously with step S400.
  • S404: The rendering engine 800 establishes a current initial correspondence table corresponding to the current view plane.
  • the corresponding position of each patch on the current view plane can be determined, thereby establishing the correspondence between each patch in the current content to be rendered and each pixel in the current view plane. Further, according to the corresponding relationship, a current initial corresponding relationship table can be established.
  • the current initial correspondence table includes the correspondence between the pixel and the patch, the depth value of the patch, the stored rendering result, and the rendering result of the pixel.
  • A patch is a tiny unit in three-dimensional space. After a series of coordinate-system transformations, from the model coordinate system to the world coordinate system, then to the view coordinate system, then to the projection coordinate system, and finally to the viewport coordinate system, the patch is finally mapped onto the two-dimensional view plane.
  • Each pixel in the view plane is traversed, and it is judged whether part or all of the area of the pixel is covered by a patch. For each pixel covered by a patch, the correspondence between the pixel and the covering patch is recorded.
  • the coverage relationship between pixels and patches has been introduced above and will not be repeated here.
  • the pixels in the view plane need to be numbered.
  • FIG. 7 shows a current initial correspondence table between pixels and patches under the current view plane.
  • the current initial correspondence table includes the correspondence between the pixel and the patch, the depth value of the patch, the rendering result, and the rendering result of the pixel.
  • the correspondence between pixels and patches may be that one pixel corresponds to one or more patches.
  • patches 1, 2 and 6 all cover part or all of the area of pixel 1.
  • Patch m covers part or all of the area of pixel n. where n and m are not necessarily equal.
  • the areas covered by the multiple patches may be different or the same. Specifically, because the depths of each patch are different, there may be a situation where the regions covered by two or more patches in the same pixel overlap.
  • pixels and patches may also be that one or more pixels correspond to one patch.
  • patch 1 covers part or all of both pixels 1 and 2.
  • The depth of each patch can be calculated from the depths of its vertices, that is, the intersections of the line segments enclosing the patch.
  • the depth of each patch may be equal to the average of the depths of the aforementioned vertices.
  • the average value may be an arithmetic average value or a weighted average value.
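  • A small Python sketch of this depth computation, supporting both the arithmetic and the weighted variant mentioned above:

```python
def patch_depth(vertex_depths, weights=None):
    # Depth of a patch as the average of its vertex depths:
    # arithmetic by default, weighted if weights are supplied.
    if weights is None:
        return sum(vertex_depths) / len(vertex_depths)
    assert len(weights) == len(vertex_depths)
    return sum(w * d for w, d in zip(weights, vertex_depths)) / sum(weights)

# Triangular patch with vertex depths 1.0, 1.2 and 1.7:
print(patch_depth([1.0, 1.2, 1.7]))             # arithmetic mean
print(patch_depth([1.0, 1.2, 1.7], [2, 1, 1]))  # weighted mean
```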
  • the visible patch corresponding to each pixel can be determined according to the depth of the patch and the material of each patch in the current initial public information table, thereby improving the efficiency of ray tracing rendering.
  • A visible patch can serve as a target patch, that is, as a target of ray tracing rendering. The specific method for determining visible patches is described in detail in S406.
  • In step S404, the rendering result stored on each visible patch and the rendering result of each pixel also need to be initialized. Specifically, both are initialized to 0.
  • steps S402 and S404 do not have a fixed execution sequence.
  • step S404 may be performed before step S402, may be performed after step S402, or may be performed simultaneously with step S402.
  • the patches to be ray traced can be further determined.
  • the ray tracing part includes S406 and S408.
  • S406: The rendering engine 800 performs ray tracing on some of the patches according to the current initial correspondence table and the current initial public information table.
  • the visible patch set corresponding to each pixel is determined.
  • The visible patch set refers to the set of patches, among the patches corresponding to a pixel, that are visible patches. Ray tracing is then performed on some of the visible patches according to the sampling value of each patch in the visible patch set and the sampling threshold.
  • the visible patch set corresponding to each pixel can be obtained.
  • the visible patch set in each pixel is determined respectively.
  • If a pixel corresponds to only one patch, that patch is a visible patch.
  • When a pixel corresponds to multiple patches, the patches should be arranged in ascending order of depth.
  • In this order, the visible patches include the patches whose depth values are less than or equal to the depth value of the first patch of opaque material.
  • For example, if the pixel n corresponds only to the patch m, the patch m is a visible patch regardless of whether the patch m is of an opaque material.
  • Pixel 1 corresponds to patches 1, 2, and 6. Assume that the depth relationship of these three patches is D1 < D2 < D6, that patch 2 is of an opaque material, and that patches 1 and 6 are both of transparent materials. Then, for pixel 1, the depth of the first opaque patch is D2, so the visible patches include the patches with depth values less than or equal to D2, namely patches 1 and 2.
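  • The visible-patch rule in this example can be sketched in Python as follows (the depth and material values below are illustrative):

```python
def visible_patches(patch_ids, depth, opaque):
    # Sort a pixel's patches by increasing depth and keep everything up to
    # and including the first opaque patch (all patches are kept if none
    # is opaque).
    ordered = sorted(patch_ids, key=lambda pid: depth[pid])
    visible = []
    for pid in ordered:
        visible.append(pid)
        if opaque[pid]:
            break
    return visible

# Pixel 1 corresponds to patches 1, 2 and 6 with D1 < D2 < D6;
# patch 2 is opaque, patches 1 and 6 are transparent:
depth = {1: 0.5, 2: 0.8, 6: 1.3}
opaque = {1: False, 2: True, 6: False}
print(visible_patches([1, 2, 6], depth, opaque))  # -> [1, 2]
```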
  • the visible patch sets corresponding to different pixels in the viewing plane may be different.
  • the ray tracing operation can be performed on some patches according to the sampling threshold in the current initial public information table.
  • The sampling threshold can be set as required. Specifically, the sampling value corresponding to each patch in the above visible patch set is queried in the current initial public information table.
  • If the sampling value of a patch is not less than the sampling threshold, the rendering result of the patch is directly obtained to update the current initial public information table.
  • If the sampling value of the patch is less than the sampling threshold, ray tracing is performed on the patch.
  • the operation of ray tracing is performed on the patch based on a certain probability.
  • k patches are randomly selected from the to-be-sampled patch set for random sampling.
  • k can be set as required, and k is less than or equal to the number of patches in the patch set to be sampled.
  • The k patches can be selected by a simple random method, by a low-discrepancy sequence, or by other methods.
  • the method of random sampling can be simple random sampling or super sampling.
  • the process of ray tracing is to emit rays from the virtual viewpoint to k patches in space, and perform ray tracing.
  • the virtual viewpoint can respectively emit the same number of rays to the k patches, and can also emit different numbers of rays respectively. It should be noted that, regardless of whether the same number of rays or different numbers of rays are emitted, the number of rays reaching each patch in each sampling can be less than or equal to the sampling threshold.
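  • The selection logic of this step can be sketched in Python as follows (the field layout and names are assumptions; simple random selection is used for the k patches):

```python
import random

def select_patches_to_trace(visible_set, table, sample_threshold, k):
    # Patches whose sampling value is already at or above the threshold
    # reuse their stored result; from the remaining to-be-sampled set,
    # k patches are chosen at random for ray tracing.
    # `table` maps patch id -> (sample_count, stored_result).
    reused, to_sample = [], []
    for pid in visible_set:
        sample_count, _ = table[pid]
        (reused if sample_count >= sample_threshold else to_sample).append(pid)
    k = min(k, len(to_sample))
    traced = random.sample(to_sample, k)  # simple random choice of k patches
    return reused, traced

table = {1: (64, (0.8, 0.4, 0.1)), 2: (3, (0.2, 0.2, 0.2)), 6: (0, (0, 0, 0))}
print(select_patches_to_trace([1, 2, 6], table, sample_threshold=16, k=1))
```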
  • If the sampling value of a patch is 1, it means that one ray's first contact in space was this patch, and the color contribution of that ray to the patch is calculated. If the sampling value of a patch is greater than 1, it means that two or more rays first touched this patch in space.
  • The calculation of the intermediate rendering result of the above patch is realized by separately calculating the color of each ray emitted from the virtual viewpoint.
  • the rendering result is the color of the light.
  • Optionally, the intermediate rendering result is equal to the average of the rendering results of the rays sampled on the patch.
  • the average value may be an arithmetic average value or a weighted average value.
  • S408: The rendering engine 800 obtains the current correspondence table and the current public information table according to the ray tracing results.
  • the current public information table and the current correspondence table can be obtained.
  • As described above, for a patch whose sampling value is not less than the sampling threshold, the rendering result of the patch is directly obtained for updating the current initial public information table.
  • In this case, the sampling value of the patch in the current initial public information table is not modified. Therefore, the information of the patch remains consistent between the current initial public information table and the current public information table.
  • For a patch on which ray tracing was performed, the current rendering result can be obtained according to the rendering result of the patch in the current initial public information table and the intermediate rendering result of the patch obtained in step S406. Further, the current initial public information table is updated according to the current rendering result.
  • Optionally, the current rendering result can be determined by calculating the average of the rendering result of the patch in the current initial public information table and the intermediate rendering result of the patch obtained in step S406.
  • the average value may be an arithmetic average value or a weighted average value.
  • An update operation is also performed on the sampling values of the sampled patches in the current initial public information table.
  • FIG. 8 shows a current public information table.
  • Patch 1 is a visible patch, and its sampling value S1 in the current initial public information table shown in FIG. 6 is greater than the sampling threshold. That is, when updating the public information table, the sampling value and the rendering result of patch 1 do not need to be updated.
  • Patch 2 is a visible patch, and its sampling value S2 in the current initial public information table shown in FIG. 6 is smaller than the sampling threshold. That is, its sampling value and rendering result in the current public information table shown in FIG. 8 need to be updated.
  • the rendering result C2 in FIG. 6 is updated to C2'.
  • C2' is equal to the average of C2 and the intermediate rendering result of patch 2 obtained in step S406.
  • Correspondingly, the sampling value S2 is updated to S2', where S2' is equal to S2 plus the number k of rays sampled on patch 2 in step S406.
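  • Assuming the weighted-average option mentioned above (weighting by sample counts is one reasonable choice; the text leaves the exact averaging open), the update of S2 and C2 can be sketched as:

```python
def update_patch(sample_count, stored_result, new_result, num_new_rays):
    # Blend the stored result C2 with the intermediate result, weighting
    # by sample counts, and add the newly traced rays to S2.
    total = sample_count + num_new_rays
    blended = tuple(
        (sample_count * c_old + num_new_rays * c_new) / total
        for c_old, c_new in zip(stored_result, new_result)
    )
    return total, blended  # (S2', C2')

# Patch 2: S2 = 3 stored samples, k = 5 new rays traced this frame
s2p, c2p = update_patch(3, (0.20, 0.20, 0.20), (0.30, 0.10, 0.25), 5)
print(s2p, [round(c, 3) for c in c2p])  # -> 8 [0.263, 0.138, 0.231]
```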
  • the current initial correspondence table in FIG. 7 can be updated to obtain the current correspondence table.
  • the visible patch rendering result in the current correspondence table can be obtained.
  • Pixel rendering results can then be obtained. For example, as shown in FIG. 7, for pixel 1, among its corresponding patches 1, 2, and 6, patch 6 is an invisible patch. Therefore, taking patches 1 and 2 as units and querying the current public information table shown in FIG. 8, the rendering results C1 and C2' of the visible patches corresponding to pixel 1 can be obtained.
  • The pixel rendering result can then be determined from the visible patch rendering results corresponding to the pixel; it may be equal to their average, which can be an arithmetic average or a weighted average. For example, in the current correspondence table shown in FIG. 9, the rendering result P1 of pixel 1 is equal to the average of C1 and C2'.
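  • A minimal sketch of this per-pixel averaging, assuming a correspondence table that maps each pixel to (patch id, visibility) pairs and a public information table that maps patch ids to stored colors (both layouts are assumptions for illustration):

```python
def pixel_result(pixel_id, correspondence, public_table):
    """The pixel color is the arithmetic mean of the stored colors
    of the visible patches that the pixel corresponds to."""
    visible = [pid for pid, vis in correspondence[pixel_id] if vis]
    colors = [public_table[pid] for pid in visible]
    n = len(colors)
    return [sum(channel) / n for channel in zip(*colors)]

correspondence = {1: [(1, True), (2, True), (6, False)]}   # patch 6 invisible
public_table = {1: [0.9, 0.1, 0.1], 2: [0.1, 0.9, 0.1]}
print(pixel_result(1, correspondence, public_table))       # P1 = avg(C1, C2')
```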
  • In another possible implementation, the rendering result stored for the patch in the current initial public information table and the expanded sequence of intermediate rendering results of the patch obtained in step S206 may be combined into a first sequence, and the current rendering result can be determined by calculating the variance of this first sequence.
  • As noted in the previous implementation, the current rendering result may be equal to the mean of the stored rendering result of the patch and the intermediate rendering result obtained in step S206. In other words, the current rendering result can be obtained by multiplying the sequence formed by these two results by a certain coefficient matrix; dividing the current rendering result by that coefficient matrix therefore recovers a sequence, namely the expanded current rendering result sequence. Similarly, dividing the intermediate rendering result obtained in step S206 by the coefficient matrix yields the expanded sequence of intermediate rendering results for step S206.
  • It should be noted that in this embodiment the rendering result is updated frame by frame, so the coefficient matrix may be a fixed value or the coefficient matrix used when updating the rendering result in the previous frame. For the intermediate rendering result of the patch obtained in step S206 in the first frame (that is, the rendering result initialized in step S202), a coefficient matrix with a fixed value may be used.
  • In this kind of implementation, for example, patch 1 is a visible patch whose sampling value S1 in the current initial public information table shown in FIG. 6 is greater than the sampling threshold; when the public information table is updated, neither its sampling value nor its rendering result needs to be updated. Patch 2, by contrast, is a visible patch whose sampling value S2 in FIG. 6 is smaller than the sampling threshold, so its sampling value and rendering result need to be updated.
  • Specifically, the intermediate rendering result of patch 2 obtained in step S206 and the expanded C2 corresponding to patch 2 are formed into a new sequence for patch 2. The variance of this new sequence is calculated, and the current rendering result is determined from the variance and the first variance threshold, which can be set as required.
  • When the variance of the new sequence is greater than the first variance threshold, the rendering result corresponding to patch 2 in the current initial public information table is updated to C2', where C2' is equal to the intermediate rendering result of patch 2 obtained in step S206, and the sampling value S2 corresponding to patch 2 is updated to S2', where S2' is equal to 1.
  • When the variance of the new sequence is less than or equal to the first variance threshold, the rendering result corresponding to patch 2 in the current initial public information table is updated to C2', where C2' is equal to the average of C2 and the intermediate rendering result of patch 2 obtained in step S206; the average may be an arithmetic average or a weighted average.
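  • The variance test above can be sketched as follows. One point is an interpretation rather than something the text states outright: the expanded sequence is approximated here by repeating the stored mean once per prior sample, and the sampling-value update in the averaging branch follows the earlier averaging variant; the helper name and dict layout are illustrative:

```python
import statistics

def variance_update(entry, intermediate_color, k, var_threshold):
    """entry: {'sample_count': S, 'color': C} for one patch (S >= 1).
    The stored mean C is expanded into a pseudo-sequence of S equal
    samples per channel; the new intermediate result is appended and
    the largest per-channel variance is tested against the threshold."""
    s, c_old, c_new = entry["sample_count"], entry["color"], intermediate_color
    var = max(statistics.pvariance([old] * s + [new])
              for old, new in zip(c_old, c_new))
    if var > var_threshold:
        # History disagrees with the new sample: restart from it (S2' = 1).
        entry["color"], entry["sample_count"] = list(c_new), 1
    else:
        entry["color"] = [(a + b) / 2.0 for a, b in zip(c_old, c_new)]
        entry["sample_count"] = s + k          # as in the averaging variant
    return entry

patch2 = {"sample_count": 3, "color": [0.5, 0.5, 0.5]}
print(variance_update(patch2, [0.52, 0.49, 0.5], k=4, var_threshold=0.01))
```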
  • Based on the current public information table obtained in either of the two cases above, the current initial correspondence table in FIG. 7 can be updated. From the current public information table, the visible patch rendering results in the current correspondence table can be obtained, and the pixel rendering results follow. For example, for pixel 1, among the corresponding patches 1, 2 and 6, patch 6 is invisible; querying the current public information table shown in FIG. 8 for patches 1 and 2 yields the visible patch rendering results C1 and C2' corresponding to pixel 1.
  • The pixel rendering result can then be determined from the visible patch rendering results corresponding to the pixel; it may be equal to their average, which can be an arithmetic average or a weighted average. For example, in the current correspondence table shown in FIG. 9, the rendering result P1 of pixel 1 is equal to the average of C1 and C2'.
  • It should be noted that the historical public information table used in step S202 is obtained in the same way as the current public information table in this step.
  • In step S206, ray tracing rendering was performed on the patches in the to-be-sampled patch set, and an intermediate rendering result of each of these patches was obtained. From these intermediate rendering results, the current public information table and the current correspondence table can be obtained, and the current rendering result follows.
  • Specifically, the part of obtaining the current rendering result includes S210.
  • S210: the rendering engine 800 obtains the current rendering result.
  • According to the rendering result of each pixel in the current correspondence table, the current rendering result can be obtained.
  • It should be noted that the current rendering result obtained in S210 can be output directly on the screen, or used as the original image/data for a subsequent denoising operation.
  • FIG. 10 is a flowchart of another rendering method, rendering method 700. It shows the rendering flows of two view planes, corresponding respectively to the current view plane and the historical view plane; in terms of time order, the current view plane corresponds to the current frame and the historical view plane corresponds to the historical frame.
  • As described above, the present application provides at least two ways of obtaining the current correspondence table from the historical public information table and then obtaining the rendering result of the view plane. The first is the rendering method 600 shown in FIG. 5; the second, rendering method 700, is described below.
  • Rendering method 600 initializes the current initial public information table from the historical public information table before ray tracing the content to be rendered, then obtains the rendering result of the current patch, and finally obtains the current rendering result of the content to be rendered. Rendering method 700 instead first performs ray tracing rendering on the content to be rendered to obtain the intermediate rendering result of the current patch, and then obtains the current rendering result of the content to be rendered from the historical public information table.
  • the ray tracing rendering performed before the multiplexing step in the rendering method 700 may be conventional ray tracing rendering. Therefore, the intermediate rendering result of the current patch in the rendering method 700 may be the rendering result of the patch obtained by conventional ray tracing.
  • Optionally, it may also be a ray tracing method such as rendering method 600; in that case, the intermediate rendering result of the current patch in rendering method 700 is the rendering result of the current patch obtained in rendering method 600.
  • It should be noted that the intermediate rendering result of the current patch mentioned in rendering method 700 is not the same quantity as the one mentioned in rendering method 600. In both methods, the intermediate rendering result denotes a rendering result of the patch obtained before the final rendering result of the current patch; that is, any in-process rendering result acquired while computing the current rendering result of the patch is an intermediate rendering result of that patch. Generally, ray tracing rendering can be performed on the patch to obtain its intermediate rendering result, with the number of traced rays being less than a threshold.
  • the rendering method for the historical view plane is the same as the rendering method for the current view plane, and both are the rendering method 700 . Therefore, the following description is mainly based on the flowchart corresponding to the current view plane.
  • the rendering method 700 may be performed by the rendering engine 800 .
  • The method includes three parts: the ray tracing part, the information multiplexing part, and the part of obtaining the current rendering result.
  • The ray tracing part consists of S400 and S402.
  • S400: the rendering engine 800 acquires the current content to be rendered, the related parameters, the current intermediate public information table and the current intermediate correspondence table.
  • the current intermediate public information table obtained by the rendering engine 800 may be the current public information table obtained in the conventional ray tracing process.
  • it may also be the current public information table obtained in step S208 of the rendering method 600 .
  • the method for obtaining the current public information table obtained in the conventional ray tracing process is similar to the method for establishing the current initial public information table in step S202 of the rendering method 600 , and thus will not be described again.
  • the current intermediate correspondence table obtained by the rendering engine 800 may be the current correspondence table obtained in the conventional ray tracing process.
  • it may also be the current correspondence table obtained in step S208 of the rendering method 600 .
  • the method for obtaining the current correspondence table obtained in the conventional ray tracing process is similar to the method for establishing the current initial correspondence table in step S204 of the rendering method 600 , and thus will not be described again.
  • S402: the rendering engine 800 performs ray tracing on the current content to be rendered and obtains an intermediate rendering result of the current patch.
  • As mentioned above, the current content to be rendered includes one or more models, and each model includes at least one patch. Therefore, performing ray tracing on the current content to be rendered in this step yields the intermediate rendering result of the current patch.
  • the specific ray tracing method may be the prior art, which will not be described again.
  • the specific ray tracing method may also be performed with reference to the rendering method 600 .
  • the rendering result of the current patch obtained in S208 in the rendering method 600 is the intermediate rendering result of the current patch in this step.
  • After the ray tracing in this step, the sampling value of each patch also changes. In some possible implementations, the sampling value of a patch in the content to be rendered may therefore be compared with the threshold at this point; if the sampling value is greater than the threshold, the rendering result can be output directly.
  • However, first, reuse can further improve the rendering result of the patch; second, rendering method 700 is intended for ray tracing with low sampling values, and it is generally expected that after the sampling in S402 the sampling value of the patch is still smaller than the threshold. For these two reasons, the following description assumes that the patch sampling value is less than the threshold.
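  • The resulting per-patch control flow of rendering method 700 can be sketched as below; trace_rays and reuse_history are hypothetical helpers standing in for the S402 tracing pass and the S404/S406 reuse steps, and the dict-based table is an assumed layout:

```python
def render_patch(patch_id, sampling_threshold, table, trace_rays, reuse_history):
    """table: patch_id -> {'sample_count': S, 'color': C}.
    trace_rays(patch_id) is assumed to trace a small ray budget and
    return (k, intermediate_color); reuse_history(patch_id, color)
    merges historical results as in S404/S406. Both are hypothetical."""
    k, intermediate = trace_rays(patch_id)            # low-sample pass (S402)
    entry = table[patch_id]
    entry["sample_count"] += k
    if entry["sample_count"] > sampling_threshold:
        return intermediate                           # output directly
    return reuse_history(patch_id, intermediate)      # S404/S406 reuse path
```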
  • After ray tracing rendering is performed on the content to be rendered, the intermediate rendering result of the current patch is available; by reusing part of the information in the historical public information table, the rendering result of the current patch can then be obtained. The information multiplexing part consists of S404 and S406.
  • S404: the rendering engine 800 establishes a current shared information table in units of the patches included in the current content to be rendered.
  • Taking the patches in the current content to be rendered as units, the current shared information table can be established from the historical public information table; how the historical public information table is obtained is introduced later.
  • Optionally, the current shared information table may also be established in units of the visible patches in the current content to be rendered. The method for determining visible patches is similar to the one described in step S206 of rendering method 600 and is not repeated here. The following description takes this visible-patch-based construction as an example.
  • First, based on the visible patch set of each pixel, a current total visible patch set corresponding to the current view plane is established: the patches in the visible patch set of every pixel are extracted and combined into one set. For example, as shown in FIG. 11, the current total visible patch set includes at least patches 1, 2 and n, where n is less than or equal to p in FIG. 6, that is, no more than the total number of patches in the current content to be rendered.
  • Note that the visible patch sets of different pixels may partially overlap. For example, in FIG. 9 the visible patch set of pixel 1 includes patches 1 and 2, while that of pixel 2 includes patches 1, 3 and 4; patch 1 is a visible patch for both pixels. A patch that appears repeatedly across pixels only needs to appear once in the current total visible patch set.
  • Taking the patches in the current total visible patch set as units, the current shared information table can be established. It includes all visible patches in the current content to be rendered, the numbers of the historical view planes associated with each visible patch, and the rendering result of each visible patch in each such space. The shared information table also holds the updated patch rendering result, initialized to 0.
  • Specifically, a patch in the current total visible patch set is selected and the historical public information tables stored in the rendering engine 800 are searched. When a historical public information table contains information on the patch, the rendering result corresponding to the patch is retrieved, and the per-view-plane rendering results of the patch in the current shared information table are filled in accordingly.
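  • A minimal sketch of building the current shared information table, assuming per-pixel visible patch sets and one historical public information table per historical view plane, both as plain dicts (an assumed layout, not the application's):

```python
def build_shared_table(per_pixel_visible, history_tables):
    """per_pixel_visible: pixel -> set of visible patch ids in the
    current view plane.
    history_tables: view_plane_id -> {patch_id: color}, the historical
    public information tables kept by the rendering engine.
    Returns patch_id -> list of (view_plane_id, color) entries, each
    patch appearing once even if several pixels see it."""
    total_visible = set().union(*per_pixel_visible.values())   # dedup
    shared = {}
    for pid in total_visible:
        shared[pid] = [(vp, table[pid])
                       for vp, table in history_tables.items()
                       if pid in table]
    return shared

per_pixel = {1: {1, 2}, 2: {1, 3, 4}}          # patch 1 seen by both pixels
history = {2: {1: [0.9, 0.1, 0.1]}, 3: {1: [0.8, 0.2, 0.1]}}
print(build_shared_table(per_pixel, history))  # patch 1 -> C1-2, C1-3
```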
  • Optionally, in rendering method 700, since the rendering results of some patches can be reused from the historical public information table after ray tracing, the number of traced rays can be appropriately reduced during ray tracing to improve its efficiency. Whether to reduce the number of traced rays can be decided according to the actual situation.
  • It should be noted that, especially when the number of traced rays is reduced, the number of rays traced in step S402 plus the number of traced rays recorded in the historical patch rendering results acquired in step S404 may still be less than the sampling threshold for some patches. In such cases, the probability of this situation occurring in the application can be estimated, and whether to adopt this kind of method is decided according to that probability. Optionally, all patches involved in the application can be classified, and only the patches of selected models execute this kind of rendering method.
  • In this kind of implementation, as shown in FIG. 11, patch 1 in the current content to be rendered is also a visible patch in the second, third and ninth spaces, and its rendering results in those three view planes are C1-2, C1-3 and C1-9 respectively. When a historical public information table contains information on the patch, its rendering result is extracted and entered into the shared information table. Proceeding in this way, a current shared information table can be established for all visible patches in the current content to be rendered.
  • S406: the rendering engine 800 obtains the current correspondence table and the current public information table according to the current shared information table and the intermediate rendering result of the current patch.
  • From the rendering results associated with each view plane in the shared information table of FIG. 11, the updated rendering results of the visible patches can be obtained. Specifically, the intermediate rendering result of the current patch and the per-view-plane rendering results in the current shared information table are formed into a second sequence, and the rendering result of the visible patch is updated according to the variance of the second sequence and a second variance threshold.
  • When the variance of the second sequence is less than or equal to the second variance threshold, the rendering result of the visible patch in the current shared information table is updated to the average of the second sequence; the average may be an arithmetic average or a weighted average. The rendering result corresponding to the patch in the current intermediate correspondence table may also be updated.
  • As noted above, the current intermediate public information table may be the current public information table obtained in rendering method 600 (FIG. 8); in other words, the intermediate rendering result of the current patch can be read from FIG. 8. The following therefore takes FIG. 8 as an example.
  • When the variance of the second sequence is greater than the second variance threshold, the current shared information table is updated instead: the space numbers corresponding to the current patch in FIG. 11 and the rendering results in the corresponding spaces are cleared, and the rendering result of the visible patch in the current intermediate correspondence table is not updated.
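  • Both branches of this second-sequence test can be sketched as follows, assuming RGB colors as length-3 lists; merge_with_history is an illustrative name, and the largest per-channel variance stands in for the unspecified variance of a color sequence:

```python
import statistics

def merge_with_history(intermediate_color, history_colors, var_threshold):
    """Second-sequence test of S406: combine the patch's intermediate
    result with its colors from historical view planes. Returns the
    updated color and whether the history entries should be cleared."""
    channels = list(zip(intermediate_color, *history_colors))
    var = max(statistics.pvariance(ch) for ch in channels)
    if var <= var_threshold:
        # Consistent history: C1' is the mean of the whole sequence.
        return [sum(ch) / len(ch) for ch in channels], False
    # Inconsistent history: keep the current result, clear the entries.
    return list(intermediate_color), True

c1 = [0.8, 0.2, 0.1]
hist = [[0.82, 0.18, 0.1], [0.79, 0.21, 0.12], [0.81, 0.2, 0.09]]
print(merge_with_history(c1, hist, var_threshold=0.01))
```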
  • After the current shared information table with updated visible patch rendering results is obtained, the current correspondence table in FIG. 12 can be derived, and from it the rendering result of the current content to be rendered.
  • For example, the updated stored rendering result determined in FIG. 11 for visible patch 1 is C1' (when the variance test passes, C1' is the average of C1, C1-2, C1-3 and C1-9). At least the entries involving patch 1 among the visible patch rendering results in the current correspondence table are updated to C1'. The rendering result P1' of pixel 1 is then equal to the average of C1' and C2''.
  • While the current correspondence table is obtained, the current intermediate public information table (as shown in FIG. 8) can also be updated with the patch rendering results to yield the current public information table.
  • It should be noted that the historical public information table acquired in step S404 is obtained in the same way as the current public information table.
  • After the current correspondence table is obtained, the rendering result of the current content to be rendered can be computed. Next comes the part of obtaining the current rendering result.
  • S408: the rendering engine 800 obtains the rendering result of the current content to be rendered.
  • From the rendering results of the visible patches in the current correspondence table, the rendering result of each pixel can be determined, yielding the rendering result of the content to be rendered that was acquired in step S400.
  • The rendering result obtained in S408 can be output directly on the screen, or used as the original image/data for a subsequent denoising operation.
  • the present application also provides a rendering engine 800, as shown in FIG. 13, including:
  • the communication unit 802 is configured to acquire the current content to be rendered and related parameters at S200.
  • the communication unit 802 is further configured to receive the set sampling threshold in S206.
  • the communication unit 802 is further configured to receive the first variance threshold in S208.
  • in S400, the communication unit 802 is configured to acquire the current content to be rendered, the related parameters, the current intermediate public information table and the current intermediate correspondence table.
  • the communication unit 802 is further configured to receive the second variance threshold in S406.
  • the storage unit 804 is configured to store the model data of the application acquired in S200.
  • the storage unit 804 is configured to store the current initial public information table and the historical public information table obtained in S202. It is also used to store the current initial correspondence table obtained in S204. Both the current correspondence table and the current public information table obtained in S208 are stored in the storage unit 804 .
  • the storage unit 804 is also used for storing the current rendering result obtained in S210.
  • in rendering method 700, the storage unit 804 is configured to store the current content to be rendered, the related parameters, the current intermediate public information table and the current intermediate correspondence table obtained in S400. Both the intermediate rendering result of the current patch obtained in S402 and the current shared information table obtained in S404 are stored in the storage unit 804.
  • the storage unit 804 is further configured to store the current public information table and the current correspondence table obtained in S406.
  • the storage unit 804 also stores the rendering result of the content to be rendered currently obtained in S408.
  • the processing unit 806, in the rendering method 600, is configured to establish the current initial public information table in S202 and the current initial correspondence table in S204.
  • the processing unit 806 is further configured to perform ray tracing on some of the patches according to the current correspondence table and the current public information table in S206.
  • the operation of obtaining the current correspondence table and the current common information table in S208 is also performed by the processing unit 806 .
  • the obtaining operation of the current rendering result in S210 is also performed by the processing unit 806 .
  • the processing unit 806 is configured to perform ray tracing on the current content to be rendered and obtain an intermediate rendering result of the current patch in S402.
  • the operation of establishing the current shared information table in S404 is also performed by the processing unit 806 .
  • the processing unit 806 is configured to obtain the current public information table and the current correspondence table according to the current shared information table and the intermediate rendering result of the current patch.
  • the obtaining operation of the current rendering result in S408 is also performed by the processing unit 806 .
  • the processing unit 806 may include a multiplexing unit 808 and a ray tracing unit 810 .
  • the multiplexing unit 808 is configured to establish the current initial public information table in S202 and the current initial correspondence table in S204.
  • the ray tracing unit 810 is configured to perform ray tracing on some of the patches according to the current correspondence table and the current public information table in S206.
  • the operation of obtaining the current correspondence table and the current common information table in S208 is also performed by the ray tracing unit 810 .
  • the obtaining operation of the current rendering result in S210 is also performed by the ray tracing unit 810 .
  • the multiplexing unit 808 is configured to perform ray tracing on the current content to be rendered and obtain an intermediate rendering result of the current patch in S402.
  • the operation of establishing the current shared information table in S404 is also performed by the multiplexing unit 808 .
  • the ray tracing unit 810 is configured to obtain the current public information table and the current correspondence table according to the current shared information table and the intermediate rendering result of the current patch.
  • the obtaining operation of the current rendering result in S408 is also performed by the ray tracing unit 810 .
  • optionally, the communication unit 802 is further configured to return the current rendering results obtained in S210 and S408.
  • the present application also provides a computing device 900 .
  • the computing device includes a bus 902 , a processor 904 , a memory 906 and a communication interface 908 . Communication between processor 904 , memory 906 and communication interface 908 is via bus 902 .
  • Computing device 900 may be a server or a terminal device. It should be understood that the present application does not limit the number of processors and memories in computing device 900.
  • the bus 902 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus or the like.
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of representation, only one line is shown in FIG. 14, but it does not mean that there is only one bus or one type of bus.
  • The bus 902 may include pathways for communicating information between various components of computing device 900 (e.g., memory 906, processor 904, communication interface 908).
  • the processor 904 may include any one or more of processors such as a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), or a digital signal processor (DSP).
  • processor 904 may include one or more graphics processors.
  • the processor 904 is used to execute the instructions stored in the memory 906 to implement the rendering method 600 or the rendering method 700 described above.
  • processor 904 may include one or more central processing units and one or more graphics processors. The processor 904 is used to execute the instructions stored in the memory 906 to implement the rendering method 600 or the rendering method 700 described above.
  • Memory 906 may include volatile memory, such as random access memory (RAM).
  • the memory 906 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • the memory 906 stores executable program codes, and the processor 904 executes the executable program codes to implement the aforementioned rendering method 600 or rendering method 700 .
  • the memory 906 stores instructions of the rendering engine 800 for executing the rendering method 600 or the rendering method 700 .
  • the communication interface 908 uses transceiver modules such as, but not limited to, network interface cards and transceivers to implement communication between the computing device 900 and other devices or communication networks. For example, information 404, information 406, etc. may be obtained through the communication interface 908.
  • Embodiments of the present application further provide a computing device cluster.
  • the computing device cluster includes at least one computing device 900 .
  • the computing devices included in the computing device cluster may all be terminal devices, may all be cloud servers, or may be partly cloud servers and partly terminal devices.
  • in the three deployment modes above, the memory 906 of one or more computing devices 900 in the computing device cluster may store the same instructions of the rendering engine 800 for executing the rendering method 600 or the rendering method 700.
  • one or more computing devices 900 in the computing device cluster may also be used to execute some instructions of the rendering engine 800 for executing the rendering method 600 or the rendering method 700 .
  • a combination of one or more computing devices 900 may collectively execute the instructions of rendering engine 800 for performing rendering method 600 or rendering method 700 .
  • the memory 906 in different computing devices 900 in the computing device cluster may store different instructions for performing some functions of the rendering method 600 or the rendering method 700 .
  • Figure 16 shows one possible implementation.
  • two computing devices 900A and 900B are connected through a communication interface 908 .
  • Instructions for performing the functions of the communication unit 802 , the multiplexing unit 808 , and the ray tracing unit 810 are stored on memory in the computing device 900A.
  • Instructions for performing the functions of storage unit 804 are stored on memory in computing device 900B.
  • memory 906 of computing devices 900A and 900B collectively stores instructions for rendering engine 800 to perform rendering method 600 or rendering method 700 .
  • The connection manner shown in FIG. 16 takes into account that the rendering method 600 or rendering method 700 provided by the present application needs to store a large number of historical rendering results of the patches in historical frames; the storage function is therefore handed over to computing device 900B.
  • it should be understood that the functions of computing device 900A shown in FIG. 16 may also be performed by multiple computing devices 900.
  • the functions of computing device 900B may also be performed by multiple computing devices 900 .
  • one or more computing devices in a cluster of computing devices may be connected by a network.
  • the network may be a wide area network or a local area network, or the like.
  • Figure 17 shows one possible implementation. As shown in FIG. 17, two computing devices 900C and 900D are connected through a network. Specifically, the network is connected through a communication interface in each computing device.
  • instructions for executing the communication unit 802 and the multiplexing unit 808 are stored in the memory 906 in the computing device 900C. At the same time, the memory 906 in the computing device 900D stores instructions to execute the storage unit 804 and the ray tracing unit 810 .
  • The connection manner shown in FIG. 17 takes into account that the rendering method 600 or rendering method 700 provided by the present application requires a large amount of ray tracing computation and must store the historical rendering results of patches in a large number of historical frames; the functions implemented by the ray tracing unit 810 and the storage unit 804 are therefore handed over to computing device 900D.
  • it should be understood that the functions of computing device 900C shown in FIG. 17 may also be performed by multiple computing devices 900.
  • the functions of computing device 900D may also be performed by multiple computing devices 900 .
  • Embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium may be any available medium in which a computing device can store data, or a data storage device, such as a data center, containing one or more available media.
  • the available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state drives), and the like.
  • the computer-readable storage medium includes instructions that instruct a computing device to perform the above-described rendering method 600 or 700 applied to the rendering engine 800 .
  • Embodiments of the present application also provide a computer program product including instructions.
  • the computer program product may be a software or program product containing instructions, capable of being executed on a computing device or stored in any available medium.
  • when the computer program product is run on at least one computing device, the at least one computing device is caused to perform the above rendering method 600 or 700.


Abstract

A rendering method and apparatus. The method is used for rendering an application that includes at least one model, each model including a plurality of patches. In the process of rendering the current frame of the application, the method determines a target patch corresponding to a pixel in the current view plane, obtains the historical rendering result of the target patch produced during the rendering of a historical frame of the application, and calculates the current rendering result of the pixel according to the historical rendering result of the target patch. By reusing historical rendering results to calculate the current rendering result, the method effectively improves the efficiency of ray tracing rendering.

Description

一种渲染方法、装置及设备 技术领域
本申请涉及图形渲染领域,特别涉及一种渲染方法、装置及设备。
背景技术
光线追踪渲染技术一直是计算机图形学领域的基础技术,至今为止,该技术是实现高品质,真实感,高画质图像的最主要技术。但该技术一直以来,需要较长的计算时间,才能完成大量的蒙特卡洛积分计算过程,生成最终计算结果。所以,该技术一直应用在离线渲染场景,如影视,动画等领域。随着计算机硬件算力升级,近年来,随着一些对于实时性需要较强的渲染业务领域(游戏,虚拟现实)开始出现,对于光线追踪渲染技术的需要越来越强烈。
针对光线追踪渲染技术,如何实现实时的图形渲染成为了业界重点关注的问题。
发明内容
本申请提供了一种渲染方法,该方法可以提升渲染的效率。
本申请的第一方面提供了一种渲染方法,该渲染方法用于渲染应用,该应用包括至少一个模型,每个模型包括多个面片。该方法包括:渲染该应用的当前帧的过程中,确定当前帧对应的当前视平面中像素对应的目标面片,该目标面片包括于该多个面片;获取该应用的历史帧的渲染过程中获得的该目标面片的历史渲染结果;根据该目标面片的历史渲染结果,计算所述像素的当前渲染结果。
该渲染方法通过复用该目标面片的历史渲染结果,减少了在对该目标面片在当前帧中进行光线追踪渲染时的追踪光线数,提升了光线追踪渲染的效率。
在一些可能的设计中,该方法还包括:该渲染方法在渲染当前帧的过程中,通过获取目标面片在历史帧中对应的历史渲染结果,计算该目标面片在当前帧中对应的当前渲染结果。进一步地,根据目标面片的当前渲染结果,计算当前视平面中像素的当前渲染结果。随后,对该视平面的每个像素执行以上方法,以获取该视平面的渲染结果,也即获取所述当前帧。
在一些可能的设计中,该方法还包括:对该目标面片进行光线追踪渲染,获得该目标面片的中间渲染结果。该根据该目标面片的历史渲染结果,计算该像素的当前渲染结果,包括:根据该目标面片的历史渲染结果和该目标面片的中间渲染结果,计算该目标面片的当前渲染结果;根据该目标面片的当前渲染结果,计算该像素的当前渲染结果。
该渲染方法通过在对该目标面片进行光线追踪之后,利用该目标面片的中间渲染结果提升该目标面片的当前渲染结果,在对该目标面片发出的追踪光线数不变的情况 下,提升了该目标面片的渲染结果,有效的提升了光线追踪渲染的效率。
对该目标面片进行光线追踪渲染以获得该目标面片的中间渲染结果这一步骤可以发生在获取该面片的历史渲染结果之前或之后。可选的,该步骤还可以与获取该面片的历史渲染结果这一步骤同时发生。
在一些可能的设计中,该方法还包括:确定该目标面片的历史渲染结果对应的采样数量高于阈值。根据该目标面片的历史渲染结果,计算该像素的当前渲染结果,包括:将该目标面片的历史渲染结果作为该目标面片的当前渲染结果,该目标面片的当前渲染结果用于计算该像素的当前渲染结果。
对于历史渲染结果对应的采样数量高于阈值的该目标面片,直接将该目标面片的历史渲染结果作为该目标面片的当前渲染结果,可以避免对该目标面片进行光线追踪渲染,并且直接复用该面片的历史渲染结,有效的提升了当前视平面的整体渲染效率。
在一些可能的设计中,该方法还包括:确定该目标面片的历史渲染结果对应的采样数量不高于阈值。该根据所述目标面片的历史渲染结果,计算该像素的当前渲染结果,包括:对该目标面片进行光线追踪渲染,获得该目标面片的中间渲染结果;根据该目标面片的中间渲染结果和该目标面片的历史渲染结果,计算该目标面片的当前渲染结果;根据该目标面片的当前渲染结果,计算该像素的渲染结果。
对于历史渲染结果对应的采样数量不高于阈值的该目标面片,在对该目标面片进行光线追踪渲染的同时,复用了该目标面片的历史渲染结果,可以减少对该目标面片发出的追踪光线数,从而有效的提升了当前视平面的整体渲染效率。
在计算该目标面片的当前渲染结果的过程中,获取的该目标面片的过程渲染结果也即该目标面片的中间渲染结果。一般的,可以对该目标面片进行光线追踪渲染以获得该目标面片的中间渲染结果,此处进行的光线追踪渲染的追踪光线数小于阈值。
在一些可能的设计中,该方法还包括:存储该目标面片的当前渲染结果。该目标面片的当前渲染结果可以存储至内存中,以供后续帧的渲染过程中被复用,通过为后续帧中该目标面片的当前渲染结果提供可以复用的历史渲染结果,可以有效的提升后续帧中该目标面片的渲染效率。
在一些可能的设计中,该方法还包括:该当前视平面在第一应用中生成,该目标面片的历史渲染结果在第二应用中生成。
在一些可能的设计中,该方法还包括:该目标面片的历史渲染结果和该当前视平面在同一应用中生成。
在一些可能的设计中,该方法还包括:该目标面片的历史渲染结果基于光线追踪渲染获得。
本申请的第二方面提供了一种渲染引擎,该装置包括处理单元和存储单元:该处理单元,在渲染应用的当前帧的过程中,用于确定当前视平面中像素对应的目标面片;获取所述应用的历史帧的渲染过程中获得的所述目标面片的历史渲染结果;根据所述目标面片的历史渲染结果,计算所述像素的当前渲染结果,其中,所述应用包括至少一个模型,每个模型包括多个面片;该存储单元,用于存储所述应用的历史帧的渲染过程中获得的所述目标面片的历史渲染结果。
在一些可能的设计中,该处理单元还用于,根据所述目标面片的历史渲染结果,计 算所述像素的当前渲染结果前,对所述目标面片进行光线追踪渲染,获得所述目标面片的中间渲染结果;根据所述目标面片的历史渲染结果和所述目标面片的中间渲染结果,确定所述目标面片的当前渲染结果;根据所述目标面片的当前渲染结果,确定所述像素的当前渲染结果。
在一些可能的设计中,该处理单元还用于:用于确定所述目标面片的历史渲染结果对应的采样数量高于阈值;将所述面片的历史渲染结果作为所述面片的当前渲染结果,所述面片的当前渲染结果用于确定所述像素的当前渲染结果。
在一些可能的设计中,该处理单元还用于:用于确定所述目标面片的历史渲染结果对应的采样数量不高于阈值;对所述目标面片进行光线追踪渲染,获得所述目标面片的中间渲染结果;根据所述目标面片的中间渲染结果和所述目标面片的历史渲染结果,确定所述目标面片的当前渲染结果;根据所述目标面片的当前渲染结果,确定所述像素的渲染结果。
在一些可能的设计中,该存储单元,用于存储所述目标面片的当前渲染结果。
本申请的第三方面提供了一种包含指令的计算机程序产品,当该指令被计算机设备集群运行时,使得该计算机设备集群执行如第一方面或第一方面的任意可能的设计提供的方法。
本申请的第四方面提供了一种计算机可读存储介质,其特征在于,包括计算机程序指令,当所述计算机程序指令由计算设备集群执行时,所述计算设备集群执行如第一方面或第一方面的任意可能的设计提供的方法。
本申请的第五方面提供了一种计算设备集群,包括至少一个计算设备,每个计算设备包括处理器和存储器;至少一个计算设备的处理器用于执行至少一个计算设备的存储器中存储的指令,以使得该计算设备执行如第一方面或第一方面的任意可能的设计提供的方法。
在一些可能的设计中,该计算设备集群包括一个计算设备,该计算设备包括处理器和存储器;该处理器用于执行该存储器中存储的指令以运行第二方面或第二方面的任意可能的设计提供的方法提供的渲染引擎,以使得该计算设备执行如第一方面或第一方面的任意可能的设计提供的方法。
在一些可能的设计中,该计算设备集群包括至少两个计算设备,每个计算设备包括处理器和存储器。该至少两个计算设备的处理器用于执行该该至少两个计算设备的存储器中存储的指令以运行第二方面或第二方面的任意可能的设计提供的方法提供的渲染引擎,以使得该计算设备集群执行如第一方面或第一方面的任意可能的设计提供的方法。每个计算设备运行了渲染引擎的包括的部分单元。
附图说明
为了更清楚地说明本申请实施例的技术方法,下面将对实施例中所需使用的附图作以简单地介绍。
图1(a)为本申请实施例提供的一种单一视点下的渲染结构示意图;
图1(b)为本申请实施例提供的一种面片划分示意图;
图1(c)为本申请实施例提供的一种像素与面片对应关系的示意图;
图1(d)为本申请实施例提供的一种像素投影区域的示意图;
图2为本申请实施例提供的一种渲染结构示意图;
图3为本申请实施例提供的一种包含多进程的应用场景示意图;
图4为本申请实施例提供的一种多视点的场景结构示意图;
图5为本申请实施例提供的一种渲染方法的流程图;
图6为本申请实施例提供的一种当前初始公共信息表;
图7为本申请实施例提供的一种当前初始对应关系表;
图8为本申请实施例提供的一种当前公共信息表;
图9为本申请实施例提供的一种当前对应关系表;
图10为本申请实施例提供的另一种渲染方法的流程图;
图11为本申请实施例提供的一种当前共享信息表;
图12为本申请实施例提供的另一种当前对应关系表;
图13为本申请实施例提供的一种渲染引擎的结构示意图;
图14为本申请实施例提供的一种计算设备的结构示意图;
图15为本申请实施例提供的一种计算设备集群的结构示意图;
图16为本申请实施例提供的一种计算设备集群的连接方式示意图;
图17为本申请实施例提供的一种计算设备集群的连接方式示意图。
具体实施方式
本申请实施例中的术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。
首先对本申请实施例中所涉及到的一些技术术语进行介绍。
面片(tile):面片是指二维或三维空间中最小的平面构成单元。通常在渲染中,需要将空间中的模型划分成无数个微小的平面。这些平面又被称为面片,它们可以是任意多边形,常用的是三角形和四边形。这些面片各条边的交点则是各个面片的顶点。面片可以是根据模型的材质或颜色等信息随机划分的。此外,考虑到每一个面片都有正反两面,而通常只有一面是可以被看到的。因此,在一些情况下需要对面片进行背面剔除的操作。
每面片追踪光线数(sample per mesh,SPM):每面片追踪光线数是指每一个面片中通过的光线数量。其中,面片是在三维空间中的最小单元。通常我们看到的屏幕是由一个个的像素排列而成的,每一个像素对应空间中的一个或多个面片。像素的颜色是根据其对应面片的颜色(red,green,blue,RGB)计算得到的。在光线追踪中,每面片追踪光线数的大小可以影响渲染的结果。每面片追踪光线数越大,意味着从视点会有更多的光线投向三维空间中的模型。每一面片上被投射的光线数越多,各个面片的渲染结果计算就可以更为准确。
光栅化(rasterization):光栅化是将屏幕空间中的3D图形,转化到二维视平面上的光栅图像的过程。光栅化的过程包含了两部分的工作,第一部分工作:决定窗口坐标中的哪些整型栅格区域被基本图元占用。第二部分工作:分配一个渲染结果和一个深度值到各个区域。把模型的数学描述以及与模型相关的颜色信息转换为屏幕上用于对应位置的像素及 用于填充像素的颜色,这个过程称为光栅化。
光线追踪(ray tracing):光线追踪又称为光迹跟踪或光线追迹,来自于几何光学的一项通用技术,它通过跟踪与光学表面发生交互作用的光线从而得到光线经过路径的模型。它用于光学系统设计,如照相机镜头、显微镜、望远镜以及双目镜等。当用于渲染时,跟踪从眼睛发出的光线而不是光源发出的光线,通过这样一项技术生成编排好的场景的数学模型显现出来。这样得到的结果类似于光线投射与扫描线渲染方法的结果,但是这种方法有更好的光学效果。例如对于反射与折射有更准确的模拟效果,并且效率非常高,所以当追求这样高质量结果时候经常使用这种方法。具体地,光线追踪方法首先计算一条光线在被介质吸收,或者改变方向前,光线在介质中传播的距离、方向以及到达的新位置。然后从这个新的位置产生出一条新的光线,使用同样的处理方法,最终计算出一个完整的光线在介质中传播的路径。由于该算法是成像系统的完全模拟,所以可以模拟生成复杂的图片。
图形渲染随着计算机算力的提高和行业发展的需要逐渐成为业界的焦点,目前图形渲染技术主要有光栅化和光线追踪两种。
对于光照的真实感实现,光栅化可以通过光线投射计算完成。但是对于额外的视觉效果:软阴影,全局光照,焦散等,则需要通过数据建模,使用其他的方法进行处理。例如全局光照需要使用光照贴图(light map),辐照度贴图(irradiance map)等方法进行拟合,对于软阴影使用阴影映射(shadow map)的技术进行拟合。此种开发方式较为繁琐,拟合后的视觉效果不尽如人意。虽然光栅化渲染技术可以支持多视点同时渲染,其实现过程需要在最后视角变换时,追加角度变换处理来实现,但是精确度较差。因此,下文采用的渲染技术主要是光线追踪。
需要说明的是本文所提到的光线追踪指示的是通过模拟光线的投射获得渲染结果的一类方法。具体地,可以包括向后光线追踪、分布式光线追踪和双向路径追踪等方法。
为了使得本申请的技术方案更加清楚、易于理解,在对本申请提供的渲染方法进行介绍之前,先对渲染技术涉及的三个基本概念——面片、顶点和像素之间的关系进行介绍。
图1(a)示出了一种单一视点下的渲染结构示意图。该渲染结构至少包含虚拟视点100、虚拟视平面200、模型300和光源302。
虚拟视点100是在空间中模拟的人的一只眼睛或多只眼睛,用于感知三维结构。其中,每一帧画面对应一个空间。按照视点数量划分,虚拟视点100可以分为单目视点、双目视点和多目视点。具体地,双目视点或多目视点是指从两个及两个以上的不同的视点获取两幅或多幅图像来重构目标模型3D结构或深度信息。
虚拟视平面200是一种空间中的模拟显示屏。虚拟视平面200的构建主要由虚拟视点100到虚拟视平面200的距离和屏幕分辨率这两个因素决定。
其中,虚拟视点100到虚拟视平面200的距离指的是虚拟视点100到虚拟视平面200的垂直距离。进一步地,该距离可以根据需求进行设置。
屏幕分辨率指的是虚拟视平面200所包含的像素数量。换言之,虚拟视平面200包含一个或多个像素。例如在图1(a)中,虚拟视平面200包含9像素(3*3)。
在一些可能的实现方式中,经过渲染操作获得的结果可以用于输出。在一次光线追踪中,虚拟视平面200中每一个像素的渲染结果共同构成一帧画面。也即,在一次光线追踪 中,一个虚拟视平面200对应一帧画面。
与虚拟视平面相对应的,是在用户端侧用于输出最终结果的显示屏。该显示屏的屏幕分辨率不一定等于虚拟视平面的屏幕分辨率。
当显示屏和虚拟视平面200的屏幕分辨率相等时,可以将虚拟视平面200上的渲染结果按照1:1的比例输出至显示屏。
当显示屏和虚拟视平面200的屏幕分辨率不同时,则将虚拟视平面200上的渲染结果按照一定的比例输出至显示屏。其中,具体的比例的计算属于现有技术,这里不再赘述。
空间中可以包含一个或多个模型300。虚拟视平面200对应的渲染结果中可以包含哪些模型300,由对应的虚拟视点100与各模型300之间的相对位置决定。
在进行渲染操作前,通常需要将模型表面划分成多个面片。其中,各个面片的大小和形状可以一致,也可以不一致。具体地,面片的划分方法属于现有技术,这里不再赘述。
图1(b)示出了模型300的一个面的面片划分情况。如图1(b)所示,模型300的一个面被划分成了6个不同大小的三角形面片。
空间中所有的顶点不仅包括模型300各个面的交点(例如D1、D2、D4、D6),还包括各个面片的顶点(例如D0、D3、D5)。
图1(c)示出了一种像素与面片对应关系的示意图。图1(c)中加粗的方框即图1(a)中虚拟视平面200中一个像素在模型300上的投影。可以看到,该像素投影区域分别覆盖了面片1至6的部分区域。所述像素投影区域指示的是该像素在模型上的投影所围成的区域。
一个像素投影区域可以覆盖多个面片,也可以仅覆盖一个面片。其中,当一个像素投影区域仅覆盖一个面片时,可以覆盖该面片的全部区域,也可以覆盖该面片的部分区域。
例如,如图1(d)所示,一个像素投影区域覆盖了面片6的部分区域。也即,面片6可以同时覆盖多个像素投影区域。
综上所述,空间中各模型的表面可以被划分成多个多边形面片,空间中的所有顶点即各个多边形面片顶点的集合。而一个像素对应的像素投影区域可以覆盖一个或多个面片,一个面片也可以覆盖一个或多个像素对应的像素投影区域。
光源302是空间中设置的虚拟光源,用于生成空间中的光照环境。光源302的类型可以是以下光源中的任意一种:点光源、面光源和线光源等。进一步地,空间中可以包括一个或多个光源302。进一步地,当空间中存在多个光源302时,不同的光源类型可以不同。
上述空间中虚拟视点的设置、虚拟视平面的设置、模型的建立和面片的划分等操作,通常都在执行渲染操作之前已经完成了。上述这些步骤可以由影视渲染引擎或游戏渲染引擎等渲染引擎执行。例如,游戏渲染引擎(unity)或虚幻引擎(unreal)等。
在将虚拟视点、虚拟视平面、光源和各模型的相对位置关系设置好后,渲染引擎即可接收上述相对位置关系及相关信息。具体地,所述信息包括虚拟视点的类型和数量、虚拟视平面到虚拟视点的距离和屏幕分辨率、光照环境、各模型与虚拟视点之间的相对位置关系、各模型的面片划分情况、面片编号信息和面片材质信息等。在获得上述信息后,渲染引擎可以进一步地执行下文中的渲染方法600。
接下来将对光线追踪的实现方法进行介绍。图2示出了一种光线追踪的场景图。图中包括虚拟视点100、虚拟视平面200、模型300、光源302和三束光线(第一光线、第二光 线和第三光线)。
虚拟视平面200呈现渲染结果是以像素为单位的,而每一个像素的渲染结果等于在本次光线追踪过程中穿过该像素的光线的渲染结果的平均值。而每一束光线的渲染结果的计算属于现有技术,因此不在此赘述。
实际上,每一束光线都是从光源发出,在接触到空间中一个或多个面片后,在每个接触点发生下述情形中的一种:折射、反射或漫反射。然后穿过虚拟视平面200,最后进入虚拟视点100。也即,进入用户的眼睛。
具体地,每个面片都具有一定的颜色和材质特征。面片的材质可以分为以下三种:透明材质、光滑的不透明材质和粗糙的不透明材质。根据面片材质的不同,面片对于光的折/反射情况又可以分为折射、反射和漫反射三种情况。其中,光线接触透明材质会发生折射,光线接触表面光滑的不透明材质会发生反射,光线接触表面粗糙的不透明材质会发生漫反射。
需要说明的是,对于会发生漫反射的材质而言,接触点从各个角度反射出光线的颜色通常是一样的。换言之,在模型和光源的相对位置和其他条件不变的前提下,不同的两个虚拟视点看到同一个可以发生漫反射的点的颜色是一样的。
因此,理论上来说,如果可以将空间中所有可以发生漫反射的点的出射光线的颜色保存在该点上,在之后的光线追踪中再需要计算该点的出射光线颜色时,可以直接复用。但是考虑到模型中的点的数量有无数个,因此对于点的渲染结果难以实现存储和复用。
出射光线的颜色由光源和接触点所在面片的颜色决定,而同一面片各个点的颜色一致的。因此,可以将点近似扩大至面片这一类微小单元。换言之,可以将各个点的渲染结果存储在各个所在的面片上。以面片为单位存储渲染结果,有利于提升光线追踪的计算效率。
在光线追踪中,我们认为光线是从虚拟视点发出,因此部分光线可能在接触到面片后不能回到光源,从而所述光线不具有颜色。因此,在光线追踪中对各个面片的渲染结果的计算需要对面片发出一定数量的光线。进一步地,可以通过根据这些光线的颜色,确定各个面片的渲染结果。例如,可以通过删除采样值中差异过大的光线的渲染结果再求均值的方法确定该面片存储的渲染结果。
从图2中可以看到,虚拟视平面200共包含9个像素。其中中间的像素(边框加粗的像素)中至少有三条从虚拟视点100处发出的光线穿过,分别为第一光线、第二光线和第三光线。以第一光线为例,在空间中接触到模型300的面片6后,其出射光线回到了光源302。
如果由顶点D0、D1和D6围成的三角形面片6的材质为透明材质或光滑的不透明材质,即会发生折射或反射,那么不将第一光线的渲染结果存储在面片6上。
如果由顶点D0、D1和D6围成的三角形面片6的材质为粗糙的不透明材质,即会发生漫反射,那么可以将第一光线的渲染结果存储在面片6上。
可以看到,第一光线对应的第一接触点和第三光线对应的第三接触点均落在了面片6的内部。如上所述,第一光线和第三光线的渲染结果可以不同。可以通过求上述两束光线的渲染结果的均值确定面片6上存储的渲染结果。可选的,也可以去掉其中一个明显异常的渲染结果,将另一束光线的渲染结果作为面片6上存储的渲染结果。
当前光线追踪渲染方法受困于算力和传统图形处理器(graphics processing unit,GPU)架构的设计,每次只能针对一个视点进行视点范围内的渲染。例如,当多个用户联网,进入同一个渲染场景时刻,其GPU渲染进程之间无法共享渲染成果。而实际上,针对相同的渲染场景,大量光线在不同用户视点范围内是可以进行光路共享的。具体地,大量渲染场景的光线路径分布、光照强度分布、当前光线分布的概率分布函数和光传输矩阵等,均可以共享,并且是无偏的。
面向渲染的过程,尤其是实时渲染,当前商用光线追踪无法提供多视点同时渲染的计算方法。目前大部分显卡的计算过程是针对单一用户的视点(camera)进行光线追踪计算。由于其SPM和反弹次数(bounce)设置受困于算力和硬件设计,形成充满噪点的图像后仍需要通过时序重建滤波器进行后处理。也即,不能进行不同视点之间的渲染结果共享。同时,更不能利用不同空间中的渲染结果进行共享。
有鉴于此,下文提供了一种可以多视点共享的渲染方法。具体地,当多个视点都同时处于同一空间中时,多个视点之间可以共享光线追踪的中间计算结果,然后据此输出渲染结果。进一步地,为了提升前述输出图像的质量,还可以利用部分包含相同面片的空间中多个视点的光线追踪得到的面片的中间渲染结果。
首先介绍一种包含多进程的应用场景。其中,所述多进程属于一个应用中一个或多个进程。如图3所示,该场景中至少包含两帧不同的画面,也即,历史帧和当前帧。就时间顺序而言,此处示出的历史帧的形成时间先于当前帧的形成时间。
如上所述,一个虚拟视点对应一个虚拟视平面,一个虚拟视平面一次光线追踪的渲染结果为一帧画面。此时,一个虚拟视点可以对应一个进程。而在一个进程基于光线追踪产生一帧画面的同时,同一帧内的其他进程可以同时产生一帧画面。
例如,以进程400为例。在传统的光线追踪渲染中,进程400可以通过向模型库408发送信息404来组成待渲染内容410。
其中,模型库408中包括一个或多个模型。通常来说,各个模型在空间中的位置是固定的。可选的,各个模型的位置也可以由进程404通过发送指令来控制。需要说明的是,对于模型库408而言,光源也可以是一种具有特定参数的模型。
信息404中可以包括参数和指令。其中,参数至少包括虚拟视点和虚拟视平面在空间中的坐标参数。而指令可以包括对模型的修改和移动等。
例如,在游戏中用户可以点击开始按钮,进而通过进程400向模型库408发送信息。模型库408通过配置模型从而生成初始化模型集合,即待渲染内容410。其中,待渲染内容中包括一个或多个模型及模型信息。
历史帧中某一进程对应的面片渲染结果,可以用于确定当前帧中某一进程的渲染结果。其中,某一进程对应的面片渲染结果可以包括一帧或多帧对应的面片渲染结果。换言之,可以根据某一进程的包含至少的一帧历史帧对应的面片渲染结果,计算当前视平面的渲染结果。
如上所述,历史帧可以包括一帧或多帧。下文以历史帧中某一进程对应一帧视平面渲染结果为例进行介绍。
下文以进程400和500为例进行介绍。
进程400可以通过向模型库408发送信息404,生成待渲染内容410。渲染引擎可以根据待渲染内容410,生成面片渲染结果416。进一步地,可以获得渲染结果420用于输出。
同样的,在当前帧中,进程500可以按照类似上述过程的方法生成渲染结果516。需要说明的是,渲染引擎410在生成面片渲染结果512时,至少可以根据历史帧中的面片渲染结果414获得。上述情况的前提是待渲染内容410和待渲染内容508中包含一个或多个相同的面片。可选的,若在待渲染内容412中存在和待渲染内容508中相同的面片,面片渲染结果418也可用于计算面片渲染结果512。
在一种可能的实现方式中,进程400和进程500在不同帧中对应同一虚拟视点。也即,进程400和进程500实际为同一应用中的不同进程,主要区别在于运行的时间不同。
例如,假设历史帧是当前帧的上一帧。在游戏画面中,同一视点对应的画面在连续的几帧,尤其是上下帧中,不会发生太大的变化。因此可以通过复用一个或多个历史帧中的一个或多个面片渲染结果,提升对当前帧中面片渲染结果的质量和获取速度。
在一种可能的实现方式中,进程400和进程500在不同帧中对应不同的虚拟视点。也即,进程400和进程500实际为同一应用中的在不同时间运行的不同进程。
例如,假设历史帧是当前帧的上一帧。在游戏中,两个不同视点可以是对应物理距离很远的两个玩家。而在游戏中,这样的两个玩家的渲染画面对应的两个空间,分别在上下两帧中存在相同面片的几率较大。因为对于大型的网络游戏而言,同时在线人数通常在10万到100万之间。并且,大多数玩家的画面集中在一些典型的场景中。其中,所述场景对应一个包含一个或多个面片的空间。
因此,通过复用一个或多个其他视点对应的历史帧中的一个或多个面片渲染结果,可以大大提升对当前帧中面片渲染结果的质量和获取速度。
需要说明的是,在上述两种可能的实现方式中,图3中示出的进程可以运行在本地设备上,也可以运行在云端。同样的,模型库和渲染引擎有着类似的部署/运行环境。
具体地,进程可以运行在本地设备上时,所述本地设备可以是服务器。其中,服务器可以是一台或多台。可选的,还可以是终端设备。例如手机、电脑或平板电脑等。
可选的,进程还可以运行在云服务器上。其中,云服务器可以是一台或多台。
模型库可以部署在本地设备上。所述本地设备可以是服务器。其中,服务器可以是一台或多台。可选的,还可以是终端设备。例如手机、电脑或平板电脑等。
可选的,也可以部署在云服务器上。考虑到模型库中需要存储应用中大量的模型数据,因此对于存储的要求较高。上述的一些终端设备,例如手机和平板电脑等可能不具备存储大量数据的能力,因此需要将模型库部署在一台或多台云服务器上。
渲染引擎的输入量是待渲染内容,输出量是待渲染内容对应的渲染结果。可选的,还可以输出待渲染内容包含的面片的渲染结果。
渲染引擎可以是由一台或多台计算设备组成的计算设备集群,也可以是一种计算机程序产品,还可以是一种实体装置。
上述的设备或产品均可以部署在本地设备侧。可选的,也均可以部署在云服务器侧。
例如,在上述的一类可能的实现方式中,也即,当进程400和进程500实际为同一应用对应的不同进程时。对于一些模型数据量较小的应用,进程和模型库均可以部署在本地 设备上。考虑到光线追踪的计算量,渲染引擎可以部署在云服务器侧。对于一些模型数据量较小的应用,模型库可以部署在本地设备上。渲染引擎和模型库可以部署在云服务器侧。
又例如,在上述的另一类可能的实现方式中,也即,当进程400和进程500实际为不同应用中的不同进程时。进程可以运行在本地设备上,而模型库和渲染引擎更适合部署在云服务器侧。
接下来介绍一种包含多视点的渲染场景结构图。如图4所示,一种多视点的场景结构图中至少包括两个虚拟视点100和102、两个虚拟视点对应的虚拟视平面200和202、模型300和光源302。
此处的虚拟视点对应图3中进程。具体地,一个虚拟视点对应一个进程。而模型可以对应图3中模型库中的模型。
接下来基于图4对空间中信息表的建立和更新情况进行概述。
图4中包含左右两帧画面,分别为左侧的历史帧和右侧的当前帧。
以右侧的当前帧为例。在接收了由模型300和光源302构成的待渲染内容后,对于待渲染内容中包含的一个或多个模型300建立当前初始公共信息表。此处,待渲染内容包含一个模型300。其中,公共信息表是以模型中的面片为单位建立的,还包括了各面片的渲染结果等信息。
同时,对当前帧中各个虚拟视平面建立初始对应关系表。以虚拟视点106对应的虚拟视平面206为例,建立当前初始对应关系表。其中,对应关系表是以虚拟视平面中的像素为单位建立的,还包括了各像素和模型中各面片的对应关系以及各像素的颜色等信息。
需要说明的是,上述的初始公共信息表和初始对应关系表的建立并无时间上的先后顺序。
同理,在左侧的历史帧中,也已经对于模型300建立了历史初始公共信息表,对于虚拟视平面102建立了历史初始对应关系表。此外,在历史帧中根据历史初始公共信息表和历史初始对应关系表已经获得了历史公共信息表。
需要说明的是,历史公共信息表的获得时间先于当前公共信息表的获得时间,但历史公共信息表的获得时间不一定先于当前初始公共信息表的建立时间。
在右侧的当前帧中,可以根据历史公共信息表,获得当前对应关系表,进而获得虚拟视平面206的渲染结果。下面对两种可能的实现方式进行介绍。在下述的两种可能的实现方式中,当前帧和历史帧采用的是相同的实现方式。
在一种可能的实现方式中,首先根据历史公共信息表建立当前初始公共信息表,然后根据当前初始对应关系表,确定当前初始公共信息表需要进行光线追踪的面片。在对所述面片进行光线追踪渲染后,更新当前初始公共信息表,获得当前公共信息表。进一步地,根据当前公共信息表更新当前初始对应关系表,获得当前对应关系表。最后,根据当前对应关系表,确定视平面206对应的渲染结果。
在一种可能的实现方式中,首先对待渲染内容中的面片进行光线追踪渲染,根据光线追踪渲染的结果建立当前初始公共信息表。然后根据历史公共信息表更新当前初始公共信息表,获得当前公共信息表。根据当前公共信息表对当前初始对应关系表进行更新,获得当前对应关系表。最后,根据当前对应关系表,确定视平面206对应的渲染结果。
接下来对渲染方法进行详细的介绍。
图5示出了一种渲染方法的流程图,介绍了渲染方法600。图5示出了两个视平面的渲染流程图,分别对应当前视平面和历史视平面的渲染流程图。其中,就时间顺序上而言,当前视平面对应当前帧,历史视平面对应历史帧。
需要说明的是,对于历史视平面的渲染方法和对当前视平面的渲染方法一致,均为渲染方法600。因此下文主要以当前视平面对应的流程图进行介绍。
该渲染方法可以由渲染引擎800执行。该方法包括3个部分,即预处理部分、光线追踪部分和获得当前渲染结果部分。
首先,预处理部分包括S400至S404。
S200,渲染引擎800获取当前待渲染内容及相关参数。
首先,渲染引擎800得当前待渲染内容。具体的,当前渲染内容可以是基于图3中的进程产生的。具体地,可以根据进程包含的参数和指令在模型库中对模型进行选择和组合后形成当前待渲染内容。因此,待渲染内容可以从一个可以根据进程的信息调用模型库的装置或进程处获得。
其次,待渲染内容中包括一个或多个模型及各个模型的信息。例如,图4中的模型300。具体地,各个模型的面片划分情况、面片编号以及各个模型和面片的坐标。
此外,所述相关参数包括虚拟视点和虚拟视平面的坐标,以及光源参数等。
渲染引擎800获取当前待渲染内容及相关参数后,即可对当前待渲染内容进行渲染。
S202,渲染引擎800以当前待渲染内容中的面片为单位,建立当前初始公共信息表。
根据S200中获取的当前待渲染内容中各个面片的编号可以建立当前初始公共信息表。具体地,当前初始公共信息表包括各个面片的编号以及各个面片的采样值、渲染结果和材质。
首先,可以对各个面片的采样值和存储的渲染结果进行初始化。所述采样值是指在光线追踪的过程中,面片作为光线首次在空间中接触到的面片的次数。
颜色的表征方式有RGB模式、CMYK模式和Lab模式等,下文以RGB模式为例。图6示出了一种当前初始公共信息表,其中面片编号从1依次编号至p,p指示的是空间内中面片的数量。
其中,采样值和存储的渲染结果均需要进行初始化。可选的,可以将各个面片的采样值的初始值设置为0。
可选的,在一些可能的实现方式中,还可以根据历史帧中获得的历史公共信息表对当前初始公共信息表进行初始化。具体的关于如何获得历史公共信息表将在下文中进行介绍。
需要说明的是,当历史视平面存在对应的历史公共信息表时,根据S202中建立的当前初始公共信息表中的面片编号,可以在历史公共信息表中查询各个面片对应的采样值和渲染结果。进一步地,将所述查询得到的采样值和渲染结果更新为当前初始公共信息表中各个面片对应的采样值和存储的渲染结果的初始值。
需要说明的是,步骤S200与步骤S202没有固定的执行时序。换言之,步骤S202可以先于步骤S200被执行,也可以后于S200被执行,还可以和步骤S200同时被执行。
S204,渲染引擎800建立当前视平面对应的的当前初始对应关系表。
根据在S200中获得的各个面片的坐标,可以确定各个面片在当前视平面上的对应位置,从而建立当前待渲染内容中各个面片与当前视平面中各个像素的对应关系。进一步地,根据所述对应关系,可以建立当前初始对应关系表。所述当前初始对应关系表包含像素与面片的对应关系、面片的深度值和存储的渲染结果以及像素的渲染结果。
具体地,面片是处于三维空间中的微小单元,在经过从模型坐标系到世界坐标系,然后到视图坐标系,再到投影坐标系,最后到视口坐标系这一系列坐标系的变化后,最终映射在二维的视平面上。以遍历的方式对视平面中的每一个像素进行判断,判断像素的部分或全部区域是否被面片所覆盖。对于有面片覆盖的像素,记录该像素与覆盖面片的对应关系。关于像素和面片的覆盖关系已经在上文中介绍过了,不再赘述。此外,还需要对视平面中的像素进行编号。
图7示出了一种当前视平面下像素与面片的当前初始对应关系表。该当前初始对应关系表中包含了像素与面片的对应关系、面片的深度值和渲染结果以及像素的渲染结果。
具体地,如图7所示,像素与面片之间的对应关系可以是一个像素对应一个或多个面片。例如,面片1,2和6均覆盖了像素1的部分或全部面积。面片m覆盖了像素n的部分或全部面积。其中,n和m不一定相等。
需要说明的是,当多个面片对应一个像素时,所述多个面片覆盖像素中区域可以不同,也可以相同。具体地,因为各个面片的深度不同,因此可以存在两个及两个以上的面片覆盖在同一个像素内的区域发生重叠的情况。
像素与面片之间的对应关系也可以是一个或多个像素对应一个面片。例如,面片1同时覆盖了像素1和2的部分或全部面积。
每个面片的深度可以根据围成该面片的线段相交的顶点的深度计算而来的。每个面片的深度可以等于上述顶点的深度的平均值。具体地,所述平均值可以是算术平均值,也可以是加权平均值。
可选的,可以根据面片的深度和当前初始公共信息表中各面片的材质,可以确定各个像素对应的可视面片,从而提升光线追踪渲染的效率。其中,一个可视面片可以是一个目标面片。也即,作为光线追踪渲染的目标。具体的可视面片的确定方法将在S406中详细介绍。
在步骤S204中,还需要对可视面片上存储的渲染结果和像素渲染结果分别进行初始化处理。具体地,可视面片上存储的渲染结果和像素渲染结果均初始化为0。
需要说明的是,步骤S402和S404没有固定的执行时序。换言之,步骤S404可以先于步骤S402被执行,也可以后于S402被执行,还可以和步骤S402同时被执行。
根据在S402和S404分别建立的第一公共信息表和第一对应关系表,可以进一步地确定需要进行光线追踪的面片。具体地,光线追踪部分包括S406和S408。
S206,渲染引擎800根据当前初始对应关系表、当前初始公共信息表对部分面片进行光线追踪。
根据当前初始对应关系表中各个像素对应的面片编号和面片对应的深度值和材质,确定各像素对应的可视面片集。所述可视面片集指的是在该像素对应的面片中属于可视面片的面片的集合。根据可视面片集中各面片的采样值和采样阈值,对部分可视面片进行光线追踪的操作。
首先,根据面片的材质、当前初始关系表中的面片编号和面片深度值,可以获得各像素对应的可视面片集。
具体地,以像素为单元,分别确定各个像素中的可视面片集。当一个像素对应一个面片时,该面片为可视面片。当一个像素对应多个面片时,应当按照深度由小到大的顺序进行排列。在这一情况下,在深度由小到大的顺序中,可视面片包括深度值小于或等于首次出现不透明材质的面片的深度值的面片。
以图7为例,像素n对应面片m,则无论面片m是否为不透明材质,面片m都是可视面片。像素1对应面片1,2和6,假设这三个面片的深度关系为D1<D2<D6,并且面片2为不透明材质,面片1和6均为透明材质。那么对于像素1而言,首次出现不透明材质的面片的深度为D2,因此可视面片应该包括深度值小于或等于D2的面片,即面片1和2。
需要说明的是,视平面中不同的像素对应的可视面片集可以不同。
其次,在获得各像素对应的可视面片集后,可以根据当前初始公共信息表中的采样阈值对部分面片进行光线追踪的操作。其中,采样阈值可以根据需求进行设置。具体地,查询上述可视面片集中的面片在当前初始公共信息表中对应的采样值。
若该面片的采样值大于或等于采样阈值,则直接获取该面片的渲染结果用于更新当前初始公共信息表。
若该面片的采样值小于采样阈值,则对该面片进行光线追踪的操作。
可选的,若该面片的采样值小于采样阈值,则基于一定几率对该面片进行光线追踪的操作。具体地,对于由采样值小于采样阈值的可视面片组成的待采样面片集,在待采样面片集中随机选取k个面片进行随机采样。其中,k可以根据需要进行设置,k小于等于待采样面片集中的面片数量。这k个面片的选取方法可以是简单随机方法,也可以是低差异序列等方法。随机采样的方法可以是简单随机采样,也可以是超级采样等方法。
光线追踪的过程是从虚拟视点向空间中的k个面片分别发出光线,并进行光线追踪。其中,虚拟视点可以对这k个面片分别发出相同数量的光线,也可以分别发出不同数量的光线。需要说明的是,无论是发出相同数量的光线,还是不同数量的光线,每一次采样中到达每一个面片的光线数量都可以小于或等于采样阈值。
如果一个面片的采样值为1,则代表有一束光线在空间中首次接触到的面片为该面片,因此计算该面片的颜色。如果一个面片的采样值大于1,则代表有两束或两束以上的光线在空间中首次接触到的面片为该面片。
对于上述面片的中间渲染结果的计算,是通过分别计算从虚拟视点发出的光线的颜色来实现的。对于采样值为1的面片,其渲染结果为光线的颜色。对于采样值大于或等于2的面片,其渲染结果等于该面片上的采样光线的平均值。具体地,所述平均值可以是算术平均值,也可以是加权平均值。
S208,渲染引擎800根据光线追踪结果获得当前对应关系表和当前公共信息表。
根据在步骤S408中获得的待采样面片集中面片的中间渲染结果,可以获得当前公共信息表和当前对应关系表。
当在S206中确定可视面片的采样值大于或等于采样阈值时,则直接获取该面片的渲染结果用于更新当前初始公共信息表。同时,不对当前初始公共信息表中面片的采样值进行修改。因此,该面片的信息在当前初始公共信息表和当前公共信息表中保持一致。
当在S206中确定可视面片的采样值小于采样阈值时,根据当前初始公共信息表中面片的渲染结果和在步骤S206中获得的面片的中间渲染结果,可以获得当前渲染结果。进一步地,根据当前渲染结果对当前初始公共信息表进行更新。
在一种可能的实现方式中,通过计算当前初始公共信息表中面片的渲染结果和在步骤S206中获得的面片的中间渲染结果的平均值的方法,可以确定当前渲染结果。其中,所述平均值可以是算术平均值,也可以是加权平均值。将当前初始公共信息表中待采样面片对应的渲染结果更新为当前渲染结果。同时,对当前初始公共信息表中的待采样面片的采样值执行更新的操作。图8示出了一种当前公共信息表。
在这一类可能的实现方式中,例如面片1属于可视面片,且其在图6所示的当前初始公共信息表中的采样值S1大于采样阈值。也即,在对公共信息表进行更新时,不需要对面片1的采样值和渲染结果进行更新的操作。又例如面片2属于可视面片且其在图6所示的当前初始公共信息表中的采样值S2小于采样阈值。也即,需要对其在图8所示的当前公共信息表中的采样值和渲染结果进行更新。具体地,将图6中的渲染结果C2更新为C2'。其中,C2'等于面片2在步骤S406中的渲染结果和C2的平均值。此外,还需要将面片2在图6和图8中的采样值S2更新为S2'。其中,S2'等于S2加上在步骤S206中面片2上的采样光线数k。
根据当前公共信息表,可以对图7中的当前初始对应关系表进行更新,从而获得当前对应关系表。
具体地,根据当前公共信息表,可以获得当前对应关系表中的可视面片渲染结果。进一步地,可以获得像素格渲染结果。例如,如图7所示,对于像素1,其对应的面片1、2和6中,面片6属于不可视面片。因此以面片1和2为单位,在图8所示的当前公共信息表中进行查询,可以获得像素1对应的可视面片渲染结果C1和C2'。
根据像素对应的可视面片渲染结果,可以确定像素渲染结果。其中,像素渲染结果可以等于可视面片渲染结果的平均值。具体地,所述平均值可以是算术平均值,也可以是加权平均值。例如,在图9所示的当前对应关系表中,像素1的渲染结果P1等于C1和C2'的平均值。
在一种可能的实现方式中,还可以将当前初始公共信息表中面片的渲染结果和扩展后的在步骤S206中获得的面片的中间渲染结果数列组成第一数列。进一步地,通过计算这个第一数列的方差,可以确定当前渲染结果。
在上一种可能的实现方式中提到,当前渲染结果可以等于当前初始公共信息表中面片的渲染结果和在步骤S206中获得的面片的中间渲染结果的均值。换言之,当前渲染结果可以由当前初始公共信息表中面片的渲染结果和在步骤S206中获得的面片的中间渲染结果组成的数列乘以一定的系数矩阵得到。因此,利用当前渲染结果除以上述系数矩阵即可得到一个数列。其中,该数列即扩展后的当前渲染结果数列。
同理,在步骤S206中获得的面片的中间渲染结果可以通过除以系数矩阵得到扩展后的在步骤S206中获得的中间渲染结果数列。需要说明的是,本实施例中渲染结果的更新是逐帧发生的。因此,系数矩阵可以是固定的数值,也可以是在上一帧中对渲染结果更新的过程中使用的系数矩阵。对于首帧中的在步骤S206中获得的面片的中间渲染结果(也即步骤S202中初始化的渲染结果),可以采用固定数值的系数矩阵。
在这一类可能的实现方式中,例如面片1属于可视面片,且其在图6所示的当前初始公共信息表中的采样值S1大于采样阈值。也即,在对公共信息表进行更新时,不需要对面片1的采样值和渲染结果进行更新的操作。又例如面片2属于可视面片且其在图6所示的当前初始公共信息表中的采样值S2小于采样阈值。也即,需要对该面片的采样值和渲染结果进行更新。
具体地,将面片2在步骤S206中的面片的中间渲染结果和面片2对应的扩展后的C2组成面片2对应的新的数列。计算所述新的数列的方差,根据所述方差和第一方差阈值,确定当前渲染结果。其中,第一方差阈值可以根据需求进行设置。
当所述数列的方差大于方差阈值时,将当前初始公共信息表中面片2对应的渲染结果更新为C2'。其中,C2'等于面片2在步骤S206中的面片的中间渲染结果。同时,将当前初始公共信息表中面片2对应的采样值S2更新为S2'。其中,S2'等于1。
当所述数列的方差小于或等于第一方差阈值时,将当前初始公共信息表中面片2对应的渲染结果更新为C2'。其中,C2'等于面片2在步骤S206中的面片的中间渲染结果和C2的平均值。所述平均值可以是算术平均值,也可以是加权平均值。
根据上述两种情形中的当前公共信息表,可以对图7中的当前初始对应关系表进行更新。
根据当前公共信息表,可以获得当前对应关系表中的可视面片渲染结果。进一步地,可以获得像素格渲染结果。例如,对于像素1,其对应的面片1、2和6中,面片6属于不可视面片。因此以面片1和2为单位,在图8所示的当前公共信息表中进行查询可以获得像素1对应的可视面片渲染结果C1和C2'。
根据像素对应的可视面片渲染结果,可以确定像素渲染结果。其中,像素渲染结果可以等于可视面片渲染结果的平均值。具体地,所述平均值可以是算术平均值,也可以是加权平均值。例如,在图9所示的当前对应关系表中,像素1的渲染结果P1等于C1和C2’的平均值。
需要说明的是,在步骤S202中用到的历史公共信息表的获得方法同本步骤中当前公共信息表的获得方法。
在S408中对待采样面片集中的面片进行了光线追踪渲染,获得了待采样面片集中各个面片的中间渲染结果。根据当前面片的中间渲染结果,可以获得当前公共信息表和当前对应关系表,从而获得当前渲染结果。具体地,获得当前渲染结果部分包括S410。
S410,渲染引擎800获得当前渲染结果。
根据当前对应关系表中各像素的渲染结果,可以获得当前渲染结果。
需要说明的是,S210获得的当前渲染结果可以用于在输出屏幕上直接进行输出,也可以作为下一步去噪操作的原始图像/数据。
图10示出了另一种渲染方法的流程图,介绍了渲染方法700。图10示出了两个视平面的渲染流程图,分别对应当前视平面和历史视平面的渲染流程图。其中,就时间顺序上而言,当前视平面对应当前帧,历史视平面对应历史帧。
如上文所述,本申请根据历史公共信息表,获得当前对应关系表,进而获得视平面对应的渲染结果的实现方法至少有两种。第一种是图5所示的一种渲染方法600。而下文中 给出的是另一种渲染方法700。
渲染方法600是在对待渲染内容进行光线追踪之前根据历史公共信息表,初始化其当前初始公共信息表,进而获得当前面片的渲染结果,最终获得待渲染内容的当前渲染结果。而渲染方法700是先对待渲染内容进行光线追踪渲染,获得当前面片的中间渲染结果,再根据历史公共信息表,获得待渲染内容的当前渲染结果。
需要说明的是,渲染方法700中先于复用步骤进行的光线追踪渲染可以是常规的光线追踪渲染。因此,渲染方法700中的当前面片的中间渲染结果可以是常规光线追踪获得的面片的渲染结果。
可选的,也可以是渲染方法600这样的光线追踪方法。因此,渲染方法700中的当前面片的中间渲染结果也可以是渲染方法600中获得的当前面片的渲染结果。
需要说明的是,在渲染方法700中提到的当前面片的中间渲染结果不同于渲染方法600中提到的当前面片的中间渲染结果。所述中间渲染结果指示的是在最终获得当前面片的渲染结果之前面片的渲染结果。也即,在这两种方法中,在计算该面片的当前渲染结果的过程中,获取的该面片的过程渲染结果也即该面片的中间渲染结果。一般的,可以对该面片进行光线追踪渲染以获得该面片的中间渲染结果,此处进行的光线追踪渲染的追踪光线数小于阈值。
在图10中,对于历史视平面的渲染方法和对当前视平面的渲染方法一致,均为渲染方法700。因此下文主要以当前视平面对应的流程图进行介绍。
渲染方法700可以由渲染引擎800执行。该方法包括3个部分,即光线追踪部分、信息复用部分和获得当前渲染结果部分。
首先,光线追踪部分为S400和S402。
S400:渲染引擎800获取当前待渲染内容、相关参数、当前中间公共信息表和当前中间对应关系表。
该步骤中渲染引擎获得当前待渲染内容、相关参数的方法与上文中S200完全一致,不再赘述。
需要说明的是,渲染引擎800在此获取的当前中间公共信息表,可以是在常规光线追踪过程中获得的当前公共信息表。可选的,也可以是在渲染方法600中的步骤S208中获得的当前公共信息表。其中,常规光线追踪过程中获得的当前公共信息表的获得方法与渲染方法600中的步骤S202建立当前初始公共信息表的方法类似,故不再赘述。
需要说明的是,渲染引擎800在此获取的当前中间对应关系表,可以是在常规光线追踪过程中获得的当前对应关系表。可选的,也可以是在渲染方法600中的步骤S208中获得的当前对应关系表。其中,常规光线追踪过程中获得的当前对应关系表的获得方法与渲染方法600中的步骤S204建立当前初始对应关系表的方法类似,故不再赘述。
S402:渲染引擎800对当前待渲染内容进行光线追踪并获得当前面片的中间渲染结果。
如前所述,当前待渲染内容中包括一个或多个模型,每个模型至少包含一个面片。因此在该步骤中对当前待渲染内容进行光线追踪即可获得当前面片的中间渲染结果。
具体的光线追踪方法可以是现有技术,不再赘述。
可选的,具体的光线追踪方法还可以参照渲染方法600执行。换言之,渲染方法600中S208中获得的当前面片的渲染结果即本步骤中的当前面片的中间渲染结果。
在这一步中对当前待渲染内容进行光线追踪后,该面片的采样值其实也会变化。因此,在一些可能的实现方式中,可以将此时待渲染内容中的面片的采样值和阈值进行对比。若所述采样值大于阈值,可以直接输出渲染结果。
但是考虑到第一方面,复用可以进一步的提升前述面片的渲染结果。第二方面,渲染方法700可以适用于采样值低的光线追踪方法。通常认为,在进行S402中的采样后,前述面片的采样值可以小于阈值。综合以上两点,下文将以前述面片采样值小于阈值的情况进行介绍。
在对待渲染内容进行光线追踪渲染后,可以获得当前面片的中间渲染结果。进一步地,通过复用历史公共信息表中的部分信息,可以获得当前面片的渲染结果。具体的信息复用部分包括S404和S406。
S404:渲染引擎800以当前待渲染内容包含的面片为单位,建立当前共享信息表。
以当前待渲染内容中的面片为单位,根据历史公共信息表,可以建立当前共享信息表。其中,历史公共信息表的获得方式将在后文中进行介绍。
可选的,也可以以当前待渲染内容中的可视面片为单位,建立当前共享信息表。具体的可视面片的获得方法类似渲染方法600中步骤S206中所设方法,故不再赘述。
下文以当前待渲染内容中的可视面片为单位,建立当前共享信息表为例进行介绍。
首先根据当前中间对应关系表中每一个像素对应的可视面片集,建立当前视平面对应的当前可视面片总集。具体地,将每一个像素对应的可视面片集中的面片提取出来,共同构成一个当前可视面片总集。
例如,如图11所示,当前可视面片总集至少包括面片1、2和n。其中,n小于或等于图6中的p。也即,小于或等于当前待渲染内容中的面片总数。
需要说明的是,不同像素对应的可视面片可以部分相同。例如,在图9中,像素1的可视面片集中包括面片1和2,而像素2的可视面片集中包括面片1、3和4。也即,在像素1和像素2中,面片1都属于可视面片。可选的,对于在当前可视面片总集中重复出现的面片,使其出现一次即可。
以当前可视面片总集中的面片为单位,可以建立当前共享信息表。当前共享信息表包括当前待渲染内容中所有的可视面片、各可视面片所对应的历史视平面对应的编号、各可视面片在各空间中渲染结果。共享信息表还包括更新后的面片渲染结果,初始值为0。
具体地,选取当前可视面片总集中的某一面片,搜索渲染引擎800中存储的历史公共信息表。当历史公共信息表中存有该面片的信息时,获取历史公共信息表中该面片对应的渲染结果。进一步地,获得当前共享信息表中该面片在各视平面中对应的的渲染结果。以此类推,可以为当前待渲染内容中所有可视面片建立一个当前共享信息表。
可选的,在渲染方法700中,考虑到可以在光线追踪之后通过复用历史公共信息表中的部分面片的渲染结果,可以在光线追踪时适当减少追踪光线数,以提升光线追踪的效率。关于是否需要减少追踪光线数可以根据实际情况决定。
需要说明的是,在光线追踪过程中,尤其是在上述的减少追踪光线数的情况下,可能出现步骤S402中的光线追踪数加上步骤S404中获取的历史面片的渲染结果中面片的追踪光线数仍然小于采样阈值的情况。在这样的情况下,可以考虑统计在该应用中该类情况出现的概率,根据所述概率决定是否采用该类方法。可选的,也可以对应用所涉的全部面片 进行分类,选取部分模型对应的面片执行这类渲染方法。
在这一类可能的实现方式中,如图11所示,当前待渲染内容中的面片1在第二、第三和第九空间中也是可视面片。同时,面片1在上述三个视平面对应的渲染结果分别是C1-2、C1-3和C1-9。
当历史公共信息表中存有该面片的信息时,提取历史公共信息表中该面片对应的渲染结果。进一步地,获得共享信息表中该面片的渲染结果。以此类推,可以为当前待渲染内容中的可视面片建立一个当前共享信息表。
S406,渲染引擎800根据当前共享信息表和当前面片的中间渲染结果,获得当前对应关系表和当前公共信息表。
根据和图11共享信息表中的各视平面对应的渲染结果,可以获得更新后的可视面片的渲染结果。
具体地,将当前面片的中间渲染结果和当前共享信息表中各视平面对应的渲染结果组成第二数列。根据所述第二数列的方差与第二方差阈值,更新可视面片的渲染结果。
当所述第二数列的方差小于或等于第二方差阈值时,将当前共享信息表中可视面片的渲染结果更新为第二数列的平均值。其中,所述平均值可以是算术平均值,也可以是加权平均值。进一步地,还可以更新当前中间对应关系表中面片对应的渲染结果。
如上所述,当前中间公共信息表可以是渲染方法600中的获得的当前公共信息表(图8)。换言之,当前面片的中间渲染结果可以根据图8确定。因此,下文以图8为例进行介绍。
在图8中,若由面片1在当前中间公共信息表中颜色C1和共享信息表中对应空间中的渲染结果C1-2、C1-3和C1-9共同组成第二数列,其方差小于或等于第二方差阈值,则将当前共享信息表中面片1对应的渲染结果去确定为C1'。其中,C1'等于C1、C1-2、C1-3和C1-9的平均值。同时,将图12所示的当前对应关系表中的可视面片渲染结果中的C1更新为C1'。
当所述第二数列的方差大于第二方差阈值时,对当前共享信息表进行更新。具体地,将图11中当前面片对应的空间编号和对应空间中的渲染结果清零。同时,不对当前中间对应关系表中可视面片的渲染结果进行更新。
例如,在图11中,若由面片1在如图8所示的当前中间公共信息表中颜色C1和当前共享信息表中对应空间中的渲染结果C1-2、C1-3和C1-9共同组成第二数列,其方差大于第二方差阈值,则将当前共享信息表中面片1对应的渲染结果更新为C1'。其中,C1'等于C1。同时,对图12中可视面片渲染结果中渲染结果C1不做修改(因为C1'等于C1)。
在获得了更新后的可视面片渲染结果的当前共享信息表后,可以获得图12中的当前对应关系表。进一步,可以获得当前待渲染内容的渲染结果。
例如,在图11中确定的可视面片1对应的更新后的存储渲染结果为C1'。至少将当前对应关系表中可视面片渲染结果中涉及面片1的渲染结果分别更新为C1'。进一步地,像素1的渲染结果P1'等于C1'和C2”的平均值。
在获得当前公共对应关系表的同时,可以根据面片的渲染结果对当前中间公共信息表(如图8)进行更新,获得当前公共信息表。
需要说明的是,在步骤S404中获取的历史公共信息表的获得方法同当前公共信息表 的获得方法。
在获得当前对应关系表后,可以获得当前待渲染内容的渲染结果。接下来是获得当前渲染结果部分。
S408,渲染引擎800获得当前待渲染内容的渲染结果。
在获得当前对应关系表中的可视面片的渲染结果后,可以被进一步确定各像素的渲染结果,并且获得步骤S400中获取的待渲染内容的渲染结果。
需要说明的是,S408获得的渲染结果可以用于在输出屏幕上直接进行输出,也可以作为下一步去噪操作的原始图像/数据。
本申请还提供一种渲染引擎800,如图13所示,包括:
通信单元802,用于在S200获取当前待渲染内容及相关参数。通信单元S202还用于在S206中接收设置的采样阈值。可选的,通信单元802还用于在S208中接收第一方差阈值。在S400中,通信单元802用于获取当前待渲染内容、相关参数、当前中间公共信息表和当前中间对应关系表。通信单元802还用于在S406中接收第二方差阈值。
存储单元804,用于存储在S200中获取的应用的模型数据。在渲染方法600中,存储单元804用于存储S202中获得的当前初始公共信息表和历史公共信息表。还用于存储在S204中获得的当前初始对应关系表。在S208中获得的当前对应关系表和当前公共信息表都被存储至存储单元804中。存储单元804还用于存储S210中获得的当前渲染结果。
在渲染方法700中,存储单元804用于存储S400中获得的当前待渲染内容、相关参数、当前中间公共信息表和当前中间对应关系表。在S402中获得的当前面片的中间渲染记过和在S404中获得的当前共享信息表都将被存储至存储单元804中。存储单元804还用于存储S406中获得的当前公共信息表和当前对应关系表。存储单元804还用存储S408中获得的当前待渲染内容的渲染结果。
处理单元806,在渲染方法600中,用于在S202中建立当前初始公共信息表和在S204中建立当前初始对应关系表。处理单元806还用于在S206中根据当前对应关系表、当前公共信息表对部分面片进行光线追踪。在S208中获得当前对应关系表和当前公共信息表的操作也由处理单元806执行。此外,S210中的当前渲染结果的获得操作也是由处理单元806执行。
在渲染方法700中,处理单元806用于在S402中对当前待渲染内容进行光线追踪并获得当前面片的中间渲染结果。在S404中建立当前共享信息表的操作也是由处理单元806执行。在步骤S406中,处理单元806用于根据当前共享信息表和当前面片的中间渲染结果,获得当前公共信息表和当前对应关系表。此外,S408中的当前渲染结果的获得操作也是由处理单元806执行。
具体地,处理单元806可以包括复用单元808和光线追踪单元810。
具体地,在渲染方法600中,复用单元808用于在S202中建立当前初始公共信息表和在S204中建立当前初始对应关系表。光线追踪单元810用于在S206中根据当前对应关系表、当前公共信息表对部分面片进行光线追踪。在S208中获得当前对应关系表和当前公共信息表的操作也由光线追踪单元810执行。此外,S210中的当前渲染结果的获得操作也是由光线追踪单元810执行。
在渲染方法700中,复用单元808用于在S402中对当前待渲染内容进行光线追踪并获得当前面片的中间渲染结果。在S404中建立当前共享信息表的操作也是由复用单元808执行。在步骤S406中,光线追踪单元810用于根据当前共享信息表和当前面片的中间渲染结果,获得当前公共信息表和当前对应关系表。此外,S408中的当前渲染结果的获得操作也是由光线追踪单元810执行。
可选的,通信单元202还用于返回S210和S408中的获得的当前渲染结果。
This application further provides a computing device 900. As shown in FIG. 14, the computing device includes a bus 902, a processor 904, a memory 906, and a communication interface 908. The processor 904, the memory 906, and the communication interface 908 communicate with each other through the bus 902. The computing device 900 may be a server or a terminal device. It should be understood that this application does not limit the numbers of processors and memories in the computing device 900.

The bus 902 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. Buses may be classified into an address bus, a data bus, a control bus, and the like. For ease of representation, only one line is used in FIG. 14, but this does not mean that there is only one bus or only one type of bus. The bus 902 may include a path for transferring information between the components of the computing device 900 (for example, the memory 906, the processor 904, and the communication interface 908).

The processor 904 may include any one or more of processors such as a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), or a digital signal processor (DSP).

In some possible implementations, the processor 904 may include one or more graphics processing units. The processor 904 is configured to execute the instructions stored in the memory 906 to implement the foregoing rendering method 600 or rendering method 700.

In some possible implementations, the processor 904 may include one or more central processing units and one or more graphics processing units. The processor 904 is configured to execute the instructions stored in the memory 906 to implement the foregoing rendering method 600 or rendering method 700.

The memory 906 may include a volatile memory, for example, a random access memory (RAM). The memory 906 may further include a non-volatile memory, for example, a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid state drive (SSD). The memory 906 stores executable program code, and the processor 904 executes the executable program code to implement the foregoing rendering method 600 or rendering method 700. Specifically, the memory 906 stores the instructions used by the rendering engine 800 to perform the rendering method 600 or the rendering method 700.

The communication interface 908 uses a transceiver module, for example but not limited to a network interface card or a transceiver, to implement communication between the computing device 900 and other devices or a communication network. For example, information 404, information 406, and the like may be obtained through the communication interface 908.
An embodiment of this application further provides a computing device cluster. As shown in FIG. 15, the computing device cluster includes at least one computing device 900. The computing devices included in the computing device cluster may all be terminal devices, may all be cloud servers, or may be partly cloud servers and partly terminal devices.

In the foregoing three deployment modes of the computing device cluster, the memory 906 of one or more computing devices 900 in the computing device cluster may store the same instructions used by the rendering engine 800 to perform the rendering method 600 or the rendering method 700.

In some possible implementations, one or more computing devices 900 in the computing device cluster may alternatively be configured to execute some of the instructions used by the rendering engine 800 to perform the rendering method 600 or the rendering method 700. In other words, a combination of one or more computing devices 900 may jointly execute the instructions used by the rendering engine 800 to perform the rendering method 600 or the rendering method 700.

It should be noted that the memories 906 of different computing devices 900 in the computing device cluster may store different instructions for performing some of the functions of the rendering method 600 or the rendering method 700.

FIG. 16 shows a possible implementation. As shown in FIG. 16, two computing devices 900A and 900B are connected through communication interfaces 908. The memory of the computing device 900A stores instructions for performing the functions of the communication unit 802, the reuse unit 808, and the ray tracing unit 810. The memory of the computing device 900B stores instructions for performing the functions of the storage unit 804. In other words, the memories 906 of the computing devices 900A and 900B jointly store the instructions used by the rendering engine 800 to perform the rendering method 600 or the rendering method 700.

The connection mode of the computing device cluster shown in FIG. 16 may take into account that the rendering method 600 or the rendering method 700 provided in this application requires a large amount of storage for the historical rendering results of the patches in historical frames. Therefore, the storage function may be assigned to the computing device 900B.

It should be understood that the functions of the computing device 900A shown in FIG. 16 may also be performed by multiple computing devices 900. Likewise, the functions of the computing device 900B may also be performed by multiple computing devices 900.

In some possible implementations, one or more computing devices in the computing device cluster may be connected through a network, where the network may be a wide area network, a local area network, or the like. FIG. 17 shows a possible implementation. As shown in FIG. 17, two computing devices 900C and 900D are connected through the network. Specifically, each computing device connects to the network through its communication interface. In this type of possible implementation, the memory 906 of the computing device 900C stores instructions for executing the communication unit 802 and the reuse unit 808. Meanwhile, the memory 906 of the computing device 900D stores instructions for executing the storage unit 804 and the ray tracing unit 810.

The connection mode of the computing device cluster shown in FIG. 17 may take into account that the rendering method 600 or the rendering method 700 provided in this application requires a large amount of ray tracing computation as well as storage of a large number of historical rendering results of the patches in historical frames. Therefore, the functions implemented by the ray tracing unit 810 and the storage unit 804 may be assigned to the computing device 900D.

It should be understood that the functions of the computing device 900C shown in FIG. 17 may also be performed by multiple computing devices 900. Likewise, the functions of the computing device 900D may also be performed by multiple computing devices 900.

An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium may be any usable medium that a computing device can store, or a data storage device, such as a data center, containing one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state drive), or the like. The computer-readable storage medium includes instructions that instruct a computing device to perform the foregoing rendering method 600 or 700 applied to the rendering engine 800.

An embodiment of this application further provides a computer program product containing instructions. The computer program product may be software or a program product that contains instructions and can run on a computing device or be stored in any usable medium. When the computer program product runs on at least one computing device, the at least one computing device is caused to perform the foregoing rendering method 600 or 700.

Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features thereof, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the protection scope of the technical solutions of the embodiments of the present invention.

Claims (16)

  1. A rendering method, wherein the rendering method is used for rendering an application, the application comprises at least one model, each model comprises a plurality of patches, and the method comprises:
    in a process of rendering a current frame of the application, determining a target patch corresponding to a pixel in a current view plane;
    obtaining a historical rendering result of the target patch obtained in a rendering process of a historical frame of the application; and
    calculating a current rendering result of the pixel based on the historical rendering result of the target patch.
  2. The method according to claim 1, wherein before the calculating a current rendering result of the pixel based on the historical rendering result of the target patch, the method further comprises:
    performing ray tracing rendering on the target patch to obtain an intermediate rendering result of the target patch; and
    the calculating a current rendering result of the pixel based on the historical rendering result of the target patch comprises:
    calculating a current rendering result of the target patch based on the historical rendering result of the target patch and the intermediate rendering result of the target patch; and
    calculating the current rendering result of the pixel based on the current rendering result of the target patch.
  3. The method according to claim 1, wherein the method further comprises:
    determining that a sampling quantity corresponding to the historical rendering result of the target patch is higher than a threshold; and
    the calculating a current rendering result of the pixel based on the historical rendering result of the target patch comprises:
    using the historical rendering result of the target patch as a current rendering result of the target patch, wherein the current rendering result of the target patch is used to calculate the current rendering result of the pixel.
  4. The method according to claim 1, wherein the method further comprises:
    determining that a sampling quantity corresponding to the historical rendering result of the target patch is not higher than a threshold; and
    the calculating a current rendering result of the pixel based on the historical rendering result of the target patch comprises:
    performing ray tracing rendering on the target patch to obtain an intermediate rendering result of the target patch;
    calculating a current rendering result of the target patch based on the intermediate rendering result of the target patch and the historical rendering result of the target patch; and
    calculating the rendering result of the pixel based on the current rendering result of the target patch.
  5. The method according to any one of claims 2 to 4, wherein the method further comprises:
    storing the current rendering result of the target patch.
  6. The method according to any one of claims 2 to 5, wherein the current view plane is generated in a first application, and the historical rendering result of the target patch is generated in a second application.
  7. The method according to any one of claims 2 to 5, wherein the historical rendering result of the target patch and the current view plane are generated in one application.
  8. The method according to any one of claims 1 to 7, wherein the historical rendering result of the target patch is obtained based on ray tracing rendering.
  9. A rendering engine, wherein the rendering engine comprises a processing unit and a storage unit, wherein:
    the processing unit is configured to: in a process of rendering a current frame of an application, determine a target patch corresponding to a pixel in a current view plane; obtain a historical rendering result of the target patch obtained in a rendering process of a historical frame of the application; and calculate a current rendering result of the pixel based on the historical rendering result of the target patch, wherein the application comprises at least one model, and each model comprises a plurality of patches; and
    the storage unit is configured to store the historical rendering result of the target patch obtained in the rendering process of the historical frame of the application.
  10. The rendering engine according to claim 9, wherein the processing unit is configured to: before calculating the current rendering result of the pixel based on the historical rendering result of the target patch, perform ray tracing rendering on the target patch to obtain an intermediate rendering result of the target patch; determine a current rendering result of the target patch based on the historical rendering result of the target patch and the intermediate rendering result of the target patch; and determine the current rendering result of the pixel based on the current rendering result of the target patch.
  11. The rendering engine according to claim 9, wherein the processing unit is configured to: determine that a sampling quantity corresponding to the historical rendering result of the target patch is higher than a threshold; and
    use the historical rendering result of the target patch as a current rendering result of the target patch, wherein the current rendering result of the target patch is used to determine the current rendering result of the pixel.
  12. The rendering engine according to claim 9, wherein the processing unit is configured to: determine that a sampling quantity corresponding to the historical rendering result of the target patch is not higher than a threshold; and
    perform ray tracing rendering on the target patch to obtain an intermediate rendering result of the target patch; determine a current rendering result of the target patch based on the intermediate rendering result of the target patch and the historical rendering result of the target patch; and determine the rendering result of the pixel based on the current rendering result of the target patch.
  13. The rendering engine according to any one of claims 10 to 12, wherein the storage unit is configured to
    store the current rendering result of the target patch.
  14. A computer program product, wherein the computer program product comprises instructions, and when the instructions are run by a cluster of computer devices, the cluster of computer devices is caused to perform the method according to any one of claims 1 to 8.
  15. A computer-readable storage medium, comprising computer program instructions, wherein when the computer program instructions are executed by a computing device cluster, the computing device cluster performs the method according to any one of claims 1 to 8.
  16. A computing device cluster, comprising at least one computing device, wherein each computing device comprises a processor and a memory; and
    the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device, so that the computing device cluster performs the method according to any one of claims 1 to 8.
PCT/CN2021/120584 2020-09-25 2021-09-26 一种渲染方法、装置及设备 WO2022063260A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21871629.8A EP4213102A4 (en) 2020-09-25 2021-09-26 RENDERING METHOD AND APPARATUS, AND DEVICE
US18/189,677 US20230230311A1 (en) 2020-09-25 2023-03-24 Rendering Method and Apparatus, and Device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202011023679 2020-09-25
CN202011023679.8 2020-09-25
CN202110080547.7 2021-01-21
CN202110080547.7A CN114255315A (zh) 2020-09-25 2021-01-21 一种渲染方法、装置及设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/189,677 Continuation US20230230311A1 (en) 2020-09-25 2023-03-24 Rendering Method and Apparatus, and Device

Publications (1)

Publication Number Publication Date
WO2022063260A1 true WO2022063260A1 (zh) 2022-03-31

Family

ID=80790874

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/120584 WO2022063260A1 (zh) 2020-09-25 2021-09-26 一种渲染方法、装置及设备

Country Status (4)

Country Link
US (1) US20230230311A1 (zh)
EP (1) EP4213102A4 (zh)
CN (1) CN114255315A (zh)
WO (1) WO2022063260A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023197689A1 (zh) * 2022-04-12 2023-10-19 华为云计算技术有限公司 一种数据处理的方法、系统和设备
EP4362478A1 (en) * 2022-10-28 2024-05-01 Velox XR Limited Apparatus, method, and computer program for network communications

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210030147A (ko) * 2019-09-09 2021-03-17 삼성전자주식회사 3d 렌더링 방법 및 장치
CN115115767A (zh) * 2022-06-01 2022-09-27 合众新能源汽车有限公司 一种场景渲染方法、装置、电子设备及存储介质
CN116485966A (zh) * 2022-10-28 2023-07-25 腾讯科技(深圳)有限公司 视频画面渲染方法、装置、设备和介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8633927B2 (en) * 2006-07-25 2014-01-21 Nvidia Corporation Re-render acceleration of frame with lighting change

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100265250A1 (en) * 2007-12-21 2010-10-21 David Koenig Method and system for fast rendering of a three dimensional scene
CN106127843A (zh) * 2016-06-16 2016-11-16 福建数博讯信息科技有限公司 三维虚拟场景的渲染方法和装置
CN111340928A (zh) * 2020-02-19 2020-06-26 杭州群核信息技术有限公司 一种结合光线跟踪的Web端实时混合渲染方法、装置及计算机设备
CN111275803A (zh) * 2020-02-25 2020-06-12 北京百度网讯科技有限公司 3d模型渲染方法、装置、设备和存储介质
CN111627116A (zh) * 2020-05-29 2020-09-04 联想(北京)有限公司 图像渲染控制方法、装置及服务器

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP4213102A4 *
ZHAO KAI-YUAN: "Real-Time Ambient Occlusion Based on RTX", XIANDAI JISUANJI (ZHUANYE BAN)/ MODERN COMPUTER (PROFESSIONAL EDITION), XIANDAI JISUANJI ZAZHISHE, CHINA, 25 December 2019 (2019-12-25), China , pages 87 - 91, XP055918126, ISSN: 1007-1423 *

Also Published As

Publication number Publication date
EP4213102A1 (en) 2023-07-19
CN114255315A (zh) 2022-03-29
US20230230311A1 (en) 2023-07-20
EP4213102A4 (en) 2024-04-03

Similar Documents

Publication Publication Date Title
WO2022063260A1 (zh) 一种渲染方法、装置及设备
WO2021228031A1 (zh) 渲染方法、设备以及系统
US9508191B2 (en) Optimal point density using camera proximity for point-based global illumination
CN112184873B (zh) 分形图形创建方法、装置、电子设备和存储介质
US11232628B1 (en) Method for processing image data to provide for soft shadow effects using shadow depth information
CN110634178A (zh) 面向数字博物馆的三维场景精细化重建方法
WO2022156451A1 (zh) 一种渲染方法及装置
CN114863014B (zh) 一种三维模型的融合显示方法及设备
WO2022105641A1 (zh) 渲染方法、设备以及系统
CN116758208A (zh) 全局光照渲染方法、装置、存储介质及电子设备
JP7500017B2 (ja) 複数のデバイス間での3dオブジェクトの視覚化および操作を容易にする方法および装置
JP6852224B2 (ja) 全視角方向の球体ライトフィールドレンダリング方法
WO2023088047A1 (zh) 一种渲染方法及装置
CN116664752A (zh) 基于图案化光照实现全景显示的方法、系统及存储介质
US20230206567A1 (en) Geometry-aware augmented reality effects with real-time depth map
KR20230022153A (ko) 소프트 레이어링 및 깊이 인식 복원을 사용한 단일 이미지 3d 사진
WO2023029424A1 (zh) 一种对应用进行渲染的方法及相关装置
US11830140B2 (en) Methods and systems for 3D modeling of an object by merging voxelized representations of the object
WO2023109582A1 (zh) 处理光线数据的方法、装置、设备和存储介质
Chen et al. A quality controllable multi-view object reconstruction method for 3D imaging systems
AU2020449562B2 (en) Geometry-aware augmented reality effects with a real-time depth map
Hall et al. Networked and multimodal 3d modeling of cities for collaborative virtual environments
WO2023197689A1 (zh) 一种数据处理的方法、系统和设备
CN117974856A (zh) 渲染方法、计算设备及计算机可读存储介质
CN118172473A (zh) 确定光照信息的方法、装置、设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21871629; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2021871629; Country of ref document: EP; Effective date: 20230414)