WO2021228031A1 - Rendering method, apparatus and system - Google Patents

Rendering method, apparatus and system

Info

Publication number
WO2021228031A1
WO2021228031A1 (PCT/CN2021/092699)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual scene
rendering
ray tracing
grids
grid
Application number
PCT/CN2021/092699
Other languages
French (fr)
Chinese (zh)
Inventor
李力 (Li Li)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2021228031A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/06 Ray-tracing

Definitions

  • This application relates to the field of computer technology, and in particular to a rendering method, device, and system.
  • Rendering refers to the process of using software to generate images from a three-dimensional model.
  • the three-dimensional model is a description of a three-dimensional object in a strictly defined language or data structure, which includes geometry, viewpoint, texture, and lighting information.
  • the image is a digital image or a bitmap image.
  • rendering is similar to an "artist's rendering of a scene".
  • rendering is also used to describe "the process of calculating the effects in a video editing file to generate the final video output". The process of rendering an image according to the model requires a large amount of calculation and consumes a lot of computing resources.
  • the present application provides a rendering method, which can effectively improve rendering efficiency.
  • a rendering method including:
  • the remote rendering platform obtains a virtual scene, the virtual scene including a light source and at least one three-dimensional model;
  • the remote rendering platform performs forward ray tracing on the grids of the three-dimensional models of the virtual scene according to the light source of the virtual scene, wherein the surfaces of the three-dimensional models of the virtual scene are segmented to obtain the grids of the three-dimensional models of the virtual scene;
  • the remote rendering platform generates the pre-ray tracing results of the grids of the three-dimensional models of the virtual scene according to the forward ray tracing results of the grids of the three-dimensional models of the virtual scene;
  • the remote rendering platform stores the pre-ray tracing results of the grids of the three-dimensional models of the virtual scene
  • the remote rendering platform receives a first rendering request, and determines, according to the first rendering request, the first observable grids of the three-dimensional models of the virtual scene;
  • the remote rendering platform determines the rendering result of the first observable grid from the stored pre-ray tracing results of the grids of the three-dimensional models of the virtual scene.
  • the remote rendering platform generates a first rendered image according to the rendering result of the first observable grid, or the remote rendering platform sends the rendering result of the first observable grid to the first terminal device, so that the first terminal device generates the first rendered image according to the rendering result of the first observable grid.
  • the remote rendering platform receives a second rendering request, and determines, according to the second rendering request, the second observable grids of the three-dimensional models of the virtual scene; the remote rendering platform then determines the rendering result of the second observable grid from the stored pre-ray tracing results of the grids of the three-dimensional models of the virtual scene.
  • the pre-ray tracing results of the grids of each 3D model of the virtual scene are pre-calculated and stored in the remote rendering platform.
  • when different users need to render different perspectives of the same virtual scene, each of them only needs to query the corresponding results in the pre-ray tracing results separately, and there is no need to perform separate calculations, which greatly reduces the amount of calculation.
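The shared-precomputation idea above can be sketched in a few lines: per-grid ray tracing results are computed once and stored, and each user's request only looks up the grids observable from that viewpoint. All names here (`precompute_grid_results`, `render_request`, the placeholder "intensity") are illustrative assumptions, not the patent's actual implementation.

```python
def precompute_grid_results(grids):
    """Compute a (stubbed) forward ray tracing result once per grid."""
    # A real renderer would trace rays from the light source here;
    # we just tag each grid with a placeholder intensity value.
    return {grid_id: {"intensity": 1.0 / (grid_id + 1)} for grid_id in grids}

def render_request(store, observable):
    """Serve one user's request by looking up, not recomputing, results."""
    return {g: store[g] for g in observable if g in store}

# One shared precomputation over all grids of the virtual scene ...
store = precompute_grid_results(range(6))
# ... serves many viewpoints without re-tracing anything.
view_a = render_request(store, [0, 2, 4])   # user A's observable grids
view_b = render_request(store, [1, 2, 5])   # user B's observable grids
```

Grid 2 is observable from both perspectives, and both requests receive the same stored result rather than triggering a second ray tracing pass.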
  • the first rendering request is issued by the first terminal according to an operation of the first user, and the first rendering request carries the perspective of the first user in the virtual scene.
  • before the remote rendering platform performs forward ray tracing on the grids of the three-dimensional models of the virtual scene according to the light source of the virtual scene, the method further includes:
  • the remote rendering platform obtains the forward ray tracing parameters set by the provider of the virtual scene or the user who issued the first rendering request, and the forward ray tracing parameters include at least one of the following: the number of samples per unit area and the number of light bounces;
  • the remote rendering platform performs forward ray tracing on the grids of the three-dimensional models of the virtual scene according to the light source of the virtual scene, including:
  • the remote rendering platform performs forward ray tracing on the grid of each three-dimensional model of the virtual scene according to the light source of the virtual scene and the forward ray tracing parameter.
  • the user can set the forward ray tracing parameters according to actual needs. If the requirements for the rendering effect are higher, the forward ray tracing parameters can be set higher; conversely, they can be set lower.
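The user-settable forward ray tracing parameters named above (samples per unit area, number of light bounces) can be pictured as a small configuration object; the class name and default values below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ForwardTracingParams:
    samples_per_unit_area: int = 64   # higher -> smoother lighting, more cost
    max_light_bounces: int = 2        # higher -> more indirect light, more cost

# A quality-sensitive provider raises both; a cost-sensitive one lowers them.
high_quality = ForwardTracingParams(samples_per_unit_area=1024, max_light_bounces=8)
low_cost = ForwardTracingParams(samples_per_unit_area=16, max_light_bounces=1)
```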
  • before the remote rendering platform stores the pre-ray tracing results of the grids of the three-dimensional models of the virtual scene, the method further includes:
  • the remote rendering platform performs reverse ray tracing of multiple preset viewing angles on part or all of the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene;
  • the remote rendering platform generates the pre-ray tracing results of the grids of the three-dimensional models of the virtual scene according to the forward ray tracing results of the grids of the three-dimensional models of the virtual scene, including:
  • the remote rendering platform generates the pre-ray tracing results of the grids of the three-dimensional models of the virtual scene according to the forward ray tracing results of the grids of the three-dimensional models of the virtual scene and the reverse ray tracing results of the multiple preset viewing angles of part or all of the grids of the three-dimensional models of the virtual scene.
  • the rendering result of a mesh includes the reverse ray tracing result and the forward ray tracing result of the mesh. If part of the first observable meshes has no reverse ray tracing result, the rendering result of that part of the first observable meshes includes the forward ray tracing result but not the reverse ray tracing result.
  • the rendered image can have a stronger sense of reality and a better effect.
  • the remote rendering platform obtains preset viewing angle parameters set by the provider of the virtual scene or the user.
  • the preset viewing angle parameter may be the number of preset viewing angles, or may be multiple preset viewing angles.
  • the parameters set by the provider of the virtual scene or the user may also include the number of light bounces and so on.
  • the user can set preset viewing angle parameters according to actual needs.
  • the larger the preset viewing angle parameter, the better the effect of the rendered image.
  • the remote rendering platform performs reverse ray tracing of multiple preset viewing angles on part or all of the meshes of each three-dimensional model of the virtual scene according to the light source of the virtual scene, including:
  • the remote rendering platform performs reverse ray tracing of a plurality of preset viewing angles on the grids with smooth materials of each three-dimensional model of the virtual scene.
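The merge rule described above can be sketched as follows: a mesh's rendering result always contains its forward ray tracing result, and additionally contains a reverse (view-dependent) result when one was precomputed for that mesh and viewing angle. The function and data layout are illustrative assumptions.

```python
def mesh_result(mesh_id, forward, reverse, view):
    """Combine precomputed components into one mesh rendering result."""
    result = {"forward": forward[mesh_id]}
    # Reverse tracing is only precomputed for some meshes (e.g. smooth
    # materials) and some preset viewing angles; fall back to forward only.
    if mesh_id in reverse and view in reverse[mesh_id]:
        result["reverse"] = reverse[mesh_id][view]
    return result

forward = {1: 0.8, 2: 0.5}
reverse = {1: {"front": 0.9}}          # mesh 2 has no reverse result
r1 = mesh_result(1, forward, reverse, "front")  # both components
r2 = mesh_result(2, forward, reverse, "front")  # forward only
```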
  • a rendering node including: a rendering application server and a rendering engine,
  • the rendering application server is configured to obtain a virtual scene, the virtual scene including a light source and at least one three-dimensional model;
  • the rendering engine is configured to perform forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene, wherein the surfaces of the three-dimensional models of the virtual scene are segmented to obtain the grids of the three-dimensional models; generate the pre-ray tracing results of the grids of the three-dimensional models of the virtual scene according to the forward ray tracing results of the grids of the three-dimensional models of the virtual scene; and store the pre-ray tracing results of the grids of the three-dimensional models of the virtual scene;
  • the rendering application server is configured to receive a first rendering request, and determine, according to the first rendering request, the first observable grids of the three-dimensional models of the virtual scene;
  • the rendering engine is configured to determine the rendering result of the first observable grid from the stored pre-ray tracing results of the grids of the three-dimensional models of the virtual scene.
  • the remote rendering platform generates a first rendered image according to the rendering result of the first observable grid, or the remote rendering platform sends the rendering result of the first observable grid to the first terminal device, so that the first terminal device generates the first rendered image according to the rendering result of the first observable grid.
  • the remote rendering platform receives a second rendering request, and determines, according to the second rendering request, the second observable grids of the three-dimensional models of the virtual scene; the remote rendering platform then determines the rendering result of the second observable grid from the stored pre-ray tracing results of the grids of the three-dimensional models of the virtual scene.
  • the first rendering request is issued by the first terminal according to an operation of the first user, and the first rendering request carries the perspective of the first user in the virtual scene.
  • the rendering application server is also used to obtain the forward ray tracing parameters set by the provider of the virtual scene or the user who issued the first rendering request, and the forward ray tracing parameters include at least one of the following: the number of samples per unit area and the number of light bounces;
  • the rendering engine is further configured to perform forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene and the forward ray tracing parameters.
  • the rendering engine is further configured to perform reverse ray tracing of multiple preset viewing angles on part or all of the meshes of each three-dimensional model of the virtual scene according to the light source of the virtual scene;
  • the rendering engine is further configured to generate the pre-ray tracing results of the grids of the three-dimensional models of the virtual scene according to the forward ray tracing results of the grids of the three-dimensional models of the virtual scene and the reverse ray tracing results of the multiple preset viewing angles of part or all of the grids of the three-dimensional models of the virtual scene.
  • the remote rendering platform obtains preset viewing angle parameters set by the provider of the virtual scene or the user.
  • the preset viewing angle parameter may be the number of preset viewing angles, or may be multiple preset viewing angles.
  • the parameters set by the provider of the virtual scene or the user may also include the number of light bounces and so on.
  • the rendering engine is further configured to perform reverse ray tracing of multiple preset viewing angles on the grids with smooth materials of each three-dimensional model of the virtual scene according to the light source of the virtual scene.
  • a rendering node including a memory and a processor, and the processor executes a program in the memory to execute the method provided in the first aspect and possible designs thereof.
  • the rendering node may include one or more computers, and each computer executes part or all of the steps in the method provided in the first aspect and possible designs.
  • a computer-readable storage medium which includes instructions that, when the instructions run on a computing node, cause the computing node to execute the method provided in the first aspect and possible designs thereof.
  • a rendering system including: a terminal device, a network device, and a remote rendering platform.
  • the terminal device communicates with the remote rendering platform through the network device, wherein the remote rendering platform is configured to implement the method provided by the first aspect and its possible designs.
  • a computer program product including instructions, which when the instructions run on a rendering node, cause the rendering node to execute the method provided in the first aspect and possible designs thereof.
  • FIGS. 1A and 1B are schematic diagrams of the structure of the rendering system provided by the present application.
  • FIG. 2 is a schematic diagram of a rendered image obtained by observing a virtual scene from different angles provided by the present application
  • FIGS. 3A to 3C are schematic diagrams of ray tracing rendering provided by this application.
  • FIG. 4 is another schematic diagram of ray tracing rendering provided by this application.
  • FIG. 5 is a schematic diagram of the comparison between the calculation amount when each user performs the calculation separately and the calculation amount when the common part is extracted for unified calculation provided by this application;
  • FIGS. 6A and 6B are schematic diagrams of the grids of various three-dimensional models provided by this application.
  • FIGS. 7A to 7D are schematic diagrams of forward ray tracing rendering provided by the present application.
  • FIG. 8 is another schematic diagram of forward ray tracing rendering provided by this application.
  • Fig. 9 is a schematic diagram of the number of light rays passing per unit area of the light source provided by the present application.
  • FIG. 10 is a schematic diagram of various preset viewing angles of reverse ray tracing rendering provided by the present application.
  • FIGS. 11A and 11B are schematic diagrams of reverse ray tracing rendering provided by the present application.
  • FIG. 12 is a schematic diagram of obtaining a set of object surfaces observable from a user's perspective provided by the present application.
  • FIGS. 13A and 13B are comparison diagrams where the number of light samples per grid in the three-dimensional model provided by the present application is 1 and n, respectively;
  • FIG. 14 is a schematic flowchart of a method for generating pre-ray tracing results proposed by this application.
  • FIG. 15 is a schematic flowchart of a rendering method proposed by this application.
  • FIG. 16 is a schematic diagram of the structure of a rendering node proposed in this application.
  • FIG. 17 is a schematic structural diagram of another rendering node proposed in this application.
  • FIG. 1A is a schematic structural diagram of a rendering system related to the present application.
  • the rendering system of the present application is used to obtain a 2D image, that is, a rendered image, by rendering the 3D models of a virtual scene through a rendering method.
  • the rendering system of the present application may include: one or more terminal devices 10, a network device 20, and a remote rendering platform 30.
  • the remote rendering platform 30 may be specifically deployed on a public cloud.
  • the remote rendering platform 30 and the terminal device 10 are generally deployed in different data centers.
  • the terminal device 10 may be a device that needs to display rendered images in real time, for example, a virtual reality (VR) device used for flight training, a computer used for virtual games, a smart phone used for a virtual mall, etc., which is not specifically limited here.
  • the terminal device can be a device with high configuration and high performance (for example, multi-core, high clock speed, large memory, etc.), or a device with low configuration and low performance (for example, single core, low clock speed, small memory, etc.).
  • the terminal device 10 may include hardware, an operating system, and a rendering application client.
  • the network device 20 is used to transmit data between the terminal device 10 and the remote rendering platform 30 through a communication network of any communication mechanism/communication standard.
  • the communication network can be a wide area network, a local area network, a point-to-point connection, etc., or any combination thereof.
  • the remote rendering platform 30 includes one or more remote rendering nodes, and each remote rendering node includes rendering hardware, a virtualization service, a rendering engine, and a rendering application server from bottom to top.
  • the rendering hardware includes computing resources, storage resources, and network resources.
  • Computing resources can use a heterogeneous computing architecture, for example, a central processing unit (CPU) + graphics processing unit (GPU) architecture, a CPU + AI chip architecture, a CPU + GPU + AI chip architecture, etc.
  • Storage resources can include storage devices such as memory and video memory.
  • Network resources can include network cards, port resources, address resources, and so on.
  • the virtualization service is a service that virtualizes the resources of the rendering node into resources such as vCPUs through virtualization technology, and flexibly isolates independent resources to run the user's application according to the user's needs.
  • virtualization services may include virtual machine (VM) services and container (container) services.
  • VMs and containers can run rendering engines and rendering application servers.
  • the rendering engine is used to implement the rendering algorithm.
  • the rendering application server is used to call the rendering engine to complete the rendering of the rendered image.
  • the rendering application client on the terminal device 10 and the rendering application server of the remote rendering platform 30 are collectively referred to as a rendering application.
  • Common rendering applications can include: game applications, VR applications, movie special effects, animations, and so on.
  • the user inputs operation instructions through the rendering application client, the rendering application client sends the operation instructions to the rendering application server, the rendering application server calls the rendering engine to generate rendering results, and sends the rendering results to the rendering application client. Then, the rendering application client converts the rendering result into an image and presents it to the user.
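The client-server-engine loop described above can be pictured with a toy sketch: the client forwards an operation instruction to the server, the server calls the rendering engine, and the result flows back to the client, which turns it into an image. All class and method names are illustrative assumptions, not an actual API of the platform.

```python
class RenderingEngine:
    """Stands in for the cloud provider's rendering engine."""
    def render(self, instruction):
        return f"result({instruction})"

class RenderingApplicationServer:
    """The server does not render itself; it calls the rendering engine."""
    def __init__(self, engine):
        self.engine = engine
    def handle(self, instruction):
        return self.engine.render(instruction)

class RenderingApplicationClient:
    """Runs on the terminal device; presents images to the user."""
    def __init__(self, server):
        self.server = server
    def operate(self, instruction):
        result = self.server.handle(instruction)
        # Convert the rendering result into an image for the user.
        return f"image({result})"

client = RenderingApplicationClient(RenderingApplicationServer(RenderingEngine()))
image = client.operate("move_camera")
```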
  • the rendering application server and the rendering application client may be provided by a rendering application provider, and the rendering engine may be provided by a cloud service provider.
  • the rendering application can be a game application.
  • the game developer of the game application installs the game application server on the remote rendering platform provided by the cloud service provider, and provides the game application client for the user to download through the Internet and install on the user's terminal device.
  • the cloud service provider also provides a rendering engine, which can provide computing power for game applications.
  • the rendering application client, the rendering application server, and the rendering engine may all be provided by a cloud service provider.
  • the rendering system shown in FIG. 1B further includes a management device 40.
  • the management device 40 may be a device provided by a third party other than the user's terminal device and the remote rendering platform 30 of the cloud service provider.
  • the management device 40 may be a device provided by a game developer, and the game developer can manage the rendering application through the management device 40. It can be understood that the management device 40 may be set on the remote rendering platform, or may be set outside the remote rendering platform, which is not specifically limited here.
  • take the rendering system shown in FIG. 1A or FIG. 1B as an example.
  • user A uses terminal device 1 and user B uses terminal device 2.
  • terminal device 1 needs to display the rendered image of the virtual scene generated from the perspective of user A
  • terminal device 2 needs to display a rendered image of the virtual scene generated from the perspective of user B.
  • the terminal device 1 and the terminal device 2 may independently use the resources of the remote rendering platform 30 to perform ray tracing rendering on the virtual scene, so as to obtain rendered images of different angles. Specifically:
  • the terminal device 1 sends a first rendering request to the remote rendering platform 30 through the network device 20, and the remote rendering platform 30, according to the first rendering request, calls the rendering engine and uses ray tracing rendering to separately perform ray tracing on the virtual scene from the perspective of user A, thereby obtaining a rendered image of the virtual scene generated from the perspective of user A.
  • the terminal device 2 sends a second rendering request to the remote rendering platform 30 through the network device 20, and the remote rendering platform 30, according to the second rendering request, calls the rendering engine and uses ray tracing rendering to separately perform ray tracing on the virtual scene from the perspective of user B, thereby obtaining a rendered image of the virtual scene generated from the perspective of user B.
  • Ray tracing rendering is a rendering method that generates a rendered image by tracing the path of light emitted from the viewpoint of an observer (such as a camera or human eye) toward each pixel of the rendered image into the virtual scene.
  • the virtual scene includes a light source and a three-dimensional model.
  • in the ray tracing rendering method, starting from the observer's viewpoint (once the viewpoint is determined, the viewing angle is naturally determined), the rays that can reach the light source are traced backwards.
  • the virtual scene has only one light source 111 and one opaque sphere 112.
  • a light ray is emitted from the viewpoint E of the camera 113 (taking the camera 113 as the observer as an example), is projected onto the pixel point O1 of the rendered image 114, continues to a point P1 of the opaque sphere 112, and is then reflected to the light source L. At this time, the light intensity and color of the point P1 determine the light intensity and color of the pixel O1.
  • another ray is emitted from the viewpoint E of the camera 113, is projected to another pixel O2 in the rendered image 114, continues to a point P2 of the opaque sphere 112, and is then reflected toward the light source L, but the opaque sphere 112 obstructs the path between the point P2 and the light source L.
  • since the point P2 is located in the shadow of the opaque sphere 112, the light intensity of the pixel point O2 is zero, and its color is black.
  • the virtual scene has only one light source 121 and one transparent sphere 122.
  • a light ray is emitted from the viewpoint E of the camera 123, is projected onto the pixel point O3 on the rendered image 124, continues to a point P3 of the transparent sphere 122, and is then refracted to the light source L. The light intensity and color of the point P3 determine the light intensity and color of the pixel O3.
  • the virtual scene has only one light source 131 and one transparent thin body 132.
  • a light ray is emitted from the viewpoint E of the camera 133, is projected to the pixel point O4, continues to a point P4 of the transparent thin body 132, and is then transmitted to the light source L. The light intensity and color of the point P4 determine the light intensity and color of the pixel O4.
  • the reflection scene in Figure 3A, the refraction scene in Figure 3B, and the transmission scene in Figure 3C are the simplest scenes.
  • Figure 3A assumes that there is only one opaque sphere in the virtual scene
  • Figure 3B assumes that there is only one transparent sphere in the virtual scene.
  • Figure 3C it is assumed that there is only one transparent thin body in the virtual scene.
  • the virtual scene is far more complicated than Figures 3A to 3C.
  • the light will be reflected, refracted and transmitted multiple times, which will cause the tracing of the light to become very complicated and consume a lot of computing resources.
  • the virtual scene includes a light source 140, two transparent spheres 141 and 142, and an opaque object 143.
  • a light ray is emitted from the viewpoint E of the camera 144, projected to a pixel O4 in the rendered image 145, and continues to a point P1 of the transparent sphere 141.
  • a shadow test line S1 is made from P1 to the light source L; if no object obstructs it along the way, the local light illumination model can be used to calculate the light intensity contributed by the light source at P1 in the direction of the line of sight E, as the local light intensity of that point.
  • at the point P1, a reflected ray R1 and a refracted ray T1 are also generated, and they also contribute to the light intensity at P1, so they are traced as well.
  • if the shadow test line is obstructed, the light intensity of the light source in this direction is set to zero, and the tracing of this light direction is ended.
  • the refracted light T1 propagates inside the transparent object 141, continues onward, and intersects the transparent object 142 at the point P2. Since this point is inside the transparent object 142, its local light intensity can be assumed to be zero.
  • at the point P2, a reflected ray R2 and a refracted ray T2 are generated.
  • the ray in the direction of the reflected ray R2 can continue to be traced recursively to calculate its light intensity; this will not be expanded here.
  • the refracted ray T2 intersects the opaque object 143 at the point P3, and a shadow test line S3 is made between P3 and the light source L. There is no object obstruction, so the local light intensity there is calculated. Because the object 143 is opaque, only the direction of the reflected ray R3 needs to be traced further, and its intensity is combined with the local light intensity to obtain the light intensity at P3.
  • the tracing of the reflected ray R3 is similar to the previous process, and the algorithm proceeds recursively. The above process is repeated until the ray meets the tracing termination condition. In this way, the light intensity of the pixel O4, that is, its corresponding color value, is obtained.
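The recursive tracing loop walked through above can be condensed into a small sketch: at each hit, a shadow test yields the local intensity, and the (attenuated) contribution of the next traced ray is added recursively until a termination condition, here a depth limit, is met. The scene representation and attenuation factor are toy stand-ins, not the patent's algorithm.

```python
REFLECT_ATTENUATION = 0.5  # assumed attenuation per traced bounce
MAX_DEPTH = 3              # tracing termination condition

def trace(hit_chain, depth=0):
    """hit_chain: list of (local_intensity, shadowed) per successive hit."""
    if depth >= MAX_DEPTH or depth >= len(hit_chain):
        return 0.0  # termination condition reached
    local, shadowed = hit_chain[depth]
    intensity = 0.0 if shadowed else local  # blocked shadow test -> zero
    # Recursively add the attenuated contribution of the next traced ray.
    return intensity + REFLECT_ATTENUATION * trace(hit_chain, depth + 1)

# P1 lit, P2 inside a transparent object (local intensity zero), P3 lit:
pixel = trace([(0.6, False), (0.0, False), (0.4, False)])
```

With these numbers the pixel intensity is 0.6 + 0.5 * (0.0 + 0.5 * 0.4) = 0.7, illustrating how each deeper bounce contributes less.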
  • the rendering system including only terminal device 1 and terminal device 2 is taken as an example.
  • the number of terminal devices may be far more than two, and users of different terminal devices often have different perspectives. Therefore, as the number of users increases, the number of rendered images from different perspectives that need to be generated for the same virtual scene also increases, and the amount of calculation becomes very large.
  • when these terminal devices perform ray tracing rendering of the same virtual scene to obtain rendered images from different angles, many calculations may be repeated, resulting in unnecessary waste of computing resources.
  • the rendering system proposed in this application extracts the common calculation part from the ray tracing rendering of the same virtual scene at different angles, and each user only needs to calculate the view-related private part separately, thereby effectively saving the computing resources required for rendering and improving rendering efficiency.
  • FIG. 5 is a schematic diagram of the comparison between the calculation amount when each user performs the calculation separately and the calculation amount when the common part is extracted for unified calculation.
  • the left side of FIG. 5 is the calculation amount when each user performs the calculation separately
  • the right side of FIG. 5 is the calculation amount when the common part is extracted for unified calculation.
  • the rendering engine of the rendering system can perform image rendering through the following rendering algorithms:
  • the light generated by the light source shines on the three-dimensional model.
  • the light source can be a point light source, a line light source, a surface light source, and so on.
  • the shape of the three-dimensional model can be various, for example, it can be a sphere, a cone, a curved object, a flat object, an object with an irregular surface, and so on.
  • the remote rendering platform divides the surface of the 3D model in the virtual scene into multiple grids.
  • the shapes of the meshes of the three-dimensional models with different shapes may be different.
  • the shapes of the mesh of a sphere and the mesh of a curved object may be completely different.
  • the meshes will be described below in conjunction with specific embodiments.
  • the grid can be expressed as a center point P(r, θ, φ) together with the points in its neighborhood, which constitute an approximate square with slightly bulging sides on the surface of the sphere.
  • a three-dimensional orthogonal coordinate system is constructed with the center of the sphere as the origin, where the three-dimensional orthogonal coordinate system includes an x-axis, a y-axis, and a z-axis.
  • r is the length of the line segment OP from the center of the sphere O to the center point P, θ is the angle between the line segment OP and the positive z-axis, and φ is the angle between the projection of the line segment OP on the xoy plane and the x-axis.
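The spherical parameterization above can be made concrete: a grid center P(r, θ, φ) on the sphere converts to Cartesian coordinates in the usual way. The function below is an illustrative sketch; the sample values are assumptions.

```python
import math

def grid_center_to_cartesian(r, theta, phi):
    """theta: angle of OP from the +z axis; phi: angle of OP's xoy-plane
    projection from the +x axis (matching the description above)."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return (x, y, z)

# A grid center on the equator of a unit sphere, lying on the +x axis:
p = grid_center_to_cartesian(1.0, math.pi / 2, 0.0)
```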
  • the grid can be expressed as a square on the curved surface represented by P(u,t).
  • a two-dimensional orthogonal coordinate system is constructed with a set origin of the curved surface, where the coordinate system includes u-axis and t-axis.
  • u represents the offset in one direction from the origin of the curved surface, and t represents the offset in the other, orthogonal direction.
  • the grid is the square formed by four vertices in the (u,t) coordinate system, as shown in FIG. 6B.
  • the shape of the grid described above is merely a specific example, and in actual applications, the grid may also have other shapes, which is not specifically limited here.
  • the size of the grid can be set as needed: the higher the required accuracy of the rendered image, the smaller the grid size should be set.
  • the material of the aforementioned mesh can be smooth or rough.
  • smooth materials are those with specular reflection or transmission, such as mirrors, metal surfaces, water droplets, and so on.
  • Rough materials are materials that have diffuse reflection, such as natural wood and cloth.
  • Forward ray tracing refers to the forward tracing of the light transmission process in the virtual scene starting from the light source.
  • the remote rendering platform performs forward ray tracing on the light generated by the light source in the virtual scene, so as to obtain the light intensity of each grid in the 3D model in the virtual scene.
  • forward ray tracing mainly covers four scenes: reflection, refraction, transmission, and direct illumination, which will be described below with reference to FIGS. 7A-7D and specific embodiments.
  • the virtual scene has only one light source 211, an opaque sphere 212, and an opaque sphere 213.
  • a ray emitted from the light source 211 is projected onto a point P 1 of the opaque sphere 212 and is then reflected onto the grid of the opaque sphere 213 whose center point is Q 1 . Therefore, the local illumination model can be used to calculate the intensity of the light generated by the light source 211 at the point P 1 of the opaque sphere 212, and the ray reflected by the opaque sphere 212 can then be traced further to obtain the intensity of the light it generates on the grid of the opaque sphere 213 whose center point is Q 1 .
  • the virtual scene has only one light source 221, a transparent sphere 222, and an opaque sphere 223.
  • a ray emitted from the light source 221 is projected onto a point P 2 of the transparent sphere 222 and is then refracted onto the grid of the opaque sphere 223 whose center point is Q 2 . Therefore, the local illumination model can be used to calculate the intensity of the light generated by the light source 221 at the point P 2 of the transparent sphere 222, and the ray refracted by the transparent sphere 222 can then be traced further to obtain the intensity of the light it generates on the grid of the opaque sphere 223 whose center point is Q 2 .
  • the virtual scene has only one light source 231, a transparent thin body 232, and an opaque sphere 233.
  • a ray emitted from the light source 231 is projected onto a point P 3 of the transparent thin body 232 and is then transmitted onto the grid of the opaque sphere 233 whose center point is Q 3 . Therefore, the local illumination model can be used to calculate the intensity of the light generated by the light source 231 at the point P 3 of the transparent thin body 232, and the ray transmitted by the transparent thin body 232 can then be traced further to obtain the intensity of the light it generates on the grid of the opaque sphere 233 whose center point is Q 3 .
  • the virtual scene has only one light source 241 and an opaque sphere 243.
  • a ray emitted from the light source 241 is projected directly onto the grid of the opaque sphere 243 whose center point is Q 4 . Therefore, the intensity of the light generated by the light source 241 on that grid can be calculated by the local illumination model.
  • the reflection scene in FIG. 7A, the refraction scene in FIG. 7B, the transmission scene in FIG. 7C, and the direct scene in FIG. 7D are the simplest scenes; an actual virtual scene is usually far more complicated.
  • the light intensity of each grid of a 3D model in the virtual scene is calculated from the intensity generated by all rays reflected onto the grid, the intensity generated by all rays refracted onto the grid, the intensity generated by all rays transmitted onto the grid, and the intensity generated by all rays directly hitting the grid; for example, it can be the sum of these intensities.
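The combination rule just described (summing the reflected, refracted, transmitted, and direct contributions to one grid) can be sketched as follows; a plain sum is used here, as in the example:

```python
def grid_light_intensity(reflected, refracted, transmitted, direct):
    """Combine the intensities contributed by all rays reflected, refracted,
    transmitted, and directly projected onto one grid. The combination here
    is a plain sum, as in the example in the text."""
    return sum(reflected) + sum(refracted) + sum(transmitted) + sum(direct)
```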
  • assume that the virtual scene includes a first light source 251, a second light source 252, a transparent sphere 253, a transparent thin body 254, a first opaque sphere 255, and a second opaque sphere 256.
  • the first ray generated by the first light source 251 is projected onto the point P 1 of the transparent sphere 253 and is then refracted onto the grid of the second opaque sphere 256 whose center point is Q; the second ray generated by the first light source 251 is projected onto the point P 2 of the transparent thin body 254 and is then transmitted onto the grid of the second opaque sphere 256 whose center point is Q; the third ray generated by the second light source 252 directly hits the grid of the second opaque sphere 256 whose center point is Q; the fourth ray generated by the second light source 252 is projected onto the point P 3 of the first opaque sphere 255 and is then reflected onto the grid of the second opaque sphere 256 whose center point is Q.
  • the light intensity of the grid where the point Q is located is calculated from the intensity produced by the first ray being refracted to the point Q, the intensity produced by the second ray being transmitted to the point Q, the intensity produced by the third ray directly hitting the point Q, and the intensity produced by the fourth ray being reflected to the point Q; for example, it can be the sum of these intensities.
  • the above examples all assume that the light source emits a single ray and that the number of ray bounces does not exceed 5; in practice, the number of rays and the number of bounces can be other values, and there is no specific limitation here. Since the number of rays emitted by a light source is unlimited while computing resources are limited, it is normally impossible to perform forward ray tracing on all rays; therefore, the rays emitted by the light source need to be sampled.
  • the parameters involved in the sampling mainly include the number of samples per unit space and the number of ray bounces (bounce) and so on.
  • the following will take the number of samples per unit area (SPUA) and the number of light bounces as examples for detailed introduction.
  • SPUA defines the number of rays sampled per unit area. In the example shown in FIG. 9, a spherical surface S is constructed with the light source L as its center and divided into multiple unit areas; the SPUA is then the number of rays generated by the light source L that pass through the unit area A. In theory, the number of rays generated by the light source L passing through a unit area is infinite, but in the actual tracing process only a limited number of rays can be traced. The larger the SPUA, the more rays are traced and the better the image quality, but the greater the amount of calculation; conversely, the smaller the SPUA, the fewer rays are traced and the worse the image quality, but the smaller the amount of calculation.
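Assuming the sphere S around the light source is divided evenly into unit areas, the total number of rays to trace for a given SPUA could be estimated as follows (an illustrative helper, not part of the application):

```python
import math

def rays_to_sample(spua, radius, unit_area):
    """Total number of rays to trace from a point light source when the
    surrounding sphere S of the given radius is divided into unit areas and
    `spua` rays are sampled per unit area (illustrative helper)."""
    sphere_area = 4.0 * math.pi * radius * radius
    return spua * math.ceil(sphere_area / unit_area)
```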
  • the number of ray bounces is the sum of the maximum number of reflections and the maximum number of refractions for tracing the ray before the forward tracing of the ray is terminated. Because in a complex scene, light will be reflected and refracted multiple times. In theory, the number of times the light is reflected and refracted can be infinite, but in the actual tracking process, it is impossible to track the light infinitely. Therefore, some tracking termination conditions need to be given. In the application, there can be the following termination conditions: the light will attenuate after many reflections and refractions, and the light contributes little to the light intensity of the viewpoint; or, the number of ray bounces, that is, the tracking depth, is greater than a certain value.
  • the more ray bounces, the more effective rays can be traced, the better the refraction effect between multiple transparent objects, the more realistic the result, and the better the image quality, but the greater the amount of calculation; conversely, the fewer ray bounces, the fewer effective rays can be traced, the worse the refraction effect between multiple transparent objects, the more distorted the result, and the worse the image quality, but the smaller the amount of calculation.
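A minimal sketch of forward tracing with the two termination conditions described above (maximum bounce count and intensity attenuation); the `intersect` and `deposit` callables are assumptions standing in for a real intersection routine and per-grid storage, not the application's API:

```python
def forward_trace(ray, intersect, deposit, intensity=1.0,
                  depth=0, max_bounces=5, min_intensity=1e-3):
    """Forward-trace one ray. Tracing stops when the bounce count exceeds
    max_bounces or when the carried intensity has attenuated below
    min_intensity -- the two termination conditions described in the text.
    `intersect(ray)` must return None or (grid_id, [(next_ray, factor), ...]);
    `deposit(grid_id, intensity)` accumulates intensity on a grid."""
    if depth > max_bounces or intensity < min_intensity:
        return
    hit = intersect(ray)
    if hit is None:
        return
    grid_id, bounces = hit
    deposit(grid_id, intensity)            # accumulate intensity on the grid
    for next_ray, factor in bounces:       # reflected/refracted/transmitted rays
        forward_trace(next_ray, intersect, deposit, intensity * factor,
                      depth + 1, max_bounces, min_intensity)
```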
  • the above sampling parameters are only specific examples; in actual applications, other sampling parameters may also be used, which is not specifically limited here.
  • Reverse ray tracing rendering is the process of tracing the rays entering the grids of the 3D models from a preset angle of view back to the light source in the virtual scene.
  • the preset angle of view is an angle from which the user observes the virtual scene. For example, when the user views the virtual scene head-on, the preset angle of view may be (90, 90); when the user views it from 45 degrees to the left, the preset angle of view may be (45, 0); when the user views it from 45 degrees to the right, the preset angle of view may be (135, 0); and so on.
  • each grid has an open hemispherical space facing its normal direction. A ray entering this hemispherical space can be expressed as a ray whose end point is the grid center P and whose starting point is any point O (for example, O 1 , O 2 , or O 3 ) on the spherical surface of that space; reverse ray tracing is performed on each grid from the different preset viewing angles.
  • the preset viewing angle mentioned here refers to the ray OP(θ, φ) in the hemispherical coordinate system, where 0 ≤ θ ≤ 180 and 0 ≤ φ ≤ 360.
  • the space is continuous, but the preset viewing angles can be quantized according to computing power and accuracy requirements. For example, a preset viewing angle can be set every 1 degree, or every 2 degrees. It can be understood that the greater the number of preset viewing angles, the smaller the quantization error and the higher the accuracy.
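Quantizing the preset viewing angles at a fixed step, as described above, might look like the following sketch (the angle ranges follow the description; the step size is a hypothetical parameter):

```python
def preset_viewing_angles(step_deg):
    """Enumerate the quantized preset viewing angles (theta, phi), one every
    step_deg degrees, with 0 <= theta <= 180 and 0 <= phi < 360 as in the
    description. A smaller step means more preset angles, smaller
    quantization error, and higher accuracy."""
    return [(theta, phi)
            for theta in range(0, 181, step_deg)
            for phi in range(0, 360, step_deg)]
```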
  • the virtual scene has only one light source 311 and one opaque sphere 312.
  • a ray is emitted from a preset viewing angle, projected onto a point P 1 of a grid of the opaque sphere 312, and then reflected to the light source 311.
  • the light intensity of the light generated by the light source 311 on the grid of the opaque sphere 312 can be calculated by the local light illumination model.
  • the virtual scene has only one light source 321 and one transparent sphere 322.
  • a ray is emitted from a preset angle of view, projected onto a point P 2 of a grid of the transparent sphere 322, then refracted to another point Q 2 of the transparent sphere 322, and finally refracted to the light source 321.
  • therefore, the local illumination model can be used to calculate the intensity of the light generated by the light source 321 at the point Q 2 , and then to calculate the intensity that the ray refracted from the point Q 2 generates on the grid whose center point is P 2 .
  • the reflection scene in Figure 11A and the refraction scene in Figure 11B are the simplest scenes.
  • Figure 11A assumes that there is only one opaque sphere in the virtual scene
  • Figure 11B assumes that there is only one transparent sphere in the virtual scene.
  • the virtual scene is far more complicated than that of Figures 11A to 11B.
  • the light intensity of each grid of a 3D model in the virtual scene is calculated from the intensity of all rays reflected onto the grid, the intensity of all rays refracted onto the grid, the intensity of all rays transmitted onto the grid, and the intensity of all rays directly hitting the grid; for example, it can be the sum of these intensities.
  • the pre-ray tracing results of the grids of each 3D model can then be obtained. Assuming that there are n grids T 1 , T 2 ,..., T n in the virtual scene, forward ray tracing yields their respective forward ray tracing results F 1 , F 2 ,..., F n , which are used as the pre-ray tracing results of the n grids T 1 , T 2 ,..., T n .
  • the remote rendering platform associates the pre-ray tracing results F 1 , F 2 ,..., F n of n grids T 1 , T 2 ,..., T n and n grids T 1 , T 2 ,..., T n Stored in the light intensity table 1.
  • the light intensity table 1 may be the table shown in Table 1:
  • when both forward ray tracing and reverse ray tracing are required, the forward ray tracing results and reverse ray tracing results are obtained after both are performed, and the forward and reverse ray tracing results of the grids of each 3D model are then processed to obtain the pre-ray tracing results of the grids of each three-dimensional model.
  • specifically, forward ray tracing yields the forward ray tracing results F 1 , F 2 ,..., F n of the n grids T 1 , T 2 ,..., T n . Performing reverse ray tracing on the n grids T 1 , T 2 ,..., T n from the first angle yields reverse ray tracing results, denoted B 1 (1) , B 2 (1) ,..., B n (1) ; likewise, performing reverse ray tracing from the k-th angle yields the reverse ray tracing results B 1 (k) , B 2 (k) ,..., B n (k) .
  • the forward ray tracing results F 1 , F 2 ,..., F n are linearly superimposed with the reverse ray tracing results obtained from the first angle to obtain the pre-ray tracing results of the n grids T 1 , T 2 ,..., T n at the first angle; they are linearly superimposed with the reverse ray tracing results obtained from the second angle to obtain the pre-ray tracing results at the second angle; and so on.
  • in this way, the pre-ray tracing results of the n grids T 1 , T 2 ,..., T n at the first angle, at the second angle, ..., and at the k-th angle are obtained. It should be understood that in some embodiments, normalization processing may also be performed. In order to reduce the space required for storing the pre-ray tracing results of the grids of each three-dimensional model, they can be stored in a sparse matrix manner.
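The linear superposition described above can be sketched as follows, assuming equal weights (the application also allows, for example, weighted addition); here `forward[i]` stands for the forward result of grid T i+1 and `reverse[k][i]` for its reverse result from angle k+1:

```python
def pre_ray_tracing_results(forward, reverse):
    """Linearly superimpose the forward results F_1..F_n with the reverse
    results from each preset angle. forward[i] is the forward result of grid
    T_{i+1}; reverse[k][i] is the reverse result of grid T_{i+1} from angle
    k+1. Returns results[k][i], the pre-ray tracing result of grid T_{i+1}
    at angle k+1 (equal weights assumed)."""
    return [[f + b for f, b in zip(forward, reverse_k)]
            for reverse_k in reverse]
```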
  • the remote rendering platform stores the pre-ray tracing results of the n grids T 1 , T 2 ,..., T n and the n grids T 1 , T 2 ,..., T n in the light intensity table 2 in association with each other.
  • the light intensity table 2 may be the table shown in Table 2:
  • the mesh that needs to be reversed ray tracing can be the surface of an object that has reflection and refraction phenomena, such as a mirror surface, a transparent object, and the like.
  • the remote rendering platform stores the pre-ray tracing results of the n grids T 1 , T 2 ,..., T n and the n grids T 1 , T 2 ,..., T n in the light intensity table 3 in association with each other.
  • the light intensity table 3 may be the table shown in Table 3:
  • the direct addition of the pre-ray tracing results is taken as an example for description. In practical applications, the pre-ray tracing results may also be weighted addition, etc., which is not specifically limited here.
  • the remote rendering platform or terminal device adopts a projection-type intersection method and extracts, from the pre-calculated pre-ray tracing results of the grids of each 3D model, the rendering results of the corresponding observable grids, finally generating the rendered image required by the user. How the rendering result of an observable grid is obtained will be described in detail below in conjunction with FIG. 12 and related specific embodiments.
  • the observer 511 observes the virtual scene from the viewpoint E, and the rendered image 512 generated by the observation has m pixels.
  • a ray is emitted from the viewpoint E and projected onto the first pixel of the rendered image 512. Assume that the ray continues to one of the grids, T 1 , of a three-dimensional model in the virtual scene, and that its first angle of incidence into the grid is the first angle. Then, according to the first angle of incidence being the first angle, the pre-ray tracing result of the grid at the same angle can be found in the pre-ray tracing results of the grids of each three-dimensional model shown in Table 2, and this pre-ray tracing result is used as the rendering result of the first pixel.
  • a ray is emitted from the viewpoint E and projected onto the second pixel of the rendered image 512. Assume that the ray continues to one of the grids of a three-dimensional model, and that its second angle of incidence into the grid is the second angle. Then, according to the second angle of incidence being the second angle, the pre-ray tracing result of the grid at the same angle can be found in the pre-ray tracing results of the grids of each three-dimensional model shown in Table 2, and this pre-ray tracing result is used as the rendering result of the second pixel.
  • a ray is emitted from the viewpoint E and projected onto the m-th pixel of the rendered image 512. Assume that the ray continues to one of the grids, T n-9 , of a three-dimensional model in the virtual scene, and that its m-th angle of incidence into the grid is the k-th angle. Then, according to the m-th angle of incidence being the k-th angle, the pre-ray tracing result of the grid at the same angle can be found in the pre-ray tracing results of the grids of each three-dimensional model shown in Table 2, and this pre-ray tracing result is used as the rendering result of the m-th pixel.
  • the above example assumes that the first angle of incidence is exactly equal to the first angle, the second angle of incidence is exactly equal to the second angle, and the m-th angle of incidence is exactly equal to the k-th angle. However, because the preset viewing angles are usually quantized, an angle of incidence may not be exactly equal to any preset angle of view.
  • for example, the first angle of incidence may fall between the first angle and the second angle. In this case, it can be rounded up, rounded down, or handled in another similar manner; there is no specific limitation here.
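One way to handle an angle of incidence that falls between two preset angles is to round to the nearest stored angle (rounding up or down are equally valid per the description); the table layout `table[(grid_id, angle)]` below is an assumption about how the Table-2 results are keyed, introduced for illustration:

```python
def lookup_rendering_result(table, grid_id, incidence_deg, preset_degs):
    """Look up the pre-ray tracing result of a grid for an angle of incidence
    that may fall between two preset angles, rounding to the nearest preset
    angle. table[(grid_id, angle)] is an assumed storage layout."""
    nearest = min(preset_degs, key=lambda a: abs(a - incidence_deg))
    return table[(grid_id, nearest)]
```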
  • in some embodiments, multiple samples per pixel (SPP, samples per pixel) may also be used.
  • a ray is emitted from the viewpoint E and projected onto the i-th pixel of the rendered image 512. Assume that the ray continues to one of the grids, T 3 , of a three-dimensional model in the virtual scene, and that its first angle of incidence into the grid is the first angle. Then, according to the first angle of incidence being the first angle, the pre-ray tracing result of the grid at the same angle can be found in the pre-ray tracing results of the grids of each three-dimensional model shown in Table 2.
  • another ray is emitted from the viewpoint E and projected onto the i-th pixel of the rendered image 512. Assume that this ray enters the grid at an angle of incidence equal to the third angle; then, according to that angle, the pre-ray tracing result of the grid at the third angle can be found in the pre-ray tracing results of the grids of each three-dimensional model shown in Table 2. The average of the two pre-ray tracing results is used as the rendering result of the i-th pixel.
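Averaging the per-sample results looked up for one pixel (SPP > 1), as in the i-th pixel example above, is then simply:

```python
def pixel_rendering_result(sample_results):
    """Average the pre-ray tracing results found for the rays sampled
    through one pixel (SPP > 1), as in the i-th pixel example."""
    return sum(sample_results) / len(sample_results)
```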
  • the rendering result of the pixel point B is determined by the grid where its projection point on the opaque object 2 is located; that is, the light intensity of the pixel point B is relatively high. Therefore, although the pixel point A and the pixel point B are adjacent pixels, their rendering results will be very different, resulting in a jagged (aliasing) effect.
  • the whole process can be divided into at least two parts:
  • in the first part, the user uploads the virtual scene to the remote rendering platform in advance, and the remote rendering platform executes the calculation of the above-mentioned common part to obtain the light intensities of the multiple grids and stores them for future use.
  • in the second part, after receiving a rendering request sent by a terminal device, the remote rendering platform performs the calculation of the private part to obtain the rendered image.
  • the two parts of the process will be described in detail below in conjunction with specific examples.
  • the example shown in FIG. 14 mainly illustrates the process of the first part
  • the example shown in FIG. 15 mainly illustrates the process of the second part.
  • FIG. 14 is a schematic flowchart of a method for generating a pre-ray tracing result proposed by the present application. As shown in Figure 14, this method is implemented on the basis of the rendering system shown in Figure 1A or Figure 1B, and includes the following steps:
  • the remote rendering platform acquires a virtual scene.
  • the virtual scene may be sent by the terminal device to the remote rendering platform, or sent by the management device to the remote rendering platform.
  • the virtual scene may have a unique identifier, that is, the identifier of the virtual scene.
  • for example, the identifier of virtual scene 1 is S 1 , the identifier of virtual scene 2 is S 2 , and so on.
  • for the definition of the virtual scene, the light source in the virtual scene, the three-dimensional models in the virtual scene, the grids in each three-dimensional model, and so on, refer to the description above; details are not repeated here.
  • the remote rendering platform performs forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene, to obtain the forward ray tracing result of the grids of each three-dimensional model.
  • in some embodiments, before step S102 is performed, the method further includes: the provider of the virtual scene or the user who issued the rendering request sets forward ray tracing parameters.
  • step S102 may include: the remote rendering platform obtains the forward ray tracing parameters set by the provider of the virtual scene or the user who issued the rendering request, and analyzes the three-dimensional models of the virtual scene according to the light source and forward ray tracing parameters of the virtual scene. The grid performs forward ray tracing, and the result of forward ray tracing is obtained.
  • the forward ray tracing parameter includes at least one of the following: the number of samples per unit area, the number of ray bounces, and so on.
  • for forward ray tracing, refer to the relevant content above, which will not be repeated here.
  • the remote rendering platform performs reverse ray tracing, from multiple preset angles of view, on part or all of the grids of the three-dimensional models of the virtual scene, to obtain the reverse ray tracing results of the grids of the three-dimensional models.
  • in some embodiments, before step S103 is performed, the method further includes: the provider of the virtual scene or the user who issued the rendering request sets reverse ray tracing parameters.
  • step S103 may include: the remote rendering platform obtains the reverse ray tracing parameters set by the provider of the virtual scene or the user who issued the rendering request, and performs a calculation of each three-dimensional model of the virtual scene according to the light source and reverse ray tracing parameters of the virtual scene. Part or all of the meshes are reversed ray tracing, and the reverse ray tracing results of the meshes of each three-dimensional model are obtained.
  • the reverse ray tracing parameter includes at least one of the following: a preset viewing angle parameter, the number of light bounces, and so on.
  • the preset viewing angle parameter may be the number of preset viewing angles, or may be multiple preset viewing angles.
  • the remote rendering platform determines the pre-ray tracing result of the mesh of each three-dimensional model according to the forward ray tracing result and the reverse ray tracing result.
  • specifically, the remote rendering platform determines the rendering result of the first observable grid according to the reverse ray tracing result of the first observable grid and the forward ray tracing result of the first observable grid; for details, refer to the process of determining the pre-ray tracing results of the grids of each three-dimensional model described above, which will not be repeated here.
  • in other embodiments of step S104, the pre-ray tracing result of the grid of each three-dimensional model can be determined directly from the forward ray tracing result; the description is not repeated here.
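The overall flow of steps S101-S104 can be sketched as follows; the `forward_trace` and `reverse_trace` callables and the `scene.grids()` interface are illustrative assumptions, not the platform's actual API:

```python
def precompute_pre_ray_tracing(scene, forward_trace, reverse_trace, preset_angles):
    """Sketch of steps S101-S104: forward-trace every grid of the scene
    (S102), reverse-trace it from each preset viewing angle (S103), and
    linearly superimpose the two results per angle (S104). All arguments
    are illustrative stand-ins for the platform's internals."""
    results = {}
    for grid in scene.grids():
        f = forward_trace(scene, grid)              # S102: forward result
        for angle in preset_angles:                 # S103: per preset angle
            b = reverse_trace(scene, grid, angle)
            results[(grid, angle)] = f + b          # S104: linear superposition
    return results
```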
  • FIG. 15 is a schematic flowchart of a rendering method proposed by the present application. As shown in FIG. 15, the rendering method is implemented on the basis of the rendering system shown in FIG. 1A or FIG. 1B, and includes the following steps:
  • the first terminal device sends a first rendering request to the remote rendering platform through a network device.
  • the remote rendering platform receives the first rendering request sent by the first terminal device through the network device.
  • the first rendering request includes the identifier of the virtual scene and the perspective of the first user, wherein the identifier of the virtual scene is a unique identifier of the virtual scene, and the perspective of the first user is the angle at which the first user observes the virtual scene.
  • the second terminal device sends a second rendering request to the remote rendering platform through the network device.
  • the remote rendering platform receives the second rendering request sent by the second terminal device through the network device.
  • the second rendering request includes an identifier of the virtual scene and a perspective of a second user, wherein the perspective of the second user is an angle at which the second user observes the virtual scene.
  • the remote rendering platform receives the first rendering request, determines the first observable grids of the three-dimensional models of the virtual scene according to the first rendering request, and determines the rendering results of the first observable grids from the stored pre-ray tracing results of the grids of the three-dimensional models of the virtual scene, so as to generate the first rendered image.
  • for the manner in which the remote rendering platform determines the first observable grids according to the first rendering request and determines their rendering results from the stored pre-ray tracing results of the grids of each three-dimensional model of the virtual scene, refer to the calculation of the private part above, which will not be repeated here.
  • the remote rendering platform receives the second rendering request, determines the second observable grids of the three-dimensional models of the virtual scene according to the second rendering request, and determines the rendering results of the second observable grids from the stored pre-ray tracing results of the grids of the three-dimensional models of the virtual scene, so as to generate the second rendered image.
  • for the manner in which the remote rendering platform determines the second observable grids according to the second rendering request and determines their rendering results from the stored pre-ray tracing results of the grids of each three-dimensional model of the virtual scene, refer to the calculation of the private part above, which will not be repeated here.
  • the remote rendering platform sends the first rendered image to the first terminal device through the network device.
  • the first terminal device receives the first rendered image sent by the remote rendering platform through the network device.
  • the remote rendering platform sends the second rendered image to the second terminal device through the network device.
  • the second terminal device receives the second rendered image sent by the remote rendering platform through the network device.
  • the above step sequence is only a specific example; in other embodiments, the execution sequence can also be step S201 -> step S203 -> step S205 -> step S202 -> step S204 -> step S206, and so on; there is no specific limitation here.
  • in the above embodiment, the first rendered image and the second rendered image are generated by the remote rendering platform. In other embodiments, the remote rendering platform can instead send the pre-ray tracing results of the grids of each 3D model to the first terminal device and the second terminal device respectively, so that the first terminal device generates the first rendered image according to those results and the second terminal device generates the second rendered image according to them; there is no specific limitation here.
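A hedged sketch of how the platform might serve one rendering request (steps S203/S204); all helper names (`observable_grids`, `compose_image`, the request and table layouts) are illustrative assumptions rather than the application's interfaces:

```python
def handle_rendering_request(request, stored_results, observable_grids, compose_image):
    """Given a rendering request carrying the virtual-scene identifier and the
    user's viewing angle, determine the observable grids, look up their stored
    pre-ray tracing results, and compose the rendered image to return to the
    terminal device. stored_results is keyed (scene_id, grid, angle)."""
    scene_id, view_angle = request["scene_id"], request["view_angle"]
    grids = observable_grids(scene_id, view_angle)
    per_grid = {g: stored_results[(scene_id, g, view_angle)] for g in grids}
    return compose_image(per_grid)
```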
  • FIG. 16 is a schematic structural diagram of a rendering node proposed in this application.
  • the rendering node includes: a rendering application server 610 and a rendering engine 620.
  • the rendering application server 610 is configured to obtain a virtual scene, the virtual scene including a light source and at least one three-dimensional model;
  • the rendering engine 620 is configured to: perform forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene, wherein the surfaces of the three-dimensional models of the virtual scene are segmented to obtain the grids of the three-dimensional models of the virtual scene; generate the pre-ray tracing results of the grids of the three-dimensional models of the virtual scene according to the forward ray tracing results of those grids; and store the pre-ray tracing results of the grids of each three-dimensional model of the virtual scene;
  • the rendering application server 610 is configured to receive a first rendering request, and determine the first observable grid of each three-dimensional model of the virtual scene in the first rendering request;
  • the rendering engine 620 is configured to determine the rendering result of the first observable grid from the stored pre-ray tracing results of the grids of the three-dimensional models of the virtual scene.
  • for simplicity, this embodiment does not introduce again the definition of the virtual scene, the light source in the virtual scene, the three-dimensional models in the virtual scene, the grids in each three-dimensional model, forward ray tracing, the pre-ray tracing results of the grids of each three-dimensional model, the first observable grid, or the rendering result of the first observable grid; refer to the descriptions above.
  • the rendering application server 610 and the rendering engine 620 in this embodiment can be set in the rendering nodes of FIG. 1A and FIG. 1B.
  • the rendering node in this embodiment can also perform the steps performed by the rendering node in FIG. 14 and the steps performed by the rendering node in FIG. 15.
  • Figure 17 is a schematic structural diagram of a rendering node. As shown in FIG. 17, the rendering node includes: a processing system 910, a first memory 920, a smart network card 930, and a bus 940.
  • the processing system 910 may adopt a heterogeneous structure, that is, include one or more general-purpose processors and one or more special-purpose processors, such as GPUs or AI chips. The general-purpose processor may be any type of device capable of processing electronic instructions, including a central processing unit (CPU), a microprocessor, a microcontroller, a main processor, a controller, an application-specific integrated circuit (ASIC), and so on.
  • the general-purpose processor executes various types of digital storage instructions, such as software or firmware programs stored in the first memory 920.
  • the general-purpose processor may be an x86 processor or the like.
  • the general-purpose processor sends commands to the first memory 920 through the physical interface to complete storage-related tasks.
  • the commands that the general-purpose processor can provide include read commands, write commands, copy commands, and erase commands.
  • the command may specify an operation related to a specific page and block of the first memory 920.
  • the special-purpose processors are used to complete complex operations such as image rendering.
  • the first memory 920 may include volatile memory such as random access memory (RAM), and may also include non-volatile memory such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD).
  • the first memory 920 stores program codes for implementing the rendering engine and rendering application server.
  • the smart network card 930 is also called a network interface controller, a network interface card, or a local area network (LAN) adapter. Each smart network card 930 has a unique MAC address, which is burned into the read-only memory chip by the manufacturer of the smart network card 930 during production.
  • the smart network card 930 includes a processor 931, a second memory 932, and a transceiver 933.
  • the processor 931 is similar to a general-purpose processor, but the performance requirement of the processor 931 may be lower than that of a general-purpose processor. In a specific embodiment, the processor 931 may be an ARM processor or the like.
  • the second memory 932 may also be a flash memory, an HDD, or an SSD, and the storage capacity of the second memory 932 may be smaller than that of the first memory 920.
  • the transceiver 933 may be used to receive and send messages, and upload the received messages to the processor 931 for processing.
  • the smart network card 930 may also include a plurality of ports, and the ports may be any one or more of the three interface types of a thick cable interface, a thin cable interface, and a twisted pair interface.
  • for the definitions of the virtual scene in this embodiment, as well as the light source in the virtual scene, the three-dimensional models in the virtual scene, the grids of each three-dimensional model, forward ray tracing, the pre-ray tracing results of the grids of each three-dimensional model, the first observable grids, and the rendering results of the first observable grids, refer to the foregoing embodiments, where these concepts are introduced; they are not repeated here.
  • the program codes of the rendering application server 610 and the rendering engine 620 in FIG. 16 may be set in the first memory 920 in FIG. 17.
  • the rendering node in this embodiment can also perform the steps performed by the rendering node in FIG. 14 and the steps performed by the rendering node in FIG. 15.
  • the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented by software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)).

Abstract

A rendering method, comprising: a remote rendering platform acquiring a virtual scene, wherein the virtual scene comprises a light source and at least one three-dimensional model; the remote rendering platform performing, according to the light source of the virtual scene, forward ray tracing on grids of each three-dimensional model of the virtual scene, wherein the surface of the three-dimensional model of the virtual scene is segmented to obtain the grids of the three-dimensional model of the virtual scene; the remote rendering platform generating pre-ray-tracing results of the grids of each three-dimensional model of the virtual scene according to forward ray tracing results of the grids of each three-dimensional model of the virtual scene; the remote rendering platform storing the pre-ray-tracing results of the grids of each three-dimensional model of the virtual scene; the remote rendering platform receiving a first rendering request and determining first observable grids, on each three-dimensional model of the virtual scene, for the first rendering request; and the remote rendering platform determining rendering results of the first observable grids from the stored pre-ray-tracing results of the grids of each three-dimensional model of the virtual scene.

Description

Rendering method, apparatus, and system

Technical field

This application relates to the field of computer technology, and in particular to a rendering method, apparatus, and system.

Background

Rendering refers to the process of using software to generate images from a three-dimensional model, where a three-dimensional model is a description of a three-dimensional object in a strictly defined language or data structure that includes geometry, viewpoint, texture, and lighting information. The image is a digital image or a bitmap image. The term "rendering" is analogous to "an artist's rendering of a scene"; it is also used to describe "the process of computing the effects in a video editing file to produce the final video output". The process of rendering an image from a model requires a large amount of computation and consumes substantial computing resources.
Summary of the invention

To solve the above problem, this application provides a rendering method that can effectively improve rendering efficiency.
In a first aspect, a rendering method is provided, including:

a remote rendering platform acquiring a virtual scene, the virtual scene including a light source and at least one three-dimensional model;

the remote rendering platform performing forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene, where the grids of the three-dimensional models of the virtual scene are obtained by segmenting the surfaces of the three-dimensional models;

the remote rendering platform generating pre-ray tracing results for the grids of each three-dimensional model of the virtual scene according to the forward ray tracing results of those grids;

the remote rendering platform storing the pre-ray tracing results of the grids of each three-dimensional model of the virtual scene;

the remote rendering platform receiving a first rendering request and determining the first observable grids, on each three-dimensional model of the virtual scene, for the first rendering request;

the remote rendering platform determining rendering results of the first observable grids from the stored pre-ray tracing results of the grids of each three-dimensional model of the virtual scene.
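The method steps above can be sketched as a small cache-based pipeline. This is a minimal illustration, not the claimed implementation: the class name, the dictionary-based scene description, and the string stand-in for an irradiance value are all assumptions introduced for the example.

```python
from dataclasses import dataclass, field


@dataclass
class RemoteRenderingPlatform:
    """Illustrative sketch of the first-aspect method (not the actual implementation)."""

    # grid id -> pre-ray tracing result (here just a stand-in string value)
    pre_trace_cache: dict = field(default_factory=dict)

    def forward_ray_trace(self, light_source, grid_id):
        # Placeholder: a real engine would cast rays from the light source
        # and record the light arriving at each surface grid.
        return f"irradiance({light_source}, {grid_id})"

    def precompute(self, scene):
        # Forward ray tracing is driven by the scene's light source; the
        # pre-ray tracing result of each grid is derived from it and stored.
        for model in scene["models"]:
            for grid_id in model["grids"]:
                self.pre_trace_cache[grid_id] = self.forward_ray_trace(
                    scene["light_source"], grid_id)

    def handle_render_request(self, observable_grids):
        # Serving a request is a lookup into the stored results,
        # not a fresh ray-tracing pass.
        return {g: self.pre_trace_cache[g] for g in observable_grids}


platform = RemoteRenderingPlatform()
scene = {"light_source": "sun",
         "models": [{"grids": ["g1", "g2"]}, {"grids": ["g3"]}]}
platform.precompute(scene)
view = platform.handle_render_request(["g1", "g3"])  # lookup only
```

The request handler never re-traces: it only selects the subset of precomputed grid results visible from the requested viewpoint.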
In some possible designs, the remote rendering platform generates a first rendered image according to the rendering results of the first observable grids; alternatively, the remote rendering platform sends the rendering results of the first observable grids to a first terminal device, so that the first terminal device generates the first rendered image according to the rendering results of the first observable grids.

In some possible designs, the remote rendering platform receives a second rendering request and determines the second observable grids, on each three-dimensional model of the virtual scene, for the second rendering request; the remote rendering platform determines the rendering results of the second observable grids from the stored pre-ray tracing results of the grids of each three-dimensional model of the virtual scene.

In the above solution, the pre-ray tracing results of the grids of each three-dimensional model of the virtual scene are computed in advance and stored on the remote rendering platform. When different users need to render the same virtual scene from different viewpoints, the corresponding results only need to be looked up in the pre-ray tracing results rather than computed separately for each user, which greatly reduces the amount of computation.
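The reuse described above can be made concrete with a counter: in this sketch, the expensive forward pass runs once per grid during precomputation, and each subsequent per-user request only reads the shared store. All names here are illustrative assumptions, not part of the application.

```python
trace_calls = 0


def forward_trace(grid):
    # Stand-in for an expensive forward ray-tracing pass over one grid.
    global trace_calls
    trace_calls += 1
    return f"result-{grid}"


# Precompute once for the whole scene (four grids in this toy example).
cache = {g: forward_trace(g) for g in ["g1", "g2", "g3", "g4"]}


def render(observable):
    # Each user's render request only reads the shared cache.
    return [cache[g] for g in observable]


user_a = render(["g1", "g2"])   # user A's viewpoint
user_b = render(["g2", "g3"])   # user B's viewpoint
# forward_trace ran once per grid (4 times), not once per user request
```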
In some possible designs, the first rendering request is issued by a first terminal according to an operation of a first user, and the first rendering request carries the first user's viewing angle in the virtual scene.
In some possible designs, before the remote rendering platform performs forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene, the method further includes:

the remote rendering platform obtaining forward ray tracing parameters set by the provider of the virtual scene or by the user who issued the first rendering request, the forward ray tracing parameters including at least one of the following: the number of samples per unit area and the number of light bounces.

In this case, the remote rendering platform performing forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene includes: the remote rendering platform performing forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene and the forward ray tracing parameters.

In the above solution, the user can set the forward ray tracing parameters according to actual needs: when higher rendered-image quality is required, more demanding forward ray tracing parameters can be set; otherwise, less demanding parameters can be set.
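As a rough illustration of how these two parameters might shape the cost of the forward pass, the sketch below assumes a simple linear ray-budget model (one primary ray per sample, each allowed up to `max_bounces` further bounces). The default values and the cost model are assumptions for the example, not values from the application.

```python
from dataclasses import dataclass


@dataclass
class ForwardTracingParams:
    # The two fields correspond to the parameters named in the application;
    # the default values are illustrative only.
    samples_per_unit_area: int = 64
    max_bounces: int = 3


def ray_budget(params, light_emitting_area):
    # Higher-quality settings cast more rays: one primary ray per sample,
    # and each primary ray may bounce up to max_bounces additional times.
    primary = params.samples_per_unit_area * light_emitting_area
    return primary * (1 + params.max_bounces)


low = ForwardTracingParams(samples_per_unit_area=16, max_bounces=1)
high = ForwardTracingParams(samples_per_unit_area=256, max_bounces=5)
# a more demanding setting costs proportionally more rays
```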
In some possible designs, before the remote rendering platform stores the pre-ray tracing results of the grids of each three-dimensional model of the virtual scene, the method further includes:

the remote rendering platform performing reverse ray tracing from multiple preset viewing angles on some or all of the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene.

In this case, the remote rendering platform generating the pre-ray tracing results of the grids of each three-dimensional model of the virtual scene according to the forward ray tracing results of those grids includes: the remote rendering platform generating the pre-ray tracing results according to both the forward ray tracing results of the grids of each three-dimensional model of the virtual scene and the reverse ray tracing results, from the multiple preset viewing angles, of some or all of those grids.

It can be understood that some of the first observable grids may have no reverse ray tracing results. If a first observable grid has reverse ray tracing results, its rendering result includes both its reverse ray tracing result and its forward ray tracing result; if a first observable grid has no reverse ray tracing result, its rendering result includes the forward ray tracing result but no reverse ray tracing result.

In the above solution, adding reverse ray tracing results makes the rendered image more realistic and improves the rendering effect.
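A minimal sketch of the fallback logic described above, assuming the stored results are keyed by grid identifier (the data shapes and result values are illustrative):

```python
def grid_render_result(grid_id, forward_results, reverse_results):
    # Every grid has a forward result; only some grids (e.g. those traced
    # from a preset viewing angle) also have reverse results.
    result = {"forward": forward_results[grid_id]}
    if grid_id in reverse_results:
        result["reverse"] = reverse_results[grid_id]
    return result


forward = {"g1": "diffuse", "g2": "diffuse"}
reverse = {"g1": {"angle_0": "specular"}}  # g2 has no reverse result

r1 = grid_render_result("g1", forward, reverse)  # forward + reverse
r2 = grid_render_result("g2", forward, reverse)  # forward only
```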
In some possible designs, the remote rendering platform obtains preset viewing angle parameters set by the provider of the virtual scene or by the user. The preset viewing angle parameter may be the number of preset viewing angles, or may be the multiple preset viewing angles themselves. In addition, the parameters set by the provider of the virtual scene or by the user may also include the number of light bounces, and so on.

In the above solution, the user can set the preset viewing angle parameters according to actual needs; the larger the preset viewing angle parameter, the better the rendered image.
In some possible designs, the remote rendering platform performing reverse ray tracing from multiple preset viewing angles on some or all of the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene includes: the remote rendering platform performing reverse ray tracing from multiple preset viewing angles on those grids of each three-dimensional model of the virtual scene whose material is smooth.

In the above solution, performing reverse ray tracing only on grids whose material is smooth ensures the quality of the rendered image while keeping the amount of computation under control.
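A sketch of the material filter, assuming each grid record carries a material tag (the "smooth"/"rough" labels and the data layout are illustrative assumptions):

```python
def grids_for_reverse_tracing(models):
    # Reverse ray tracing is restricted to grids whose material is smooth
    # (glossy or mirror-like), where view-dependent reflections matter most.
    return [grid["id"]
            for model in models
            for grid in model["grids"]
            if grid["material"] == "smooth"]


models = [
    {"grids": [{"id": "g1", "material": "smooth"},
               {"id": "g2", "material": "rough"}]},
    {"grids": [{"id": "g3", "material": "smooth"}]},
]
selected = grids_for_reverse_tracing(models)  # only the smooth grids
```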
In a second aspect, a rendering node is provided, including a rendering application server and a rendering engine, where:

the rendering application server is configured to obtain a virtual scene, the virtual scene including a light source and at least one three-dimensional model;

the rendering engine is configured to perform forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene, where the grids of the three-dimensional models of the virtual scene are obtained by segmenting the surfaces of the three-dimensional models; to generate pre-ray tracing results for the grids of each three-dimensional model of the virtual scene according to the forward ray tracing results of those grids; and to store the pre-ray tracing results of the grids of each three-dimensional model of the virtual scene;

the rendering application server is configured to receive a first rendering request and determine, for the first rendering request, the first observable grids on each three-dimensional model of the virtual scene;

the rendering engine is configured to determine the rendering results of the first observable grids from the stored pre-ray tracing results of the grids of each three-dimensional model of the virtual scene.
In some possible designs, the remote rendering platform generates a first rendered image according to the rendering results of the first observable grids; alternatively, the remote rendering platform sends the rendering results of the first observable grids to a first terminal device, so that the first terminal device generates the first rendered image according to the rendering results of the first observable grids.

In some possible designs, the remote rendering platform receives a second rendering request and determines the second observable grids, on each three-dimensional model of the virtual scene, for the second rendering request; the remote rendering platform determines the rendering results of the second observable grids from the stored pre-ray tracing results of the grids of each three-dimensional model of the virtual scene.

In some possible designs, the first rendering request is issued by a first terminal according to an operation of a first user, and the first rendering request carries the first user's viewing angle in the virtual scene.

In some possible designs, the rendering application server is further configured to obtain forward ray tracing parameters set by the provider of the virtual scene or by the user who issued the first rendering request, the forward ray tracing parameters including at least one of the following: the number of samples per unit area and the number of light bounces; the rendering engine is further configured to perform forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene and the forward ray tracing parameters.

In some possible designs, the rendering engine is further configured to perform reverse ray tracing from multiple preset viewing angles on some or all of the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene, and to generate the pre-ray tracing results of the grids of each three-dimensional model of the virtual scene according to both the forward ray tracing results of those grids and the reverse ray tracing results, from the multiple preset viewing angles, of some or all of those grids.

In some possible designs, the remote rendering platform obtains preset viewing angle parameters set by the provider of the virtual scene or by the user. The preset viewing angle parameter may be the number of preset viewing angles, or may be the multiple preset viewing angles themselves. In addition, the parameters set by the provider of the virtual scene or by the user may also include the number of light bounces, and so on.

In some possible designs, the rendering engine is further configured to perform reverse ray tracing from multiple preset viewing angles on those grids of each three-dimensional model of the virtual scene whose material is smooth.
In a third aspect, a rendering node is provided, including a memory and a processor, where the processor executes a program in the memory to perform the method provided in the first aspect and its possible designs. Specifically, the rendering node may include one or more computers, and each computer performs some or all of the steps of the method provided in the first aspect and its possible designs.

In a fourth aspect, a computer-readable storage medium is provided, including instructions that, when run on a computing node, cause the computing node to perform the method provided in the first aspect and its possible designs.

In a fifth aspect, a rendering system is provided, including a terminal device, a network device, and a remote rendering platform, where the terminal device communicates with the remote rendering platform through the network device, and the remote rendering platform is configured to perform the method provided in the first aspect and its possible designs.

In a sixth aspect, a computer program product is provided, including instructions that, when run on a rendering node, cause the rendering node to perform the method provided in the first aspect and its possible designs.
Description of the drawings

To describe the technical solutions in the embodiments of this application or the background art more clearly, the drawings used in the embodiments of this application or the background art are described below.

Figure 1A and Figure 1B are schematic structural diagrams of the rendering system provided by this application;
Figure 2 is a schematic diagram of rendered images obtained by observing a virtual scene from different angles, provided by this application;
Figure 3A to Figure 3C are schematic diagrams of ray tracing rendering provided by this application;
Figure 4 is another schematic diagram of ray tracing rendering provided by this application;
Figure 5 is a schematic comparison, provided by this application, of the amount of computation when each user computes separately versus when the common part is extracted and computed once;
Figure 6A and Figure 6B are schematic diagrams of the grids of various three-dimensional models provided by this application;
Figure 7A to Figure 7D are schematic diagrams of forward ray tracing rendering provided by this application;
Figure 8 is another schematic diagram of forward ray tracing rendering provided by this application;
Figure 9 is a schematic diagram of the number of rays passing through each unit area of the light source, provided by this application;
Figure 10 is a schematic diagram of the preset viewing angles of reverse ray tracing rendering provided by this application;
Figure 11A and Figure 11B are schematic diagrams of reverse ray tracing rendering provided by this application;
Figure 12 is a schematic diagram of obtaining the set of object surfaces observable from the user's viewing angle, provided by this application;
Figure 13A and Figure 13B are comparison diagrams in which the number of ray samples per grid of the three-dimensional model is 1 and n, respectively, provided by this application;
Figure 14 is a schematic flowchart of a method for generating pre-ray tracing results proposed by this application;
Figure 15 is a schematic flowchart of a rendering method proposed by this application;
Figure 16 is a schematic structural diagram of a rendering node proposed by this application;
Figure 17 is a schematic structural diagram of another rendering node proposed by this application.
Detailed description of the embodiments

Referring to Figure 1A, Figure 1A is a schematic structural diagram of a rendering system related to this application. The rendering system of this application is used to produce, by a rendering method, the 2D image obtained by rendering the 3D models of a virtual scene, that is, the rendered image. The rendering system of this application may include one or more terminal devices 10, a network device 20, and a remote rendering platform 30. The remote rendering platform 30 may specifically be deployed on a public cloud. The remote rendering platform 30 and the terminal devices 10 are generally deployed in different data centers.
The terminal device 10 may be a device that needs to display rendered images in real time; for example, it may be a virtual reality (VR) device used for flight training, a computer used for virtual games, or a smartphone used for a virtual mall, which is not specifically limited here. The terminal device may be a high-configuration, high-performance device (for example, multi-core, high clock speed, large memory), or a low-configuration, low-performance device (for example, single-core, low clock speed, small memory). In a specific embodiment, the terminal device 10 may include hardware, an operating system, and a rendering application client.
The network device 20 is used to transmit data between the terminal device 10 and the remote rendering platform 30 through a communication network of any communication mechanism or communication standard. The communication network may be a wide area network, a local area network, a point-to-point connection, or any combination thereof.
远程渲染平台30包括一个或多个远程渲染节点,每个远程渲染节点自下而上包括渲染硬件、虚拟化服务、渲染引擎以及渲染应用服务端。其中,渲染硬件包括计算资源、存储资源以及网络资源。计算资源可以采用异构计算架构,例如,可以采用中央处理器(central processing unit,CPU)+图形处理器(graphics processing unit,GPU)架构,CPU+AI芯片,CPU+GPU+AI芯片架构等等,此处不作具体限定。存储资源可以包括内存、显存等存储设备。网络资源可以包括网卡、端口资源、地址资源等。虚拟化服务是通过虚拟化技术将渲染节点的资源虚拟化为vCPU等自已,并按照用户的需要灵活地隔离出相互独立的资源以运行用户的应用程序的服务。常见地,虚拟化服务可以包括虚拟机(virtual machine,VM)服务以及容器(container)服务,VM和容器可以运行渲染引擎和渲染应用服务端。渲染引擎用于实现渲染算法。渲染应用服务端用于调用渲染引擎以完成渲染图像的渲染。The remote rendering platform 30 includes one or more remote rendering nodes, and each remote rendering node includes rendering hardware, a virtualization service, a rendering engine, and a rendering application server from bottom to top. Among them, the rendering hardware includes computing resources, storage resources, and network resources. Computing resources can use heterogeneous computing architecture, for example, central processing unit (CPU) + graphics processing unit (GPU) architecture, CPU + AI chip, CPU + GPU + AI chip architecture, etc. , There is no specific limitation here. Storage resources can include storage devices such as memory and video memory. Network resources can include network cards, port resources, address resources, and so on. The virtualization service is a service that virtualizes the resources of the rendering node into vCPUs and other self through virtualization technology, and flexibly isolates independent resources to run the user's application according to the user's needs. Commonly, virtualization services may include virtual machine (VM) services and container (container) services. VMs and containers can run rendering engines and rendering application servers. The rendering engine is used to implement the rendering algorithm. The rendering application server is used to call the rendering engine to complete the rendering of the rendered image.
终端设备10上的渲染应用客户端和远程渲染平台30的渲染应用服务端合称渲染应用。常见的渲染应用可以包括:游戏应用、VR应用、电影特效以及动画等等。用户通过渲染应用客户端输入操作指令,渲染应用客户端将操作指令发送给渲染应用服务端,渲染应用服务端调用渲染引擎生成渲染结果,将渲染结果发送至渲染应用客户端。然后再由渲染应用客户端将渲染结果转换成图像呈现给用户。可在一具体的实施方式中,渲染应用服务端和渲染应用客户端可以是渲染应用提供商提供的,渲染引擎可以是云服务提供商提供的。举个例子说明,渲染应用可以是游戏应用,游戏应用的游戏开发商将游戏应用服务端安装在云服务提供商提供的远程渲染平台上,游戏应用的游戏开发商将游戏应用客户端通过互联网提供给用户下载,并安装在用户的终端设备上。此外,云服务提供商还提供了渲染引擎,渲染引擎可以为游戏应用提供计算能力。在另一种具体的实施方式中,渲染应用客户端、渲染应用服务端和渲染引擎可以均是云服务提供商提供的。The rendering application client on the terminal device 10 and the rendering application server of the remote rendering platform 30 are collectively referred to as a rendering application. Common rendering applications can include: game applications, VR applications, movie special effects, animations, and so on. The user inputs operation instructions through the rendering application client, the rendering application client sends the operation instructions to the rendering application server, the rendering application server calls the rendering engine to generate rendering results, and sends the rendering results to the rendering application client. Then, the rendering application client converts the rendering result into an image and presents it to the user. In a specific implementation manner, the rendering application server and the rendering application client may be provided by a rendering application provider, and the rendering engine may be provided by a cloud service provider. For example, the rendering application can be a game application. The game developer of the game application installs the game application server on the remote rendering platform provided by the cloud service provider, and the game developer of the game application provides the game application client through the Internet Download it to the user and install it on the user's terminal device. In addition, the cloud service provider also provides a rendering engine, which can provide computing power for game applications. 
In another specific implementation, the rendering application client, the rendering application server, and the rendering engine may all be provided by the cloud service provider.
The rendering system shown in FIG. 1B further includes a management device 40. The management device 40 may be a device provided by a third party other than the user's terminal device and the cloud service provider's remote rendering platform 30. For example, the management device 40 may be a device provided by a game developer, through which the game developer manages the rendering application. It can be understood that the management device 40 may be deployed on the remote rendering platform or outside it; no specific limitation is imposed here.
Take the rendering system shown in FIG. 1A or FIG. 1B as an example. In a virtual scene in which multiple users participate, in order for each user to experience a sense of immersion, user A joins the virtual scene through terminal device 1 and user B joins the same virtual scene through terminal device 2. Therefore, as shown in FIG. 2, assuming the virtual scene is as shown in FIG. 2(a), terminal device 1 needs to display a rendered image of the virtual scene generated from user A's viewpoint, and terminal device 2 needs to display a rendered image of the virtual scene generated from user B's viewpoint. Terminal device 1 and terminal device 2 may each independently use the resources of the remote rendering platform 30 to perform ray tracing rendering of the virtual scene, thereby obtaining rendered images from different angles. Specifically:
Terminal device 1 sends a first rendering request to the remote rendering platform 30 through the network device 20. According to the first rendering request, the remote rendering platform 30 calls the rendering engine to separately perform ray tracing rendering of the virtual scene from user A's viewpoint, thereby obtaining a rendered image of the virtual scene generated from user A's viewpoint.
Terminal device 2 sends a second rendering request to the remote rendering platform 30 through the network device 20. According to the second rendering request, the remote rendering platform 30 calls the rendering engine to separately perform ray tracing rendering of the virtual scene from user B's viewpoint, thereby obtaining a rendered image of the virtual scene generated from user B's viewpoint.
The ray tracing rendering method adopted by terminal device 1 and terminal device 2 is described in detail below. Ray tracing rendering generates a rendered image by tracing, for each pixel of the rendered image, the path of a ray cast from the viewpoint of an observer (for example, a camera or the human eye) into the virtual scene. The virtual scene includes light sources and three-dimensional models. In ray tracing rendering, starting from the observer's viewpoint (once the viewpoint is determined, the viewing angle is determined as well), rays that can reach a light source are traced in reverse. Since only rays that finally enter the observer's viewpoint are useful, tracing rays in reverse effectively reduces the amount of data to be processed. Ray tracing rendering mainly involves three cases, reflection, refraction, and transmission, which are described below with specific embodiments.
As shown in FIG. 3A, in the reflection case, assume the virtual scene contains only one light source 111 and one opaque sphere 112. A ray is cast from the viewpoint E of the camera 113 (taking the camera 113 as the observer) through the pixel O1 of the rendered image 114, travels on to a point P1 on the opaque sphere 112, and is then reflected toward the light source L; in this case, the light intensity and color at point P1 determine the light intensity and color of pixel O1. Another ray is cast from the viewpoint E of the camera 113 through another pixel O2 of the rendered image 114, travels on to a point P2 on the opaque sphere 112, and is then reflected toward the light source L; however, the opaque sphere 112 itself lies as an obstacle between point P2 and the light source L, so point P2 is in the shadow of the opaque sphere 112, the light intensity of pixel O2 is zero, and its color is black.
As shown in FIG. 3B, in the refraction case, assume the virtual scene contains only one light source 121 and one transparent sphere 122. A ray is cast from the viewpoint E of the camera 123 through the pixel O3 of the rendered image 124, travels on to a point P3 on the transparent sphere 122, and is then refracted toward the light source L; in this case, the light intensity and color at point P3 determine the light intensity and color of pixel O3.
As shown in FIG. 3C, in the transmission case, assume the virtual scene contains only one light source 131 and one transparent thin body 132. A ray is cast from the viewpoint E of the camera 133 through a pixel O4 of the rendered image, travels on to a point P4 on the transparent thin body 132, and is then transmitted toward the light source L; in this case, the light intensity and color at point P4 determine the light intensity and color of pixel O4.
However, the reflection case in FIG. 3A, the refraction case in FIG. 3B, and the transmission case in FIG. 3C are the simplest possible scenes: FIG. 3A assumes only one opaque sphere in the virtual scene, FIG. 3B only one transparent sphere, and FIG. 3C only one transparent thin body. In practical applications, virtual scenes are far more complicated than FIGS. 3A to 3C. For example, a virtual scene may contain multiple opaque objects and multiple transparent objects at the same time, so a ray may be reflected, refracted, and transmitted many times, making ray tracing very complicated and extremely demanding on computing resources.
In the complex virtual scene shown in FIG. 4, assume the virtual scene includes one light source 140, two transparent spheres 141 and 142, and one opaque object 143. A ray is cast from the viewpoint E of the camera 144 through a pixel O4 of the rendered image 145 and travels on to a point P1 on the transparent sphere 141. A shadow test line S1 is drawn from P1 to the light source L; since no object blocks it, a local illumination model can be used to compute the intensity that the light source contributes at P1 in the direction of the line of sight E, which serves as the local light intensity at that point. At the same time, the reflected ray R1 and the refracted ray T1 at that point are traced, since they also contribute to the intensity at P1. In the direction of the reflected ray R1, no other object is intersected, so the intensity in that direction is set to zero and tracing along that direction ends. Tracing then continues along the refracted ray T1 to compute its intensity contribution. The refracted ray T1 propagates inside the transparent sphere 141, exits, and intersects the transparent sphere 142 at point P2. Since this point is inside the transparent sphere 142, its local light intensity can be assumed to be zero; at the same time, a reflected ray R2 and a refracted ray T2 are produced. In the direction of the reflected ray R2, tracing could continue recursively to compute its intensity, but it is not pursued further here. Tracing continues along the refracted ray T2, which intersects the opaque object 143 at point P3. A shadow test line S3 is drawn between P3 and the light source L; since no object blocks it, the local light intensity there is computed. Because the opaque object 143 is not transparent, tracing can continue along the reflected ray R3, and the resulting intensity is combined with the local light intensity to obtain the intensity at P3. The tracing of the reflected ray R3 is similar to the preceding process, and the algorithm can proceed recursively. The above process repeats until a ray satisfies the tracing termination condition. In this way, the light intensity of pixel O4, that is, its corresponding color value, is obtained.
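The recursive tracing walked through above can be sketched in a few lines of code. The following is a minimal Whitted-style skeleton, not the implementation of this application: the `Hit` record, the coefficients `kr`/`kt`, the depth limit, and the toy one-surface scene are all illustrative assumptions, and secondary rays in the toy scene simply miss so that the recursion terminates quickly.

```python
from dataclasses import dataclass

MAX_DEPTH = 5        # bounce budget: tracing stops after 5 recursions
MIN_CONTRIB = 1e-3   # rays attenuated below this contribute nothing

@dataclass
class Hit:
    point: tuple
    local_intensity: float   # from the local illumination model
    kr: float                # reflection coefficient (assumed)
    kt: float                # refraction/transmission coefficient (assumed)
    def reflect(self, ray):  # child rays are just tagged here
        return ("R", ray)
    def refract(self, ray):
        return ("T", ray)

class OneSphereScene:
    """Toy scene: the primary ray hits one surface; child rays miss."""
    def intersect(self, ray):
        if isinstance(ray, tuple):   # reflected/refracted rays leave the scene
            return None
        return Hit(point=(0.0, 0.0, 0.0), local_intensity=0.8, kr=0.5, kt=0.0)
    def occluded(self, point):
        return False                 # shadow test line is unobstructed

def trace(ray, scene, depth=0, weight=1.0):
    """Return the light intensity contributed along `ray`."""
    if depth >= MAX_DEPTH or weight < MIN_CONTRIB:
        return 0.0                   # termination conditions
    hit = scene.intersect(ray)
    if hit is None:
        return 0.0                   # like R1 in FIG. 4: no intersection
    # Local illumination via shadow test toward the light source.
    local = 0.0 if scene.occluded(hit.point) else hit.local_intensity
    # Recursively follow the reflected and refracted rays (like R2/T2).
    reflected = trace(hit.reflect(ray), scene, depth + 1, weight * hit.kr)
    refracted = trace(hit.refract(ray), scene, depth + 1, weight * hit.kt)
    return local + hit.kr * reflected + hit.kt * refracted
```

In the toy scene, `trace("primary", OneSphereScene())` yields only the local intensity 0.8, because both child rays miss; a real scene would supply genuine intersection, reflection, and refraction geometry.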
The above embodiment is described taking a rendering system that includes only terminal device 1 and terminal device 2 as an example. In practical applications, there may be far more than two terminal devices, and the viewpoints of their users are usually all different. Therefore, as the number of users grows, the number of rendered images of the same virtual scene that must be generated from different viewpoints grows accordingly, and the amount of computation becomes enormous. Moreover, because all of these terminal devices obtain their rendered images from different angles by ray tracing the same virtual scene, many of the computations may be repeated, leading to an unnecessary waste of computing resources.
The rendering system proposed in this application extracts the common computation part from the ray tracing rendering of the same virtual scene from different angles and computes it once; each user then only needs to separately compute the viewpoint-dependent private part. This effectively saves the computing resources required for rendering and improves rendering efficiency. Refer to FIG. 5, which compares the amount of computation when each user computes independently with the amount of computation when the common part is extracted and computed once: the left side of FIG. 5 shows the former, and the right side shows the latter.
As shown on the left side of FIG. 5, when each user computes independently, the total amount of computation equals the sum of the individual computations, that is: total computation = user 1's individual computation + user 2's individual computation + ... + user n's individual computation.
As shown on the right side of FIG. 5, when the common part is computed once, the total amount of computation equals the computation of the common part plus the sum of each user's private, viewpoint-dependent computation, that is: total computation = common-part computation + user 1's viewpoint computation + user 2's viewpoint computation + ... + user n's viewpoint computation.
The comparison in the figure clearly shows that extracting the common part and computing it once saves computation compared with each user computing independently, and the more users there are, the more computation is saved.
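The two totals compared in FIG. 5 can be illustrated with simple arithmetic. The per-user cost and the fraction of it that is shared are made-up numbers for illustration, not values from this application.

```python
# Left side of FIG. 5: every user recomputes everything independently.
def total_standalone(n_users, per_user_cost):
    return n_users * per_user_cost

# Right side of FIG. 5: the common part is computed once; each user only
# recomputes the viewpoint-dependent private remainder.
def total_shared(n_users, per_user_cost, shared_fraction):
    common = per_user_cost * shared_fraction           # computed once
    private = per_user_cost * (1.0 - shared_fraction)  # per viewpoint
    return common + n_users * private

# With 10 users, a per-user cost of 100 units, and 60% of the work shared:
# standalone total = 1000 units, shared total = 60 + 10 * 40 = 460 units.
```

The saving grows with the number of users, since the common part is paid only once regardless of n.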
The rendering engine of this rendering system can perform image rendering through the following rendering algorithm.
Assume that the virtual scene contains one or more light sources and one or more three-dimensional models, and that the light produced by the light sources shines on the three-dimensional models. A light source may be a point light source, a line light source, an area light source, and so on. The three-dimensional models may take various shapes, for example, spheres, cones, curved-surface objects, planar objects, objects with irregular surfaces, and so on.
Computation of the common part:
The remote rendering platform divides the surface of each three-dimensional model in the virtual scene into multiple grids. Grids of differently shaped three-dimensional models may have different shapes; for example, the grids of a sphere and the grids of a curved-surface object may be completely different. Grids are described below with specific embodiments.
As shown in FIG. 6A, taking a spherical three-dimensional model as an example, a grid can be expressed as an approximately square, slightly bulged patch on the sphere's surface formed by a center point P(r, θ, φ) and the points in its neighborhood. A three-dimensional orthogonal coordinate system with x, y, and z axes is constructed with the center of the sphere as the origin. In the coordinates of the center point P, r is the length of the line segment OP from the sphere center O to the center point P, θ is the angle between the line segment OP and the positive z-axis, and φ is the angle between the projection of the line segment OP on the xoy plane and the x-axis. In a specific embodiment, n center points P1, P2, ..., Pn may be placed uniformly on the sphere; if a non-center point Qj is closer to the center point Pi than to any other center point, then Qj belongs to the same grid as Pi.
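The nearest-center assignment described above can be sketched directly. This is an illustrative sketch, not the application's implementation: points are given in the spherical coordinates (r, θ, φ) just defined, and grid membership is decided by the smallest chordal (straight-line) distance to a center point.

```python
import math

def to_cartesian(r, theta, phi):
    # theta: angle from the +z axis; phi: angle from the +x axis in the
    # xoy plane, matching the coordinates of the center point P above.
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def nearest_grid(q, centers):
    """Return the index i of the center point Pi closest to point Q."""
    qx, qy, qz = to_cartesian(*q)
    def dist2(c):
        cx, cy, cz = to_cartesian(*c)
        return (qx - cx) ** 2 + (qy - cy) ** 2 + (qz - cz) ** 2
    return min(range(len(centers)), key=lambda i: dist2(centers[i]))

# Three illustrative center points on a unit sphere.
centers = [(1, 0.0, 0.0),           # north pole
           (1, math.pi / 2, 0.0),   # on the equator, +x direction
           (1, math.pi, 0.0)]       # south pole
```

For example, a point near the north pole, such as (1, 0.1, 0.3), is assigned to the grid of the first center point.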
As shown in FIG. 6B, taking a curved-surface object as an example, a grid can be expressed as a quadrilateral patch P(u, t) on the curved surface. A two-dimensional orthogonal coordinate system with u and t axes is constructed from a chosen origin on the surface, where u denotes the offset from the origin in one direction along the surface and t denotes the offset in the orthogonal direction; P(u, t) denotes the patch formed by the four vertices in the (u, t) coordinate system shown in FIG. 6B.
It can be understood that the grid shapes above are merely specific examples; in practical applications, grids may take other shapes, which are not specifically limited here. In addition, the grid size can be set as needed: the higher the required precision of the rendered image, the smaller the grid size should be set.
The material of a grid may be smooth or rough. A smooth material is one that exhibits specular reflection or transmission, for example, a mirror, a metal surface, or a water droplet. A rough material is one that exhibits diffuse reflection, for example, natural wood or cloth. When all grids of the three-dimensional models in the virtual scene are rough materials, the remote rendering platform may perform forward ray tracing only, or perform forward ray tracing together with reverse ray tracing on all grids. When the grids include both smooth and rough materials, the remote rendering platform may perform forward ray tracing together with reverse ray tracing on the smooth grids only, or perform forward ray tracing together with reverse ray tracing on all grids. When all grids are smooth materials, the remote rendering platform may perform forward ray tracing only, or perform forward ray tracing together with reverse ray tracing on all grids. The concepts of forward ray tracing and reverse ray tracing are introduced in detail below.
Forward ray tracing starts from a light source and traces, in the forward direction, how its light propagates through the virtual scene. The remote rendering platform performs forward ray tracing on the light produced by the light sources in the virtual scene to obtain the light intensity of every grid of every three-dimensional model in the scene. Forward ray tracing mainly involves four cases, reflection, refraction, transmission, and direct illumination, which are described below with reference to FIGS. 7A to 7D and specific embodiments.
As shown in FIG. 7A, in the reflection case, assume the virtual scene contains only one light source 211, an opaque sphere 212, and an opaque sphere 213. A ray emitted from the light source 211 strikes a point P1 on the opaque sphere 212 and is then reflected onto the grid of the opaque sphere 213 whose center point is Q1. Therefore, a local illumination model can be used to compute the light intensity that the ray produces at point P1 of the opaque sphere 212, and tracing then continues to obtain the light intensity that the ray, after being reflected by the opaque sphere 212, produces on the grid of the opaque sphere 213 centered at Q1.
As shown in FIG. 7B, in the refraction case, assume the virtual scene contains only one light source 221, a transparent sphere 222, and an opaque sphere 223. A ray emitted from the light source 221 strikes a point P2 on the transparent sphere 222 and is then refracted onto the grid of the opaque sphere 223 whose center point is Q2. Therefore, a local illumination model can be used to compute the light intensity that the ray produces at point P2 of the transparent sphere 222, and tracing then continues to obtain the light intensity that the ray, after being refracted by the transparent sphere 222, produces on the grid of the opaque sphere 223 centered at Q2.
As shown in FIG. 7C, in the transmission case, assume the virtual scene contains only one light source 231, a transparent thin body 232, and an opaque sphere 233. A ray emitted from the light source 231 strikes a point P3 on the transparent thin body 232 and is then transmitted onto the grid of the opaque sphere 233 whose center point is Q3. Therefore, a local illumination model can be used to compute the light intensity that the ray produces at point P3 of the transparent thin body 232, and tracing then continues to obtain the light intensity that the ray, after being transmitted through the transparent thin body 232, produces on the grid of the opaque sphere 233 centered at Q3.
As shown in FIG. 7D, in the direct-illumination case, assume the virtual scene contains only one light source 241 and an opaque sphere 243. A ray emitted from the light source 241 shines directly onto the grid of the opaque sphere 243 whose center point is Q4. Therefore, a local illumination model can be used to compute the light intensity that the ray produces on the grid of the opaque sphere 243 centered at Q4.
However, the reflection case in FIG. 7A, the refraction case in FIG. 7B, the transmission case in FIG. 7C, and the direct-illumination case in FIG. 7D are the simplest possible scenes. In practical applications, virtual scenes are far more complicated than FIGS. 7A to 7D. For example, a virtual scene may contain multiple opaque objects and multiple transparent objects at the same time, so light is reflected, refracted, and transmitted many times; in addition, there may be not just one light source but two or more.
Therefore, in forward ray tracing, the light intensity of each grid of a three-dimensional model in the virtual scene is computed from the intensities produced by all rays reflected onto the grid, all rays refracted onto the grid, all rays transmitted onto the grid, and all rays shining directly onto the grid; for example, it may be the sum of these intensities.
Take FIG. 8 as an example. The virtual scene contains a first light source 251, a second light source 252, a transparent sphere 253, a transparent thin body 254, a first opaque sphere 255, and a second opaque sphere 256. A first ray produced by the first light source 251 strikes point P1 on the transparent sphere 253 and is then refracted onto the grid of the second opaque sphere 256 whose center point is Q. A second ray produced by the first light source 251 strikes point P2 on the transparent thin body 254 and is then transmitted onto the grid of the second opaque sphere 256 centered at Q. A third ray produced by the second light source 252 shines directly onto the grid of the second opaque sphere 256 centered at Q. A fourth ray produced by the second light source 252 strikes point P3 on the first opaque sphere 255 and is then reflected onto the grid of the second opaque sphere 256 centered at Q. Therefore, the light intensity of the grid containing point Q is computed from the intensity produced by the first ray refracted to Q, the intensity produced by the second ray transmitted to Q, the intensity produced by the third ray shining directly on Q, and the intensity produced by the fourth ray reflected to Q; for example, it may be the sum of these intensities.
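The per-grid accumulation just described can be sketched as follows. The intensity values are made-up placeholders; the text only requires that a grid's intensity be computed from all contributions arriving at it, here taken as their sum.

```python
from collections import defaultdict

def accumulate(contributions):
    """contributions: iterable of (grid_id, intensity) pairs, one per
    forward-traced ray arriving at a grid. Returns intensity per grid."""
    intensity = defaultdict(float)
    for grid_id, value in contributions:
        intensity[grid_id] += value
    return dict(intensity)

# Four illustrative contributions to the grid centered at Q, mirroring
# the four rays of FIG. 8 (values are assumptions):
hits = [("Q", 0.20),  # first ray, refracted through sphere 253
        ("Q", 0.15),  # second ray, transmitted through thin body 254
        ("Q", 0.40),  # third ray, direct from light source 252
        ("Q", 0.10)]  # fourth ray, reflected off sphere 255
```

With these placeholder values, `accumulate(hits)["Q"]` is 0.85, the sum of the four contributions.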
It can be understood that the above examples are described assuming a single ray emitted from a light source and at most five bounces per ray; in practice, the number of rays and the number of bounces may be other values, which are not specifically limited here. Generally, because a light source emits an unlimited number of rays while computing resources are limited, it is usually impossible to perform forward ray tracing on all rays, so the rays emitted by a light source need to be sampled.
When sampling rays for forward ray tracing, the parameters involved mainly include the number of samples per unit of space and the number of ray bounces. The number of samples per unit area (sample per unit area, SPUA) and the number of ray bounces are described in detail below as examples.
SPUA defines the number of sampled rays per unit area. Taking FIG. 9 as an example, a spherical surface S is constructed centered on the light source L and divided into multiple unit areas; SPUA then equals the number of rays produced by the light source L that pass through a unit area A. In theory, the number of rays from the light source L passing through a unit area is infinite, but in actual tracing it is impossible to trace all rays, and only a limited number can be traced. The larger the SPUA, the more rays are traced and the better the image quality, but the greater the amount of computation. Conversely, the smaller the SPUA, the fewer rays are traced and the worse the image quality, but the smaller the amount of computation.
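A rough back-of-the-envelope for the SPUA parameter: if the spherical surface S around the light source is divided into unit areas, the total number of forward-traced rays is about the number of unit areas times the SPUA. The radius, unit-area size, and resulting counts below are illustrative assumptions, not values from this application.

```python
import math

def sampled_rays(spua, radius, unit_area=1.0):
    """Approximate count of rays traced from one light source: the
    sphere S of the given radius is divided into patches of `unit_area`,
    and `spua` rays are sampled through each patch."""
    sphere_area = 4.0 * math.pi * radius ** 2
    return int(sphere_area / unit_area) * spua
```

For a sphere of radius 10 with unit areas of size 1, doubling the SPUA doubles the number of traced rays, which is exactly the quality-versus-computation trade-off described above.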
The number of ray bounces is the sum of the maximum number of reflections and the maximum number of refractions traced for a ray before its forward tracing terminates. In a complex scene, a ray is reflected and refracted many times; in theory, this could happen an unlimited number of times, but in actual tracing a ray cannot be traced indefinitely, so some termination conditions must be given. In applications, the following termination conditions may be used: after many reflections and refractions, a ray attenuates until its contribution to the intensity at the viewpoint is negligible; or the number of bounces, that is, the tracing depth, exceeds a certain value. The more bounces allowed, the more effective rays can be traced, the better and more realistic the refraction effects among multiple transparent objects, and the better the image quality, but the greater the amount of computation. Conversely, the fewer bounces allowed, the fewer effective rays can be traced, the worse and more distorted the refraction effects among multiple transparent objects, and the worse the image quality, but the smaller the amount of computation.
It can be understood that the above sampling parameters are merely specific examples; in practical applications, other sampling parameters may also be used, which are not specifically limited here.
To reduce the amount of computation, if a ray produced by a light source in the virtual scene does not travel toward any three-dimensional model, forward ray tracing need not be performed on it.
Reverse ray tracing rendering traces rays that enter a grid of a three-dimensional model from a preset viewing angle and follows their propagation through the virtual scene to the light source. Here, a preset viewing angle is an angle from which a user may observe the virtual scene. For example, when the user views the virtual scene head-on, the preset viewing angle may be (90, 90); when the user views it from 45 degrees on the left, the preset viewing angle may be (45, 0); when the user views it from 45 degrees on the right, the preset viewing angle may be (135, 0); and so on. A ray obtained by reverse tracing from a preset viewing angle can only be observed when the human eye or a camera is at that preset viewing angle; therefore, to support observation of the virtual scene from different angles, reverse ray tracing must be performed from each preset viewing angle. Taking Figure 10 as an example to explain the preset viewing angles: each grid has an open hemispherical space facing its normal direction. A ray entering that hemisphere can be expressed with the grid center P as its end point and any point O (for example, O1, O2, or O3) on the spherical surface of the hemisphere as its starting point, and reverse ray tracing is performed for each grid at each of its preset viewing angles. The preset viewing angle here refers to the direction (θ, φ) of the ray OP in the hemispherical coordinate system, where 0 < θ < 180 and 0 < φ < 360. Space is continuous, but the preset viewing angles can be quantized according to computing power and accuracy requirements; for example, one preset viewing angle may be set every 1 degree, or one every 2 degrees. It can be understood that the greater the number of preset viewing angles, the smaller the quantization error and the higher the accuracy.
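The quantization of the hemisphere described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name is hypothetical, while the step sizes and the (θ, φ) ranges come from the text.

```python
def build_preset_angles(step_deg=1):
    """Quantize the open hemisphere above a grid's normal into preset
    viewing angles (theta, phi), with 0 < theta < 180 and 0 <= phi < 360.
    A smaller step yields more preset angles, hence a smaller quantization
    error but more reverse-ray-tracing work."""
    return [(theta, phi)
            for theta in range(step_deg, 180, step_deg)
            for phi in range(0, 360, step_deg)]

# One preset angle per degree vs. one every 2 degrees:
fine = build_preset_angles(1)     # 179 * 360 angles
coarse = build_preset_angles(2)   # 89 * 180 angles
```

As the text notes, the choice of step trades accuracy against computing power: `fine` has roughly four times as many angles to trace as `coarse`.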
Reverse ray tracing rendering mainly involves two kinds of scenes, reflection and refraction, which are described below in conjunction with specific embodiments.

As shown in Figure 11A, in the reflection scene, assume the virtual scene contains only one light source 311 and one opaque sphere 312. A ray is emitted from a preset viewing angle and projected onto a point P1 of a grid of the opaque sphere 312, and is then reflected toward the light source 311. At this point, the light intensity produced on that grid of the opaque sphere 312 by the light emitted from the light source 311 can be calculated with a local illumination model.

As shown in Figure 11B, in the refraction scene, assume the virtual scene contains only one light source 321 and one transparent sphere 322. A ray is emitted from a preset viewing angle and projected onto a point P2 of a grid of the transparent sphere 322, refracted to another point Q2 of the transparent sphere 322, and then refracted toward the light source 321. At this point, the light intensity produced at point Q2 by the light emitted from the light source 321 can be calculated with a local illumination model, and then the light intensity produced on the grid centered at P2 when the ray is refracted from point Q2 to point P2 can be calculated.

However, the reflection scene in Figure 11A and the refraction scene in Figure 11B are the simplest cases: Figure 11A assumes that the virtual scene contains only one opaque sphere, and Figure 11B assumes that it contains only one transparent sphere. In practical applications the virtual scene is far more complicated than Figures 11A and 11B; for example, it may contain multiple opaque objects and multiple transparent objects at the same time, so rays are reflected, refracted, and transmitted many times, which makes the ray tracing very complex and is not described further here.
In reverse ray tracing, the light intensity of each grid of a three-dimensional model in the virtual scene is calculated from the light intensities produced by all rays reflected onto the grid, all rays refracted onto the grid, all rays transmitted onto the grid, and all rays striking the grid directly; for example, it may be the sum of these light intensities.
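The combination rule just described (per-grid intensity as the sum of reflected, refracted, transmitted, and direct contributions) can be sketched as follows; the function name and the list-based representation of ray contributions are illustrative assumptions, not the patent's data model.

```python
def grid_intensity(reflected, refracted, transmitted, direct):
    """Light intensity of one grid: the sum of the intensities contributed
    by every ray reflected, refracted, transmitted, or shining directly
    onto the grid, as in the example combination given in the text."""
    return sum(reflected) + sum(refracted) + sum(transmitted) + sum(direct)

# e.g. two reflected rays, one refracted ray, no transmitted rays, one direct ray
i = grid_intensity([0.2, 0.1], [0.05], [], [0.5])
```

A weighted combination (mentioned later in the text as an alternative) would simply scale each list before summing.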
It should be understood that the examples above all describe reverse ray tracing starting from a single preset viewing angle. In fact, the light converging on the same grid from different angles also differs, especially for smooth surfaces; therefore, reverse ray tracing needs to be performed starting from each preset viewing angle.
When only forward ray tracing is required, the pre-ray-tracing results of the grids of each three-dimensional model are obtained as soon as the remote rendering platform has performed forward ray tracing. Assume the virtual scene contains n grids T1, T2, …, Tn. After forward ray tracing, the forward ray tracing results F1, F2, …, Fn of the n grids T1, T2, …, Tn are obtained and serve as the pre-ray-tracing results of the n grids. The remote rendering platform stores the n grids T1, T2, …, Tn in association with their respective pre-ray-tracing results F1, F2, …, Fn in light intensity table 1. In a specific embodiment, light intensity table 1 may be as shown in Table 1:

Table 1: Light intensity table 1

[Table 1 was provided as an image in the original (PCTCN2021092699-appb-000004); it associates each grid T1, T2, …, Tn with its pre-ray-tracing result F1, F2, …, Fn.]
When both forward ray tracing and reverse ray tracing are required, after they have been performed, the forward ray tracing results and the reverse ray tracing results can be processed to obtain the pre-ray-tracing results of the grids of each three-dimensional model. Specifically:

Assume the virtual scene contains n grids T1, T2, …, Tn; forward ray tracing is performed, and reverse ray tracing is performed on the n grids T1, T2, …, Tn from each of k angles.

After forward ray tracing of the rays emitted by the light source, the forward ray tracing results F1, F2, …, Fn of the n grids T1, T2, …, Tn are obtained.
After reverse ray tracing of the n grids T1, T2, …, Tn from the first angle, the reverse ray tracing results of the n grids at the first angle are obtained; writing Bi^(j) for the reverse ray tracing result of grid Ti from the j-th angle, these are B1^(1), B2^(1), …, Bn^(1) (given in the original as image PCTCN2021092699-appb-000005).

After reverse ray tracing of the n grids T1, T2, …, Tn from the second angle, the reverse ray tracing results B1^(2), B2^(2), …, Bn^(2) at the second angle are obtained (image PCTCN2021092699-appb-000006).

…;

After reverse ray tracing of the n grids T1, T2, …, Tn from the k-th angle, the reverse ray tracing results B1^(k), B2^(k), …, Bn^(k) at the k-th angle are obtained (image PCTCN2021092699-appb-000007).
The forward ray tracing results F1, F2, …, Fn of the n grids T1, T2, …, Tn are linearly superposed with the reverse ray tracing results obtained from the first angle, giving the pre-ray-tracing results of the n grids at the first angle; writing Bi^(j) for the reverse ray tracing result of grid Ti from the j-th angle and Pi^(j) for the corresponding pre-ray-tracing result, this gives P1^(1), P2^(1), …, Pn^(1), where Pi^(1) = Fi + Bi^(1) (given in the original as images PCTCN2021092699-appb-000008 and -000009).

Likewise, the forward ray tracing results F1, F2, …, Fn are linearly superposed with the reverse ray tracing results B1^(2), B2^(2), …, Bn^(2) obtained from the second angle, giving the pre-ray-tracing results P1^(2), P2^(2), …, Pn^(2) at the second angle (images PCTCN2021092699-appb-000010 and -000011); and they are linearly superposed with the reverse ray tracing results B1^(k), B2^(k), …, Bn^(k) obtained from the k-th angle, giving the pre-ray-tracing results P1^(k), P2^(k), …, Pn^(k) at the k-th angle (images PCTCN2021092699-appb-000012 and -000013).
Here, writing Pi^(j) for the pre-ray-tracing result of grid Ti at the j-th angle, the pre-ray-tracing results of the n grids T1, T2, …, Tn are P1^(1), P2^(1), …, Pn^(1) at the first angle, P1^(2), P2^(2), …, Pn^(2) at the second angle, and P1^(k), P2^(k), …, Pn^(k) at the k-th angle (given in the original as images PCTCN2021092699-appb-000014 to -000016). It should be understood that in some embodiments the results may additionally be normalized. To reduce the space required to store the pre-ray-tracing results of the grids of each three-dimensional model, the pre-ray-tracing results may be stored as a sparse matrix.
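The linear superposition above can be sketched as follows, assuming the simple unweighted sum described in the text. The names `F`, `B`, and `P` mirror the notation used here for forward, reverse, and pre-ray-tracing results; they are not identifiers from the patent.

```python
def precompute_results(F, B):
    """F[i]    : forward ray tracing result of grid T(i+1).
    B[i][j] : reverse ray tracing result of grid T(i+1) from angle j+1.
    Returns P with P[i][j] = F[i] + B[i][j], the pre-ray-tracing result
    of grid T(i+1) at the (j+1)-th preset angle (one row of light
    intensity table 2 per grid)."""
    return [[f + b for b in per_angle] for f, per_angle in zip(F, B)]

# n = 2 grids, k = 3 angles
F = [1.0, 2.0]
B = [[0.1, 0.2, 0.3],
     [0.4, 0.5, 0.6]]
P = precompute_results(F, B)
```

The nested list `P` plays the role of light intensity table 2: row i holds the results of grid T(i+1) at all k preset angles.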
The remote rendering platform stores the n grids T1, T2, …, Tn in association with their respective pre-ray-tracing results in light intensity table 2. In a specific embodiment, light intensity table 2 may be as shown in Table 2:

Table 2: Light intensity table 2

[Table 2 was provided as an image in the original (PCTCN2021092699-appb-000017); it associates each grid T1, T2, …, Tn with its pre-ray-tracing results at each of the k angles.]
For brevity, the description above assumes that reverse ray tracing is performed on all n grids in the virtual scene. In practical applications, reverse ray tracing may be performed on only some of the n grids (t grids). In that case, for each of the t grids, the forward ray tracing result is linearly superposed with the reverse ray tracing results at the k angles to give the pre-ray-tracing result, while for the remaining n−t grids the pre-ray-tracing result is simply recorded as the forward ray tracing result. Here, the grids that require reverse ray tracing may be the surfaces of objects that exhibit reflection and refraction, such as mirrors and transparent objects. The remote rendering platform stores the n grids T1, T2, …, Tn in association with their respective pre-ray-tracing results in light intensity table 3. In a specific embodiment, light intensity table 3 may be as shown in Table 3:

Table 3: Light intensity table 3

[Table 3 was provided as an image in the original (PCTCN2021092699-appb-000018); it is like Table 2, except that grids of rough material carry only a forward ray tracing result.]
As can be seen, Table 3 assumes that grid T1 and grid Tn are grids of rough material; reverse ray tracing is therefore unnecessary for them, and naturally they have only forward ray tracing results and no reverse ray tracing results.

The examples above all use direct addition of the results as the pre-ray-tracing result; in practical applications, the pre-ray-tracing result may also be a weighted sum and so on, which is not specifically limited here.
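The partial case (only t of the n grids traced in reverse) and the weighted variant mentioned above can be sketched together as follows. The dictionary used as sparse storage and the weight parameters are illustrative assumptions, not the patent's data layout.

```python
def precompute_partial(F, B_sparse, k, w_forward=1.0, w_reverse=1.0):
    """F        : forward ray tracing results, one per grid index.
    B_sparse : {grid_index: [k reverse results]} for the t grids that
               need reverse tracing (mirrors, transparent objects, ...).
    Grids absent from B_sparse (rough materials) keep their forward
    result at every angle; the others get a weighted superposition of
    forward and reverse results."""
    table = {}
    for i, f in enumerate(F):
        if i in B_sparse:
            table[i] = [w_forward * f + w_reverse * b for b in B_sparse[i]]
        else:
            table[i] = [f] * k          # forward ray tracing result only
    return table

# grid 1 is transparent; grids 0 and 2 are rough material
table3 = precompute_partial([1.0, 2.0, 3.0], {1: [0.5, 1.5]}, k=2)
```

With unit weights this reduces to the direct addition described in the text; other weights give the weighted sum it mentions as an alternative.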
Computation of the private part:

When different users observe the virtual scene from different preset viewing angles, the remote rendering platform or the terminal device uses a projective intersection method to extract, from the precomputed pre-ray-tracing results of the grids of each three-dimensional model, the rendering results of the corresponding observable grids, and finally generates the rendered image the user needs. How the rendering results of the observable grids are obtained is described in detail below with reference to Figure 12 and related specific embodiments.

As shown in Figure 12, assume an observer 511 observes the virtual scene from viewpoint E, and the rendered image 512 produced by the observation has m pixels.
First, a ray is emitted from viewpoint E and projected onto the first pixel of the rendered image 512. Suppose the ray then continues into the virtual scene and strikes one of the grids of a three-dimensional model, T1, and the first incidence angle at which it enters that grid is the first angle. Then, according to the first incidence angle being the first angle, the pre-ray-tracing result of grid T1 at that same angle can be looked up from the pre-ray-tracing results of the grids of each three-dimensional model shown in Table 2 (the looked-up result was given in the original as images PCTCN2021092699-appb-000019 to -000021), and this pre-ray-tracing result is taken as the rendering result of the first pixel.
Then, a ray is emitted from viewpoint E and projected onto the second pixel of the rendered image 512. Suppose the ray continues and strikes one of the grids, T10, and the second incidence angle at which it enters that grid is the fifth angle. Then, according to that incidence angle, the pre-ray-tracing result of grid T10 at the same angle can be looked up from the pre-ray-tracing results shown in Table 2 (given in the original as image PCTCN2021092699-appb-000022), and this pre-ray-tracing result is taken as the rendering result of the second pixel.
…;
Finally, a ray is emitted from viewpoint E and projected onto the m-th pixel of the rendered image 512. Suppose the ray continues and strikes one of the grids, Tn-9, and the m-th incidence angle at which it enters that grid is the k-th angle. Then, according to the m-th incidence angle being the k-th angle, the pre-ray-tracing result of grid Tn-9 at the same angle can be looked up from the pre-ray-tracing results shown in Table 2 (given in the original as image PCTCN2021092699-appb-000023), and this pre-ray-tracing result is taken as the rendering result of the m-th pixel.

At this point, the rendering results of all m pixels have been determined, and the rendered image 512 can be determined.
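The per-pixel lookup just walked through can be sketched as follows, assuming each viewpoint ray has already been intersected with the scene to yield a (grid index, angle index) pair per pixel; the projective intersection step itself is omitted, and all names are illustrative.

```python
def render_by_lookup(table, hits, background=0.0):
    """table : per-grid list of pre-ray-tracing results, one per preset angle.
    hits  : for each of the m pixels, the (grid, angle) pair struck by the
            ray cast from viewpoint E through that pixel, or None if the
            ray strikes nothing.
    Returns the m rendering results (a background value for misses)."""
    image = []
    for hit in hits:
        if hit is None:
            image.append(background)
        else:
            grid, angle = hit
            image.append(table[grid][angle])   # table lookup, no tracing
    return image

# a table-2-style store with n = 3 grids and k = 2 angles
table2 = [[1.1, 1.2], [2.1, 2.2], [3.1, 3.2]]
# m = 3 pixels: pixel 0 hits grid 0 at angle 0, pixel 1 hits grid 2 at
# angle 1, pixel 2 misses the scene
image = render_by_lookup(table2, [(0, 0), (2, 1), None])
```

This is the point of the precomputation: the private part of the work per pixel is a table lookup rather than a full ray trace.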
The examples above all assume that each incidence angle exactly coincides with one of the preset viewing angles: the first incidence angle is exactly the first angle, and the m-th incidence angle exactly the k-th angle. However, because in practical applications the preset viewing angles are usually quantized, an incidence angle may not exactly equal a preset viewing angle; for example, the first incidence angle may lie between the first angle and the second angle. In that case it can be handled by rounding up, rounding down, and so on, which is not specifically limited here.
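The rounding just mentioned (an incidence angle falling between two preset angles) might look like the following sketch. Rounding to the nearest quantized angle is only one of the options the text leaves open; the function name is hypothetical.

```python
def snap_to_preset(angle_deg, step_deg):
    """Map a continuous incidence angle onto the quantized preset-angle
    grid. The text allows rounding up, rounding down, etc.; this sketch
    rounds to the nearest multiple of the quantization step."""
    return step_deg * round(angle_deg / step_deg)

# with one preset angle every 2 degrees, 44.7 degrees maps to 44
snapped = snap_to_preset(44.7, 2)
```

Rounding up would use `math.ceil` and rounding down `math.floor` in place of `round`.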
For simplicity, the embodiment corresponding to Figure 12 above is described with the number of samples per pixel (sample per pixel, SPP) equal to 1, that is, only one ray passes through each pixel, where SPP can be defined as the number of rays sampled for each pixel. In practical applications, however, to improve the quality of the rendered image, SPP is usually set to a larger value.

Assuming SPP equals 2, the process of obtaining the light intensity for the rendered image is illustrated below, taking one pixel of the rendered image as an example.
A ray is emitted from viewpoint E and projected onto the i-th pixel of the rendered image 512. Suppose the ray continues and strikes one of the grids, T3, and the incidence angle at which it enters that grid is the first angle; then the pre-ray-tracing result of grid T3 at that same angle can be looked up from the pre-ray-tracing results of the grids of each three-dimensional model shown in Table 2 (given in the original as image PCTCN2021092699-appb-000024).

Another ray is emitted from viewpoint E and projected onto the same i-th pixel of the rendered image 512. Suppose this ray continues and strikes another grid, T4, and the incidence angle at which it enters that grid is the third angle; then the pre-ray-tracing result of grid T4 at that same angle can be looked up from Table 2 (image PCTCN2021092699-appb-000025).

The average of these two pre-ray-tracing results (images PCTCN2021092699-appb-000026 and -000027) is then taken as the rendering result of the i-th pixel.
It can be understood that higher values of SPP follow by analogy; for brevity, this is not elaborated further here.
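Generalizing the SPP = 2 example above: with SPP = n, the n looked-up pre-ray-tracing results for a pixel are averaged. A minimal sketch, reusing the table-lookup idea with illustrative names:

```python
def pixel_result(table, hits):
    """hits: the (grid, angle) pair for each of the SPP rays cast through
    one pixel. The pixel's rendering result is the mean of the looked-up
    pre-ray-tracing results, as in the SPP = 2 example in the text."""
    samples = [table[grid][angle] for grid, angle in hits]
    return sum(samples) / len(samples)

# a table-2-style store with 2 grids and 2 angles
table2 = [[1.0, 3.0], [5.0, 7.0]]
# SPP = 2: one ray hits grid 0 at angle 0, the other grid 1 at angle 1
r = pixel_result(table2, [(0, 0), (1, 1)])
```

Raising SPP only lengthens the `hits` list; the lookup-and-average structure is unchanged, which is why higher SPP trades computation for lower sampling noise.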
The reason the SPP count affects the quality of the rendered image is as follows. Taking Figure 13A as an example, if SPP is 1 (that is, only one ray passes through each pixel), then even a tiny offset of the ray can change a pixel's rendering result drastically. If the ray passes through pixel A, it is projected onto opaque object 1, which has low light intensity; the rendering result of pixel A is determined by the grid containing the projection point on opaque object 1, so the light intensity of pixel A is low. If the ray passes through pixel B, it is projected onto opaque object 2, which has high light intensity; the rendering result of pixel B is determined by the grid containing the projection point on opaque object 2, so the light intensity of pixel B is high. Therefore, although pixel A and pixel B are adjacent, their rendering results differ greatly, producing an aliasing (jagged) effect.

To solve this problem, taking Figure 13B as an example, if SPP is n (that is, n rays are emitted from the viewpoint toward the same pixel of the rendered image, n being an integer greater than 1), the n rays passing through the pixel are each projected onto the grids containing the n projection points on opaque object 1 or opaque object 2. The n light intensities of the pixel can then be determined from the light intensities of the grids containing the n projection points, and finally the n light intensities are averaged to obtain the rendering result of the pixel. The closer the pixel's rendering result is to the picture reference frame (the mathematical expectation), the lower the sampling noise. Therefore, the larger the SPP, the better the anti-aliasing effect of the rendered image, the lower the noise, and the better the image quality.
The whole process can be divided into at least two parts. First part: the user uploads the virtual scene to the remote rendering platform in advance, and the remote rendering platform performs the public-part computation described above, obtaining the light intensities of the multiple grids and storing them for later use. Second part: after receiving a rendering request sent by a terminal device, the remote rendering platform performs the private-part computation to obtain the rendered image. The two parts are described in detail below with specific examples; the example shown in Figure 14 mainly illustrates the first part, and the example shown in Figure 15 mainly illustrates the second part.
Referring to Figure 14, Figure 14 is a schematic flowchart of a method for generating pre-ray-tracing results proposed in this application. As shown in Figure 14, the method is implemented on the basis of the rendering system shown in Figure 1A or Figure 1B and includes the following steps:

S101: The remote rendering platform acquires a virtual scene.

In a specific embodiment, the virtual scene may be sent to the remote rendering platform by a terminal device, or by a management device.

In a specific embodiment, a virtual scene may have a unique identifier, that is, the identifier of the virtual scene. Different virtual scenes have different identifiers; for example, the identifier of virtual scene 1 is S1, the identifier of virtual scene 2 is S2, and so on.

In a specific embodiment, for the definition of a virtual scene and the introduction of the light sources in the virtual scene, the three-dimensional models in the virtual scene, the grids of each three-dimensional model, and so on, see above; the details are not repeated here.
S102: The remote rendering platform performs forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene, obtaining the forward ray tracing results of the grids of each three-dimensional model.

In a specific embodiment, before step S102 is performed, the method further includes: the provider of the virtual scene or the user issuing the rendering request sets forward ray tracing parameters. Step S102 may then include: the remote rendering platform obtains the forward ray tracing parameters set by the provider of the virtual scene or by the user issuing the rendering request, and performs forward ray tracing on the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene and the forward ray tracing parameters, obtaining the forward ray tracing results. The forward ray tracing parameters include at least one of the following: the number of samples per unit area, the number of ray bounces, and so on.

In a specific embodiment, for an introduction to forward ray tracing, refer to the related content above; it is not repeated here.
S103: The remote rendering platform performs reverse ray tracing from multiple preset viewing angles on some or all of the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene, obtaining the reverse ray tracing results of the grids of each three-dimensional model.

In a specific embodiment, before step S103 is performed, the method further includes: the provider of the virtual scene or the user issuing the rendering request sets reverse ray tracing parameters. Step S103 may then include: the remote rendering platform obtains the reverse ray tracing parameters set by the provider of the virtual scene or by the user issuing the rendering request, and performs reverse ray tracing on some or all of the grids of each three-dimensional model of the virtual scene according to the light source of the virtual scene and the reverse ray tracing parameters, obtaining the reverse ray tracing results of the grids of each three-dimensional model. The reverse ray tracing parameters include at least one of the following: a preset viewing angle parameter, the number of ray bounces, and so on. The preset viewing angle parameter may be the number of preset viewing angles, or may be the multiple preset viewing angles themselves.
S104: The remote rendering platform determines the pre-ray-tracing results of the grids of each three-dimensional model according to the forward ray tracing results and the reverse ray tracing results.

In a specific embodiment, the process by which the remote rendering platform determines the rendering result of a first observable grid according to the reverse ray tracing result of that grid and the forward ray tracing result of that grid may refer to the process, described above, of determining the pre-ray-tracing results of the grids of each three-dimensional model; it is not repeated here.

It can be understood that the above example assumes that the virtual scene contains grids that require reverse ray tracing. If the virtual scene contains no grid that requires reverse ray tracing, step S103 need not be performed, and in step S104 the pre-ray-tracing results of the grids of each three-dimensional model are determined directly from the forward ray tracing results; this is not described further here.
Refer to FIG. 15, which is a schematic flowchart of a rendering method proposed by this application. As shown in FIG. 15, the rendering method is implemented on the basis of the rendering system shown in FIG. 1A or FIG. 1B, and includes the following steps:
S201: The first terminal device sends a first rendering request to the remote rendering platform through a network device. Correspondingly, the remote rendering platform receives the first rendering request sent by the first terminal device through the network device.
In a specific implementation, the first rendering request includes an identifier of the virtual scene and a viewing angle of the first user, where the identifier of the virtual scene is a unique identifier of the virtual scene, and the viewing angle of the first user is the angle from which the first user observes the virtual scene.
S202: The second terminal device sends a second rendering request to the remote rendering platform through the network device. Correspondingly, the remote rendering platform receives the second rendering request sent by the second terminal device through the network device.
In a specific implementation, the second rendering request includes the identifier of the virtual scene and a viewing angle of the second user, where the viewing angle of the second user is the angle from which the second user observes the virtual scene.
S203: The remote rendering platform receives the first rendering request, determines the first observable meshes of the three-dimensional models of the virtual scene for the first rendering request, and determines, from the stored pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene, the rendering results of the first observable meshes, thereby generating a first rendered image.
In a specific implementation, for the manner in which the remote rendering platform determines the first observable meshes of the three-dimensional models of the virtual scene for the first rendering request and determines their rendering results from the stored pre-ray-tracing results, reference may be made to the calculation of the private part described above; details are not repeated here.
S204: The remote rendering platform receives the second rendering request, determines the second observable meshes of the three-dimensional models of the virtual scene for the second rendering request, and determines, from the stored pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene, the rendering results of the second observable meshes, thereby generating a second rendered image.
In a specific implementation, for the manner in which the remote rendering platform determines the second observable meshes of the three-dimensional models of the virtual scene for the second rendering request and determines their rendering results from the stored pre-ray-tracing results, reference may be made to the calculation of the private part described above; details are not repeated here.
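Steps S203 and S204 are the same lookup driven by different viewing angles, which can be sketched as follows. This is a hypothetical sketch: the request dictionary, the `visible_meshes_fn` visibility callback, and the flat per-mesh shading table stand in for the application's scene identifier, observable-mesh determination, and stored pre-ray-tracing results.

```python
def render_image(request, visible_meshes_fn, pre_results):
    """Serve one rendering request from the stored pre-ray-tracing results.

    `request` carries the scene identifier and the user's viewing angle;
    `visible_meshes_fn(view)` returns the mesh ids observable from that
    angle; `pre_results` maps mesh_id -> precomputed shading value.
    The "image" here is simply the per-mesh shading assembled for the
    observable meshes."""
    observable = visible_meshes_fn(request["view"])
    return {m: pre_results[m] for m in observable if m in pre_results}
```

Because both requests read the same precomputed table, serving a second user costs a visibility query and a lookup rather than a second full ray-tracing pass.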
S205: The remote rendering platform sends the first rendered image to the first terminal device through the network device. Correspondingly, the first terminal device receives the first rendered image sent by the remote rendering platform through the network device.
S206: The remote rendering platform sends the second rendered image to the second terminal device through the network device. Correspondingly, the second terminal device receives the second rendered image sent by the remote rendering platform through the network device.
It can be understood that the above step sequence is merely a specific example. In other examples, the execution sequence may also be step S201 -> step S203 -> step S205 -> step S202 -> step S204 -> step S206, and so on; no specific limitation is imposed here.
It can be understood that the above example is described for the case where the remote rendering platform generates the first rendered image and the second rendered image. In other implementations, the remote rendering platform may instead send the pre-ray-tracing results of the meshes of the three-dimensional models to the first terminal device and the second terminal device respectively, whereupon the first terminal device generates the first rendered image according to the pre-ray-tracing results of the meshes of the three-dimensional models, and the second terminal device generates the second rendered image according to the same pre-ray-tracing results; no specific limitation is imposed here.
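The terminal-side variant just described can be sketched as a split between what the platform ships and what each terminal computes locally. The function names and the flat per-mesh table are hypothetical stand-ins; the application does not prescribe a wire format for the pre-ray-tracing results.

```python
def export_pre_results(pre_results, mesh_ids=None):
    """Platform side: the payload shipped to a terminal device -- either
    the full pre-ray-tracing table, or only the requested meshes."""
    if mesh_ids is None:
        return dict(pre_results)
    return {m: pre_results[m] for m in mesh_ids if m in pre_results}

def terminal_compose(pre_results, view, visible_meshes_fn):
    """Terminal side: pick the meshes observable from the local user's
    viewing angle and assemble the rendered image from the precomputed
    shading values, with no further ray tracing."""
    return {m: pre_results[m] for m in visible_meshes_fn(view) if m in pre_results}
```

The trade-off is the usual one: shipping the table costs more bandwidth up front but lets each terminal re-render new viewing angles without another round trip to the platform.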
Refer to FIG. 16, which is a schematic structural diagram of a rendering node proposed in this application. As shown in FIG. 16, the rendering node includes a rendering application server 610 and a rendering engine 620.
The rendering application server 610 is configured to acquire a virtual scene, the virtual scene including a light source and at least one three-dimensional model.
The rendering engine 620 is configured to: perform forward ray tracing on the meshes of the three-dimensional models of the virtual scene according to the light source of the virtual scene, where the meshes of the three-dimensional models of the virtual scene are obtained by segmenting the surfaces of the three-dimensional models of the virtual scene; generate pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene according to the forward ray tracing results of those meshes; and store the pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene.
The rendering application server 610 is further configured to receive a first rendering request and determine the first observable meshes of the three-dimensional models of the virtual scene for the first rendering request.
The rendering engine 620 is further configured to determine the rendering results of the first observable meshes from the stored pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene.
For brevity, this embodiment does not re-introduce the definition of the virtual scene, the light source in the virtual scene, the three-dimensional models in the virtual scene, the meshes of each three-dimensional model, forward ray tracing, the pre-ray-tracing results of the meshes of each three-dimensional model, the first observable meshes, the rendering results of the first observable meshes, and so on. For details, refer to the relevant content above; no specific limitation is imposed here. In addition, the rendering application server 610 and the rendering engine 620 in this embodiment may be deployed in the rendering nodes of FIG. 1A and FIG. 1B; for details, refer to FIG. 1A and FIG. 1B, which are not repeated here. The rendering node in this implementation may also perform the steps performed by the rendering node in FIG. 14 and the steps performed by the rendering node in FIG. 15.
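The division of labour between the two components of FIG. 16 can be roughly illustrated as follows. This is a hypothetical sketch: the class names, the constant shading value used as a stand-in for actual ray tracing, and the `visible_meshes_fn` callback are illustrative assumptions, not the application's implementation.

```python
class RenderingEngine:
    """Counterpart of rendering engine 620: precomputes and caches
    per-mesh pre-ray-tracing results, then answers lookups."""
    def __init__(self):
        self.pre_results = {}

    def precompute(self, scene):
        # Stand-in for forward (and optional reverse) ray tracing:
        # here every mesh simply receives a constant shading value.
        for mesh_id in scene["meshes"]:
            self.pre_results[mesh_id] = 1.0

    def lookup(self, mesh_ids):
        return {m: self.pre_results[m] for m in mesh_ids if m in self.pre_results}

class RenderingAppServer:
    """Counterpart of rendering application server 610: receives
    rendering requests, decides which meshes are observable, and asks
    the engine for their stored results."""
    def __init__(self, engine, visible_meshes_fn):
        self.engine = engine
        self.visible = visible_meshes_fn

    def handle(self, request):
        return self.engine.lookup(self.visible(request["view"]))
```

Keeping the visibility decision in the application server and the ray-tracing cache in the engine mirrors the split of FIG. 16: the engine can be swapped for a GPU-backed implementation without touching the request path.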
FIG. 17 is a schematic structural diagram of a rendering node. As shown in FIG. 17, the rendering node includes a processing system 910, a first memory 920, a smart network interface card (NIC) 930, and a bus 940.
The processing system 910 may adopt a heterogeneous structure, that is, it includes one or more general-purpose processors and one or more special-purpose processors, for example, GPUs or AI chips. A general-purpose processor may be any type of device capable of processing electronic instructions, including a central processing unit (CPU), a microprocessor, a microcontroller, a main processor, a controller, an application-specific integrated circuit (ASIC), and so on. The general-purpose processor executes various types of digital storage instructions, for example, software or firmware programs stored in the first memory 920. In a specific embodiment, the general-purpose processor may be an x86 processor or the like. The general-purpose processor sends commands to the first memory 920 through a physical interface to complete storage-related tasks; for example, the commands that the general-purpose processor can provide include read commands, write commands, copy commands, erase commands, and so on. The commands may specify operations related to specific pages and blocks of the first memory 920. The special-purpose processors are used to complete complex operations such as image rendering.
The first memory 920 may include random access memory (RAM), flash memory, and the like, and may also be read-only memory (ROM), a hard disk drive (HDD), or a solid-state drive (SSD). The first memory 920 stores the program code that implements the rendering engine and the rendering application server.
The smart NIC 930 is also called a network interface controller, a network interface card, or a local area network (LAN) adapter. Each smart NIC 930 has a unique MAC address, which is burned into a read-only memory chip by the manufacturer of the smart NIC 930 during production. The smart NIC 930 includes a processor 931, a second memory 932, and a transceiver 933. The processor 931 is similar to a general-purpose processor, but the performance requirements on the processor 931 may be lower than those on a general-purpose processor. In a specific embodiment, the processor 931 may be an ARM processor or the like. The second memory 932 may also be flash memory, an HDD, or an SSD, and the storage capacity of the second memory 932 may be smaller than that of the first memory 920. The transceiver 933 may be used to receive and send packets, and to upload received packets to the processor 931 for processing. The smart NIC 930 may also include a plurality of ports, and the ports may be of any one or more of three interface types: thick-cable interfaces, thin-cable interfaces, and twisted-pair interfaces.
For brevity, this embodiment does not re-introduce the definition of the virtual scene, the light source in the virtual scene, the three-dimensional models in the virtual scene, the meshes of each three-dimensional model, forward ray tracing, the pre-ray-tracing results of the meshes of each three-dimensional model, the first observable meshes, the rendering results of the first observable meshes, and so on. For details, refer to the relevant content above; no specific limitation is imposed here. In addition, the program code of the rendering application server 610 and the rendering engine 620 in FIG. 16 may be stored in the first memory 920 of FIG. 17. The rendering node in this implementation may also perform the steps performed by the rendering node in FIG. 14 and the steps performed by the rendering node in FIG. 15.
The foregoing embodiments may be implemented entirely or partially by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented entirely or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, over coaxial cable, optical fiber, or a digital subscriber line) or a wireless manner (for example, over infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more usable media.
The usable medium may be a magnetic medium (for example, a floppy disk, a storage disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)).

Claims (16)

  1. A rendering method, characterized by comprising:
    acquiring, by a remote rendering platform, a virtual scene, the virtual scene comprising a light source and at least one three-dimensional model;
    performing, by the remote rendering platform, forward ray tracing on meshes of the three-dimensional models of the virtual scene according to the light source of the virtual scene, wherein the meshes of the three-dimensional models of the virtual scene are obtained by segmenting surfaces of the three-dimensional models of the virtual scene;
    generating, by the remote rendering platform, pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene according to forward ray tracing results of the meshes of the three-dimensional models of the virtual scene;
    storing, by the remote rendering platform, the pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene;
    receiving, by the remote rendering platform, a first rendering request, and determining first observable meshes of the three-dimensional models of the virtual scene for the first rendering request; and
    determining, by the remote rendering platform, rendering results of the first observable meshes from the stored pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene.
  2. The method according to claim 1, characterized in that the first rendering request is issued by a first terminal according to an operation of a first user, and the first rendering request carries the viewing angle of the first user in the virtual scene.
  3. The method according to claim 1 or 2, characterized in that before the remote rendering platform performs forward ray tracing on the meshes of the three-dimensional models of the virtual scene according to the light source of the virtual scene, the method further comprises:
    obtaining, by the remote rendering platform, forward ray tracing parameters set by a provider of the virtual scene or by a user who issued the first rendering request, the forward ray tracing parameters comprising at least one of the following: a number of samples per unit area and a number of light bounces; and
    the performing, by the remote rendering platform, forward ray tracing on the meshes of the three-dimensional models of the virtual scene according to the light source of the virtual scene comprises:
    performing, by the remote rendering platform, forward ray tracing on the meshes of the three-dimensional models of the virtual scene according to the light source of the virtual scene and the forward ray tracing parameters.
  4. The method according to any one of claims 1 to 3, characterized in that before the remote rendering platform stores the pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene, the method further comprises:
    performing, by the remote rendering platform, reverse ray tracing from a plurality of preset viewing angles on some or all of the meshes of the three-dimensional models of the virtual scene according to the light source of the virtual scene; and
    the generating, by the remote rendering platform, the pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene according to the forward ray tracing results of the meshes of the three-dimensional models of the virtual scene comprises:
    generating, by the remote rendering platform, the pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene according to the forward ray tracing results of the meshes of the three-dimensional models of the virtual scene and the reverse ray tracing results, from the plurality of preset viewing angles, of some or all of the meshes of the three-dimensional models of the virtual scene.
  5. The method according to claim 4, characterized in that the remote rendering platform obtains a preset viewing angle parameter set by the provider of the virtual scene or by the user.
  6. The method according to claim 4 or 5, characterized in that the performing, by the remote rendering platform, reverse ray tracing from a plurality of preset viewing angles on some or all of the meshes of the three-dimensional models of the virtual scene according to the light source of the virtual scene comprises:
    performing, by the remote rendering platform, reverse ray tracing from a plurality of preset viewing angles on those meshes of the three-dimensional models of the virtual scene whose material is smooth, according to the light source of the virtual scene.
  7. A rendering node, characterized by comprising a rendering application server and a rendering engine, wherein:
    the rendering application server is configured to acquire a virtual scene, the virtual scene comprising a light source and at least one three-dimensional model;
    the rendering engine is configured to: perform forward ray tracing on meshes of the three-dimensional models of the virtual scene according to the light source of the virtual scene, wherein the meshes of the three-dimensional models of the virtual scene are obtained by segmenting surfaces of the three-dimensional models of the virtual scene; generate pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene according to forward ray tracing results of the meshes of the three-dimensional models of the virtual scene; and store the pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene;
    the rendering application server is further configured to receive a first rendering request and determine first observable meshes of the three-dimensional models of the virtual scene for the first rendering request; and
    the rendering engine is further configured to determine rendering results of the first observable meshes from the stored pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene.
  8. The rendering node according to claim 7, characterized in that the first rendering request is issued by a first terminal according to an operation of a first user, and the first rendering request carries the viewing angle of the first user in the virtual scene.
  9. The rendering node according to claim 7 or 8, characterized in that:
    the rendering application server is further configured to obtain forward ray tracing parameters set by a provider of the virtual scene or by a user who issued the first rendering request, the forward ray tracing parameters comprising at least one of the following: a number of samples per unit area and a number of light bounces; and
    the rendering engine is configured to perform forward ray tracing on the meshes of the three-dimensional models of the virtual scene according to the light source of the virtual scene and the forward ray tracing parameters.
  10. The rendering node according to any one of claims 7 to 9, characterized in that:
    the rendering engine is further configured to: perform reverse ray tracing from a plurality of preset viewing angles on some or all of the meshes of the three-dimensional models of the virtual scene according to the light source of the virtual scene; and generate the pre-ray-tracing results of the meshes of the three-dimensional models of the virtual scene according to the forward ray tracing results of the meshes of the three-dimensional models of the virtual scene and the reverse ray tracing results, from the plurality of preset viewing angles, of some or all of the meshes of the three-dimensional models of the virtual scene.
  11. The rendering node according to claim 10, characterized in that:
    the rendering application server is further configured to obtain a preset viewing angle parameter set by the provider of the virtual scene or by the user.
  12. The rendering node according to claim 10 or 11, characterized in that:
    the rendering engine is configured to perform reverse ray tracing from a plurality of preset viewing angles on those meshes of the three-dimensional models of the virtual scene whose material is smooth, according to the light source of the virtual scene.
  13. A rendering node, characterized by comprising a memory and a processor, wherein the processor executes a program in the memory to perform the method according to any one of claims 1 to 6.
  14. A computer-readable storage medium, characterized by comprising instructions which, when run on a computing node, cause the computing node to perform the method according to any one of claims 1 to 6.
  15. A rendering system, characterized by comprising a terminal device, a network device, and a remote rendering platform, wherein the terminal device communicates with the remote rendering platform through the network device, and the remote rendering platform is configured to perform the method according to any one of claims 1 to 6.
  16. A computer program product, characterized by comprising instructions which, when run on a rendering node, cause the rendering node to perform the method according to any one of claims 1 to 6.
PCT/CN2021/092699 2020-05-09 2021-05-10 Rendering method, apparatus and system WO2021228031A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010388225 2020-05-09
CN202010388225.4 2020-05-09
CN202110502479.9A CN113628317A (en) 2020-05-09 2021-05-08 Rendering method, device and system
CN202110502479.9 2021-05-08

Publications (1)

Publication Number Publication Date
WO2021228031A1 true WO2021228031A1 (en) 2021-11-18

Family

ID=78378028

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092699 WO2021228031A1 (en) 2020-05-09 2021-05-10 Rendering method, apparatus and system

Country Status (2)

Country Link
CN (1) CN113628317A (en)
WO (1) WO2021228031A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115879322A (en) * 2023-01-30 2023-03-31 安世亚太科技股份有限公司 Multi-physical-field simulation processing method and device, electronic equipment and storage medium
CN115953520A (en) * 2023-03-10 2023-04-11 浪潮电子信息产业股份有限公司 Recording and playback method and device for virtual scene, electronic equipment and medium
WO2023087911A1 (en) * 2021-11-19 2023-05-25 腾讯科技(深圳)有限公司 Data processing method and device and readable storage medium
CN116168131A (en) * 2022-12-09 2023-05-26 北京百度网讯科技有限公司 Cloth rendering method and device, electronic equipment and storage medium
CN116433818A (en) * 2023-03-22 2023-07-14 宝钢工程技术集团有限公司 Cloud CPU and GPU parallel rendering method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635787A (en) * 2022-08-11 2024-03-01 华为云计算技术有限公司 Image rendering method, device and equipment
CN115953524B (en) * 2023-03-09 2023-05-23 腾讯科技(深圳)有限公司 Data processing method, device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680042A (en) * 2017-09-27 2018-02-09 杭州群核信息技术有限公司 Rendering intent, device, engine and storage medium
CN109118567A (en) * 2018-08-16 2019-01-01 郑州云海信息技术有限公司 A kind of ray trace method, system, equipment and computer readable storage medium
CN110738626A (en) * 2019-10-24 2020-01-31 广东三维家信息科技有限公司 Rendering graph optimization method and device and electronic equipment
US20200058103A1 (en) * 2018-08-14 2020-02-20 Nvidia Corporation Using previously rendered scene frames to reduce pixel noise
CN111080765A (en) * 2019-12-23 2020-04-28 北京工业大学 Ray tracing volume rendering method based on gradient sampling

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023087911A1 (en) * 2021-11-19 2023-05-25 腾讯科技(深圳)有限公司 Data processing method and device and readable storage medium
CN116168131A (en) * 2022-12-09 2023-05-26 北京百度网讯科技有限公司 Cloth rendering method and device, electronic equipment and storage medium
CN116168131B (en) * 2022-12-09 2023-11-21 北京百度网讯科技有限公司 Cloth rendering method and device, electronic equipment and storage medium
CN115879322A (en) * 2023-01-30 2023-03-31 安世亚太科技股份有限公司 Multi-physical-field simulation processing method and device, electronic equipment and storage medium
CN115953520A (en) * 2023-03-10 2023-04-11 浪潮电子信息产业股份有限公司 Recording and playback method and device for virtual scene, electronic equipment and medium
CN115953520B (en) * 2023-03-10 2023-07-14 浪潮电子信息产业股份有限公司 Recording and playback method and device for virtual scene, electronic equipment and medium
CN116433818A (en) * 2023-03-22 2023-07-14 宝钢工程技术集团有限公司 Cloud CPU and GPU parallel rendering method
CN116433818B (en) * 2023-03-22 2024-04-16 宝钢工程技术集团有限公司 Cloud CPU and GPU parallel rendering method

Also Published As

Publication number Publication date
CN113628317A (en) 2021-11-09

Similar Documents

Publication Publication Date Title
WO2021228031A1 (en) Rendering method, apparatus and system
US11570372B2 (en) Virtual camera for 3-d modeling applications
US8619078B2 (en) Parallelized ray tracing
US11429690B2 (en) Interactive path tracing on the web
US20240029338A1 (en) Ray-tracing with irradiance caches
US20230230311A1 (en) Rendering Method and Apparatus, and Device
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
US20220198622A1 (en) High dynamic range support for legacy applications
US11823321B2 (en) Denoising techniques suitable for recurrent blurs
WO2022105641A1 (en) Rendering method, device and system
CN110930497A (en) Global illumination intersection acceleration method and device and computer storage medium
CN115205438A (en) Image rendering method and device
CN112041894A (en) Improving realism of scenes involving water surfaces during rendering
US11875478B2 (en) Dynamic image smoothing based on network conditions
US20230298243A1 (en) 3d digital avatar generation from a single or few portrait images
CN116758208A (en) Global illumination rendering method and device, storage medium and electronic equipment
US11836844B2 (en) Motion vector optimization for multiple refractive and reflective interfaces
KR20230013099A (en) Geometry-aware augmented reality effects using real-time depth maps
CN114245907A (en) Auto-exposure ray tracing
Fu et al. Dynamic shadow rendering with shadow volume optimization
WO2023029424A1 (en) Method for rendering application and related device
CN116012666B (en) Image generation, model training and information reconstruction methods and devices and electronic equipment
WO2024109006A1 (en) Light source elimination method and rendering engine
US20240177394A1 (en) Motion vector optimization for multiple refractive and reflective interfaces
US20240112308A1 (en) Joint neural denoising of surfaces and volumes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21804994

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21804994

Country of ref document: EP

Kind code of ref document: A1