WO2020207202A1 - Shadow rendering method, apparatus, computer device and storage medium - Google Patents

Shadow rendering method, apparatus, computer device and storage medium

Info

Publication number
WO2020207202A1
Authority
WO
WIPO (PCT)
Prior art keywords
shadow
rendering
multiple pixels
coordinates
pixels
Prior art date
Application number
PCT/CN2020/079618
Other languages
English (en)
French (fr)
Inventor
侯仓健
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to JP2021530180A priority Critical patent/JP7190042B2/ja
Priority to EP20787238.3A priority patent/EP3955212A4/en
Publication of WO2020207202A1 publication Critical patent/WO2020207202A1/zh
Priority to US17/327,585 priority patent/US11574437B2/en
Priority to US18/093,311 priority patent/US20230143323A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/60 Shadow generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2215/00 Indexing scheme for image rendering
    • G06T 2215/12 Shadow map, environment map

Definitions

  • This application relates to the field of image rendering technology, and in particular to a shadow rendering method, device, computer equipment and storage medium.
  • In order to simulate a more realistic three-dimensional scene, the terminal usually renders the shadows of objects in the three-dimensional scene in real time, where an object can be a character or a thing.
  • At present, third-party plug-ins (for example, Fast Shadow Receiver) can be used to achieve real-time shadow rendering.
  • For example, according to the light source data and an already-baked scene mesh, the mesh of the shadow area is obtained from the scene mesh, and the shadow texture is rendered onto that shadow area to obtain a scene mesh with shadows.
  • However, the shadows directly rendered by third-party plug-ins usually produce "ghosting" problems with the original three-dimensional models in the scene (for example, terrain interpenetration, decals on the ground), so the shadow rendering effect is poor.
  • a shadow rendering method is provided.
  • a shadow rendering method which is executed by a computer device, and the method includes:
  • the at least one rendering structure is used to render at least one shadow of at least one virtual object, and each rendering structure includes a plurality of pixels;
  • the model coordinates of the multiple pixels are obtained, and a model coordinate is used to describe the texture information of a pixel relative to the model vertex of a virtual object;
  • at least one shadow map is sampled according to the model coordinates of the multiple pixels to obtain multiple sampling points corresponding to the multiple pixels, where each shadow map is used to provide texture information of a shadow; and
  • the multiple sampling points are rendered in the virtual scene to obtain the at least one shadow.
  • a shadow rendering device which includes:
  • the first acquiring module is configured to acquire at least one rendering structure in the virtual scene according to the light direction in the virtual scene, where the at least one rendering structure is used to render at least one shadow of at least one virtual object, and each rendering structure includes multiple pixels;
  • the second acquiring module is configured to acquire the model coordinates of the multiple pixels according to the current view angle and the depth information of the multiple pixels, where a model coordinate is used to describe the texture information of a pixel relative to the model base point of a virtual object;
  • the sampling module is configured to sample at least one shadow map according to the model coordinates of the multiple pixels to obtain multiple sampling points corresponding to the multiple pixels, where each shadow map is used to provide texture information of a shadow;
  • the rendering module is used to render the multiple sampling points in the virtual scene to obtain the at least one shadow.
  • In one aspect, a computer device is provided, which includes a memory and a processor. The memory stores computer-readable instructions. When the computer-readable instructions are executed by the processor, the processor performs the operations of the shadow rendering method in any of the above possible implementations.
  • In one aspect, one or more non-volatile storage media storing computer-readable instructions are provided. When the computer-readable instructions are executed by one or more processors, the one or more processors perform the operations of the shadow rendering method in any of the above possible implementations.
  • FIG. 1 is a schematic diagram of an implementation environment of a shadow rendering method provided by an embodiment of the present application
  • FIG. 2 is a flowchart of a shadow rendering method provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of determining a target pixel provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a rendering effect provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a rendering effect provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a rendering effect provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a shadow rendering device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of an implementation environment of a shadow rendering method provided by an embodiment of the present application.
  • a rendering engine 102 may be installed on the terminal 101.
  • the terminal 101 may obtain and display a virtual scene from the server 103.
  • For example, the server 103 may be a game server, and the terminal 101 may obtain the game scene from the game server.
  • the terminal 101 is used to provide shadow rendering services.
  • the terminal 101 can render virtual objects in any virtual scene and shadows of virtual objects based on the rendering engine 102.
  • the virtual objects can be characters or objects.
  • the virtual scene may be any virtual scene displayed on the terminal.
  • the virtual scene may be a game scene, an interior design simulation scene, etc.
  • the embodiment of the present application does not specifically limit the content of the virtual scene.
  • the rendering engine 102 may be installed on the terminal in the form of an independent rendering client, so that the terminal can directly implement shadow rendering through the rendering client.
  • For example, the rendering engine 102 may be Unity3D, Unreal (Unreal Engine), OpenGL (Open Graphics Library), or the like.
  • an application client for providing virtual scenes may be installed on the terminal.
  • the application client may be a game client, a three-dimensional design client, etc.
  • In some embodiments, the rendering engine 102 may be encapsulated in the kernel layer of the terminal operating system in the form of an API (Application Programming Interface).
  • The terminal operating system provides the API to the upper-layer application client, so that the application client on the terminal can call the API to render the virtual objects in the virtual scene and the shadows of the virtual objects.
  • In the related art, the terminal performs shadow rendering based on a third-party plug-in (for example, Fast Shadow Receiver) and obtains the shadow-area mesh from an already-baked scene mesh.
  • The scene mesh used usually occupies 1.5-4 MB of memory, which squeezes the memory resources of the terminal and affects the processing efficiency of the terminal CPU.
  • In addition, because the scene mesh is usually embedded in the SDK (Software Development Kit) of the application client, it occupies 5-10 MB of disk space, which increases the size of the SDK installation package of the application client.
  • For example, a baked 10 MB scene mesh usually increases the size of the installation package by 2 MB, which is not conducive to streamlining the SDK size of the application client.
  • FIG. 2 is a flowchart of a shadow rendering method provided by an embodiment of the present application.
  • this embodiment can be applied to a terminal or an application client on the terminal.
  • the shadow rendering method includes:
  • the terminal obtains a depth map of a virtual scene.
  • the virtual scene may be any virtual scene displayed on the terminal, and the data of the virtual scene may be stored locally or from the cloud.
  • The virtual scene may include at least one virtual object, and at least one model is obtained by modeling the at least one virtual object, so that the at least one virtual object is displayed based on the at least one model, where one model is used to represent the specific display form of one virtual object, and the virtual object can be a character or a thing.
  • The depth map is used to represent the depth information of the at least one virtual object in the virtual scene, and the depth information is used to represent the front-and-back positional relationship of the virtual objects under the current perspective. For example, if the depth of character A in the depth map of a virtual scene is smaller than the depth of building B, the visual effect presented in the virtual scene is that character A is located in front of building B (that is, character A is closer to the current perspective).
  • In the foregoing process, the terminal can obtain the depth map through the target buffer in the rendering engine, because the target buffer usually stores multiple maps used in the rendering process, such as light maps, depth maps, and normal maps, where "multiple" means at least two. Therefore, in the acquisition process, the terminal may use the depth map identifier as an index, query the target buffer according to the index, and, when the index hits a texture, acquire that texture as the depth map. For example, when the rendering engine is unity3D, the depth map can be obtained from the target buffer through Camera.SetTargetBuffers, as in the sketch below.
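  • The following is a minimal Unity C# sketch of this set-up, not code from the patent: a RenderTexture is created and bound to the camera with Camera.SetTargetBuffers so that its depth buffer can serve as the depth map; the class and field names are illustrative.

```csharp
using UnityEngine;

// Minimal sketch: route a camera's output into an explicit render texture so
// that the texture's depth buffer plays the role of the depth map held in the
// target buffer. Names such as "depthCapture" are illustrative assumptions.
public class DepthCaptureExample : MonoBehaviour
{
    public Camera sceneCamera;          // camera rendering the virtual scene
    private RenderTexture depthCapture; // render target whose depth buffer we keep

    void OnEnable()
    {
        // Colour buffer plus a 24-bit depth buffer at screen size.
        depthCapture = new RenderTexture(Screen.width, Screen.height, 24);
        depthCapture.Create();

        // Bind the camera's colour and depth output to this render texture.
        sceneCamera.SetTargetBuffers(depthCapture.colorBuffer, depthCapture.depthBuffer);
    }

    void OnDisable()
    {
        if (depthCapture != null) depthCapture.Release();
    }
}
```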
  • Considering that in a real-time rendering scene the terminal first renders the virtual scene according to the depth map and then renders the shadow of each virtual object in the virtual scene, the terminal can generate, based on the depth map, a copy that is identical to the depth map and directly access the depth map copy in the subsequent steps 202-213, which solves the problem that the same depth map cannot be both read and written during the rendering process.
  • the terminal may execute the Blit command on the depth map of the virtual scene in the BeforeForwardAlpha rendering stage of the virtual scene to obtain a copy of the depth map.
  • In some embodiments, the terminal may also directly store two identical depth maps locally, so that it accesses one of the depth maps when rendering the virtual scene and accesses the other depth map when rendering the shadows in the virtual scene.
  • the terminal may also obtain the depth map based on the built-in depth obtaining interface of the rendering engine.
  • For example, the depth texture may be obtained by setting Camera.depthTextureMode to Depth, as in the sketch below.
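  • A minimal sketch of the depth-map copy described above, assuming Unity's built-in render pipeline: the camera is asked for a depth texture, which is then copied with CommandBuffer.Blit in the BeforeForwardAlpha stage so the shadow pass can read the copy while the original is still being written. The global texture name "_SceneDepthCopy" and the chosen format are illustrative assumptions, not values from the patent.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: copy the camera depth texture into a private render texture in the
// BeforeForwardAlpha rendering stage.
public class DepthCopyExample : MonoBehaviour
{
    public Camera sceneCamera;
    private RenderTexture depthCopy;
    private CommandBuffer copyCmd;

    void OnEnable()
    {
        // Make the camera render a depth texture (Camera.depthTextureMode).
        sceneCamera.depthTextureMode |= DepthTextureMode.Depth;

        depthCopy = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.RFloat);

        copyCmd = new CommandBuffer { name = "CopySceneDepth" };
        // Blit the camera depth into our copy and expose it under an
        // illustrative global shader name.
        copyCmd.Blit(BuiltinRenderTextureType.Depth, depthCopy);
        copyCmd.SetGlobalTexture("_SceneDepthCopy", depthCopy);

        // Execute the copy in the BeforeForwardAlpha rendering stage.
        sceneCamera.AddCommandBuffer(CameraEvent.BeforeForwardAlpha, copyCmd);
    }

    void OnDisable()
    {
        if (copyCmd != null) sceneCamera.RemoveCommandBuffer(CameraEvent.BeforeForwardAlpha, copyCmd);
        if (depthCopy != null) depthCopy.Release();
    }
}
```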
  • the terminal multiplies the viewpoint matrix of the illumination direction by the world coordinates of at least one virtual object in the virtual scene to generate at least one shadow map for performing shadow projection on the at least one virtual object from the illumination direction.
  • In the rendering engine, multiple coordinate systems may be involved: the model coordinate system (model/object coordinates, also known as the object coordinate system), the world coordinate system (world coordinates), the view coordinate system (eye/camera coordinates, also known as the camera coordinate system), and the screen coordinate system (window/screen coordinates, also known as the window coordinate system). The model coordinate system is described in detail in steps 208-209 below, and the screen coordinate system is described in detail in step 205 below.
  • the world coordinate system is the real coordinate system where the virtual scene is located.
  • the world coordinate system uses the scene base point of the virtual scene as the coordinate origin.
  • the position of the pixel of the virtual object in the world coordinate system is called the world coordinate.
  • the visual coordinate system is the coordinate system where the user observes the virtual scene based on the current perspective.
  • the visual coordinate system takes the viewpoint as the coordinate origin, and the position of each virtual object's pixel in the visual coordinate system is called the visual coordinate.
  • the current perspective can be expressed in the form of a camera
  • the view coordinate system can also be referred to as a camera coordinate system.
  • the current perspective can be represented as a camera in the rendering engine.
  • the light source can also be represented as a camera.
  • In the rendering engine, the shadow map is equivalent to the texture obtained by the light source projecting a shadow of each virtual object in the virtual scene from the light direction.
  • the view matrix of the light direction is also a conversion matrix from the world coordinate system to the model coordinate system of the light source.
  • In the foregoing process, the terminal directly multiplies the viewpoint matrix of the illumination direction by the world coordinates of the at least one virtual object in the virtual scene, transforms the at least one virtual object from the current perspective to the perspective of the illumination direction, and obtains a real-time image of the at least one virtual object as the at least one shadow map, where each shadow map corresponds to one virtual object and each shadow map is used to provide texture (UV) information of the shadow of that virtual object.
  • Through the above process, the terminal can obtain the shadow map under the light source perspective (light source camera) directly from the single camera of the current perspective based on the viewpoint-matrix transformation, which avoids setting a new camera at the light source, thereby reducing the number of rendering cameras, shortening the rendering time of the terminal, and improving the rendering efficiency of the terminal.
  • the terminal can mount (that is, configure) a slave camera on the main camera corresponding to the current viewing angle, and input the viewpoint matrix of the light direction into the slave camera to output the shadow map, where the The slave camera is subordinate to the master camera, which can reduce the number of rendering cameras, shorten the rendering time of the terminal, and improve the rendering efficiency of the terminal.
  • the slave camera may be a CommandBuffer.
  • Alternatively, the terminal may not perform the viewpoint-matrix transformation, but directly set a new camera at the light source and obtain the shadow map based on the new camera, thereby reducing the amount of calculation in the shadow rendering process. A sketch of the CommandBuffer-based approach follows.
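  • The following sketch illustrates the "slave camera" idea under stated assumptions: a view matrix is built from the light direction and a single virtual object is re-drawn into a shadow render texture through a CommandBuffer attached to the main camera, so that no second Camera object is created. The orthographic bounds, the depth-only material, and the chosen camera event are illustrative assumptions, not values from the patent.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: render one object from the light's point of view into a shadow map,
// using a CommandBuffer on the main camera instead of a second Camera.
public class LightViewShadowMap : MonoBehaviour
{
    public Camera mainCamera;          // camera of the current perspective
    public Light directionalLight;     // supplies the illumination direction
    public Renderer targetObject;      // virtual object to project
    public Material depthOnlyMaterial; // material that writes the object silhouette
    public RenderTexture shadowMap;    // e.g. a 512x512 RFloat render texture

    private CommandBuffer shadowCmd;

    void OnEnable()
    {
        shadowCmd = new CommandBuffer { name = "RenderShadowMap" };

        // "Viewpoint matrix of the light direction": look at the object along
        // the light direction (world -> light-space transform).
        // NOTE: a production version would also account for Unity's -z view
        // convention and use GL.GetGPUProjectionMatrix for the projection.
        Vector3 lightDir = directionalLight.transform.forward;
        Vector3 eye = targetObject.bounds.center - lightDir * 10f;
        Matrix4x4 lightView = Matrix4x4.TRS(eye, Quaternion.LookRotation(lightDir), Vector3.one).inverse;
        Matrix4x4 lightProj = Matrix4x4.Ortho(-2f, 2f, -2f, 2f, 0.1f, 50f);

        shadowCmd.SetRenderTarget(shadowMap);
        shadowCmd.ClearRenderTarget(true, true, Color.clear);
        shadowCmd.SetViewProjectionMatrices(lightView, lightProj);
        // Draw only this object from the light's point of view.
        shadowCmd.DrawRenderer(targetObject, depthOnlyMaterial);

        mainCamera.AddCommandBuffer(CameraEvent.AfterForwardOpaque, shadowCmd);
    }

    void OnDisable()
    {
        if (shadowCmd != null) mainCamera.RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, shadowCmd);
    }
}
```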
  • the terminal determines the initial size and initial position of the at least one rendering structure according to the at least one virtual object, where the initial size and the initial position match the at least one virtual object.
  • the at least one rendering structure is used to render at least one shadow of the at least one virtual object, each rendering structure corresponds to one shadow of a virtual object, and each rendering structure includes a plurality of pixels.
  • the at least one rendering structure may be a cube, a sphere, or a cylinder.
  • different rendering structures may be used.
  • The embodiment of this application does not specifically limit the shape of the rendering structure.
  • a cube is usually used as the rendering structure.
  • In some embodiments, the terminal may recognize the type of the virtual scene and determine the rendering structure corresponding to that type; for example, when the terminal recognizes that the virtual scene is a game scene, the rendering structure is determined to be a cube.
  • matching the initial size and initial position with at least one virtual object means that for each virtual object, the area of the bottom surface of the rendering structure corresponding to the virtual object is greater than or equal to the area of the model bottom surface of the virtual object.
  • the initial position of the rendering structure is at a position that can overlap with the bottom surface of the virtual object model in both the horizontal direction and the vertical direction.
  • the terminal can determine the initial size and initial position according to the virtual object, generate a rendering structure according to the initial size and initial position, and repeat the above process to obtain at least one rendering structure.
  • For example, when the rendering structure is a cube, the terminal may determine, for each virtual object, a square that exactly contains the bottom surface of the virtual object as the bottom surface of the cube; since the six faces of the cube are the same, the initial size of each face of the cube can thus be determined. Further, the center of the bottom surface of the cube can be aligned with the center of the bottom surface of the virtual object to obtain the initial position of the cube, where the center of the bottom surface may refer to the geometric center of the bottom surface or to the geometric center of gravity of the bottom surface, as in the sketch below.
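  • A minimal C# sketch of this initial-size/initial-position computation, assuming the renderer's axis-aligned bounds approximate the model's bottom surface; all names are illustrative.

```csharp
using UnityEngine;

// Sketch: fit a cube whose bottom face is a square that just contains the
// footprint of the virtual object's model, and align the two bottom centres.
public static class RenderStructureFitting
{
    public static void FitInitialCube(Renderer virtualObject, out Vector3 center, out Vector3 size)
    {
        Bounds b = virtualObject.bounds;

        // Square that exactly contains the bottom face of the model.
        float side = Mathf.Max(b.size.x, b.size.z);

        // All six faces of a cube are equal, so the side length fixes the size.
        size = new Vector3(side, side, side);

        // Align the centre of the cube's bottom face with the centre of the
        // model's bottom face (geometric centre of the footprint).
        Vector3 bottomCenter = new Vector3(b.center.x, b.min.y, b.center.z);
        center = bottomCenter + Vector3.up * (side * 0.5f);
    }
}
```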
  • the terminal determines the direction of the at least one rendering structure according to the illumination direction, and adjusts the initial size and initial position to obtain the at least one rendering structure.
  • In the foregoing process, for each virtual object, the terminal may determine the direction of the rendering structure corresponding to the virtual object (that is, the orientation of the rendering structure) as the light direction, determine multiple tangent lines of the virtual object along the light direction, determine the area enclosed by the intersections of these tangent lines with the shadow projection surface in the virtual scene as the shadow area, translate the rendering structure from the initial position determined in step 203 above to a position that covers the shadow area, and adjust the rendering structure from the initial size to a size that covers the shadow area.
  • the “covering” mentioned here means that any surface of the rendering structure overlaps the shadow area, or the rendering structure can include the shadow area inside the rendering structure.
  • Through step 204 above, each rendering structure can cover the shadow area without affecting the shadow rendering effect while containing as few pixels to be rendered as possible, thereby improving rendering efficiency.
  • the cube is usually adjusted to a rectangular parallelepiped.
  • the embodiment of the present application does not specifically limit the adjusted form of the rendering structure.
  • the shadow projection surface can be any surface in the virtual scene that supports shadow projection.
  • the shadow projection surface can be smooth or uneven.
  • For example, the shadow projection surface can be a lawn, a wall, a pavement, or the like; a sketch of the coverage adjustment is given below.
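  • The following sketch illustrates one way the coverage adjustment of step 204 could be approximated, assuming a horizontal shadow projection surface (y = groundY) and an axis-aligned bounds as the adjusted structure; the patent itself orients the structure along the light direction, so this is only an illustration of the idea.

```csharp
using UnityEngine;

// Sketch: cast rays along the light direction from the corners of the object's
// bounds, collect the intersections with the projection surface as the shadow
// area, and grow the structure until it encloses both object and shadow area.
public static class RenderStructureAdjustment
{
    public static Bounds CoverShadowArea(Bounds objectBounds, Vector3 lightDir, float groundY = 0f)
    {
        Plane ground = new Plane(Vector3.up, new Vector3(0f, groundY, 0f));
        Bounds shadowArea = new Bounds();
        bool first = true;

        for (int i = 0; i < 8; i++)
        {
            // Each corner of the object's bounding box acts as a "tangent" point.
            Vector3 corner = new Vector3(
                (i & 1) == 0 ? objectBounds.min.x : objectBounds.max.x,
                (i & 2) == 0 ? objectBounds.min.y : objectBounds.max.y,
                (i & 4) == 0 ? objectBounds.min.z : objectBounds.max.z);

            Ray r = new Ray(corner, lightDir);
            if (ground.Raycast(r, out float t))
            {
                Vector3 hit = r.GetPoint(t);
                if (first) { shadowArea = new Bounds(hit, Vector3.zero); first = false; }
                else shadowArea.Encapsulate(hit);
            }
        }

        // The adjusted structure must enclose both the object and its shadow area.
        shadowArea.Encapsulate(objectBounds);
        return shadowArea;
    }
}
```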
  • Through steps 203-204 above, the terminal can obtain at least one rendering structure in the virtual scene according to the light direction in the virtual scene. Since each rendering structure corresponds to the shadow of one virtual object, the process of rendering each rendering structure is the process of rendering each shadow, and rendering a rendering structure can include two stages: a vertex shading stage and a pixel shading stage.
  • In some embodiments, during the vertex shading stage the terminal may store, in the vertex data of each pixel of each rendering structure, the viewpoint position in the virtual scene and the ray direction determined by the viewpoint and the pixel, so that the following step 206 can be executed based on the information stored in the vertex data. In addition, the terminal may perform the following step 205 in the pixel shading stage to obtain the depth information of each pixel.
  • In some embodiments, the terminal may not perform the above step 204; that is, after determining the initial size and initial position of the rendering structure according to the virtual object and determining the direction of the rendering structure according to the light direction, the terminal generates the rendering structure and directly executes the following step 206 without further adjusting the size and position of the rendering structure, thereby simplifying the shadow rendering process.
  • The terminal obtains the depth information of the multiple pixels from the depth map according to the screen coordinates of the multiple pixels in each rendering structure, where a screen coordinate is used to describe the position information of a pixel relative to the screen base point of the terminal screen.
  • the screen coordinate system is the coordinate system where the virtual scene is displayed based on the terminal screen.
  • the screen coordinate system takes the screen base point of the terminal screen as the coordinate origin, and the position of the pixel point of each virtual object in the screen coordinate system is called the screen coordinate .
  • the screen base point may be any point on the terminal screen, for example, the screen base point may be the upper left corner point of the screen.
  • the virtual scene is usually not completely displayed on the terminal screen, and the user can observe more virtual objects in the virtual scene by controlling the translation or rotation of the current viewing angle.
  • each rendering structure can include multiple pixels.
  • In the foregoing process, for each pixel, the terminal can determine, from the depth map, the point whose coordinates are consistent with the screen coordinates of that pixel, determine the depth information of that point as the depth information of the pixel, and repeat the above process to obtain the depth information of the multiple pixels in each rendering structure.
  • the depth map accessed in step 205 may be a copy of the depth map mentioned in step 201.
  • The terminal uses the viewpoint of the current viewing angle as the ray endpoint, and determines, in the direction of the ray determined by the viewpoint and the pixel, a target pixel whose distance from the viewpoint matches the depth information of the pixel.
  • In the vertex shading stage mentioned above, the terminal can store the viewpoint position and the ray direction in the vertex data of each pixel. Therefore, in step 206 the terminal can directly access the vertex data to obtain the position of the viewpoint and the direction of the ray. Since an endpoint and a direction uniquely determine a ray, the terminal uses the viewpoint as the ray endpoint, and along the ray direction the depth information of the pixel uniquely determines the target pixel.
  • In some embodiments, the terminal may not store the viewpoint position and the ray direction in the vertex data, but only store the viewpoint position in the vertex shading stage. In step 206, the viewpoint position is obtained from the vertex data, the ray direction is determined by the direction of the line connecting the viewpoint position to the pixel position, and the target pixel is then determined as described above, which is not repeated here.
  • Fig. 3 is a schematic diagram of determining the target pixel provided by the embodiment of the present application.
  • the rendering structure is a cube as an example for illustration.
  • Fig. 3 shows one surface of the cube for illustration, and the same applies to any pixel on any surface of the cube: since the pixels on the surface of the cube are rendered when the cube is rendered, for any pixel A on the surface of the cube, the position of viewpoint V and the ray direction can be obtained from the vertex data of point A. Therefore, a ray VA can be uniquely determined according to the coordinates of point A, the coordinates of point V, and the ray direction.
  • Assuming that the depth value of point A obtained from the depth map in step 205 is 600, a point B at a distance of 600 from point V can be determined on ray VA. Since the depth value of point A indicates the depth of the point inside the cube that the surface point actually corresponds to, in steps 205-206 above, the target pixel point B actually determined by the terminal is the pixel point corresponding to point A on the depth map (that is, the pixel point corresponding to point A on the virtual object).
  • Through the above process, the terminal can determine, based on the depth value of each pixel on the rendering structure, the target pixel on the depth map corresponding to that pixel, so that in the subsequent step 211 the shadow map can be sampled according to the model coordinates of each target pixel and pasted onto the rendering structure; therefore, the shadow of each virtual object can be rendered in real time in the virtual scene. For example, the shadow map is sampled according to the model coordinates of point B, so that the rendered point A can present the visual effect of the shadow corresponding to point B.
  • the process of obtaining the model coordinates of point B will be described in detail in steps 207-209.
  • the terminal determines the world coordinate of the target pixel as the world coordinate of the pixel.
  • a world coordinate is used to describe the position information of a pixel relative to the scene base point of the virtual scene.
  • In the foregoing process, the terminal determines the value of the target pixel in the world coordinate system as the world coordinate of the pixel of the rendering structure, and repeats the above steps 206-207 for each pixel of the rendering structure; therefore, the terminal can obtain the world coordinates of the multiple pixels according to the current viewing angle and the depth information of the multiple pixels, as in the sketch below.
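  • A CPU-side C# sketch of the reconstruction in steps 205-207 above (in the actual method this happens in the shader): the scene depth is looked up at the pixel's screen coordinates and the target pixel is placed on the ray from the viewpoint through the pixel at that distance. The sampleLinearEyeDepth delegate is a placeholder for however the depth-map copy is read, not a Unity API.

```csharp
using UnityEngine;

// Sketch: reconstruct the target pixel B for a pixel A on the rendering
// structure, and take its world position as the world coordinate of the pixel.
public static class TargetPixelReconstruction
{
    public static Vector3 ReconstructTargetPixel(
        Vector3 viewpoint,                                // position of the current perspective (point V)
        Vector3 surfacePixelWorldPos,                     // world position of pixel A on the structure
        System.Func<Vector2, float> sampleLinearEyeDepth, // depth-map lookup: screen UV -> distance
        Camera currentCamera)
    {
        // Screen coordinates of pixel A, normalised to [0,1].
        Vector3 screen = currentCamera.WorldToViewportPoint(surfacePixelWorldPos);
        float depth = sampleLinearEyeDepth(new Vector2(screen.x, screen.y));

        // The target pixel B lies on the ray from V through A, at the distance
        // given by the depth map (as in the point-A / point-B example above).
        Vector3 rayDir = (surfacePixelWorldPos - viewpoint).normalized;
        return viewpoint + rayDir * depth;
    }
}
```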
  • the terminal multiplies the viewpoint matrix of the current view by the world coordinates of the plurality of pixels to obtain the local coordinates of the plurality of pixels.
  • the viewpoint matrix is a conversion matrix that is mapped from the world coordinate system to the model coordinate system.
  • the model coordinate system in the rendering engine is the coordinate system where the virtual object is located in the three-dimensional model.
  • each virtual object model has its own model coordinates.
  • the position of each virtual object's pixel in the model coordinate system is also the model coordinate.
  • the number of model coordinate systems is equal to the number of virtual objects.
  • The model coordinate system is an imaginary coordinate system that takes the center of each virtual object as its coordinate origin; therefore, even if the virtual object changes dynamically, the relative position between the model coordinate system and the virtual object always remains constant.
  • a local coordinate refers to the coordinate value of a pixel in the [-1,1] space of the model coordinate system, because the viewpoint matrix of the current perspective is mapped from the world coordinate system to the model of at least one rendering structure The conversion matrix of the coordinate system, so the viewpoint matrix of the current viewing angle is left multiplied by the world coordinates of the multiple pixels, so that the local coordinates of the multiple pixels can be obtained.
  • the terminal maps the local coordinates of the multiple pixels to the texture mapping space to obtain the model coordinates of the multiple pixels.
  • the terminal obtains the model coordinates of the multiple pixels according to the world coordinates of the multiple pixels.
  • a model coordinate is used to describe the texture information of a pixel point relative to the model base point of a virtual object.
  • A model coordinate in step 209 above refers to the coordinate value of a pixel in the [0,1] space of the model coordinate system. Because the value range of the texture (UV) information of the at least one shadow map acquired in step 202 above is [0,1], while left-multiplying the world coordinates of the multiple pixels by the viewpoint matrix actually yields local coordinates in the [-1,1] range, the local coordinates of the multiple pixels in the [-1,1] space need to be mapped to model coordinates in the [0,1] space, so that the model coordinates in the [0,1] space correspond one-to-one with the texture information in the shadow map, which facilitates the sampling process of step 211 described below.
  • When the terminal maps the local coordinates of the multiple pixels to the texture mapping space (UV space), the local coordinates of each pixel can be input to a conversion function to output the model coordinates of the multiple pixels in the UV space, where the conversion function can be a built-in function of the rendering engine.
  • Through the above steps, the terminal obtains the model coordinates of the multiple pixels according to the current view angle and the depth information of the multiple pixels; that is, it obtains the world coordinates of the multiple pixels, converts the world coordinates of the multiple pixels into the local coordinates of the multiple pixels through the viewpoint matrix, and then maps them to the UV space to obtain the model coordinates of the multiple pixels, as illustrated in the sketch below.
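  • A minimal sketch of the mapping in steps 208-209: the world coordinate is transformed by the world-to-model matrix (what the text above calls the "viewpoint matrix"), and the [-1,1] local coordinates are remapped to the [0,1] UV space. Which two local axes become U and V depends on the projection set-up, so the use of x and y here is purely illustrative.

```csharp
using UnityEngine;

// Sketch: world coordinate -> [-1,1] local coordinate -> [0,1] model (UV) coordinate.
public static class ModelCoordinateMapping
{
    public static Vector2 WorldToShadowUV(Matrix4x4 worldToModel, Vector3 worldPos)
    {
        // Local coordinates in the [-1,1] space of the model coordinate system.
        Vector3 local = worldToModel.MultiplyPoint(worldPos);

        // Map [-1,1] -> [0,1] so the result lines up with the shadow map's UV range.
        return new Vector2(local.x * 0.5f + 0.5f, local.y * 0.5f + 0.5f);
    }
}
```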
  • the terminal marks the material of the virtual object that is not allowed to be occluded as a target material, and the target material is a material that does not support shadow rendering during the rendering process.
  • FIG. 4 is a schematic diagram of a rendering effect provided by an embodiment of the present application. Referring to FIG. 4, the shadow of the character in the figure occludes the character's boots and affects the rendering effect of the virtual scene.
  • By performing step 210, the terminal can mark the material of the virtual object that is not allowed to be occluded as the target material. Because the target material does not support shadow rendering, in the rendering process of the following steps 211-212 no overlapping rendering is performed on any virtual object marked with the target material, which prevents the shadow from occluding the virtual object, optimizes the rendering effect of the virtual scene, and improves the realism of shadow rendering. It should be noted that in the above process only a mark is made on the material of the virtual object; the material itself is not substantially changed.
  • In some embodiments, the terminal can perform the marking based on the StencilBuffer (stencil buffer). For example, the Stencil Pass of the material of virtual objects on the shadow projection surface that are allowed to be occluded can be marked as Replace, and the Stencil Pass of the material of virtual objects on the shadow projection surface that are not allowed to be occluded can be marked as IncrSat; when rendering, the terminal checks the material of the virtual object on which the shadow is cast, and sampling and drawing are performed only when the stencil comparison is Equal (Comp Equal), which avoids shadows occluding virtual objects and optimizes the rendering effect of the virtual scene. A sketch of this marking follows.
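  • A sketch of the stencil marking of step 210, assuming the materials' shaders expose their stencil state through properties named "_StencilComp" and "_StencilOp" (an assumption for illustration, not something stated in the patent); StencilOp and CompareFunction are Unity's standard stencil enums.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: mark receiver and protected materials through stencil state so the
// shadow pass only draws where the stencil comparison passes.
public static class ShadowOcclusionMarking
{
    // Receivers that are allowed to be occluded: Stencil Pass = Replace.
    public static void MarkAsReceiver(Material m)
    {
        m.SetInt("_StencilOp", (int)StencilOp.Replace);          // assumes shader property exists
        m.SetInt("_StencilComp", (int)CompareFunction.Always);
    }

    // Objects that must not be occluded by the shadow: Stencil Pass = IncrSat,
    // so the later shadow pass (which requires Comp Equal) skips them.
    public static void MarkAsNotOccludable(Material m)
    {
        m.SetInt("_StencilOp", (int)StencilOp.IncrementSaturate); // assumes shader property exists
    }

    // The shadow material itself only draws where the stencil test is Equal.
    public static void ConfigureShadowMaterial(Material shadowMaterial)
    {
        shadowMaterial.SetInt("_StencilComp", (int)CompareFunction.Equal);
    }
}
```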
  • In other embodiments, the terminal may not perform the above step 210; after obtaining the model coordinates of the multiple pixels, it directly performs the following step 211 to sample at least one shadow map, which shortens the shadow rendering time and improves the efficiency of shadow rendering.
  • the terminal samples at least one shadow map according to the model coordinates of the multiple pixels to obtain multiple sampling points corresponding to the multiple pixels, and each shadow map is used to provide texture information of a shadow.
  • In step 211, the terminal samples the at least one shadow map obtained in step 202 according to the model coordinates of the multiple pixels. Since the model coordinates of the multiple pixels are in the [0,1] space and the texture information of each shadow map is also located in the [0,1] space, for the multiple pixels of each rendering structure the terminal can find, in the shadow map of the virtual object corresponding to the rendering structure, the multiple sampling points corresponding to the multiple pixels.
  • That is, the sampling process in step 211 above is: for each pixel, the terminal acquires the point corresponding to the model coordinates of the pixel as a sampling point, and repeats this sampling process for each rendering structure to obtain the multiple sampling points of each rendering structure. For example, if the model coordinates of a certain pixel are (0.5, 0.5), the sampling point of the pixel is determined at (0.5, 0.5) in the shadow map, as in the sketch below.
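  • A minimal sketch of the sampling of step 211, shown CPU-side with GetPixelBilinear for clarity; in the actual pixel shading stage the same [0,1] model coordinates would be used as the UVs of a texture sample.

```csharp
using UnityEngine;

// Sketch: read the texel of the shadow map that corresponds to a pixel's
// model coordinate as its sampling point.
public static class ShadowMapSampling
{
    public static Color SampleShadow(Texture2D shadowMap, Vector2 modelCoord)
    {
        // modelCoord = (0.5, 0.5) samples the centre of the shadow map, and so on.
        return shadowMap.GetPixelBilinear(modelCoord.x, modelCoord.y);
    }
}
```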
  • the terminal renders the multiple sampling points in the virtual scene to obtain at least one shadow.
  • In the foregoing process, when the terminal renders based on the multiple sampling points, it can call the built-in drawing interface (draw call) to access the GPU (graphics processing unit) of the terminal to draw the multiple sampling points and obtain the at least one shadow.
  • In some embodiments, the terminal may also not perform the above step 202, that is, not acquire the at least one shadow map, but instead directly store one or more target shadow maps locally, so that in the above steps 211-212 the one or more target shadow maps are sampled according to the model coordinates of the multiple pixels to obtain multiple sampling points corresponding to the multiple pixels, and the multiple sampling points are rendered in the virtual scene to obtain the at least one shadow.
  • all rendering structures may sample the same target shadow map so that all virtual objects have the same shadow.
  • the target shadow map can be a disc.
  • different rendering structures can sample different target shadow maps so that different virtual objects have different shadows.
  • For example, the target shadow maps include a circular map and a square map; the rendering structure corresponding to a character samples the circular map, and the rendering structure corresponding to a building samples the square map.
  • FIG. 5 is a schematic diagram of a rendering effect provided by an embodiment of the present application. Referring to FIG. 5, since the terminal executes the material marking in step 210, the boots of the characters in the figure are restored to the effect of not being blocked by shadows.
  • Figure 6 is a schematic diagram of a rendering effect provided by an embodiment of the present application. Referring to Figure 6, since the terminal executed the process of obtaining shadow maps in step 202 above, the rendered shadow is consistent with the contour of the virtual object itself; if a locally stored target shadow map is used instead, for example a disc-shaped target shadow map, the rendered shadow contour is as shown in Figure 4.
  • Through the above method, the terminal can make full use of the functions of the rendering engine itself to provide a shadow rendering method that has no additional storage overhead, no additional production overhead, and no additional memory at runtime, whose operating efficiency is stable and not affected by the movement of virtual objects, and which has strong scalability.
  • The method provided in the embodiment of the present application obtains at least one rendering structure in the virtual scene according to the illumination direction in the virtual scene, thereby using the at least one rendering structure as a model for shadow rendering; obtains the model coordinates of the multiple pixels according to the current perspective and the depth information of the multiple pixels, so that the model coordinates correspond one-to-one with the UV space of the shadow map; samples at least one shadow map according to the model coordinates of the multiple pixels to obtain multiple sampling points corresponding to the multiple pixels; and renders the multiple sampling points in the virtual scene to obtain the at least one shadow. This improves the shadow rendering effect, enables real-time rendering of shadows based on the functions of the rendering engine itself, and improves the processing efficiency of the terminal CPU.
  • Further, the coordinate system transformation is performed through the viewpoint matrix of the light direction, and the shadow map is obtained directly from the single camera of the current viewing angle, which avoids setting up a new camera at the light source, thereby reducing the number of rendering cameras, shortening the rendering time of the terminal, and improving the rendering efficiency of the terminal.
  • Further, determining the direction of the at least one rendering structure according to the illumination direction and adjusting its initial size and initial position enables each rendering structure to cover the shadow area without affecting the shadow rendering effect, and reduces the number of pixels in the rendering structure that need to be rendered, improving rendering efficiency.
  • Further, the depth information of the multiple pixels is determined according to the screen coordinates of the multiple pixels, and the depth map can be saved as an identical copy that is directly accessed during subsequent rendering, which reduces the rendering burden of the terminal.
  • Further, the local coordinates of the pixels are obtained and mapped to the UV space to obtain the model coordinates of the pixels, which facilitates sampling and drawing based on the UV information of each shadow map.
  • Further, the material of the virtual object that is not allowed to be occluded is marked in advance, so that the terminal does not perform overlapping rendering on any virtual object marked with the target material, which prevents the shadow from blocking the virtual object, optimizes the rendering effect of the virtual scene, and improves the realism of shadow rendering.
  • It should be understood that the steps in the embodiments of the present application are not necessarily executed in the order indicated by the step numbers. Unless specifically stated herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least part of the steps in each embodiment may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily executed at the same time, but may be executed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with at least part of the other steps, or with sub-steps or stages of other steps.
  • Table 1 shows the shadow rendering efficiency of the three solutions when 14 high-precision models are tested on the terminal at rest:
    Shadow rendering scheme                | Single frame duration (ms) | FPS (frame rate)
    Unity3D native solution                | 42                         | 24
    Fast Shadow Receiver                   | 37                         | 27
    Technical solution of this application | 37                         | 27
  • As shown in Table 1, the unity3D native rendering engine requires 42 milliseconds (ms) to render one frame of shadow, and its FPS (frames per second, that is, the frame rate) is 24.
  • Both the technical solution of this application and the third-party plug-in require 37 ms to render one frame of shadow, and both have an FPS of 27. Therefore, the technical solution of this application and Fast Shadow Receiver have the same shadow rendering efficiency when the model is stationary, and both have higher shadow rendering efficiency than the unity3D native solution.
  • Table 2 shows the shadow rendering efficiency of the two solutions when the model is moving.
  • 100 disc shadows are used to move quickly in the same virtual scene.
  • As shown in Table 2, when the model is moving, the technical solution of this application further reduces the time spent rendering one frame of shadow, increases the number of shadow frames transmitted per second, and improves the shadow rendering efficiency of the terminal; because there is no additional overhead or burden, it also improves the processing efficiency of the terminal CPU.
  • FIG. 7 is a schematic structural diagram of a shadow rendering device provided by an embodiment of the present application.
  • the device includes a first acquisition module 701, a second acquisition module 702, a sampling module 703, and a rendering module 704, which will be described in detail below:
  • The first acquisition module 701 is configured to acquire at least one rendering structure in the virtual scene according to the light direction in the virtual scene, where the at least one rendering structure is used to render at least one shadow of at least one virtual object, and each rendering structure includes multiple pixels;
  • the second acquiring module 702 is configured to acquire the model coordinates of the multiple pixels according to the current view angle and the depth information of the multiple pixels, where a model coordinate is used to describe the texture information of a pixel relative to the model base point of a virtual object;
  • the sampling module 703 is configured to sample at least one shadow map according to the model coordinates of the multiple pixels to obtain multiple sampling points corresponding to the multiple pixels, where each shadow map is used to provide texture information of a shadow;
  • the rendering module 704 is configured to render the multiple sampling points in the virtual scene to obtain the at least one shadow.
  • the device provided by the embodiment of the present application acquires at least one rendering structure in the virtual scene according to the light direction in the virtual scene, thereby using the at least one rendering structure as a shadow rendering model, and according to the current perspective and the multiple
  • the depth information of the pixel points, the model coordinates of the multiple pixels are obtained, so that the model coordinates and the UV space of the shadow map can be one-to-one correspondence, and at least one shadow map is sampled according to the model coordinates of the multiple pixels to obtain
  • the multiple sampling points corresponding to the multiple pixel points are rendered in the virtual scene to obtain the at least one shadow.
  • Real-time rendering of the shadow can be realized based on the function of the rendering engine, saving The memory resources of the terminal improve the processing efficiency of the terminal CPU.
  • the second obtaining module 702 includes:
  • the first acquiring unit is configured to acquire the world coordinates of the multiple pixels according to the current view angle and the depth information of the multiple pixels, and a world coordinate is used to describe the position of a pixel relative to the scene base point of the virtual scene information;
  • the second acquiring unit is configured to acquire the model coordinates of the multiple pixels according to the world coordinates of the multiple pixels.
  • the first acquiring unit is used to:
  • use the viewpoint of the current viewing angle as the ray endpoint, and determine, in the direction of the ray determined by the viewpoint and the pixel, a target pixel whose distance from the viewpoint matches the depth information of the pixel; and determine the world coordinate of the target pixel as the world coordinate of the pixel.
  • the second acquiring unit is used to:
  • multiply the viewpoint matrix of the current view by the world coordinates of the multiple pixels to obtain the local coordinates of the multiple pixels, where the viewpoint matrix is a conversion matrix that maps from the world coordinate system to the model coordinate system; and
  • map the local coordinates of the multiple pixels to the texture mapping space to obtain the model coordinates of the multiple pixels.
  • the device further includes:
  • the depth information of the multiple pixels is obtained from the depth map according to the screen coordinates of the multiple pixels in each rendering structure, where one screen coordinate is used to describe the position information of a pixel relative to the screen base point of the terminal screen.
  • the first obtaining module 701 is used to:
  • determine the initial size and initial position of the at least one rendering structure according to the at least one virtual object, where the initial size and initial position match the at least one virtual object; and determine the direction of the at least one rendering structure according to the illumination direction, and adjust the initial size and initial position to obtain the at least one rendering structure.
  • the at least one rendering structure is a cube, a sphere, or a cylinder.
  • the device further includes:
  • the device further includes:
  • It should be noted that when the shadow rendering device provided in the above embodiment renders shadows, the division of the above functional modules is only used as an example for illustration; in practical applications, the above functions can be allocated to different functional modules as required, that is, the internal structure of the terminal is divided into different functional modules to complete all or part of the functions described above.
  • the shadow rendering device provided by the above-mentioned embodiment and the shadow rendering method embodiment belong to the same concept, and the specific implementation process is detailed in the shadow rendering method embodiment, which will not be repeated here.
  • FIG. 8 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • The terminal 800 can be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer.
  • the terminal 800 may also be called user equipment, portable terminal, laptop terminal, desktop terminal and other names.
  • the terminal 800 includes a processor 801 and a memory 802.
  • the processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • The processor 801 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 801 may also include a main processor and a coprocessor.
  • The main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 801 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is used to render and draw content that needs to be displayed on the display screen.
  • the processor 801 may further include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • the memory 802 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 802 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • The non-transitory computer-readable storage medium in the memory 802 is used to store at least one instruction, and the at least one instruction is to be executed by the processor 801 to implement the shadow rendering method provided by the method embodiments in this application.
  • the terminal 800 may optionally further include: a peripheral device interface 803 and at least one peripheral device.
  • the processor 801, the memory 802, and the peripheral device interface 803 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 803 through a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 804, a touch display screen 805, a camera 806, an audio circuit 807, a positioning component 808, and a power supply 809.
  • the peripheral device interface 803 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 801 and the memory 802.
  • In some embodiments, the processor 801, the memory 802, and the peripheral device interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral device interface 803 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 804 is used to receive and transmit RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 804 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 804 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a user identity module card, and so on.
  • the radio frequency circuit 804 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: metropolitan area network, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area network and/or WiFi (Wireless Fidelity, wireless fidelity) network.
  • the radio frequency circuit 804 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
  • the display screen 805 is used to display UI (User Interface).
  • the UI can include graphics, text, icons, videos, and any combination thereof.
  • the display screen 805 also has the ability to collect touch signals on or above the surface of the display screen 805.
  • the touch signal can be input to the processor 801 as a control signal for processing.
  • the display screen 805 may also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • In some embodiments, there may be one display screen 805, which is provided on the front panel of the terminal 800; in other embodiments, there may be at least two display screens 805, which are respectively arranged on different surfaces of the terminal 800 or adopt a folded design; in still other embodiments, the display screen 805 may be a flexible display screen arranged on a curved or folding surface of the terminal 800. Furthermore, the display screen 805 can also be set to a non-rectangular irregular pattern, that is, a special-shaped screen.
  • the display screen 805 may be made of materials such as LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode, organic light emitting diode).
  • the camera assembly 806 is used to capture images or videos.
  • the camera assembly 806 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
  • the camera assembly 806 may also include a flash.
  • the flash can be a single-color flash or a dual-color flash. Dual color temperature flash refers to a combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.
  • the audio circuit 807 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals to be input to the processor 801 for processing, or input to the radio frequency circuit 804 to implement voice communication. For the purpose of stereo collection or noise reduction, there may be multiple microphones, which are respectively set in different parts of the terminal 800.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert the electrical signal from the processor 801 or the radio frequency circuit 804 into sound waves.
  • the speaker can be a traditional membrane speaker or a piezoelectric ceramic speaker.
  • When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 807 may also include a headphone jack.
  • the positioning component 808 is used to locate the current geographic position of the terminal 800 to implement navigation or LBS (Location Based Service, location-based service).
  • The positioning component 808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power supply 809 is used to supply power to various components in the terminal 800.
  • the power source 809 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 800 further includes one or more sensors 810.
  • the one or more sensors 810 include, but are not limited to, an acceleration sensor 811, a gyroscope sensor 812, a pressure sensor 813, a fingerprint sensor 814, an optical sensor 815, and a proximity sensor 816.
  • the acceleration sensor 811 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 800.
  • the acceleration sensor 811 can be used to detect the components of the gravitational acceleration on three coordinate axes.
  • the processor 801 may control the touch screen 805 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 811.
  • the acceleration sensor 811 may also be used for the collection of game or user motion data.
  • the gyroscope sensor 812 can detect the body direction and rotation angle of the terminal 800, and the gyroscope sensor 812 can cooperate with the acceleration sensor 811 to collect the user's 3D actions on the terminal 800.
  • the processor 801 can implement the following functions according to the data collected by the gyroscope sensor 812: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 813 may be disposed on the side frame of the terminal 800 and/or the lower layer of the touch screen 805.
  • the processor 801 performs left and right hand recognition or quick operation according to the holding signal collected by the pressure sensor 813.
  • the processor 801 controls the operability controls on the UI interface according to the user's pressure operation on the touch display screen 805.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 814 is used to collect the user's fingerprint.
  • the processor 801 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the user's identity according to the collected fingerprint.
  • the processor 801 authorizes the user to perform related sensitive operations, which include unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
  • the fingerprint sensor 814 may be provided on the front, back or side of the terminal 800. When a physical button or a manufacturer logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical button or the manufacturer logo.
  • the optical sensor 815 is used to collect the ambient light intensity.
  • the processor 801 may control the display brightness of the touch screen 805 according to the ambient light intensity collected by the optical sensor 815. Specifically, when the ambient light intensity is high, the display brightness of the touch screen 805 is increased; when the ambient light intensity is low, the display brightness of the touch screen 805 is decreased.
  • the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 according to the ambient light intensity collected by the optical sensor 815.
  • the proximity sensor 816, also called a distance sensor, is usually arranged on the front panel of the terminal 800.
  • the proximity sensor 816 is used to collect the distance between the user and the front of the terminal 800.
  • when the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually decreases, the processor 801 controls the touch display screen 805 to switch from the screen-on state to the screen-off state; when the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually increases, the processor 801 controls the touch display screen 805 to switch from the screen-off state to the screen-on state.
  • the structure shown in FIG. 8 does not constitute a limitation on the terminal 800, which may include more or fewer components than shown, or combine some components, or adopt a different component arrangement.
  • a computer-readable storage medium, such as a memory including instructions, is also provided; the instructions may be executed by a processor in a terminal to complete the shadow rendering method in the foregoing embodiments.
  • the computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, or optical storage.
  • Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory.
  • RAM may be in various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

This application discloses a shadow rendering method and apparatus, a computer device, and a storage medium. The method includes: obtaining at least one rendering structure in a virtual scene according to a lighting direction in the virtual scene; obtaining model coordinates of multiple pixels according to a current viewing angle and depth information of the multiple pixels; sampling at least one shadow map according to the model coordinates of the multiple pixels, to obtain multiple sampling points corresponding to the multiple pixels; and rendering the multiple sampling points in the virtual scene to obtain at least one shadow.

Description

Shadow rendering method and apparatus, computer device, and storage medium
This application claims priority to Chinese Patent Application No. 201910290577.3, entitled "Shadow rendering method, apparatus, terminal, and storage medium" and filed with the China National Intellectual Property Administration on April 11, 2019, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of image rendering technologies, and in particular, to a shadow rendering method and apparatus, a computer device, and a storage medium.
Background
With the development of image rendering technologies, to simulate a more realistic three-dimensional scene, a terminal usually renders the shadow of an object in the three-dimensional scene in real time, where the object may be a character or a thing.
At present, real-time shadow rendering can be implemented through a third-party plug-in (for example, Fast Shadow Receiver): according to light source data and a baked scene mesh, the mesh of the shadow region is obtained from the scene mesh, and the shadow texture is rendered onto the shadow region to obtain a scene mesh with a shadow. However, a shadow rendered directly by the third-party plug-in usually produces "ghosting" problems with the original three-dimensional models in the scene (for example, terrain intersecting with itself, decals on the ground), so the shadow rendering effect is poor.
Summary
According to various embodiments provided in this application, a shadow rendering method and apparatus, a computer device, and a storage medium are provided.
In one aspect, a shadow rendering method is provided, performed by a computer device, the method including:
obtaining at least one rendering structure in a virtual scene according to a lighting direction in the virtual scene, the at least one rendering structure being used for rendering at least one shadow of at least one virtual object, and each rendering structure including multiple pixels;
obtaining model coordinates of the multiple pixels according to a current viewing angle and depth information of the multiple pixels, one model coordinate being used for describing texture information of one pixel relative to a model vertex of one virtual object;
sampling at least one shadow map according to the model coordinates of the multiple pixels, to obtain multiple sampling points corresponding to the multiple pixels, each shadow map being used for providing texture information of one shadow; and
rendering the multiple sampling points in the virtual scene to obtain the at least one shadow.
In one aspect, a shadow rendering apparatus is provided, the apparatus including:
a first obtaining module, configured to obtain at least one rendering structure in a virtual scene according to a lighting direction in the virtual scene, the at least one rendering structure being used for rendering at least one shadow of at least one virtual object, and each rendering structure including multiple pixels;
a second obtaining module, configured to obtain model coordinates of the multiple pixels according to a current viewing angle and depth information of the multiple pixels, one model coordinate being used for describing texture information of one pixel relative to a model base point of one virtual object;
a sampling module, configured to sample at least one shadow map according to the model coordinates of the multiple pixels, to obtain multiple sampling points corresponding to the multiple pixels, each shadow map being used for providing texture information of one shadow; and
a rendering module, configured to render the multiple sampling points in the virtual scene to obtain the at least one shadow.
In one aspect, a computer device is provided, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the operations performed by the shadow rendering method in any one of the foregoing possible implementations.
In one aspect, one or more non-volatile storage media storing computer-readable instructions are provided, the computer-readable instructions, when executed by one or more processors, causing the one or more processors to perform the operations performed by the shadow rendering method in any one of the foregoing possible implementations.
Details of one or more embodiments of this application are set forth in the accompanying drawings and the descriptions below. Other features, objectives, and advantages of this application become apparent from the specification, the accompanying drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
FIG. 1 is a schematic diagram of an implementation environment of a shadow rendering method according to an embodiment of this application;
FIG. 2 is a flowchart of a shadow rendering method according to an embodiment of this application;
FIG. 3 is a schematic diagram of determining a target pixel according to an embodiment of this application;
FIG. 4 is a schematic diagram of a rendering effect according to an embodiment of this application;
FIG. 5 is a schematic diagram of a rendering effect according to an embodiment of this application;
FIG. 6 is a schematic diagram of a rendering effect according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of a shadow rendering apparatus according to an embodiment of this application; and
FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the following further describes implementations of this application in detail with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of an implementation environment of a shadow rendering method according to an embodiment of this application. Referring to FIG. 1, in this implementation environment, a rendering engine 102 may be installed on a terminal 101. The terminal 101 may obtain a virtual scene from a server 103 and display it; the server 103 may be a game server, and the terminal 101 may obtain a game scene from the game server.
The terminal 101 is configured to provide a shadow rendering service. Based on the rendering engine 102, the terminal 101 can render a virtual object in any virtual scene and the shadow of the virtual object. The virtual object may be a character or a thing, and the virtual scene may be any virtual scene displayed on the terminal, for example, a game scene or an interior design simulation scene; the content of the virtual scene is not specifically limited in the embodiments of this application.
In some embodiments, the rendering engine 102 may be installed on the terminal in the form of an independent rendering client, so that the terminal can implement shadow rendering directly through the rendering client. For example, the rendering engine 102 may be Unity3D, Unreal (Unreal Engine), or OpenGL (Open Graphics Library), among others.
In some embodiments, an application client for providing a virtual scene may be installed on the terminal, and the application client may be a game client, a three-dimensional design client, or the like. The rendering engine 102 may be encapsulated at the kernel layer of the terminal operating system in the form of an API (application programming interface), and the terminal operating system provides the API to the upper-layer application client, so that the application client on the terminal can call the API to render a virtual object in the virtual scene and the shadow of the virtual object.
In the related art, because the terminal performs shadow rendering based on a third-party plug-in (for example, Fast Shadow Receiver) and obtains the mesh of the shadow region from a baked scene mesh, the scene mesh used during operation of the third-party plug-in usually occupies 1.5 to 4 MB of memory, which squeezes the memory resources of the terminal and affects the processing efficiency of the terminal CPU.
On the other hand, because the scene mesh is usually embedded in the SDK (software development kit) of the application client, it occupies 5 to 10 MB of disk space and increases the size of the installation package of the application client's SDK. For example, a baked 10 MB scene mesh usually increases the installation package size by 2 MB, which is not conducive to streamlining the size of the application client's SDK.
Based on the foregoing implementation environment, FIG. 2 is a flowchart of a shadow rendering method according to an embodiment of this application. Referring to FIG. 2, this embodiment may be applied to a terminal or to an application client on a terminal; the embodiments of this application are described by using the terminal as an example only. The shadow rendering method includes the following steps.
201. The terminal obtains a depth map of a virtual scene.
The virtual scene may be any virtual scene displayed on the terminal, and the data of the virtual scene may be stored locally or come from the cloud. The virtual scene may include at least one virtual object; at least one model is obtained by modeling the at least one virtual object, so that the at least one virtual object is displayed based on the at least one model. One model is used for representing the specific display form of one virtual object, and the virtual object may be a character or a thing.
The depth map is used for representing depth information of the at least one virtual object in the virtual scene, and the depth information is used for representing the front-back positional relationship of virtual objects under the current viewing angle. For example, if the depth of character A is less than the depth of building B in the depth map of a virtual scene, the visual effect presented in the virtual scene is that character A is located in front of building B (that is, character A is located closer to the current viewing angle).
In step 201, the terminal may obtain the depth map through a target buffer in the rendering engine. The target buffer usually stores multiple maps used in the rendering process, for example, a light map, a depth map, and a normal map, where "multiple" means at least two. Therefore, during obtaining, the terminal may use a depth map identifier as an index and perform a query in the target buffer according to the index; when the index hits a map, that map is obtained as the depth map. For example, when the rendering engine is Unity3D, the depth map may be obtained from the target buffer through Camera.SetTargetBuffers.
In some embodiments, considering that in a real-time rendering scenario the terminal first renders the virtual scene according to the depth map and then renders the shadow of each virtual object in the virtual scene, the terminal may, after obtaining the depth map, generate a depth map copy identical to the depth map, and directly access the depth map copy in subsequent steps 202 to 212, which solves the problem that the same depth map cannot be read and written at the same time during rendering. For example, the terminal may execute a Blit command on the depth map of the virtual scene in the BeforeForwardAlpha rendering stage of the virtual scene to obtain the depth map copy.
In some embodiments, the terminal may also directly store two identical depth maps locally, so that one depth map is accessed when rendering the virtual scene and the other is accessed when rendering the shadow in the virtual scene.
In some embodiments, the terminal may also obtain the depth map based on a depth obtaining interface built into the rendering engine. For example, when the rendering engine is Unity3D, the depth map may be obtained by setting Camera.depthTextureMode to Depth.
202. The terminal left-multiplies the world coordinates of at least one virtual object in the virtual scene by a view matrix of the lighting direction, to generate at least one shadow map in which a shadow of the at least one virtual object is cast from the lighting direction.
In some embodiments, rendering in a rendering engine usually involves coordinate transformations among multiple coordinate systems, which may include a model coordinate system (model/object coordinates), a world coordinate system (world coordinates), a view coordinate system (eye/camera coordinates, also called the camera coordinate system), and a screen coordinate system (window/screen coordinates). The model coordinate system is described in detail in steps 208 and 209 below, and the screen coordinate system is described in detail in step 205 below.
In some embodiments, the world coordinate system is the real coordinate system in which the virtual scene is located. For a virtual scene, there is usually one and only one world coordinate system, which takes the scene base point of the virtual scene as the coordinate origin; the position of a pixel of each virtual object in the world coordinate system is called its world coordinates.
In some embodiments, the view coordinate system is the coordinate system in which the user observes the virtual scene from the current viewing angle. It takes the viewpoint as the coordinate origin, and the position of a pixel of each virtual object in the view coordinate system is called its view coordinates. Since in a typical rendering engine the current viewing angle can be represented in the form of a camera, the view coordinate system may also be called the camera coordinate system. When the user observes the virtual scene from the current viewing angle, it is equivalent to observing the virtual scene through a camera in the virtual scene, where the camera is equivalent to the model of a special transparent virtual object.
Based on the above, the current viewing angle can be represented as a camera in the rendering engine; similarly, the light source can also be represented in the form of a camera, so the shadow map is equivalent to a map obtained by the light source casting shadows of the virtual objects in the virtual scene from the lighting direction. Since the light source camera is also equivalent to the model of a special transparent virtual object, the view matrix of the lighting direction is the transformation matrix that maps from the world coordinate system to the model coordinate system of the light source.
In the foregoing process, the terminal directly left-multiplies the world coordinates of the at least one virtual object in the virtual scene by the view matrix of the lighting direction, transforms the at least one virtual object from the current viewing angle to the viewing angle of the lighting direction, and obtains the real-time image of the at least one virtual object under the viewing angle of the lighting direction as the at least one shadow map, where each shadow map corresponds to one virtual object and provides texture (UV) information of the shadow of that virtual object.
In step 202, based on the view matrix transformation, the terminal can obtain the shadow map under the light source viewing angle (the light source camera) directly from the single camera of the current viewing angle, which avoids setting up a new camera at the light source, thereby reducing the number of rendering cameras, shortening the rendering time of the terminal, and improving the rendering efficiency of the terminal.
In some embodiments, the terminal may mount (that is, configure) a slave camera on the master camera corresponding to the current viewing angle and input the view matrix of the lighting direction into the slave camera to output the shadow map, where the slave camera is subordinate to the master camera, which can reduce the number of rendering cameras, shorten the rendering time of the terminal, and improve the rendering efficiency of the terminal. For example, the slave camera may be a CommandBuffer.
Of course, in some embodiments, the terminal may also skip the view matrix transformation and directly set up a new camera at the light source, and obtain the shadow map based on the new camera, thereby reducing the amount of computation in the shadow rendering process.
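As an illustrative sketch of the left-multiplication described in step 202 (not the rendering engine's actual API), the following Python/NumPy snippet builds a hypothetical view matrix for a light placed along the lighting direction and applies it to world-space coordinates to obtain light-space coordinates from which a shadow map could be rasterized; the light position, target point, and point data are assumptions made for the example.

```python
import numpy as np

def look_at_view_matrix(eye, target, up):
    """Build a right-handed view matrix (world -> camera space); a common construction,
    used here for illustration rather than as the engine's exact convention."""
    f = target - eye
    f = f / np.linalg.norm(f)                      # forward
    r = np.cross(f, up); r = r / np.linalg.norm(r) # right
    u = np.cross(r, f)                             # true up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f
    m[:3, 3] = -m[:3, :3] @ eye                    # move the world so the eye is at the origin
    return m

# Hypothetical light camera placed along the lighting direction, looking at the scene origin.
light_view = look_at_view_matrix(eye=np.array([10.0, 10.0, 10.0]),
                                 target=np.zeros(3),
                                 up=np.array([0.0, 1.0, 0.0]))

# World coordinates of a virtual object's points (homogeneous, as column vectors).
world_pts = np.array([[0.0, 1.0, 0.0, 1.0],
                      [0.5, 1.2, 0.3, 1.0]]).T

# "Left-multiply the view matrix by the world coordinates": the result is expressed in
# the light's space, from which the object's shadow map can be produced.
light_space_pts = light_view @ world_pts
print(light_space_pts[:3].T)
```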
203. The terminal determines an initial size and an initial position of at least one rendering structure according to the at least one virtual object, the initial size and the initial position matching the at least one virtual object.
The at least one rendering structure is used for rendering the at least one shadow of the at least one virtual object; each rendering structure corresponds to one shadow of one virtual object, and each rendering structure includes multiple pixels.
Optionally, the at least one rendering structure may be a cube, a sphere, or a cylinder. Different rendering engines may use different rendering structures when performing shadow rendering on the same virtual scene; the form of the rendering structure is not specifically limited in the embodiments of this application. For example, a cube is usually used as the rendering structure in Unity3D.
Optionally, the terminal may identify the type of the virtual scene and determine, according to the type of the virtual scene, a rendering structure corresponding to that type. For example, when the terminal identifies that the virtual scene is a game scene, it determines that the rendering structure is a cube.
In the foregoing process, that the initial size and the initial position match the at least one virtual object means: for each virtual object, the base area of the rendering structure corresponding to the virtual object is greater than or equal to the base area of the model of the virtual object, and the initial position of the rendering structure is a position at which it can coincide with the base of the model of the virtual object in both the horizontal and vertical directions.
In the foregoing process, for each virtual object, the terminal may determine the initial size and the initial position according to the virtual object and generate one rendering structure according to the initial size and the initial position; this process is repeated to obtain the at least one rendering structure.
In some embodiments, when the rendering structure is a cube, for each virtual object the terminal may determine, as the base of the cube, the square that can just contain the base of the model of the virtual object. Since the six faces of a cube are identical, the initial size of each face of the cube can be determined. Further, the center of the base of the cube is placed to coincide with the center of the base of the virtual object, so that the initial position of the cube can be obtained, where the base center may be the geometric center of the base or the geometric centroid of the base.
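The following is a minimal sketch of how such an initial cube could be derived from a model's footprint, assuming the ground plane is the XZ plane and using a randomly generated point cloud as a stand-in for the model; the function name and the details of the fit are assumptions made for the example, not the patent's exact procedure.

```python
import numpy as np

def initial_cube_for_model(model_points):
    """Take the square that just contains the model's footprint on the ground plane as
    the cube's base, and center the cube's base on the footprint's center."""
    pts = np.asarray(model_points, dtype=float)   # (N, 3) world-space points of the model
    min_xz = pts[:, [0, 2]].min(axis=0)           # footprint bounds on the XZ ground plane
    max_xz = pts[:, [0, 2]].max(axis=0)
    side = float(max(max_xz - min_xz))            # square side = larger footprint extent
    center_xz = (min_xz + max_xz) / 2.0
    base_y = float(pts[:, 1].min())               # rest the cube on the model's lowest point
    initial_size = side                           # cube edge length (all six faces identical)
    initial_position = np.array([center_xz[0], base_y, center_xz[1]])
    return initial_size, initial_position

size, pos = initial_cube_for_model(np.random.rand(100, 3) * [2.0, 1.8, 1.0])
print(size, pos)
```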
204. The terminal determines the direction of the at least one rendering structure according to the lighting direction, and adjusts the initial size and the initial position to obtain the at least one rendering structure.
In the foregoing process, for each virtual object, the terminal may determine the direction of the rendering structure corresponding to the virtual object (that is, the orientation of the rendering structure) as the lighting direction, determine multiple tangent lines of the virtual object along the lighting direction, and determine the region enclosed by the intersections of these tangent lines with the shadow-casting surface in the virtual scene as the shadow region. The rendering structure is then translated from the initial position determined in step 203 to a position that can cover the shadow region, and adjusted from the initial size to a size that can cover the shadow region. "Covering" here means that a surface of the rendering structure overlaps the shadow region, or that the rendering structure can contain the shadow region inside itself.
Through step 204, the terminal can reduce the number of pixels that need to be rendered in each rendering structure while the rendering structure still covers the shadow region without affecting the shadow rendering effect, which improves rendering efficiency. Based on the foregoing example, after the initial size of a cube-shaped rendering structure is adjusted, the cube usually becomes a cuboid; the adjusted form of the rendering structure is not specifically limited in the embodiments of this application.
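A rough sketch of estimating the shadow region that the adjusted structure must cover is shown below; for simplicity it assumes a flat ground plane as the shadow-casting surface and projects a set of silhouette points along the lighting direction, whereas the method described above intersects tangent lines with an arbitrary shadow-casting surface. The silhouette points and lighting direction are hypothetical example data.

```python
import numpy as np

def shadow_region_on_ground(silhouette_points, light_dir, ground_y=0.0):
    """Project silhouette points along the light direction onto the plane y = ground_y
    and return the axis-aligned bounds of the hit points; the adjusted rendering
    structure should at least cover these bounds."""
    pts = np.asarray(silhouette_points, dtype=float)   # (N, 3)
    d = np.asarray(light_dir, dtype=float)
    d = d / np.linalg.norm(d)
    t = (ground_y - pts[:, 1]) / d[1]                  # ray p + t * d reaches the plane
    hits = pts + t[:, None] * d
    lo, hi = hits.min(axis=0), hits.max(axis=0)
    return lo, hi

lo, hi = shadow_region_on_ground(np.random.rand(50, 3) + [0.0, 1.0, 0.0],
                                 light_dir=np.array([-0.4, -1.0, -0.3]))
print(lo, hi)
```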
The shadow-casting surface may be any surface in the virtual scene that supports shadow casting; it may be smooth or uneven, for example, a lawn, a wall, or a road surface.
In steps 203 and 204, the terminal can obtain the at least one rendering structure in the virtual scene according to the lighting direction in the virtual scene. Since each rendering structure corresponds to the shadow of one virtual object, the process in which the terminal renders each rendering structure is the process of rendering each shadow. The process of rendering the rendering structure may include two stages: a vertex shading stage and a pixel shading stage.
In the above case, during the vertex shading stage, the terminal may store, in the vertex data of each pixel of each rendering structure, the viewpoint position in the virtual scene and the ray direction determined by the viewpoint and the pixel, so that step 206 below is performed based on the information stored in the vertex data. In addition, the terminal may perform step 205 below during the pixel shading stage to obtain the depth information of each pixel.
In some embodiments, the terminal may also skip step 204; that is, after determining the initial size and initial position of the rendering structure according to the virtual object and determining the direction of the rendering structure according to the lighting direction, the terminal generates the rendering structure without further adjusting its size and position and directly performs step 206 below, thereby simplifying the shadow rendering procedure.
205. The terminal obtains depth information of the multiple pixels in each rendering structure from the depth map according to screen coordinates of the multiple pixels, one screen coordinate being used for describing position information of one pixel relative to a screen base point of the terminal screen.
The screen coordinate system is the coordinate system used when the virtual scene is displayed on the terminal screen; it takes the screen base point of the terminal screen as the coordinate origin, and the position of a pixel of each virtual object in the screen coordinate system is called its screen coordinates. The screen base point may be any point on the terminal screen, for example, the upper-left corner of the screen. In some embodiments, the virtual scene usually cannot be displayed completely on the terminal screen, and the user can observe more virtual objects in the virtual scene by controlling operations such as translation or rotation of the current viewing angle.
In the foregoing process, each rendering structure may include multiple pixels. For each pixel in each rendering structure, the terminal may determine, from the depth map according to the screen coordinates of the pixel, the point whose coordinates are consistent with the screen coordinates of the pixel, and determine the depth information of that point as the depth information of the pixel; this process is repeated so that the depth information of the multiple pixels in each rendering structure can be obtained. It should be noted that, when the terminal contains two depth maps, the depth map accessed in step 205 may be the depth map copy mentioned in step 201.
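A minimal sketch of this per-pixel lookup follows, with the depth map modeled as a 2D array indexed by screen coordinates whose origin is assumed to be the upper-left corner; the map contents and pixel positions are hypothetical.

```python
import numpy as np

# Hypothetical depth map: a (height, width) array holding one scene depth per screen pixel.
depth_map = (np.random.rand(720, 1280) * 1000.0).astype(np.float32)

def depth_at(screen_xy, depth_map):
    """Read the depth stored at the same screen coordinates as a rendering structure's pixel."""
    x, y = screen_xy
    return float(depth_map[int(y), int(x)])

# Depth information for a few pixels of one rendering structure.
structure_pixels = [(640, 360), (641, 360), (640, 361)]
depths = [depth_at(p, depth_map) for p in structure_pixels]
print(depths)
```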
206. For each pixel, the terminal uses the viewpoint of the current viewing angle as the ray endpoint and determines, along the ray direction determined by the viewpoint and the pixel, a target pixel whose distance from the viewpoint matches the depth information of the pixel.
Since, in the vertex shading stage mentioned in step 204, the terminal may store the viewpoint position and the ray direction in the vertex data of each pixel, in step 206 the terminal can directly access the vertex data of each pixel to obtain the viewpoint position and the ray direction. Because one endpoint and one direction uniquely determine a ray, the terminal, using the viewpoint as the ray endpoint, can uniquely determine one target pixel along that ray direction according to the depth information of the pixel.
Optionally, the terminal may instead store only the viewpoint position in the vertex shading stage rather than both the viewpoint position and the ray direction; in step 206, the viewpoint position is obtained from the vertex data, and the direction of the line from the viewpoint position toward the pixel position is determined as the ray direction, so that the target pixel can be determined. Details are not repeated here.
FIG. 3 is a schematic diagram of determining a target pixel according to an embodiment of this application. As shown in FIG. 3, the rendering structure being a cube is used as an example, and FIG. 3 illustrates only one face of the cube; the same reasoning applies to any pixel on any face of the cube. Since rendering the cube means rendering the pixels on the cube's surface, for any pixel A on the surface of the cube, the position of the viewpoint V and the ray direction can be obtained from the vertex data of point A. Therefore, according to the coordinates of point A, the coordinates of point V, and the ray direction, a ray VA can be uniquely determined. Assuming that the depth value of point A obtained from the depth map in step 205 is 600, then, starting from point V, a point B at a depth of 600 from point V can be determined on the ray VA. Since the depth value of point A indicates the depth inside the cube to which the point on the cube's surface actually corresponds, the target pixel B actually determined by the terminal in steps 205 and 206 is the pixel corresponding to point A on the depth map (that is, the pixel corresponding to point A on the virtual object).
In the foregoing process, the terminal can determine, based on the depth value of each pixel on the rendering structure, the target pixel on the depth map corresponding to that pixel, so that in step 211 below sampling can be performed according to the model coordinates of each target pixel; in other words, the shadow map can be pasted onto the rendering structure according to the model coordinates of the target pixels on the depth map, and therefore the shadow of each virtual object can also be rendered in real time in the virtual scene. In terms of the foregoing example, in the stage of sampling and mapping for point A, the shadow map is sampled according to the model coordinates of point B, so that after rendering is completed, point A presents the visual effect of the shadow corresponding to point B. The process of obtaining the model coordinates of point B is described in detail in steps 207 to 209 below.
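The geometric step of walking from the viewpoint along the ray through a surface pixel until the stored depth is reached can be written compactly; the sketch below uses hypothetical viewpoint, pixel, and depth values in scene units purely for illustration.

```python
import numpy as np

def target_pixel_world_pos(viewpoint, pixel_world_pos, depth):
    """From the viewpoint V, march along the ray through the rendering structure's
    surface pixel A until the distance from V equals the depth read from the depth map;
    the hit point B is the target pixel."""
    v = np.asarray(viewpoint, dtype=float)
    a = np.asarray(pixel_world_pos, dtype=float)
    ray_dir = (a - v) / np.linalg.norm(a - v)   # direction determined by V and A
    return v + depth * ray_dir                  # point B at the stored depth

V = np.array([0.0, 2.0, 10.0])   # hypothetical viewpoint of the current camera
A = np.array([0.3, 0.0, 4.0])    # a pixel on the cube's surface
B = target_pixel_world_pos(V, A, depth=6.0)
print(B)
```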
207. The terminal determines the world coordinates of the target pixel as the world coordinates of the pixel.
One world coordinate is used for describing position information of one pixel relative to the scene base point of the virtual scene.
In the foregoing process, the terminal determines the value of the target pixel in the world coordinate system as the world coordinates of the pixel of the rendering structure. Steps 206 and 207 are repeated for the multiple pixels of each rendering structure, so that the terminal obtains the world coordinates of the multiple pixels according to the current viewing angle and the depth information of the multiple pixels.
208. The terminal left-multiplies the world coordinates of the multiple pixels by the view matrix of the current viewing angle to obtain local coordinates of the multiple pixels, the view matrix being a transformation matrix that maps from the world coordinate system to the model coordinate system.
In some embodiments, the model coordinate system in the rendering engine is the coordinate system in which a virtual object is located in its three-dimensional model. For the at least one virtual object in the virtual scene, the model of each virtual object has its own model coordinate system, and the position of a pixel of each virtual object in the model coordinate system is its model coordinates. In other words, a virtual scene may have at least one model coordinate system, and the number of model coordinate systems equals the number of virtual objects. The model coordinate system is an imaginary coordinate system with the center of each virtual object as the coordinate origin; therefore, even if a virtual object changes dynamically, the relative position between the model coordinate system and the virtual object always remains unchanged.
In the foregoing process, one local coordinate refers to the coordinate value of one pixel in the [-1, 1] space of the model coordinate system. Since the view matrix of the current viewing angle is the transformation matrix that maps from the world coordinate system to the model coordinate system of the at least one rendering structure, left-multiplying the world coordinates of the multiple pixels by the view matrix of the current viewing angle yields the local coordinates of the multiple pixels.
209. The terminal maps the local coordinates of the multiple pixels to a texture mapping space to obtain the model coordinates of the multiple pixels.
In the foregoing process, the terminal obtains the model coordinates of the multiple pixels according to the world coordinates of the multiple pixels, where one model coordinate is used for describing texture information of one pixel relative to a model base point of one virtual object.
In step 209, one model coordinate refers to the coordinate value of one pixel in the [0, 1] space of the model coordinate system. Since the value range of the texture (UV) information of the at least one shadow map obtained in step 202 is [0, 1], while left-multiplying the world coordinates of the multiple pixels by the view matrix actually yields local coordinates with values in [-1, 1], the multiple pixels need to be mapped from local coordinates in the [-1, 1] space to model coordinates in the [0, 1] space, so that the model coordinates in the [0, 1] space correspond one-to-one to the texture information in the shadow map, which facilitates the sampling process of step 211 below.
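Steps 208 and 209 together amount to a matrix multiplication followed by a remapping from [-1, 1] to [0, 1]; the sketch below shows that arithmetic with an identity matrix standing in for the actual view matrix, which is an assumption made purely so the example is runnable.

```python
import numpy as np

def world_to_uv(world_pos, view_matrix):
    """Left-multiply the view matrix by a pixel's world coordinates to get local
    coordinates in [-1, 1] (step 208), then remap them into the [0, 1] texture-mapping
    (UV) space used by the shadow map (step 209)."""
    p = np.append(np.asarray(world_pos, dtype=float), 1.0)  # homogeneous column vector
    local = view_matrix @ p                                  # step 208: [-1, 1] local coords
    local = local[:3] / local[3]
    uv = local * 0.5 + 0.5                                   # step 209: map [-1, 1] to [0, 1]
    return uv[:2]                                            # UV used to sample the shadow map

view = np.eye(4)   # illustrative stand-in for the current viewing angle's view matrix
print(world_to_uv([0.2, -0.4, 0.0], view))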
In some embodiments, when the terminal maps the local coordinates of the multiple pixels to the texture mapping space (UV space), the local coordinates of each pixel may be input into a conversion function, which outputs the model coordinates of the multiple pixels in the UV space; the conversion function may be a function built into the rendering engine.
In steps 206 to 209, the terminal obtains the model coordinates of the multiple pixels according to the current viewing angle and the depth information of the multiple pixels; that is, the terminal obtains the world coordinates of the multiple pixels, converts the world coordinates of the multiple pixels into local coordinates through the view matrix, and then maps them to the UV space to obtain the model coordinates of the multiple pixels.
210. The terminal marks the material of a virtual object that is not allowed to be occluded as a target material, the target material being a material that does not support shadow rendering during the rendering process.
In some embodiments, if sampling and rendering are performed directly based on the model coordinates of the multiple pixels, the rendered shadow may occlude the body of the virtual object. For example, FIG. 4 is a schematic diagram of a rendering effect according to an embodiment of this application; referring to FIG. 4, the shadow of the character in the figure occludes the character's boots, which affects the rendering effect of the virtual scene.
To prevent the shadow of a virtual object from occluding the body of the virtual object, the terminal may, by performing step 210, mark the material of the virtual object that is not allowed to be occluded as the target material. Since the target material is a material that does not support shadow rendering, during the rendering process in steps 211 and 212 below, no overlapping rendering can be performed on any virtual object marked with the target material, which prevents the shadow from occluding the virtual object, optimizes the rendering effect of the virtual scene, and improves the realism of shadow rendering. It should be noted that marking the material of the virtual object in the foregoing process is merely making a mark, not substantively replacing the material of the virtual object.
In some embodiments, when the rendering engine is Unity3D, the terminal may mark the material of the virtual object that is not allowed to be occluded based on the StencilBuffer. For example, the Stencil Pass of the material of virtual objects on the shadow-casting surface that are allowed to be occluded may be marked as Replace, and the Stencil Pass of the material of virtual objects on the shadow-casting surface that are not allowed to be occluded may be marked as IncrSat. When rendering the shadow, the terminal checks the material of the virtual object onto which the shadow is cast, and sampling and drawing are performed only when the Stencil comparison is Comp Equal, which prevents the shadow from occluding the virtual object and optimizes the rendering effect of the virtual scene.
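The effect of this marking can be illustrated with a stencil-like check written in plain Python; this is only an analogy of the skip-if-marked behavior and is not Unity's StencilBuffer API or ShaderLab syntax, and the marker name and pixel-to-material table are hypothetical.

```python
# Pixels whose receiving material is marked as the "target material" are skipped when the
# shadow samples are drawn, so the shadow never overlap-renders objects that must stay visible.
TARGET_MATERIAL = "no_shadow_receive"   # hypothetical marker name

materials_at_pixel = {                  # material of the object each shadow sample lands on
    (10, 12): "ground",
    (10, 13): TARGET_MATERIAL,          # e.g. the character's boots
    (11, 12): "ground",
}

def draw_shadow_samples(samples):
    drawn = []
    for xy, intensity in samples:
        if materials_at_pixel.get(xy) == TARGET_MATERIAL:
            continue                    # marked material: do not overlap-render the shadow
        drawn.append((xy, intensity))   # otherwise the shadow sample is drawn
    return drawn

print(draw_shadow_samples([((10, 12), 0.3), ((10, 13), 0.3), ((11, 12), 0.3)]))
```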
In some embodiments, the terminal may also skip step 210; after the model coordinates of the multiple pixels are obtained, step 211 below is performed directly to sample the at least one shadow map, which shortens the shadow rendering time and speeds up shadow rendering.
211. The terminal samples at least one shadow map according to the model coordinates of the multiple pixels, to obtain multiple sampling points corresponding to the multiple pixels, each shadow map being used for providing texture information of one shadow.
In the foregoing process, the terminal samples the at least one shadow map obtained in step 202 according to the model coordinates of the multiple pixels. Since the model coordinates of the multiple pixels lie in the [0, 1] space and the texture information of each shadow map also lies in the [0, 1] space, for the multiple pixels in each rendering structure the terminal can find, in the shadow map of the virtual object corresponding to that rendering structure, multiple sampling points in one-to-one correspondence with the multiple pixels.
The sampling process in step 211 is: for each pixel, the terminal obtains, as one sampling point, the point corresponding to the model coordinates of the pixel; this sampling process is repeated for each rendering structure to obtain the multiple sampling points of each rendering structure. For example, if the model coordinates of a pixel are (0.5, 0.5), the point at position (0.5, 0.5) in the shadow map is determined as the sampling point of that pixel.
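A minimal stand-in for the engine's texture sampler is shown below: nearest-neighbor sampling of a small shadow texture at model coordinates (u, v) in [0, 1]. The texture contents are fabricated for illustration only.

```python
import numpy as np

# Hypothetical shadow map: a small (H, W) texture whose values are shadow intensity in [0, 1].
shadow_map = np.zeros((64, 64), dtype=np.float32)
shadow_map[16:48, 16:48] = 1.0          # a square "shadow" for illustration

def sample_shadow(uv, texture):
    """Nearest-neighbor sampling at model coordinates (u, v) in [0, 1]."""
    h, w = texture.shape
    u, v = np.clip(uv, 0.0, 1.0)
    x = min(int(u * (w - 1) + 0.5), w - 1)
    y = min(int(v * (h - 1) + 0.5), h - 1)
    return float(texture[y, x])

print(sample_shadow((0.5, 0.5), shadow_map))   # the sampling point for model coords (0.5, 0.5)
```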
212. The terminal renders the multiple sampling points in the virtual scene to obtain at least one shadow.
In the foregoing process, when rendering based on the multiple sampling points, the terminal may invoke a built-in drawing interface (draw call) to access the GPU (graphics processing unit) of the terminal and draw the multiple sampling points, obtaining the at least one shadow.
In some embodiments, the terminal may also skip step 202, that is, not obtain the at least one shadow map, and instead directly store one or more target shadow maps locally, so that in steps 211 and 212 the one or more target shadow maps are sampled according to the model coordinates of the multiple pixels to obtain the multiple sampling points corresponding to the multiple pixels, and the multiple sampling points are rendered in the virtual scene to obtain at least one shadow.
When there is one target shadow map, all rendering structures may sample the same target shadow map, so that all virtual objects have the same shadow; for example, the target shadow map may be a circular patch. Of course, when there are multiple target shadow maps, different rendering structures may sample different target shadow maps, so that different virtual objects have different shadows; for example, the target shadow maps include a circular patch and a square patch, the rendering structure corresponding to a character samples the circular patch, and the rendering structure corresponding to a building samples the square patch.
FIG. 5 is a schematic diagram of a rendering effect according to an embodiment of this application. Referring to FIG. 5, because the terminal performed the material marking in step 210, the character's boots in the figure are restored to an effect of not being occluded by the shadow.
FIG. 6 is a schematic diagram of a rendering effect according to an embodiment of this application. Referring to FIG. 6, because the terminal performed the process of obtaining the shadow map in step 202, the rendered shadow is consistent with the outline of the virtual object itself; if a locally stored target shadow map is used, for example a circular patch, the rendered shadow outline is as shown in FIG. 4.
In steps 201 to 212, the terminal can make full use of the functions of the rendering engine to provide a shadow rendering method that has no extra storage overhead, no extra production overhead, and no extra memory at runtime, whose running efficiency is stable and unaffected by the movement of virtual objects, and which has strong extensibility.
All of the foregoing optional technical solutions may be combined in any manner to form optional embodiments of the present disclosure, and details are not described herein again.
In the method provided by the embodiments of this application, at least one rendering structure in a virtual scene is obtained according to the lighting direction in the virtual scene, so that the at least one rendering structure serves as the model for shadow rendering; the model coordinates of the multiple pixels are obtained according to the current viewing angle and the depth information of the multiple pixels, so that the model coordinates correspond one-to-one to the UV space of the shadow map; at least one shadow map is sampled according to the model coordinates of the multiple pixels to obtain multiple sampling points corresponding to the multiple pixels; and the multiple sampling points are rendered in the virtual scene to obtain the at least one shadow. This improves the shadow rendering effect, implements real-time shadow rendering based on the functions of the rendering engine itself, and improves the processing efficiency of the terminal CPU.
Further, coordinate system transformation is performed through the view matrix of the lighting direction, and the shadow map is obtained directly from the single camera of the current viewing angle, which avoids setting up a new camera at the light source, thereby reducing the number of rendering cameras, shortening the rendering time of the terminal, and improving the rendering efficiency of the terminal.
Further, adjusting the initial size and initial direction of the at least one rendering structure according to the lighting direction enables each rendering structure to reduce the number of pixels that need to be rendered while still covering the shadow region without affecting the shadow rendering effect, improving rendering efficiency.
Further, the depth information of the multiple pixels is determined according to the screen coordinates of the multiple pixels; when the depth map is obtained, it can be saved as an identical depth map copy, and the copy is accessed directly during subsequent rendering, reducing the rendering burden of the terminal.
Further, the local coordinates of a pixel are obtained according to the world coordinates of the pixel, and the local coordinates are then mapped to the UV space to obtain the model coordinates of the pixel, which facilitates sampling and drawing based on the UV information of each shadow map.
Further, the material of virtual objects that are not allowed to be occluded is marked in advance, so that the terminal cannot perform overlapping rendering on any virtual object marked with the target material, which prevents shadows from occluding virtual objects, optimizes the rendering effect of the virtual scene, and improves the realism of shadow rendering.
It should be understood that the steps in the embodiments of this application are not necessarily performed sequentially in the order indicated by the step numbers. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and the steps may be performed in other orders. Moreover, at least some of the steps in the embodiments may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily completed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
To further and intuitively illustrate the beneficial effects brought by the shadow rendering method provided in the foregoing embodiments of this application, Table 1 shows the shadow rendering efficiency of three solutions measured on a terminal with 14 high-precision models at rest:

Table 1

Shadow rendering solution                  Time per frame (ms)    FPS (frame rate)
Unity3D native solution                    42                     24
Fast Shadow Receiver                       37                     27
Technical solution of this application     37                     27

As can be seen from Table 1, the native Unity3D rendering-engine solution needs 42 milliseconds (ms) to render one frame of shadow with an FPS (frames per second, that is, frame rate) of 24, while both the technical solution of this application and the third-party plug-in (Fast Shadow Receiver) need 37 ms to render one frame of shadow with an FPS of 27. Therefore, the technical solution of this application and Fast Shadow Receiver have the same shadow rendering efficiency when the models are at rest, and both are more efficient than the native Unity3D solution.
Further, Table 2 shows the shadow rendering efficiency of two solutions when the models are moving. In the dynamic stress test shown in Table 2, 100 circular-patch shadows were moved rapidly in the same virtual scene.

Table 2

Shadow rendering solution                  Time per frame (ms)    FPS (frame rate)
Fast Shadow Receiver                       40                     25
Technical solution of this application     29                     34

As can be seen from Table 2, when the models are moving, the technical solution of this application needs only 29 ms to render one frame of shadow with an FPS of 34, whereas Fast Shadow Receiver needs as long as 40 ms with an FPS of 25. Therefore, the technical solution of this application has higher shadow rendering efficiency than Fast Shadow Receiver when the models are moving.
According to Tables 1 and 2, considering the shadow rendering efficiency both when the models are at rest and when they are moving, compared with Fast Shadow Receiver, the technical solution of this application further shortens the time spent rendering one frame of shadow and increases the number of shadow frames per second, that is, improves the shadow rendering efficiency of the terminal; and because there is no additional overhead or burden, it also improves the processing efficiency of the terminal CPU.
FIG. 7 is a schematic structural diagram of a shadow rendering apparatus according to an embodiment of this application. Referring to FIG. 7, the apparatus includes a first obtaining module 701, a second obtaining module 702, a sampling module 703, and a rendering module 704, described in detail below:
the first obtaining module 701, configured to obtain at least one rendering structure in a virtual scene according to a lighting direction in the virtual scene, the at least one rendering structure being used for rendering at least one shadow of at least one virtual object, and each rendering structure including multiple pixels;
the second obtaining module 702, configured to obtain model coordinates of the multiple pixels according to a current viewing angle and depth information of the multiple pixels, one model coordinate being used for describing texture information of one pixel relative to a model base point of one virtual object;
the sampling module 703, configured to sample at least one shadow map according to the model coordinates of the multiple pixels, to obtain multiple sampling points corresponding to the multiple pixels, each shadow map being used for providing texture information of one shadow; and
the rendering module 704, configured to render the multiple sampling points in the virtual scene to obtain the at least one shadow.
In the apparatus provided by the embodiments of this application, at least one rendering structure in a virtual scene is obtained according to the lighting direction in the virtual scene, so that the at least one rendering structure serves as the model for shadow rendering; the model coordinates of the multiple pixels are obtained according to the current viewing angle and the depth information of the multiple pixels, so that the model coordinates correspond one-to-one to the UV space of the shadow map; at least one shadow map is sampled according to the model coordinates of the multiple pixels to obtain multiple sampling points corresponding to the multiple pixels; and the multiple sampling points are rendered in the virtual scene to obtain the at least one shadow. Real-time shadow rendering can thus be implemented based on the functions of the rendering engine itself, saving the memory resources of the terminal and improving the processing efficiency of the terminal CPU.
In a possible implementation, based on the apparatus composition of FIG. 7, the second obtaining module 702 includes:
a first obtaining unit, configured to obtain world coordinates of the multiple pixels according to the current viewing angle and the depth information of the multiple pixels, one world coordinate being used for describing position information of one pixel relative to the scene base point of the virtual scene; and
a second obtaining unit, configured to obtain the model coordinates of the multiple pixels according to the world coordinates of the multiple pixels.
In a possible implementation, the first obtaining unit is configured to:
for each pixel, use the viewpoint of the current viewing angle as the ray endpoint and determine, along the ray direction determined by the viewpoint and the pixel, a target pixel whose distance from the viewpoint matches the depth information of the pixel; and
determine the world coordinates of the target pixel as the world coordinates of the pixel.
In a possible implementation, the second obtaining unit is configured to:
left-multiply the world coordinates of the multiple pixels by the view matrix of the current viewing angle to obtain local coordinates of the multiple pixels, the view matrix being a transformation matrix that maps from the world coordinate system to the model coordinate system; and
map the local coordinates of the multiple pixels to the texture mapping space to obtain the model coordinates of the multiple pixels.
In a possible implementation, based on the apparatus composition of FIG. 7, the apparatus is further configured to:
obtain the depth information of the multiple pixels from a depth map according to the screen coordinates of the multiple pixels, one screen coordinate being used for describing position information of one pixel relative to a screen base point of the terminal screen.
In a possible implementation, the first obtaining module 701 is configured to:
determine an initial size and an initial position of the at least one rendering structure according to the at least one virtual object, the initial size and the initial position matching the at least one virtual object; and
determine the direction of the at least one rendering structure according to the lighting direction, and adjust the initial size and the initial position to obtain the at least one rendering structure.
In a possible implementation, the at least one rendering structure is a cube, a sphere, or a cylinder.
In a possible implementation, based on the apparatus composition of FIG. 7, the apparatus is further configured to:
left-multiply the world coordinates of the at least one virtual object by the view matrix of the lighting direction, to generate the at least one shadow map in which a shadow of the at least one virtual object is cast from the lighting direction.
In a possible implementation, based on the apparatus composition of FIG. 7, the apparatus is further configured to:
mark the material of a virtual object that is not allowed to be occluded as a target material, the target material being a material that does not support shadow rendering during the rendering process.
It should be noted that when the shadow rendering apparatus provided in the foregoing embodiments renders a shadow, the division into the foregoing functional modules is used only as an example. In practical applications, the foregoing functions may be allocated to different functional modules as needed; that is, the internal structure of the terminal is divided into different functional modules to complete all or some of the functions described above. In addition, the shadow rendering apparatus provided in the foregoing embodiments and the shadow rendering method embodiments belong to the same concept; for the specific implementation process, refer to the shadow rendering method embodiments, and details are not described herein again.
FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of this application. The terminal 800 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 800 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or another name.
Generally, the terminal 800 includes a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor 801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 801 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 802 may include one or more computer-readable storage media, which may be non-transitory. The memory 802 may also include a high-speed random access memory and a non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 802 is used to store at least one instruction, and the at least one instruction is executed by the processor 801 to implement the shadow rendering method provided in the shadow rendering method embodiments of this application.
In some embodiments, the terminal 800 may optionally further include a peripheral device interface 803 and at least one peripheral device. The processor 801, the memory 802, and the peripheral device interface 803 may be connected through a bus or a signal line. Each peripheral device may be connected to the peripheral device interface 803 through a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 804, a touch display screen 805, a camera 806, an audio circuit 807, a positioning component 808, and a power supply 809.
The peripheral device interface 803 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 801 and the memory 802. In some embodiments, the processor 801, the memory 802, and the peripheral device interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral device interface 803 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 804 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 804 communicates with a communication network and other communication devices through electromagnetic signals. The radio frequency circuit 804 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 804 may communicate with other terminals through at least one wireless communication protocol, including but not limited to a metropolitan area network, mobile communication networks of various generations (2G, 3G, 4G, and 5G), a wireless local area network, and/or a WiFi (Wireless Fidelity) network. In some embodiments, the radio frequency circuit 804 may also include NFC (Near Field Communication)-related circuits, which is not limited in this application.
The display screen 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 805 is a touch display screen, the display screen 805 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 801 as a control signal for processing. In this case, the display screen 805 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 805, arranged on the front panel of the terminal 800; in other embodiments, there may be at least two display screens 805, respectively arranged on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display screen 805 may be a flexible display screen arranged on a curved surface or a folded surface of the terminal 800. The display screen 805 may even be set in a non-rectangular irregular shape, that is, a special-shaped screen. The display screen 805 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 806 is used to capture images or videos. Optionally, the camera assembly 806 includes a front camera and a rear camera. Generally, the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to implement a background blurring function through fusion of the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting through fusion of the main camera and the wide-angle camera, or other fused shooting functions. In some embodiments, the camera assembly 806 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 807 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input them to the processor 801 for processing or to the radio frequency circuit 804 for voice communication. For stereo collection or noise reduction, there may be multiple microphones, respectively arranged at different parts of the terminal 800. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 807 may also include a headphone jack.
The positioning component 808 is used to locate the current geographic position of the terminal 800 to implement navigation or LBS (Location Based Service). The positioning component 808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 809 is used to supply power to the components in the terminal 800. The power supply 809 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, the terminal 800 further includes one or more sensors 810, including but not limited to an acceleration sensor 811, a gyroscope sensor 812, a pressure sensor 813, a fingerprint sensor 814, an optical sensor 815, and a proximity sensor 816.
The acceleration sensor 811 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 800. For example, the acceleration sensor 811 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 801 may control the touch display screen 805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 may also be used to collect motion data of a game or a user.
The gyroscope sensor 812 can detect the body direction and rotation angle of the terminal 800, and may cooperate with the acceleration sensor 811 to collect the user's 3D actions on the terminal 800. Based on the data collected by the gyroscope sensor 812, the processor 801 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 813 may be arranged on the side frame of the terminal 800 and/or the lower layer of the touch display screen 805. When the pressure sensor 813 is arranged on the side frame of the terminal 800, it can detect the user's holding signal on the terminal 800, and the processor 801 performs left-right hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 813. When the pressure sensor 813 is arranged on the lower layer of the touch display screen 805, the processor 801 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 805. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 814 is used to collect the user's fingerprint; the processor 801 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 801 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 814 may be arranged on the front, back, or side of the terminal 800. When a physical button or a manufacturer logo is arranged on the terminal 800, the fingerprint sensor 814 may be integrated with the physical button or the manufacturer logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the touch display screen 805 according to the ambient light intensity collected by the optical sensor 815: when the ambient light intensity is high, the display brightness of the touch display screen 805 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 805 is decreased. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 according to the ambient light intensity collected by the optical sensor 815.
The proximity sensor 816, also called a distance sensor, is usually arranged on the front panel of the terminal 800 and is used to collect the distance between the user and the front of the terminal 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually decreases, the processor 801 controls the touch display screen 805 to switch from the screen-on state to the screen-off state; when the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually increases, the processor 801 controls the touch display screen 805 to switch from the screen-off state to the screen-on state.
A person skilled in the art can understand that the structure shown in FIG. 8 does not constitute a limitation on the terminal 800, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example, a memory including instructions; the instructions may be executed by a processor in a terminal to complete the shadow rendering method in the foregoing embodiments. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A person of ordinary skill in the art can understand that all or some of the procedures of the methods in the foregoing embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, the computer program may include the procedures of the embodiments of the foregoing methods. Any reference to a memory, storage, database, or other medium used in the embodiments provided in this application may include at least one of a non-volatile memory and a volatile memory. The non-volatile memory may include a read-only memory (ROM), a magnetic tape, a floppy disk, a flash memory, an optical memory, or the like. The volatile memory may include a random access memory (RAM) or an external cache memory. By way of illustration and not limitation, the RAM may take various forms, such as a static random access memory (SRAM) or a dynamic random access memory (DRAM).
The foregoing descriptions are merely preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, or improvement made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (20)

  1. A shadow rendering method, performed by a computer device, the method comprising:
    obtaining at least one rendering structure in a virtual scene according to a lighting direction in the virtual scene, the at least one rendering structure being used for rendering at least one shadow of at least one virtual object, and each rendering structure comprising multiple pixels;
    obtaining model coordinates of the multiple pixels according to a current viewing angle and depth information of the multiple pixels, the model coordinates being used for describing texture information of the pixels relative to a model vertex of a virtual object;
    sampling at least one shadow map according to the model coordinates of the multiple pixels, to obtain multiple sampling points corresponding to the multiple pixels, each shadow map being used for providing texture information of one shadow; and
    rendering the multiple sampling points in the virtual scene to obtain the at least one shadow.
  2. The method according to claim 1, wherein the obtaining model coordinates of the multiple pixels according to a current viewing angle and depth information of the multiple pixels comprises:
    obtaining world coordinates of the multiple pixels according to the current viewing angle and the depth information of the multiple pixels, the world coordinates being used for describing position information of the pixels relative to a scene base point of the virtual scene; and
    obtaining the model coordinates of the multiple pixels according to the world coordinates of the multiple pixels.
  3. The method according to claim 2, wherein the obtaining world coordinates of the multiple pixels according to the current viewing angle and the depth information of the multiple pixels comprises:
    for each pixel, using a viewpoint of the current viewing angle as a ray endpoint, and determining, along a ray direction determined by the viewpoint and the pixel, a target pixel whose distance from the viewpoint matches the depth information of the pixel; and
    determining world coordinates of the target pixel as the world coordinates of the pixel.
  4. The method according to claim 2, wherein the obtaining the model coordinates of the multiple pixels according to the world coordinates of the multiple pixels comprises:
    left-multiplying the world coordinates of the multiple pixels by a view matrix of the current viewing angle, to obtain local coordinates of the multiple pixels; and
    mapping the local coordinates of the multiple pixels to a texture mapping space, to obtain the model coordinates of the multiple pixels.
  5. The method according to claim 1, wherein before the obtaining model coordinates of the multiple pixels according to a current viewing angle and depth information of the multiple pixels, the method further comprises:
    obtaining the depth information of the multiple pixels from a depth map according to screen coordinates of the multiple pixels, one screen coordinate being used for describing position information of one pixel relative to a screen base point of a terminal screen.
  6. The method according to claim 1, wherein the obtaining at least one rendering structure in the virtual scene according to a lighting direction in the virtual scene comprises:
    determining an initial size and an initial position of the at least one rendering structure according to the at least one virtual object, the initial size and the initial position matching the at least one virtual object; and
    determining a direction of the at least one rendering structure according to the lighting direction, and adjusting the initial size and the initial position, to obtain the at least one rendering structure.
  7. The method according to claim 1, wherein the at least one rendering structure is a cube, a sphere, or a cylinder.
  8. The method according to claim 1, wherein before the sampling at least one shadow map according to the model coordinates of the multiple pixels to obtain multiple sampling points corresponding to the multiple pixels, the method further comprises:
    left-multiplying world coordinates of the at least one virtual object by a view matrix of the lighting direction, to generate the at least one shadow map in which a shadow of the at least one virtual object is cast from the lighting direction.
  9. The method according to claim 1, wherein the method further comprises:
    marking a material of a virtual object that is not allowed to be occluded as a target material, the target material being a material that does not support shadow rendering during a rendering process.
  10. A shadow rendering apparatus, the apparatus comprising:
    a first obtaining module, configured to obtain at least one rendering structure in a virtual scene according to a lighting direction in the virtual scene, the at least one rendering structure being used for rendering at least one shadow of at least one virtual object, and each rendering structure comprising multiple pixels;
    a second obtaining module, configured to obtain model coordinates of the multiple pixels according to a current viewing angle and depth information of the multiple pixels, one model coordinate being used for describing texture information of one pixel relative to a model base point of one virtual object;
    a sampling module, configured to sample at least one shadow map according to the model coordinates of the multiple pixels, to obtain multiple sampling points corresponding to the multiple pixels, each shadow map being used for providing texture information of one shadow; and
    a rendering module, configured to render the multiple sampling points in the virtual scene to obtain the at least one shadow.
  11. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
    obtaining at least one rendering structure in a virtual scene according to a lighting direction in the virtual scene, the at least one rendering structure being used for rendering at least one shadow of at least one virtual object, and each rendering structure comprising multiple pixels;
    obtaining model coordinates of the multiple pixels according to a current viewing angle and depth information of the multiple pixels, the model coordinates being used for describing texture information of the pixels relative to a model vertex of a virtual object;
    sampling at least one shadow map according to the model coordinates of the multiple pixels, to obtain multiple sampling points corresponding to the multiple pixels, each shadow map being used for providing texture information of one shadow; and
    rendering the multiple sampling points in the virtual scene to obtain the at least one shadow.
  12. The computer device according to claim 11, wherein the obtaining model coordinates of the multiple pixels according to a current viewing angle and depth information of the multiple pixels comprises:
    obtaining world coordinates of the multiple pixels according to the current viewing angle and the depth information of the multiple pixels, the world coordinates being used for describing position information of the pixels relative to a scene base point of the virtual scene; and
    obtaining the model coordinates of the multiple pixels according to the world coordinates of the multiple pixels.
  13. The computer device according to claim 12, wherein the obtaining world coordinates of the multiple pixels according to the current viewing angle and the depth information of the multiple pixels comprises:
    for each pixel, using a viewpoint of the current viewing angle as a ray endpoint, and determining, along a ray direction determined by the viewpoint and the pixel, a target pixel whose distance from the viewpoint matches the depth information of the pixel; and
    determining world coordinates of the target pixel as the world coordinates of the pixel.
  14. The computer device according to claim 12, wherein the obtaining the model coordinates of the multiple pixels according to the world coordinates of the multiple pixels comprises:
    left-multiplying the world coordinates of the multiple pixels by a view matrix of the current viewing angle, to obtain local coordinates of the multiple pixels; and
    mapping the local coordinates of the multiple pixels to a texture mapping space, to obtain the model coordinates of the multiple pixels.
  15. The computer device according to claim 11, wherein before the obtaining model coordinates of the multiple pixels according to a current viewing angle and depth information of the multiple pixels, the computer-readable instructions further cause the processor to perform the following step:
    obtaining the depth information of the multiple pixels from a depth map according to screen coordinates of the multiple pixels, one screen coordinate being used for describing position information of one pixel relative to a screen base point of a terminal screen.
  16. The computer device according to claim 11, wherein the obtaining at least one rendering structure in the virtual scene according to a lighting direction in the virtual scene comprises:
    determining an initial size and an initial position of the at least one rendering structure according to the at least one virtual object, the initial size and the initial position matching the at least one virtual object; and
    determining a direction of the at least one rendering structure according to the lighting direction, and adjusting the initial size and the initial position, to obtain the at least one rendering structure.
  17. The computer device according to claim 11, wherein the at least one rendering structure is a cube, a sphere, or a cylinder.
  18. The computer device according to claim 11, wherein before the sampling at least one shadow map according to the model coordinates of the multiple pixels to obtain multiple sampling points corresponding to the multiple pixels, the computer-readable instructions further cause the processor to perform the following step:
    left-multiplying world coordinates of the at least one virtual object by a view matrix of the lighting direction, to generate the at least one shadow map in which a shadow of the at least one virtual object is cast from the lighting direction.
  19. The computer device according to claim 11, wherein the computer-readable instructions further cause the processor to perform the following step:
    marking a material of a virtual object that is not allowed to be occluded as a target material, the target material being a material that does not support shadow rendering during a rendering process.
  20. One or more non-volatile storage media storing computer-readable instructions, the computer-readable instructions, when executed by one or more processors, causing the one or more processors to perform the steps of the shadow rendering method according to any one of claims 1 to 9.
PCT/CN2020/079618 2019-04-11 2020-03-17 阴影渲染方法、装置、计算机设备及存储介质 WO2020207202A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2021530180A JP7190042B2 (ja) 2019-04-11 2020-03-17 シャドウレンダリング方法、装置、コンピュータデバイスおよびコンピュータプログラム
EP20787238.3A EP3955212A4 (en) 2019-04-11 2020-03-17 SHADOW RENDERING METHOD AND APPARATUS, COMPUTER DEVICE, AND MEDIA
US17/327,585 US11574437B2 (en) 2019-04-11 2021-05-21 Shadow rendering method and apparatus, computer device, and storage medium
US18/093,311 US20230143323A1 (en) 2019-04-11 2023-01-04 Shadow rendering method and apparatus, computer device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910290577.3A CN109993823B (zh) 2019-04-11 2019-04-11 阴影渲染方法、装置、终端及存储介质
CN201910290577.3 2019-04-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/327,585 Continuation US11574437B2 (en) 2019-04-11 2021-05-21 Shadow rendering method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020207202A1 true WO2020207202A1 (zh) 2020-10-15

Family

ID=67133281

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079618 WO2020207202A1 (zh) 2019-04-11 2020-03-17 阴影渲染方法、装置、计算机设备及存储介质

Country Status (5)

Country Link
US (2) US11574437B2 (zh)
EP (1) EP3955212A4 (zh)
JP (1) JP7190042B2 (zh)
CN (1) CN109993823B (zh)
WO (1) WO2020207202A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112494941A (zh) * 2020-12-14 2021-03-16 网易(杭州)网络有限公司 虚拟对象的显示控制方法及装置、存储介质、电子设备
CN112562051A (zh) * 2020-11-30 2021-03-26 腾讯科技(深圳)有限公司 虚拟对象显示方法、装置、设备及存储介质
CN113487662A (zh) * 2021-07-02 2021-10-08 广州博冠信息科技有限公司 画面显示方法、装置、电子设备和存储介质
CN113821345A (zh) * 2021-09-24 2021-12-21 网易(杭州)网络有限公司 游戏中的移动轨迹渲染方法、装置及电子设备
CN113935893A (zh) * 2021-09-09 2022-01-14 完美世界(北京)软件科技发展有限公司 素描风格的场景渲染方法、设备及存储介质
CN116778127A (zh) * 2023-07-05 2023-09-19 广州视景医疗软件有限公司 一种基于全景图的三维数字场景构建方法及系统

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993823B (zh) * 2019-04-11 2022-11-25 腾讯科技(深圳)有限公司 阴影渲染方法、装置、终端及存储介质
CN110517346B (zh) * 2019-08-30 2021-06-18 腾讯科技(深圳)有限公司 虚拟环境界面的展示方法、装置、计算机设备及存储介质
CN110585713B (zh) * 2019-09-06 2021-10-15 腾讯科技(深圳)有限公司 游戏场景的阴影实现方法、装置、电子设备及可读介质
CN110544291B (zh) * 2019-09-11 2023-05-09 珠海金山数字网络科技有限公司 一种图像渲染方法及装置
CN110717964B (zh) * 2019-09-26 2023-05-02 深圳市名通科技股份有限公司 场景建模方法、终端及可读存储介质
CN110675479B (zh) * 2019-10-14 2023-05-16 北京代码乾坤科技有限公司 动态光照处理方法、装置、存储介质及电子装置
CN110930492B (zh) * 2019-11-20 2023-11-28 网易(杭州)网络有限公司 模型渲染的方法、装置、计算机可读介质及电子设备
CN112929636A (zh) * 2019-12-05 2021-06-08 北京芯海视界三维科技有限公司 3d显示设备、3d图像显示方法
CN110990104B (zh) * 2019-12-06 2023-04-11 珠海金山数字网络科技有限公司 一种基于Unity3D的纹理渲染方法及装置
CN111311723B (zh) * 2020-01-22 2021-11-02 腾讯科技(深圳)有限公司 像素点识别及光照渲染方法、装置、电子设备和存储介质
CN111292405B (zh) * 2020-02-06 2022-04-08 腾讯科技(深圳)有限公司 一种图像渲染的方法以及相关装置
CN111462295B (zh) * 2020-03-27 2023-09-05 咪咕文化科技有限公司 增强现实合拍中的阴影处理方法、设备及存储介质
CN111724313B (zh) * 2020-04-30 2023-08-01 完美世界(北京)软件科技发展有限公司 一种阴影贴图生成方法与装置
CN111726479B (zh) * 2020-06-01 2023-05-23 北京像素软件科技股份有限公司 图像渲染的方法及装置、终端、可读存储介质
CN111833423A (zh) * 2020-06-30 2020-10-27 北京市商汤科技开发有限公司 展示方法、装置、设备和计算机可读存储介质
CN111915710A (zh) * 2020-07-10 2020-11-10 杭州渲云科技有限公司 基于实时渲染技术的建筑渲染方法
CN111968216B (zh) * 2020-07-29 2024-03-22 完美世界(北京)软件科技发展有限公司 一种体积云阴影渲染方法、装置、电子设备及存储介质
CN111932664B (zh) * 2020-08-27 2023-06-23 腾讯科技(深圳)有限公司 图像渲染方法、装置、电子设备及存储介质
CN112037314A (zh) * 2020-08-31 2020-12-04 北京市商汤科技开发有限公司 图像显示方法、装置、显示设备及计算机可读存储介质
CN112245926B (zh) * 2020-11-16 2022-05-17 腾讯科技(深圳)有限公司 虚拟地形的渲染方法、装置、设备及介质
CN112288873B (zh) * 2020-11-19 2024-04-09 网易(杭州)网络有限公司 渲染方法和装置、计算机可读存储介质、电子设备
CN112370783B (zh) * 2020-12-02 2024-06-11 网易(杭州)网络有限公司 虚拟对象渲染方法、装置、计算机设备和存储介质
CN112734896B (zh) * 2021-01-08 2024-04-26 网易(杭州)网络有限公司 环境遮蔽渲染方法、装置、存储介质及电子设备
CN112734900A (zh) * 2021-01-26 2021-04-30 腾讯科技(深圳)有限公司 阴影贴图的烘焙方法、装置、设备及计算机可读存储介质
CN112819940B (zh) * 2021-01-29 2024-02-23 网易(杭州)网络有限公司 渲染方法、装置和电子设备
CN112884874B (zh) * 2021-03-18 2023-06-16 腾讯科技(深圳)有限公司 在虚拟模型上贴花的方法、装置、设备及介质
CN113012274B (zh) * 2021-03-24 2023-07-28 北京壳木软件有限责任公司 一种阴影渲染的方法、装置以及电子设备
CN113256781B (zh) * 2021-06-17 2023-05-30 腾讯科技(深圳)有限公司 虚拟场景的渲染和装置、存储介质及电子设备
CN113283543B (zh) * 2021-06-24 2022-04-15 北京优锘科技有限公司 一种基于WebGL的图像投影融合方法、装置、存储介质和设备
CN113592997B (zh) * 2021-07-30 2023-05-30 腾讯科技(深圳)有限公司 基于虚拟场景的物体绘制方法、装置、设备及存储介质
CN114170359A (zh) * 2021-11-03 2022-03-11 完美世界(北京)软件科技发展有限公司 体积雾渲染方法、装置、设备及存储介质
CN114494384B (zh) * 2021-12-27 2023-01-13 北京吉威空间信息股份有限公司 建筑物阴影分析方法、装置、设备及存储介质
CN114461064B (zh) * 2022-01-21 2023-09-15 北京字跳网络技术有限公司 虚拟现实交互方法、装置、设备和存储介质
CN114742934A (zh) * 2022-04-07 2022-07-12 北京字跳网络技术有限公司 图像渲染方法、装置、可读介质及电子设备
US20230332922A1 (en) * 2022-04-15 2023-10-19 Onxmaps, Inc. Methods and systems for providing a real-time viewshed visualization
CN115239869B (zh) * 2022-09-22 2023-03-24 广州简悦信息科技有限公司 阴影处理方法、阴影渲染方法及设备
CN115830208B (zh) * 2023-01-09 2023-05-09 腾讯科技(深圳)有限公司 全局光照渲染方法、装置、计算机设备和存储介质
CN116245998B (zh) * 2023-05-09 2023-08-29 北京百度网讯科技有限公司 渲染贴图生成方法及装置、模型训练方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020021302A1 (en) * 2000-06-22 2002-02-21 Lengyel Jerome E. Method and apparatus for modeling and real-time rendering of surface detail
US20110069068A1 (en) * 2009-09-21 2011-03-24 Samsung Electronics Co., Ltd. Image processing apparatus and method
CN108830923A (zh) * 2018-06-08 2018-11-16 网易(杭州)网络有限公司 图像渲染方法、装置及存储介质
CN109993823A (zh) * 2019-04-11 2019-07-09 腾讯科技(深圳)有限公司 阴影渲染方法、装置、终端及存储介质

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4079410B2 (ja) * 2002-02-15 2008-04-23 株式会社バンダイナムコゲームス 画像生成システム、プログラム及び情報記憶媒体
US6876362B1 (en) * 2002-07-10 2005-04-05 Nvidia Corporation Omnidirectional shadow texture mapping
JP4181430B2 (ja) * 2003-03-10 2008-11-12 シャープ株式会社 図形処理装置、図形処理方法、図形処理プログラム、および、プログラム記録媒体
CN102254340B (zh) * 2011-07-29 2013-01-09 北京麒麟网文化股份有限公司 一种基于gpu加速的环境光遮挡图像绘制方法及系统
US9286649B2 (en) 2013-05-31 2016-03-15 Qualcomm Incorporated Conditional execution of rendering commands based on per bin visibility information with added inline operations
CN103903296B (zh) * 2014-04-23 2016-08-24 东南大学 虚拟家装室内场景设计中的阴影渲染方法
CN104103092A (zh) * 2014-07-24 2014-10-15 无锡梵天信息技术股份有限公司 一种基于聚光灯实时动态阴影的实现方法
CN104103089A (zh) * 2014-07-29 2014-10-15 无锡梵天信息技术股份有限公司 一种基于图像屏幕空间的实时软阴影实现方法
US9728002B2 (en) * 2015-12-18 2017-08-08 Advanced Micro Devices, Inc. Texel shading in texture space
CN106469463B (zh) * 2016-09-27 2019-04-30 上海上大海润信息系统有限公司 一种基于cpu与gpu混合的渲染方法
US10204395B2 (en) * 2016-10-19 2019-02-12 Microsoft Technology Licensing, Llc Stereoscopic virtual reality through caching and image based rendering
US10636201B2 (en) * 2017-05-05 2020-04-28 Disney Enterprises, Inc. Real-time rendering with compressed animated light fields
CN107180444B (zh) * 2017-05-11 2018-09-04 腾讯科技(深圳)有限公司 一种动画生成方法、装置、终端和系统
CN108038897B (zh) * 2017-12-06 2021-06-04 北京像素软件科技股份有限公司 阴影贴图生成方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020021302A1 (en) * 2000-06-22 2002-02-21 Lengyel Jerome E. Method and apparatus for modeling and real-time rendering of surface detail
US20110069068A1 (en) * 2009-09-21 2011-03-24 Samsung Electronics Co., Ltd. Image processing apparatus and method
CN108830923A (zh) * 2018-06-08 2018-11-16 网易(杭州)网络有限公司 图像渲染方法、装置及存储介质
CN109993823A (zh) * 2019-04-11 2019-07-09 腾讯科技(深圳)有限公司 阴影渲染方法、装置、终端及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3955212A4 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112562051A (zh) * 2020-11-30 2021-03-26 腾讯科技(深圳)有限公司 虚拟对象显示方法、装置、设备及存储介质
CN112562051B (zh) * 2020-11-30 2023-06-27 腾讯科技(深圳)有限公司 虚拟对象显示方法、装置、设备及存储介质
CN112494941A (zh) * 2020-12-14 2021-03-16 网易(杭州)网络有限公司 虚拟对象的显示控制方法及装置、存储介质、电子设备
CN112494941B (zh) * 2020-12-14 2023-11-28 网易(杭州)网络有限公司 虚拟对象的显示控制方法及装置、存储介质、电子设备
CN113487662A (zh) * 2021-07-02 2021-10-08 广州博冠信息科技有限公司 画面显示方法、装置、电子设备和存储介质
CN113487662B (zh) * 2021-07-02 2024-06-11 广州博冠信息科技有限公司 画面显示方法、装置、电子设备和存储介质
CN113935893A (zh) * 2021-09-09 2022-01-14 完美世界(北京)软件科技发展有限公司 素描风格的场景渲染方法、设备及存储介质
CN113821345A (zh) * 2021-09-24 2021-12-21 网易(杭州)网络有限公司 游戏中的移动轨迹渲染方法、装置及电子设备
CN113821345B (zh) * 2021-09-24 2023-06-30 网易(杭州)网络有限公司 游戏中的移动轨迹渲染方法、装置及电子设备
CN116778127A (zh) * 2023-07-05 2023-09-19 广州视景医疗软件有限公司 一种基于全景图的三维数字场景构建方法及系统
CN116778127B (zh) * 2023-07-05 2024-01-05 广州视景医疗软件有限公司 一种基于全景图的三维数字场景构建方法及系统

Also Published As

Publication number Publication date
US11574437B2 (en) 2023-02-07
US20210279948A1 (en) 2021-09-09
EP3955212A1 (en) 2022-02-16
EP3955212A4 (en) 2022-06-15
CN109993823B (zh) 2022-11-25
US20230143323A1 (en) 2023-05-11
JP2022527686A (ja) 2022-06-03
CN109993823A (zh) 2019-07-09
JP7190042B2 (ja) 2022-12-14

Similar Documents

Publication Publication Date Title
WO2020207202A1 (zh) 阴影渲染方法、装置、计算机设备及存储介质
US11393154B2 (en) Hair rendering method, device, electronic apparatus, and storage medium
US20210225067A1 (en) Game screen rendering method and apparatus, terminal, and storage medium
US11436779B2 (en) Image processing method, electronic device, and storage medium
CN112870707B (zh) 虚拟场景中的虚拟物体展示方法、计算机设备及存储介质
CN111464749B (zh) 进行图像合成的方法、装置、设备及存储介质
CN110033503B (zh) 动画显示方法、装置、计算机设备及存储介质
CN111932664A (zh) 图像渲染方法、装置、电子设备及存储介质
CN111311757B (zh) 一种场景合成方法、装置、存储介质及移动终端
CN112884874B (zh) 在虚拟模型上贴花的方法、装置、设备及介质
JP7186901B2 (ja) ホットスポットマップの表示方法、装置、コンピュータ機器および読み取り可能な記憶媒体
WO2018209710A1 (zh) 一种图像处理方法及装置
CN112884873B (zh) 虚拟环境中虚拟物体的渲染方法、装置、设备及介质
CN109725956A (zh) 一种场景渲染的方法以及相关装置
CN111833243B (zh) 一种数据展示方法、移动终端和存储介质
WO2021027890A1 (zh) 车牌图像生成方法、装置及计算机存储介质
CN110853128A (zh) 虚拟物体显示方法、装置、计算机设备及存储介质
CN110517346B (zh) 虚拟环境界面的展示方法、装置、计算机设备及存储介质
CN112308103B (zh) 生成训练样本的方法和装置
CN110505510B (zh) 大屏系统中的视频画面显示方法、装置及存储介质
CN112381729B (zh) 图像处理方法、装置、终端及存储介质
CN117582661A (zh) 虚拟模型渲染方法、装置、介质及设备
CN116704107B (zh) 一种图像渲染方法和相关装置
US20240161390A1 (en) Method, apparatus, electronic device and storage medium for control based on extended reality
CN112184543B (zh) 一种用于鱼眼摄像头的数据展示方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20787238

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021530180

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020787238

Country of ref document: EP

Effective date: 20211111