WO2024093609A1 - Rendering method and apparatus for superimposed light occlusion, and related products - Google Patents

Rendering method and apparatus for superimposed light occlusion, and related products

Info

Publication number
WO2024093609A1
Authority
WO
WIPO (PCT)
Prior art keywords
vertex
occlusion
information
light
superimposed light
Prior art date
Application number
PCT/CN2023/123241
Other languages
English (en)
French (fr)
Inventor
利伟民
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司)
Publication of WO2024093609A1 publication Critical patent/WO2024093609A1/zh

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/60 Shadow generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/80 Shading

Definitions

  • the present application relates to the technical field of model rendering, and in particular to rendering with superimposed light occlusion.
  • superimposed light refers to other light sources that illuminate virtual objects in addition to the main light source.
  • light directed at an object should cast a shadow on the object if it is blocked.
  • the calculation of occluded shadows often consumes a lot of computing device performance. Therefore, for many mobile devices running games, in order to ensure device performance, only the occlusion effect of the main light source is often displayed, and the occlusion calculation of superimposed light is removed. This results in unnatural light spots appearing where shadows should appear when superimposed light shines on virtual objects. In the industry, this problem is called light leakage. In order to solve the light leakage problem, the following two methods can be used:
  • Render Texture is a texture created and updated by the game engine Unity at runtime.
  • the object coordinates are converted to the spatial coordinate system of the light direction, and then the new coordinates are used to sample the corresponding Render Texture to obtain the depth. If the object depth is greater than the depth of the object saved in the Render Texture, it means that the object is occluded.
  • this solution calculates occlusion frame by frame, and the occlusion relationship must be recalculated and refreshed in every frame, resulting in very high computing performance consumption.
  • each superimposed light will occupy a Render Texture. If there are many superimposed lights, too many Render Textures read in each frame will cause high bandwidth usage. High bandwidth means high power consumption. Therefore, this solution is not applicable to mobile devices.
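  • As an illustration of the depth-comparison approach just described, the following is a minimal CPU-side sketch, not the patent's method; the function name, the matrix convention, and the bias `eps` are all illustrative assumptions:

```python
import numpy as np

def is_occluded_prior_art(obj_pos_world, light_view_proj, depth_map, eps=1e-3):
    """Sketch of the prior-art Render Texture depth test described above.

    obj_pos_world:   (3,) object position in world space
    light_view_proj: (4, 4) matrix taking world space to the light's clip space
    depth_map:       (H, W) depths previously rendered from the light's view
    """
    # Convert the object position into the light direction's coordinate system.
    p = light_view_proj @ np.append(obj_pos_world, 1.0)
    ndc = p[:3] / p[3]                       # perspective divide -> [-1, 1]
    uv = ndc[:2] * 0.5 + 0.5                 # remap to [0, 1] texture coords
    h, w = depth_map.shape
    x = min(max(int(uv[0] * (w - 1)), 0), w - 1)
    y = min(max(int(uv[1] * (h - 1)), 0), h - 1)
    stored_depth = depth_map[y, x]           # depth saved in the Render Texture
    # Occluded if the object lies deeper than what the light "sees" first.
    return ndc[2] > stored_depth + eps
```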
  • the embodiments of the present application provide a rendering method, device and related products of superimposed light occlusion, which aim to render a more natural and accurate superimposed light occlusion effect and reduce the consumption of device performance.
  • the first aspect of the present application provides a rendering method for superimposed light occlusion, comprising:
  • the superimposed light occlusion rendering data of the virtual object is obtained according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and the light intensity information of the superimposed light.
  • a second aspect of the present application provides a rendering device for superimposed light occlusion, comprising:
  • a first acquisition unit used to acquire direction information of superimposed light in the environment where the virtual object is located
  • a second acquisition unit is used to acquire data of a plurality of vertices on the model of the virtual object; wherein the vertex data includes an encoding result of vertex occlusion information, and the vertex occlusion information is obtained after performing light occlusion detection on the vertex in advance;
  • a decoding unit used for decoding the encoding result according to the direction information to obtain decoded vertex occlusion information
  • a rendering unit, used to obtain the superimposed light occlusion rendering data of the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and the light intensity information of the superimposed light.
  • a third aspect of the present application provides a rendering device for superimposed light occlusion, the device comprising a processor and a memory:
  • the memory is used to store a computer program and transmit the computer program to the processor
  • the processor is used to execute the steps of the rendering method of superimposed light occlusion provided in the first aspect according to the computer program.
  • a fourth aspect of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium is used to store a computer program, and the computer program is used to execute the steps of the rendering method of superimposed light occlusion provided in the first aspect.
  • a fifth aspect of the present application provides a computer program product, including a computer program, which, when executed by a computer device, implements the steps of the rendering method for superimposed light occlusion provided in the first aspect.
  • the vertices of the virtual object are pre-detected for light occlusion, and the vertex occlusion information thus obtained is stored, in encoded form, in the data of the vertices.
  • the direction information of the superimposed light is used, vertex by vertex, to decode the encoding result of the vertex occlusion information saved in the vertex data, and the decoded vertex occlusion information of each vertex can thus be obtained.
  • the superimposed light occlusion rendering data for the virtual object is obtained according to the decoded vertex occlusion information and the light intensity information of the superimposed light, and the occlusion effect of the superimposed light on the virtual object can be presented by this rendering data. Since the vertex occlusion information is pre-detected and stored in the vertex data in encoded form, the vertex data only needs to be fetched and decoded accordingly during rendering; there is no need to read a texture frame by frame. This saves performance on the computing device and keeps device performance high while rendering the superimposed light occlusion effect, making the method more suitable for application on mobile devices.
  • the rendering scheme is based on the model vertex, so it is not affected by the change of the movement direction of the virtual object in the world coordinate system where it is located.
  • the scheme also takes into account the important role of the superimposed light direction, and uses the superimposed light direction information to decode the encoding results in the vertex data, so that the decoded vertex occlusion information is combined with the superimposed light intensity to form the superimposed light occlusion rendering data. Therefore, this rendering scheme can show a more natural and physically accurate superimposed light occlusion effect, further improving the visual experience of users (game players or animation viewers).
  • FIG1 is a scene architecture diagram of a rendering method for superimposed light occlusion provided in an embodiment of the present application
  • FIG2 is a flow chart of a rendering method for superimposed light occlusion provided in an embodiment of the present application
  • FIG3 is a rendering of a virtual character without superimposed light occlusion rendering
  • FIG4 is a rendering of a virtual character after a superimposed light occlusion effect is rendered according to an embodiment of the present application
  • FIG5 is a schematic diagram of the workflow of vertex shader and pixel shader respectively executed during shader rendering
  • FIG6 is a schematic flow chart of a process of preprocessing model vertices provided in an embodiment of the present application.
  • FIG7 is a schematic diagram of setting a plurality of virtual light sources with a vertex of a virtual object model as the center of a sphere provided by an embodiment of the present application;
  • FIG8 is a schematic diagram of emitting rays to a target vertex through each virtual light source provided by an embodiment of the present application.
  • FIG9 is a schematic diagram of a result of occlusion detection provided by an embodiment of the present application.
  • FIG10 is a flowchart of another rendering method for superimposed light occlusion provided in an embodiment of the present application.
  • FIG11 is a diagram of spherical harmonic basis functions
  • FIG12 is a schematic structural diagram of a rendering device for superimposed light occlusion provided in an embodiment of the present application.
  • FIG13 is a schematic diagram of a structure of a server in an embodiment of the present application.
  • FIG. 14 is a schematic diagram of the structure of a terminal device in an embodiment of the present application.
  • a rendering method, device and related products for superimposed light occlusion are provided in the present application, with the aim of providing a rendering scheme that can achieve a more natural and physically accurate superimposed light occlusion effect, reduce the consumption of device performance, save labor costs and workload, and improve rendering efficiency.
  • the vertex occlusion information corresponding to each vertex is obtained by pre-detecting the occlusion of light at each vertex on the model of the virtual object; this information is then encoded and stored in the vertex data.
  • the corresponding positions on the virtual object that block the transmission of light form natural and accurately positioned shadows at other positions of the virtual object, such as the shadow behind the neck formed by long hair, the shadow of clothes folds, etc.
  • Houdini: a 3D computer graphics software.
  • HDA (Houdini Digital Asset): packages Houdini node networks into reusable digital assets.
  • Houdini Engine: a plug-in that allows HDAs to be imported into other software for use.
  • Unity is a cross-platform 2D and 3D game engine developed by Unity Technologies. It can develop cross-platform video games and extend to the HTML5 web platform based on WebGL technology, as well as new generation multimedia platforms such as tvOS, Oculus Rift, and ARKit.
  • Overlay light (superimposed light): a light source that shines on an object in addition to the main light source.
  • Spherical harmonic function: the angular part of the solution of Laplace's equation in spherical coordinates. It is a well-known function family in modern mathematics and is widely used in quantum mechanics, computer graphics, rendering and lighting processing, and spherical mapping.
  • Basis function: in mathematics, a basis function is a basis of a function space, analogous to a coordinate axis in Euclidean space. In the function space, every continuous function can be expressed as a linear combination of basis functions.
  • Vertex: the model surface of a virtual object modeled in 3D includes many vertices, with lines between adjacent vertices. Three vertices and the lines between them form a triangle, and the model surface includes many triangles of similar form.
  • UV: texture coordinates for 3D modeling usually have two coordinate axes, U and V, hence the name UV coordinates.
  • U represents the distribution along the horizontal coordinate;
  • V represents the distribution along the vertical coordinate.
  • UV2, UV3: the second and third sets of model UVs. In this application they are used only to save the encoding result of the vertex occlusion information.
  • the vertex data includes the UV2 and UV3 corresponding to the vertex.
  • Tangent plane: any single vertex in the Mesh triangles or quadrilaterals that make up a model is the origin of a local space, the normal of the vertex is the N axis, and the plane passing through the vertex origin and perpendicular to the vertex normal N axis is the tangent plane.
  • Tangent space: on the tangent plane passing through the vertex origin, the two axes in the same directions as the vertex's UV texture axes are taken as the T and B axes, and the normal is the N axis. The local space spanned by the T, B, N vector axes is called the vertex tangent space.
  • RenderTexture: a texture that Unity creates and updates at runtime.
  • Shader: an editable program used to implement image rendering, replacing the fixed rendering pipeline. The Vertex Shader is mainly responsible for calculating the geometric relationships of vertices, while the Pixel Shader is mainly responsible for calculating pixel colors.
  • Virtual scene: the virtual scene displayed (or provided) when the application runs on the terminal. It can be a simulation of the real world, a semi-simulated and semi-fictional scene, or a purely fictional scene.
  • the virtual scene can be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene.
  • the following embodiments are illustrated by taking the virtual scene as a three-dimensional virtual scene, but this is not limited.
  • the virtual scene is also used for a virtual scene battle between at least two virtual objects.
  • Virtual object refers to an object that can be moved in a virtual scene.
  • the movable object can be at least one of a virtual person, a virtual animal, and an animated character.
  • the virtual scene is a three-dimensional virtual scene
  • the virtual object can be a three-dimensional model created based on animation skeleton technology.
  • Each virtual object has its own shape and volume in the three-dimensional virtual scene and occupies a part of the space in the three-dimensional virtual scene.
  • the execution subject of the rendering method of superimposed light occlusion provided in the embodiment of the present application may be a terminal device.
  • Unity is run on the terminal device to obtain superimposed light occlusion rendering data for virtual objects.
  • the terminal device may specifically include but is not limited to mobile phones, computers, intelligent voice interaction devices, smart home appliances, vehicle-mounted terminals, aircraft, etc.
  • the embodiments of the present invention can be applied to various scenarios, including but not limited to digital humans, virtual humans, games, virtual reality, extended reality (XR, Extended Reality), etc.
  • the execution subject of the rendering method of superimposed light occlusion provided in the embodiment of the present application may also be a server; that is, Unity can be run on the server to obtain superimposed light occlusion rendering data for virtual objects. If the game is running on the terminal device, the server can further send the rendering data to the terminal device, and the terminal device displays the picture based on the rendering data.
  • the process of preprocessing to obtain the encoding result of vertex occlusion information can be implemented on the terminal device or on the server communicating with the terminal device. Therefore, the implementation subject of the technical solution of the present application is not limited in the embodiments of the present application.
  • FIG1 exemplarily shows a scene architecture diagram of a rendering method for superimposed light occlusion.
  • the figure includes a server and various forms of terminal devices.
  • the server shown in FIG1 can be an independent physical server, or a server cluster or distributed system composed of multiple physical servers.
  • the server can also be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
  • FIG2 is a flow chart of a rendering method for superimposed light occlusion provided in an embodiment of the present application.
  • the rendering method for superimposed light occlusion as shown in FIG2 includes:
  • the environment in which the virtual object is located may specifically refer to the virtual scene in which it is located.
  • At least one superimposed light is set in the virtual scene.
  • the superimposed light has its own projection direction (or illumination direction).
  • the direction information of the superimposed light can be represented by a vector, such as the direction information represented by the direction vector in the coordinate system.
  • the superimposed light direction information obtained in this step will be used for information decoding.
  • each vertex in the model of the virtual object has its own corresponding data, such as position, color, etc.
  • the vertex data also includes the encoding result of the vertex occlusion information.
  • the main purpose of obtaining the vertex data in this step is to obtain the encoding result of the vertex occlusion information stored therein.
  • the vertex occlusion information is obtained after pre-light occlusion detection of the vertex, which can be understood as a detection completed at the stage before the game engine renders the superimposed light occlusion effect. After the vertex occlusion information is obtained in advance, it is stored in the vertex data before being applied to the rendering link.
  • the direction information of the superimposed light in the environment where the virtual object is located and the encoding result of the vertex occlusion information of multiple vertices on the model of the virtual object are obtained, and S202 can be carried out to decode the information.
  • the vertex occlusion information is specifically encoded based on the direction information of the ray emitted by the virtual light source toward the vertex.
  • the ray can also be understood as the light emitted by the virtual light source to the vertex. Since the light source is virtual in the preprocessing stage, the light is not the light emitted by the light source existing in the virtual scene, so it is referred to as a ray here to avoid misunderstanding.
  • the vertex occlusion information obtained after decoding is similar to the vertex occlusion information before encoding, but it cannot be guaranteed to be completely consistent. Therefore, in order to avoid misunderstanding of the two concepts, the information obtained by decoding in S202 is referred to as the decoded vertex occlusion information here.
  • the vertex is used as the center of a sphere, and virtual light sources are set as a scattered point set within a preset range around the vertex; rays are emitted from them toward the vertex to simulate the directions of light from the virtual light sources illuminating the vertex.
  • the rays formed by these different virtual light sources are discrete, and the occlusion of each ray is also independent and discrete. Therefore, for the vertex, the vertex occlusion information of these different rays can be regarded as an approximate function distributed over the spherical space, and spherical harmonics can be used to build an expressive association with this approximate function.
  • spherical harmonics are used for encoding, and the encoding result of the vertex occlusion information is specifically the spherical harmonic coefficients obtained by encoding the vertex occlusion information using ray direction information and spherical harmonics.
  • the spherical harmonic coefficients can be decoded according to the direction information of the superimposed light and the spherical harmonics to obtain the decoded vertex occlusion information.
  • the use of spherical harmonics for encoding and decoding not only fits the distribution form of the vertex occlusion information, but also saves the storage space occupied by the information, facilitates indexing, and can also achieve relatively accurate information restoration.
  • the decoded vertex occlusion information corresponding to multiple vertices can be multiplied by the light intensity of the superimposed light to obtain the superimposed light occlusion rendering data of the virtual object.
  • there may be multiple superimposed lights in the virtual scene where the virtual object is located. In that case, in S202, the encoding results of the vertex occlusion information of the multiple vertices can be decoded one by one with the direction information of each of the multiple superimposed lights to obtain the decoded vertex occlusion information. The decoded vertex occlusion information includes decoding result sets corresponding one-to-one with the multiple superimposed lights, and each decoding result set includes the decoded vertex occlusion information of the multiple vertices for the same superimposed light.
  • K1 and K2 represent two superimposed lights in different directions.
  • the direction information of the superimposed light K1 is represented as S1
  • the direction information of the superimposed light K2 is represented as S2.
  • S1 is used to decode the encoding results of the vertex occlusion information of each vertex obtained in S201 to obtain the decoded vertex occlusion information about these vertices. Since all this information is obtained by decoding S1, it can be merged into a decoding result set P1.
  • S2 is used to decode the encoding results of the vertex occlusion information of each vertex obtained in S201 to obtain the decoded vertex occlusion information about these vertices.
  • each decoding result set such as P1 and P2 corresponds to multiple vertices of the model. If all vertices of the model are used in the preprocessing stage, the decoding result set also corresponds to all vertices of the model: including the decoding results of the occlusion information of all vertices.
  • the sub-rendering data of the corresponding superimposed light can be obtained according to the decoding result set and the light intensity information of the corresponding superimposed light; and the superimposed light occlusion rendering data of the virtual object under the multiple superimposed lights can be obtained through the sub-rendering data of the multiple superimposed lights.
  • the decoding result sets corresponding to the multiple superimposed lights are multiplied with the light intensity information of the corresponding superimposed lights to obtain the superimposed light occlusion rendering data of the virtual object under the multiple superimposed lights.
  • each decoding result contained in P1 can be multiplied with the light intensity information of the superimposed light K1;
  • each decoding result contained in P2 can be multiplied with the light intensity information of the superimposed light K2.
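  • As a minimal sketch of this combination step, assuming for illustration scalar light intensities and that the per-light sub-rendering data are accumulated by summation (the text does not fix the combination rule):

```python
import numpy as np

def occlusion_render_data(decoding_result_sets, light_intensities):
    """Combine per-light decoded occlusion with light intensity (a sketch).

    decoding_result_sets: list of (V,) arrays, one per superimposed light,
                          holding decoded occlusion values for V vertices
                          (e.g. the sets P1, P2 from the text).
    light_intensities:    list of scalar intensities, one per light.
    Returns per-vertex rendering data summed over all superimposed lights.
    """
    total = None
    for decoded, intensity in zip(decoding_result_sets, light_intensities):
        sub = np.asarray(decoded) * intensity   # sub-rendering data per light
        total = sub if total is None else total + sub
    return total
```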
  • the rendering method of superimposed light occlusion provided in the above embodiment, since the vertex occlusion information is detected in advance and stored in the vertex data in an encoded form, only the vertex data needs to be obtained for corresponding decoding during rendering, and there is no need to read the texture frame by frame. Therefore, it can save the performance consumption of the computing device, and ensure the high performance of the device while rendering the superimposed light occlusion effect, which is more suitable for application on mobile devices.
  • the rendering scheme is based on model vertices, so it is not affected by the change in the movement direction of the virtual object in its world coordinate system.
  • this rendering scheme can show a more natural and physically accurate superimposed light occlusion effect, further improving the visual experience of users (game players or animation viewers).
  • the direction information of the superimposed light used for decoding can specifically be converted into the tangent space of the vertex, that is, converted into direction information in the tangent space.
  • any single vertex in all Mesh mesh triangles or quadrilaterals that make up a model is the origin of space, the normal of the vertex is the N axis, and the plane passing through the vertex origin and perpendicular to the vertex normal N axis is the tangent plane.
  • the two axes in the same direction of the UV texture axis where the vertex is located are taken as the TB axis on the tangent plane, and the normal is the N axis.
  • the local space spanned by the TBN vector axes is called the vertex tangent space. If the vertex occlusion information itself was encoded using ray direction information in the tangent space of the vertex, then in the rendering method of superimposed light occlusion provided in the present application, the encoding result of the occlusion information of the corresponding vertex can likewise be decoded using the direction information of the superimposed light in the vertex tangent space.
  • step S202 decodes the encoding result according to the direction information to obtain the decoded vertex occlusion information, which may specifically include:
  • the direction information is converted into the tangent space of the target vertex to obtain the converted direction information, wherein the converted direction information is used to identify the direction of the superimposed light in the tangent space of the target vertex; then, the encoding result of the vertex occlusion information of the target vertex is decoded according to the converted direction information to obtain the decoded vertex occlusion information corresponding to the target vertex.
  • the superimposed light direction information can be converted in the coordinate system and the information can be decoded in the above manner.
  • the vertex occlusion information itself is information obtained based on vertices that are not affected by the world coordinate system, its encoding and decoding both use the direction information of the vertex tangent space, so that the encoding result and the decoding result do not deviate from the tangent space of the vertex, thereby ensuring relatively accurate encoding and decoding results.
  • Figure 3 is a virtual character rendering without superimposed light occlusion rendering.
  • Figure 4 is a virtual character rendering after the superimposed light occlusion effect is rendered, provided in an embodiment of the present application.
  • Figure 3 corresponds to the rendering effect of only showing the occlusion effect of the main light source, removing the occlusion calculation of the superimposed light;
  • Figure 4 shows the rendering effect obtained by applying the rendering method of superimposed light occlusion provided in an embodiment of the present application.
  • the light illuminating the back of the neck of the model of the virtual object should be blocked by its long hair, and the light illuminating the back of the right ear should be blocked by the bow in its hair accessories, thereby presenting a shadow, while the corresponding position in Figure 3 does not show a shadow, but is as bright as other skin positions.
  • the shadows of the corresponding positions can be rendered. As shown in Figure 4, shadows are displayed on the back of the neck and the back of the right ear.
  • the arrows at the bottom of each of Figures 3 and 4 indicate the wrinkles on the collar.
  • since the colors of the model's clothes can only be displayed in grayscale in the application documents, the contrast of the dark clothes at the wrinkles here is not obvious.
  • the rendering process can be implemented through the shader of the Unity engine.
  • the vertex shader (Vertex Shader) and the pixel shader (Pixel Shader) in the shader are responsible for different steps in the above embodiment respectively.
  • the vertex shader is responsible for converting the direction information of the superimposed light to the tangent space of the vertex, and then decoding the encoding result of the vertex occlusion information of the vertex based on the direction information of the superimposed light in the tangent space of the vertex, and obtaining the decoded vertex occlusion information corresponding to the vertex.
  • the pixel shader can also be called a fragment shader, which is used to execute S203, and calculate according to the decoded occlusion information and the light intensity information of the superimposed light to obtain the superimposed light occlusion rendering data for the virtual object.
  • the decoded information in the [0,1] interval can be remapped using the parameters passed by the external program to obtain the occlusion values corresponding to multiple vertices.
  • the pixel shader obtains the superimposed light occlusion rendering data for the virtual object according to the occlusion values corresponding to the multiple vertices and the light intensity information of the superimposed light.
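  • A minimal sketch of this remapping step, assuming a simple linear remap whose endpoints stand in for the parameters passed by the external program:

```python
def remap_occlusion(decoded, min_v=0.0, max_v=1.0):
    """Remap a decoded value in [0, 1] using externally supplied parameters.

    min_v and max_v are hypothetical stand-ins for the parameters passed by
    the external program; the linear remapping is an illustrative assumption.
    """
    clamped = max(0.0, min(1.0, decoded))
    return min_v + (max_v - min_v) * clamped
```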
  • FIG5 is a schematic diagram of the workflow of the vertex shader and the pixel shader respectively executed during the shader rendering process.
  • the process of shadow rendering of the virtual object model when the superimposed light is blocked is described in detail.
  • the model can be preprocessed before rendering.
  • the preprocessing process is described in detail below in conjunction with the embodiments. It should be noted that the following preprocessing operations are performed on the vertices of the model.
  • the target vertex is one of the multiple vertices of the model and is not special; the same operations can be performed for the other vertices according to the steps below.
  • FIG6 is a schematic flow chart of the process of preprocessing model vertices.
  • the process can be implemented by a model preprocessing tool.
  • the process of preprocessing model vertices includes the following steps:
  • the target vertex can be used as the center of the sphere, and points can be evenly scattered within the preset radius as simulated light sources.
  • the radius is 1 in the world coordinate system where the model is located, and 1000 virtual light sources are set.
  • the radius and the number of virtual light sources can be set according to actual needs, and no specific numerical limit is made here.
  • Figure 7 is a schematic diagram of setting several virtual light sources with a vertex of a virtual object model as the center of the sphere.
  • the virtual object shown in Figure 7 is in the shape of a girl, and several virtual light sources surround the sphere with the vertex as the center of the sphere.
  • the occlusion information is used to identify whether the ray emitted by the corresponding virtual light source is occluded by other vertices except the target vertex.
  • each virtual light source is used as a starting point to emit rays toward the target vertex, so as to simulate the light direction of each virtual light source illuminating the target vertex.
  • the purpose of providing a virtual light source is to detect whether there is an obstruction between the virtual light source and the target vertex (other vertices that block the light emitted by it from being smoothly projected to the target vertex). If the length of the ray is configured to be greater than or equal to the distance between the virtual light source and the target vertex, the ray may collide with the target vertex, resulting in incorrect detection.
  • the length of the ray emitted by the target virtual light source to the target vertex is configured to be less than the distance between the target virtual light source and the target vertex.
  • the ray length should also not be set too short, otherwise obstructions that are closer to the target vertex may be missed.
  • the ray length is configured to be 0.999 times the distance between the target vertex and the target virtual light source. This length can be configured according to the size of the model, the characteristics of the model shape, etc., and is not limited here.
  • Figure 8 is a schematic diagram of emitting rays to the target vertex through each virtual light source.
  • Figure 9 is a schematic diagram of the results of occlusion detection. There are multiple darker areas near the head of the model in the center of Figure 9. These darker areas in Figure 9 indicate that the rays are blocked in this area. If occlusion occurs, the occlusion value of this ray can be recorded as 1, and if it is not blocked, it can be recorded as 0. The above occlusion value of 0 or 1 is used as the vertex occlusion information of the target vertex.
  • the vertex occlusion information corresponding to the target vertex includes N occlusion values, which respectively correspond to the N virtual light sources set up for the target vertex.
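  • The following sketch illustrates steps S601 and S602 under stated assumptions: the virtual light sources are scattered roughly evenly on a sphere around the target vertex (a Fibonacci spiral is used here purely as one possible scattering scheme, not the patent's), and `raycast` is a hypothetical placeholder for the host tool's intersection query:

```python
import numpy as np

def fibonacci_sphere(n):
    """Roughly even unit directions on a sphere (one possible scattering scheme)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increment
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), z, r * np.sin(phi)], axis=1)

def detect_vertex_occlusion(target_vertex, raycast, radius=1.0, n_sources=1000,
                            length_factor=0.999):
    """Scatter virtual light sources on a sphere of the given radius around the
    target vertex, cast a ray from each source toward the vertex, and record an
    occlusion value of 1 if the ray is blocked, 0 otherwise.

    raycast(origin, direction, max_distance) -> bool is a placeholder for the
    preprocessing tool's scene intersection query; its exact API is assumed.
    """
    target_vertex = np.asarray(target_vertex, dtype=float)
    sources = target_vertex + radius * fibonacci_sphere(n_sources)
    ray_dirs = np.zeros((n_sources, 3))
    occlusion = np.zeros(n_sources)
    for k, src in enumerate(sources):
        to_vertex = target_vertex - src
        dist = np.linalg.norm(to_vertex)
        ray_dirs[k] = to_vertex / dist
        # Ray slightly shorter than the distance so it cannot collide with the
        # target vertex itself (the 0.999 factor discussed above).
        if raycast(src, ray_dirs[k], dist * length_factor):
            occlusion[k] = 1.0
    return ray_dirs, occlusion
```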
  • the ray's direction information is used to encode the ray's occlusion information.
  • the model preprocessing tool for executing the above steps S601-S603 is the Houdini software. Since Houdini uses a right-handed coordinate system and Unity uses a left-handed coordinate system, in order to allow the encoded data to be correctly used in Unity in the end, in the embodiment of the present application, S603 may include the following specific operations:
  • the occlusion information of the corresponding ray in the vertex occlusion information corresponding to the target vertex is encoded by converting the plurality of rays into the direction information in the tangent space of the target vertex.
  • this achieves the initial conversion of the direction information between the different coordinate systems. Since the rays are in the world coordinate system where the virtual object model is located, while the vertex has its corresponding tangent space, the ray directions need to be converted to tangent-space coordinates if the data stored in the vertex is to be calculated correctly after the model is rotated. In other words, to achieve accurate encoding of vertex-level information, the direction information converted to the left-handed coordinate system is further converted into the tangent space of the target vertex. Then, the occlusion information of the corresponding ray in the vertex occlusion information is encoded with the direction information converted to the tangent space.
  • N virtual light sources sending rays to the target vertex, and N occlusion values of the target vertex are obtained.
  • the direction information of the rays in N different directions converted to the tangent space of the target vertex is used to encode the corresponding occlusion values.
  • the above conversion makes the encoding based on the direction information of the ray in the vertex tangent space, realizes one-to-one accurate encoding, and is convenient for restoring relatively accurate information during decoding.
  • For the conversion to the left-handed coordinate system, only the x component of the ray direction xyz needs to be reflected (negated).
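  • A minimal sketch of this two-step conversion, assuming an orthonormal TBN basis already expressed in the left-handed frame (so projecting onto the basis vectors yields the tangent-space components):

```python
import numpy as np

def ray_dir_to_unity_tangent_space(ray_dir_rh, tangent, bitangent, normal):
    """Flip the x axis to go from Houdini's right-handed coordinates to Unity's
    left-handed ones, then express the ray direction in the vertex's tangent
    (TBN) space.
    """
    d = np.asarray(ray_dir_rh, dtype=float).copy()
    d[0] = -d[0]                                   # reflect the x component
    tbn = np.stack([tangent, bitangent, normal])   # rows: T, B, N
    return tbn @ d                                 # components along T, B, N
```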
  • S604: storing the encoding results of the vertex occlusion information corresponding to the plurality of vertices in the empty positions of the UV data of the corresponding vertices.
  • the model's UV2 and UV3 were introduced earlier; each of them contains two data storage spaces.
  • the encoding results of the vertex occlusion information corresponding to the vertex can be stored in the spaces.
  • the encoding results obtained by spherical harmonic function encoding include 4 spherical harmonic coefficients, UV2 and UV3 each contain two spaces, and 2 of the 4 spherical harmonic coefficients can be stored in the two spaces of UV2, and the other 2 spherical harmonic coefficients can be stored in the two spaces of UV3.
  • the game engine Unity obtains the encoding results of the vertex occlusion information corresponding to multiple vertices on the model of the virtual object, specifically, the vertex shader of Unity obtains the encoding results of the vertex occlusion information of the corresponding vertex from the UV data of multiple vertices.
  • the data spaces in UV2 and UV3 are cleverly used in this application. Since the encoding results are stored in the UV data spaces of the corresponding vertices, it is very convenient for the shader of the game engine to read the data, thereby saving the performance consumption of the computing device when reading data and performing operations.
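  • A data-layout sketch of this packing; the split of the four coefficients across UV2 and UV3 follows the text, while the channel ordering is an assumption:

```python
def pack_sh_coefficients(coeffs):
    """Pack the 4 spherical harmonic coefficients of one vertex into the two
    free channels of UV2 and of UV3 (a data-layout sketch)."""
    assert len(coeffs) == 4
    uv2 = (coeffs[0], coeffs[1])   # first two coefficients -> UV2's two spaces
    uv3 = (coeffs[2], coeffs[3])   # last two coefficients  -> UV3's two spaces
    return uv2, uv3
```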
  • the export path of the model can be configured accordingly; specifically, the model can be exported to the relevant shader (Shader) folder under the resources of Unity.
  • a switch or option is provided in the interface for enabling the shader to select whether to enable the rendering function in the scheme of the present embodiment.
  • the user checks the option of enabling the rendering function of the present embodiment, which enables the shader to support the rendering function of the embodiment of the present application, so that the enabled shader can complete the subsequent rendering work for the superimposed light occlusion effect.
  • FIG10 is a flowchart of another rendering method for superimposed light occlusion provided in an embodiment of the present application.
  • Part A of the flowchart shows the preprocessing process of the model by Houdini software
  • part B of the flowchart shows the rendering process of the shader of the Unity engine.
  • the preprocessing process and the rendering process are described in detail in the embodiments introduced above, and will not be repeated here.
  • the overall process including the preprocessing process can be referred to FIG10.
  • the occlusion values of the discrete rays in the above-mentioned embodiment are regarded as an approximate function f(x) distributed on the spherical space.
  • the spherical harmonic basis functions are denoted $y_i(x)$ (FIG11 illustrates them), and the spherical harmonic coefficients are denoted $c_i$. The approximate function and the spherical harmonic coefficients are related as follows:

$$f(x) \approx \sum_{i=1}^{n^2} c_i \, y_i(x), \qquad c_i = \int_S f(s)\, y_i(s)\, \mathrm{d}s \approx \frac{4\pi}{N} \sum_{j=1}^{N} f(x_j)\, y_i(x_j),$$

where the discrete sum runs over the $N$ ray samples.
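  • A sketch of this projection for the first four (2nd-order) real spherical harmonics. The basis constants follow one common convention; sign conventions vary between references, so this is illustrative rather than the patent's exact formulation:

```python
import numpy as np

SH_C0 = 0.282095   # Y_0^0
SH_C1 = 0.488603   # Y_1^{-1}, Y_1^0, Y_1^1

def sh_basis4(d):
    """First 4 real spherical harmonic basis values for a unit direction d."""
    x, y, z = d
    return np.array([SH_C0, SH_C1 * y, SH_C1 * z, SH_C1 * x])

def encode_occlusion_sh(ray_dirs, occlusion_values):
    """Project per-ray occlusion values (0/1) onto 2nd-order SH, giving the 4
    coefficients c_i via the Monte Carlo estimate of the integral above.
    Assumes ray_dirs are unit vectors roughly uniform over the sphere.
    """
    n = len(ray_dirs)
    coeffs = np.zeros(4)
    for d, f in zip(ray_dirs, occlusion_values):
        coeffs += f * sh_basis4(d)
    return coeffs * (4.0 * np.pi / n)   # uniform-sampling MC weight
```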
  • the direction information of each superimposed light is obtained in the vertex shader and converted to the tangent space coordinate system.
  • the tangent T, auxiliary tangent B, and normal N can be obtained to form a TBN matrix.
  • the inverse matrix invTBN matrix is obtained through the TBN matrix.
  • the superimposed light direction vector xyz is right-multiplied by the invTBN matrix to convert it into the tangent-space coordinate system.
  • the tangent space superimposed light direction information (direction vector) and the spherical harmonic coefficients stored in the vertex data are used to decode the spherical harmonic function and calculate the occlusion value.
  • the decoding formula is as follows:

$$\tilde f(s) = \sum_{i=1}^{n^2} c_i \, y_i(s)$$

where $n$ is the spherical harmonic order (2nd order is used, so $n^2 = 4$ coefficients), $c_i$ are the spherical harmonic coefficients stored in the vertex data, $y_i$ are the spherical harmonic basis functions, and $s$ is the superimposed light direction in tangent space.
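  • Putting the tangent-space conversion and the decoding formula together, the following is a sketch of the per-vertex decode, reusing sh_basis4 from the encoding sketch above. An orthonormal TBN basis is assumed, and whether the direction vector is row- or column-multiplied by invTBN depends on convention:

```python
import numpy as np

def decode_occlusion(light_dir_world, tangent, bitangent, normal, coeffs):
    """Transform the superimposed light direction into tangent space via the
    inverse TBN matrix, then evaluate f(s) = sum_i c_i * y_i(s)."""
    tbn = np.stack([tangent, bitangent, normal], axis=1)  # columns: T, B, N
    inv_tbn = np.linalg.inv(tbn)                          # the invTBN matrix
    s = inv_tbn @ np.asarray(light_dir_world, dtype=float)
    s /= np.linalg.norm(s)                                # unit tangent-space dir
    return float(coeffs @ sh_basis4(s))                   # decoded occlusion
```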
  • FIG12 is a schematic structural diagram of a rendering device for superimposed light occlusion provided in an embodiment of the present application.
  • the rendering device for superimposed light occlusion shown in FIG12 includes:
  • a first acquisition unit 1201 is used to acquire direction information of superimposed light in the environment where the virtual object is located;
  • the second acquisition unit 1202 is used to acquire data of a plurality of vertices on the model of the virtual object; wherein the vertex data includes an encoding result of vertex occlusion information, and the vertex occlusion information is obtained after performing light occlusion detection on the vertex in advance;
  • a decoding unit 1203, configured to decode the encoding result according to the direction information to obtain decoded vertex occlusion information
  • the rendering unit 1204 is used to obtain the superimposed light occlusion rendering data of the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and the light intensity information of the superimposed light.
  • the vertex occlusion information is detected in advance and stored in the vertex data in an encoded form, it is only necessary to obtain the vertex data during rendering to decode it accordingly, without the need to read the texture frame by frame. Therefore, it can save the performance consumption of the computing device, and ensure the high performance of the device while rendering the superimposed light occlusion effect, which is more suitable for application on mobile devices.
  • the rendering scheme is based on model vertices, so it is not affected by the change in the direction of movement of the virtual object in its world coordinate system. And the important role of the superimposed light direction is also taken into account in this scheme.
  • the encoding result in the vertex data is decoded with the superimposed light direction information, so that the decoded vertex occlusion information is combined with the superimposed light intensity to form the superimposed light occlusion rendering data. Therefore, this rendering scheme can show a more natural and physically accurate superimposed light occlusion effect, further enhancing the visual experience of users (game players or animation viewers).
  • the decoding unit is used to:
  • any target vertex among the multiple vertices convert the direction information into the tangent space of the target vertex to obtain converted direction information, where the converted direction information is used to identify the direction of the superimposed light in the tangent space of the target vertex;
  • the encoding result is decoded according to the conversion direction information to obtain decoded vertex occlusion information corresponding to the target vertex.
  • the number of superimposed lights in the environment is multiple, and the decoding unit is used to:
  • the rendering unit is used for:
  • the occlusion rendering data of the virtual object under the multiple superimposed lights are obtained through the sub-rendering data of the multiple superimposed lights.
  • the encoding result of the vertex occlusion information is a spherical harmonic coefficient obtained by encoding the vertex occlusion information using a spherical harmonic function; the decoding unit is specifically used to:
  • the spherical harmonics coefficients are decoded according to the direction information.
  • the rendering unit is used to:
  • the superimposed light occlusion rendering data for the virtual object is obtained according to the occlusion values respectively corresponding to the multiple vertices and the light intensity information of the superimposed light.
  • the rendering device for superimposed light occlusion further includes: a preprocessing unit, configured to perform the following operations before obtaining the encoding results of vertex occlusion information respectively corresponding to a plurality of vertices on the model of the virtual object:
  • multiple virtual light sources are set within a preset range around the target vertex;
  • the occlusion information corresponding to the multiple virtual light sources is determined and used together as the vertex occlusion information corresponding to the target vertex, wherein the occlusion information is used to identify whether the rays emitted by the corresponding virtual light source are occluded by vertices other than the target vertex;
  • the corresponding occlusion information is encoded using the direction information of the multiple rays, and the encoding results of the occlusion information of the multiple rays are collectively used as the encoding results of the occlusion information corresponding to the target vertex.
  • the preprocessing unit is specifically used for:
  • the occlusion information of the corresponding rays in the vertex occlusion information corresponding to the target vertex is encoded using the direction information of the multiple rays converted into the tangent space of the target vertex.
  • the length of a ray emitted by the target virtual light source to the target vertex is configured to be smaller than the distance between the target virtual light source and the target vertex.
  • the rendering device for superimposed light occlusion further includes:
  • a storage unit used for storing the encoding results of the vertex occlusion information respectively corresponding to the plurality of vertices in the empty positions of the UV data of the corresponding vertices;
  • the second acquisition unit 1202 is specifically configured to acquire encoding results of vertex occlusion information of corresponding vertices from UV data of the plurality of vertices.
  • the following introduces the structure of the rendering device for superimposed light occlusion, in the form of a server and a terminal device respectively.
  • FIG13 is a schematic diagram of a server structure provided by an embodiment of the present application.
  • the server 900 may have relatively large differences due to different configurations or performances, and may include one or more central processing units (CPU) 922 (for example, one or more processors) and a memory 932, and one or more storage media 930 (for example, one or more mass storage devices) storing application programs 942 or data 944.
  • the memory 932 and the storage medium 930 may be temporary storage or permanent storage.
  • the program stored in the storage medium 930 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server.
  • the central processing unit 922 may be configured to communicate with the storage medium 930 and execute a series of instruction operations in the storage medium 930 on the server 900.
  • the server 900 may also include one or more power supplies 926, one or more wired or wireless network interfaces 950, one or more input and output interfaces 958, and/or one or more operating systems 941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
  • CPU 922 is used to perform the following steps:
  • the superimposed light occlusion rendering data for the virtual object is obtained according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and the light intensity information of the superimposed light.
  • the present application also provides another rendering device for superimposed light occlusion, as shown in FIG14.
  • the terminal can be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (English full name: Personal Digital Assistant, English abbreviation: PDA), a sales terminal (English full name: Point of Sales, English abbreviation: POS), a car computer, etc., taking the terminal as a mobile phone as an example:
  • FIG14 is a block diagram showing a partial structure of a mobile phone related to a terminal provided in an embodiment of the present application.
  • the mobile phone includes: a radio frequency (full name in English: Radio Frequency, English abbreviation: RF) circuit 1010, a memory 1020, an input unit 1030, a display unit 1040, a sensor 1050, an audio circuit 1060, a wireless fidelity (full name in English: wireless fidelity, English abbreviation: WiFi) module 1070, a processor 1080, and a power supply 1090 and other components.
  • RF radio frequency
  • the RF circuit 1010 can be used for receiving and sending signals during information transmission or a call. In particular, after downlink information from the base station is received, it is sent to the processor 1080 for processing; in addition, uplink data is sent to the base station.
  • the RF circuit 1010 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (full name: Low Noise Amplifier, English abbreviation: LNA), a duplexer, etc.
  • the RF circuit 1010 can also communicate with the network and other devices through wireless communication.
  • the above-mentioned wireless communications may use any communication standard or protocol, including but not limited to Global System of Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), etc.
  • GSM Global System of Mobile communications
  • GPRS General Packet Radio Service
  • CDMA Code Division Multiple Access
  • WCDMA Wideband Code Division Multiple Access
  • LTE Long Term Evolution
  • SMS Short Messaging Service
  • the memory 1020 can be used to store software programs and modules.
  • the processor 1080 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1020.
  • the memory 1020 can mainly include a program storage area and a data storage area, wherein the program storage area can store an operating system, an application required for at least one function (such as a sound playback function, an image playback function, etc.), etc.; the data storage area can store data created according to the use of the mobile phone (such as audio data, a phone book, etc.), etc.
  • the memory 1020 can include a high-speed random access memory, and can also include a non-volatile memory, such as at least one disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the input unit 1030 can be used to receive input digital or character information, and to generate key signal input related to the user settings and function control of the mobile phone.
  • the input unit 1030 may include a touch panel 1031 and other input devices 1032.
  • the touch panel 1031 also known as a touch screen, can collect the user's touch operation on or near it (such as the user's operation on the touch panel 1031 or near the touch panel 1031 using any suitable object or accessory such as a finger, stylus, etc.), and drive the corresponding connection device according to a pre-set program.
  • the touch panel 1031 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and then sends it to the processor 1080, and can receive and execute commands sent by the processor 1080.
  • the touch panel 1031 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 1030 may also include other input devices 1032.
  • the other input devices 1032 may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, etc.
  • the display unit 1040 may be used to display information input by the user or information provided to the user and various menus of the mobile phone.
  • the display unit 1040 may include a display panel 1041.
  • the display panel 1041 may be configured in the form of a liquid crystal display (full name in English: Liquid Crystal Display, English abbreviation: LCD), an organic light-emitting diode (full name in English: Organic Light-Emitting Diode, English abbreviation: OLED), etc.
  • the touch panel 1031 may cover the display panel 1041.
  • when the touch panel 1031 detects a touch operation on or near it, the operation is transmitted to the processor 1080 to determine the type of the touch event, and then the processor 1080 provides a corresponding visual output on the display panel 1041 according to the type of the touch event.
  • the touch panel 1031 and the display panel 1041 are used as two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1031 and the display panel 1041 may be integrated to implement the input and output functions of the mobile phone.
  • the mobile phone may also include at least one sensor 1050, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 1041 and/or the backlight when the mobile phone is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary.
  • the audio circuit 1060, the speaker 1061, and the microphone 1062 can provide an audio interface between the user and the mobile phone.
  • on one hand, the audio circuit 1060 can convert received audio data into an electrical signal and transmit it to the speaker 1061, which converts it into a sound signal for output; on the other hand, the microphone 1062 converts a collected sound signal into an electrical signal, which is received by the audio circuit 1060 and converted into audio data; the audio data is then output to the processor 1080 for processing and sent to another mobile phone through the RF circuit 1010, or output to the memory 1020 for further processing.
  • WiFi is a short-range wireless transmission technology.
  • through the WiFi module 1070, the mobile phone can help users send and receive emails, browse web pages, and access streaming media, providing users with wireless broadband Internet access.
  • although FIG14 shows the WiFi module 1070, it is understandable that it is not an essential component of the mobile phone and can be omitted as needed without changing the essence of the invention.
  • the processor 1080 is the control center of the mobile phone. It uses various interfaces and lines to connect various parts of the entire mobile phone. It executes various functions of the mobile phone and processes data by running or executing software programs and/or modules stored in the memory 1020 and calling data stored in the memory 1020.
  • the processor 1080 may include one or more processing units; preferably, the processor 1080 may integrate an application processor and a modem processor, wherein the application processor mainly processes the operating system, user interface, and application programs, and the modem processor mainly processes wireless communications. It is understandable that the above-mentioned modem processor may not be integrated into the processor 1080.
  • the mobile phone also includes a power supply 1090 (such as a battery) for supplying power to various components.
  • the power supply can be logically connected to the processor 1080 through a power management system, so that the power management system can manage charging, discharging, power consumption and other functions.
  • the mobile phone may also include a camera, a Bluetooth module, etc., which will not be described in detail here.
  • the processor 1080 included in the terminal also has the following functions:
  • obtaining direction information of superimposed light in the environment where the virtual object is located and data of multiple vertices on the model of the virtual object, where the data of a vertex contains an encoding result of vertex occlusion information;
  • decoding the encoding result according to the direction information to obtain decoded vertex occlusion information, where the vertex occlusion information is obtained by performing light occlusion detection on the vertex in advance;
  • obtaining the superimposed light occlusion rendering data of the virtual object according to the decoded vertex occlusion information and the light intensity information of the superimposed light respectively corresponding to the multiple vertices.
  • An embodiment of the present application further provides a computer-readable storage medium for storing a computer program, wherein the computer program is used to execute any one of the implementations of the rendering method of superimposed light occlusion described in the aforementioned embodiments.
  • the embodiment of the present application also provides a computer program product including a computer program, which, when executed on a computer, enables the computer to execute any one of the implementations of the rendering method of superimposed light occlusion described in the aforementioned embodiments.
  • the systems described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (full name in English: Read-Only Memory, English abbreviation: ROM), a random access memory (full name in English: Random Access Memory, English abbreviation: RAM), a magnetic disk or an optical disc, and other media that can store program code.


Abstract

The present application discloses a rendering method and apparatus for superimposed light occlusion and related products, applicable to scenarios such as digital humans, virtual humans, games, virtual reality, and extended reality. Direction information of superimposed light in the environment of a virtual object and data of multiple vertices on its model are obtained; vertex by vertex, the encoding result of the vertex occlusion information saved in the vertex data is decoded with the direction information of the superimposed light, yielding decoded vertex occlusion information for each vertex. Superimposed light occlusion rendering data for the virtual object is then obtained from the decoded vertex occlusion information and the light intensity information of the superimposed light. No texture needs to be read frame by frame, which saves computing-device performance and suits application on mobile devices. Because the rendering scheme is based on model vertices, it is unaffected by changes in the virtual object's direction of motion. The important role of the superimposed light direction is taken into account, so a more natural, physically accurate superimposed light occlusion effect is displayed, further improving the user's visual experience.

Description

Rendering method and apparatus for superimposed light occlusion, and related products
This application claims priority to Chinese Patent Application No. 202211372129.6, entitled "Rendering method and apparatus for superimposed light occlusion, and related products" and filed with the China National Intellectual Property Administration on November 3, 2022, which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the technical field of model rendering, and in particular to rendering with superimposed light occlusion.
Background
In scenarios such as game development and animation production, superimposed light refers to light sources other than the main light source that illuminate a virtual object. In the real world, light directed at an object should cast a shadow on the object when it is blocked. For game development and animation production, however, computing occlusion shadows often consumes considerable computing-device performance. Therefore, in games running on many mobile devices, only the occlusion effect of the main light source is displayed and the occlusion computation for superimposed light is removed, in order to preserve device performance. As a result, unnatural bright spots appear where shadows should appear when superimposed light shines on a virtual object; in the industry this is called the light leakage problem. To solve the light leakage problem, the following two methods can currently be used:
One scheme is, for each superimposed light, to compute frame by frame the minimum depth of every object in the spatial coordinate system of that light's direction and save it to a Render Texture corresponding to that frame. A Render Texture is a texture created and updated at runtime by the game engine Unity. When computing whether an object is affected by that superimposed light, the object coordinates are converted to the spatial coordinate system of the light direction, and the new coordinates are used to sample the corresponding Render Texture to obtain a depth. If the object depth is greater than the depth of the object saved in the Render Texture, the object is occluded. However, because occlusion is computed and refreshed every frame, this scheme consumes a great deal of computing performance. Moreover, each superimposed light occupies one Render Texture; with many superimposed lights, reading too many Render Textures per frame causes high bandwidth usage, and high bandwidth means high power consumption. This scheme is therefore unsuitable for mobile devices.
Summary
Embodiments of the present application provide a rendering method and apparatus for superimposed light occlusion and related products, aiming to render a more natural and accurate superimposed light occlusion effect while reducing the consumption of device performance.
A first aspect of the present application provides a rendering method for superimposed light occlusion, including:
obtaining direction information of superimposed light in an environment where a virtual object is located and data of multiple vertices on a model of the virtual object, where the data of a vertex contains an encoding result of vertex occlusion information;
decoding the encoding result according to the direction information to obtain decoded vertex occlusion information, where the vertex occlusion information is obtained by performing light occlusion detection on the vertex in advance;
obtaining superimposed light occlusion rendering data of the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and light intensity information of the superimposed light.
A second aspect of the present application provides a rendering apparatus for superimposed light occlusion, including:
a first obtaining unit, configured to obtain direction information of superimposed light in an environment where a virtual object is located;
a second obtaining unit, configured to obtain data of multiple vertices on a model of the virtual object, where the data of a vertex contains an encoding result of vertex occlusion information, and the vertex occlusion information is obtained by performing light occlusion detection on the vertex in advance;
a decoding unit, configured to decode the encoding result according to the direction information to obtain decoded vertex occlusion information;
a rendering unit, configured to obtain superimposed light occlusion rendering data of the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and light intensity information of the superimposed light.
A third aspect of the present application provides a rendering device for superimposed light occlusion, the device including a processor and a memory:
the memory is configured to store a computer program and transmit the computer program to the processor;
the processor is configured to execute, according to the computer program, the steps of the rendering method for superimposed light occlusion provided in the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium configured to store a computer program, the computer program being used to execute the steps of the rendering method for superimposed light occlusion provided in the first aspect.
A fifth aspect of the present application provides a computer program product including a computer program that, when executed by a computer device, implements the steps of the rendering method for superimposed light occlusion provided in the first aspect.
It can be seen from the above technical solutions that the embodiments of the present application have the following advantages:
In the technical solution of the present application, light occlusion detection is performed on the vertices of a virtual object in advance to obtain vertex occlusion information, which is placed in the data of each vertex. When the occlusion effect of superimposed light on the model of the virtual object actually needs to be rendered, it suffices to obtain the direction information of the superimposed light in the environment of the virtual object and the data of multiple vertices on the model, and to decode, vertex by vertex, the encoding result of the vertex occlusion information saved in the vertex data using the direction information of the superimposed light, thereby obtaining decoded vertex occlusion information for each vertex. Finally, superimposed light occlusion rendering data for the virtual object is obtained from the decoded vertex occlusion information and the light intensity information of the superimposed light, and the occlusion effect of the superimposed light on the virtual object can be presented through this rendering data. Because the vertex occlusion information is detected in advance and stored in encoded form in the vertex data, only the vertex data needs to be fetched and decoded at rendering time, and no texture needs to be read frame by frame; this saves computing-device performance, guarantees high device performance while the superimposed light occlusion effect is rendered, and is therefore well suited to mobile devices. In addition, the rendering scheme is based on model vertices and is thus unaffected by changes in the virtual object's direction of motion in its world coordinate system. The scheme also takes into account the important role of the superimposed light direction: the encoding result in the vertex data is decoded with the direction information of the superimposed light, so that the decoded vertex occlusion information, together with the superimposed light intensity, forms the superimposed light occlusion rendering data. The rendering scheme can therefore display a more natural, physically accurate superimposed light occlusion effect and further improve the visual experience of users (game players or animation viewers).
Brief Description of the Drawings
FIG. 1 is a scene architecture diagram of a rendering method for superimposed light occlusion provided by an embodiment of the present application;
FIG. 2 is a flowchart of a rendering method for superimposed light occlusion provided by an embodiment of the present application;
FIG. 3 is an effect diagram of a virtual character without superimposed light occlusion rendering;
FIG. 4 is an effect diagram of a virtual character after a superimposed light occlusion effect is rendered, provided by an embodiment of the present application;
FIG. 5 is a schematic workflow diagram of the work performed by the vertex shader and the pixel shader, respectively, during shader rendering;
FIG. 6 is a schematic flowchart of a process for preprocessing model vertices provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of setting several virtual light sources with a certain vertex of a virtual object model as the sphere center, provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of emitting rays from the virtual light sources toward a target vertex, provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of occlusion detection results provided by an embodiment of the present application;
FIG. 10 is a flowchart of another rendering method for superimposed light occlusion provided by an embodiment of the present application;
FIG. 11 is a chart of spherical harmonic basis functions;
FIG. 12 is a schematic structural diagram of a rendering apparatus for superimposed light occlusion provided by an embodiment of the present application;
FIG. 13 is a schematic structural diagram of a server in an embodiment of the present application;
FIG. 14 is a schematic structural diagram of a terminal device in an embodiment of the present application.
Detailed Description
In animation production or game scenarios, to obtain the occlusion effect of superimposed light, the Render Texture corresponding to each superimposed light can be read frame by frame and depth information compared to determine how the superimposed light is occluded on an object; this imposes a heavy bandwidth burden and high power consumption and consumes a large amount of computing performance. Having artists paint occlusion maps can save computing performance to some extent, but occlusion maps that rely on manual painting are produced inefficiently and bring high labor cost and workload; moreover, painted occlusion maps cannot respond to changes in the superimposed light direction or in the virtual object's motion state, so the physical effect is not accurate. Although both schemes aim to solve the light leakage problem, they either introduce new problems while solving it, or the resulting rendering effect hardly meets the visual requirements of naturalness and realism.
In view of the above problems, the present application provides a rendering method and apparatus for superimposed light occlusion and related products, aiming to provide a rendering scheme that achieves a more natural, physically accurate superimposed light occlusion effect while reducing the consumption of device performance, saving labor cost and workload, and improving rendering efficiency. In the technical solution provided by the present application, light occlusion detection is performed in advance on each vertex of the virtual object's model to obtain the vertex occlusion information corresponding to each vertex, which is encoded and stored into the vertex data. At rendering time, the encoding result of the vertex occlusion information is simply taken out of the vertex data and decoded with the direction information of the actual superimposed light to obtain decoded occlusion information for each vertex. Finally, the decoded vertex occlusion information of the model's vertices and the intensity of the superimposed light are used to obtain superimposed light occlusion rendering data for the virtual object. The rendering effect of this scheme both conforms to the light intensity provided by the superimposed light and reflects the presence of occlusion: visually, when the virtual object is illuminated by superimposed light, the positions on the virtual object that block light transmission form natural, accurately placed shadows at other positions of the virtual object, for example the shadow behind the neck cast by long hair, or the shadows of clothing folds.
First, several terms that may be involved in the embodiments of the present application below are explained.
Houdini: a 3D computer graphics software package.
HDA: Houdini Digital Asset, a Houdini node network packaged into a reusable digital asset.
Houdini Engine: the Houdini engine, which allows HDAs to be imported and used in other software.
Unity: a cross-platform 2D and 3D game engine developed by Unity Technologies, which can be used to develop cross-platform video games and extends to the WebGL-based HTML5 web platform as well as new-generation multimedia platforms such as tvOS, Oculus Rift, and ARKit.
Superimposed light: a light source, other than the main light source, that shines on an object.
Light leakage problem: to improve performance, mobile devices often remove the shadow occlusion computation for superimposed light, so that when superimposed light shines on an object, unnatural bright spots appear where shadows should be.
Spherical harmonics: the angular part of the solution of Laplace's equation in spherical coordinates, a well-known function family in modern mathematics, widely applied in quantum mechanics, computer graphics, rendering and lighting, and spherical mapping.
Basis function: in mathematics, a basis function is a basis element of a function space, analogous to a coordinate axis in Euclidean space. Every continuous function in the function space can be represented as a linear combination of basis functions.
Vertex: the surface of a 3D-modeled virtual object consists of many vertices, with edges between neighboring vertices. Three vertices and the edges between each pair form a triangle, and the model surface consists of many triangles of this kind.
UV: texture coordinates in 3D modeling usually have two axes, U and V, hence the name UV coordinates. U represents the distribution along the horizontal axis and V the distribution along the vertical axis.
UV2, UV3: refers to the second and third sets of model UVs. Here they are used only to save the encoding result of vertex occlusion information; a vertex's data includes the UV2 and UV3 corresponding to that vertex.
Tangent plane: taking any single vertex of the mesh triangles or quadrilaterals that make up a model as the spatial origin, with the vertex normal as the N axis, the plane through the vertex origin perpendicular to the vertex normal N axis is the tangent plane.
Tangent space: in the tangent plane through the vertex origin, two axes taken in the same directions as the vertex's UV texture axes serve as the T and B axes, the normal is the N axis, and the local space spanned by the TBN vector axes is called the vertex tangent space.
RenderTexture: a texture created and updated by Unity at runtime.
Bandwidth: video memory bandwidth is the data transfer rate between the display chip and video memory, measured in bytes per second and computed as video memory bandwidth = operating frequency × memory bus width / 8 bit. On mobile devices, bandwidth is one of the important factors affecting power consumption.
Shader: a shader implements image rendering and is an editable program that replaces the fixed rendering pipeline. The Vertex Shader is mainly responsible for computations such as the geometric relationships of vertices, and the Pixel Shader is mainly responsible for computations such as fragment color.
Virtual scene: the virtual scene displayed (or provided) when an application runs on a terminal may be a simulation of the real world, a semi-simulated semi-fictional scene, or a purely fictional scene. A virtual scene may be any of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene; the following embodiments use a three-dimensional virtual scene as an example, without limitation. Optionally, the virtual scene is also used for a virtual scene battle between at least two virtual objects.
Virtual object: a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, or an anime character. When the virtual scene is a three-dimensional virtual scene, the virtual object may be a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape and volume in the three-dimensional virtual scene and occupies part of the space in the three-dimensional virtual scene.
The rendering method for superimposed light occlusion provided by the embodiments of the present application may be executed by a terminal device; for example, Unity runs on the terminal device to obtain superimposed light occlusion rendering data for the virtual object. By way of example, terminal devices may include, but are not limited to, mobile phones, computers, intelligent voice interaction devices, smart home appliances, in-vehicle terminals, and aircraft. The embodiments of the present invention may be applied to various scenarios, including but not limited to digital humans, virtual humans, games, virtual reality, and extended reality (XR, Extended Reality). In addition, the rendering method may also be executed by a server, that is, Unity may run on a server to obtain the superimposed light occlusion rendering data for the virtual object. If the game runs on a terminal device, the server may further send the rendering data to the terminal device, where the picture is displayed based on the rendering data. Furthermore, the preprocessing that obtains the encoding results of the vertex occlusion information may be implemented on the terminal device or on a server communicating with the terminal device. The embodiments of the present application therefore do not limit the entity that executes the technical solution.
FIG. 1 exemplarily shows a scene architecture diagram of a rendering method for superimposed light occlusion, including a server and terminal devices of various forms. The server shown in FIG. 1 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
FIG. 2 is a flowchart of a rendering method for superimposed light occlusion provided by an embodiment of the present application. The method shown in FIG. 2 includes:
S201: obtain direction information of superimposed light in the environment where a virtual object is located and data of multiple vertices on the model of the virtual object.
Here, the environment where the virtual object is located may specifically refer to the virtual scene it is in. At least one superimposed light is set in the virtual scene, and a superimposed light has its own projection direction (or illumination direction). To achieve realistic, natural rendering, the direction information of the superimposed light must first be obtained. The direction information may be represented by a vector, for example a direction vector in a coordinate system. In subsequent application, the superimposed light direction information obtained in this step will be used for decoding.
Each vertex of the virtual object's model has its own data, such as position and color. In the embodiments of the present application, the vertex data further contains an encoding result of vertex occlusion information, and the main purpose of obtaining the vertex data in this step is to obtain the encoding result of the vertex's own occlusion information stored there. Note that the vertex occlusion information is obtained by performing light occlusion detection on the vertex in advance; this detection can be understood as completed before the game engine renders the superimposed light occlusion effect. After the vertex occlusion information is obtained in advance, it is stored in the vertex data before being applied to the rendering stage.
Once this step has obtained the direction information of the superimposed light in the virtual object's environment and the encoding results of the vertex occlusion information of the multiple vertices on the model, S202 can be carried out to decode the information.
S202: decode the encoding result according to the direction information to obtain decoded vertex occlusion information.
Note that in the model preprocessing stage of the embodiments of the present application, the vertex occlusion information is encoded based on the direction information of rays emitted toward the vertex from virtual light sources. A ray here can also be understood as light emitted by a virtual light source toward the vertex; because the light sources in preprocessing are virtual, this light is not light emitted by light sources that exist in the virtual scene, and to avoid misunderstanding it is called a ray. To decode the vertex occlusion information accurately, the embodiments of the present application use the same technique as the encoding, decoding the encoded vertex occlusion information with the direction information of the superimposed light obtained in S201. Note that the information obtained by decoding approximates the pre-encoding vertex occlusion information but is not guaranteed to be identical; to avoid confusing the two concepts, the information obtained by decoding in S202 is called decoded vertex occlusion information.
In an optional implementation, with the vertex as the sphere center, virtual light sources are scattered points set within a preset range around the vertex, emitting rays toward the vertex to simulate the light directions in which the virtual light sources illuminate that vertex. The rays formed by these different virtual light sources are discrete, and the occlusion of each ray is mutually independent and discrete. For that vertex, the occlusion information of these different rays can therefore be regarded as an approximate function distributed over spherical space, and spherical harmonics can be related to this approximate function through an expression. As an optional implementation, the encoding uses spherical harmonic functions: the encoding result of the vertex occlusion information is specifically the spherical harmonic coefficients obtained by encoding the vertex occlusion information using the ray direction information and the spherical harmonic functions. In this step S202, the spherical harmonic coefficients can be decoded according to the direction information of the superimposed light and the spherical harmonic functions, thereby obtaining the decoded vertex occlusion information. Encoding and decoding with spherical harmonics both fits the distribution form of the vertex occlusion information and saves the storage space the information occupies, facilitates indexing, and allows relatively accurate information recovery.
S203: obtain superimposed light occlusion rendering data of the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and the light intensity information of the superimposed light.
In practice, the decoded vertex occlusion information respectively corresponding to the multiple vertices may be multiplied by the intensity of the superimposed light to obtain the superimposed light occlusion rendering data of the virtual object. In some possible implementation scenarios there are multiple superimposed lights in the virtual scene where the virtual object is located, in which case S202 may decode the encoding results of the vertex occlusion information of the multiple vertices one by one with the direction information of each of the multiple superimposed lights, obtaining decoded vertex occlusion information that includes decoding result sets in one-to-one correspondence with the multiple superimposed lights; a decoding result set includes the decoded vertex occlusion information of the multiple vertices for the same superimposed light. For example, let K1 and K2 denote two superimposed lights with different directions, with direction information S1 and S2 respectively. In S202, S1 is used to decode the encoding results of the vertex occlusion information of the vertices obtained in S201, yielding decoded vertex occlusion information for these vertices; because all of this information is decoded with S1, it can be collected into one decoding result set P1. Similarly, decoding the encoding results with S2 yields decoded vertex occlusion information that can be collected into one decoding result set P2. Each decoding result set, such as P1 or P2, thus corresponds to the multiple vertices of the model; if all vertices of the model were used in the preprocessing stage, a decoding result set also corresponds to all vertices of the model, containing the decoding results of the occlusion information of all vertices.
For the example implementation scenario with multiple superimposed lights, in this step S203 sub-rendering data of the corresponding superimposed light may be obtained according to a decoding result set and the light intensity information of the corresponding superimposed light, and the superimposed light occlusion rendering data of the virtual object under the multiple superimposed lights is obtained through the sub-rendering data of the multiple superimposed lights.
For example, the decoding result sets respectively corresponding to the multiple superimposed lights are multiplied by the light intensity information of the corresponding superimposed lights to obtain the superimposed light occlusion rendering data of the virtual object under the multiple superimposed lights. For the decoding result set P1 obtained by decoding with the direction information S1 of superimposed light K1, in this step S203 each decoding result contained in P1 may be multiplied by the light intensity information of K1; for the decoding result set P2 obtained by decoding with the direction information S2 of superimposed light K2, each decoding result contained in P2 is multiplied by the light intensity information of K2. In this way, occlusion rendering is achieved for the multiple superimposed lights illuminating the virtual object in the virtual scene, and multiplying each superimposed light's decoding results by its own light intensity information also presents each superimposed light's own natural, physically accurate occlusion effect.
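By way of illustration only (the embodiments give no source code), the per-vertex accumulation over multiple superimposed lights described above might be sketched in Python as follows; the function names, the L0/L1 basis constants, and the treatment of intensity as a scalar are assumptions of the sketch, not part of the claimed method:

```python
import numpy as np

def sh_basis_l0l1(d):
    # L0/L1 real spherical harmonic basis at a unit direction d = (x, y, z)
    x, y, z = d
    return np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])

def shade_superimposed_lights(coeffs, lights):
    """Accumulate the occlusion-weighted contributions of several superimposed
    lights for one vertex. `coeffs` is a length-4 array holding the spherical
    harmonic coefficients stored in the vertex data; `lights` is a list of
    (direction_in_tangent_space, intensity) pairs, one per superimposed light
    (e.g. K1 and K2 above)."""
    total = 0.0
    for direction, intensity in lights:
        occlusion = float(coeffs @ sh_basis_l0l1(direction))  # decoded value, as in P1/P2
        total += occlusion * intensity                        # sub-rendering data per light
    return total
```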
In the rendering method for superimposed light occlusion provided by the above embodiments, because the vertex occlusion information is detected in advance and stored in encoded form in the vertex data, only the vertex data needs to be fetched and decoded at rendering time, and no texture needs to be read frame by frame; this saves computing-device performance, guarantees high device performance while the superimposed light occlusion effect is rendered, and is more suitable for application on mobile devices. In addition, the rendering scheme is based on model vertices and is thus unaffected by changes in the virtual object's direction of motion in its world coordinate system. The scheme also takes into account the important role of the superimposed light direction: the encoding result in the vertex data is decoded with the direction information of the superimposed light, so that the decoded vertex occlusion information, together with the superimposed light intensity, forms the superimposed light occlusion rendering data. The rendering scheme can therefore display a more natural, physically accurate superimposed light occlusion effect and further improve the visual experience of users (game players or animation viewers).
In the rendering method for superimposed light occlusion, the direction information of the superimposed light used for decoding may specifically be converted into the tangent space of the vertex, that is, converted into direction information in tangent space. Combining the term definitions introduced earlier: taking any single vertex of the mesh triangles or quadrilaterals that make up a model as the spatial origin, with the vertex normal as the N axis, the plane through the vertex origin perpendicular to the vertex normal N axis is the tangent plane; in the tangent plane through the vertex origin, two axes taken in the same directions as the vertex's UV texture axes serve as the T and B axes, the normal is the N axis, and the local space spanned by the TBN vector axes is called the vertex tangent space. If the vertex occlusion information itself is encoded with direction information of light in the vertex's tangent space, then in the rendering method for superimposed light occlusion provided by the present application, decoding may likewise decode the encoding result of the corresponding vertex's occlusion information with the direction information of the superimposed light in the vertex tangent space. For ease of understanding, any target vertex among the multiple vertices is taken as an example ("target vertex" is named merely for convenience of reference and is not a limiting expression designating a specific vertex). In the method provided by the above embodiments of the present application, step S202, decoding the encoding result according to the direction information to obtain decoded vertex occlusion information, may specifically include:
For the target vertex, converting the direction information into the tangent space of the target vertex to obtain converted direction information, the converted direction information being used to identify the direction of the superimposed light in the tangent space of the target vertex; then decoding the encoding result of the target vertex's vertex occlusion information according to the converted direction information, obtaining decoded vertex occlusion information corresponding to the target vertex.
The other vertices can, similarly to the target vertex in the example, undergo the conversion of the superimposed light direction information between coordinate systems and the information decoding in the above manner. In the embodiments of the present application, since the vertex occlusion information itself is information obtained based on vertices unaffected by the world coordinate system, and both its encoding and decoding use direction information in the vertex tangent space, neither the encoding result nor the decoding result departs from the vertex's tangent space, ensuring relatively accurate encoding and decoding results. In this way, the influence on the superimposed light occlusion rendering effect of direction changes arising in the model's world coordinate system (for example, changes in character movement direction, rotation, or the illumination direction of the superimposed light) can be resisted to a great extent, keeping the rendering effect realistic, natural, and physically accurate.
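As a minimal sketch of the tangent space conversion just described (assuming world-space T, B, N vectors are available for the vertex; in the embodiments this step runs in the vertex shader):

```python
import numpy as np

def to_tangent_space(light_dir_world, t, b, n):
    """Convert a superimposed light direction into the tangent space of one
    vertex. t, b, n are the world-space unit tangent, bitangent, and normal;
    the TBN matrix maps tangent space to world space, so its inverse (invTBN)
    maps the light direction the other way."""
    tbn = np.column_stack([t, b, n])        # columns: T, B, N
    d = np.linalg.inv(tbn) @ light_dir_world
    return d / np.linalg.norm(d)
```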
FIG. 3 is an effect diagram of a virtual character without superimposed light occlusion rendering, and FIG. 4 is an effect diagram of a virtual character after the superimposed light occlusion effect is rendered, provided by an embodiment of the present application. FIG. 3 corresponds to displaying only the occlusion effect of the main light source, with the occlusion computation for superimposed light removed; FIG. 4 shows the rendering effect obtained by applying the rendering method for superimposed light occlusion provided by the embodiments of the present application. Comparing FIG. 3 and FIG. 4, it is easy to see that FIG. 3 shows no shadows at positions that should have them: the light striking the back of the model's neck should be blocked by its long hair, and the light striking the area behind the right ear should be blocked by the bow in its hair accessory, producing shadows, yet the corresponding positions in FIG. 3 show no shadows and are as bright as other skin areas. By implementing the technical solution of the present application, shadows can be rendered at the corresponding positions; as shown in FIG. 4, shadows appear behind the neck and behind the right ear. The bottom arrows in FIG. 3 and FIG. 4 both indicate collar folds; executing the technical solution of the present application should likewise show no fold shadow in FIG. 3 and fold shadows in FIG. 4, but because the model's clothing color can be shown only achromatically in the application documents, the contrast at the folds of the dark clothing is not obvious here.
In the rendering method for superimposed light occlusion provided above, the rendering process may be implemented by a shader of the Unity engine; for example, the Vertex Shader and the Pixel Shader in the shader are responsible for different steps of the above embodiments. The vertex shader converts the direction information of the superimposed light into the vertex's tangent space, then decodes the encoding result of the vertex's occlusion information based on the direction information of the superimposed light in the vertex's tangent space, obtaining the decoded vertex occlusion information corresponding to the vertex. The pixel shader, which may also be called the fragment shader, performs S203: it computes the superimposed light occlusion rendering data for the virtual object from the decoded occlusion information and the light intensity information of the superimposed light. In addition, as an optional implementation, so that users can adjust the soft and hard edges of the occlusion more flexibly, after the vertex shader decodes, the decoded information taking values in the interval [0, 1] may be remapped with parameters passed by an external program, yielding occlusion values respectively corresponding to the multiple vertices; the pixel shader then obtains the superimposed light occlusion rendering data for the virtual object from the occlusion values respectively corresponding to the multiple vertices and the light intensity information of the superimposed light. For this process see FIG. 5, a schematic workflow diagram of the work performed by the vertex shader and the pixel shader, respectively, during shader rendering.
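The remapping of the decoded [0, 1] value with externally supplied parameters could, for instance, be a smoothstep-style curve; the parameter names below are hypothetical, since the embodiments only state that parameters passed by an external program control the soft and hard edges:

```python
def remap_occlusion(decoded, edge_min=0.2, edge_max=0.8):
    """Remap a decoded occlusion value in [0, 1] so artists can tune how soft
    or hard the occlusion edge looks. edge_min and edge_max stand in for the
    parameters passed by the external program."""
    t = min(max((decoded - edge_min) / (edge_max - edge_min), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)  # smoothstep easing between the two edges
```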
The above embodiments describe in detail the process of rendering shadows on a virtual object's model when superimposed light is occluded. As mentioned earlier, the model may be preprocessed in advance before rendering; the preprocessing process is described in detail below with reference to embodiments. Note that the following preprocessing operations are performed for every vertex in the model; for ease of description, only a target vertex among them is used as an example in the introduction below. The target vertex is one of the model's multiple vertices and has nothing special about it; all vertices other than the target vertex can undergo the same operations following the steps below.
FIG. 6 is a schematic flowchart of the process of preprocessing model vertices. The process may specifically be implemented by a model preprocessing tool. As shown in FIG. 6, the process of preprocessing model vertices includes the following steps:
S601: set multiple virtual light sources within a preset range around the target vertex.
In a specific implementation, points may be scattered uniformly within a preset radius around the target vertex as the sphere center, serving as simulated light sources. As an example, the radius is 1 unit at the scale of the world coordinate system in which the model is located, and 1000 virtual light sources are set. In practice, both the radius and the number of virtual light sources can be set according to actual needs, and no specific numerical limits are imposed here. The more virtual light sources are set, the more accurate the finally decoded vertex occlusion information; the fewer virtual light sources, the shorter the computation time consumed by preprocessing. FIG. 7 is a schematic diagram of setting several virtual light sources with a certain vertex of the virtual object model as the sphere center; the virtual object shown in FIG. 7 takes the form of a girl, and several virtual light sources surround the sphere centered on the vertex.
S602: according to the rays emitted from the multiple virtual light sources toward the target vertex, take the determined occlusion information respectively corresponding to the multiple virtual light sources together as the vertex occlusion information corresponding to the target vertex.
The occlusion information is used to identify whether the ray emitted by the corresponding virtual light source is occluded by vertices other than the target vertex.
In this step, each virtual light source serves as a starting point and emits a ray toward the target vertex, simulating the light direction in which that virtual light source illuminates the target vertex. The purpose of providing virtual light sources is to detect whether there is a blocker between a virtual light source and the target vertex (other vertices that block the emitted light from being projected smoothly onto the target vertex). If the length of the ray were configured to be greater than or equal to the distance between the virtual light source and the target vertex, the ray might collide with the target vertex itself, causing erroneous detection. To avoid this problem, in the present application, for any target virtual light source among the multiple virtual light sources, the length of the ray emitted by the target virtual light source toward the target vertex is configured to be smaller than the distance between the target virtual light source and the target vertex. Limiting the ray length avoids misjudging the target vertex as a blocker and improves the accuracy of superimposed light occlusion rendering. Note that the ray length is also best not set too short, since that might miss blockers close to the target vertex. As an example, the ray length is configured to be 0.999 times the distance between the target vertex and the target virtual light source; this length can be configured according to the size of the model, the characteristics of the model's shape, and so on, and is not limited here. FIG. 8 is a schematic diagram of emitting rays from the virtual light sources toward the target vertex. On the basis of the rays shown in FIG. 8, FIG. 9 is a schematic diagram of the occlusion detection results. Near the model's head in the central part of FIG. 9 there are several darker-colored regions; the regions shown darker in FIG. 9 indicate that rays are occluded there. If a ray is occluded, its occlusion value may be recorded as 1; if it is not occluded, as 0. These occlusion values of 0 or 1 serve as the vertex occlusion information of the target vertex.
As described earlier, many virtual light sources are set; for example, if N virtual light sources emit rays toward the target vertex, there are N occlusion values corresponding to the N rays. The occlusion information of the rays emitted by the multiple virtual light sources can together serve as the vertex occlusion information corresponding to the target vertex; that is, the vertex occlusion information corresponding to the target vertex contains N occlusion values, respectively corresponding to the N virtual light sources set for the target vertex.
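A possible shape of this detection step, sketched in Python (the uniform scattering via a Fibonacci sphere and the `raycast` callback are assumptions of the sketch; in practice the host software's own intersection tools would perform the ray test):

```python
import numpy as np

def fibonacci_sphere(count):
    # roughly uniform points on the unit sphere, one way to scatter virtual light sources
    i = np.arange(count)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / count
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def occlusion_values(target, radius, count, raycast):
    """For one target vertex, emit a ray from each virtual light source toward
    the vertex and record 1 if the ray hits other geometry, otherwise 0.
    `raycast(origin, direction, max_dist) -> bool` stands in for the host
    software's intersection test. The ray length is 0.999x the source-vertex
    distance so the target vertex itself is never reported as a blocker."""
    sources = target + radius * fibonacci_sphere(count)
    values = np.zeros(count)
    for j, src in enumerate(sources):
        to_vertex = target - src
        dist = np.linalg.norm(to_vertex)
        if raycast(src, to_vertex / dist, 0.999 * dist):
            values[j] = 1.0
    return sources, values
```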
S603: encode the corresponding occlusion information with the direction information of the rays, and take the resulting encoding results of the occlusion information of the rays together as the encoding result of the occlusion information corresponding to the target vertex.
In the embodiments of the present application, the occlusion information of a ray is encoded with that ray's direction information, thereby achieving accurate one-to-one encoding. In an optional implementation, the model preprocessing tool that performs steps S601-S603 above is the Houdini software. Because Houdini uses a right-handed coordinate system and Unity uses a left-handed coordinate system, in order for the encoded data to be used correctly in Unity in the end, in the embodiments of the present application S603 may include the following specific operations:
converting the direction information of the rays from the right-handed coordinate system to the left-handed coordinate system;
converting the direction information converted into the left-handed coordinate system into the tangent space of the target vertex;
encoding, with the direction information of the rays converted into the tangent space of the target vertex, the occlusion information of the corresponding rays in the vertex occlusion information corresponding to the target vertex.
Converting the ray direction information between the right- and left-handed coordinate systems accomplishes a preliminary conversion of the direction information between coordinate systems. Since the rays lie in the world coordinate system of the virtual object's model while each vertex has its own corresponding tangent space, the ray directions must be converted to tangent space coordinates if the data later saved on the vertices is to compute correctly after the model rotates. In other words, to achieve accurate encoding of vertex-level information, the direction information converted into the left-handed coordinate system is further converted into the tangent space of the target vertex, and the occlusion information of the corresponding rays in the vertex occlusion information is then encoded with the direction information converted into tangent space. For example, if N virtual light sources send rays to the target vertex, yielding N occlusion values for that target vertex, the direction information of the N differently directed rays converted into the target vertex's tangent space is used to encode the corresponding occlusion values. This conversion bases the encoding on the ray direction information in the vertex tangent space, achieving accurate one-to-one encoding and making it convenient to recover relatively accurate information when decoding. Left-handed coordinate system conversion: simply negate the x axis of the ray direction xyz. The tangent space coordinate conversion may be: obtain the vertex's tangent T, bitangent B, and normal N through Houdini; negate the x axes of the three TBN vectors to form the TBN matrix under Unity's left-handed coordinate system; compute its inverse matrix invTBN from the TBN matrix; and right-multiply the ray direction vector by the invTBN matrix to obtain the ray direction vector in the Unity tangent space coordinate system.
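Read literally, the recipe above (negate the x components, build TBN, invert, right-multiply) might look like the following sketch; this is an illustration under the stated conventions, not Houdini API code:

```python
import numpy as np

def ray_dir_to_unity_tangent_space(ray_dir_rh, t_rh, b_rh, n_rh):
    """Express a Houdini right-handed world-space ray direction in the vertex
    tangent space under Unity's left-handed convention: negate the x axis of
    the ray direction and of the T, B, N vectors, build the TBN matrix,
    invert it, and multiply the ray direction by invTBN."""
    flip = np.array([-1.0, 1.0, 1.0])
    d = ray_dir_rh * flip                                  # right- to left-handed
    tbn = np.column_stack([t_rh * flip, b_rh * flip, n_rh * flip])
    d = np.linalg.inv(tbn) @ d                             # invTBN times direction
    return d / np.linalg.norm(d)
```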
After S603 is performed, the encoding result of the occlusion information corresponding to the target vertex is obtained. Similarly, performing S601-S603 yields the encoding result of the occlusion information corresponding to each vertex of the model. To facilitate subsequent rendering and save the consumption of the device's computing performance when reading data, storage is done in the following way; see S604 for details.
S604: store the encoding results of the vertex occlusion information respectively corresponding to the multiple vertices in the free slots of the UV data of the corresponding vertices.
Model UV2 and UV3 were introduced earlier; each contains free data-storage slots, into which the encoding result of a vertex's vertex occlusion information can be stored. For example, an encoding result obtained through spherical harmonic encoding includes 4 spherical harmonic coefficients; UV2 and UV3 each contain two free slots, so 2 of the 4 spherical harmonic coefficients can be stored in the two slots of UV2 and the other 2 in the two slots of UV3. Thus, at the rendering stage, when the game engine Unity obtains the encoding results of the vertex occlusion information respectively corresponding to the multiple vertices on the virtual object's model, specifically Unity's vertex shader may obtain the encoding result of the corresponding vertex's occlusion information from the UV data of the multiple vertices. The present application makes clever use of the free data slots in UV2 and UV3: because the encoding results are stored in the free UV data slots of the corresponding vertices, it is very convenient for the game engine's shader to read the data, which saves the computing device's performance consumption when reading and computing.
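Packing the four coefficients into the two free slots of UV2 and of UV3 might be as simple as the following sketch (the slot order is an assumption; any fixed convention works as long as encoding and decoding agree):

```python
def pack_sh_into_uv(coeffs):
    """Split a vertex's 4 spherical harmonic coefficients across the free
    slots of its UV2 and UV3 sets."""
    c0, c1, c2, c3 = coeffs
    return (c0, c1), (c2, c3)   # (UV2.x, UV2.y), (UV3.x, UV3.y)

def unpack_sh_from_uv(uv2, uv3):
    # inverse operation performed by the vertex shader at rendering time
    return (uv2[0], uv2[1], uv3[0], uv3[1])
```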
The process of preprocessing model vertices has been described above with reference to FIG. 6. Its essence is to extract and encode the vertex occlusion information of each vertex of the model and then store it into the data of the model's vertices, making the encoding result of the vertex occlusion information part of the vertex data, thereby obtaining a model that carries the encoding results of vertex occlusion information. When actually using the Houdini product, the setup follows the steps below:
1) Import the virtual object's model into Houdini; specifically, the Unity virtual object model file may be imported.
2) Distinguish occluding parts from non-occluding parts; for example, transparent parts are non-occluding parts and opaque parts are occluding parts. This operation can be done manually by artists or judged automatically by transparency-recognition algorithms.
3) Export the model. The export path may be the same as the import path; specifically, the model may be exported to the relevant shader folder under Unity's assets.
In the embodiments of the present application, the interface for enabling the shader provides a switch or option for choosing whether to enable the rendering function of this embodiment's scheme. For example, when the Unity engine is to render the superimposed light occlusion effect, the user ticks the option that enables the rendering function of this embodiment, which makes the shader support the rendering function of the embodiments of the present application, so that the enabled shader can complete the subsequent rendering of the superimposed light occlusion effect.
FIG. 10 is a flowchart of another rendering method for superimposed light occlusion provided by an embodiment of the present application; part A of the flowchart shows the preprocessing of the model by the Houdini software, and part B shows the rendering process of the Unity engine's shader. The preprocessing and rendering processes are described in detail in the foregoing embodiments and are not repeated here; for the overall flow including preprocessing, refer to FIG. 10.
The application of spherical harmonics in encoding and decoding is described in detail below.
Regard the occlusion values of the individual discrete rays in the embodiments described above as an approximate function f(s) distributed over spherical space, with spherical harmonic basis functions $y_l^m(s)$ and spherical harmonic coefficients $c_l^m$. The approximate function and the spherical harmonic coefficients can then be expressed as formulas (1) and (2) below:

$$f(s) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l} c_l^m\, y_l^m(s) \qquad (1)$$

$$c_l^m = \int_S f(s)\, y_l^m(s)\, ds \qquad (2)$$

According to the chart of spherical harmonic basis functions, this scheme uses the L0 and L1 order expressions of the spherical harmonics in Cartesian coordinates as basis functions, taking radius r = 1. The chart of spherical harmonic basis functions can be found in FIG. 11; specifically, the boxed Cartesian expressions in FIG. 11 are used. For convenience of expression, let i = l*(l+1)+m. Expressions (1) and (2) above then become (3) and (4) respectively:

$$f(s) = \sum_{i=0}^{n^2-1} c_i\, y_i(s) \qquad (3)$$

$$c_i = \int_S f(s)\, y_i(s)\, ds \qquad (4)$$

To obtain the spherical harmonic coefficients $c_i$, the Monte Carlo method can be used; after discretization, expression (5) is obtained:

$$c_i = \frac{1}{N}\sum_{j=1}^{N} \omega(s_j)\, f(s_j)\, y_i(s_j) \qquad (5)$$

In formula (5), $\omega(s_j)$ is a weight coefficient. Because the function is distributed over a uniform sphere, for uniform sampling on the sphere the weight coefficients sum to the sphere's surface area; pulled outside the summation sign this becomes $4\pi r^2$, and since r = 1, $\omega(s_j)$ equals $4\pi$. Taking $\omega(s_j)$ outside the summation in formula (5) gives:

$$c_i = \frac{4\pi}{N}\sum_{j=1}^{N} f(s_j)\, y_i(s_j) \qquad (6)$$

Substituting into $f(s_j)$ in the spherical harmonic coefficient formula (6) the occlusion values obtained from the occlusion detection with the rays of the multiple virtual light sources described earlier yields the 4 spherical harmonic coefficients as the encoding result, which is then saved. That is, the occlusion values are substituted as the function's results into formula (6).
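A direct transcription of formula (6) into Python might look as follows (the basis constants are the standard L0/L1 real spherical harmonics; this is a sketch, not the embodiment's implementation):

```python
import numpy as np

def sh_basis(d):
    # y_i(s) for i = 0..3: the L0/L1 real spherical harmonic basis at unit direction d
    x, y, z = d
    return np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])

def project_occlusion_to_sh(ray_dirs, occlusion_values):
    """Formula (6): c_i = (4*pi / N) * sum_j f(s_j) * y_i(s_j), where the
    f(s_j) are the 0/1 occlusion values and the s_j are the N unit ray
    directions in the vertex tangent space."""
    coeffs = np.zeros(4)
    for d, f in zip(ray_dirs, occlusion_values):
        coeffs += f * sh_basis(d)
    return 4.0 * np.pi / len(occlusion_values) * coeffs
```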
When decoding in the Unity engine, the vertex shader obtains the direction information of each superimposed light and converts it into the tangent space coordinate system. Specifically, the tangent T, bitangent B, and normal N may be obtained to form the TBN matrix; the inverse matrix invTBN is computed from the TBN matrix, and right-multiplying the superimposed light direction vector xyz by the invTBN matrix converts it into the superimposed light direction vector xyz in the tangent coordinate system. Here, decoding uses the tangent space direction information (direction vector) of the superimposed light from the previous step together with the spherical harmonic coefficients saved in the vertex data to perform the spherical harmonic decoding and compute the occlusion value.
The decoding formula is as follows:

$$f(s) \approx \sum_{i=0}^{n^2-1} c_i\, y_i(s)$$

where the order n = 2, so $n^2 = 4$; $c_i$ are the spherical harmonic coefficients saved in the vertex data, and $y_i(s)$ are the spherical harmonic basis functions, which can be found in FIG. 11, with i = l*(l+1)+m.
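Decoding is then a four-term dot product; a minimal sketch, reusing the same basis as the encoding side:

```python
import numpy as np

def decode_occlusion(coeffs, d):
    """Evaluate f(s) = sum_i c_i * y_i(s) with the four stored coefficients
    and d = (x, y, z) the unit superimposed light direction in tangent space."""
    x, y, z = d
    basis = np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])
    return float(coeffs @ basis)
```

The value returned here is what step S203 multiplies by the light intensity of the corresponding superimposed light.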
Based on the rendering method for superimposed light occlusion provided in the foregoing embodiments, the present application correspondingly provides a rendering apparatus for superimposed light occlusion, described below with reference to FIG. 12. FIG. 12 is a schematic structural diagram of the rendering apparatus for superimposed light occlusion provided by an embodiment of the present application. The apparatus shown in FIG. 12 includes:
a first obtaining unit 1201, configured to obtain direction information of superimposed light in the environment where a virtual object is located;
a second obtaining unit 1202, configured to obtain data of multiple vertices on the model of the virtual object, where the data of a vertex contains an encoding result of vertex occlusion information, and the vertex occlusion information is obtained by performing light occlusion detection on the vertex in advance;
a decoding unit 1203, configured to decode the encoding result according to the direction information to obtain decoded vertex occlusion information;
a rendering unit 1204, configured to obtain superimposed light occlusion rendering data of the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and the light intensity information of the superimposed light.
Because the vertex occlusion information is detected in advance and stored in encoded form in the vertex data, only the vertex data needs to be fetched and decoded at rendering time, and no texture needs to be read frame by frame; this saves computing-device performance, guarantees high device performance while the superimposed light occlusion effect is rendered, and is more suitable for application on mobile devices. In addition, the rendering scheme is based on model vertices and is thus unaffected by changes in the virtual object's direction of motion in its world coordinate system. The scheme also takes into account the important role of the superimposed light direction: the encoding result in the vertex data is decoded with the direction information of the superimposed light, so that the decoded vertex occlusion information, together with the superimposed light intensity, forms the superimposed light occlusion rendering data. The rendering scheme can therefore display a more natural, physically accurate superimposed light occlusion effect and further improve the visual experience of users (game players or animation viewers).
In an optional implementation, the decoding unit is configured to:
for any target vertex among the multiple vertices, convert the direction information into the tangent space of the target vertex to obtain converted direction information, the converted direction information being used to identify the direction of the superimposed light in the tangent space of the target vertex;
decode the encoding result according to the converted direction information to obtain decoded vertex occlusion information corresponding to the target vertex.
In an optional implementation, there are multiple superimposed lights in the environment, and the decoding unit is configured to:
decode the encoding results of the vertex occlusion information of the multiple vertices one by one with the direction information of the multiple superimposed lights respectively, to obtain decoded vertex occlusion information, the decoded vertex occlusion information including decoding result sets in one-to-one correspondence with the multiple superimposed lights, a decoding result set including the decoded vertex occlusion information of the multiple vertices for the same superimposed light;
the rendering unit is configured to:
obtain sub-rendering data of the corresponding superimposed light according to a decoding result set and the light intensity information of the corresponding superimposed light;
obtain, through the sub-rendering data of the multiple superimposed lights, occlusion rendering data of the virtual object under the multiple superimposed lights.
In an optional implementation, the encoding result of the vertex occlusion information is spherical harmonic coefficients obtained by encoding the vertex occlusion information with spherical harmonic functions; the decoding unit is specifically configured to:
perform spherical harmonic decoding on the spherical harmonic coefficients according to the direction information.
In an optional implementation, the rendering unit is configured to:
remap the decoded vertex occlusion information of the multiple vertices with parameters passed by an external program to obtain occlusion values respectively corresponding to the multiple vertices;
obtain superimposed light occlusion rendering data for the virtual object according to the occlusion values respectively corresponding to the multiple vertices and the light intensity information of the superimposed light.
In an optional implementation, the rendering apparatus for superimposed light occlusion further includes a preprocessing unit, configured to perform the following operations before the encoding results of the vertex occlusion information respectively corresponding to the multiple vertices on the model of the virtual object are obtained:
for any target vertex among the multiple vertices, set multiple virtual light sources within a preset range around the target vertex;
according to the rays emitted from the multiple virtual light sources toward the target vertex, take the determined occlusion information respectively corresponding to the multiple virtual light sources together as the vertex occlusion information corresponding to the target vertex, the occlusion information being used to identify whether the ray emitted by the corresponding virtual light source is occluded by vertices other than the target vertex;
encode the corresponding occlusion information with the direction information of the rays, and take the resulting encoding results of the occlusion information of the rays together as the encoding result of the occlusion information corresponding to the target vertex.
In an optional implementation, the preprocessing unit is specifically configured to:
convert the direction information of the rays from the right-handed coordinate system to the left-handed coordinate system;
convert the direction information converted into the left-handed coordinate system into the tangent space of the target vertex;
encode, with the direction information of the rays converted into the tangent space of the target vertex, the occlusion information of the corresponding rays in the vertex occlusion information corresponding to the target vertex.
In an optional implementation, for any target virtual light source among the multiple virtual light sources, the length of the ray emitted by the target virtual light source toward the target vertex is configured to be smaller than the distance between the target virtual light source and the target vertex.
In an optional implementation, the rendering apparatus for superimposed light occlusion further includes:
a storage unit, configured to store the encoding results of the vertex occlusion information respectively corresponding to the multiple vertices in the free slots of the UV data of the corresponding vertices;
the second obtaining unit 1202 is specifically configured to: obtain the encoding result of the vertex occlusion information of the corresponding vertex from the UV data of the multiple vertices.
The structure of the rendering device for superimposed light occlusion is described below, in server form and in terminal device form respectively.
FIG. 13 is a schematic structural diagram of a server provided by an embodiment of the present application. The server 900 may vary considerably with configuration or performance and may include one or more central processing units (CPU) 922 (for example, one or more processors), memory 932, and one or more storage media 930 (for example, one or more mass storage devices) storing application programs 942 or data 944. The memory 932 and the storage medium 930 may be transient storage or persistent storage. The programs stored in the storage medium 930 may include one or more modules (not shown in the figure), each of which may include a series of instruction operations on the server. Further, the central processing unit 922 may be configured to communicate with the storage medium 930 and execute, on the server 900, the series of instruction operations in the storage medium 930.
The server 900 may also include one or more power supplies 926, one or more wired or wireless network interfaces 950, one or more input/output interfaces 958, and/or one or more operating systems 941 such as Windows Server™, Mac OS X™, Unix™, Linux™, and FreeBSD™.
The CPU 922 is configured to perform the following steps:
obtain direction information of superimposed light in the environment where a virtual object is located and data of multiple vertices on the model of the virtual object, where the data of a vertex contains an encoding result of vertex occlusion information;
decode the encoding result according to the direction information to obtain decoded vertex occlusion information, where the vertex occlusion information is obtained by performing light occlusion detection on the vertex in advance;
obtain superimposed light occlusion rendering data for the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and the light intensity information of the superimposed light.
An embodiment of the present application further provides another rendering device for superimposed light occlusion. As shown in FIG. 14, for convenience of description only the parts related to the embodiments of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiments. The terminal may be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (full name in English: Personal Digital Assistant, English abbreviation: PDA), a point-of-sale terminal (full name in English: Point of Sales, English abbreviation: POS), or an in-vehicle computer; a mobile phone is taken as an example:
FIG. 14 is a block diagram of a partial structure of a mobile phone related to the terminal provided by the embodiments of the present application. Referring to FIG. 14, the mobile phone includes components such as a radio frequency (full name in English: Radio Frequency, English abbreviation: RF) circuit 1010, a memory 1020, an input unit 1030, a display unit 1040, a sensor 1050, an audio circuit 1060, a wireless fidelity (full name in English: wireless fidelity, English abbreviation: WiFi) module 1070, a processor 1080, and a power supply 1090. Those skilled in the art will understand that the phone structure shown in FIG. 14 does not limit the phone, which may include more or fewer components than shown, combine certain components, or arrange components differently.
The components of the mobile phone are described specifically below with reference to FIG. 14:
The RF circuit 1010 may be used to receive and send signals during information transmission and reception or during a call; in particular, after downlink information from a base station is received, it is passed to the processor 1080 for processing, and uplink data is sent to the base station. Generally, the RF circuit 1010 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (full name in English: Low Noise Amplifier, English abbreviation: LNA), a duplexer, and so on. The RF circuit 1010 may also communicate with networks and other devices by wireless communication, which may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and so on.
The memory 1020 may be used to store software programs and modules; the processor 1080 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playback function or an image playback function), and so on; the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 1020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
The input unit 1030 may be used to receive input digit or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1030 may include a touch panel 1031 and other input devices 1032. The touch panel 1031, also called a touch screen, can collect touch operations by the user on or near it (such as operations by the user with a finger, stylus, or any other suitable object or accessory on or near the touch panel 1031) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch panel 1031 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch orientation and the signal brought by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1080, and can receive and execute commands sent by the processor 1080. In addition, the touch panel 1031 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1031, the input unit 1030 may further include other input devices 1032, which specifically may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 1040 may be used to display information input by the user or provided to the user, as well as the various menus of the mobile phone. The display unit 1040 may include a display panel 1041, which optionally may be configured in the form of a liquid crystal display (full name in English: Liquid Crystal Display, English abbreviation: LCD), an organic light-emitting diode (full name in English: Organic Light-Emitting Diode, English abbreviation: OLED), or the like. Further, the touch panel 1031 may cover the display panel 1041; when the touch panel 1031 detects a touch operation on or near it, the operation is transmitted to the processor 1080 to determine the type of the touch event, and the processor 1080 then provides a corresponding visual output on the display panel 1041 according to the type of the touch event. Although in FIG. 14 the touch panel 1031 and the display panel 1041 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1031 and the display panel 1041 may be integrated to implement both the input and output functions of the mobile phone.
The mobile phone may further include at least one sensor 1050, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1041 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize phone posture (such as landscape/portrait switching, related games, and magnetometer posture calibration) and vibration-recognition-related functions (such as pedometers and tapping); other sensors that may also be configured on the phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
The audio circuit 1060, the speaker 1061, and the microphone 1062 may provide an audio interface between the user and the mobile phone. The audio circuit 1060 may convert received audio data into an electrical signal and transmit it to the speaker 1061, which converts it into a sound signal for output; on the other hand, the microphone 1062 converts a collected sound signal into an electrical signal, which the audio circuit 1060 receives and converts into audio data; after the audio data is output to and processed by the processor 1080, it is sent through the RF circuit 1010 to, for example, another mobile phone, or output to the memory 1020 for further processing.
WiFi is a short-range wireless transmission technology; through the WiFi module 1070 the mobile phone can help the user send and receive e-mails, browse web pages, access streaming media, and so on, providing wireless broadband Internet access. Although FIG. 14 shows the WiFi module 1070, it is understandable that it is not an essential component of the mobile phone and may be omitted as needed within the scope that does not change the essence of the invention.
The processor 1080 is the control center of the mobile phone; it connects all parts of the entire phone through various interfaces and lines and executes the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 1020 and calling the data stored in the memory 1020. Optionally, the processor 1080 may include one or more processing units; preferably, the processor 1080 may integrate an application processor, which mainly handles the operating system, user interface, and applications, and a modem processor, which mainly handles wireless communication. It is understandable that the modem processor may also not be integrated into the processor 1080.
The mobile phone further includes a power supply 1090 (such as a battery) supplying power to the components; preferably, the power supply may be logically connected to the processor 1080 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone may further include a camera, a Bluetooth module, and so on, which are not described here.
In the embodiments of the present application, the processor 1080 included in the terminal further has the following functions:
obtain direction information of superimposed light in the environment where a virtual object is located and data of multiple vertices on the model of the virtual object, where the data of a vertex contains an encoding result of vertex occlusion information;
decode the encoding result according to the direction information to obtain decoded vertex occlusion information, where the vertex occlusion information is obtained by performing light occlusion detection on the vertex in advance;
obtain superimposed light occlusion rendering data of the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and the light intensity information of the superimposed light.
An embodiment of the present application further provides a computer-readable storage medium for storing a computer program, the computer program being used to execute any one of the implementations of the rendering method for superimposed light occlusion described in the foregoing embodiments.
An embodiment of the present application further provides a computer program product including a computer program that, when run on a computer, causes the computer to execute any one of the implementations of the rendering method for superimposed light occlusion described in the foregoing embodiments.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems and devices described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems and methods may be implemented in other ways. For example, the system embodiments described above are merely illustrative; the division of the systems is merely a logical functional division, and there may be other divisions in actual implementation, for example multiple systems may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The systems described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (full name in English: Read-Only Memory, English abbreviation: ROM), a random access memory (full name in English: Random Access Memory, English abbreviation: RAM), a magnetic disk, or an optical disc.
The above embodiments are merely intended to describe the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (13)

  1. A rendering method for superimposed light occlusion, the method being executed by a terminal device and comprising:
    obtaining direction information of superimposed light in an environment where a virtual object is located and data of multiple vertices on a model of the virtual object, wherein the data of a vertex contains an encoding result of vertex occlusion information, and the vertex occlusion information is obtained by performing light occlusion detection on the vertex in advance;
    decoding the encoding result according to the direction information to obtain decoded vertex occlusion information;
    obtaining superimposed light occlusion rendering data of the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and light intensity information of the superimposed light.
  2. The rendering method for superimposed light occlusion according to claim 1, wherein the decoding the encoding result according to the direction information to obtain decoded vertex occlusion information comprises:
    for any target vertex among the multiple vertices, converting the direction information into a tangent space of the target vertex to obtain converted direction information, the converted direction information being used to identify a direction of the superimposed light in the tangent space of the target vertex;
    decoding the encoding result according to the converted direction information to obtain decoded vertex occlusion information corresponding to the target vertex.
  3. The rendering method for superimposed light occlusion according to claim 1, wherein there are multiple superimposed lights in the environment, and the decoding the encoding result according to the direction information to obtain decoded vertex occlusion information comprises:
    decoding the encoding results of the vertex occlusion information of the multiple vertices one by one with direction information of the multiple superimposed lights respectively, to obtain decoded vertex occlusion information, the decoded vertex occlusion information comprising decoding result sets in one-to-one correspondence with the multiple superimposed lights, a decoding result set comprising the decoded vertex occlusion information of the multiple vertices for a same superimposed light;
    the obtaining superimposed light occlusion rendering data of the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and the light intensity information of the superimposed light comprises:
    obtaining sub-rendering data of the corresponding superimposed light according to the decoding result set and light intensity information of the corresponding superimposed light;
    obtaining, through the sub-rendering data of the multiple superimposed lights, superimposed light occlusion rendering data of the virtual object under the multiple superimposed lights.
  4. The rendering method for superimposed light occlusion according to claim 1, wherein the encoding result of the vertex occlusion information is spherical harmonic coefficients obtained by encoding the vertex occlusion information with spherical harmonic functions, and the decoding the encoding result according to the direction information comprises:
    performing spherical harmonic decoding on the spherical harmonic coefficients according to the direction information.
  5. The rendering method for superimposed light occlusion according to claim 1, wherein the obtaining superimposed light occlusion rendering data for the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and the light intensity information of the superimposed light comprises:
    remapping the decoded vertex occlusion information with a parameter passed by an external program to obtain occlusion values respectively corresponding to the multiple vertices;
    obtaining superimposed light occlusion rendering data for the virtual object according to the occlusion values respectively corresponding to the multiple vertices and the light intensity information of the superimposed light.
  6. The rendering method for superimposed light occlusion according to claim 1, wherein before the encoding results of the vertex occlusion information respectively corresponding to the multiple vertices on the model of the virtual object are obtained, the method further comprises:
    for any target vertex among the multiple vertices, setting multiple virtual light sources within a preset range around the target vertex;
    according to rays emitted from the multiple virtual light sources toward the target vertex, taking the determined occlusion information respectively corresponding to the multiple virtual light sources together as the vertex occlusion information corresponding to the target vertex, the occlusion information being used to identify whether the ray emitted by the corresponding virtual light source is occluded by vertices other than the target vertex;
    encoding the corresponding occlusion information with direction information of the rays, and taking the obtained encoding results of the occlusion information of the rays together as the encoding result of the occlusion information corresponding to the target vertex.
  7. The rendering method for superimposed light occlusion according to claim 6, wherein the encoding, with the direction information of the rays, the occlusion information of the corresponding rays in the vertex occlusion information corresponding to the target vertex comprises:
    converting the direction information of the rays from a right-handed coordinate system to a left-handed coordinate system;
    converting the direction information converted into the left-handed coordinate system into the tangent space of the target vertex;
    encoding, with the direction information of the rays converted into the tangent space of the target vertex, the occlusion information of the corresponding rays in the vertex occlusion information corresponding to the target vertex.
  8. The rendering method for superimposed light occlusion according to claim 6, wherein, for any target virtual light source among the multiple virtual light sources, a length of the ray emitted by the target virtual light source toward the target vertex is configured to be smaller than a distance between the target virtual light source and the target vertex.
  9. The rendering method for superimposed light occlusion according to claim 1, wherein the method further comprises:
    storing the encoding results of the vertex occlusion information respectively corresponding to the multiple vertices in free slots of UV data of the corresponding vertices;
    the obtaining the encoding results of the vertex occlusion information respectively corresponding to the multiple vertices on the model of the virtual object comprises:
    obtaining, from the UV data of the multiple vertices, the encoding results of the vertex occlusion information of the corresponding vertices.
  10. A rendering apparatus for superimposed light occlusion, comprising:
    a first obtaining unit, configured to obtain direction information of superimposed light in an environment where a virtual object is located;
    a second obtaining unit, configured to obtain data of multiple vertices on a model of the virtual object, wherein the data of a vertex contains an encoding result of vertex occlusion information, and the vertex occlusion information is obtained by performing light occlusion detection on the vertex in advance;
    a decoding unit, configured to decode the encoding result according to the direction information to obtain decoded vertex occlusion information;
    a rendering unit, configured to obtain superimposed light occlusion rendering data for the virtual object according to the decoded vertex occlusion information respectively corresponding to the multiple vertices and light intensity information of the superimposed light.
  11. A rendering device for superimposed light occlusion, the device comprising a processor and a memory:
    the memory being configured to store a computer program and transmit the computer program to the processor;
    the processor being configured to execute, according to the computer program, the steps of the rendering method for superimposed light occlusion according to any one of claims 1 to 9.
  12. A computer-readable storage medium for storing a computer program, the computer program being used to execute the steps of the rendering method for superimposed light occlusion according to any one of claims 1 to 9.
  13. A computer program product, comprising a computer program that, when executed by a computer device, implements the steps of the rendering method for superimposed light occlusion according to any one of claims 1 to 9.
PCT/CN2023/123241 2022-11-03 2023-10-07 Rendering method and apparatus for superimposed light occlusion, and related products WO2024093609A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211372129.6 2022-11-03
CN202211372129.6A CN117994392A (zh) 2022-11-03 2022-11-03 Rendering method and apparatus for superimposed light occlusion, and related products

Publications (1)

Publication Number Publication Date
WO2024093609A1

Family

ID=90897980

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/123241 WO2024093609A1 (zh) 2022-11-03 2023-10-07 Rendering method and apparatus for superimposed light occlusion, and related products

Country Status (2)

Country Link
CN (1) CN117994392A (zh)
WO (1) WO2024093609A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111420404A (zh) * 2020-03-20 2020-07-17 NetEase (Hangzhou) Network Co., Ltd. Method and apparatus for rendering objects in a game, electronic device, and storage medium
CN111760277A (zh) * 2020-07-06 2020-10-13 NetEase (Hangzhou) Network Co., Ltd. Lighting rendering method and apparatus
US20200380790A1 (en) * 2019-06-03 2020-12-03 Eidos Interactive Corp. Systems and methods for augmented reality applications

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200380790A1 (en) * 2019-06-03 2020-12-03 Eidos Interactive Corp. Systems and methods for augmented reality applications
CN111420404A (zh) * 2020-03-20 2020-07-17 NetEase (Hangzhou) Network Co., Ltd. Method and apparatus for rendering objects in a game, electronic device, and storage medium
CN111760277A (zh) * 2020-07-06 2020-10-13 NetEase (Hangzhou) Network Co., Ltd. Lighting rendering method and apparatus

Also Published As

Publication number Publication date
CN117994392A (zh) 2024-05-07

Similar Documents

Publication Publication Date Title
US11232534B2 (en) Scheme for compressing vertex shader output parameters
CN109685876B (zh) Hair rendering method and apparatus, electronic device, and storage medium
RU2677584C1 (ru) Использование межкадровой когерентности в архитектуре построения изображений с сортировкой примитивов на промежуточном этапе
CN110196746B (zh) Interactive interface rendering method and apparatus, electronic device, and storage medium
US6700586B1 (en) Low cost graphics with stitching processing hardware support for skeletal animation
WO2022042436A1 (zh) Image rendering method and apparatus, electronic device, and storage medium
KR20220044587A (ko) Image rendering method and related device
WO2021008627A1 (zh) Game character rendering method and apparatus, electronic device, and computer-readable medium
CN109427096A (zh) Augmented-reality-based automatic tour guide method and system
EP3699869A1 (en) Using compute shaders as front end for vertex shaders
US20230120253A1 (en) Method and apparatus for generating virtual character, electronic device and readable storage medium
CN105556565A (zh) Fragment shaders performing vertex shader computations
KR20140139553A (ko) 그래픽 프로세싱 유닛들에서 가시성 기반 상태 업데이트들
TWI535277B (zh) 用於深度緩衝之方法、設備及系統
JP2021524094A (ja) Virtual scene recognition and interaction key matching method for an application, and computing device
KR20120125395A (ko) 그래픽 시스템에서 2차 프로세서를 이용하기 위한 시스템 및 방법
CN110533755A (zh) Scene rendering method and related apparatus
US20210343072A1 (en) Shader binding management in ray tracing
CN116091676B (zh) Face rendering method for a virtual object and training method for a point cloud feature extraction model
CN109427100A (zh) Virtual-reality-based accessory assembly method and system
CN109685884A (zh) Three-dimensional modeling method and system based on virtual reality
WO2022143367A1 (zh) Image rendering method and related device
WO2018209710A1 (zh) Image processing method and apparatus
RU2680355C1 (ru) Способ и система удаления невидимых поверхностей трёхмерной сцены
CN115375822A (zh) Cloud model rendering method and apparatus, storage medium, and electronic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23884541

Country of ref document: EP

Kind code of ref document: A1