WO2019033859A1 - Rendering method and terminal for simulated illumination - Google Patents

Rendering method and terminal for simulated illumination

Info

Publication number
WO2019033859A1
WO2019033859A1 (PCT/CN2018/093322)
Authority
WO
WIPO (PCT)
Prior art keywords
information
preset
normal
virtual object
terminal
Prior art date
Application number
PCT/CN2018/093322
Other languages
English (en)
French (fr)
Inventor
郭金辉 (Guo Jinhui)
李斌 (Li Bin)
陈慧 (Chen Hui)
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Priority to JP2020509081A priority Critical patent/JP7386153B2/ja
Priority to KR1020207005286A priority patent/KR102319179B1/ko
Publication of WO2019033859A1 publication Critical patent/WO2019033859A1/zh
Priority to US16/789,263 priority patent/US11257286B2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/55 Radiosity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Definitions

  • The present application relates to modeling technology in the field of electronic applications, and in particular to rendering of simulated illumination.
  • A terminal generally uses the Unity development tool to generate 3D virtual objects: first, the terminal can obtain a design model from a model design tool and import the design model into the Unity development tool to process 3D scenes and 3D virtual objects; the model design tool usually used is Cinema 4D.
  • However, Cinema 4D is poorly compatible with Unity: the model design tool provides a design model with a rather good ambient-light effect, but the Unity development tool does not provide equally good ambient light, so the design model imported into Unity looks very poor.
  • The usual solution is to simulate the design tool's ambient-light effect in Unity through a combination of multiple lights. However, ambient light simulated with a combination of multiple lights severely degrades the performance and flexibility of the design model, and when a 3D virtual character moves, the lighting effect is uncontrollable because the light-combination model is immutable, resulting in a poor display of the 3D virtual object or character.
  • In view of this, the embodiments of the present application are expected to provide a rendering method and terminal for simulated illumination that can simulate ambient light close to a real environment while guaranteeing the shadow detail of the 3D design model, and use the simulated ambient light to process the 3D virtual object model, thereby improving the display of the 3D virtual object model or virtual character model.
  • An embodiment of the present application provides a rendering method for simulated illumination, including:
  • a terminal acquires first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model;
  • the terminal performs vertex-space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information;
  • the terminal obtains, according to a preset color-setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color-setting rule is used to represent the correspondence between color and illumination;
  • the terminal renders the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
  • An embodiment of the present application provides a terminal, including:
  • an acquiring unit, configured to acquire first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model;
  • a conversion unit, configured to perform vertex-space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information;
  • the acquiring unit is further configured to obtain, according to a preset color-setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color-setting rule is used to represent the correspondence between color and illumination;
  • a rendering unit, configured to render the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
  • An embodiment of the present application further provides a terminal, including:
  • a processor, a memory, a display, and a communication bus, where the processor, the memory, and the display are connected through the communication bus;
  • the processor is configured to invoke a rendering-related program for simulated illumination stored in the memory and perform the following steps:
  • acquiring first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model; performing vertex-space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtaining, according to a preset color-setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color-setting rule is used to represent the correspondence between color and illumination; and rendering the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model;
  • the display is configured to display the second virtual object model.
  • An embodiment of the present application provides a computer-readable storage medium applied in a terminal; the computer-readable storage medium stores one or more rendering programs for simulated illumination, which can be executed by one or more processors to implement the above rendering method for simulated illumination.
  • An embodiment of the present application provides a rendering method and terminal for simulated illumination: acquire first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model; perform vertex-space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtain, according to a preset color-setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color-setting rule is used to represent the correspondence between color and illumination; and render the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
  • Since the terminal can parse the illumination information corresponding to each mesh vertex from the fine normal information determined by the high-poly model, this illumination information can be used as ambient light to render the first virtual object model. Because the normal information is highly precise, the shadow detail of the 3D design model is guaranteed and ambient light close to the real environment is simulated; therefore, the precision of the second virtual object model rendered by this method is high, improving the display of the second virtual object model.
  • FIG. 1 is a first flowchart of a rendering method for simulated illumination according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of an exemplary normal-map effect according to an embodiment of the present application;
  • FIG. 3 is a second flowchart of a rendering method for simulated illumination according to an embodiment of the present application;
  • FIG. 4 is a diagram of an exemplary normal-map creation interface provided by an embodiment of the present application;
  • FIG. 5 is a schematic planar unfolded view of an exemplary model according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of an exemplary normal map provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of an exemplary rendering effect provided by an embodiment of the present application;
  • FIG. 8 is a first schematic structural diagram of a terminal according to an embodiment of the present application;
  • FIG. 9 is a second schematic structural diagram of a terminal according to an embodiment of the present application;
  • FIG. 10 is a third schematic structural diagram of a terminal according to an embodiment of the present application;
  • FIG. 11 is a fourth schematic structural diagram of a terminal according to an embodiment of the present application.
  • The rendering method for simulated illumination provided by the embodiments of the present application can be applied in any application or function that uses 3D models; an application terminal can install such an application and implement the corresponding functions through data interaction with the server corresponding to the application.
  • The embodiments of the present application mainly describe the rendering method adopted by the terminal (a development terminal) when processing a 3D model, so as to achieve a better display of the 3D model.
  • An embodiment of the present application provides a rendering method for simulated illumination. As shown in FIG. 1, the method may include:
  • S101: Acquire first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model.
  • It should be noted that when processing a 3D model, the terminal may first build the model with a 3D model design tool, then further process the model with a 3D model development tool, and finally obtain the required 3D model.
  • The 3D model design tool may be 3ds Max, ZBrush, Cinema 4D, or the like; the embodiments of the present application do not limit the kind of model design tool. The 3D model development tool may be Unity 3D, Unreal, or the like; the embodiments of the present application do not limit the kind of model development tool.
  • In one possible implementation of the embodiments of the present application, the 3D model design tool is Cinema 4D and the 3D model development tool is Unity 3D.
  • The terminal builds the required 3D virtual object model in the 3D design tool and, after the model is built, exports it for further model processing by the 3D model development tool.
  • The embodiments of the present application mainly describe the process of rendering the 3D virtual object model in the 3D model development tool after the model has been built.
  • It should be noted that the terminal builds virtual objects according to the requirements of application development. For the same virtual object, the terminal may build a low-precision low-poly model, namely the preset first virtual object model, and a high-precision high-poly model, namely the preset third virtual object model; that is, the preset first virtual object model and the preset third virtual object model are both built for the same virtual object, the only difference being the modeling precision.
  • In this way, the terminal can acquire the relevant information of the preset first virtual object model and the preset third virtual object model, and the terminal can then generate a normal map from the preset third virtual object model.
  • The normal map can be produced by high-poly baking. Simply put, make a high-precision model with several million, tens of millions, or even hundreds of millions of faces (the preset third virtual object model), make a low-precision model with thousands or tens of thousands of faces (the preset first virtual object model), and then bake the detail of the high-poly model onto the low-poly model to obtain a normal map.
  • A 3D model in the terminal approximates an object by combining many polygon faces; it is not smooth, and the more faces it has, the closer it is to the real object. When light hits a point on a face, the normal at that point is obtained by interpolating from several vertices of that face; the interpolation simulates the "correct" normal direction at that point, for if every point on a face shared the same normal, the lit model would, exaggerating a little, look like mirrors stitched together.
  • On this basis, the terminal's 3D development tool can parse the preset first virtual object model to acquire its first mesh vertex information and the first color information corresponding to the first mesh vertex information, and also parse the normal map to acquire the first normal information; the first virtual object model is the preset model to be processed, and the first normal information is obtained by baking the high-poly model (i.e., the preset third virtual object model) corresponding to the preset first virtual object model.
  • Here, from the acquired preset first virtual object model, the terminal can obtain the model's UV map and diffuse map; the terminal parses the UV map to obtain the first mesh vertex information, and parses the diffuse map to obtain the vertex color information of the preset first virtual object model, namely the first color information.
  • It should be noted that 3D models are constructed from different facets; therefore, the preset first virtual object model contains many vertices, and the first mesh vertex information refers to the coordinate information of each mesh vertex of the preset first virtual object model.
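  • For illustration only, the per-vertex data just described could be laid out as a Cg/HLSL vertex-input structure of the kind a Unity vertex shader consumes (these field names are assumptions, not taken from the patent):

    struct VertexInput
    {
        float4 position : POSITION;   // coordinate information of the mesh vertex (first mesh vertex information)
        float3 normal   : NORMAL;     // per-vertex normal (first normal information)
        float2 uv       : TEXCOORD0;  // UV coordinates parsed from the UV map
        float4 color    : COLOR;      // vertex color parsed from the diffuse map (first color information)
    };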
  • S102: Perform vertex-space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information.
  • Since the first normal information acquired by the terminal is normal information in model space, while during rendering the terminal must ensure that the light direction and the normal information are in the same coordinate system (the light direction is generally in world space while the normal information is in model space), after the terminal acquires the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, it must perform vertex-space conversion on the first normal information to satisfy the rendering requirement, finally obtaining the second normal information corresponding to the first mesh vertex information.
  • Each mesh vertex can correspond to one piece of normal information (collectively, the first normal information), and the vertex information of all mesh vertices is collectively the first mesh vertex information; therefore, the first mesh vertex information of the preset first virtual object model corresponds to the first normal information.
  • After the terminal performs vertex-space conversion on the first normal information, it obtains the second normal information in world space.
  • It should be noted that a normal is a line perpendicular to a face. By computing the angle between the light and this normal, the angle with the face is known, and the color information the face should receive can be calculated; the embodiments of the present application use this principle to perform simulated-lighting processing.
  • The first normal information corresponding to each mesh vertex is saved to the corresponding pixel of the normal map. Therefore, in a normal map, by storing the normal of each pixel in a texture, the darkness of each pixel can be determined at render time from its normal. That is, the first normal information records the numerical detail of the highlights and shadows of each piece of vertex information, and the first normal information is stored on the normal map as the three RGB colors.
  • Here, a normal is a three-dimensional vector composed of three components X, Y, and Z; these three components are stored as the values of the three colors red, green, and blue to generate a new texture, the normal map. In the normal map, the red and green channels represent the horizontal and vertical offsets, and the blue channel represents the perpendicular offset.
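  • A minimal sketch of this storage scheme (variable names are illustrative): a unit normal's components lie in [-1, 1], so they are remapped into [0, 1] before being written to the red, green, and blue channels, and the inverse remapping recovers the normal when the map is sampled.

    float3 rgb = n * 0.5 + 0.5;    // store: X, Y, Z in [-1, 1] become R, G, B in [0, 1]
    float3 n2  = rgb * 2.0 - 1.0;  // sample: R, G, B back to the normal's X, Y, Z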
  • It can be understood that the terminal can think of each pixel of the normal map as one normal. If the normal map is 512*512 pixels, i.e., 262,144 pixels, then applying it to the preset first virtual object model is equivalent to having 262,144 normals on the model (of course this is not literally the case), so a preset first virtual object model with a few hundred faces instantly looks as detailed as one with hundreds of thousands of faces.
  • S103: Obtain, according to the preset color-setting rule and the second normal information, the first illumination information corresponding to the first mesh vertex information, where the preset color-setting rule is used to represent the correspondence between color and illumination.
  • After the terminal performs vertex-space conversion on the first normal information to obtain the second normal information corresponding to the first mesh vertex information, since the second normal information corresponds one-to-one with the first mesh vertex information, the terminal can obtain the first illumination information corresponding to the first mesh vertex information according to the preset color-setting rule and the second normal information. That is, the terminal can project the normals given by the second normal information onto a front plane; the projection point of each normal can then be converted into UV coordinates of the normal map (corresponding to the first mesh vertex information), and the terminal can set a color for each mesh vertex corresponding to the UV coordinates according to the preset color-setting rule, thereby obtaining the first illumination information, where the preset color-setting rule is used to represent the correspondence between color and illumination.
  • Here, according to the second normal information and the preset color-setting rule, the terminal can store one piece of color information at each mesh vertex of the first mesh vertex information, obtain second color information of the first mesh vertex information, and use the second color information as the first illumination information.
  • It should be noted that when normals in all directions are projected onto a front plane, the projection range is x in (-1, 1) and y in (-1, 1), which forms a circle; therefore, the effective range of the normal map is essentially a circle. Accordingly, when the first illumination information obtained from the second normal information is made into a light map, the region of the light map that stores the first illumination information is a circle.
  • The light map can be a material-capture (MatCap) texture: the MatCap texture of a particular material sphere is used as the view-space environment map of the current material to display a reflective object with uniform surface shading.
  • With a MatCap-based shader, the terminal does not need to provide any lights; one or more suitable MatCap textures suffice as a "guide" for the lighting result.
  • the shader in the embodiment of the present application is a tool for rendering a three-dimensional model.
  • The preset color-setting rule is based on the principle that black absorbs less light and white absorbs more. Therefore, when the illumination at a mesh vertex is strong, the second color information set for it should be shifted toward white; when the illumination at a mesh vertex is weak, the second color information should be shifted toward black. In this way the terminal obtains first illumination information that represents illumination by the second color information.
  • Color values can be selected from 0 to 255: the closer to 0, the blacker, and the closer to 255, the whiter. That is, the light is strong in the central region of the circle in the light map, where the second color information is whitish, and weak at the edge, where it is blackish.
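  • A minimal Cg/HLSL sketch of the lookup these paragraphs describe (variable names are assumptions; UNITY_MATRIX_IT_MV is Unity's built-in inverse-transpose model-view matrix, discussed later in this document): the normal is projected onto a front plane, its x and y components are remapped from (-1, 1) into the [0, 1] UV range of the circular light map, and the sampled color is the second color information, i.e., the first illumination information.

    float3 n = normalize(mul((float3x3)UNITY_MATRIX_IT_MV, input.normal)); // normal in view space
    float2 matCapUV = n.xy * 0.5 + 0.5;               // projection point -> UV inside the circle
    float3 lightColor = tex2D(_MatCap, matCapUV).rgb; // second color information (first illumination information)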
  • S104: Render the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
  • After the terminal obtains the first illumination information corresponding to the first mesh vertex information according to the preset color-setting rule and the second normal information, it can use the first illumination information to simulate ambient light; moreover, the first illumination information was obtained from the relevant normal information and combines high-precision lighting and shadow detail, so it is close to real ambient light. The terminal can thus render the first virtual object model according to the first illumination information, the first color information, and the first mesh vertex information to obtain the second virtual object model.
  • The terminal can use the first illumination information and the first color information to fill the vertex color of each mesh vertex corresponding to the first mesh vertex information, obtaining the main vertex color information of the vertices, and then process the preset first virtual object model using the main vertex color information.
  • Further, the terminal's rendering of the preset first virtual object model may involve processing in various respects such as texture and color, which the embodiments of the present application do not limit.
  • The normal map is created based on the model's UVs and can record the detail values of highlights and shadows; after the terminal obtains the normal map and applies it to the preset first virtual object model, the resulting second virtual object model has high precision, and the bumpiness of the surface texture is well reflected.
  • For example, suppose the preset first virtual object model is a candlestick model. As shown in FIG. 2, the candlestick model is on the left; after the normal map is applied to it, the candlestick model becomes the one shown on the right, which looks markedly three-dimensional.
  • In other words, when rendering the 3D model, the terminal can use the normal map to process texture details; this is generally implemented with a vertex shader.
  • Further, before S101, as shown in FIG. 3, the rendering method for simulated illumination provided by the embodiment of the present application may further include S105 to S107, as follows:
  • S105: Acquire second mesh vertex information corresponding to a preset third virtual object model, where the preset third virtual object model is the high-poly model corresponding to the preset first virtual object model.
  • S106: Obtain a first normal direction according to the second mesh vertex information and a preset normal model.
  • S107: Determine, according to the preset correspondence between the second mesh vertex information and the first mesh vertices, the first normal information corresponding to the first normal direction.
  • Before implementing rendering in the shader, the terminal first generates the normal map of the high-precision model of the same simulated virtual object and thereby parses out the first normal information in the normal map.
  • The terminal can obtain the normal map by high-poly baking: the terminal acquires the second mesh vertex information of the high-poly model (i.e., the preset third virtual object model) corresponding to the preset first virtual object model, obtains the first normal direction according to the second mesh vertex information and the preset normal model, and finally determines the first normal information corresponding to the first normal direction according to the preset correspondence between the second mesh vertex information and the first mesh vertices.
  • Thanks to the normal map, the light-and-shadow detail of the high-poly model can be simulated on the low-poly model. What matters most is the angle between the incident light direction and the normal at the point of incidence; the normal map essentially records this angle information, and the lighting computation is closely tied to the normal direction on a given surface.
  • The terminal knows the second mesh vertex information of the preset third virtual object model (i.e., the information of each mesh vertex in that model); when light reaches a point on the preset third virtual object model, the first normal direction at that point is obtained by interpolation (i.e., by the preset normal model).
  • The preset third virtual object model is projected onto the preset first virtual object model to form a two-dimensional projection (for example, an xy-plane projection); the terminal then obtains, on that projection, the two projection components (the x and y directions) of the first normal of each mesh vertex corresponding to the first mesh vertex information, and takes the first normal direction of each mesh vertex as the z direction. This yields the first normal information at each mesh vertex. The terminal then saves the first normal information of each point to the corresponding pixel of the normal map; the actual computation maps the magnitudes of the x, y, and z components of the first normal information into the RGB color space, i.e., the x value is stored in r, the y value in g, and the z value in b.
  • When the terminal renders the preset first virtual object model, the normal map has already been obtained, and the terminal obtains the first normal information by parsing the normal map.
  • The normal map in the embodiments of the present application may be an object-space normal map.
  • For example, suppose the preset third virtual object model is a face model. The display window for creating the face model's normal map is shown in FIG. 4; the face model is projected by baking to obtain the planar unfolded view shown in FIG. 5; finally, the terminal expands the planar unfolded view of FIG. 5 into RGB values and obtains the normal map shown in FIG. 6.
  • Further, before S101, the rendering method for simulated illumination provided by the embodiment of the present application further includes S108, as follows:
  • S108: Acquire a scene file, and establish a first scene according to the scene file.
  • Before rendering the preset first virtual object model, the terminal needs to acquire the scene file for which a 3D model is to be built, establish a first scene according to the scene file, and then display and process the preset first virtual object model in the first scene.
  • The embodiments of the present application do not limit the type of the first scene, which may be any of various scenes such as a snow scene or a desert.
  • It should be noted that S108 comes early in the terminal's model-processing sequence; that is, the terminal may execute S108 before S105 to S107.
  • Further, after S104, the rendering method for simulated illumination provided by the embodiment of the present application may further include S109, as follows:
  • S109: Display the second virtual object model in the first scene.
  • After the terminal obtains the rendered second virtual object model, since the second virtual object model is the final rendered model and the whole model processing takes place in the first scene, the terminal can display the second virtual object model in the first scene.
  • On the basis of the above, an embodiment of the present application provides a rendering method for simulated illumination in which performing vertex-space conversion on the first normal information in S102 to obtain the second normal information corresponding to the first mesh vertex information may include S1021 to S1022, as follows:
  • S1021: Perform vertex-space conversion on the first normal information into tangent space to obtain third normal information.
  • S1022: Normalize the third normal information to obtain the second normal information.
  • Because the terminal uses a MatCap map, the main task is to convert the first normal information from object space into tangent space and remap it into the [0, 1] range suitable for sampling the texture UV.
  • The tangent space used for the high-poly model is built on the low-poly model (the preset first virtual object model). When the terminal generates the normal map, it must first determine which faces of the high-poly model correspond to which faces of the low-poly model, and then convert the normals of the faces of the high-poly model into coordinates of the tangent space constructed on the faces of the low-poly model.
  • The normal information (i.e., the normal values) can be carried into other coordinate systems by the transformation matrices between coordinate systems; the first normal information stored for the high-poly model corresponds to normals in the high-poly model's object space.
  • When the terminal performs vertex-space conversion of the first normal information into tangent space, a specific implementation is as follows. For each mesh vertex corresponding to the first mesh vertex information, the conversion from object space could use the model-view matrix: a tangent transformed by the model-view matrix from object space to eye space still conforms to the definition of a tangent, but the normal in the first normal information, transformed the same way, is no longer perpendicular to the mesh vertex's tangent, so the model-view matrix does not apply to normals. This is because, supposing T is the tangent, MV is the model-view matrix, and P1 and P2 are the two mesh vertices connected by the tangent, then T = P2 - P1 (1) and MV·T = MV·P2 - MV·P1 (2), so the tangent transforms directly under the model-view matrix while remaining the edge between the transformed vertices.
  • The normal transformation must keep the normal perpendicular to the tangent. Now assume the normal matrix is G; then the perpendicularity of normal N and tangent T after transformation gives formula (3): (G·N)·(MV·T) = N^T·G^T·MV·T = 0 (3). Since N^T·T = 0 already holds, formula (3) is satisfied when G^T·MV = I, that is, G = ((MV)^(-1))^T: the normal matrix is the transposed matrix of the inverse matrix of the model-view matrix.
  • In this way, the first normal information can be converted from model space to tangent space by the normal matrix, obtaining the third normal information.
  • Take Unity as an example. Unity's built-in normal matrix can be written as UNITY_MATRIX_IT_MV, the inverse-transpose matrix of UNITY_MATRIX_MV (the model-view matrix); its function is to transform the first normal information from model space into tangent space to obtain the third normal information.
  • This process is implemented in the vertex shader as follows:
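  • A minimal sketch of this step (variable names are assumptions):

    // Carry the object-space normal through the inverse-transpose model-view
    // matrix to obtain the third normal information.
    float3 normal3 = mul((float3x3)UNITY_MATRIX_IT_MV, input.normal);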
  • After the terminal converts the first normal information from object space into tangent space to obtain the third normal information, the third normal information still needs to be remapped into the range suitable for sampling the texture UV, [0, 1]; normalization then yields the second normal information.
  • the terminal normalizes the third normal information, and the process of obtaining the second normal information is implemented in the vertex shader, as follows:
    output.position = mul(UNITY_MATRIX_MVP, input.position);
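  • A hedged completion of this listing (diffuseUVAndMatCapCoords matches the fragments later in this document; the other names are assumptions) might continue:

    // Third normal information: the object-space normal carried into eye space.
    float3 n = normalize(mul((float3x3)UNITY_MATRIX_IT_MV, input.normal));
    // Second normal information: remap from (-1, 1) into the [0, 1] range
    // used to sample the MatCap light map.
    output.diffuseUVAndMatCapCoords.zw = n.xy * 0.5 + 0.5;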
  • An embodiment of the present application provides a rendering method for simulated illumination in which rendering the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information in S104 to obtain the second virtual object model may include S1041 to S1042, as follows:
  • S1041: Interpolate the first color information and the second color information corresponding to the first mesh vertex information to obtain the main vertex color information of each mesh vertex corresponding to the first mesh vertex information.
  • S1042: Draw according to the main vertex color information and the correspondence of each mesh vertex to obtain the second virtual object model.
  • In the terminal's preset first virtual object model, the main vertex color corresponding to each piece of mesh vertex information is initially the first color information, and the terminal renders the preset first virtual object model using the light map.
  • The terminal interpolates the first color information and the second color information corresponding to the first mesh vertex information to obtain the main vertex color information of each mesh vertex corresponding to the first mesh vertex information; the terminal then draws according to the correspondence between the main vertex color information and each mesh vertex to obtain the second virtual object model.
  • The first color information in the embodiments of the present application may be the original main vertex color information; the terminal can obtain a detail texture according to the second normal information in the normal map, and thereby obtain detail color information.
  • The process by which the terminal obtains the detail color information is as follows:

    float3 detailMask = tex2D(_DetailTex, input.detailUVCoordsAndDepth.xy).rgb;
    float3 detailColor = lerp(_DetailColor.rgb, mainColor.rgb, detailMask);
  • After obtaining the detail color, the terminal may first blend the detail color with the first color information, the result being the new main vertex color information, and then combine it with the second color information extracted from the light map (the first illumination information) to obtain the final main vertex color information:
    mainColor.rgb = lerp(detailColor, mainColor.rgb, saturate(input.detailUVCoordsAndDepth.z * _DetailTexDepthOffset));
    float3 matCapColor = tex2D(_MatCap, input.diffuseUVAndMatCapCoords.zw).rgb;
    float4 finalColor = float4(mainColor.rgb * matCapColor * 2.0, mainColor.a);
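  • A plausible reading of the final line: the multiplication by 2.0 recenters the MatCap modulation so that mid-gray (0.5) in the light map leaves mainColor unchanged, values above mid-gray brighten it, and values below mid-gray darken it.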
  • In other words, the terminal renders the preset first virtual object model by combining the original model, the normal map, and the MatCap texture, obtaining simulated ambient light while guaranteeing the output of shadow detail.
  • This establishes the correspondence between mesh vertices and color information (the correspondence between the main vertex color information and each mesh vertex); the terminal draws according to this correspondence between each mesh vertex and its color information to obtain the second virtual object model.
  • For comparison of 3D character models: the effect of a 3D character model achieved with the rendering method used in the embodiments of the present application is shown as model 1 in FIG. 7, while the effect achieved with the previous rendering mode is shown as model 2 in FIG. 7. The comparison shows that the precision of the display effect of model 1 is higher than that of model 2, i.e., the display effect of the second virtual object model is improved.
  • A further result of rendering the 3D virtual character model in the present application is that the seam between head and body is weakened by the ambient light simulated with the normal map. The traces at the seams mainly come from aligning the shadows at the seams of the normal map. MatCap keeps adjoining parts, such as the head and body, at the same light level at their joints when simulating ambient light, avoiding the situation where a seam becomes more visible because the light levels differ; that is, the point rendering at the seams of the blocks, parts, or cut surfaces of the 3D character model is weakened.
  • An embodiment of the present application provides a terminal 1. As shown in FIG. 8, the terminal 1 may include:
  • an acquiring unit 10, configured to acquire first mesh vertex information of the preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model;
  • the converting unit 11 is configured to perform vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information;
  • the obtaining unit 10 is further configured to obtain, according to the preset color setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color setting rule is used for Characterizing the correspondence between color and light;
  • a rendering unit 12, configured to render the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
  • Further, the converting unit 11 is specifically configured to perform vertex-space conversion on the first normal information into tangent space to obtain third normal information, and to normalize the third normal information to obtain the second normal information.
  • Further, the acquiring unit 10 is specifically configured to store, according to the second normal information and the preset color-setting rule, one piece of color information at each mesh vertex of the first mesh vertex information, to obtain second color information of the first mesh vertex information, and to use the second color information as the first illumination information.
  • Further, the rendering unit 12 is specifically configured to interpolate the first color information and the second color information corresponding to the first mesh vertex information to obtain the main vertex color information of each mesh vertex corresponding to the first mesh vertex information, and to draw according to the main vertex color information and the correspondence of each mesh vertex to obtain the second virtual object model.
  • The acquiring unit 10 is further configured to: before acquiring the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, acquire second mesh vertex information corresponding to a preset third virtual object model, where the preset third virtual object model is the high-poly model corresponding to the preset first virtual object model; obtain a first normal direction according to the second mesh vertex information and the preset normal model; and determine, according to the preset correspondence between the second mesh vertex information and the first mesh vertices, the first normal information corresponding to the first normal direction.
  • the terminal 1 further includes: an establishing unit 13;
  • The acquiring unit 10 is further configured to acquire a scene file before acquiring the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information;
  • the establishing unit 13 is configured to establish a first scenario according to the scenario file.
  • the terminal 1 further includes: a display unit 14;
  • The display unit 14 is configured to display the second virtual object model in the first scene after the second virtual object model is obtained by drawing according to the correspondence between the main vertex color information and each mesh vertex.
  • It can be understood that since the terminal can parse the illumination information corresponding to each mesh vertex from the fine normal information determined by the high-poly model, this illumination information can be used as ambient light to render the first virtual object model. Because the normal information is highly precise, the shadow detail of the 3D design model is guaranteed and ambient light close to the real environment is simulated; therefore, the precision of the second virtual object model rendered this way is high, improving the display effect of the second virtual object model.
  • An embodiment of the present application provides a terminal. As shown in FIG. 11, the terminal may include a processor 15, a memory 16, a display 17, and a communication bus, the processor 15, the memory 16, and the display 17 being connected through the communication bus.
  • The processor 15 is configured to invoke a rendering-related program for simulated illumination stored in the memory 16 and perform the following steps:
  • acquiring first mesh vertex information of the preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model; performing vertex-space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtaining, according to the preset color-setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color-setting rule is used to represent the correspondence between color and illumination; and rendering the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
  • the display 17 is configured to display the second virtual object model.
  • Further, the processor 15 is configured to perform vertex-space conversion on the first normal information into tangent space to obtain third normal information, and to normalize the third normal information to obtain the second normal information.
  • Further, the processor 15 is configured to store, according to the second normal information and the preset color-setting rule, one piece of color information at each mesh vertex of the first mesh vertex information, to obtain second color information of the first mesh vertex information, and to use the second color information as the first illumination information.
  • Further, the processor 15 is configured to interpolate the first color information and the second color information corresponding to the first mesh vertex information to obtain the main vertex color information of each mesh vertex corresponding to the first mesh vertex information, and to draw according to the main vertex color information and the correspondence of each mesh vertex to obtain the second virtual object model.
  • The processor 15 is further configured to: before acquiring the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, acquire second mesh vertex information corresponding to the preset third virtual object model, where the preset third virtual object model is the high-poly model corresponding to the preset first virtual object model; obtain a first normal direction according to the second mesh vertex information and the preset normal model; and determine, according to the preset correspondence between the second mesh vertex information and the first mesh vertices, the first normal information corresponding to the first normal direction.
  • The processor 15 is further configured to acquire a scene file before acquiring the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, and to establish a first scene according to the scene file.
  • The display 17 is configured to display the second virtual object model in the first scene after the second virtual object model is obtained by drawing according to the correspondence between the main vertex color information and each mesh vertex.
  • It can be understood that since the terminal can parse the illumination information corresponding to each mesh vertex from the fine normal information determined by the high-poly model, this illumination information can be used as ambient light to render the first virtual object model. Because the normal information is highly precise, the shadow detail of the 3D design model is guaranteed and ambient light close to the real environment is simulated; therefore, the precision of the second virtual object model rendered this way is high, improving the display effect of the second virtual object model.
  • The above memory may be a volatile memory, such as a random access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above kinds of memory, and it provides instructions and data to the processor.
  • The processor may be at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a central processing unit (CPU), a controller, a microcontroller, and a microprocessor. It can be understood that for different devices, other electronic devices may be used to implement the functions of the above processor, which is not specifically limited in the embodiments of the present application.
  • the functional modules in this embodiment may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software function module.
  • the integrated unit may be stored in a computer readable storage medium if it is implemented in the form of a software function module and is not sold or used as a stand-alone product.
  • Based on this understanding, the part of the technical solution of this embodiment that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a computer-readable storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method described in this embodiment.
  • The foregoing storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • An embodiment of the present application provides a computer-readable storage medium applied in a terminal; the computer-readable storage medium stores one or more rendering programs for simulated illumination, which can be executed by one or more processors to implement the methods described in the first and second embodiments.
  • embodiments of the present application can be provided as a method, system, or computer program product. Accordingly, the application can take the form of a hardware embodiment, a software embodiment, or an embodiment in combination with software and hardware. Moreover, the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data-processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data-processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application disclose a rendering method for simulated illumination. The method may include: acquiring first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model; performing vertex-space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtaining, according to a preset color-setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color-setting rule is used to represent the correspondence between color and illumination; and rendering the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model. The embodiments of the present application also disclose a terminal.

Description

Rendering method and terminal for simulated illumination
This application claims priority to Chinese Patent Application No. 201710711285.3, entitled "Rendering method and terminal for simulated illumination", filed with the Chinese Patent Office on August 18, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to modeling technology in the field of electronic applications, and in particular to rendering of simulated illumination.
Background
With the continuous development of science and technology, electronic technology has also developed rapidly, the variety of electronic products keeps growing, and people enjoy all kinds of conveniences brought by technological development. People can now enjoy the comfortable life brought by technological development through various types of electronic devices or terminals and through applications of various functions installed on a terminal (an application terminal).
At present, social or game applications use virtual objects (for example, three-dimensional virtual characters) to simulate a user's role or image vividly. A terminal (a development terminal) generally uses the Unity development tool to generate three-dimensional virtual objects. First, the terminal (development terminal) can obtain a design model from a model design tool and import the design model into the Unity development tool in order to process three-dimensional scenes and three-dimensional virtual objects; the model design tool usually used is Cinema 4D. However, Cinema 4D is poorly compatible with Unity: the model design tool provides a design model with a rather good ambient-light effect, but the Unity development tool does not provide such good ambient light, so importing the design model into Unity gives a very poor result.
In this case, the usual solution is to simulate the design tool's ambient-light effect in Unity through a combination of multiple lights. However, ambient light simulated with a combination of multiple lights severely degrades the performance and flexibility of the design model, and when the three-dimensional virtual character moves, the lighting effect is uncontrollable because the light-combination model is immutable, resulting in a very poor display of the three-dimensional virtual object or character.
Summary
To solve the above technical problem, the embodiments of the present application are expected to provide a rendering method and terminal for simulated illumination that can simulate ambient light close to a real environment while guaranteeing the shadow detail of the three-dimensional design model, and use the simulated ambient light to process the three-dimensional virtual object model, thereby improving the display of the three-dimensional virtual object model or virtual character model.
The technical solution of the present application is implemented as follows:
An embodiment of the present application provides a rendering method for simulated illumination, including:
a terminal acquires first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model;
the terminal performs vertex-space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information;
the terminal obtains, according to a preset color-setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color-setting rule is used to represent the correspondence between color and illumination;
the terminal renders the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
An embodiment of the present application provides a terminal, including:
an acquiring unit, configured to acquire first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model;
a conversion unit, configured to perform vertex-space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information;
the acquiring unit is further configured to obtain, according to a preset color-setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color-setting rule is used to represent the correspondence between color and illumination;
a rendering unit, configured to render the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
An embodiment of the present application further provides a terminal, including:
a processor, a memory, a display, and a communication bus, where the processor, the memory, and the display are connected through the communication bus;
the processor is configured to invoke a rendering-related program for simulated illumination stored in the memory and perform the following steps:
acquiring first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model; performing vertex-space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtaining, according to a preset color-setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color-setting rule is used to represent the correspondence between color and illumination; and rendering the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model;
the display is configured to display the second virtual object model.
An embodiment of the present application provides a computer-readable storage medium applied in a terminal; the computer-readable storage medium stores one or more rendering programs for simulated illumination, and the one or more rendering programs for simulated illumination can be executed by one or more processors to implement the above rendering method for simulated illumination.
The embodiments of the present application provide a rendering method and terminal for simulated illumination: acquire first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model; perform vertex-space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtain, according to a preset color-setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color-setting rule is used to represent the correspondence between color and illumination; and render the first virtual object model using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model. With the above technical implementation, since the terminal can parse the illumination information corresponding to each mesh vertex from the fine normal information determined by the high-poly model, this illumination information can be used as ambient light to render the first virtual object model; because the normal information is highly precise, the shadow detail of the three-dimensional design model is guaranteed and ambient light close to the real environment is simulated, so the display precision of the second virtual object model rendered in this way is high, improving the display effect of the second virtual object model.
Brief Description of the Drawings
FIG. 1 is a first flowchart of a rendering method for simulated illumination according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an exemplary normal-map effect according to an embodiment of the present application;
FIG. 3 is a second flowchart of a rendering method for simulated illumination according to an embodiment of the present application;
FIG. 4 is a diagram of an exemplary normal-map creation interface provided by an embodiment of the present application;
FIG. 5 is a schematic planar unfolded view of an exemplary model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an exemplary normal map provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an exemplary rendering effect provided by an embodiment of the present application;
FIG. 8 is a first schematic structural diagram of a terminal according to an embodiment of the present application;
FIG. 9 is a second schematic structural diagram of a terminal according to an embodiment of the present application;
FIG. 10 is a third schematic structural diagram of a terminal according to an embodiment of the present application;
FIG. 11 is a fourth schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings of the embodiments of the present application.
It should be noted that the rendering method for simulated illumination provided by the embodiments of the present application can be applied in any application or function that uses three-dimensional models; an application terminal can install such an application and implement the corresponding functions through data interaction with the server corresponding to the application.
The embodiments of the present application mainly describe the rendering method adopted by a terminal (a development terminal) when processing a three-dimensional model, so as to achieve a better display of the three-dimensional model.
Embodiment 1
An embodiment of the present application provides a rendering method for simulated illumination. As shown in FIG. 1, the method may include:
S101: Acquire first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high-poly model corresponding to the preset first virtual object model.
It should be noted that in the embodiments of the present application, when processing a three-dimensional model, the terminal may first build the model with a three-dimensional model design tool, then further process the model with a three-dimensional model development tool, and finally obtain the required three-dimensional model.
In the embodiments of the present application, the three-dimensional model design tool may be 3ds Max, ZBrush, Cinema 4D, or the like; the embodiments of the present application do not limit the kind of model design tool. The three-dimensional model development tool may be Unity 3D, Unreal, or the like; the embodiments of the present application do not limit the kind of model development tool. In one possible implementation of the embodiments of the present application, the three-dimensional model design tool is Cinema 4D and the three-dimensional model development tool is Unity 3D.
The terminal builds the required three-dimensional virtual object model in the three-dimensional design tool and, after the model is built, exports it for further model processing by the three-dimensional model development tool.
The embodiments of the present application mainly describe the process of rendering the three-dimensional virtual object model in the three-dimensional model development tool after the model has been built.
It should be noted that in the embodiments of the present application, the terminal builds virtual objects according to the requirements of application development. For the same virtual object, the terminal may build a low-precision low-poly model, namely the preset first virtual object model, and a high-precision high-poly model, namely the preset third virtual object model; that is, the preset first virtual object model and the preset third virtual object model are both built for the same virtual object, the only difference being the modeling precision.
In this way, the terminal can acquire the relevant information of the preset first virtual object model and the preset third virtual object model, and the terminal can then generate a normal map from the preset third virtual object model. The normal map can be produced by high-poly baking. Simply put, make a high-precision model with several million, tens of millions, or even hundreds of millions of faces (the preset third virtual object model), make a low-precision model with thousands or tens of thousands of faces (the preset first virtual object model), and then bake the detail information of the high-poly model onto the low-poly model to obtain a normal map. A three-dimensional model in the terminal approximates an object by combining many polygon faces; it is not smooth, and the more faces it has, the closer it is to the real object. Thus, when light hits a point on one of the faces, the normal at that point is obtained by interpolating from several vertices of that face; the interpolation is in fact meant to simulate the "correct" normal direction at that point, for if all points of a face shared the same normal, the lit model would, to exaggerate a little, look like mirrors stitched together.
在上述实现的基础上,终端的三维开发工具就可以解析预设第一虚拟对象模型,获取该预设第一虚拟对象模型的第一网格顶点信息和第一网格顶点信息对应的第一颜色信息,同时也解析法线贴图获取到第一法线信息,第一虚拟对象模型为待处理的预设模型,第一法线信息由预设第一虚拟对象模型对应的高模(即预设第三虚拟对象模型)进行烘焙得到。
这里,终端获取的预设第一虚拟对象模型中可以获取该预设第一虚拟对象模型的UV贴图和diffuse贴图,于是,终端解析UV贴图,得到该预设第一虚拟对象模型的第一网格顶点信息,该终端解析diffuse贴图,获取该预设第一虚拟对象模型的顶点颜色信息,即第一颜色信息。
需要说明的是,在本申请实施例中,三维模型的构建都是由不同的切面搭建的,因此,预设第一虚拟对象模型的中存在很多顶点,而第一网格顶点信息就指的是预设第一虚拟对象模型的每个网格顶点的坐标信息。
S102: Perform vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information.
Because the first normal information obtained by the terminal is normal information in model space, and during rendering the terminal needs to ensure that the illumination direction and the normal information are in the same coordinate system (the illumination direction is generally in world space while the normal information is in model space), after obtaining the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, the terminal needs to perform vertex space conversion on the first normal information to meet the rendering requirement, finally obtaining the second normal information corresponding to the first mesh vertex information.
In this embodiment of this application, each mesh vertex may correspond to one piece of normal information (collectively referred to as the first normal information), and the vertex information of the mesh vertices is collectively referred to as the first mesh vertex information; therefore, the first mesh vertex information of the preset first virtual object model corresponds to the first normal information.
In this embodiment of this application, the second normal information obtained after the terminal performs vertex space conversion on the first normal information is in world space.
It should be noted that a normal is a line perpendicular to a surface. By calculating the angle between a light ray and the normal, the angle between the light and the surface can be known, and the color information the surface should receive can then be calculated; this embodiment of this application uses this principle to simulate illumination. The first normal information corresponding to each mesh vertex is stored at the corresponding pixel of the normal map. Therefore, by storing the normal of each pixel in a texture, the darkness of each pixel can be determined from its normal during rendering. That is, the first normal information records the numerical details of highlights and shadows for each piece of vertex information, and the first normal information is stored on the normal map as the three RGB colors.
Here, a normal is a three-dimensional vector composed of three components X, Y, and Z. These three components are stored as the values of the three colors red, green, and blue, generating a new map, namely the normal map. In the normal map, the red and green channels represent the left-right and up-down offsets, and the blue channel represents the perpendicular offset.
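As an illustrative sketch only (not a limitation of the method described above), this packing can be expressed in Cg/HLSL as follows; the helper names EncodeNormal and DecodeNormal are hypothetical and chosen for this illustration:
// Pack a unit normal into RGB: each component in [-1, 1] is remapped to [0, 1]
// so that x, y, and z can be stored in the r, g, and b channels respectively.
float3 EncodeNormal(float3 n)
{
    return normalize(n) * 0.5 + 0.5;
}
// The inverse mapping recovers the normal from a sampled texel of the normal map.
float3 DecodeNormal(float3 rgb)
{
    return normalize(rgb * 2.0 - 1.0);
}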
It can be understood that the terminal treats each pixel of the normal map as a normal. If the normal map is 512*512 pixels, that is, 262144 pixels, then applying this normal map to the preset first virtual object model is equivalent to the model having 262144 normals (of course, this is not actually the case), so that a preset first virtual object model with only a few hundred faces instantly appears to have the detail of a model with hundreds of thousands of faces.
S103: Obtain first illumination information corresponding to the first mesh vertex information according to a preset color setting rule and the second normal information, where the preset color setting rule is used for representing the correspondence between colors and illumination.
After the terminal performs vertex space conversion on the first normal information to obtain the second normal information corresponding to the first mesh vertex information, because the second normal information is in one-to-one correspondence with the first mesh vertex information, the terminal can obtain the first illumination information corresponding to the first mesh vertex information according to the preset color setting rule and the second normal information. That is, the terminal may project the normals corresponding to the second normal information onto a front-facing plane, and the projected points of these normals can be converted into UV coordinates of the normal map (corresponding to the first mesh vertex information). The terminal can then set a color for each mesh vertex corresponding to the UV coordinates according to the preset color setting rule, thereby obtaining the first illumination information, where the preset color setting rule is used for representing the correspondence between colors and illumination.
Here, the terminal may store one piece of color information at each mesh vertex of the first mesh vertex information according to the second normal information and the preset color setting rule, to obtain second color information of the first mesh vertex information, and use the second color information as the first illumination information.
It should be noted that, in this embodiment of this application, when normals in all directions are projected onto a front-facing plane, the projection range is x in (-1, 1) and y in (-1, 1), forming a circle; therefore, the effective range of the normal map is basically circular. Accordingly, when the first illumination information obtained based on the second normal information is made into a light map, the region of the light map that stores the first illumination information is a circle.
In this embodiment of this application, the light map may be a material capture (MatCap, Material Capture) map. The MatCap map of a specific material sphere is used as the view-space environment map of the current material, to display a reflective material object with uniform surface shading. With a shader based on the MatCap idea, the terminal does not need to provide any lights; it only needs to provide one or more suitable MatCap maps as the "guidance" of the lighting result.
It should be noted that the shader in this embodiment of this application is a tool for rendering three-dimensional models.
In this embodiment of this application, the preset color setting rule is based on the principle that black represents less light and white represents more light. Therefore, when the illumination at a mesh vertex is strong, the second color information set for it should be whiter; when the illumination at a mesh vertex is weak, the second color information set for it should be blacker. In this way, the terminal obtains the first illumination information that represents illumination with the second color information.
It should be noted that the color information may be selected from 0 to 255: the closer to 0, the blacker the color, and the closer to 255, the whiter the color. That is, in the circular region of the light map, the light is strong at the center, where the second color information is whiter, and weak at the edge, where the second color information is blacker.
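As a minimal sketch of how such a light map can be consumed (assuming a MatCap texture named _MatCap whose bright texels lie near the center of the circle, and a normal whose xy components have already been remapped to [0, 1]; the names and the helper are assumptions for illustration only):
// Sample the MatCap light map with the remapped normal xy used directly as UV.
// The sampled color stands in for the illumination at the corresponding vertex:
// whiter texels near the center mean strong light, darker texels near the edge mean weak light.
sampler2D _MatCap;
float3 SampleMatCap(float2 normalXY01)
{
    return tex2D(_MatCap, normalXY01).rgb;
}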
S104: Render the first virtual object model by using the first illumination information, the first color information, and the first mesh vertex information, to obtain a second virtual object model.
After the terminal obtains the first illumination information corresponding to the first mesh vertex information according to the preset color setting rule and the second normal information, the terminal can simulate ambient light with the first illumination information; moreover, because the first illumination information is derived from the related normal information and thus combines high-precision highlight and shadow details, it is close to real ambient light. In this way, the terminal can render the first virtual object model by using the first illumination information, the first color information, and the first mesh vertex information, to obtain the second virtual object model.
In this embodiment of this application, the terminal may fill the vertex color of each mesh vertex corresponding to the first mesh vertex information by using the first illumination information and the first color information, to obtain the main vertex color information of the vertices, and then process the preset first virtual object model by using the main vertex color information.
Further, in this embodiment of this application, the terminal's rendering of the preset first virtual object model may involve texture, color, and other aspects of processing, which is not limited in this embodiment of this application.
In this embodiment of this application, the normal map is created based on the UV of the model and can record the detail values of highlights and shadows. After the terminal obtains the normal map and applies it to the preset first virtual object model, the precision of the resulting second virtual object model is relatively high, and the bumpiness of the surface texture is well represented.
For example, assume that the preset first virtual object model is a candlestick model. As shown in FIG. 2, the candlestick model is on the left; after the normal map is applied to it, the candlestick model becomes the one shown on the right, which appears much more three-dimensional.
That is, when rendering a three-dimensional model, the terminal may use the normal map to process texture details; generally, the terminal implements this with a vertex shader.
Further, before S101, as shown in FIG. 3, the rendering method for simulating illumination provided in this embodiment of this application may further include S105 to S107, as follows:
S105: Obtain second mesh vertex information corresponding to a preset third virtual object model, where the preset third virtual object model is the high-polygon model corresponding to the preset first virtual object model.
S106: Obtain a first normal direction according to the second mesh vertex information and a preset normal model.
S107: Determine, according to a correspondence between the preset second mesh vertex information and the first mesh vertices, the first normal information corresponding to the first normal direction.
In this embodiment of this application, before the terminal performs rendering in the shader, it first generates a normal map from the high-precision model of the same simulated virtual object, and then parses out the first normal information from the normal map. The process in which the terminal obtains the normal map may be the high-poly baking method: the terminal obtains the second mesh vertex information corresponding to the high-polygon model (that is, the preset third virtual object model) corresponding to the preset first virtual object model, obtains the first normal direction according to the second mesh vertex information and the preset normal model, and finally determines, according to the correspondence between the preset second mesh vertex information and the first mesh vertices, the first normal information corresponding to the first normal direction.
It should be noted that normal maps exist so that a low-face-count model can simulate the lighting and shadow detail data of a high-face-count model. What matters most is the angle between the incident light direction and the normal at the incident point; a normal map essentially records information related to this angle, and the lighting computation is closely tied to the normal direction of a surface.
Here, the terminal can obtain the second mesh vertex information corresponding to the preset third virtual object model (that is, the information of each mesh vertex in the preset third virtual object model). When light hits a point on a face of the preset third virtual object, the first normal direction at that point is obtained by interpolating among the mesh vertices of that face (that is, the preset normal model). Then, according to the correspondence between the preset second mesh vertex information and the first mesh vertices, the terminal projects the preset third virtual object model onto the preset first virtual object model to form a two-dimensional projection (for example, an x-y plane projection). The terminal obtains the two projection directions (in the x and y directions) on the two-dimensional projection of the first normal of each mesh vertex corresponding to the first mesh vertex information, and finally takes the obtained first normal direction of each mesh vertex as the z direction, thereby obtaining the first normal information at each mesh vertex. The terminal then stores the first normal information of each point at the corresponding pixel of the normal map; the actual computation maps the x, y, and z magnitudes of the first normal information into the RGB color space, that is, the x value is stored in r, the y value in g, and the z value in b. What the terminal receives when rendering the preset first virtual object model is the normal map, and the terminal obtains the first normal information by parsing the normal map.
In a possible implementation, the normal map in this embodiment of this application may be an object space normal map.
It should be noted that normal maps in other spaces may also be created; this is not limited in this embodiment of this application.
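As a hedged sketch of how an object space normal map could be decoded at render time (the texture name _ObjectSpaceNormalMap is an assumption for this illustration; unity_ObjectToWorld is Unity's built-in object-to-world matrix):
// Decode an object-space normal (r, g, b store x, y, z as described above)
// and bring it into world space if the lighting computation requires it.
sampler2D _ObjectSpaceNormalMap;
float3 SampleObjectSpaceNormal(float2 uv)
{
    float3 n = tex2D(_ObjectSpaceNormalMap, uv).rgb * 2.0 - 1.0;  // [0, 1] -> [-1, 1]
    return normalize(mul((float3x3)unity_ObjectToWorld, n));      // object space -> world space
}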
For example, regarding normal-map-based optimization of model shadow control, assume the preset third virtual object model is a face model. FIG. 4 shows the display window in which the normal map of the face model is created. Through the baking method, the face model is projected to obtain the planar unwrapping shown in FIG. 5. Finally, the terminal stores the planar unwrapping shown in FIG. 5 as RGB values, obtaining the normal map shown in FIG. 6.
Further, before S105, the rendering method for simulating illumination provided in this embodiment of this application further includes S108, as follows:
S108: Obtain a scene file, and establish a first scene according to the scene file.
Before rendering the preset first virtual object model, the terminal needs to obtain the scene file of the scene in which the three-dimensional model is to be built, establish the first scene according to the scene file, and then display and process the preset first virtual object model in the first scene.
It should be noted that this embodiment of this application does not limit the type of the first scene, which may be any scene such as a snow scene or a desert.
In this embodiment of this application, S108 is the first step in the terminal's model processing; that is, the terminal may perform S108 before S105 to S107.
Further, after S104 and based on the implementation of S108, the rendering method for simulating illumination provided in this embodiment of this application may further include S109, as follows:
S109: Display the second virtual object model in the first scene.
After the terminal obtains the rendered second virtual object model, because the second virtual object model has already been rendered and drawn, and the whole model is processed in the first scene, the terminal can display or present the second virtual object model in the first scene.
Embodiment 2
In the rendering method for simulating illumination provided in this embodiment of this application, the method of performing vertex space conversion on the first normal information in S102 to obtain the second normal information corresponding to the first mesh vertex information may include S1021 and S1022, as follows:
S1021: Perform vertex space conversion on the first normal information to convert it to tangent space, to obtain third normal information.
S1022: Normalize the third normal information to obtain the second normal information.
In this embodiment of this application, to use the MatCap map, the terminal mainly needs to convert the first normal information from model space (object space) to tangent space, and switch it to the range [0, 1] suitable for texture UV sampling.
It should be noted that, in the normal map generation process, the tangent space used by the high-polygon model (the preset third virtual object model) is that of the low-polygon model (the preset first virtual object model). When generating the normal map, the terminal necessarily determines which faces on the high-polygon model correspond to which face on the low-polygon model, and the normals of those faces on the high-polygon model are then converted into the coordinates of the tangent space constructed on that face of the low-polygon model. In this way, when the low-polygon model deforms, that is, when its triangles change, its tangent space changes accordingly; multiplying the normal information (that is, the normal values) stored in the normal map by the transformation matrix from the tangent space of that face of the low-polygon model to the external coordinate system yields the external coordinates. The first normal information saved for the high-polygon model corresponds to the normals in the object space of the high-polygon model.
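To make the tangent space construction concrete, the following standard sketch (an illustration, not the exact implementation of this application) builds the tangent-space basis, often called the TBN matrix, from a mesh normal and tangent:
// Build a tangent-space basis (TBN) from the mesh normal and tangent.
// Because the basis is orthonormal, multiplying by this matrix maps tangent space
// to object space, and multiplying by its transpose performs the inverse mapping.
float3x3 BuildTBN(float3 normal, float4 tangent)
{
    float3 n = normalize(normal);
    float3 t = normalize(tangent.xyz);
    float3 b = cross(n, t) * tangent.w;  // tangent.w stores the bitangent sign
    return float3x3(t, b, n);
}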
Here, the terminal performs the vertex space conversion of the first normal information to tangent space as follows. For each mesh vertex corresponding to the first mesh vertex information, the model-view matrix can be used for the conversion from object space. However, if the normal vectors in the first normal information are transformed from object space to eye space with the model-view matrix, then in eye space the direction of the tangent still satisfies its definition, but the normal is no longer perpendicular to the tangent at the mesh vertex; therefore, the model-view matrix is not applicable to normals. This is because, letting T be the tangent, MV the model-view matrix, and P1 and P2 the two mesh vertices that define the tangent:
T = P2 - P1    (1)
T' = T * MV = (P2 - P1) * MV = P2 * MV - P1 * MV = P2' - P1'    (2)
Therefore, Equations (1) and (2) show that T' preserves the definition of the tangent. For the normal, one can likewise take N = Q2 - Q1 to represent it, but after the transformation, Q2' - Q1' is not guaranteed to be perpendicular to T'; the angular relationship is thus changed from object space to view space. To find a transformation for the normal that keeps it perpendicular to the tangent, suppose the normal matrix is G. Since the transformed normal must remain perpendicular to the transformed tangent, Equation (3) follows:
N' · T' = (GN) · (MT) = 0    (3)
Writing the dot product of Equation (3) as a matrix product gives Equation (4):
(GN) · (MT) = (GN)^T (MT) = (N^T G^T)(MT) = N^T G^T M T = 0    (4)
where N^T T = 0.
If G^T M = I, then Equation (4) holds; therefore, G = (M^(-1))^T.
That is, the normal matrix is the transpose of the inverse of the model-view matrix.
In this case, the first normal information can be converted from model space to tangent space by the normal matrix, yielding the third normal information.
For example, taking Unity as an example, the built-in normal matrix can be expressed as UNITY_MATRIX_IT_MV, which is the inverse transpose of UNITY_MATRIX_MV (the model-view matrix). Its role is precisely to convert the first normal information from model space to tangent space, to obtain the third normal information. This process is implemented in the vertex shader, as follows:
// MatCap coordinate preparation: convert the normal from model space to tangent space
// and store it in the last two texture coordinates (zw) of TEXCOORD1
output.diffuseUVAndMatCapCoords.z = dot(normalize(UNITY_MATRIX_IT_MV[0].xyz), normalize(input.normal));
output.diffuseUVAndMatCapCoords.w = dot(normalize(UNITY_MATRIX_IT_MV[1].xyz), normalize(input.normal));
In this embodiment of this application, after converting the first normal information from model space (object space) to tangent space to obtain the third normal information, the terminal further needs to switch the third normal information to the range [0, 1] suitable for texture UV sampling, to obtain the second normal information.
For example, taking Unity as an example, the process in which the terminal normalizes the third normal information to obtain the second normal information is implemented in the vertex shader, as follows:
// Map the normalized normal value range [-1, 1] to the range [0, 1] suitable for textures
output.diffuseUVAndMatCapCoords.zw = output.diffuseUVAndMatCapCoords.zw * 0.5 + 0.5;
It should be noted that because the third normal information in tangent space obtained after the space conversion lies in the range [-1, 1], converting it to the texture UV sampling range [0, 1] requires multiplying by 0.5 and adding 0.5.
Further, an example of the second normal information finally obtained by the terminal is as follows:
// Coordinate transformation
output.position = mul(UNITY_MATRIX_MVP, input.position);
// Prepare the detail texture UV and store it in the first two coordinates (xy) of TEXCOORD0
output.detailUVCoordsAndDepth.xy = TRANSFORM_TEX(input.UVCoordsChannel1, _DetailTex);
// Prepare the depth information and store it in the third coordinate (z) of TEXCOORD0
output.detailUVCoordsAndDepth.z = output.position.z;
In the rendering method for simulating illumination provided in this embodiment of this application, the method of rendering the first virtual object model in S104 by using the first illumination information, the first color information, and the first mesh vertex information, to obtain the second virtual object model, may include S1041 and S1042, as follows:
S1041: Interpolate the first color information and the second color information corresponding to the first mesh vertex information to obtain main vertex color information of each mesh vertex corresponding to the first mesh vertex information.
S1042: Perform drawing according to the correspondence between the main vertex color information and each mesh vertex, to obtain the second virtual object model.
In this embodiment of this application, the main vertex color corresponding to each piece of mesh vertex information in the terminal's preset first virtual object model is the first color information, and the terminal renders the preset first virtual object model by using the light map. In this case, the terminal interpolates the first color information and the second color information corresponding to the first mesh vertex information to obtain the main vertex color information of each mesh vertex corresponding to the first mesh vertex information; then, the terminal performs drawing according to the correspondence between the main vertex color information and each mesh vertex, to obtain the second virtual object model.
It should be noted that the first color information in this embodiment of this application may be the original main vertex color information, and the terminal can obtain the detail texture, and thus the detail color information, according to the second normal information in the normal map.
For example, the process in which the terminal obtains the detail color information is as follows:
// Detail texture
float3 detailMask = tex2D(_DetailTex, input.detailUVCoordsAndDepth.xy).rgb;
// Detail color information
float3 detailColor = lerp(_DetailColor.rgb, mainColor, detailMask);
In this embodiment of this application, the terminal may first interpolate the detail color and the first color information to obtain new main vertex color information, and then combine it with the second color information (the first illumination information) extracted from the light map, to obtain the final main vertex color information. An example is as follows:
// Interpolate the detail color and the main vertex color to form the new main vertex color information
mainColor = lerp(detailColor, mainColor, saturate(input.detailUVCoordsAndDepth.z * _DetailTexDepthOffset));
// Extract the corresponding first illumination information from the provided MatCap map (the parsing process)
float3 matCapColor = tex2D(_MatCap, input.diffuseUVAndMatCapCoords.zw).rgb;
// Final main vertex color information
float4 finalColor = float4(mainColor * matCapColor * 2.0, _MainColor.a);
It should be noted that, in this embodiment of this application, the terminal renders the preset first virtual object model by combining the original model, the normal map, and the MatCap map, to output the correspondence between each mesh vertex and its color information (the correspondence between the main vertex color information and each mesh vertex) that simulates ambient light while preserving shadow details; the terminal performs drawing according to this correspondence to obtain the second virtual object model.
For example, comparing three-dimensional character models, as shown in FIG. 7, the effect of a three-dimensional character model rendered with the approach of this embodiment of this application is shown as Model 1 in FIG. 7, while the effect of the previous rendering approach is shown as Model 2 in FIG. 7. The comparison shows that the display precision of Model 1 is much higher than that of Model 2, improving the display effect of the second virtual object model.
Further, in this embodiment of this application, taking a three-dimensional character model scene as an example, rendering the three-dimensional virtual character model according to this application weakens the traces at seams such as the junction between the head and the body by combining the ambient light with the normal map. Mainly, the shadows at the seam positions of the normal map are aligned, and when the MatCaps of parts such as the head and the body simulate ambient light, the amount of light at the coordinate points of the seam is kept consistent, preventing the seam from being conspicuous under different amounts of light; that is, the rendering of points at seams between blocks, parts, or sections of the three-dimensional character model is softened.
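Putting the snippets of this embodiment together, a minimal Unity shader along these lines might read as follows. This is a sketch under the same naming assumptions as the snippets above (for brevity it omits the detail texture blending), not the exact production shader:
Shader "Sketch/MatCapSimulatedLighting"
{
    Properties
    {
        _MainTex ("Diffuse", 2D) = "white" {}
        _MatCap ("MatCap (light map)", 2D) = "white" {}
        _MainColor ("Main Color", Color) = (1,1,1,1)
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _MatCap;
            float4 _MainColor;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 uvAndCap : TEXCOORD0;  // xy: diffuse UV, zw: MatCap UV
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uvAndCap.xy = v.texcoord.xy;
                // Normal via the inverse transpose of the model-view matrix,
                // then remapped from [-1, 1] to [0, 1] for texture sampling.
                o.uvAndCap.z = dot(normalize(UNITY_MATRIX_IT_MV[0].xyz), normalize(v.normal));
                o.uvAndCap.w = dot(normalize(UNITY_MATRIX_IT_MV[1].xyz), normalize(v.normal));
                o.uvAndCap.zw = o.uvAndCap.zw * 0.5 + 0.5;
                return o;
            }

            float4 frag(v2f i) : SV_Target
            {
                float3 mainColor = tex2D(_MainTex, i.uvAndCap.xy).rgb * _MainColor.rgb;
                float3 matCapColor = tex2D(_MatCap, i.uvAndCap.zw).rgb;  // simulated ambient light
                return float4(mainColor * matCapColor * 2.0, _MainColor.a);
            }
            ENDCG
        }
    }
}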
Embodiment 3
Based on the same inventive concept as Embodiment 1 and Embodiment 2, as shown in FIG. 8, an embodiment of this application provides a terminal 1. The terminal 1 may include:
an obtaining unit 10, configured to obtain first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, the preset first virtual object model being a preset model to be processed, and the first normal information being obtained by baking a high-polygon model corresponding to the preset first virtual object model;
a conversion unit 11, configured to perform vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information;
the obtaining unit 10 being further configured to obtain, according to a preset color setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, the preset color setting rule being used for representing a correspondence between colors and illumination; and
a rendering unit 12, configured to render the first virtual object model by using the first illumination information, the first color information, and the first mesh vertex information, to obtain a second virtual object model.
In a possible implementation, the conversion unit 11 is specifically configured to perform vertex space conversion on the first normal information to convert it to tangent space, to obtain third normal information, and to normalize the third normal information to obtain the second normal information.
In a possible implementation, the obtaining unit 10 is specifically configured to store one piece of color information at each mesh vertex of the first mesh vertex information according to the second normal information and the preset color setting rule, to obtain second color information of the first mesh vertex information, and use the second color information as the first illumination information.
In a possible implementation, the rendering unit 12 is specifically configured to interpolate the first color information and the second color information corresponding to the first mesh vertex information, to obtain main vertex color information of each mesh vertex corresponding to the first mesh vertex information, and to perform drawing according to the correspondence between the main vertex color information and each mesh vertex, to obtain the second virtual object model.
In a possible implementation, the obtaining unit 10 is further configured to: before the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information are obtained, obtain second mesh vertex information corresponding to a preset third virtual object model, the preset third virtual object model being the high-polygon model corresponding to the preset first virtual object model; obtain a first normal direction according to the second mesh vertex information and a preset normal model; and determine, according to a correspondence between the preset second mesh vertex information and the first mesh vertices, the first normal information corresponding to the first normal direction.
In a possible implementation, based on FIG. 8 and as shown in FIG. 9, the terminal 1 further includes an establishment unit 13;
the obtaining unit 10 is further configured to obtain a scene file before the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information are obtained; and
the establishment unit 13 is configured to establish a first scene according to the scene file.
In a possible implementation, based on FIG. 9 and as shown in FIG. 10, the terminal 1 further includes a display unit 14;
the display unit 14 is configured to display the second virtual object model in the first scene after the drawing is performed according to the correspondence between the main vertex color information and each mesh vertex to obtain the second virtual object model.
It can be understood that, because the terminal can parse out the illumination information corresponding to each mesh vertex from the fine normal information determined by the high-polygon model, this illumination information can be used as ambient light to render the first virtual object model. Because the normal information is highly precise, the shadow details of the three-dimensional design model are preserved and ambient light close to a real environment is simulated; therefore, the display precision of the second virtual object model rendered in this way is high, improving the display effect of the second virtual object model.
Embodiment 4
Based on the same inventive concept as Embodiment 1 and Embodiment 2, as shown in FIG. 11, an embodiment of this application provides a terminal. The terminal may include:
a processor 15, a memory 16, a display 17, and a communication bus 18, the processor 15, the memory 16, and the display 17 being connected through the communication bus 18;
the processor 15 being configured to invoke a rendering-related program for simulating illumination stored in the memory 16, to perform the following steps:
obtaining first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, the preset first virtual object model being a preset model to be processed, and the first normal information being obtained by baking a high-polygon model corresponding to the preset first virtual object model; performing vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtaining first illumination information corresponding to the first mesh vertex information according to a preset color setting rule and the second normal information, the preset color setting rule being used for representing a correspondence between colors and illumination; and rendering the first virtual object model by using the first illumination information, the first color information, and the first mesh vertex information, to obtain a second virtual object model.
The display 17 is configured to display the second virtual object model.
Optionally, the processor 15 is specifically configured to perform vertex space conversion on the first normal information to convert it to tangent space, to obtain third normal information, and to normalize the third normal information to obtain the second normal information.
Optionally, the processor 15 is specifically configured to store one piece of color information at each mesh vertex of the first mesh vertex information according to the second normal information and the preset color setting rule, to obtain second color information of the first mesh vertex information, and use the second color information as the first illumination information.
Optionally, the processor 15 is specifically configured to interpolate the first color information and the second color information corresponding to the first mesh vertex information, to obtain main vertex color information of each mesh vertex corresponding to the first mesh vertex information, and to perform drawing according to the correspondence between the main vertex color information and each mesh vertex, to obtain the second virtual object model.
Optionally, the processor 15 is further configured to: before the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information are obtained, obtain second mesh vertex information corresponding to a preset third virtual object model, the preset third virtual object model being the high-polygon model corresponding to the preset first virtual object model; obtain a first normal direction according to the second mesh vertex information and a preset normal model; and determine, according to a correspondence between the preset second mesh vertex information and the first mesh vertices, the first normal information corresponding to the first normal direction.
Optionally, the processor 15 is further configured to obtain a scene file before the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information are obtained, and to establish a first scene according to the scene file.
Optionally, the display 17 is configured to display the second virtual object model in the first scene after the drawing is performed according to the correspondence between the main vertex color information and each mesh vertex to obtain the second virtual object model.
It can be understood that, because the terminal can parse out the illumination information corresponding to each mesh vertex from the fine normal information determined by the high-polygon model, this illumination information can be used as ambient light to render the first virtual object model. Because the normal information is highly precise, the shadow details of the three-dimensional design model are preserved and ambient light close to a real environment is simulated; therefore, the display precision of the second virtual object model rendered in this way is high, improving the display effect of the second virtual object model.
In practical applications, the memory may be a volatile memory, such as a random access memory (RAM, Random-Access Memory); or a non-volatile memory, such as a read-only memory (ROM, Read-Only Memory), a flash memory, a hard disk drive (HDD, Hard Disk Drive), or a solid-state drive (SSD, Solid-State Drive); or a combination of the foregoing types of memories, which provides instructions and data to the processor.
The processor may be at least one of an application-specific integrated circuit (ASIC, Application Specific Integrated Circuit), a digital signal processor (DSP, Digital Signal Processor), a digital signal processing device (DSPD, Digital Signal Processing Device), a programmable logic device (PLD, Programmable Logic Device), a field programmable gate array (FPGA, Field Programmable Gate Array), a central processing unit (CPU, Central Processing Unit), a controller, a microcontroller, or a microprocessor. It can be understood that, for different devices, the electronic component implementing the functions of the processor may alternatively be another component, which is not specifically limited in this embodiment of this application.
Embodiment 5
The functional modules in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional module.
If the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this embodiment essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a computer-readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the method described in this embodiment. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
An embodiment of this application provides a computer-readable storage medium applied to a terminal. The computer-readable storage medium stores one or more rendering programs for simulating illumination, and the one or more rendering programs for simulating illumination can be executed by one or more processors to implement the methods described in Embodiment 1 and Embodiment 2.
A person skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) that contain computer-usable program code.
This application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of this application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing, and the instructions executed on the computer or the other programmable device provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
The foregoing embodiments are merely intended to describe the technical solutions of the embodiments of this application, and are not intended to limit them. Although the embodiments of this application have been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of their technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (16)

  1. A rendering method for simulating illumination, comprising:
    obtaining, by a terminal, first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, the preset first virtual object model being a preset model to be processed, and the first normal information being obtained by baking a high-polygon model corresponding to the preset first virtual object model;
    performing, by the terminal, vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information;
    obtaining, by the terminal, first illumination information corresponding to the first mesh vertex information according to a preset color setting rule and the second normal information, the preset color setting rule being used for representing a correspondence between colors and illumination; and
    rendering, by the terminal, the first virtual object model by using the first illumination information, the first color information, and the first mesh vertex information, to obtain a second virtual object model.
  2. The method according to claim 1, wherein the performing, by the terminal, vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information comprises:
    performing, by the terminal, vertex space conversion on the first normal information to convert it to tangent space, to obtain third normal information; and
    normalizing, by the terminal, the third normal information to obtain the second normal information.
  3. The method according to claim 1, wherein the obtaining, by the terminal, first illumination information corresponding to the first mesh vertex information according to a preset color setting rule and the second normal information comprises:
    storing, by the terminal, one piece of color information at each mesh vertex of the first mesh vertex information according to the second normal information and the preset color setting rule, to obtain second color information of the first mesh vertex information, and using the second color information as the first illumination information.
  4. The method according to claim 3, wherein the rendering, by the terminal, the first virtual object model by using the first illumination information, the first color information, and the first mesh vertex information, to obtain a second virtual object model comprises:
    interpolating, by the terminal, the first color information and the second color information corresponding to the first mesh vertex information, to obtain main vertex color information of each mesh vertex corresponding to the first mesh vertex information; and
    performing, by the terminal, drawing according to the correspondence between the main vertex color information and each mesh vertex, to obtain the second virtual object model.
  5. The method according to claim 1, wherein before the obtaining, by the terminal, first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, the method further comprises:
    obtaining, by the terminal, second mesh vertex information corresponding to a preset third virtual object model, the preset third virtual object model being the high-polygon model corresponding to the preset first virtual object model;
    obtaining, by the terminal, a first normal direction according to the second mesh vertex information and a preset normal model; and
    determining, by the terminal according to a correspondence between the preset second mesh vertex information and the first mesh vertices, the first normal information corresponding to the first normal direction.
  6. The method according to claim 1, wherein before the obtaining, by the terminal, first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, the method further comprises:
    obtaining, by the terminal, a scene file, and establishing a first scene according to the scene file.
  7. The method according to claim 6, wherein after the performing, by the terminal, drawing according to the correspondence between the main vertex color information and each mesh vertex to obtain the second virtual object model, the method comprises:
    displaying, by the terminal, the second virtual object model in the first scene.
  8. A terminal, comprising:
    an obtaining unit, configured to obtain first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, the preset first virtual object model being a preset model to be processed, and the first normal information being obtained by baking a high-polygon model corresponding to the preset first virtual object model;
    a conversion unit, configured to perform vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information;
    the obtaining unit being further configured to obtain, according to a preset color setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, the preset color setting rule being used for representing a correspondence between colors and illumination; and
    a rendering unit, configured to render the first virtual object model by using the first illumination information, the first color information, and the first mesh vertex information, to obtain a second virtual object model.
  9. The terminal according to claim 8,
    wherein the conversion unit is specifically configured to perform vertex space conversion on the first normal information to convert it to tangent space, to obtain third normal information, and to normalize the third normal information to obtain the second normal information.
  10. The terminal according to claim 8,
    wherein the obtaining unit is specifically configured to store one piece of color information at each mesh vertex of the first mesh vertex information according to the second normal information and the preset color setting rule, to obtain second color information of the first mesh vertex information, and use the second color information as the first illumination information.
  11. The terminal according to claim 10,
    wherein the rendering unit is specifically configured to interpolate the first color information and the second color information corresponding to the first mesh vertex information, to obtain main vertex color information of each mesh vertex corresponding to the first mesh vertex information, and to perform drawing according to the correspondence between the main vertex color information and each mesh vertex, to obtain the second virtual object model.
  12. The terminal according to claim 8,
    wherein the obtaining unit is further configured to: before the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information are obtained, obtain second mesh vertex information corresponding to a preset third virtual object model, the preset third virtual object model being the high-polygon model corresponding to the preset first virtual object model; obtain a first normal direction according to the second mesh vertex information and a preset normal model; and determine, according to a correspondence between the preset second mesh vertex information and the first mesh vertices, the first normal information corresponding to the first normal direction.
  13. The terminal according to claim 8, further comprising: an establishment unit;
    wherein the obtaining unit is further configured to obtain a scene file before the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information are obtained; and
    the establishment unit is configured to establish a first scene according to the scene file.
  14. The terminal according to claim 13, further comprising: a display unit;
    wherein the display unit is configured to display the second virtual object model in the first scene after the drawing is performed according to the correspondence between the main vertex color information and each mesh vertex to obtain the second virtual object model.
  15. A terminal, comprising:
    a processor, a memory, a display, and a communication bus, the processor, the memory, and the display being connected through the communication bus;
    the processor being configured to invoke a rendering-related program for simulating illumination stored in the memory, to perform the following steps:
    obtaining first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, the preset first virtual object model being a preset model to be processed, and the first normal information being obtained by baking a high-polygon model corresponding to the preset first virtual object model; performing vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtaining first illumination information corresponding to the first mesh vertex information according to a preset color setting rule and the second normal information, the preset color setting rule being used for representing a correspondence between colors and illumination; and rendering the first virtual object model by using the first illumination information, the first color information, and the first mesh vertex information, to obtain a second virtual object model;
    the display being configured to display the second virtual object model.
  16. A computer-readable storage medium applied to a terminal, the computer-readable storage medium storing one or more rendering programs for simulating illumination, the one or more rendering programs for simulating illumination being executable by one or more processors to implement the rendering method for simulating illumination according to any one of claims 1 to 7.
PCT/CN2018/093322 2017-08-18 2018-06-28 Rendering method and terminal for simulating illumination WO2019033859A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2020509081A JP7386153B2 (ja) 2017-08-18 2018-06-28 照明をシミュレートするレンダリング方法及び端末
KR1020207005286A KR102319179B1 (ko) 2017-08-18 2018-06-28 조명을 시뮬레이션하기 위한 렌더링 방법, 및 단말
US16/789,263 US11257286B2 (en) 2017-08-18 2020-02-12 Method for rendering of simulating illumination and terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710711285.3A 2017-08-18 2017-08-18 Rendering method and terminal for simulating illumination
CN201710711285.3 2017-08-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/789,263 Continuation US11257286B2 (en) 2017-08-18 2020-02-12 Method for rendering of simulating illumination and terminal

Publications (1)

Publication Number Publication Date
WO2019033859A1 true WO2019033859A1 (zh) 2019-02-21

Family

ID=65361998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/093322 WO2019033859A1 (zh) 2017-08-18 2018-06-28 Rendering method and terminal for simulating illumination

Country Status (5)

Country Link
US (1) US11257286B2 (zh)
JP (1) JP7386153B2 (zh)
KR (1) KR102319179B1 (zh)
CN (1) CN109427088B (zh)
WO (1) WO2019033859A1 (zh)


Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363836A (zh) * 2019-07-19 2019-10-22 杭州绝地科技股份有限公司 Matcap-map-based character rendering method, apparatus and device
CN111667581B (zh) * 2020-06-15 2023-08-22 网易(杭州)网络有限公司 3D model processing method, apparatus, device and storage medium
CN111862344B (zh) * 2020-07-17 2024-03-08 抖音视界有限公司 Image processing method, device and storage medium
CN112435285B (zh) * 2020-07-24 2024-07-30 上海幻电信息科技有限公司 Normal map generation method and apparatus
CN111882631B (zh) * 2020-07-24 2024-05-03 上海米哈游天命科技有限公司 Model rendering method, apparatus, device and storage medium
CN112270759B (zh) * 2020-10-30 2022-06-24 北京字跳网络技术有限公司 Image-based light effect processing method, apparatus, device and storage medium
CN112700541B (zh) * 2021-01-13 2023-12-26 腾讯科技(深圳)有限公司 Model updating method, apparatus, device and computer-readable storage medium
CN112819929B (zh) * 2021-03-05 2024-02-23 网易(杭州)网络有限公司 Water surface rendering method and apparatus, electronic device, and storage medium
CN112884873B (zh) * 2021-03-12 2023-05-23 腾讯科技(深圳)有限公司 Method, apparatus, device and medium for rendering virtual objects in a virtual environment
CN113034661B (zh) * 2021-03-24 2023-05-23 网易(杭州)网络有限公司 MatCap map generation method and apparatus
CN113034350B (zh) * 2021-03-24 2023-03-24 网易(杭州)网络有限公司 Vegetation model processing method and apparatus
CN113077541B (zh) * 2021-04-02 2022-01-18 广州益聚未来网络科技有限公司 Virtual sky picture rendering method and related device
US11423601B1 (en) 2021-04-29 2022-08-23 Dg Holdings, Inc. Transforming a three-dimensional virtual model to a manipulatable format
CN113362435B (zh) * 2021-06-16 2023-08-08 网易(杭州)网络有限公司 Method, apparatus, device and medium for changing virtual parts of a virtual object model
CN113592999B (zh) * 2021-08-05 2022-10-28 广州益聚未来网络科技有限公司 Virtual luminous body rendering method and related device
CN113590330A (zh) * 2021-08-05 2021-11-02 北京沃东天骏信息技术有限公司 Mesh model rendering method and apparatus, and storage medium
CN114241114B (zh) * 2021-12-22 2024-09-10 上海完美时空软件有限公司 Material rendering method and apparatus, storage medium, and electronic apparatus
CN114255641B (zh) * 2022-01-17 2023-09-29 广州易道智慧信息科技有限公司 Method and system for creating simulated light sources in a virtual machine vision system
CN114898032B (zh) * 2022-05-10 2023-04-07 北京领为军融科技有限公司 Light spot rendering method based on shader storage buffer objects
CN115063518A (zh) * 2022-06-08 2022-09-16 Oppo广东移动通信有限公司 Trajectory rendering method and apparatus, electronic device, and storage medium
CN116244886B (zh) * 2022-11-29 2024-03-15 北京瑞风协同科技股份有限公司 Virtual-real test data matching method and system
US20240331299A1 (en) * 2023-03-10 2024-10-03 Tencent America LLC Joint uv optimization and texture baking
CN116778053B (zh) * 2023-06-20 2024-07-23 北京百度网讯科技有限公司 Target-engine-based map generation method, apparatus, device and storage medium
CN117173314B (zh) * 2023-11-02 2024-02-23 腾讯科技(深圳)有限公司 Image processing method, apparatus, device, medium and program product


Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000132707A (ja) * 1997-11-07 2000-05-12 Snk:Kk Game system and display method in a game system
US7952583B2 (en) * 2000-06-19 2011-05-31 Mental Images Gmbh Quasi-monte carlo light transport simulation by efficient ray tracing
JP2002230579A (ja) * 2001-02-02 2002-08-16 Dainippon Printing Co Ltd Image creation method and apparatus
US6894695B2 (en) * 2001-04-27 2005-05-17 National Semiconductor Corporation Apparatus and method for acceleration of 2D vector graphics using 3D graphics hardware
US8115774B2 (en) * 2006-07-28 2012-02-14 Sony Computer Entertainment America Llc Application of selective regions of a normal map based on joint position in a three-dimensional model
WO2008016645A2 (en) * 2006-07-31 2008-02-07 Onlive, Inc. System and method for performing motion capture and image reconstruction
US8629867B2 (en) * 2010-06-04 2014-01-14 International Business Machines Corporation Performing vector multiplication
US9679362B2 (en) * 2010-12-30 2017-06-13 Tomtom Global Content B.V. System and method for generating textured map object images
CN104268922B (zh) * 2014-09-03 2017-06-06 广州博冠信息科技有限公司 Image rendering method and image rendering apparatus
KR102558737B1 (ko) * 2016-01-04 2023-07-24 삼성전자주식회사 3D rendering method and apparatus
CN106204735B (zh) * 2016-07-18 2018-11-09 中国人民解放军理工大学 Method for using Unity3D terrain data in a Direct3D 11 environment
US10643375B2 (en) * 2018-02-26 2020-05-05 Qualcomm Incorporated Dynamic lighting for objects in images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9639773B2 (en) * 2013-11-26 2017-05-02 Disney Enterprises, Inc. Predicting a light probe for an outdoor image
CN104966312A (zh) * 2014-06-10 2015-10-07 腾讯科技(深圳)有限公司 3D model rendering method and apparatus, and terminal device
CN104157000A (zh) * 2014-08-14 2014-11-19 无锡梵天信息技术股份有限公司 Method for computing model surface normals

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020198066A (ja) * 2019-06-03 2020-12-10 アイドス インタラクティブ コープ System and method for augmented reality applications
JP7089495B2 (ja) 2019-06-03 2022-06-22 アイドス インタラクティブ コープ System and method for augmented reality applications
CN110390709A (zh) * 2019-06-19 2019-10-29 北京巴别时代科技股份有限公司 Outline smoothing method for cartoon rendering
CN111739135A (zh) * 2020-07-30 2020-10-02 腾讯科技(深圳)有限公司 Model processing method and apparatus for a virtual character, and readable storage medium
CN111739135B (zh) 2020-07-30 2023-03-21 腾讯科技(深圳)有限公司 Model processing method and apparatus for a virtual character, and readable storage medium
CN112494941A (zh) * 2020-12-14 2021-03-16 网易(杭州)网络有限公司 Display control method and apparatus for a virtual object, storage medium, and electronic device
CN112494941B (zh) 2020-12-14 2023-11-28 网易(杭州)网络有限公司 Display control method and apparatus for a virtual object, storage medium, and electronic device
CN114612600A (zh) * 2022-03-11 2022-06-10 北京百度网讯科技有限公司 Avatar generation method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN109427088B (zh) 2023-02-03
JP2020531980A (ja) 2020-11-05
CN109427088A (zh) 2019-03-05
KR20200029034A (ko) 2020-03-17
KR102319179B1 (ko) 2021-10-28
US11257286B2 (en) 2022-02-22
JP7386153B2 (ja) 2023-11-24
US20200184714A1 (en) 2020-06-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18845510; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2020509081; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20207005286; Country of ref document: KR; Kind code of ref document: A)
122 Ep: pct application non-entry in european phase (Ref document number: 18845510; Country of ref document: EP; Kind code of ref document: A1)