WO2019033859A1 - Rendering method and terminal for simulating illumination
- Publication number
- WO2019033859A1 (PCT/CN2018/093322)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- preset
- normal
- virtual object
- terminal
- Prior art date
Classifications
- G06T15/50: Lighting effects (3D [Three Dimensional] image rendering)
- G06T15/506: Illumination models
- G06T15/55: Radiosity
- G06T11/20: Drawing from basic elements, e.g. lines or circles (2D [Two Dimensional] image generation)
- G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation (3D modelling)
- G06T7/90: Determination of colour characteristics (image analysis)
Description
- The present application relates to modeling techniques in the field of electronic applications, and more particularly to the rendering of simulated illumination.
- A terminal generally uses the Unity development tool to generate 3D virtual objects.
- The terminal can obtain a design model from a model design tool and import the design model into the Unity development tool to build 3D scenes and 3D virtual objects.
- The model design tool usually used is Cinema 4D.
- However, Cinema 4D is not compatible with Unity: the model design tool provides a design model with good ambient-light effects, but the Unity development tool does not provide equally good ambient light for the imported design model, so the effect in Unity is very poor.
- The usual solution is to simulate the design tool's ambient-light effect through a combination of multiple lights in Unity. However, using a combination of multiple lights to simulate the ambient light causes a serious decline in the performance and flexibility of the design model, and because the light-combination model cannot be changed, the three-dimensional illumination effect is not controllable, resulting in a poor display of the three-dimensional virtual object or character.
- The embodiment of the present application is expected to provide a rendering method and a terminal for simulating illumination, which can simulate ambient light close to the real environment while preserving the shadow detail of the three-dimensional design model, and use the simulated ambient light to process the three-dimensional virtual object model, thereby improving the display effect of the three-dimensional virtual object model or virtual character model.
- the embodiment of the present application provides a rendering method for simulating illumination, including:
- The terminal acquires first mesh vertex information of the preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed, and the first normal information is obtained by baking the high mode corresponding to the preset first virtual object model;
- the terminal performs vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information;
- The terminal obtains first illumination information corresponding to the first mesh vertex information according to the preset color setting rule and the second normal information, where the preset color setting rule is used to represent the correspondence between color and illumination;
- the terminal uses the first illumination information, the first color information, and the first mesh vertex information to render the first virtual model object to obtain a second virtual object model.
- the embodiment of the present application provides a terminal, including:
- an acquiring unit configured to acquire first mesh vertex information of the preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed, and the first normal information is obtained by baking the high mode corresponding to the preset first virtual object model;
- a conversion unit configured to perform vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information
- the acquiring unit is further configured to obtain, according to the preset color setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color setting rule is used to represent the correspondence between color and illumination;
- a rendering unit configured to render the first virtual model object by using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
- the embodiment of the present application further provides a terminal, including:
- a processor, a memory, a display, and a communication bus, where the processor, the memory, and the display are connected through the communication bus;
- the processor is configured to invoke a rendering related program of the simulated illumination stored in the memory, and perform the following steps:
- acquiring first mesh vertex information of the preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high mode corresponding to the preset first virtual object model; performing vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtaining, according to the preset color setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, the preset color setting rule being used to represent the correspondence between color and illumination; and rendering the first virtual model object by using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model;
- the display is configured to display the second virtual object model.
- The embodiment of the present application provides a computer-readable storage medium applied to a terminal, where the computer-readable storage medium stores one or more rendering programs for simulating illumination, and the one or more rendering programs for simulating illumination can be executed by one or more processors to implement the above-described rendering method for simulating illumination.
- The embodiment of the present application provides a rendering method for simulating illumination and a terminal. The terminal acquires first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high mode corresponding to the preset first virtual object model; performs vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtains, according to a preset color setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color setting rule is used to represent the correspondence between color and illumination; and renders the first virtual model object by using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
- In this way, the terminal can parse the illumination information corresponding to each mesh vertex according to the fine normal information determined by the high mode, and this illumination information can be used as the ambient light to render the first virtual object model. Due to the high precision of the normal information, the shadow detail of the 3D design model is guaranteed and ambient light close to the real environment is simulated. Therefore, the accuracy of the second virtual object model rendered by the method is high, which improves the display effect of the second virtual object model.
- FIG. 1 is a flowchart 1 of a method for rendering simulated illumination according to an embodiment of the present application
- FIG. 2 is a schematic diagram of an exemplary normal map effect according to an embodiment of the present application.
- FIG. 3 is a second flowchart of a method for rendering simulated illumination according to an embodiment of the present application
- FIG. 4 is a schematic diagram of an exemplary normal map creation interface provided by an embodiment of the present application.
- FIG. 5 is a schematic plan view showing an exemplary model according to an embodiment of the present application.
- FIG. 6 is a schematic diagram of an exemplary normal map provided by an embodiment of the present application.
- FIG. 7 is a schematic diagram of an exemplary rendering effect provided by an embodiment of the present application.
- FIG. 8 is a schematic structural diagram 1 of a terminal according to an embodiment of the present application.
- FIG. 9 is a schematic structural diagram 2 of a terminal according to an embodiment of the present application.
- FIG. 10 is a schematic structural diagram 3 of a terminal according to an embodiment of the present application.
- FIG. 11 is a schematic structural diagram 4 of a terminal according to an embodiment of the present application.
- The rendering method for simulating illumination provided by the embodiment of the present application can be applied to any application or function that uses a three-dimensional model; a terminal can install such an application and exchange data with the application's corresponding server to implement the corresponding functions.
- The embodiment mainly describes the rendering method adopted by the terminal (a development terminal) when processing the three-dimensional model, so as to achieve a better display of the three-dimensional model.
- the embodiment of the present application provides a method for rendering simulated illumination. As shown in FIG. 1 , the method may include:
- When performing three-dimensional model processing, the terminal may first use the three-dimensional model design tool to establish the model, then use the three-dimensional model development tool to further process the model, and finally obtain the required 3D model.
- The three-dimensional model design tool may be 3ds Max, ZBrush, or Cinema 4D, etc.; the embodiment of the present application does not limit the type of model design tool. The 3D model development tool may be Unity 3D, Unreal, etc.; the embodiment of the present application does not limit the type of model development tool either.
- In the following description, the three-dimensional model design tool is Cinema 4D, and the three-dimensional model development tool is Unity 3D.
- the terminal establishes the required three-dimensional virtual object model in the three-dimensional design tool. After the three-dimensional virtual object model is established, the three-dimensional virtual object model is exported for further model processing by the three-dimensional model development tool.
- the main description in the embodiment of the present application is a process of rendering the three-dimensional virtual object model in the three-dimensional model development tool after the three-dimensional virtual object model is established.
- Specifically, the terminal establishes a virtual object according to the requirements of application development. For the same virtual object, the terminal may establish a low-precision low mode, i.e., the preset first virtual object model, and a high-precision high mode, i.e., the preset third virtual object model. That is, the preset first virtual object model and the preset third virtual object model are both models of the same virtual object; the only difference is the precision of the modeling.
- the terminal may acquire related information of the preset first virtual object model and the preset third virtual object model, and then the terminal may perform normal map generation according to the preset third virtual object model.
- The method of making normal maps can be high-mode baking. Simply put, from a high-precision model with several million, tens of millions, or even hundreds of millions of faces (that is, the preset third virtual object model), a low-precision model with thousands or tens of thousands of faces (that is, the preset first virtual object model) is made, and the high-mode details are then baked onto the low mode to obtain a normal map.
- the 3D model in the terminal approximates an object by combining multiple polygon faces.
- The 3D development tool of the terminal may parse the preset first virtual object model to obtain the first mesh vertex information of the preset first virtual object model and the first color information corresponding to the first mesh vertex information, and may also parse the normal map to obtain the first normal information. The preset first virtual object model is a preset model to be processed, and the first normal information is obtained by baking the high mode (i.e., the preset third virtual object model) corresponding to the preset first virtual object model.
- From the acquired preset first virtual object model, the terminal may obtain the UV map and the diffuse texture of the preset first virtual object model; the terminal then parses the UV map to obtain the first mesh vertex information of the preset first virtual object model.
- Since the three-dimensional model is constructed from different faces, there are many vertices in the preset first virtual object model; the first mesh vertex information refers to the coordinate information of each mesh vertex of the preset first virtual object model.
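- For illustration only, this kind of per-vertex data could be declared in a Unity Cg vertex shader roughly as follows; this is a minimal sketch, and the struct name, field names, and semantics are assumptions rather than code from the patent:

```hlsl
// Hypothetical per-vertex input for the preset first virtual object model:
// mesh vertex coordinates, the per-vertex normal, the first color information,
// and the UV coordinates used to sample the baked textures.
struct VertexInput
{
    float4 position : POSITION;   // mesh vertex coordinate information
    float3 normal   : NORMAL;     // per-vertex normal in model space
    float4 color    : COLOR;      // first color information
    float2 uv       : TEXCOORD0;  // UV for the diffuse texture / normal map
};
```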
- S102 Perform vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information.
- Since the first normal information acquired by the terminal is normal information in model space, the terminal needs to ensure during rendering that the illumination direction and the normal information are in the same coordinate system; the illumination direction is generally in world space, while the normal information is in model space. Therefore, after acquiring the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, the terminal needs to perform vertex space conversion on the first normal information to meet the rendering requirement, finally obtaining the second normal information corresponding to the first mesh vertex information.
- Each mesh vertex may correspond to one piece of normal information (collectively referred to as the first normal information), and the vertex information of all mesh vertices is collectively referred to as the first mesh vertex information; therefore, the first mesh vertex information of the preset first virtual object model corresponds one-to-one with the first normal information.
- the terminal performs the vertex space conversion on the first normal information to obtain the second normal information in the world space.
- A normal is a straight line perpendicular to a surface.
- Given the normal, the angle at which light strikes the surface can be known, and the color information that the surface should receive can be calculated; simulated lighting processing uses this principle.
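- As a minimal sketch of this principle (not code from the patent; all names are illustrative), the brightness a surface point receives can be computed from the angle between its normal and the light direction:

```hlsl
// Lambert-style shading: the smaller the angle between the unit normal n
// and the unit light direction lightDir, the brighter the surface point.
float3 ShadePoint(float3 n, float3 lightDir, float3 baseColor)
{
    float ndotl = saturate(dot(normalize(n), normalize(lightDir)));
    return baseColor * ndotl;
}
```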
- The first normal information corresponding to each mesh vertex is saved to the corresponding pixel of the normal map. Because the normal of each pixel is stored in a texture, the darkness of each pixel can be determined from its normal during rendering. That is, the first normal information records the numerical details of the highlights and shadows at each vertex, stored on the normal map as the three colors of RGB.
- A normal is a three-dimensional vector composed of three components, X, Y, and Z. Storing the three components as the values of the three colors red, green, and blue generates a new texture, which is the normal map.
- The red and green channels in the normal map represent the left-right and up-down offsets, and the blue channel represents the offset perpendicular to the surface.
- If the terminal treats each pixel on the normal map as one normal, then a normal map of 512*512 pixels, that is, 262,144 pixels, is equivalent to 262,144 normals on the preset first virtual object model (of course, this is not actually the case), so that a preset virtual object model with a few hundred faces instantly shows the same detail effect as one with hundreds of thousands of faces.
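- A common way to recover a normal from the RGB values described above is to remap each channel from [0, 1] back to [-1, 1]; a minimal sketch, in which the texture name and helper function are assumptions:

```hlsl
sampler2D _NormalMap;  // e.g. a 512x512 normal map, one normal per pixel

// Each texel stores the (x, y, z) components of a normal in its (r, g, b)
// channels; decoding remaps the [0, 1] color range back to [-1, 1].
float3 DecodeNormal(float2 uv)
{
    float3 rgb = tex2D(_NormalMap, uv).rgb;
    return normalize(rgb * 2.0 - 1.0);
}
```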
- S103: First illumination information corresponding to the first mesh vertex information is obtained according to the preset color setting rule and the second normal information, where the preset color setting rule is used to represent the correspondence between color and illumination.
- After the terminal performs the vertex space conversion on the first normal information to obtain the second normal information corresponding to the first mesh vertex information, since the second normal information is in one-to-one correspondence with the first mesh vertex information, the terminal may set the first illumination information corresponding to the first mesh vertex information according to the preset color setting rule and the second normal information. That is, the terminal may project the normal according to the second normal information, convert the normal's projection point into a UV coordinate of the light map (corresponding to the first mesh vertex information), and color the mesh vertex corresponding to that UV coordinate according to the preset color setting rule to obtain the first illumination information, where the preset color setting rule is used to represent the correspondence between color and illumination.
- Specifically, the terminal may store, according to the second normal information and the preset color setting rule, one piece of color information at each mesh vertex of the first mesh vertex information, obtain the second color information of the first mesh vertex information, and use the second color information as the first illumination information.
- Since the projection range is x in (-1, 1) and y in (-1, 1), the projection constitutes a circle; therefore, the effective range of the light map is basically a circle. In this case, when the first illumination information is obtained based on the second normal information, the area in which the first illumination information is stored in the light map is a circle.
- The light map can be a material capture (MatCap, Material Capture) texture: a MatCap texture of a specific material sphere is used as a view-space environment map of the current material, so that the lighting appears uniform.
- Without providing any illumination, the terminal can provide one or more suitable MatCap textures as a "guide" to the lighting result.
- the shader in the embodiment of the present application is a tool for rendering a three-dimensional model.
- The preset color setting rule is based on the principle that a blacker color represents less received light and a whiter color represents more received light. Therefore, when the illumination at a mesh vertex is strong, the second color information set for it tends toward white; when the illumination at a mesh vertex is weak, the second color information set for it tends toward black. In this way, the terminal obtains first illumination information that uses the second color information to represent the illumination.
- The color information can be selected in 0-255: the closer to 0, the blacker, and the closer to 255, the whiter. That is, where the light is strong at the center of the circle in the light map, the second color information is white; at the edge, where the light is weak, the second color information is black, as the sketch below illustrates.
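- A sketch of how this rule can be applied with a MatCap light map: project the second normal onto the xy plane, remap the projection into the circular valid region of the texture, and read the stored color as the illumination. The _MatCap name matches the fragment-shader snippets later in this description; the helper function itself is an assumption:

```hlsl
sampler2D _MatCap;  // light map: white (toward 255) at the bright center of
                    // the circle, black (toward 0) at its dimly lit edge

// The xy projection of a unit normal lies in (-1, 1) x (-1, 1), covering a
// circle; remapping it to [0, 1] yields UV coordinates into the MatCap map.
float3 SampleIllumination(float3 secondNormal)
{
    float2 matCapUV = secondNormal.xy * 0.5 + 0.5;
    return tex2D(_MatCap, matCapUV).rgb;  // second color information
}
```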
- S104: The first virtual model object is rendered by using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
- After the terminal obtains the first illumination information corresponding to the first mesh vertex information according to the preset color setting rule and the second normal information, the terminal can use the first illumination information to simulate the ambient light; the first illumination information is itself obtained from the normal information and thus carries high-precision lighting and shadow detail, making it close to real ambient light. In this way, the terminal can render the first virtual model object according to the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
- The terminal may use the first illumination information and the first color information to fill the vertex color of each mesh vertex corresponding to the first mesh vertex information to obtain the main vertex color information, and then use the main vertex color information to process the preset first virtual object model.
- The rendering of the preset virtual object model by the terminal may involve various aspects such as texture and color, which is not limited in the embodiment of the present application.
- The normal map is created based on the UV of the model and records the detail values of highlights and shadows; after the terminal obtains the normal map and applies it to the preset first virtual object model, the accuracy of the resulting second virtual object model is high, and the unevenness of the surface texture can be well reflected.
- For example, if the first virtual object model is a candlestick model, the candlestick model on the left becomes the one shown on the right after the normal map is applied, exhibiting a strong stereoscopic sense (see the exemplary normal map effect in FIG. 2).
- When rendering the 3D model, the terminal can use the normal map to process texture details; generally, the terminal implements this with the vertex shader.
- The method for rendering simulated illumination may further include S105-S107, as follows:
- A normal map of the high-precision model of the same virtual object is generated, and the first normal information is then parsed from the normal map.
- The process by which the terminal obtains the normal map may be high-mode baking: the terminal acquires the second mesh vertex information of the high mode (i.e., the preset third virtual object model) corresponding to the preset first virtual object model, obtains the first normal direction according to the second mesh vertex information and the preset normal model, and finally determines the first normal information corresponding to the first normal direction according to the preset correspondence between the second mesh vertex information and the first mesh vertices.
- With the normal map, the light and shadow detail data of the high-face-count model is simulated for the low-face-count model. What matters most is the angle between the incident light direction and the normal at the incident point: the normal map essentially records the information of this angle, and the calculation of illumination is closely related to the normal direction on a given surface.
- The terminal knows the second mesh vertex information corresponding to the preset third virtual object model (i.e., the information of each mesh vertex in the preset third virtual object model); when light strikes a point on the preset third virtual object model, the first normal direction at that point is obtained by interpolation (i.e., by the preset normal model).
- The preset third virtual object model is projected onto the preset first virtual object model to form a two-dimensional projection (for example, an xy-plane projection). The terminal then obtains, for each mesh vertex corresponding to the first mesh vertex information, the two projection components (in the x and y directions) of its first normal on the two-dimensional projection, and takes the first normal direction of each mesh vertex as the z direction. This gives the first normal information at each mesh vertex. The terminal then saves the first normal information of each point to the corresponding pixel on the normal map; the actual calculation maps the magnitudes of the x, y, and z components of the first normal information into the RGB color space, that is, the x value is stored in r, the y value in g, and the z value in b.
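- The mapping just described (x into r, y into g, z into b) is the inverse of the decoding shown earlier; as a short sketch, assuming unit-length normals:

```hlsl
// Baking-side encoding: map each component of a unit normal from [-1, 1]
// into the [0, 1] color range, so that x -> r, y -> g, z -> b.
float3 EncodeNormal(float3 n)
{
    return normalize(n) * 0.5 + 0.5;
}
```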
- In this way, before the terminal renders the preset first virtual object model, the normal map is obtained, and the first normal information is obtained by parsing the normal map.
- the normal map in the embodiment of the present application may be an object space normal map.
- As an example of the terminal using the normal map to optimize model shadow control, assume the preset third virtual object model is a face model; the display window for creating its normal map is shown in FIG. 4. The face model is projected by baking to obtain the planar unfolded view shown in FIG. 5. Finally, the terminal expands the planar unfolded view of FIG. 5 into RGB values to obtain the normal map shown in FIG. 6.
- The method for rendering simulated illumination provided by the embodiment of the present application further includes S108, as follows:
- Before rendering the preset first virtual object model, the terminal needs to acquire a scene file for establishing the three-dimensional model and establish a first scene according to the scene file; the preset first virtual object model is then displayed and processed in the first scene.
- the embodiment of the present application does not limit the type of the first scenario, and may be various scenarios such as snow scenes and deserts.
- S108 is executed at the beginning of model processing; that is, the terminal may execute S108 before S105-S107.
- The method for rendering simulated illumination provided by the embodiment of the present application may further include S109, as follows:
- After the terminal obtains the rendered second virtual object model, since the second virtual object model is the final rendered model and the entire model is processed in the first scene, the terminal can display the second virtual object model in the first scene.
- In the method for rendering simulated illumination provided by the embodiment of the present application, performing vertex space conversion on the first normal information in S102 to obtain the second normal information corresponding to the first mesh vertex information may include S1021-S1022, as follows:
- S1021 Perform vertex space conversion on the first normal information, and convert to a tangent space to obtain third normal information.
- S1022 Normalize the third normal information to obtain second normal information.
- When the terminal uses the MatCap map, the main work is to convert the first normal information from the object space to the tangent space and to remap it into the [0, 1] range suitable for sampling the texture UV.
- The tangent space used by the high mode is built on the low mode (the preset first virtual object model). When the terminal generates the normal map, it must first be confirmed which faces on the high mode correspond to which faces on the low mode; the normals of the faces on the high mode are then converted into the coordinates of the tangent space constructed on the corresponding low-mode faces.
- The normal information (i.e., the normal values) can be converted to external coordinates through the transformation matrix of the coordinate system, where the first normal information stored for the high mode corresponds to the normals in the object space of the high mode.
- The terminal performs the vertex space conversion on the first normal information; a specific implementation of the conversion to tangent space is as follows. For each mesh vertex corresponding to the first mesh vertex information, converting from object space could in principle use the model-view matrix: a tangent transformed by the model-view matrix from object space to eye space still conforms to the definition of a tangent. The normal, however, is no longer perpendicular to the tangent at the mesh vertex after this transform, so the model-view matrix does not apply to normals. To see why tangents transform this way, suppose T is a tangent, MV is the model-view matrix, and P1 and P2 are two mesh vertices connected by the tangent; then T = P2 - P1, and therefore MV·T = MV·P2 - MV·P1, i.e., the transformed tangent still connects the transformed vertices.
- The normal transformation must keep the normal perpendicular to the tangent. Assume the normal matrix is G; then the perpendicularity of the normal N and the tangent T requires formula (3): (G·N)^T · (MV·T) = 0.
- Formula (3) is satisfied when the normal matrix G is the transposed matrix of the inverse matrix of the model-view matrix, G = (MV^-1)^T, since then (G·N)^T · (MV·T) = N^T · MV^-1 · MV · T = N^T · T = 0.
- In this way, the first normal information can be converted from the model space to the tangent space by the normal matrix, and the third normal information is obtained.
- Taking Unity as an example, Unity's built-in normal matrix can be expressed as UNITY_MATRIX_IT_MV, the inverse transpose matrix of UNITY_MATRIX_MV (the model-view matrix); its function is to transform the first normal information from the model space into the tangent space to obtain the third normal information.
- This process is implemented in the vertex shader as follows:
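- A minimal sketch of the step just described, using Unity's built-in matrix; the input and thirdNormal names are assumptions rather than the patent's original listing:

```hlsl
// Multiply the model-space normal by the inverse-transpose model-view matrix
// to obtain the third normal information.
float3 thirdNormal = mul((float3x3)UNITY_MATRIX_IT_MV, input.normal);
```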
- After the terminal converts the first normal information from the object space to the tangent space and obtains the third normal information, the third normal information needs to be normalized and remapped into the range suitable for sampling the texture UV, so that the second normal information is obtained.
- The terminal normalizes the third normal information, and the process of obtaining the second normal information is implemented in the vertex shader, as follows:
- output.position = mul(UNITY_MATRIX_MVP, input.position);
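- The listing above shows only the position transform; a sketch of the remaining normalization and remapping, reusing the diffuseUVAndMatCapCoords field name from the fragment-shader snippets below (the exact output layout is an assumption):

```hlsl
// Normalize the third normal information to obtain the second normal
// information, then remap its xy projection from (-1, 1) into [0, 1] so the
// fragment shader can use it as the MatCap texture coordinate.
float3 secondNormal = normalize(mul((float3x3)UNITY_MATRIX_IT_MV, input.normal));
output.diffuseUVAndMatCapCoords.zw = secondNormal.xy * 0.5 + 0.5;
```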
- The embodiment of the present application provides a method for rendering simulated illumination, in which the first virtual model object is rendered by using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model; this may include S1041-S1042, as follows:
- S1041 Perform interpolation processing on the first color information and the second color information corresponding to the first mesh vertex information to obtain main vertex color information of each mesh vertex corresponding to the first mesh vertex information.
- S1042 Perform drawing according to the main vertex color information and the correspondence relationship of each mesh vertex to obtain a second virtual object model.
- Initially, the main vertex color corresponding to each mesh vertex in the preset first virtual object model is the first color information.
- The terminal renders the preset first virtual object model by using the light map.
- The terminal performs interpolation processing on the first color information and the second color information corresponding to the first mesh vertex information to obtain the main vertex color information of each mesh vertex corresponding to the first mesh vertex information;
- the terminal then draws according to the correspondence between the main vertex color information and each mesh vertex to obtain the second virtual object model.
- The first color information in the embodiment of the present application may be the original main vertex color information; the terminal may obtain the detail texture according to the second normal information in the normal map and thereby obtain the detail color information.
- The process by which the terminal obtains the new color information is as follows:
```hlsl
float3 detailMask  = tex2D(_DetailTex, input.detailUVCoordsAndDepth.xy).rgb;
float3 detailColor = lerp(_DetailColor.rgb, mainColor.rgb, detailMask);
```
- Specifically, the terminal may first interpolate between the detail color and the first color information, the result being the new main vertex color information, and then combine it with the second color information extracted from the light map (the first illumination information) to obtain the final main vertex color information:
```hlsl
mainColor.rgb = lerp(detailColor, mainColor.rgb,
                     saturate(input.detailUVCoordsAndDepth.z * _DetailTexDepthOffset));
float3 matCapColor = tex2D(_MatCap, input.diffuseUVAndMatCapCoords.zw).rgb;
float4 finalColor  = float4(mainColor.rgb * matCapColor * 2.0, mainColor.a);
```
- In other words, the terminal renders the preset first virtual object model by combining the original model, the normal map, and the MatCap texture, obtaining simulated ambient light while ensuring the output of shadow detail.
- This yields the correspondence between mesh vertices and color information (the correspondence between the main vertex color information and each mesh vertex); the terminal draws according to this correspondence to obtain the second virtual object model.
- Comparing three-dimensional character models: the effect of the three-dimensional character model implemented by the rendering method used in the embodiment of the present application is shown as model 1 in FIG. 7, and the effect of the three-dimensional character model implemented by the previous rendering mode is shown as model 2 in FIG. 7. The comparison shows that the display accuracy of model 1 is higher than that of model 2, i.e., the display effect of the second virtual object model is improved.
- A further result of rendering the three-dimensional virtual character model in the implementation of the present application is that the seam between head and body is weakened by the ambient light simulated with the normal map.
- Traces at the seams arise mainly from the misalignment of shadows where pieces of the normal map meet.
- Because MatCap maintains the same amount of light on both sides of a joint, such as where the head and body parts meet, it avoids making the seam obvious through different light levels when simulating ambient light; that is, the visible rendering at the seams of the blocks, parts, or cut surfaces of the three-dimensional character model is weakened.
- the embodiment of the present application provides a terminal 1, and the terminal 1 may include:
- the acquiring unit 10 is configured to acquire first mesh vertex information of the preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first The virtual object model is a preset model to be processed, and the first normal information is obtained by baking the high modulus corresponding to the preset first virtual object model;
- the converting unit 11 is configured to perform vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information;
- the obtaining unit 10 is further configured to obtain, according to the preset color setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, where the preset color setting rule is used for Characterizing the correspondence between color and light;
- the rendering unit 12 is configured to render the first virtual model object by using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
- the converting unit 11 is specifically configured to perform vertex space conversion on the first normal information, convert to a tangent space, and obtain third normal information; and the third method The line information is normalized to obtain the second normal information.
- The acquiring unit 10 is specifically configured to store, according to the second normal information and the preset color setting rule, one piece of color information at each mesh vertex of the first mesh vertex information, to obtain second color information of the first mesh vertex information, and to use the second color information as the first illumination information.
- the rendering unit 12 is configured to perform interpolation processing on the first color information and the second color information corresponding to the first mesh vertex information to obtain the first Main vertex color information of each mesh vertex corresponding to a mesh vertex information; and drawing according to the main vertex color information and the correspondence relationship of each of the mesh vertexes to obtain the second virtual object model.
- The acquiring unit 10 is further configured to: before acquiring the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, acquire second mesh vertex information corresponding to a preset third virtual object model, where the preset third virtual object model is the high mode corresponding to the preset first virtual object model; obtain a first normal direction according to the second mesh vertex information and the preset normal model; and determine, according to the preset correspondence between the second mesh vertex information and the first mesh vertices, the first normal information corresponding to the first normal direction.
- the terminal 1 further includes: an establishing unit 13;
- the acquiring unit 10 is further configured to: before acquiring the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, Get the scene file;
- the establishing unit 13 is configured to establish a first scenario according to the scenario file.
- the terminal 1 further includes: a display unit 14;
- The display unit 14 is configured to display the second virtual object model in the first scene after the drawing according to the correspondence between the main vertex color information and each mesh vertex obtains the second virtual object model.
- Since the terminal can parse the illumination information corresponding to each mesh vertex according to the fine normal information determined by the high mode, the illumination information can be used as the ambient light to render the first virtual object model. Because the accuracy of the normal information is very high, the shadow detail of the 3D design model is guaranteed and ambient light close to the real environment is simulated. Therefore, the precision of the second virtual object model rendered in this way is high, and the display effect of the second virtual object model is improved.
- the embodiment of the present application provides a terminal, where the terminal may include:
- the processor 15 is configured to invoke a rendering related program of the simulated illumination stored by the memory 16, and perform the following steps:
- acquiring first mesh vertex information of the preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, where the preset first virtual object model is a preset model to be processed and the first normal information is obtained by baking the high mode corresponding to the preset first virtual object model; performing vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtaining, according to the preset color setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, the preset color setting rule being used to represent the correspondence between color and illumination; and rendering the first virtual model object by using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
- the display 17 is configured to display the second virtual object model.
- the processor 15 is configured to perform vertex space conversion on the first normal information, convert to a tangent space, obtain third normal information, and normalize the third normal information. Processing, obtaining the second normal information;
- The processor 15 is specifically configured to store, according to the second normal information and the preset color setting rule, one piece of color information at each mesh vertex of the first mesh vertex information, to obtain second color information of the first mesh vertex information, and to use the second color information as the first illumination information.
- the processor 15 is configured to perform interpolation processing on the first color information and the second color information corresponding to the first mesh vertex information to obtain the first mesh vertex information. Corresponding main vertex color information of each mesh vertex; and drawing according to the main vertex color information and the corresponding relationship of each mesh vertex to obtain the second virtual object model.
- The processor 15 is further configured to: before acquiring the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, acquire second mesh vertex information corresponding to a preset third virtual object model, where the preset third virtual object model is the high mode corresponding to the preset first virtual object model; obtain a first normal direction according to the second mesh vertex information and a preset normal model; and determine, according to a preset correspondence between the second mesh vertex information and the first mesh vertices, the first normal information corresponding to the first normal direction.
- The processor 15 is further configured to: before acquiring the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, acquire a scene file; and establish a first scene according to the scene file.
- The display 17 is configured to display the second virtual object model in the first scene after the drawing according to the correspondence between the main vertex color information and each mesh vertex obtains the second virtual object model.
- Since the terminal can parse the illumination information corresponding to each mesh vertex according to the fine normal information determined by the high mode, the illumination information can be used as the ambient light to render the first virtual object model. Because the accuracy of the normal information is very high, the shadow detail of the 3D design model is guaranteed and ambient light close to the real environment is simulated. Therefore, the precision of the second virtual object model rendered in this way is high, and the display effect of the second virtual object model is improved.
- The above memory may be a volatile memory, such as a random access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memory; and it provides instructions and data to the processor.
- The processor may be at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a central processing unit (CPU), a controller, a microcontroller, and a microprocessor. It is to be understood that, for different devices, the electronic device implementing the above processor functions may be something else, which is not specifically limited in the embodiment of the present application.
- the functional modules in this embodiment may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software function module.
- the integrated unit may be stored in a computer readable storage medium if it is implemented in the form of a software function module and is not sold or used as a stand-alone product.
- Based on such understanding, the technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a computer-readable storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method described in this embodiment.
- The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
- The embodiment of the present application provides a computer-readable storage medium applied to a terminal, where the computer-readable storage medium stores one or more rendering programs for simulating illumination, and the one or more rendering programs for simulating illumination can be executed by one or more processors to implement the methods described in the first embodiment and the second embodiment.
- Embodiments of the present application can be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction apparatus that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
- These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Claims (16)
- A rendering method for simulating illumination, comprising: acquiring, by a terminal, first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, wherein the preset first virtual object model is a preset model to be processed, and the first normal information is obtained by baking a high mode corresponding to the preset first virtual object model; performing, by the terminal, vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtaining, by the terminal according to a preset color setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, wherein the preset color setting rule is used to represent the correspondence between color and illumination; and rendering, by the terminal, the first virtual model object by using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
- The method according to claim 1, wherein the terminal performing vertex space conversion on the first normal information to obtain the second normal information corresponding to the first mesh vertex information comprises: performing, by the terminal, vertex space conversion on the first normal information, converting it into tangent space to obtain third normal information; and normalizing, by the terminal, the third normal information to obtain the second normal information.
- The method according to claim 1, wherein the terminal obtaining, according to the preset color setting rule and the second normal information, the first illumination information corresponding to the first mesh vertex information comprises: storing, by the terminal according to the second normal information and the preset color setting rule, one piece of color information at each mesh vertex of the first mesh vertex information to obtain second color information of the first mesh vertex information, and using the second color information as the first illumination information.
- The method according to claim 3, wherein the terminal rendering the first virtual model object by using the first illumination information, the first color information, and the first mesh vertex information to obtain the second virtual object model comprises: performing, by the terminal, interpolation processing on the first color information and the second color information corresponding to the first mesh vertex information to obtain main vertex color information of each mesh vertex corresponding to the first mesh vertex information; and drawing, by the terminal, according to the correspondence between the main vertex color information and each mesh vertex to obtain the second virtual object model.
- The method according to claim 1, wherein before the terminal acquires the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, the method further comprises: acquiring, by the terminal, second mesh vertex information corresponding to a preset third virtual object model, wherein the preset third virtual object model is the high mode corresponding to the preset first virtual object model; obtaining, by the terminal, a first normal direction according to the second mesh vertex information and a preset normal model; and determining, by the terminal according to a preset correspondence between the second mesh vertex information and first mesh vertices, the first normal information corresponding to the first normal direction.
- The method according to claim 1, wherein before the terminal acquires the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information, the method further comprises: acquiring, by the terminal, a scene file, and establishing a first scene according to the scene file.
- The method according to claim 6, wherein after the terminal draws according to the correspondence between the main vertex color information and each mesh vertex to obtain the second virtual object model, the method comprises: displaying, by the terminal, the second virtual object model in the first scene.
- A terminal, comprising: an acquiring unit configured to acquire first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, wherein the preset first virtual object model is a preset model to be processed, and the first normal information is obtained by baking a high mode corresponding to the preset first virtual object model; a conversion unit configured to perform vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; the acquiring unit being further configured to obtain, according to a preset color setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, wherein the preset color setting rule is used to represent the correspondence between color and illumination; and a rendering unit configured to render the first virtual model object by using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model.
- The terminal according to claim 8, wherein the conversion unit is specifically configured to perform vertex space conversion on the first normal information, converting it into tangent space to obtain third normal information; and to normalize the third normal information to obtain the second normal information.
- The terminal according to claim 8, wherein the acquiring unit is specifically configured to store, according to the second normal information and the preset color setting rule, one piece of color information at each mesh vertex of the first mesh vertex information to obtain second color information of the first mesh vertex information, and to use the second color information as the first illumination information.
- The terminal according to claim 10, wherein the rendering unit is specifically configured to perform interpolation processing on the first color information and the second color information corresponding to the first mesh vertex information to obtain main vertex color information of each mesh vertex corresponding to the first mesh vertex information; and to draw according to the correspondence between the main vertex color information and each mesh vertex to obtain the second virtual object model.
- The terminal according to claim 8, wherein the acquiring unit is further configured to: before the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information are acquired, acquire second mesh vertex information corresponding to a preset third virtual object model, wherein the preset third virtual object model is the high mode corresponding to the preset first virtual object model; obtain a first normal direction according to the second mesh vertex information and a preset normal model; and determine, according to a preset correspondence between the second mesh vertex information and first mesh vertices, the first normal information corresponding to the first normal direction.
- The terminal according to claim 8, further comprising an establishing unit, wherein the acquiring unit is further configured to acquire a scene file before the first mesh vertex information of the preset first virtual object model, the first color information corresponding to the first mesh vertex information, and the first normal information are acquired; and the establishing unit is configured to establish a first scene according to the scene file.
- The terminal according to claim 13, further comprising a display unit, wherein the display unit is configured to display the second virtual object model in the first scene after the drawing according to the correspondence between the main vertex color information and each mesh vertex obtains the second virtual object model.
- A terminal, comprising a processor, a memory, a display, and a communication bus, wherein the processor, the memory, and the display are connected through the communication bus; the processor is configured to invoke a rendering-related program for simulating illumination stored in the memory and perform the following steps: acquiring first mesh vertex information of a preset first virtual object model, first color information corresponding to the first mesh vertex information, and first normal information, wherein the preset first virtual object model is a preset model to be processed, and the first normal information is obtained by baking a high mode corresponding to the preset first virtual object model; performing vertex space conversion on the first normal information to obtain second normal information corresponding to the first mesh vertex information; obtaining, according to a preset color setting rule and the second normal information, first illumination information corresponding to the first mesh vertex information, wherein the preset color setting rule is used to represent the correspondence between color and illumination; and rendering the first virtual model object by using the first illumination information, the first color information, and the first mesh vertex information to obtain a second virtual object model; and the display is configured to display the second virtual object model.
- A computer-readable storage medium applied to a terminal, the computer-readable storage medium storing one or more rendering programs for simulating illumination, the one or more rendering programs for simulating illumination being executable by one or more processors to implement the rendering method for simulating illumination according to any one of claims 1 to 7.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020509081A JP7386153B2 (ja) | 2017-08-18 | 2018-06-28 | 照明をシミュレートするレンダリング方法及び端末 |
KR1020207005286A KR102319179B1 (ko) | 2017-08-18 | 2018-06-28 | 조명을 시뮬레이션하기 위한 렌더링 방법, 및 단말 |
US16/789,263 US11257286B2 (en) | 2017-08-18 | 2020-02-12 | Method for rendering of simulating illumination and terminal |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710711285.3A CN109427088B (zh) | 2017-08-18 | 2017-08-18 | 一种模拟光照的渲染方法及终端 |
CN201710711285.3 | 2017-08-18 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/789,263 Continuation US11257286B2 (en) | 2017-08-18 | 2020-02-12 | Method for rendering of simulating illumination and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019033859A1 true WO2019033859A1 (zh) | 2019-02-21 |
Family
ID=65361998
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/093322 WO2019033859A1 (zh) | 2017-08-18 | 2018-06-28 | 模拟光照的渲染方法及终端 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11257286B2 (zh) |
JP (1) | JP7386153B2 (zh) |
KR (1) | KR102319179B1 (zh) |
CN (1) | CN109427088B (zh) |
WO (1) | WO2019033859A1 (zh) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110390709A (zh) * | 2019-06-19 | 2019-10-29 | 北京巴别时代科技股份有限公司 | 卡通渲染勾边圆滑方法 |
CN111739135A (zh) * | 2020-07-30 | 2020-10-02 | 腾讯科技(深圳)有限公司 | 虚拟角色的模型处理方法、装置及可读存储介质 |
JP2020198066A (ja) * | 2019-06-03 | 2020-12-10 | アイドス インタラクティブ コープ | 拡張現実アプリケーション用システム及び方法 |
CN112494941A (zh) * | 2020-12-14 | 2021-03-16 | 网易(杭州)网络有限公司 | 虚拟对象的显示控制方法及装置、存储介质、电子设备 |
CN114612600A (zh) * | 2022-03-11 | 2022-06-10 | 北京百度网讯科技有限公司 | 虚拟形象生成方法、装置、电子设备和存储介质 |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110363836A (zh) * | 2019-07-19 | 2019-10-22 | 杭州绝地科技股份有限公司 | Character rendering method, apparatus and device based on Matcap maps |
CN111667581B (zh) * | 2020-06-15 | 2023-08-22 | 网易(杭州)网络有限公司 | 3D model processing method, apparatus, device and storage medium |
CN111862344B (zh) * | 2020-07-17 | 2024-03-08 | 抖音视界有限公司 | Image processing method, device and storage medium |
CN112435285B (zh) * | 2020-07-24 | 2024-07-30 | 上海幻电信息科技有限公司 | Normal map generation method and apparatus |
CN111882631B (zh) * | 2020-07-24 | 2024-05-03 | 上海米哈游天命科技有限公司 | Model rendering method, apparatus, device and storage medium |
CN112270759B (zh) * | 2020-10-30 | 2022-06-24 | 北京字跳网络技术有限公司 | Image-based light effect processing method, apparatus, device and storage medium |
CN112700541B (zh) * | 2021-01-13 | 2023-12-26 | 腾讯科技(深圳)有限公司 | Model updating method, apparatus, device and computer-readable storage medium |
CN112819929B (zh) * | 2021-03-05 | 2024-02-23 | 网易(杭州)网络有限公司 | Water surface rendering method and apparatus, electronic device, and storage medium |
CN112884873B (zh) * | 2021-03-12 | 2023-05-23 | 腾讯科技(深圳)有限公司 | Method, apparatus, device and medium for rendering virtual objects in a virtual environment |
CN113034661B (zh) * | 2021-03-24 | 2023-05-23 | 网易(杭州)网络有限公司 | MatCap map generation method and apparatus |
CN113034350B (zh) * | 2021-03-24 | 2023-03-24 | 网易(杭州)网络有限公司 | Vegetation model processing method and apparatus |
CN113077541B (zh) * | 2021-04-02 | 2022-01-18 | 广州益聚未来网络科技有限公司 | Virtual sky picture rendering method and related device |
US11423601B1 (en) | 2021-04-29 | 2022-08-23 | Dg Holdings, Inc. | Transforming a three-dimensional virtual model to a manipulatable format |
CN113362435B (zh) * | 2021-06-16 | 2023-08-08 | 网易(杭州)网络有限公司 | Virtual component changing method, apparatus, device and medium for a virtual object model |
CN113592999B (zh) * | 2021-08-05 | 2022-10-28 | 广州益聚未来网络科技有限公司 | Virtual luminous body rendering method and related device |
CN113590330A (zh) * | 2021-08-05 | 2021-11-02 | 北京沃东天骏信息技术有限公司 | Mesh model rendering method and apparatus, and storage medium |
CN114241114B (zh) * | 2021-12-22 | 2024-09-10 | 上海完美时空软件有限公司 | Material rendering method and apparatus, storage medium, and electronic apparatus |
CN114255641B (zh) * | 2022-01-17 | 2023-09-29 | 广州易道智慧信息科技有限公司 | Method and system for producing simulated light sources in a virtual machine vision system |
CN114898032B (zh) * | 2022-05-10 | 2023-04-07 | 北京领为军融科技有限公司 | Light spot rendering method based on shader storage buffer objects |
CN115063518A (zh) * | 2022-06-08 | 2022-09-16 | Oppo广东移动通信有限公司 | Trajectory rendering method, apparatus, electronic device and storage medium |
CN116244886B (zh) * | 2022-11-29 | 2024-03-15 | 北京瑞风协同科技股份有限公司 | Virtual-real test data matching method and system |
US20240331299A1 (en) * | 2023-03-10 | 2024-10-03 | Tencent America LLC | Joint uv optimization and texture baking |
CN116778053B (zh) * | 2023-06-20 | 2024-07-23 | 北京百度网讯科技有限公司 | Texture map generation method, apparatus, device and storage medium based on a target engine |
CN117173314B (zh) * | 2023-11-02 | 2024-02-23 | 腾讯科技(深圳)有限公司 | Image processing method, apparatus, device, medium and program product |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000132707A (ja) * | 1997-11-07 | 2000-05-12 | Snk:Kk | Game system and display method in a game system |
US7952583B2 (en) * | 2000-06-19 | 2011-05-31 | Mental Images Gmbh | Quasi-monte carlo light transport simulation by efficient ray tracing |
JP2002230579A (ja) * | 2001-02-02 | 2002-08-16 | Dainippon Printing Co Ltd | Image creation method and apparatus |
US6894695B2 (en) * | 2001-04-27 | 2005-05-17 | National Semiconductor Corporation | Apparatus and method for acceleration of 2D vector graphics using 3D graphics hardware |
US8115774B2 (en) * | 2006-07-28 | 2012-02-14 | Sony Computer Entertainment America Llc | Application of selective regions of a normal map based on joint position in a three-dimensional model |
WO2008016645A2 (en) * | 2006-07-31 | 2008-02-07 | Onlive, Inc. | System and method for performing motion capture and image reconstruction |
US8629867B2 (en) * | 2010-06-04 | 2014-01-14 | International Business Machines Corporation | Performing vector multiplication |
US9679362B2 (en) * | 2010-12-30 | 2017-06-13 | Tomtom Global Content B.V. | System and method for generating textured map object images |
CN104268922B (zh) * | 2014-09-03 | 2017-06-06 | 广州博冠信息科技有限公司 | Image rendering method and image rendering apparatus |
KR102558737B1 (ko) * | 2016-01-04 | 2023-07-24 | 삼성전자주식회사 | 3D rendering method and apparatus |
CN106204735B (zh) * | 2016-07-18 | 2018-11-09 | 中国人民解放军理工大学 | Method for using Unity3D terrain data in a Direct3D 11 environment |
US10643375B2 (en) * | 2018-02-26 | 2020-05-05 | Qualcomm Incorporated | Dynamic lighting for objects in images |
Application events
- 2017-08-18: CN application CN201710711285.3A filed; granted as CN109427088B (active)
- 2018-06-28: PCT application PCT/CN2018/093322 filed; published as WO2019033859A1 (application filing)
- 2018-06-28: KR application KR1020207005286A filed; granted as KR102319179B1 (IP right grant)
- 2018-06-28: JP application JP2020509081A filed; granted as JP7386153B2 (active)
- 2020-02-12: US application US16/789,263 filed; granted as US11257286B2 (active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9639773B2 (en) * | 2013-11-26 | 2017-05-02 | Disney Enterprises, Inc. | Predicting a light probe for an outdoor image |
CN104966312A (zh) * | 2014-06-10 | 2015-10-07 | 腾讯科技(深圳)有限公司 | 3D model rendering method, apparatus and terminal device |
CN104157000A (zh) * | 2014-08-14 | 2014-11-19 | 无锡梵天信息技术股份有限公司 | Method for calculating model surface normals |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020198066A (ja) * | 2019-06-03 | 2020-12-10 | アイドス インタラクティブ コープ | System and method for augmented reality applications |
JP7089495B2 (ja) | 2019-06-03 | 2022-06-22 | アイドス インタラクティブ コープ | System and method for augmented reality applications |
CN110390709A (zh) * | 2019-06-19 | 2019-10-29 | 北京巴别时代科技股份有限公司 | Outline smoothing method for cartoon rendering |
CN111739135A (zh) * | 2020-07-30 | 2020-10-02 | 腾讯科技(深圳)有限公司 | Model processing method and apparatus for virtual characters, and readable storage medium |
CN111739135B (zh) * | 2020-07-30 | 2023-03-21 | 腾讯科技(深圳)有限公司 | Model processing method and apparatus for virtual characters, and readable storage medium |
CN112494941A (zh) * | 2020-12-14 | 2021-03-16 | 网易(杭州)网络有限公司 | Display control method and apparatus for virtual objects, storage medium, and electronic device |
CN112494941B (zh) * | 2020-12-14 | 2023-11-28 | 网易(杭州)网络有限公司 | Display control method and apparatus for virtual objects, storage medium, and electronic device |
CN114612600A (zh) * | 2022-03-11 | 2022-06-10 | 北京百度网讯科技有限公司 | Virtual avatar generation method and apparatus, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109427088B (zh) | 2023-02-03 |
JP2020531980A (ja) | 2020-11-05 |
CN109427088A (zh) | 2019-03-05 |
KR20200029034A (ko) | 2020-03-17 |
KR102319179B1 (ko) | 2021-10-28 |
US11257286B2 (en) | 2022-02-22 |
JP7386153B2 (ja) | 2023-11-24 |
US20200184714A1 (en) | 2020-06-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019033859A1 (zh) | | Rendering method and terminal for simulated illumination |
CN112316420B (zh) | | Model rendering method, apparatus, device and storage medium |
CN112215934B (zh) | | Game model rendering method, apparatus, storage medium and electronic apparatus |
CN104268922B (zh) | | Image rendering method and image rendering apparatus |
US20150325044A1 (en) | 2015-11-12 | Systems and methods for three-dimensional model texturing |
CN108230435B (zh) | | Graphics processing using cube map textures |
US20230120253A1 (en) | 2023-04-20 | Method and apparatus for generating virtual character, electronic device and readable storage medium |
CN110533707A (zh) | | Illumination estimation |
CN114119818A (zh) | | Scene model rendering method, apparatus and device |
CN113826144B (zh) | | Facial texture map generation using a single color image and depth information |
CN113112581A (zh) | | Texture map generation method, apparatus, device and storage medium for three-dimensional models |
CN101477700A (zh) | | True three-dimensional stereoscopic display method for Google Earth and Sketch Up |
JP2024508457A (ja) | | Method and system for providing temporary texture application to enhance 3D modeling |
CN114241151A (zh) | | Three-dimensional model simplification method, apparatus, computer device and computer storage medium |
WO2017219643A1 (zh) | | 3D effect generation for input text, and 3D display method and system for input text |
CN114119848B (zh) | | Model rendering method, apparatus, computer device and storage medium |
CN110569098B (zh) | | Method, system, device and medium for generating mixed 2D and 3D human-machine interfaces |
CN101511034A (zh) | | True three-dimensional stereoscopic display method for Skyline |
CN115761105A (zh) | | Illumination rendering method, apparatus, electronic device and storage medium |
CN115063330A (zh) | | Hair rendering method, apparatus, electronic device and storage medium |
CN103729888A (zh) | | 3D projection apparatus convenient to debug and debugging method thereof |
CN109598790B (zh) | | Three-dimensional model drawing method and apparatus, computing device and storage medium |
WO2023221683A1 (zh) | | Image rendering method, apparatus, device and medium |
Tang et al. | | Research on 3D Rendering Effect under Multi-strategy |
CN115131493A (zh) | | Display method, apparatus, computer device and storage medium for dynamic light effects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18845510; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2020509081; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 20207005286; Country of ref document: KR; Kind code of ref document: A |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18845510; Country of ref document: EP; Kind code of ref document: A1 |