CN113947657A - Target model rendering method, device, equipment and storage medium - Google Patents

Target model rendering method, device, equipment and storage medium

Info

Publication number
CN113947657A
CN113947657A (application CN202111210803.6A)
Authority
CN
China
Prior art keywords
texture
target
map
channel
target model
Prior art date
Legal status
Pending
Application number
CN202111210803.6A
Other languages
Chinese (zh)
Inventor
杨己力
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202111210803.6A
Publication of CN113947657A
Legal status: Pending

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G06T 15/005 General purpose rendering architectures
    • G06T 7/00 Image analysis > G06T 7/90 Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Generation (AREA)

Abstract

The application provides a target model rendering method, apparatus, device, and storage medium. The method comprises the following steps: obtaining a target model and a texture atlas, where the material of the target model is a single merged material, the vertex data of the target model comprises a material identifier that identifies the texture map used by the patch mesh at that vertex, the texture atlas comprises a plurality of original texture maps, and the UV coordinates of the texture atlas lie within a preset UV interval that defines the value ranges of the U and V coordinates; determining a UV sampling interval according to the material identifier of the patch mesh to be rendered and the number of original texture maps in the texture atlas; obtaining a target atlas from a plurality of pre-generated texture atlases at different texture levels according to the UV sampling interval and the distance and angle of the patch mesh in the view window; and performing texture sampling on the target atlas within the UV sampling interval to render the patch mesh and obtain the rendered target model.

Description

Target model rendering method, device, equipment and storage medium
Technical Field
The present application relates to game technologies, and in particular, to a method, an apparatus, a device, and a storage medium for rendering a target model.
Background
An object displayed in a game screen is the rendered result of a model of that object (hereinafter, the target model). The target model and the texture maps it uses are prepared in advance and rendered by the renderer on the user's terminal device to produce the final display effect on that device.
When the terminal device renders the target model with its texture maps, the Central Processing Unit (CPU) calls the graphics programming interface once to instruct the Graphics Processing Unit (GPU) to perform one rendering pass with one texture map. Since a target model typically uses many texture maps, the CPU must call the graphics programming interface many times. To reduce these calls, one known approach stitches multiple maps into a single texture atlas and scales the UVs of the target model so that each region of the model that needs a map lines up with that map's position in the atlas; the whole atlas can then be rendered onto the target model in one pass.
However, rendering with such a texture atlas offers little flexibility.
Disclosure of Invention
The application provides a target model rendering method, apparatus, device, and storage medium to address the low flexibility of rendering schemes in the prior art.
In a first aspect, the present application provides a method for rendering a target model, including: obtaining a target model and a texture atlas to be attached to the target model, where the material of the target model is a single merged material, the target model includes a plurality of vertices, each vertex has vertex data, the vertex data includes a material identifier that identifies the texture map used by the patch mesh of the target model at that vertex, the texture atlas includes a plurality of original texture maps, and the UV coordinates of the texture atlas lie within a preset UV interval that defines the value ranges of the U and V coordinates; determining the UV sampling interval corresponding to the material identifier of a patch mesh to be rendered according to that material identifier and the number of original texture maps in the texture atlas; obtaining a target atlas from a plurality of pre-generated texture atlases at different texture levels according to the UV sampling interval and the distance and angle of the patch mesh in the view window; and performing texture sampling on the target atlas within the UV sampling interval to render the patch mesh and obtain the rendered target model.
In a second aspect, the present application provides an apparatus for rendering a target model, including: an obtaining module configured to obtain a target model and a texture atlas to be attached to the target model, where the material of the target model is a single merged material, the target model includes a plurality of vertices, each vertex has vertex data, the vertex data includes a material identifier that identifies the texture map used by the patch mesh of the target model at that vertex, the texture atlas includes a plurality of original texture maps, and the UV coordinates of the texture atlas lie within a preset UV interval that defines the value ranges of the U and V coordinates; a determining module configured to determine the UV sampling interval corresponding to the material identifier of a patch mesh to be rendered according to that material identifier and the number of original texture maps in the texture atlas; the obtaining module being further configured to obtain a target atlas from a plurality of pre-generated texture atlases at different texture levels according to the UV sampling interval and the distance and angle of the patch mesh in the view window; and a sampling module configured to perform texture sampling on the target atlas within the UV sampling interval to render the patch mesh and obtain the rendered target model.
In a third aspect, the present application provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor; the memory stores computer-executable instructions; the processor executes computer-executable instructions stored by the memory to implement the method of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing the method according to the first aspect when executed by a processor.
According to the target model rendering method, apparatus, device, and storage medium provided by the application, a target model and a texture atlas to be attached to it are obtained; the material of the target model is a single merged material; the target model includes a plurality of vertices, each with vertex data containing a material identifier that identifies the texture map used by the patch mesh at that vertex; the texture atlas includes a plurality of original texture maps, and its UV coordinates lie within a preset UV interval defining the value ranges of the U and V coordinates. The UV sampling interval corresponding to the material identifier of a patch mesh to be rendered is determined from that identifier and the number of original texture maps in the atlas; a target atlas is obtained from a plurality of pre-generated texture atlases at different texture levels according to the UV sampling interval and the distance and angle of the patch mesh in the view window; and the target atlas is sampled within the UV sampling interval to render the patch mesh and obtain the rendered target model. Because the target model carries one merged material, the number of Draw Calls is reduced; the multiple original materials can then be recovered from the material identifiers recorded in the vertex data, and the corresponding texture map is looked up in the atlas by index, so rendering can still proceed per individual texture map. Flexibility is thus improved on top of the reduced Draw Call count.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic diagram of a prior-art rendering process;
FIG. 2 is a flowchart of a target model rendering method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a target model rendering process according to an embodiment of the present application;
FIG. 4 is a diagram of a texture atlas according to an embodiment of the present application;
FIG. 5 is an example diagram of MIP map generation according to an embodiment of the present application;
FIG. 6 is an example diagram of a processed UV sampling interval according to an embodiment of the present application;
FIG. 7 is a schematic diagram of texture sampling according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the multi-layer texture blending principle according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a target model rendering apparatus according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Interpretation of terms:
DCC software: digital content generation software. For three-dimensional artists, the commonly used DCCs are all-around three-dimensional software such as houdini, maya, max, cinema4D and blender, and can also be special tools such as Zbrush, Nuke, Substance pointer, and the like.
Draw Call: the CPU calls a graphic programming interface, such as a glDrawElements command in OpenGL or a Drawlendedprimitive command in DirectX, to command the GPU to perform rendering operations.
Shader: and a shader.
MIP map: in order to speed up rendering and reduce image aliasing, the raw texture maps imported into the engine are typically processed as a file consisting of a series of pre-computed and optimized pictures, these maps being referred to as MIP-maps or MIP-maps.
A game running on a terminal device contains many object models, such as virtual character models, building models, and grassland models. Before each target model can be displayed on the terminal device, it goes through two stages: production and rendering.
The production process of the target model comprises: modeling, UV unwrapping, material assignment, texture map painting, and rendering.
Modeling means creating the target model in three-dimensional software.
UV unwrapping means flattening the target model into a two-dimensional image in the UV coordinate system.
UV coordinates: every image is a two-dimensional plane; the horizontal direction is U and the vertical direction is V, and any pixel on the image can be located by its two-dimensional (U, V) coordinates.
Material assignment means allocating materials to the target model.
Texture map painting means painting a texture map for each image in painting software.
Rendering means outputting the target model or scene as an image file or video signal.
When the renderer renders the target model with its texture maps, if the target model uses multiple materials, the CPU must call the graphics programming interface once per material to command the GPU to render the mesh with that material's texture map. After the target model has been rendered with the texture map of one material, it is rendered with the texture maps of the other materials in turn, until rendering of the target model is complete.
As introduced above, the GPU may therefore be invoked many times, increasing the number of Draw Calls and the frequency of texture switches, which hurts rendering efficiency.
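To make the cost concrete, the following minimal sketch (illustrative only, not code from the patent) shows how the Draw Call count of this conventional pipeline scales with the number of materials:

    def count_draw_calls(face_material_ids):
        """Conventional pipeline: the mesh is split by material, and each
        split costs one graphics-API call (Draw Call) plus a texture switch."""
        return len(set(face_material_ids))

    # A model whose faces reference four distinct materials needs four
    # Draw Calls per frame; after merging into one material, it needs one.
    print(count_draw_calls([0, 1, 2, 2, 1, 0, 3]))  # -> 4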
One solution to this problem is to pack multiple texture maps into a texture map set (atlas). The original texture maps, which would otherwise be sent to the GPU one by one, are packed onto a single texture map, so the GPU needs to be invoked only once and the number of Draw Calls drops.
This is achieved by shrinking and aligning the UVs of the target model to the corresponding positions in the atlas. The principle is illustrated in fig. 1:
fig. 1 is a schematic diagram of a rendering process of the prior art.
As shown in fig. 1, the target model is a cube. UV unwrapping the cube yields the two-dimensional plane layout a; the atlas is a two-dimensional image b of the same size and orientation as that layout. During rendering, image b is attached to layout a in one pass, producing the rendered cube model.
It can thus be seen that while the prior art reduces the number of Draw Calls, it also reduces flexibility during rendering: for example, rendering cannot be performed per individual map within the atlas, so the scheme is quite limited.
In view of the above technical problems, the inventors of the present application propose the following technical idea: record the material identifiers of the target model in its vertex color channels, merge the multiple materials of the same target model into one material, and import the model into the game engine. This reduces the number of Draw Calls; the previously recorded material identifiers are then used inside the engine to index texture maps within the texture atlas and render the target model, improving rendering flexibility while keeping the Draw Call count low.
The target model rendering method provided by the embodiments of the application can be applied in a scenario involving a first terminal device and a second terminal device. The first terminal device builds a three-dimensional target model and the texture maps attached to it for an object to be displayed in a game scene, then sends the model and the texture maps it uses to the second terminal device; the second terminal device renders the target model with its game engine using those texture maps and displays the rendered model through a graphical user interface.
In a typical game scenario, the first and second terminal devices may each be a smartphone, a tablet computer, a personal computer, or the like. In a cloud-gaming scenario, the second terminal device may also be a cloud server.
Based on the application scenario, the following describes in detail how to solve the technical problem and the technical solution of the present application with reference to specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of a target model rendering method according to an embodiment of the present disclosure. As shown in fig. 2, the method for rendering the object model includes the following steps:
s201, obtaining a target model and a texture jigsaw attached to the target model, wherein the target model is made of combined materials, the target model comprises a plurality of vertexes, each vertex corresponds to vertex data, the vertex data comprises a material identifier, the material identifier is used for identifying a texture map used by a patch grid of the target model at the vertex, the texture jigsaw comprises a plurality of texture maps, and UV coordinates of the texture jigsaw are located in a preset UV interval, wherein the UV interval is used for respectively representing the value ranges of U coordinates and V coordinates.
The execution subject of this embodiment may be the second terminal device in the application scenario described above, which obtains the target model and the texture atlas attached to it from the first terminal device.
Before this step, the target model must be built in three-dimensional software and the texture maps attached to it painted in drawing software such as Photoshop (PS). A user can produce the target model and the texture maps on the first terminal device and send them to the second terminal device to be rendered there.
In this embodiment, the target model is a three-dimensional model built for a target (object) in the game scene, such as a building, a virtual character, or grass. The target model can be composed of a plurality of patch meshes, and the intersections of adjacent patch meshes are the vertices of the target model. Each vertex has vertex data, from which the second terminal device can reconstruct the target model; the vertex data includes the vertex's color information and a material identifier associated with that color information.
In this embodiment, the material identifier is recorded in the vertex color within the vertex data, associating the identifier with the vertex color; equivalently, a correspondence is established between vertex colors and material identifiers.
The material identifier can be determined from the texture pattern and texture of the target model's surface. For example, texture pattern A with texture a corresponds to material ID 1, while texture pattern A with texture b corresponds to material ID 2.
In some examples, a triangular patch mesh has three vertices, each with vertex data including the vertex's three-dimensional coordinates, the color information used by the vertex (the vertex color), its UV coordinates, and so on. The vertex color generally consists of RGB components, and the material identifier used by the patch mesh may be recorded in the R channel, the G channel, or the B channel of any of the three vertices.
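A minimal sketch of such a vertex layout, under the assumption that the material identifier is stored as an integer step in the R channel of an 8-bit vertex color (the channel choice and quantization are illustrative, not mandated by the text):

    from dataclasses import dataclass

    @dataclass
    class Vertex:
        position: tuple   # (x, y, z)
        uv: tuple         # (u, v)
        color: tuple      # vertex color (r, g, b), each channel 0-255

    def material_id_of(vertex: Vertex) -> int:
        # Hypothetical decoding: one integer step of the R channel per ID,
        # so a single channel can index up to 256 materials.
        return vertex.color[0]

    v = Vertex(position=(0.0, 1.0, 0.0), uv=(0.25, 0.5), color=(2, 0, 0))
    print(material_id_of(v))  # -> 2, i.e. material ID 2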
Where adjacent patch meshes share a vertex, their material identifiers can be distinguished through different vertices of the adjacent meshes. For example, if vertex a of patch mesh A coincides with vertex a' of patch mesh B, the material identifier of patch mesh A may be recorded on vertex a and the material identifier of patch mesh B on vertex a'.
Material identification is implemented through sub-material IDs at the modeling stage: in Maya, by selecting the faces that use the same material and assigning a specified material to them; in 3ds Max, by using a Multi/Sub-Object material.
The texture atlas is a map obtained by stitching together multiple texture maps; it is attached to the target model to add detail information. Both the U and V coordinates of the texture atlas range from 0 to 1.
In addition, in this embodiment the material of the target model is a single merged material. The merge can be performed at the modeling stage: after the multiple materials of the target model have been created and each material has been recorded in a vertex color within the vertex data, the materials are merged into one.
S202, determining the UV sampling interval corresponding to the material identifier of the patch mesh to be rendered, according to that material identifier and the number of original texture maps in the texture atlas.
In this embodiment, this can be understood as converting the target model into the UV coordinate system to obtain a two-dimensional image consisting of a number of image blocks, each image block corresponding to one material identifier.
After the three-dimensional target model is mapped into two-dimensional space, the UV coordinates of every image block of the resulting two-dimensional image lie within the preset UV interval: the U coordinate of each image block ranges from 0 to 1, as does the V coordinate.
In some optional embodiments, the material identifier of each image block may be a number. The texture maps used by the target model can then be ordered to match the image-block numbers, putting image blocks and texture maps in correspondence. The sampling interval is computed from an image block's number, and the texture map matching that interval is looked up in the texture atlas, determining the texture map the image block uses.
Fig. 3 is a schematic diagram illustrating a rendering process of a target model according to an embodiment of the present disclosure.
As shown in fig. 3 (a), converting the target model into the UV coordinate system yields a two-dimensional image containing 16 image blocks; the U coordinate of each image block ranges from 0 to 1, as does the V coordinate.
As shown in fig. 3 (b), the texture atlas of the target model contains 8 texture maps, each with a map number (the numbers shown in the figure), numbered 0-7 from left to right and top to bottom.
The UV coordinates of the texture atlas lie within the preset UV interval: the U coordinate of the whole atlas ranges from 0 to 1, as does the V coordinate.
The texture map numbered 0 is attached to the image block whose material identifier is 0, and likewise the texture maps with the other numbers are attached to the other image blocks. The sampling interval locates the region occupied by texture map 0 within the atlas, so that the map can be sampled from the atlas inside that interval and the image block rendered.
The completed result is shown in fig. 3 (c): each image block has its corresponding texture map attached.
Fig. 4 is a schematic diagram of a texture atlas provided by an embodiment of the present application.
As shown in fig. 4, assume the material identifier of the patch mesh to be rendered is 02 and the texture atlas contains 16 original texture maps, with the atlas's U coordinate ranging from 0 to 1 and its V coordinate ranging from 0 to 1. The corner UV coordinates of the UV sampling interval corresponding to material identifier 02 are then, from left to right and top to bottom, (0.5, 0), (0.75, 0), (0.5, 0.25), and (0.75, 0.25).
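The interval in fig. 4 can be reproduced with a short calculation. The sketch below assumes the atlas is a square grid of equally sized tiles numbered left to right, top to bottom, with V increasing downward; this matches the example, although the text does not mandate a particular layout:

    import math

    def uv_sampling_interval(material_id: int, tile_count: int):
        """Return (u0, v0, u1, v1), the sub-rectangle of the atlas for the
        tile indexed by material_id."""
        grid = math.isqrt(tile_count)        # 16 tiles -> a 4 x 4 grid
        size = 1.0 / grid                    # each tile spans 0.25 in U and V
        col, row = material_id % grid, material_id // grid
        u0, v0 = col * size, row * size
        return (u0, v0, u0 + size, v0 + size)

    print(uv_sampling_interval(2, 16))  # -> (0.5, 0.0, 0.75, 0.25), as in fig. 4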
S203, obtaining a target atlas from a plurality of pre-generated texture atlases at different texture levels, according to the UV sampling interval and the distance and angle of the patch mesh in the view window.
Before this step, a MIP map must be generated from the original texture atlas; the MIP map comprises a series of texture atlases of different sizes.
Fig. 5 is an exemplary diagram for generating MIP map according to an embodiment of the present application.
As shown in fig. 5, assume the original texture atlas is 256 × 256. A series of progressively smaller atlases is generated from it, each half the size of the previous one, down to 1 × 1; this series of atlases of different sizes is the MIP map.
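A sketch of the size chain just described (standard MIP-chain construction, each level half the previous one down to 1 × 1):

    def mip_chain_sizes(base: int):
        """E.g. 256 -> [256, 128, 64, 32, 16, 8, 4, 2, 1]."""
        sizes = [base]
        while base > 1:
            base //= 2           # each level is half the previous level
            sizes.append(base)
        return sizes

    print(mip_chain_sizes(256))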
Then, during rendering, the target texture level is computed from the distance and angle at which the map appears in the view window and from the UV sampling interval, and the texture atlas corresponding to the target texture level is selected from the set of atlases as the target atlas.
The step S203 may have at least three different embodiments as follows:
in some optional embodiments, step S203 comprises:
a1. If the repeat-tiling count of the patch mesh is 1, perform MIP (Multum In Parvo) mapping calculation according to the UV sampling interval and the distance and angle of the patch mesh in the view window, obtaining a first target texture level.
For the MIP mapping calculation based on the UV sampling interval and the patch mesh's distance and angle in the view window, reference may be made to the related art; details are omitted here.
a2. Obtain the first target atlas from the pre-generated texture atlases of different texture levels according to the first target texture level.
Illustratively, texture levels may be numbered: number 0 denotes the original texture atlas, number 1 the atlas reduced by half, and so on, giving each atlas its texture level.
If the first target texture level is 01, the atlas corresponding to texture level 01 is selected, yielding the first target atlas.
In other alternative embodiments, step S203 includes:
b1. If the repeat-tiling count of the patch mesh is greater than 1, process the UV sampling interval to obtain a processed UV sampling interval in which both the U coordinates and the V coordinates are continuous.
Specifically, the UV sampling interval may be processed so that the U-coordinate interval runs from 0 to 1 × the repeat-tiling count, and the V-coordinate interval likewise. Step b1 is described in detail below with reference to the drawing:
Fig. 6 is an example diagram of a processed UV sampling interval provided by an embodiment of the present application.
As shown in fig. 6, assume the corner UV coordinates of the UV sampling interval are, from left to right and top to bottom, (0.5, 0), (0.75, 0), (0.5, 0.25), (0.75, 0.25), and the patch mesh is repeat-tiled 2 times (twice along the U axis and twice along the V axis). Every repetition then samples the same interval (0.5, 0), (0.75, 0), (0.5, 0.25), (0.75, 0.25), so a seam appears at each texture junction during the MIP calculation. Therefore, in accordance with the repeat tiling of the texture, the UV sampling interval of the patch mesh is processed into (0, 0), (2, 0), (0, 2), (2, 2) (left to right, top to bottom), and the MIP calculation in the shader is performed with (0, 0), (2, 0), (0, 2), (2, 2) to obtain the second target texture level.
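A minimal sketch of this processing step, assuming, as the fig. 6 example suggests, that the processing amounts to un-wrapping the tiled UV into one continuous span so that screen-space derivatives, and hence the computed MIP level, stay smooth across tile borders:

    def processed_uv(uv, tiling: int):
        """Continuous UV used only for the MIP-level (derivative) calculation.

        A frac()-wrapped UV jumps from ~1 back to 0 at every tile border; the
        derivative spike there selects a wrong MIP level and shows up as a
        seam. The un-wrapped UV instead spans 0 .. tiling continuously."""
        return (uv[0] * tiling, uv[1] * tiling)

    print(processed_uv((1.0, 1.0), 2))  # -> (2.0, 2.0), matching fig. 6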
b2. Perform the MIP calculation according to the processed UV sampling interval and the distance and angle of the patch mesh in the view window, obtaining a second target texture level.
For the specific implementation of step b2, see the description of step a1.
b3. According to the second target texture level, obtain a second target atlas from the pre-generated texture atlases of different texture levels; the texture level of the second target atlas is the second target texture level.
For the specific implementation of step b3, see the description of step a2.
In still other alternative embodiments, step S203 includes:
c1. Process the UV sampling interval to obtain a processed UV sampling interval.
For the specific implementation of step c1, see the description of step b1.
c2. Perform the MIP calculation according to the processed UV sampling interval and the distance and angle of the patch mesh in the view window, obtaining a third target texture level.
For the specific implementation of step c2, see the description of step a1.
c3. According to the third target texture level, obtain a third target atlas from the pre-generated texture atlases of different texture levels; the texture level of the third target atlas is the third target texture level.
For the specific implementation of step c3, see the description of step a2.
As the three embodiments of step S203 show, the first two decide, based on the repeat-tiling count of the texture, whether the MIP calculation uses the coordinates of the unprocessed or the processed UV sampling interval. The third ignores the repeat-tiling count: the MIP calculation always uses the coordinates of the processed UV sampling interval, whether or not repeat tiling is required.
S204, performing texture sampling on the target atlas within the UV sampling interval, so as to render the patch mesh and obtain the rendered target model.
Specifically, texture sampling is performed according to the UV coordinates of the patch mesh and the size of the target atlas.
Fig. 7 is a schematic diagram of texture sampling provided by an embodiment of the present application. As shown in fig. 7, assuming the corner UV coordinates of the UV sampling interval are, from left to right and top to bottom, (0.5, 0), (0.75, 0), (0.5, 0.25), (0.75, 0.25), the texture within that region of the target atlas is sampled and the patch mesh rendered with it, yielding the rendered target model.
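A sketch of the final lookup under the assumptions above: the mesh UV in [0, 1] is remapped into the tile's sub-rectangle, with a frac()-style wrap handling repeated tiling; the function name and wrapping scheme are illustrative:

    def atlas_uv(mesh_uv, interval, tiling: int = 1):
        """Map a mesh UV into the target atlas within the UV sampling interval."""
        u0, v0, u1, v1 = interval
        fu = (mesh_uv[0] * tiling) % 1.0   # frac(): stay inside the tile
        fv = (mesh_uv[1] * tiling) % 1.0
        return (u0 + fu * (u1 - u0), v0 + fv * (v1 - v0))

    # Sample the centre of the tile from fig. 4 (material identifier 02):
    print(atlas_uv((0.5, 0.5), (0.5, 0.0, 0.75, 0.25)))  # -> (0.625, 0.125)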
In this embodiment, a target model and a texture atlas to be attached to it are obtained; the target model includes a plurality of vertices, each with vertex data containing a material identifier that identifies the texture map used by the patch mesh at that vertex; the texture atlas includes a plurality of original texture maps, and its UV coordinates lie within a preset UV interval defining the value ranges of the U and V coordinates. The UV sampling interval corresponding to the material identifier of the patch mesh to be rendered is determined from that identifier and the number of original texture maps in the atlas; a target atlas is obtained from a plurality of pre-generated atlases at different texture levels according to the UV sampling interval and the distance and angle of the patch mesh in the view window; and the target atlas is sampled within the UV sampling interval to render the patch mesh and obtain the rendered target model. Because the target model carries one merged material, the number of Draw Calls is reduced; the multiple materials can then be recovered from the material identifiers recorded in the vertex data and the corresponding texture map looked up in the atlas by index, so rendering can still proceed per individual texture map, improving flexibility on top of the reduced Draw Call count.
The foregoing embodiment covers the case where the patch mesh carries a single-layer texture. In practice there are also scenes in which the patch mesh blends multiple texture layers. For such scenes, each patch mesh can be understood as containing several layers of images to be rendered, each layer corresponding to one texture map; the vertex data of an image block contains an identifier for each layer, and the texture map of each layer corresponds to a material identifier. The method of this embodiment then further includes: for each layer of the multi-layer image to be rendered, determining the texture map for that layer in the texture atlas according to the material identifier carried by the layer's color channel.
For the different layers of a patch mesh to be rendered in a multi-layer blended scene, the material identifiers can be recorded in either of the following two ways.
In an optional implementation, if the patch mesh uses a multi-layer texture, the vertex data of the target model includes an identifier associated with a first color channel and an identifier associated with a second color channel. The first color channel is any one of the R, G, or B channel of the vertex color or the R, G, or B channel of the texture map; the second color channel is likewise any one of those channels; and the first and second color channels are different channels. The identifier associated with the first color channel identifies the texture map used by the bottom-layer image; the identifier associated with the second color channel identifies the texture map used by a non-bottom-layer image.
For example, a first material identifier may be recorded in the R channel of the vertex color to identify the texture map used by the bottom-layer image, and a second material identifier in the B channel to identify the texture map used by a non-bottom-layer image.
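A sketch of this two-channel scheme; the assignment of the bottom layer to R and the non-bottom layer to B follows the example above, while the one-integer-per-ID encoding is an assumption:

    def decode_layer_ids(vertex_color):
        """R channel -> texture map ID of the bottom layer,
        B channel -> texture map ID of the non-bottom layer."""
        r, g, b = vertex_color
        return {"base_layer_id": r, "overlay_layer_id": b}

    print(decode_layer_ids((3, 0, 7)))  # bottom layer uses map 3, overlay map 7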
In another optional implementation, if the patch mesh uses a multi-layer texture, the vertex data of the target model includes an identifier associated with a first color channel and a remapping identifier associated with that channel, where the first color channel is any one of the R, G, or B channel of the vertex color or the R, G, or B channel of the texture map. The identifier associated with the first color channel identifies the texture map used by the bottom-layer image area; the remapping identifier denotes the texture map used by a non-bottom-layer image area. For the remapping algorithm, reference may be made to the related art; it is not described here.
Each layer of images to be rendered may then be rendered using the method of the embodiment shown in fig. 2; see the description of that embodiment for details.
Fig. 8 is a schematic diagram of the multi-layer texture blending principle provided by an embodiment of the present application.
As shown in fig. 8, the texture map corresponding to identifier Id0 is first looked up in the texture atlas and the first-layer image to be rendered is rendered; the map corresponding to Id1 is then looked up and the second-layer image rendered; finally, the map corresponding to Id2 is looked up and the third-layer image rendered. The first, second, and third layers are stacked upward from the bottom layer.
This embodiment targets materials blended from multiple layers of detail textures, such as terrain materials: vertex colors painted in the engine control which texture appears in which area. With the vertex color serving as the index ID, each texture layer can draw on many different texture maps. For example, if the first layer of an image is one of the 16 texture maps in the atlas, there are 16 possible choices for it; the second layer then has 15 possible choices (excluding the map the first layer has already selected), and so on; with 3 texture layers in total, the image may end up with 45 texture blends. Rich surface texture variation is thus obtained with few samples, and each detail texture can be tiled to increase surface fineness and achieve more realistic surface rendering.
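A minimal blend sketch under stated assumptions: each layer's ID selects a tile color from the atlas, and per-layer weights, for example painted into vertex colors (the weight source is an assumption; the text does not specify one), control where upper layers show through:

    def blend_layers(layer_ids, weights, lookup):
        """layer_ids: atlas tile IDs, bottom layer first.
        weights: blend weight in [0, 1] for each non-bottom layer.
        lookup: function mapping a tile ID to an RGB tuple."""
        color = lookup(layer_ids[0])                 # bottom layer
        for lid, w in zip(layer_ids[1:], weights):   # blend upper layers over it
            top = lookup(lid)
            color = tuple((1 - w) * c + w * t for c, t in zip(color, top))
        return color

    tiles = {0: (0.2, 0.5, 0.2), 1: (0.5, 0.4, 0.3), 2: (0.8, 0.8, 0.8)}
    print(blend_layers([0, 1, 2], [0.5, 0.25], tiles.get))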
Further, in some embodiments, the material identifier may also be associated with material parameters, such as the repeat-tiling count and UV translation and rotation parameters. In the rendering stage these parameters can then be applied to a single texture map, further increasing rendering flexibility.
In the related art, target models composed of a great many components are common; the components differ in texture or texture pattern and must be split into a large number of materials. Under the conventional process the artist has to assess whether some materials should be merged to cut the material count, because too many materials raise the Draw Call count and hurt rendering efficiency. With the solution of this embodiment, the artist need not consider any of this. For example, a building model with a complex structure uses many materials, and the artist simply divides it into multiple material IDs by the texture and texture pattern of the materials. Supposing the building model uses 13 materials, this embodiment can merge them into 1 material at the model-making stage, so the material of the target model imported into the game engine, i.e., the model obtained in step S201, is a single material, reducing the Draw Call count. UV unwrapping and texture painting then proceed in the conventional way: a tiling (repeat-tiling) effect can be produced by letting UVs run outside the 0-1 frame, control of UV translation, scaling, and the like via material parameters is supported, and the artist can paint texture maps in the normal workflow during model production without compositing an atlas in advance. The final effect can also be previewed in real time in DCC software while adjusting UVs, tuning materials, and painting maps.
At the model-asset production stage, the method of this embodiment leaves the existing workflow untouched, so model assets can still be produced quickly and smoothly in batches. It also gives artists more freedom, for example using a large number of materials or detail textures on the same target model. Artists may likewise use material parameters in the DCC software to control texture tiling, translation, rotation, and so on. In this embodiment each material ID may correspond to a set of material parameters, so the engine can tell from the recorded material ID which UV range each ID controls and which material parameters apply within it; for example, the Tiling value and the UV translation and rotation parameters of each material ID are fetched by array index.
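A sketch of such a per-ID parameter table; all names and values below are illustrative. The recorded material ID doubles as an array index into tiling, translation, and rotation parameters:

    # Hypothetical per-material parameter arrays, indexed by material ID:
    TILING   = [1.0, 2.0, 4.0]                         # repeat count per axis
    UV_SHIFT = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.25)]   # UV translation
    UV_ROT   = [0.0, 90.0, 0.0]                        # UV rotation in degrees

    def params_for(material_id: int):
        return {"tiling": TILING[material_id],
                "uv_shift": UV_SHIFT[material_id],
                "uv_rotation_deg": UV_ROT[material_id]}

    print(params_for(1))  # material ID 1 tiles twice and shifts U by 0.1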
Merging the large number of materials once the model assets are finished reduces the number of Draw Calls in the final render. Packing the textures into a texture atlas reduces the number of textures and hence the frequency of texture switches during final rendering.
At the engine effect implementation stage, the material indexing is realized by a shader algorithm, restoring the material and texture precision seen in the DCC software. For materials blended from multiple texture layers, the number of texture samples is reduced, and a small number of samples yields many texture combinations.
On the basis of the above embodiments of the target model rendering method, fig. 9 is a schematic structural diagram of a target model rendering apparatus provided by an embodiment of the present application. As shown in fig. 9, the apparatus for rendering a target model includes: an obtaining module 91, a determining module 92, and a sampling module 93;
the obtaining module 91 is configured to obtain a target model and a texture atlas to be attached to it, where the material of the target model is a single merged material, the target model includes a plurality of vertices, each vertex has vertex data, the vertex data includes a material identifier that identifies the texture map used by the patch mesh of the target model at that vertex, the texture atlas includes a plurality of original texture maps, and the UV coordinates of the texture atlas lie within a preset UV interval that defines the value ranges of the U and V coordinates;
the determining module 92 is configured to determine the UV sampling interval corresponding to the material identifier of a patch mesh to be rendered, according to that material identifier and the number of original texture maps in the texture atlas;
the obtaining module 91 is further configured to obtain a target atlas from a plurality of pre-generated texture atlases at different texture levels according to the UV sampling interval and the distance and angle of the patch mesh in the view window;
and the sampling module 93 is configured to perform texture sampling on the target atlas within the UV sampling interval, so as to render the patch mesh and obtain the rendered target model.
In some embodiments, when obtaining the target atlas from the plurality of pre-generated texture atlases of different texture levels according to the UV sampling interval and the distance and angle of the patch mesh in the view window, the obtaining module 91 specifically: performs MIP mapping calculation according to the UV sampling interval and the patch mesh's distance and angle in the view window if the repeat-tiling count of the patch mesh is 1, obtaining a first target texture level; and obtains, according to that level, a first target atlas whose texture level is the first target texture level from the pre-generated atlases.
In some embodiments, the obtaining module 91 instead: processes the UV sampling interval to obtain a processed UV sampling interval if the repeat-tiling count of the patch mesh is greater than 1; performs MIP mapping calculation according to the processed UV sampling interval and the patch mesh's distance and angle in the view window, obtaining a second target texture level; and obtains, according to that level, a second target atlas whose texture level is the second target texture level.
In some embodiments, the obtaining module 91: processes the UV sampling interval to obtain a processed UV sampling interval; performs MIP mapping calculation according to the processed UV sampling interval and the patch mesh's distance and angle in the view window, obtaining a third target texture level; and obtains, according to that level, a third target atlas whose texture level is the third target texture level.
In some embodiments, the U coordinate ranges from 0 to 1, and the V coordinate ranges from 0 to 1.
In some embodiments, if the patch mesh uses a multi-layer texture, the vertex data of the target model includes an identifier associated with a first color channel and an identifier associated with a second color channel, where the first color channel is any one of the R, G, or B channel of the vertex color or the R, G, or B channel of the texture map, the second color channel is likewise any one of those channels, and the two are different color channels; the identifier associated with the first color channel identifies the texture map used by the bottom-layer image, and the identifier associated with the second color channel identifies the texture map used by a non-bottom-layer image.
In some embodiments, if the patch mesh uses a multi-layer texture, the vertex data of the target model includes an identifier associated with a first color channel and a remapping identifier associated with that channel, where the first color channel is any one of the R, G, or B channel of the vertex color or the R, G, or B channel of the texture map; the identifier associated with the first color channel identifies the texture map used by the bottom-layer image area, and the remapping identifier denotes the texture map used by a non-bottom-layer image area.
In some embodiments, the material identifiers are distinguished by texture pattern and texture.
The target model rendering device provided in the embodiment of the present application may be used in the technical solution for implementing the target model rendering method in the above embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
It should be noted that the division into modules above is only a logical division; in actual implementation the modules may be wholly or partially integrated into one physical entity or physically separated. These modules may all be implemented as software invoked by a processing element, all as hardware, or partly as software invoked by a processing element and partly as hardware. For example, the determining module 92 may be a separate processing element, may be integrated into a chip of the apparatus, or may be stored in the apparatus's memory as program code that a processing element of the apparatus calls to execute the module's functions; the other modules are implemented similarly. All or some of the modules may be integrated together or implemented independently. The processing element here may be an integrated circuit with signal-processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit in hardware within a processor element or by instructions in the form of software.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 10, the electronic device may include: transceiver 101, processor 102, memory 103.
The processor 102 executes the computer-executable instructions stored in the memory, causing the processor 102 to perform the solutions of the embodiments described above. The processor 102 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 103 is connected to the processor 102 via a system bus, over which they communicate; the memory 103 is used to store computer program instructions.
The transceiver 101 may be used to obtain the target model and the texture atlas attached to the target model.
The system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus. The transceiver is used to enable communication between the database access device and other computers (e.g., clients, read-write libraries, and read-only libraries). The memory may include Random Access Memory (RAM) and may also include non-volatile memory (non-volatile memory).
The electronic device provided by the embodiment of the present application may be the second terminal device of the foregoing embodiment.
The embodiment of the application further provides a chip for executing the instruction, and the chip is used for executing the technical scheme of the rendering method of the target model in the embodiment.
The embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and when the computer instructions are run on a computer, the computer is enabled to execute the technical solution of the rendering method of the target model in the foregoing embodiment.
The present application further provides a computer program product, where the computer program product includes a computer program stored in a computer-readable storage medium, where the computer program can be read by at least one processor from the computer-readable storage medium, and when the computer program is executed by the at least one processor, the technical solution of the rendering method for an object model in the foregoing embodiments can be implemented.
The rendering method of the target model in the embodiment of the application may be executed in a local terminal device or a cloud interaction system.
The cloud interaction system comprises a cloud server and user equipment and is used for running cloud applications. The cloud applications run separately.
In an alternative embodiment, cloud gaming refers to a cloud computing-based gaming mode. In the running mode of the cloud game, the running main body of the game program and the game picture presenting main body are separated, the storage and the running of the object control method are completed on a cloud game server, and the cloud game client is used for receiving and sending data and presenting the game picture, for example, the cloud game client can be a display device with a data transmission function close to a user side, such as a mobile terminal, a television, a computer, a palm computer and the like; but the cloud game server in the cloud is used for processing the game data. When a game is played, a player operates the cloud game client to send an operation instruction to the cloud game server, the cloud game server runs the game according to the operation instruction, data such as game pictures and the like are encoded and compressed, the data are returned to the cloud game client through a network, and finally the data are decoded through the cloud game client and the game pictures are output.
In an alternative embodiment, the local terminal device stores the game program and presents the game screen. The local terminal device interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed, and run on an electronic device. The local terminal device may provide the graphical user interface to the player in a variety of ways: for example, the interface may be rendered and displayed on a display screen of the terminal, or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game screen, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (11)

1. A method of rendering a target model, comprising:
obtaining a target model and a texture jigsaw to be attached to the target model, wherein the material of the target model is a combined material; the target model comprises a plurality of vertices, each vertex corresponding to vertex data; the vertex data comprises a material identifier, the material identifier being used to identify the texture map used by the patch grid of the target model at the vertex; the texture jigsaw comprises a plurality of original texture maps; and the UV coordinates of the texture jigsaw are located in a preset UV interval, the UV interval respectively representing the value ranges of the U coordinate and the V coordinate;
determining a UV sampling interval corresponding to the material identifier of the patch grid according to the material identifier of the patch grid to be rendered and the number of original texture maps in the texture jigsaw;
acquiring a target map from a plurality of pre-generated texture jigsaws with different texture levels according to the UV sampling interval and the distance and angle of the patch grid in a visual window;
and performing texture sampling on the target map according to the UV sampling interval, so as to render the patch grid and obtain a rendered target model.
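For illustration only, not part of the claims: a minimal sketch of the sampling scheme of claim 1, assuming a square texture jigsaw whose original texture maps are laid out row-major as equally sized tiles over the [0,1]x[0,1] UV square; the names material_id, num_maps, atlas and atlas.fetch are hypothetical.

import math

def uv_sampling_interval(material_id, num_maps):
    # Assumption: the jigsaw is a square, row-major grid of equal tiles.
    per_row = int(math.ceil(math.sqrt(num_maps)))
    size = 1.0 / per_row
    row, col = divmod(material_id, per_row)
    return (col * size, row * size, (col + 1) * size, (row + 1) * size)

def sample_jigsaw(atlas, uv, interval):
    # Remap the patch-local UV, wrapped into [0, 1), onto the sub-interval
    # of the jigsaw that belongs to this material identifier, then fetch.
    u0, v0, u1, v1 = interval
    u = u0 + (uv[0] % 1.0) * (u1 - u0)
    v = v0 + (uv[1] % 1.0) * (v1 - v0)
    return atlas.fetch(u, v)  # placeholder for the actual texture read

Under this layout, material_id 0 maps to the first tile of the grid, and the returned 4-tuple (u_min, v_min, u_max, v_max) plays the role of the UV sampling interval of claim 1.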
2. The method of claim 1, wherein obtaining the target map from the plurality of pre-generated texture jigsaws with different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window comprises:
if the repeated tiling count of the patch grid is 1, performing a texture MIP level calculation according to the UV sampling interval and the distance and angle of the patch grid in the visual window to obtain a first target texture level;
and acquiring a first target map from the plurality of pre-generated texture jigsaws with different texture levels according to the first target texture level, wherein the texture level of the first target map is the first target texture level.
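For illustration only: the distance and angle of the patch grid in the visual window enter a MIP computation through the screen-space derivatives of the sampling UVs, so claim 2 can be pictured with the conventional mip-level estimate. A sketch under that assumption; duv_dx, duv_dy (per-pixel UV derivatives) and tex_size are hypothetical names.

import math

def mip_level(duv_dx, duv_dy, tex_size):
    # A larger screen-space UV footprint (patch farther away, or viewed at
    # a grazing angle) yields a higher, i.e. coarser, texture level.
    rho = max(math.hypot(duv_dx[0], duv_dx[1]),
              math.hypot(duv_dy[0], duv_dy[1])) * tex_size
    return math.log2(rho) if rho > 1.0 else 0.0

The first target map would then be the pre-generated jigsaw whose texture level matches this value (or the two nearest levels, if trilinear filtering is used).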
3. The method of claim 1, wherein obtaining the target map from the plurality of pre-generated texture jigsaws with different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window comprises:
if the repeated tiling count of the patch grid is greater than 1, processing the UV sampling interval to obtain a processed UV sampling interval;
performing a texture MIP level calculation according to the processed UV sampling interval and the distance and angle of the patch grid in the visual window to obtain a second target texture level;
and acquiring a second target map from the plurality of pre-generated texture jigsaws with different texture levels according to the second target texture level, wherein the texture level of the second target map is the second target texture level.
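For illustration only: when the repeated tiling count exceeds 1, the wrapped UV jumps at tile borders, which corrupts derivative-based MIP estimates; one plausible reading of "processing the UV sampling interval" is therefore to take derivatives on the continuous, unwrapped coordinate and wrap only at fetch time. A sketch under that assumption, with hypothetical names:

def fetch_uv_for_tiled_patch(uv, tiling, interval):
    # The MIP level is assumed to be computed beforehand from uv * tiling,
    # which stays continuous across tile borders; only the fetch coordinate
    # is wrapped into [0, 1) and remapped into the jigsaw sub-interval.
    local_u = (uv[0] * tiling) % 1.0
    local_v = (uv[1] * tiling) % 1.0
    u0, v0, u1, v1 = interval
    return (u0 + local_u * (u1 - u0), v0 + local_v * (v1 - v0))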
4. The method of claim 1, wherein obtaining the target map from the plurality of pre-generated texture jigsaws with different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window comprises:
processing the UV sampling interval to obtain a processed UV sampling interval;
performing a texture MIP level calculation according to the processed UV sampling interval and the distance and angle of the patch grid in the visual window to obtain a third target texture level;
and acquiring a third target map from the plurality of pre-generated texture jigsaws with different texture levels according to the third target texture level, wherein the texture level of the third target map is the third target texture level.
5. The method according to any one of claims 1 to 4, wherein the value of the U coordinate ranges from 0 to 1, and the value of the V coordinate ranges from 0 to 1.
6. The method according to claim 5, wherein, if the texture map corresponding to the patch grid is a multi-layer texture map, the vertex data of the target model comprises an identifier associated with a first color channel and an identifier associated with a second color channel; each of the first color channel and the second color channel is any one of the R, G, and B channels of the vertex color and the R, G, and B channels of the texture map; and the first color channel and the second color channel are different color channels;
the identifier associated with the first color channel is used to identify the texture jigsaw used by the underlying image;
the identifier associated with the second color channel is used to identify the texture jigsaw used by the non-underlying image.
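For illustration only: a sketch of how the two channel-associated identifiers of claim 6 could drive a two-layer composite. It assumes the value read from the second channel also serves as the blend weight of the non-underlying layer, which the claim does not state; composite, atlases and fetch are hypothetical names (fetch as in the sketch after claim 1).

def composite(vertex_color, base_id, overlay_id, uv, atlases):
    # base_id selects the jigsaw of the underlying image, overlay_id the
    # jigsaw of the non-underlying image; both fetches return RGB tuples.
    base = atlases[base_id].fetch(*uv)
    overlay = atlases[overlay_id].fetch(*uv)
    w = vertex_color[1]  # assumption: the G channel, normalized to [0, 1]
    return tuple(b * (1.0 - w) + o * w for b, o in zip(base, overlay))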
7. The method according to claim 5, wherein, if the texture map corresponding to the patch grid is a multi-layer texture map, the vertex data of the target model comprises an identifier associated with a first color channel and a remap identifier associated with the first color channel, the first color channel being any one of the R, G, and B channels of the vertex color and the R, G, and B channels of the texture map;
the identifier associated with the first color channel is used to identify the texture jigsaw used by the underlying image area;
the remap identifier is used to identify the texture jigsaw used by the non-underlying image area.
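For illustration only: the remap identifier of claim 7 can be pictured as an index into a small lookup table that redirects non-underlying areas to a different jigsaw; resolve_jigsaw and remap_table are hypothetical names.

def resolve_jigsaw(channel_id, remap_id, is_underlying, remap_table):
    # Underlying image areas use the channel-associated identifier directly;
    # non-underlying areas go through the remap table (assumption).
    return channel_id if is_underlying else remap_table[remap_id]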
8. The method according to any one of claims 1 to 4, wherein the material identifier comprises texture patterns and textures.
9. An apparatus for rendering a target model, comprising:
the system comprises an acquisition module, a storage module and a display module, wherein the acquisition module is used for acquiring a target model and a texture jigsaw for attaching to the target model, the material of the target model is a combined material, the target model comprises a plurality of vertexes, each vertex corresponds to vertex data, the vertex data comprises a material identifier, the material identifier is used for identifying a texture chartlet used by a patch grid of the target model at the vertex, the texture jigsaw comprises a plurality of original texture chartlets, the UV coordinate of the texture jigsaw is located in a preset UV interval, and the UV interval is used for respectively representing the value ranges of a U coordinate and a V coordinate;
a determining module, used for determining a UV sampling interval corresponding to the material identifier of the patch grid according to the material identifier of the patch grid to be rendered and the number of original texture maps in the texture jigsaw;
the acquisition module being further used for acquiring a target map from a plurality of pre-generated texture jigsaws with different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window;
and a sampling module, used for performing texture sampling on the target map according to the UV sampling interval, so as to render the patch grid and obtain a rendered target model.
10. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of any of claims 1-8.
11. A computer-readable storage medium having computer-executable instructions stored therein, which when executed by a processor, are configured to implement the method of any one of claims 1-8.
CN202111210803.6A, filed 2021-10-18 (priority date 2021-10-18): Target model rendering method, device, equipment and storage medium. Status: Pending. Published as CN113947657A.

Priority Applications (1)

Application Number: CN202111210803.6A; Priority Date: 2021-10-18; Filing Date: 2021-10-18; Title: Target model rendering method, device, equipment and storage medium


Publications (1)

Publication Number: CN113947657A; Publication Date: 2022-01-18

Family

ID=79331309

Country Status (1): CN (1) CN113947657A (en)

Patent Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180150296A1 (en) * 2016-11-28 2018-05-31 Samsung Electronics Co., Ltd. Graphics processing apparatus and method of processing texture in graphics pipeline
CN108176048A (en) * 2017-11-30 2018-06-19 腾讯科技(深圳)有限公司 The treating method and apparatus of image, storage medium, electronic device
CN109448099A (en) * 2018-09-21 2019-03-08 腾讯科技(深圳)有限公司 Rendering method, device, storage medium and the electronic device of picture
US20210319621A1 (en) * 2018-09-26 2021-10-14 Beijing Kuangshi Technology Co., Ltd. Face modeling method and apparatus, electronic device and computer-readable medium
CN109377545A (en) * 2018-09-28 2019-02-22 武汉艺画开天文化传播有限公司 Model sharing, rendering method and electric terminal based on Alembic
CN109671158A (en) * 2018-11-01 2019-04-23 苏州蜗牛数字科技股份有限公司 A kind of optimization method of game picture
CN109685869A (en) * 2018-12-25 2019-04-26 网易(杭州)网络有限公司 Dummy model rendering method and device, storage medium, electronic equipment
CN109816762A (en) * 2019-01-30 2019-05-28 网易(杭州)网络有限公司 A kind of image rendering method, device, electronic equipment and storage medium
CN109961498A (en) * 2019-03-28 2019-07-02 腾讯科技(深圳)有限公司 Image rendering method, device, terminal and storage medium
CN110533755A (en) * 2019-08-30 2019-12-03 腾讯科技(深圳)有限公司 A kind of method and relevant apparatus of scene rendering
US20210146247A1 (en) * 2019-09-11 2021-05-20 Tencent Technology (Shenzhen) Company Limited Image rendering method and apparatus, device and storage medium
US20210082185A1 (en) * 2019-09-13 2021-03-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus, method and computer program for rendering a visual scene
CN111028361A (en) * 2019-11-18 2020-04-17 杭州群核信息技术有限公司 Three-dimensional model and material merging method, device, terminal, storage medium and rendering method
CN111476877A (en) * 2020-04-16 2020-07-31 网易(杭州)网络有限公司 Shadow rendering method and device, electronic equipment and storage medium
CN111540024A (en) * 2020-04-21 2020-08-14 网易(杭州)网络有限公司 Model rendering method and device, electronic equipment and storage medium
CN111508053A (en) * 2020-04-26 2020-08-07 网易(杭州)网络有限公司 Model rendering method and device, electronic equipment and computer readable medium
CN112138386A (en) * 2020-09-24 2020-12-29 网易(杭州)网络有限公司 Volume rendering method and device, storage medium and computer equipment
CN112215934A (en) * 2020-10-23 2021-01-12 网易(杭州)网络有限公司 Rendering method and device of game model, storage medium and electronic device
CN112316420A (en) * 2020-11-05 2021-02-05 网易(杭州)网络有限公司 Model rendering method, device, equipment and storage medium
CN112598770A (en) * 2020-12-22 2021-04-02 福建天晴数码有限公司 Real-time applique rendering method and system based on model three-dimensional coordinate space
CN112619154A (en) * 2020-12-28 2021-04-09 网易(杭州)网络有限公司 Processing method and device of virtual model and electronic device
CN112652044A (en) * 2021-01-05 2021-04-13 网易(杭州)网络有限公司 Particle special effect rendering method, device and equipment and storage medium
CN112785674A (en) * 2021-01-22 2021-05-11 北京百度网讯科技有限公司 Texture map generation method, rendering method, device, equipment and storage medium
CN112802172A (en) * 2021-02-24 2021-05-14 网易(杭州)网络有限公司 Texture mapping method and device of three-dimensional model, storage medium and computer equipment
CN112785679A (en) * 2021-03-15 2021-05-11 网易(杭州)网络有限公司 Rendering method and device of crystal stone model, computer storage medium and electronic equipment
CN112933597A (en) * 2021-03-16 2021-06-11 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN112915536A (en) * 2021-04-02 2021-06-08 网易(杭州)网络有限公司 Rendering method and device of virtual model
CN113077539A (en) * 2021-04-08 2021-07-06 网易(杭州)网络有限公司 Target virtual model rendering method and device and electronic equipment
CN113398583A (en) * 2021-07-19 2021-09-17 网易(杭州)网络有限公司 Applique rendering method and device of game model, storage medium and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CLAYPOOL, MARK: "On Models for Game Input with Delay - Moving Target Selection with a Mouse", 18th IEEE International Symposium on Multimedia (IEEE ISM), 10 May 2017 (2017-05-10), pages 575-582 *
赫春晓; 吕志慧; 邱天; 陈超: "Web-oriented data optimization method for city-level surface 3D models" (in Chinese), 《江苏科技信息》 (Jiangsu Science and Technology Information), vol. 37, no. 31, 10 November 2020 (2020-11-10) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494562A (en) * 2022-01-20 2022-05-13 北京中航双兴科技有限公司 Data processing method and device for terrain rendering
CN114429523A (en) * 2022-02-10 2022-05-03 浙江慧脑信息科技有限公司 Method for controlling three-dimensional model partition mapping
CN114429523B (en) * 2022-02-10 2024-05-14 浙江慧脑信息科技有限公司 Method for controlling partition mapping of three-dimensional model
CN116561081A (en) * 2023-07-07 2023-08-08 腾讯科技(深圳)有限公司 Data processing method, device, electronic equipment, storage medium and program product
CN116561081B (en) * 2023-07-07 2023-12-12 腾讯科技(深圳)有限公司 Data processing method, device, electronic equipment, storage medium and program product
CN117011492A (en) * 2023-09-18 2023-11-07 腾讯科技(深圳)有限公司 Image rendering method and device, electronic equipment and storage medium
CN117011492B (en) * 2023-09-18 2024-01-05 腾讯科技(深圳)有限公司 Image rendering method and device, electronic equipment and storage medium
CN117745974A (en) * 2024-02-19 2024-03-22 潍坊幻视软件科技有限公司 Method for dynamically generating rounded rectangular grid
CN117745974B (en) * 2024-02-19 2024-05-10 潍坊幻视软件科技有限公司 Method for dynamically generating rounded rectangular grid
CN118096981A (en) * 2024-04-22 2024-05-28 山东捷瑞数字科技股份有限公司 Mapping processing method, system and equipment based on dynamic change of model

Similar Documents

Publication Publication Date Title
CN113947657A (en) Target model rendering method, device, equipment and storage medium
TWI584223B (en) Method and system of graphics processing enhancement by tracking object and/or primitive identifiers,graphics processing unit and non-transitory computer readable medium
EP2973423B1 (en) System and method for display of a repeating texture stored in a texture atlas
US5966132A (en) Three-dimensional image synthesis which represents images differently in multiple three dimensional spaces
US7095418B2 (en) Apparatus and methods for texture mapping
US6900799B2 (en) Filtering processing on scene in virtual 3-D space
US11699263B2 (en) Apparatus, method and computer program for rendering a visual scene
US10217259B2 (en) Method of and apparatus for graphics processing
TW201539374A (en) Method for efficient construction of high resolution display buffers
US7903121B2 (en) System and method for image-based rendering with object proxies
US6664971B1 (en) Method, system, and computer program product for anisotropic filtering and applications thereof
US7158133B2 (en) System and method for shadow rendering
CN101477701A (en) Built-in real tri-dimension rendering process oriented to AutoCAD and 3DS MAX
US8698830B2 (en) Image processing apparatus and method for texture-mapping an image onto a computer graphics image
US6756989B1 (en) Method, system, and computer program product for filtering a texture applied to a surface of a computer generated object
CN115103134A (en) LED virtual shooting cutting synthesis method
US5793372A (en) Methods and apparatus for rapidly rendering photo-realistic surfaces on 3-dimensional wire frames automatically using user defined points
CN101540056B (en) Implanted true-three-dimensional stereo rendering method facing to ERDAS Virtual GIS
CN101521828B (en) Implanted type true three-dimensional rendering method oriented to ESRI three-dimensional GIS module
CN113457161A (en) Picture display method, information generation method, device, equipment and storage medium
CN108171784B (en) Rendering method and terminal
GB2339358A (en) Image mixing apparatus
CN101482977B (en) Microstation oriented implantation type true three-dimensional stereo display method
US20240193864A1 (en) Method for 3d visualization of sensor data
CN101488231B (en) Creator software oriented implantation type true three-dimensional stereo display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination