CN112057854A - Game object processing method and device, electronic equipment and computer readable medium - Google Patents

Game object processing method and device, electronic equipment and computer readable medium

Info

Publication number
CN112057854A
CN112057854A (application number CN202010951335.7A)
Authority
CN
China
Prior art keywords
target
mesh
grid
animation
polygon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010951335.7A
Other languages
Chinese (zh)
Inventor
宋辉 (Song Hui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202010951335.7A
Publication of CN112057854A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a game object processing method and apparatus, an electronic device, and a computer-readable medium, relating to the field of computer technology. The method includes: acquiring an animation to be optimized; filtering its meshes according to a target bounding box to obtain a first target mesh; and clipping the first target mesh to obtain a second target mesh to be rendered. The method provided by this application can reduce CPU overhead and performance problems while the animation runs, and can maintain the fluency of game display and interaction while ensuring animation quality.

Description

Game object processing method and device, electronic equipment and computer readable medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for processing a game object, an electronic device, and a computer-readable medium.
Background
In the field of game animation, the content presented by the animation system and its fluency are important indicators of game quality. Spine provides an animation solution based on two-dimensional skeletal animation: skeletal animation is produced in Spine by binding images to bones. However, Spine drives meshes with two-dimensional skeletal animation, which increases logic computation and thus CPU overhead.
The complete life cycle of a game system comprises three stages: initialization, per-frame logic update and rendering update, and termination. The CPU cost of each frame's logic update and rendering update directly affects the update frame rate of the game system and hence the fluency of game display and interaction. Spine can efficiently produce delicate two-dimensional animation effects, but its mesh-driving mode increases logic computation and CPU overhead.
As game production standards rise, two-dimensional game animation tends toward higher quality and more diverse application scenes, and the impact of Spine animation on CPU overhead is mainly reflected in animation quality and application scene. On the one hand, from the perspective of animation quality, higher animation effects directly raise the output precision of Spine animation resources, increasing CPU overhead. On the other hand, from the perspective of application scenes, Spine animations appear in many interfaces of a game; the canvases of these interfaces differ in size, so the displayed animation regions differ. In addition, there are many models of mobile devices on the market with uneven CPU performance, and a game needs to be adapted to as many mainstream models as possible.
For the problem of animation quality, the prior art optimizes Spine animation through design optimization, output resource optimization, and runtime Draw Call batching, but these schemes have the following disadvantages:
In design and resource optimization, resource output specifications are established and resource monitoring tools are used, but the right degree of restriction on resource precision is difficult to grasp: if resource precision is too low, animation quality drops and user experience suffers, so CPU overhead cannot be effectively reduced while animation quality is preserved.
In runtime Draw Call batching, the automatic batching algorithm offers limited optimization for common Spine animations. For diversified application scenes, outputting dedicated resources for each scene can solve the problem, but requires a longer development cycle and higher development cost, and changes in design requirements become expensive; in addition, outputting multiple resources enlarges the game package and its update patches, harming the user's download and update experience and consuming more traffic. If the runtime algorithm of Spine can be optimized to effectively reduce CPU overhead, a single resource can be output and applied to different scenes, greatly improving production efficiency and reducing production cost.
Disclosure of Invention
In view of the above, the present invention provides a game object processing method and apparatus, an electronic device, and a computer-readable medium. The method provided in this application can reduce CPU overhead and performance problems while an animation runs, and can maintain the fluency of game display and interaction while ensuring animation quality.
In a first aspect, an embodiment of the present invention provides a method for processing a game object, including: acquiring an animation to be optimized, where the animation to be optimized includes at least one game object, the game object includes a corresponding animation model, and the animation model includes a skeleton and meshes bound to the skeleton; filtering the meshes according to a target bounding box to obtain a first target mesh, where the target bounding box includes the bounding box of a mesh and/or the bounding box of a clipping region, and the clipping region represents the region of the animation to be optimized that is to be displayed on the game display interface; and clipping the first target mesh to obtain a second target mesh, where the second target mesh is the mesh to be rendered.
Further, after the first target mesh is clipped to obtain the second target mesh, the method further includes: performing batch rendering on the second target mesh.
Further, performing batch rendering on the second target mesh includes: modifying the materials of the second target mesh to the same material to obtain an optimized animation, and performing batch rendering on the meshes in the optimized animation.
Further, the mesh comprises a plurality of polygons, and filtering the meshes according to the target bounding box to obtain the first target mesh includes: filtering the meshes with an algorithm based on the bounding box of each mesh to obtain the first target mesh, where the bounding box of the first target mesh lies within the clipping region or intersects it.
Further, filtering the meshes according to the target bounding box to obtain the first target mesh includes: filtering the polygons in the mesh with an algorithm based on the bounding box of the clipping region to obtain the first target mesh, where the first target mesh intersects the bounding box of the clipping region or lies within the clipping region.
Further, filtering the meshes with an algorithm based on the bounding box of each mesh to obtain the first target mesh includes: determining the bounding box of the mesh from the vertex coordinates of the mesh; determining the positional relationship between the bounding box of the mesh and the clipping region from the position information of both, obtaining a first positional relationship; and filtering the meshes according to the first positional relationship to obtain the first target mesh.
Further, filtering the meshes according to the first positional relationship to obtain the first target mesh includes: filtering out the first target mesh from the meshes according to the first positional relationship, where the first target mesh comprises meshes of a first type and/or a second type, the bounding box of a first-type mesh lying within the clipping region and the bounding box of a second-type mesh intersecting the clipping region.
Further, filtering the polygons in the mesh with an algorithm based on the bounding box of the clipping region to obtain the first target mesh includes: determining the bounding box of the clipping region from the vertex coordinates of the clipping region; determining the positional relationship between the bounding box of the clipping region and the polygons in the mesh from the position information of both, obtaining a second positional relationship; and filtering the polygons in the mesh according to the second positional relationship to obtain the first target mesh.
Further, filtering the polygons in the mesh according to the second positional relationship to obtain the first target mesh includes: filtering out target polygons from the polygons according to the second positional relationship and determining the mesh formed by the target polygons as the first target mesh, where the target polygons comprise polygons of a first type and/or a second type, a first-type polygon lying within the bounding box of the clipping region and a second-type polygon intersecting the bounding box of the clipping region.
Further, clipping the first target mesh to obtain the second target mesh includes: performing an intersection computation between the polygons in the first target mesh and the clipping region to obtain their intersection relationship; if a polygon in the first target mesh is determined from the intersection relationship to lie outside the clipping region, discarding the polygon; if a polygon is determined to lie within the clipping region, adding it to an output mesh queue, where the polygons in the output mesh queue are the polygons to be rendered; and if at least one polygon is determined to intersect the clipping region, clipping each such polygon into at least one sub-polygon contained within the clipping region and adding the sub-polygons to the output mesh queue, where the polygons in the output mesh queue are the polygons to be rendered.
Further, clipping at least one polygon intersecting the clipping region into at least one sub-polygon contained within the clipping region includes: for each polygon intersecting the clipping region, clipping it into a first polygon inside the clipping region and a second polygon outside it; if the first polygon is a triangle, taking the first polygon as the sub-polygon; otherwise clipping the first polygon into several sub-triangles and taking the sub-triangles as the sub-polygons.
Further, filtering the meshes according to the target bounding box further comprises: filtering the meshes according to the bounding box of each mesh, and filtering the triangles in the filtered meshes according to the bounding box of the clipping region to obtain the first target mesh.
Further, the original material of the mesh's texture map has a corresponding blend mode, the blend mode including a normal blend mode and an overlay blend mode; the normal blend mode and the overlay blend mode are each determined by corresponding blending factors, and the blending factors of the two modes are computed differently.
Further, modifying the materials of the second target mesh to the same material to obtain the optimized animation includes: setting the blending factor of the meshes in the second target mesh whose original material is the normal blend mode to a first blending factor, and setting the blending factor of the meshes whose original material is the overlay blend mode to a second blending factor, obtaining the optimized animation. The calculation formula of the first blending factor is the same as that of the second blending factor. A blending factor is a parameter for blending the source color and the target color of a mesh, where the source color and the target color determine the display color of the mesh on the game display interface; the first and second blending factors ensure that, after the material of the second target mesh is modified to the same material, the output color of the second target mesh is unchanged before and after the modification.
Further, the method further comprises: setting corresponding identification information for each mesh in its UV vertex coordinates, where the identification information indicates the original material of the mesh.
Further, the batch rendering of the meshes in the optimized animation comprises: reading the identification information in the UV vertex coordinates of each mesh in the optimized animation and determining the original material of each mesh from the identification information read; determining the blending factor of each mesh from the original material read; and determining the target output color of each mesh from its blending factor, and performing batch rendering on the meshes in the optimized animation according to the target output colors.
Further, determining the blending factor of each mesh in the optimized animation according to the original material read includes: if the original material read is the normal blend mode, determining that the blending factor of the mesh is the first blending factor, calculated as
F_source = 1, F_target = 1 - Alpha¹_source, with Alpha¹_source = Alpha_source
where Alpha_source is the source blending factor and represents the influence of the Alpha value on the source color; if the original material read is the overlay blend mode, determining that the blending factor of the mesh is the second blending factor, calculated as
F_source = 1, F_target = 1 - Alpha¹_source, with Alpha¹_source = 0
Further, determining the target output color of each mesh in the optimized animation according to the blending factor of each mesh includes: determining the target output color of each mesh according to the formula
C_output = C¹_source * F_source + C_target * F_target, with C¹_source = C_source * Alpha_source
where C_output represents the target output color, C_source is the source color, C¹_source is the modified source color, and Alpha_source is the blending factor of the source color.
In a second aspect, an embodiment of the present invention provides a game object processing apparatus, including: an acquiring unit, configured to acquire the animation to be optimized, where the animation to be optimized includes at least one game object, the game object includes a corresponding animation model, and the animation model includes a skeleton and meshes bound to the skeleton; a filtering unit, configured to filter the meshes according to a target bounding box to obtain a first target mesh, where the target bounding box includes the bounding box of a mesh and/or the bounding box of a clipping region, and the clipping region represents the region of the animation to be displayed on the game display interface; and a clipping unit, configured to clip the first target mesh to obtain a second target mesh, where the second target mesh is the mesh to be rendered.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method in any one of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable medium having non-volatile program code executable by a processor, where the program code causes the processor to perform the steps of the method described in any one of the above first aspects.
In the embodiment of the invention, the animation to be optimized is first acquired; the meshes are then filtered according to the target bounding box to obtain a first target mesh, and the first target mesh is clipped to obtain a second target mesh. As described above, filtering and clipping the meshes with the target bounding box reduces CPU overhead and performance problems while the animation runs, and maintains the fluency of game display and interaction while ensuring animation quality.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a method of processing a game object according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method of processing a game object according to an embodiment of the present invention;
FIG. 3 is a flow chart of yet another method of processing a game object according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a game object processing apparatus according to an embodiment of the invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
in accordance with an embodiment of the present invention, there is provided an embodiment of a method for processing a game object, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than that presented herein.
Fig. 1 is a flowchart of a game object processing method according to an embodiment of the present invention. As shown in fig. 1, the method includes the following steps:
Step S102: acquire the animation to be optimized, where the animation to be optimized includes at least one game object, the game object includes a corresponding animation model, and the animation model includes a skeleton and meshes bound to the skeleton.
In this application, a mesh is a representation of an object in a game or computer graphics, and a triangular mesh is generally used.
Specifically, for the animation model of a game object, the animation may be created as follows: first, the skeleton of the game object is built in the Spine software, then images are bound to the bones to complete the skeletal animation, and the skeletal animation is exported as a binary-format file. In this application, the animation model of the game object includes a skeleton and the meshes bound to it, and each mesh may include several polygons, for example several triangles. As a bone moves, the mesh bound to it moves with it.
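To make the data involved concrete, the following is a minimal C++ sketch of such a skinned mesh; the type names and fields are assumptions for illustration, not types from the Spine runtime.

```cpp
#include <vector>

// Hypothetical minimal layout of a skinned 2D mesh (illustrative only).
struct Bone {
    float worldX, worldY;  // current bone position, updated every frame
    float rotation;        // current bone rotation
};

struct VertexBinding {
    int   boneIndex;       // the bone this vertex is bound to
    float weight;          // binding degree: 1 = follows the bone, 0 = free
    float localX, localY;  // vertex position relative to the bone
};

struct Mesh {
    std::vector<VertexBinding> vertices;   // skinned vertices
    std::vector<int>           triangles;  // 3 vertex indices per face
};
```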
Step S104: filter the meshes according to a target bounding box to obtain a first target mesh, where the target bounding box includes the bounding box of a mesh and/or the bounding box of a clipping region, and the clipping region represents the region of the animation to be optimized that is to be displayed on the game display interface.
Step S106: clip the first target mesh to obtain a second target mesh, where the second target mesh is the mesh to be rendered.
In this application, the purpose of filtering the meshes in the animation to be optimized is to select the meshes located within or intersecting the clipping region; the filtered first target mesh is then clipped to obtain the clipped mesh.
In the embodiment of the invention, the animation to be optimized is first acquired; the meshes are then filtered according to the target bounding box to obtain a first target mesh, and the first target mesh is clipped to obtain a second target mesh. As described above, clipping the meshes with the target bounding box reduces CPU overhead and performance problems while the animation runs, and maintains the fluency of game display and interaction while ensuring animation quality.
In an optional embodiment of the present application, after the first target mesh is clipped to obtain the second target mesh, the method further includes performing batch rendering on the second target mesh.
Specifically, the materials of the second target mesh may be modified to the same material to obtain an optimized animation, and the meshes in the optimized animation are batch-rendered. For example, in this application the meshes in the optimized animation may be batch-rendered after a batch rendering request is received.
A Draw Call is a rendering request sent to the graphics engine; the number of Draw Calls is an important performance indicator, and an excessive number of Draw Calls causes CPU computational pressure.
For Spine animations produced with high precision, large face counts, many bones, and so on, the per-frame CPU cost of updating Spine is high, which directly affects the update frame rate of the game system and the fluency of display and interaction. The performance profiler of Visual Studio 2015 is used to measure the per-frame performance overhead of Spine animation at runtime. Specifically, in this application, the game source code project is opened with Visual Studio 2015, the performance explorer is selected, the "CPU usage" option is enabled, and profiling begins. The game is started and the Spine animation performance test case is run while the tool collects runtime data in real time. After collection completes, the game exits, and the tool processes the data and generates a profiling report, in which the CPU time, percentage, and other data of each function call during the test case can be examined.
The profiling results show that the functions with the highest CPU share before optimization mainly include: triangle clipping (clipTriangles), Draw Call batch rendering (drawBatched), and skeletal animation update (update). The CPU share of these main functions before optimization is shown in Table 1 below.
TABLE 1
Function name                       CPU ratio (%)
spSkeletonClipping_clipTriangles    42.22
drawBatchedQuadsAndTriangles        7.51
spine::SkeletonAnimation::update    26.65
The influencing factors and the optimization space of these three parts of the overhead are analyzed in turn.
1. Triangle clipping: clipTriangles
Triangle clipping cuts away the regions that the animation does not need to display. As described above, to control animation production cost, one animation can serve interface displays of different sizes, and part of the animation region is selected for display through clipping. The cost of the clipping algorithm is positively correlated with the face count of the mesh, i.e. the number of polygon faces the mesh contains, and its complexity is O(n). Clipping can use a CPU clipping algorithm or a GPU clipping algorithm; the Spine runtime uses a CPU clipping algorithm. The profiling results show that the implementation provided by Spine accounts for the highest share of the overhead, so optimizing the clipping algorithm is the key to reducing overhead and increasing the frame rate. Code analysis shows that the native triangle clipping implementation of the Spine runtime has considerable room for optimization.
2. Draw Call batch rendering: drawBatched
Reducing Draw Calls reduces the CPU overhead of preparing data, switching rendering state, and transferring data for each Draw Call. Without affecting the displayed result, Draw Call batching adjusts the rendering order of display objects so that objects with the same material are rendered consecutively as much as possible; consecutively rendered objects are merged into batches, reducing the number of Draw Calls and thus the overhead. Analysis of the algorithm shows that it is difficult to optimize directly. On the other hand, the automatic batching algorithm can batch display objects that share the same material, and this property can be exploited to reduce overhead indirectly by modifying materials, as sketched below.
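A minimal sketch of the batching idea: sort display objects by material so that runs of consecutive objects sharing a material collapse into single batches. Draw-order constraints are ignored for clarity, and the names are illustrative, not the Spine batching API.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct DisplayObject {
    int materialId;  // objects sharing a material can be merged into a batch
    // vertex data omitted
};

// Sort display objects by material, then count the Draw Calls that remain:
// every run of consecutive objects with the same material is one batch.
int drawCallsAfterBatching(std::vector<DisplayObject>& objects) {
    std::stable_sort(objects.begin(), objects.end(),
                     [](const DisplayObject& a, const DisplayObject& b) {
                         return a.materialId < b.materialId;
                     });
    int drawCalls = 0;
    for (std::size_t i = 0; i < objects.size(); ++i) {
        if (i == 0 || objects[i].materialId != objects[i - 1].materialId) {
            ++drawCalls;  // the material changed, so a new batch starts
        }
    }
    return drawCalls;
}
```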
3. Skeletal animation update: update
During skeletal animation, the skeleton state is interpolated between key frames as time advances, and the vertices bound to the bones are updated; the interpolated skeleton state is the transition state between two key frames. Skeletal animation generally uses key-frame techniques, where each key frame contains information such as bone coordinates and rotation. Key frames are discrete, for example 25 frames per second, so when the game plays a skeletal animation, the skeleton state between key frames must be computed. The update overhead is positively correlated with the number of bones and vertices. The profiling results show that the bone update overhead is high; code analysis shows that the optimization space of this algorithm in the Spine runtime is small.
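A minimal sketch of the key-frame interpolation described above, assuming simple linear interpolation of position and rotation between two adjacent key frames (a real runtime also handles animation curves and the bone hierarchy); the names are illustrative.

```cpp
struct BoneKeyFrame {
    float time;      // key-frame time in seconds
    float x, y;      // bone position at this key frame
    float rotation;  // bone rotation at this key frame
};

// Compute the transition state of one bone between two discrete key frames
// (assumes a.time < time < b.time).
BoneKeyFrame interpolateBoneState(const BoneKeyFrame& a,
                                  const BoneKeyFrame& b, float time) {
    float t = (time - a.time) / (b.time - a.time);  // 0..1 between the frames
    BoneKeyFrame out;
    out.time     = time;
    out.x        = a.x + (b.x - a.x) * t;
    out.y        = a.y + (b.y - a.y) * t;
    out.rotation = a.rotation + (b.rotation - a.rotation) * t;
    return out;
}
```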
In conclusion, the inventor used a profiling tool to find the parts of the Spine runtime with the highest overhead, and analyzed the factors influencing the overhead and the room for optimization. The analysis shows that triangle clipping is the main cause of high CPU overhead in the Spine runtime; the algorithm has a large optimization space, and optimizing the clipping algorithm yields the greatest benefit. As for Draw Call batching, Spine typically uses two different materials, which hinders the batching effect, so Draw Call batching can be further improved by modifying the materials. Therefore, the invention optimizes the Spine runtime in two respects, the triangle clipping algorithm and Draw Call batching, and the game object processing method is described in detail below.
According to the above description, the existing triangle clipping algorithm is optimized: the bounding box of the mesh or the bounding box of the clipping region is used to filter the meshes, and the filtered meshes intersect the clipping region or lie within it. After the filtered first target mesh is obtained, it is clipped to obtain the second target mesh, which is the mesh rendered to the game display interface. In this application, obtaining the clipped mesh by filtering the meshes based on the target bounding box and clipping the filtered meshes can be carried out in the following ways.
First way:
(11) Filter the meshes with an algorithm based on the bounding box of each mesh to obtain a first target mesh, where the bounding box of the first target mesh lies within the clipping region or intersects it.
In this application, the meshes may be filtered using an algorithm based on the bounding box of each mesh, obtaining a first target mesh that intersects the clipping region or lies within it. After the first target mesh is obtained, it is clipped; the main clipping objects are the meshes intersecting the clipping region, yielding the second target mesh. The specific clipping process is described below.
Second way:
(21) Filter the polygons in each mesh with an algorithm based on the bounding box of the clipping region to obtain a first target mesh, where the first target mesh intersects the bounding box of the clipping region or lies within the clipping region.
In this application, the polygons (for example, triangles) in each mesh may be filtered using an algorithm based on the bounding box of the clipping region, obtaining the polygons intersecting the clipping region or lying within it; the first target mesh is formed from the polygons so determined. After the first target mesh is obtained, it is clipped; the main clipping objects are the polygons intersecting the clipping region, and the mesh formed by the clipped polygons is the second target mesh. The specific clipping process is described below.
Third way:
(31) Filter the meshes with the algorithm based on the bounding box of each mesh to obtain filtered meshes;
(32) Filter the triangles in the filtered meshes with the algorithm based on the bounding box of the clipping region to obtain the first target mesh.
In this application, the meshes may first be filtered using the algorithm based on the bounding box of each mesh, keeping as filtered meshes those that intersect the clipping region or lie within it.
After the filtered meshes are obtained, the polygons (for example, triangles) in each filtered mesh may be filtered using the algorithm based on the bounding box of the clipping region, obtaining the polygons intersecting the clipping region or lying within it; the first target mesh is formed from the polygons so determined. After the first target mesh is obtained, it is clipped; the main clipping objects are the polygons intersecting the clipping region, and the mesh formed by the clipped polygons is the second target mesh. The specific clipping process is described below.
As can be seen from the above, the existing triangle clipping algorithm must perform an intersection computation between every triangle in the mesh and the clipping region, and then clip each triangle according to the computed intersection relationship. When the number of triangles is large, these intersection computations impose a large CPU overhead. For this reason, the present application filters the meshes with an algorithm based on the target bounding box; the filtering process does not need to compute the intersection relationship between each triangle and the clipping region. Only after the filtered meshes are obtained are they clipped by computing intersection relationships, thereby reducing CPU overhead.
In an alternative embodiment, as shown in fig. 2, the step of filtering the meshes with an algorithm based on the bounding box of each mesh to obtain the first target mesh includes the following process:
Step S11: determine the bounding box of the mesh from the vertex coordinates of the mesh.
Step S12: determine the positional relationship between the bounding box of the mesh and the clipping region from the position information of the bounding box and of the clipping region, obtaining a first positional relationship.
Step S13: filter the meshes according to the first positional relationship to obtain the first target mesh. Specifically, step S13 filters out the first target mesh from the meshes according to the first positional relationship, where the first target mesh comprises meshes of a first type and/or a second type: the bounding box of a first-type mesh lies within the clipping region, and the bounding box of a second-type mesh intersects the clipping region.
Specifically, in this embodiment the meshes are filtered using the mesh bounding boxes (AABBs); the purpose of the filtering is to discard the meshes outside the clipping region and retain the meshes inside it.
It should be noted that in this application corresponding attachments are set in advance for the animation to be optimized, including clipping attachments and mesh attachments. A mesh attachment stores a triangular mesh of the animation model; a clipping attachment stores the size and position of the clipping region, indicating that only the triangles inside the clipping region are retained for rendering when the animation to be optimized is displayed on the game display interface. Corresponding attachment types are set for the clipping attachment and the mesh attachment.
Specifically, when the game runs, every attachment is read for every frame of the animation to be optimized, and whether it is a mesh attachment or a clipping attachment is determined from the attachment type read.
If a clipping attachment is traversed, the operation of clipping the skeleton to be optimized begins, using the polygonal region of the clipping attachment.
If a mesh attachment is traversed, the mesh world coordinates may be computed, as follows. Since the mesh is bound to the skeleton, when the animation is played the current position and orientation of each bone are computed from the key frames. The world coordinates of the mesh vertices are then computed from the binding weights and the positions of the vertices relative to the bones, giving the world coordinates of the mesh. The weight expresses the degree of binding: a weight of 1 means the mesh vertex keeps its position relative to the bone unchanged, and a weight of 0 means the vertex coordinates are not controlled by the bone. Next, the bounding box (AABB) of the mesh is determined from the vertex coordinates of the mesh. The AABB can be represented by a lower-left corner coordinate and an upper-right corner coordinate: while computing the mesh world coordinates, the triangle vertices are traversed, the minimum x and y values over all vertices are taken as the lower-left corner, and the maximum x and y values are taken as the upper-right corner, yielding the mesh AABB.
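A minimal sketch of this AABB construction, assuming the mesh world coordinates are available as a flat x,y array; the names are illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Axis-aligned bounding box stored as lower-left and upper-right corners.
struct AABB {
    float minX, minY;  // lower-left corner
    float maxX, maxY;  // upper-right corner
};

// Build the mesh AABB while traversing its world-space vertices: the minimum
// x and y over all vertices give the lower-left corner, the maximum x and y
// give the upper-right corner. worldXY is a flat array x0,y0,x1,y1,...
// (assumes at least one vertex).
AABB computeMeshAABB(const std::vector<float>& worldXY) {
    AABB box{worldXY[0], worldXY[1], worldXY[0], worldXY[1]};
    for (std::size_t i = 2; i + 1 < worldXY.size(); i += 2) {
        box.minX = std::min(box.minX, worldXY[i]);
        box.minY = std::min(box.minY, worldXY[i + 1]);
        box.maxX = std::max(box.maxX, worldXY[i]);
        box.maxY = std::max(box.maxY, worldXY[i + 1]);
    }
    return box;
}
```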
After the bounding box of a mesh is computed, the positional relationship between the bounding box of the mesh and the clipping region can be determined from the position information of both, giving the first positional relationship. The meshes are then filtered according to the first positional relationship to obtain the meshes located in or intersecting the clipping region, i.e. the first target mesh described above. The filtering principle is as follows (a sketch of this three-way classification is given after step S23 below):
If the bounding box AABB of the mesh is outside the clipping region, the entire mesh is discarded.
If the bounding box AABB of the mesh is inside the clipping region, every polygon in the mesh is traversed and all polygons are added to the output mesh queue for batch rendering of the polygons in the queue.
If the bounding box AABB of the mesh intersects the clipping region, the triangles in the mesh are clipped using the Spine native clipping algorithm. The specific clipping process is as follows:
Step S21: perform an intersection computation between the polygons in the first target mesh and the clipping region to obtain the intersection relationship between them.
Step S22: if a polygon in the first target mesh is determined from the intersection relationship to lie within the clipping region, add the polygon to the output mesh queue, where the polygons in the output mesh queue are the polygons to be rendered.
Step S23: if at least one polygon in the first target mesh is determined from the intersection relationship to intersect the clipping region, clip each such polygon into at least one sub-polygon contained within the clipping region and add the sub-polygons to the output mesh queue, where the polygons in the output mesh queue are the polygons to be rendered.
In this application, in step S23, each polygon intersecting the clipping region is clipped into a first polygon inside the clipping region and a second polygon outside it. If the first polygon is a triangle, it is taken as the sub-polygon; otherwise the first polygon is clipped into several sub-triangles, which are taken as the sub-polygons. For example, if the first polygon is a quadrilateral, it is clipped into two sub-triangles.
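A minimal sketch of the three-way classification referred to above, under the simplifying assumption that the clipping region is represented by its own axis-aligned bounding box (the clipping attachment stores a polygon in general); the type and function names are illustrative.

```cpp
struct AABB {
    float minX, minY;  // lower-left corner
    float maxX, maxY;  // upper-right corner
};

enum class Relation { Outside, Inside, Intersect };

// Coarse three-way test between a mesh AABB and the clipping region,
// simplified here by testing against the clipping region's own AABB.
Relation classifyMeshAgainstClip(const AABB& mesh, const AABB& clip) {
    // Separated on either axis: the whole mesh is discarded.
    if (mesh.maxX < clip.minX || mesh.minX > clip.maxX ||
        mesh.maxY < clip.minY || mesh.minY > clip.maxY) {
        return Relation::Outside;
    }
    // Fully contained: all polygons go straight to the output mesh queue.
    if (mesh.minX >= clip.minX && mesh.maxX <= clip.maxX &&
        mesh.minY >= clip.minY && mesh.maxY <= clip.maxY) {
        return Relation::Inside;
    }
    // Otherwise the mesh straddles the boundary and its triangles are
    // handed to the precise clipping algorithm (steps S21 to S23).
    return Relation::Intersect;
}
```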
In this embodiment, polygons are described using triangles as the example. Specifically, for every frame of the animation to be optimized, every attachment is read, and whether it is a mesh attachment or a clipping attachment is determined from the attachment type. A mesh attachment stores a triangular mesh of the animation model; a clipping attachment stores the size and position of the clipping region, indicating that only the triangles inside the clipping region are retained for rendering when the animation to be optimized is displayed on the game display interface. Corresponding attachment types are set for the clipping attachment and the mesh attachment.
If a clipping attachment is traversed, clipping begins using the polygonal region of the clipping attachment.
If a mesh attachment is traversed, the mesh world coordinates may be computed, and each triangle is then traversed for clipping according to these world coordinates: each triangle in the filtered mesh is intersected with the clipping region to compute an intersection relationship, which is one of: the triangle is outside the clipping region, the triangle is inside the clipping region, or the triangle intersects the clipping region.
If the triangle is outside the clipping region, it is discarded.
If the triangle is inside the clipping region, it is added to the output mesh queue for batch rendering of the polygons in the queue.
If the triangle intersects the clipping region, it is split into at least one sub-triangle contained within the clipping region, and the resulting sub-triangles are added to the output mesh queue for batch rendering of the polygons in the queue. Specifically, when the intersection relationship is computed, the intersection points of the clipping region and the triangle can be computed; the intersection points and the triangle vertices inside the clipping region are then re-assembled into several triangles, splitting the original triangle into sub-triangles contained in the clipping region.
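A minimal sketch of this splitting step, assuming a rectangular clipping region for simplicity (Spine's clipping attachment defines a general polygon, which would be handled edge by edge in the same way); the Sutherland-Hodgman style edge clipping and the triangle-fan re-assembly follow the description above.

```cpp
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };
struct Rect { float minX, minY, maxX, maxY; };

// One Sutherland-Hodgman step: keep the vertices on the inner side of a
// boundary and insert the intersection point wherever an edge crosses it.
template <typename Inside, typename Intersect>
std::vector<Vec2> clipAgainstEdge(const std::vector<Vec2>& poly,
                                  Inside inside, Intersect intersect) {
    std::vector<Vec2> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const Vec2& a = poly[i];
        const Vec2& b = poly[(i + 1) % poly.size()];
        if (inside(a)) out.push_back(a);
        if (inside(a) != inside(b)) out.push_back(intersect(a, b));
    }
    return out;
}

// Clip one triangle against a rectangular clipping region and re-assemble
// the surviving polygon into sub-triangles (a fan), as described above.
// Returns a flattened list with 3 vertices per sub-triangle.
std::vector<Vec2> clipTriangleToRect(Vec2 t0, Vec2 t1, Vec2 t2, const Rect& r) {
    auto lerp = [](const Vec2& a, const Vec2& b, float t) {
        return Vec2{a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t};
    };
    std::vector<Vec2> poly{t0, t1, t2};
    poly = clipAgainstEdge(poly, [&](const Vec2& p) { return p.x >= r.minX; },
        [&](const Vec2& a, const Vec2& b) { return lerp(a, b, (r.minX - a.x) / (b.x - a.x)); });
    poly = clipAgainstEdge(poly, [&](const Vec2& p) { return p.x <= r.maxX; },
        [&](const Vec2& a, const Vec2& b) { return lerp(a, b, (r.maxX - a.x) / (b.x - a.x)); });
    poly = clipAgainstEdge(poly, [&](const Vec2& p) { return p.y >= r.minY; },
        [&](const Vec2& a, const Vec2& b) { return lerp(a, b, (r.minY - a.y) / (b.y - a.y)); });
    poly = clipAgainstEdge(poly, [&](const Vec2& p) { return p.y <= r.maxY; },
        [&](const Vec2& a, const Vec2& b) { return lerp(a, b, (r.maxY - a.y) / (b.y - a.y)); });

    std::vector<Vec2> fan;  // the intersection points and the inside vertices
    for (std::size_t i = 1; i + 1 < poly.size(); ++i) {
        fan.push_back(poly[0]);      // re-organize the surviving polygon
        fan.push_back(poly[i]);      // into triangles fanning out from
        fan.push_back(poly[i + 1]);  // its first vertex
    }
    return fan;
}
```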
In an alternative embodiment, as shown in fig. 3, the step of filtering the polygons in the mesh with an algorithm based on the bounding box of the clipping region to obtain the first target mesh includes the following process:
Step S31: determine the bounding box of the clipping region from the vertex coordinates of the clipping region.
Step S32: determine the positional relationship between the bounding box of the clipping region and the polygons from the position information of the bounding box of the clipping region and of the polygons in the mesh, obtaining a second positional relationship.
Step S33: filter the polygons in the mesh according to the second positional relationship to obtain the first target mesh. Specifically, step S33 filters out target polygons from the polygons according to the second positional relationship and determines the mesh formed by the target polygons as the first target mesh, where the target polygons comprise polygons of a first type and/or a second type: a first-type polygon lies within the bounding box of the clipping region, and a second-type polygon intersects the bounding box of the clipping region.
Specifically, in this embodiment the triangles in the mesh are filtered using the bounding box AABB of the clipping region: triangles outside the clipping AABB are discarded, and triangles inside it are accepted.
It should be noted that in this application corresponding attachments are set in advance for the animation to be optimized, including clipping attachments and mesh attachments. A mesh attachment stores a triangular mesh of the animation model; a clipping attachment stores the size and position of the clipping region, indicating that only the triangles inside the clipping region are retained for rendering when the animation to be optimized is displayed on the game display interface. Corresponding attachment types are set for the clipping attachment and the mesh attachment.
Specifically, when the game runs, every attachment is read for every frame of the animation to be optimized, and whether it is a mesh attachment or a clipping attachment is determined from the attachment type.
If a clipping attachment is traversed, the operation of clipping the skeleton to be optimized begins using the polygonal region of the clipping attachment, and the bounding box AABB of the clipping region is computed.
If a mesh attachment is traversed, the mesh world coordinates may be computed, as follows. Since the mesh is bound to the skeleton, when the animation is played the current position and orientation of each bone are computed from the key frames. The world coordinates of the mesh vertices are then computed from the binding weights and the positions of the vertices relative to the bones, giving the world coordinates of the mesh. The weight expresses the degree of binding: a weight of 1 means the mesh vertex keeps its position relative to the bone unchanged, and a weight of 0 means the vertex coordinates are not controlled by the bone. Next, each triangle is traversed according to the mesh world coordinates for clipping: an intersection computation is performed between each triangle and the bounding box AABB of the clipping region to determine their intersection relationship, which is one of: the triangle is outside the clipping AABB, the triangle is inside the clipping AABB, or the triangle intersects the clipping AABB. (A sketch of this coarse per-triangle test is given at the end of this embodiment below.)
If the triangle is outside the bounding box AABB of the clipping region, the triangle is discarded.
If the triangle is inside the bounding box AABB of the clipping region, the triangle is added to the output mesh queue for batch rendering of the triangles in the queue.
If the triangle intersects the bounding box AABB of the clipping region, the triangle is clipped using the Spine native clipping algorithm. The specific clipping process is as follows:
Step S41: perform an intersection computation between the polygons in the first target mesh and the clipping region to obtain the intersection relationship between them.
Step S42: if a polygon in the first target mesh is determined from the intersection relationship to lie outside the clipping region, discard the polygon.
Step S43: if a polygon in the first target mesh is determined from the intersection relationship to lie within the clipping region, add the polygon to the output mesh queue, where the polygons in the output mesh queue are the polygons to be rendered.
Step S44: if at least one polygon in the first target mesh is determined from the intersection relationship to intersect the clipping region, clip each such polygon into at least one sub-polygon contained within the clipping region and add the sub-polygons to the output mesh queue, where the polygons in the output mesh queue are the polygons to be rendered.
In this embodiment, polygons are again described using triangles as the example. Specifically, for every frame of the animation to be optimized, every attachment is read, and whether it is a mesh attachment or a clipping attachment is determined from the attachment type. A mesh attachment stores a triangular mesh of the animation model; a clipping attachment stores the size and position of the clipping region, indicating that only the triangles inside the clipping region are retained for rendering. Corresponding attachment types are set for the clipping attachment and the mesh attachment.
If a clipping attachment is traversed, clipping begins using the polygonal region of the clipping attachment.
If a mesh attachment is traversed, the mesh world coordinates may be computed, and each triangle is then traversed for clipping according to these world coordinates: each triangle in the filtered mesh is intersected with the clipping region to compute an intersection relationship, which is one of: the triangle is outside the clipping region, the triangle is inside the clipping region, or the triangle intersects the clipping region.
If the triangle is outside the clipping region, it is discarded.
If the triangle is inside the clipping region, it is added to the output mesh queue for batch rendering of the polygons in the queue.
If the triangle intersects the clipping region, it is split into several sub-triangles contained within the clipping region, and the resulting sub-triangles are added to the output mesh queue for batch rendering of the polygons in the queue. Specifically, when the intersection relationship is computed, the intersection points of the clipping region and the triangle can be computed; the intersection points and the triangle vertices inside the clipping region are then re-assembled into several triangles, splitting the original triangle into sub-triangles contained in the clipping region.
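A minimal sketch of the coarse per-triangle test against the clipping region's AABB referred to above. A conservative bounding-box comparison is used so that no overlapping triangle is wrongly discarded; the names are illustrative.

```cpp
#include <algorithm>

struct Vec2 { float x, y; };
struct AABB { float minX, minY, maxX, maxY; };

enum class TriRelation { Outside, Inside, Intersect };

// Coarse three-way test of one triangle against the clipping region's AABB,
// using the triangle's own bounding box. A false "Intersect" answer is
// harmless: such triangles simply fall through to the precise clipping step.
TriRelation classifyTriangle(const Vec2& a, const Vec2& b, const Vec2& c,
                             const AABB& clip) {
    float minX = std::min({a.x, b.x, c.x}), maxX = std::max({a.x, b.x, c.x});
    float minY = std::min({a.y, b.y, c.y}), maxY = std::max({a.y, b.y, c.y});
    if (maxX < clip.minX || minX > clip.maxX ||
        maxY < clip.minY || minY > clip.maxY) {
        return TriRelation::Outside;   // discard: cannot overlap the box
    }
    if (minX >= clip.minX && maxX <= clip.maxX &&
        minY >= clip.minY && maxY <= clip.maxY) {
        return TriRelation::Inside;    // accept: emit to the output queue
    }
    return TriRelation::Intersect;     // clip with the native algorithm
}
```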
As described above, the existing triangle clipping algorithm must perform an intersection computation between every triangle in the mesh and the clipping region and then clip the triangles according to the computed intersection relationships; when the number of triangles is large, these intersection computations impose a large CPU overhead. For this reason, the present application first filters the meshes with a clipping algorithm based on the target bounding box, which does not need to compute the intersection relationship of every triangle with the clipping region; only the filtered meshes are then clipped by computing intersection relationships, reducing CPU overhead.
In this application, after the clipped meshes are obtained, their materials can be modified to the same material, yielding the optimized animation.
In an optional embodiment, the original material of the mesh's texture map has a corresponding blend mode: a normal blend mode or an overlay blend mode. The two modes are each determined by corresponding blending factors, and the blending factors of the normal blend mode and the overlay blend mode are computed differently.
Based on this, the step of modifying the materials of the clipped meshes to the same material to obtain the optimized animation includes the following process:
Step S1061: set the blending factor of the meshes in the second target mesh whose original material is the normal blend mode to a first blending factor, and set the blending factor of the meshes whose original material is the overlay blend mode to a second blending factor, obtaining the optimized animation. The calculation formula of the first blending factor is the same as that of the second blending factor. A blending factor is a parameter for blending the source color and the target color of a mesh, where the source color and the target color determine the display color of the mesh on the game display interface. The first and second blending factors are chosen so that, after the material of the second target mesh is modified to the same material, the output color of the second target mesh is unchanged before and after the modification.
By examining the Draw Call batching process with an Nvidia Nsight frame capture, the inventor found that the Spine animation produced a large number of Draw Calls. Analysis showed that the meshes attached to the Spine animation use both a normal blend mode and an overlay blend mode, so they cannot be batched automatically. In this application, the meshes whose original materials use the normal blend mode and the overlay blend mode are implemented with the same material, and both blend modes are realized in the shader. Once the materials are unified into one material, automatic batching can merge the meshes with the same material, effectively reducing the number of Draw Calls.
Blend mode is one way to achieve a semi-transparent effect; the output color is calculated using the following formula:
C_output = C_source * F_source + C_target * F_target
where C_source is the source color, the color value of the triangle currently being rendered; C_target is the target color, the color already output at the location of the triangle currently being rendered; F_source is the source blending factor value, which specifies the influence of the Alpha value on the source color; and F_target is the target blending factor value, which specifies the influence of the Alpha value on the target color.
In the normal blend mode used by Spine, F_source is SRC_ALPHA, i.e. the Alpha component of the source color C_source, and F_target is ONE_MINUS_SRC_ALPHA, i.e. 1 - Alpha_source. The output color of a triangle in the normal blend mode is therefore:
C_output = C_source * Alpha_source + C_target * (1 - Alpha_source)
In the overlay blend mode, F_source is SRC_ALPHA and F_target is ONE, the constant 1. The output color achieved by the overlay blend mode is:
C_output = C_source * Alpha_source + C_target * 1
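As a concrete check of these two formulas, take C_source = (1.0, 0.0, 0.0) (pure red), Alpha_source = 0.5, and C_target = (0.0, 0.0, 1.0) (pure blue). The normal blend mode outputs C_output = (1,0,0) * 0.5 + (0,0,1) * (1 - 0.5) = (0.5, 0, 0.5), an even mix of source and target, while the overlay blend mode outputs C_output = (1,0,0) * 0.5 + (0,0,1) * 1 = (0.5, 0, 1), adding the attenuated source on top of the full target.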
As can be seen from the above, the source and target blending factor values differ between the normal blend mode and the overlay blend mode; that is, the two modes are determined by their respective blending factors, and the blending factors of the normal blend mode and the overlay blend mode are computed differently.
After the optimization is enabled, the blending factors of the normal blend mode and the superposition blend mode are modified, the mode information is encoded into the vertex UV coordinates, and the shader decodes the blend mode from the vertex UV coordinates. Modified data are marked with a superscript 1, written in the plain-text formulas below as C1_source and Alpha1_source; for example, C1_source is the modified source color value. After the material is modified, for a mesh whose original material is the normal blend mode, the blending factor F_source is changed to ONE, i.e., the constant 1, while F_target keeps ONE_MINUS_SRC_ALPHA, i.e., 1 - Alpha1_source, where Alpha1_source is the Alpha component of C1_source. The shader program is shown as code lines 1 through 9 below, where isOneBlendMode on code line 7 equals 0.
The output color of the mesh in the modified normal blend mode is therefore:

C_output = C1_source * 1 + C_target * (1 - Alpha1_source)

From code line 8 we can get:

C1_source = C_source * Alpha_source

Since the vertex UV that encodes the normal blend mode is less than 1, it can be derived from code lines 3, 7 and 9 that isOneBlendMode = 0, and therefore:

Alpha1_source = Alpha_source * (1 - isOneBlendMode) = Alpha_source

Substituting C1_source and Alpha1_source into the formula above, the output color of the mesh in the modified normal blend mode is obtained as:

C_output = C_source * Alpha_source + C_target * (1 - Alpha_source)

which is identical to the output of the original normal blend mode.
For the modified superposition blend mode, the same blending factors as the modified normal blend mode are used, i.e., F_source is changed to ONE, the constant 1, and F_target is changed to ONE_MINUS_SRC_ALPHA, i.e., 1 - Alpha1_source, where isOneBlendMode on code line 7 equals 1. The output color in the modified superposition blend mode is therefore:

C_output = C1_source * 1 + C_target * (1 - Alpha1_source)

From code line 8 we can get:

C1_source = C_source * Alpha_source

Since the vertex UV that encodes the superposition blend mode is greater than 1, it can be derived from code lines 3, 7 and 9 that isOneBlendMode = 1, and therefore:

Alpha1_source = Alpha_source * (1 - isOneBlendMode) = 0

Substituting C1_source and Alpha1_source into the formula above, the output color of the mesh in the modified superposition blend mode is obtained as:

C_output = C_source * Alpha_source + C_target * 1

which is identical to the output of the original superposition blend mode.
The above process can be described by the following procedure:
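The listing in the published document survives only as images, so the following GLSL fragment shader is a reconstructed sketch consistent with the derivation above rather than the patent's verbatim code; the identifier isOneBlendMode, the +1 offset on the U coordinate for superposition-mode vertices, and the fixed blend state (ONE, ONE_MINUS_SRC_ALPHA) are assumptions. The /* line n */ markers correspond to the code-line numbers referenced in the text.

    precision mediump float;
    varying vec2 v_uv;             // u is offset by +1.0 for superposition-mode vertices (assumption)
    varying vec4 v_color;
    uniform sampler2D u_texture;

    void main() {
        /* line 3 */ float isOneBlendMode = (v_uv.x > 1.0) ? 1.0 : 0.0;  // decode the blend mode from the vertex UV
        vec2 uv = vec2(v_uv.x - isOneBlendMode, v_uv.y);                 // remove the encoding offset
        vec4 color = texture2D(u_texture, uv) * v_color;                 // C_source, with Alpha_source in color.a
        /* line 7 */ // isOneBlendMode == 0.0: normal mode; == 1.0: superposition mode
        /* line 8 */ color.rgb *= color.a;                               // C1_source = C_source * Alpha_source
        /* line 9 */ color.a *= 1.0 - isOneBlendMode;                    // Alpha1_source = Alpha_source * (1 - isOneBlendMode)
        gl_FragColor = color;
    }

With the blend state fixed to (ONE, ONE_MINUS_SRC_ALPHA) for every mesh, lines 8 and 9 reproduce both original blend results as derived above, and all meshes share one material, which is what allows automatic batching to merge them.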
It can be seen from the above description that, in order to modify the material of the clipped meshes into the same material, the present application sets the blending factor of meshes whose original material is the normal blend mode to the first blending factor, and the blending factor of meshes whose original material is the superposition blend mode to the second blending factor, where the first blending factor and the second blending factor are computed by the same formula.
That is, in the normal blend mode, F_source is 1 and F_target is 1 - Alpha1_source = 1 - Alpha_source; in the superposition blend mode, F_source is 1 and F_target is 1 - Alpha1_source = 1 - 0 = 1.
After the first blending factor and the second blending factor are set in the above manner, the purpose of modifying the material of the clipped meshes into the same material is achieved.
In the application, besides modifying the material of the clipped meshes into the same material, corresponding identification information can also be set for each mesh in its vertex UV coordinates, where the identification information indicates the original material of the mesh.
For example, if the original material is the normal blend mode, the identification information may be a value smaller than 1; if the original material is the superposition blend mode, it may be a value larger than 1.
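At mesh-build time, this identification information can be written into the vertex UV coordinates with a helper like the following minimal C++ sketch, which uses the same +1 offset on the U coordinate assumed in the shader sketch above (the type and function names are hypothetical, not from the patent):

    enum class BlendMode { Normal, Superposition };

    struct Vertex {
        float u, v;     // texture coordinates; position, color, etc. omitted
    };

    // Encodes the original blend mode of a mesh into its vertex UVs: u stays in
    // [0, 1] for normal-mode meshes, so u < 1 vs. u > 1 identifies the mode.
    void encodeBlendMode(Vertex* vertices, int count, BlendMode mode) {
        if (mode != BlendMode::Superposition)
            return;                      // normal mode: identification value stays below 1
        for (int i = 0; i < count; ++i)
            vertices[i].u += 1.0f;       // identification value becomes larger than 1
    }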
In the application, after a batch rendering request is obtained, the meshes in the optimized animation can be batch rendered; the specific process is described as follows:
firstly, reading identification information in the UV vertex coordinates of each grid in the optimized animation, and determining the original material of each grid in the optimized animation according to the read identification information;
secondly, determining a mixing factor of each grid in the optimized animation according to the read original material;
and finally, determining the target output color of each grid in the optimized animation according to the mixing factor of each grid in the optimized animation, and performing batch rendering on each grid in the optimized animation according to the target output color.
In the application, after obtaining the batch rendering request, the material of each grid in the optimized animation may be identified based on the blending factor, and in the case that grids of the same material are identified, the rendering requests (i.e., the batch rendering requests) are generated in a merged manner.
After the batch rendering request is generated, the identification information in the UV vertex coordinates of each mesh in the optimized animation may be read, and then the original material of the mesh may be obtained by decoding the identification information. After the original material is obtained by decoding, whether the blending factor of each mesh in the optimized animation is the first blending factor or the second blending factor can be determined according to the decoded original material.
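The merging of rendering requests for meshes sharing a material can be sketched as follows (a minimal C++ illustration in which the engine's automatic batching is reduced to counting material changes; the Mesh type and function name are hypothetical):

    #include <vector>

    struct Mesh {
        int materialId;   // after the optimization, all meshes share one material id
    };

    // Consecutive meshes with the same material fall into one batch, i.e. one Draw Call.
    int countDrawCalls(const std::vector<Mesh>& meshes) {
        int drawCalls = 0;
        int lastMaterial = -1;           // sentinel: no material bound yet
        for (const Mesh& m : meshes) {
            if (m.materialId != lastMaterial) {
                ++drawCalls;             // a material change breaks the batch
                lastMaterial = m.materialId;
            }
        }
        return drawCalls;
    }

Because the optimization gives every mesh the same material, the whole list collapses into a single batch, which is why the Draw Call count drops sharply in the experiments below.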
Specifically, if the read original material is the normal blending mode, determining that the blending factor of each grid in the optimized animation is the first blending factor, where a calculation formula of the first blending factor is:
F_source = 1, F_target = 1 - Alpha1_source, with Alpha1_source = Alpha_source

where Alpha_source, the Alpha component of the source color, represents the influence of the Alpha value on the source color.
If the read original material is the superposition blend mode, the blending factor of each mesh in the optimized animation is determined to be the second blending factor, the calculation formula of the second blending factor being:

F_source = 1, F_target = 1 - Alpha1_source, with Alpha1_source = 0
after determining the blending factor of each mesh in the optimized animation, the target output color of each mesh in the optimized animation may be determined according to the blending factor of each mesh in the optimized animation, and the method specifically includes:
according to the formula
C_output = C1_source * 1 + C_target * (1 - Alpha1_source)

the target output color of each mesh in the optimized animation is determined, where C_output represents the target output color, C1_source = C_source * Alpha_source is the modified (premultiplied) source color, C_source is the source color, and Alpha_source is the Alpha component of the source color.
In conclusion, the invention uses the Visual Studio 2015 performance profiling tool and the Nvidia Nsight frame-capture tool to analyze the runtime performance overhead of Spine animation in depth, locates the CPU performance bottleneck, and applies the triangle-clipping algorithm optimization and the improved Draw Call batch optimization. The basic idea of the triangle-clipping optimization is to determine the intersection relationship between a triangle (or mesh) and the clipping region and apply the optimized clipping path to different triangles, so that the clipping result stays consistent with the original clipping result. The basic idea of the improved Draw Call batch optimization is to exploit the main property of existing automatic batching algorithms, namely that objects with the same material can be batched: the two material effects commonly used by designers are implemented with a single material while the visual effect is guaranteed to remain unchanged.
The inventor conducted experimental verification of the game object processing method proposed in the present application. After analyzing the runtime overhead of the Spine animation, the CPU computation overhead at runtime was optimized from the two aspects of the triangle-clipping algorithm and the improved Draw Call batching, which raises the running frame rate and makes Spine animation applicable to high-quality games with varied scenes. The experimental results show that the processing method does not affect the visual appearance of the Spine animation and reduces the overhead by about 72%.
Specifically, the following control experiments were performed to analyze the performance overhead before and after optimization. The experimental environment was as follows:
1. Hardware configuration
(1) CPU: Intel(R) Core(TM) i7-8700 @ 3.20GHz
(2) Memory (RAM): 16.0 GB
(3) Operating system: Windows 10 Enterprise, 64-bit
(4) Graphics card: GeForce GTX 1060 5GB
(5) Graphics driver version: 417.71
2. Software configuration
(1) Spine runtime version: 3.6.53
3. Parameters of a single Spine animation resource
(1) Number of bones: 317
(2) Number of faces: about 3593
(3) Number of vertices: about 9075
Control experiment one: CPU share of the main functions before and after optimization
Game settings:
(1) Number of game cards: 40;
(2) Frame rate setting: low frame rate (30 FPS);
(3) Graphics setting: fine quality.
This experiment compares the change in the CPU share of the main functions during Spine runtime before and after optimization. Comparing the two sets of experimental results below shows that the CPU share drops overall after the optimization techniques of the present invention are applied. As shown in Table 1 and Table 2, after the triangle-clipping optimization is applied, the CPU share of the triangle-clipping function clipTriangles falls from 42.22% to 23.71%, a reduction of about 43%; after the improved Draw Call batching is applied, the share of the batching function drawBatched falls from 7.51% to 4.16%, a reduction of about 45%.
Table 1: CPU share of the main functions before optimization
(table contents available only as an image in the original publication)
Table 2: CPU share of the main functions after optimization
(table contents available only as an image in the original publication)
Control experiment two: CPU performance indices before and after optimization
This set of experiments compares the running frame rate, the per-frame overhead and the Draw Call count of the system before and after optimization, at both the standard 30 FPS target and the high-end 60 FPS target.
Overhead results, set one:
Game settings:
(1) Number of game cards: 40;
(2) Frame rate setting: low frame rate (30 FPS);
(3) Graphics setting: fine quality.
This group of results shows the change in the performance indices after the optimization scheme is enabled in a test environment with 40 game cards and a target frame rate of 30 FPS. As shown in Table 3, all three indices improve markedly after optimization: the frame rate rises from the original 10 FPS to the target 30 FPS, the per-frame overhead falls by about 64%, and the Draw Call count also drops substantially. Note that because the target frame rate is capped at 30 FPS, the per-frame overhead cannot fall below about 33 ms/frame, so the actual performance gain may be higher than 64%; see the 60 FPS test data for that.
Table 3: Performance indices before and after optimization
(table contents available only as an image in the original publication)
Overhead results, set two:
Game settings:
(1) Number of game cards: 40;
(2) Frame rate setting: high frame rate (60 FPS);
(3) Graphics setting: fine quality.
This group of results shows the changes in the performance indices after enabling several optimization combinations in a test environment with 40 game cards and a target frame rate of 60 FPS. For the triangle-clipping optimization, every scheme improves the performance indices. The "clipping-region bounding box" scheme outperforms the "mesh bounding box + clipping-region bounding box" scheme, indicating that when both levels are used together, the overhead introduced by the mesh-level filtering exceeds the benefit it brings. As shown in Table 4, for this test subject the best result is obtained with triangle-level optimization alone: the overhead drops from 94.67 ms/frame to 35.44 ms/frame, a reduction of about 62%. Enabling the improved Draw Call optimization as well further reduces the overhead from 35.44 ms/frame to 25.82 ms/frame, about 27%. Overall, under the conditions of this group, the overhead falls from 94.67 ms/frame before optimization to 25.82 ms/frame, a reduction of about 72%, a marked performance improvement.
Table 4: Performance indices under different optimization schemes
(table contents available only as an image in the original publication)
Embodiment two:
An embodiment of the present invention further provides a game object processing device. The device is mainly used to execute the game object processing method provided in the foregoing content of the embodiments of the present invention; the game object processing device provided by the embodiment of the present invention is described in detail below.
Fig. 4 is a schematic diagram of a game object processing device according to an embodiment of the present invention. As shown in Fig. 4, the device mainly includes an obtaining unit 10, a filtering unit 20 and a clipping unit 30, wherein:
an obtaining unit 10, configured to obtain an animation to be optimized; the animation to be optimized comprises at least one game object, the game object comprises an animation model corresponding to the game object, and the animation model comprises a skeleton and a grid bound with the skeleton;
a filtering unit 20, configured to filter the mesh according to a target bounding box to obtain a first target mesh, where the target bounding box includes: a bounding box of the mesh and/or a bounding box of a clipping region, the clipping region representing the region of the animation to be optimized that is to be displayed on a game display interface;
and the cutting unit 30 is configured to cut the first target grid to obtain a second target grid, where the second target grid is a grid to be rendered.
In the embodiment of the invention, the animation to be optimized is first obtained; the mesh is then filtered by a clipping algorithm based on the target bounding box, and the filtered mesh is clipped to obtain the clipped mesh; finally, the material of the clipped meshes is modified into the same material to obtain the optimized animation, and after a batch rendering request is obtained, the meshes in the optimized animation are batch rendered. As described above, by clipping the mesh with the target bounding box and modifying the clipped meshes to share the same material for batch rendering, the CPU overhead during animation playback can be reduced, performance problems are alleviated, and the smoothness of game display and interaction is maintained while the animation quality is guaranteed.
Optionally, the apparatus is further configured to: and after the first target grid is cut to obtain a second target grid, performing batch rendering on the second target grid.
Optionally, the apparatus is further configured to: modifying the material of the second target grid into the same material to obtain an optimized animation; and performing batch rendering on the grids in the optimized animation.
Optionally, the filter unit is further configured to: and filtering the grids through an algorithm based on the bounding boxes of the grids to obtain a first target grid, wherein the bounding box of the first target grid is positioned in the clipping area or is intersected with the clipping area.
Optionally, the mesh includes a plurality of polygons therein; the filter unit is also used for: and filtering a plurality of polygons in the mesh through an algorithm based on the bounding box of the clipping region to obtain a first target mesh, wherein the first target mesh is intersected with the bounding box of the clipping region or is positioned in the clipping region.
Optionally, the filter unit is further configured to: determining a bounding box of the mesh according to the vertex coordinates of the mesh; determining the position relation between the bounding box of the grid and the cutting area according to the position information of the bounding box of the grid and the position information of the cutting area to obtain a first position relation; and filtering the grid according to the first position relation to obtain the first target grid.
Optionally, the filter unit is further configured to: and filtering a first target grid in the grids according to the first position relation, wherein the first target grid comprises a first type grid and/or a second type grid, a bounding box of the first type grid is positioned in the cutting area, and a bounding box of the second type grid is intersected with the cutting area.
Optionally, the filter unit is further configured to: determining a bounding box of the clipping area according to the vertex coordinates of the clipping area; determining the position relation between the bounding box of the cutting area and the polygon according to the position information of the bounding box of the cutting area and the position information of the polygon in the grid to obtain a second position relation; and filtering the polygons in the mesh according to the second position relation to obtain the first target mesh.
Optionally, the filter unit is further configured to: and filtering out target polygons in the polygons according to the second positional relationship, and determining a mesh formed by the target polygons as the first target mesh, wherein the target polygons comprise first-class polygons and/or second-class polygons, and the first-class polygons are located in the bounding box of the clipping region, or the second-class polygons are intersected with the bounding box of the clipping region.
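A minimal C++ sketch of the bounding-box filtering performed by the filtering unit, where both the mesh (or polygon) and the clipping region are represented by axis-aligned bounding boxes computed from their vertex coordinates (the type and function names are illustrative, not from the patent):

    // Axis-aligned bounding box computed from vertex coordinates.
    struct AABB { float minX, minY, maxX, maxY; };

    enum class Relation { Outside, Inside, Intersect };

    // Classifies box `a` against the clipping box `clip`: fully outside boxes are
    // filtered away, fully inside boxes need no exact clipping, and intersecting
    // boxes are passed on to the exact polygon-clipping step.
    Relation classify(const AABB& a, const AABB& clip) {
        if (a.maxX < clip.minX || a.minX > clip.maxX ||
            a.maxY < clip.minY || a.minY > clip.maxY)
            return Relation::Outside;        // no overlap at all
        if (a.minX >= clip.minX && a.maxX <= clip.maxX &&
            a.minY >= clip.minY && a.maxY <= clip.maxY)
            return Relation::Inside;         // contained in the clipping box
        return Relation::Intersect;          // partial overlap
    }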
Optionally, the clipping unit is configured to: performing intersection calculation on the polygon in the first target mesh and the cutting area to obtain an intersection relation between the polygon in the first target mesh and the cutting area; if the polygon in the first target mesh is determined to be positioned outside the cutting area according to the intersection relation, discarding the polygon in the first target mesh; if the polygon in the first target mesh is determined to be located in the cutting area according to the intersection relation, adding the polygon in the first target mesh into an output mesh queue, wherein the polygon in the output mesh queue is a polygon to be rendered; if it is determined that at least one polygon in the first target mesh intersects with the clipping region according to the intersection relationship, clipping the at least one polygon intersecting with the clipping region into at least one sub-polygon contained in the clipping region, and adding the sub-polygon to the output mesh queue, wherein the polygon in the output mesh queue is a polygon to be rendered.
Optionally, the clipping unit is further configured to: for each polygon intersected with the cutting area, cutting the polygon into a first polygon positioned in the cutting area and a second polygon positioned outside the cutting area; if the first polygon is a triangle, taking the first polygon as the sub-polygon; otherwise, cutting the first polygon into a plurality of sub-triangles, and taking the sub-triangles as the sub-polygons.
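The patent does not name a specific polygon-clipping algorithm; the following minimal C++ sketch implements the step described above with Sutherland-Hodgman clipping against a convex clipping region, followed by fan triangulation of the part kept inside (all names are illustrative, and a convex clipping region is an assumption):

    #include <vector>

    struct Vec2 { float x, y; };

    // > 0 when p lies on the inner (left) side of the directed clip edge a -> b.
    static float side(const Vec2& a, const Vec2& b, const Vec2& p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // Intersection of segment p-q with the line through clip edge a-b.
    static Vec2 intersect(const Vec2& p, const Vec2& q, const Vec2& a, const Vec2& b) {
        float t = side(a, b, p) / (side(a, b, p) - side(a, b, q));
        return { p.x + t * (q.x - p.x), p.y + t * (q.y - p.y) };
    }

    // Sutherland-Hodgman: returns the part of `poly` inside the convex clip polygon,
    // i.e. the "first polygon" located within the clipping region.
    std::vector<Vec2> clipPolygon(std::vector<Vec2> poly, const std::vector<Vec2>& clip) {
        for (size_t i = 0; i < clip.size() && !poly.empty(); ++i) {
            const Vec2& a = clip[i];
            const Vec2& b = clip[(i + 1) % clip.size()];
            std::vector<Vec2> kept;
            for (size_t j = 0; j < poly.size(); ++j) {
                const Vec2& p = poly[j];
                const Vec2& q = poly[(j + 1) % poly.size()];
                bool pIn = side(a, b, p) >= 0.0f;
                bool qIn = side(a, b, q) >= 0.0f;
                if (pIn) kept.push_back(p);
                if (pIn != qIn) kept.push_back(intersect(p, q, a, b));  // crossing edge
            }
            poly.swap(kept);
        }
        return poly;
    }

    // If the clipped polygon is not a triangle, cut it into sub-triangles by fanning;
    // consecutive triples of the result are the sub-triangles added to the output queue.
    std::vector<Vec2> triangulateFan(const std::vector<Vec2>& poly) {
        std::vector<Vec2> tris;
        for (size_t i = 1; i + 1 < poly.size(); ++i) {
            tris.push_back(poly[0]);
            tris.push_back(poly[i]);
            tris.push_back(poly[i + 1]);
        }
        return tris;
    }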
Optionally, the filter unit is further configured to: filtering the grids according to the algorithm of the bounding boxes of the grids to obtain filtered grids; and filtering triangles in the filtered meshes according to the algorithm of the bounding box of the cutting area to obtain the first target mesh.
Optionally, the original material of the map of the mesh includes a corresponding blend mode, the blend mode including: a normal blend mode and a superposition blend mode; the normal blend mode and the superposition blend mode are each determined by their corresponding blending factors, and the blending factors corresponding to the normal blend mode and the superposition blend mode are computed differently.
Optionally, the apparatus is further configured to: setting a mixing factor of a grid with an original material of the normal mixed mode in the second target grid as a first mixing factor, and setting a mixing factor of a grid with an original material of the superimposed mixed mode in the grid as a second mixing factor, so as to obtain the optimized animation, wherein a calculation formula of the first mixing factor is the same as a calculation formula of the second mixing factor, the mixing factor is a parameter for mixing a source color and a target color of the grid, the source color and the target color are used for determining a display color of the grid on a game display interface, and the first mixing factor and the second mixing factor enable an output color of the second target grid before and after modification to be unchanged after the material of the second target grid is modified to be the same material.
Optionally, the apparatus is further configured to: and setting corresponding identification information for each grid in the UV vertex coordinates of each grid, wherein the identification information is used for indicating the original material of each grid.
Optionally, the apparatus is further configured to: reading identification information in the UV vertex coordinates of each grid in the optimized animation, and determining the original material of each grid in the optimized animation according to the read identification information; determining a mixing factor of each grid in the optimized animation according to the read original material; and determining the target output color of each grid in the optimized animation according to the mixing factor of each grid in the optimized animation, and performing batch rendering on each grid in the optimized animation according to the target output color.
Optionally, the apparatus is further configured to: if the read original material is the normal blend mode, determine that the blending factor of each mesh in the optimized animation is the first blending factor, the calculation formula of the first blending factor being:

F_source = 1, F_target = 1 - Alpha1_source, with Alpha1_source = Alpha_source

where Alpha_source, the Alpha component of the source color, represents the influence of the Alpha value on the source color; and if the read original material is the superposition blend mode, determine that the blending factor of each mesh in the optimized animation is the second blending factor, the calculation formula of the second blending factor being:

F_source = 1, F_target = 1 - Alpha1_source, with Alpha1_source = 0
optionally, the apparatus is further configured to: according to the formula
C_output = C1_source * 1 + C_target * (1 - Alpha1_source)

determine the target output color of each mesh in the optimized animation, where C_output represents the target output color, C1_source = C_source * Alpha_source, C_source is the source color, and Alpha_source is the Alpha component of the source color.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (21)

1. A method of processing a game object, comprising:
acquiring animation to be optimized; the animation to be optimized comprises at least one game object, the game object comprises an animation model corresponding to the game object, and the animation model comprises a skeleton and a grid bound with the skeleton;
filtering the mesh according to a target bounding box to obtain a first target mesh, wherein the target bounding box comprises: a bounding box of the mesh and/or a bounding box of a clipping region, the clipping region representing the region of the animation to be optimized that is to be displayed on a game display interface;
and cutting the first target grid to obtain a second target grid, wherein the second target grid is a grid to be rendered.
2. The method of claim 1, wherein after cropping the first target mesh to obtain a second target mesh, the method further comprises:
and performing batch rendering on the second target grid.
3. The method of claim 2, wherein the pooling rendering of the second target mesh comprises:
modifying the material of the second target grid into the same material to obtain an optimized animation;
and performing batch rendering on the grids in the optimized animation.
4. The method of claim 1, wherein filtering the mesh according to the target bounding box to obtain a first target mesh comprises:
and filtering the grids through an algorithm based on the bounding boxes of the grids to obtain a first target grid, wherein the bounding box of the first target grid is positioned in the clipping area or is intersected with the clipping area.
5. The method of claim 1, wherein the mesh comprises a plurality of polygons; filtering the mesh according to the target bounding box to obtain a first target mesh comprises:
and filtering a plurality of polygons in the mesh through an algorithm based on the bounding box of the clipping region to obtain a first target mesh, wherein the first target mesh is intersected with the bounding box of the clipping region or is positioned in the clipping region.
6. The method of claim 4, wherein filtering the mesh through an algorithm based on bounding boxes of the mesh to obtain a first target mesh comprises:
determining a bounding box of the mesh according to the vertex coordinates of the mesh;
determining the position relation between the bounding box of the grid and the cutting area according to the position information of the bounding box of the grid and the position information of the cutting area to obtain a first position relation;
and filtering the grid according to the first position relation to obtain the first target grid.
7. The method of claim 6, wherein filtering the mesh according to the first location relationship to obtain the first target mesh comprises:
and filtering a first target grid in the grids according to the first position relation, wherein the first target grid comprises a first type grid and/or a second type grid, a bounding box of the first type grid is positioned in the cutting area, and a bounding box of the second type grid is intersected with the cutting area.
8. The method of claim 5, wherein filtering the plurality of polygons in the mesh through an algorithm based on a bounding box of the clipping region to obtain a first target mesh comprises:
determining a bounding box of the clipping area according to the vertex coordinates of the clipping area;
determining the position relation between the bounding box of the cutting area and the polygon according to the position information of the bounding box of the cutting area and the position information of the polygon in the grid to obtain a second position relation;
and filtering the polygons in the mesh according to the second position relation to obtain the first target mesh.
9. The method of claim 8, wherein filtering the polygons in the mesh according to the second positional relationship to obtain the first target mesh comprises:
and filtering out target polygons in the polygons according to the second positional relationship, and determining a mesh formed by the target polygons as the first target mesh, wherein the target polygons comprise first-class polygons and/or second-class polygons, and the first-class polygons are located in the bounding box of the clipping region, or the second-class polygons are intersected with the bounding box of the clipping region.
10. The method of any one of claims 1 to 9, wherein clipping the first target mesh to obtain the second target mesh comprises:
performing intersection calculation on the polygon in the first target mesh and the cutting area to obtain an intersection relation between the polygon in the first target mesh and the cutting area;
if the polygon in the first target mesh is determined to be positioned outside the cutting area according to the intersection relation, discarding the polygon in the first target mesh;
if the polygon in the first target mesh is determined to be located in the cutting area according to the intersection relation, adding the polygon in the first target mesh into an output mesh queue, wherein the polygon in the output mesh queue is a polygon to be rendered;
if it is determined that at least one polygon in the first target mesh intersects with the clipping region according to the intersection relationship, clipping the at least one polygon intersecting with the clipping region into at least one sub-polygon contained in the clipping region, and adding the sub-polygon to the output mesh queue, wherein the polygon in the output mesh queue is a polygon to be rendered.
11. The method of claim 10, wherein clipping at least one polygon intersecting a clipping region into at least one sub-polygon contained within the clipping region comprises:
for each polygon intersected with the cutting area, cutting the polygon into a first polygon positioned in the cutting area and a second polygon positioned outside the cutting area;
if the first polygon is a triangle, taking the first polygon as the sub-polygon;
otherwise, cutting the first polygon into a plurality of sub-triangles, and taking the sub-triangles as the sub-polygons.
12. The method of claim 1, wherein filtering the mesh according to target bounding boxes further comprises:
filtering the mesh according to the bounding box of the mesh;
and filtering triangles in the filtered meshes according to the bounding boxes of the cutting areas to obtain the first target mesh.
13. The method of claim 3, wherein the original material of the map of the mesh comprises a corresponding blend mode, the blend mode comprising: a normal blend mode and a superposition blend mode; the normal blend mode and the superposition blend mode are respectively determined by corresponding blending factors, and the blending factors corresponding to the normal blend mode and the superposition blend mode are computed differently.
14. The method of claim 13, wherein modifying the material of the second target mesh to be the same material, thereby obtaining the optimized animation comprises:
setting a mixing factor of a grid with an original material of the normal mixed mode in the second target grid as a first mixing factor, and setting a mixing factor of a grid with an original material of the superimposed mixed mode in the grid as a second mixing factor, so as to obtain the optimized animation, wherein a calculation formula of the first mixing factor is the same as a calculation formula of the second mixing factor, the mixing factor is a parameter for mixing a source color and a target color of the grid, the source color and the target color are used for determining a display color of the grid on a game display interface, and the first mixing factor and the second mixing factor enable an output color of the second target grid before and after modification to be unchanged after the material of the second target grid is modified to be the same material.
15. The method of claim 14, further comprising:
and setting corresponding identification information for each grid in the UV vertex coordinates of each grid, wherein the identification information is used for indicating the original material of each grid.
16. The method of claim 15, wherein the joint rendering of the mesh in the optimized animation comprises:
reading identification information in the UV vertex coordinates of each grid in the optimized animation, and determining the original material of each grid in the optimized animation according to the read identification information;
determining a mixing factor of each grid in the optimized animation according to the read original material;
and determining the target output color of each grid in the optimized animation according to the mixing factor of each grid in the optimized animation, and performing batch rendering on each grid in the optimized animation according to the target output color.
17. The method of claim 16, wherein determining blending factors for each mesh in the optimized animation according to the read raw material comprises:
if the read original material is the normal mixed mode, determining that the mixed factor of each grid in the optimized animation is the first mixed factor, wherein a calculation formula of the first mixed factor is as follows:
F_source = 1, F_target = 1 - Alpha1_source, with Alpha1_source = Alpha_source

wherein Alpha_source, the Alpha component of the source color, represents the influence of the Alpha value on the source color;
if the read original material is the overlay mixed mode, determining that the mixed factor of each grid in the optimized animation is the second mixed factor, wherein a calculation formula of the second mixed factor is as follows:
F_source = 1, F_target = 1 - Alpha1_source, with Alpha1_source = 0
18. the method of claim 16, wherein determining the target output color for each mesh in the animation after optimization based on the blending factor for each mesh in the animation after optimization comprises:
according to the formula
C_output = C1_source * 1 + C_target * (1 - Alpha1_source)

determining the target output color of each mesh in the optimized animation, wherein C_output represents the target output color, C1_source = C_source * Alpha_source, C_source is the source color, and Alpha_source is the Alpha component of the source color.
19. A game object processing apparatus, comprising:
the acquiring unit is used for acquiring the animation to be optimized; the animation to be optimized comprises at least one game object, the game object comprises an animation model corresponding to the game object, and the animation model comprises a skeleton and a grid bound with the skeleton;
a filtering unit, configured to filter the mesh according to a target bounding box to obtain a first target mesh, where the target bounding box includes: a bounding box of the mesh and/or a bounding box of a clipping region, the clipping region representing the region of the animation to be optimized that is to be displayed on a game display interface;
and the cutting unit is used for cutting the first target grid to obtain a second target grid, wherein the second target grid is a grid to be rendered.
20. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any of the preceding claims 1 to 18 are implemented when the computer program is executed by the processor.
21. A computer-readable medium having non-volatile program code executable by a processor, characterized in that the program code causes the processor to perform the steps of the method of any of the preceding claims 1 to 18.
CN202010951335.7A 2020-09-10 2020-09-10 Game object processing method and device, electronic equipment and computer readable medium Pending CN112057854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010951335.7A CN112057854A (en) 2020-09-10 2020-09-10 Game object processing method and device, electronic equipment and computer readable medium

Publications (1)

Publication Number Publication Date
CN112057854A true CN112057854A (en) 2020-12-11

Family

ID=73696039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010951335.7A Pending CN112057854A (en) 2020-09-10 2020-09-10 Game object processing method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN112057854A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002063595A (en) * 2000-08-23 2002-02-28 Nintendo Co Ltd Graphics system with stitching hardware support of skeletal animation
CN104463934A (en) * 2014-11-05 2015-03-25 南京师范大学 Automatic generation method for point set model animation driven by mass point-spring system
CN108744520A (en) * 2018-06-05 2018-11-06 网易(杭州)网络有限公司 Determine the method, apparatus and electronic equipment of game model placement position
CN109887093A (en) * 2019-01-17 2019-06-14 珠海金山网络游戏科技有限公司 A kind of game level of detail processing method and system
CN110874812A (en) * 2019-11-15 2020-03-10 网易(杭州)网络有限公司 Scene image drawing method and device in game and electronic terminal

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837328A (en) * 2021-02-23 2021-05-25 中国石油大学(华东) Rectangular window clipping and drawing method for two-dimensional polygonal graphic primitive
CN112837328B (en) * 2021-02-23 2022-06-03 中国石油大学(华东) Rectangular window clipping and drawing method for two-dimensional polygonal primitive
US11798206B2 (en) 2021-02-23 2023-10-24 China University Of Petroleum (East China) Method for clipping two-dimensional (2D) polygon against rectangular view window

Similar Documents

Publication Publication Date Title
Reshetov Morphological antialiasing
USRE42287E1 (en) Stochastic level of detail in computer animation
US7164420B2 (en) Ray tracing hierarchy
US8725466B2 (en) System and method for hybrid solid and surface modeling for computer-aided design environments
US6424345B1 (en) Binsorter triangle insertion optimization
US20080211810A1 (en) Graphic rendering method and system comprising a graphic module
CN109840931A (en) Conjunction batch render method, apparatus, system and the storage medium of skeleton cartoon
US8547395B1 (en) Writing coverage information to a framebuffer in a computer graphics system
CN116051708A (en) Three-dimensional scene lightweight model rendering method, equipment, device and storage medium
Schneider et al. Real-time rendering of complex vector data on 3d terrain models
US20110187720A1 (en) Interactive Labyrinth Curve Generation and Applications Thereof
CA2357962C (en) System and method for the coordinated simplification of surface and wire-frame descriptions of a geometric model
CN112057854A (en) Game object processing method and device, electronic equipment and computer readable medium
Barringer et al. High-quality curve rendering using line sampled visibility
McReynolds et al. Programming with opengl: Advanced rendering
CN111951369B (en) Detail texture processing method and device
CN113470153A (en) Rendering method and device of virtual scene and electronic equipment
CN111402369A (en) Interactive advertisement processing method and device, terminal equipment and storage medium
US10636210B2 (en) Dynamic contour volume deformation
US9007388B1 (en) Caching attributes of surfaces without global parameterizations
Ramos et al. Continuous level of detail on graphics hardware
JP4009289B2 (en) Method for determining a weighting factor for color-calculating a texel color value associated with a footprint
CN112419137A (en) Method and device for displaying mask picture and method and device for displaying mask picture
CN113398575B (en) Thermodynamic diagram generation method and device, computer readable medium and electronic equipment
KR100818286B1 (en) Method and apparatus for rendering 3 dimensional graphics data considering fog effect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination