CN112057854B - Game object processing method, game object processing device, electronic equipment and computer readable medium - Google Patents
- Publication number
- CN112057854B CN112057854B CN202010951335.7A CN202010951335A CN112057854B CN 112057854 B CN112057854 B CN 112057854B CN 202010951335 A CN202010951335 A CN 202010951335A CN 112057854 B CN112057854 B CN 112057854B
- Authority
- CN
- China
- Prior art keywords
- grid
- target
- polygon
- animation
- clipping region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Abstract
The application provides a game object processing method and device, an electronic device, and a computer-readable medium, relating to the field of computer technology, and comprising the following steps: obtaining an animation to be optimized. The method can reduce CPU overhead and performance problems while the animation runs, and can maintain the fluency of game display and interaction while guaranteeing animation quality.
Description
Technical Field
The present invention relates to the field of computer technology, and in particular to a game object processing method and apparatus, an electronic device, and a computer-readable medium.
Background
In the field of game animation, the content and fluency presented by the animation system are among the important indicators of a game's quality. Spine provides an animation solution based on two-dimensional skeletal animation, in which the animation is produced by binding pictures to a skeleton. However, because Spine drives meshes through two-dimensional skeletal animation, it adds logic computation and increases CPU overhead.
The complete life cycle of a game system comprises three stages: initialization, per-frame logic and rendering updates, and termination. The CPU cost of each frame's logic and rendering updates directly affects the update frame rate of the game system, and hence the fluency of game display and interaction. Spine can efficiently produce exquisite two-dimensional animation effects, but because it drives meshes through two-dimensional skeletal animation, it adds logic computation and increases CPU overhead.
As game production standards keep rising, two-dimensional game animation tends toward higher quality and more diverse application scenes, and the influence of Spine animation on CPU overhead is mainly reflected in these two aspects. On the one hand, from the perspective of animation quality, a finer animation effect directly raises the output precision of the Spine animation resources and increases CPU overhead. On the other hand, from the perspective of application scenes, Spine animations appear in many interfaces in a game; the canvas of each interface differs in size, so the displayed animation regions differ. In addition, there are many mobile devices on the market with uneven CPU performance, and a game needs to be adapted to as many mainstream devices as possible.
Regarding animation quality, in the prior art Spine animation can be optimized through design optimization, output resource optimization, and run-time Draw Call batching, but these optimization schemes have the following disadvantages:
In terms of design and resource optimization, a resource output standard can be established and a resource monitoring tool used, but the right degree of restriction on resource precision is difficult to determine: if the resource precision is too low, animation quality drops directly and the user experience suffers, so CPU overhead cannot be effectively reduced while still taking animation quality into account.
In terms of run-time Draw Call batching, automatic batching algorithms offer limited optimization for common Spine animations. For diverse application scenes, the problem could be solved by outputting separate resources for different scenes, but this requires a longer development cycle and higher development cost, and the cost of design-requirement changes is high; in addition, outputting multiple resources increases the game package size and update patches, affects users' download and update experience, and consumes more traffic. If the Spine run-time algorithms could be optimized to effectively reduce CPU overhead, only one resource would need to be output and applied to different scenes, which would greatly improve production efficiency and reduce production cost.
Disclosure of Invention
Accordingly, the present application is directed to a game object processing method and apparatus, an electronic device, and a computer-readable medium, which can reduce CPU overhead and performance problems while an animation runs, and maintain the fluency of game display and interaction while guaranteeing animation quality.
In a first aspect, an embodiment of the present invention provides a method for processing a game object, including: obtaining an animation to be optimized; the animation to be optimized comprises at least one game object, wherein the game object comprises an animation model corresponding to the game object, and the animation model comprises a skeleton and grids bound with the skeleton; filtering the grids according to a target bounding box to obtain a first target grid, wherein the target bounding box comprises: the bounding box of the grid and/or the bounding box of the clipping region, wherein the clipping region represents a region to be displayed on a game display interface in the animation to be optimized; and clipping the first target grid to obtain a second target grid, wherein the second target grid is the grid to be rendered.
Further, after clipping the first target grid to obtain a second target grid, the method further includes: performing batch rendering on the second target grid.
Further, the performing batch rendering on the second target grid includes: modifying the material of the second target grid to be the same material to obtain an optimized animation; and performing batch rendering on grids in the animation after the optimization.
Further, the grid comprises a plurality of polygons; filtering the grids according to the target bounding box to obtain a first target grid comprises: filtering the grids through an algorithm based on bounding boxes of the grids to obtain a first target grid, wherein the bounding boxes of the first target grid are located in the clipping region or intersect with the clipping region.
Further, filtering the grid according to the target bounding box to obtain a first target grid includes: filtering the polygons in the grids through an algorithm based on the bounding box of the clipping region to obtain a first target grid, wherein the first target grid intersects the bounding box of the clipping region or is located within the clipping region.
Further, filtering the grid through an algorithm based on a bounding box of the grid, and obtaining a first target grid comprises: determining a bounding box of the grid according to the vertex coordinates of the grid; determining the position relationship between the bounding box of the grid and the clipping region according to the position information of the bounding box of the grid and the position information of the clipping region, and obtaining a first position relationship; and filtering the grid according to the first position relation to obtain the first target grid.
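The two steps just described — deriving a bounding box from the vertex coordinates and determining its positional relationship to the clipping region — can be sketched as follows. This is an illustrative C++ sketch, assuming an axis-aligned rectangular clipping region; the type and function names are this sketch's own, not the patent's or Spine's.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative minimal types (not from the patent or the Spine runtime).
struct AABB { float minX, minY, maxX, maxY; };

enum class Relation { Outside, Inside, Intersecting };

// Build the axis-aligned bounding box of a grid from its vertex
// coordinates (x0, y0, x1, y1, ...), as in the filtering step above.
AABB boundingBox(const std::vector<float>& verts) {
    AABB box{verts[0], verts[1], verts[0], verts[1]};
    for (size_t i = 2; i + 1 < verts.size(); i += 2) {
        box.minX = std::min(box.minX, verts[i]);
        box.maxX = std::max(box.maxX, verts[i]);
        box.minY = std::min(box.minY, verts[i + 1]);
        box.maxY = std::max(box.maxY, verts[i + 1]);
    }
    return box;
}

// First positional relationship: grid bounding box vs. clipping region.
Relation classify(const AABB& grid, const AABB& clip) {
    if (grid.maxX < clip.minX || grid.minX > clip.maxX ||
        grid.maxY < clip.minY || grid.minY > clip.maxY)
        return Relation::Outside;        // filtered out entirely
    if (grid.minX >= clip.minX && grid.maxX <= clip.maxX &&
        grid.minY >= clip.minY && grid.maxY <= clip.maxY)
        return Relation::Inside;         // kept; no clipping needed
    return Relation::Intersecting;       // kept; will be clipped
}
```

Grids classified as `Outside` are discarded before the (much more expensive) per-triangle clipping runs, which is where the filtering saves CPU time.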
Further, filtering the grid according to the first position relation to obtain the first target grid includes: and filtering a first target grid from the grids according to the first position relation, wherein the first target grid comprises a first type grid and/or a second type grid, a bounding box of the first type grid is positioned in the clipping region, and a bounding box of the second type grid is intersected with the clipping region.
Further, filtering the plurality of polygons in the grid by an algorithm based on the bounding box of the clipping region to obtain a first target grid includes: determining the bounding box of the clipping region according to the vertex coordinates of the clipping region; determining the positional relationship between the bounding box of the clipping region and the polygons according to the position information of the bounding box of the clipping region and the position information of the polygons in the grid, to obtain a second positional relationship; and filtering the polygons in the grids according to the second positional relationship to obtain the first target grid.
Further, filtering the polygons in the grid according to the second positional relationship to obtain the first target grid includes: filtering out target polygons from the polygons according to the second positional relationship, and determining the grid formed by the target polygons as the first target grid, wherein the target polygons comprise a first type of polygon and/or a second type of polygon, the first type of polygon being located within the bounding box of the clipping region and the second type of polygon intersecting the bounding box of the clipping region.
Further, clipping the first target grid to obtain a second target grid includes: performing intersection calculation between the polygons in the first target grid and the clipping region to obtain the intersection relation between the polygons in the first target grid and the clipping region; discarding a polygon in the first target grid if it is determined from the intersection relation to be located outside the clipping region; if a polygon in the first target grid is determined from the intersection relation to be located within the clipping region, adding the polygon to an output grid queue, wherein the polygons in the output grid queue are the polygons to be rendered; and if at least one polygon in the first target grid is determined from the intersection relation to intersect the clipping region, clipping the at least one intersecting polygon into at least one sub-polygon contained within the clipping region, and adding the sub-polygons to the output grid queue.
Further, clipping the at least one polygon intersecting the clipping region into at least one sub-polygon contained within the clipping region includes: for each polygon intersecting the clipping region, clipping it into a first polygon located inside the clipping region and a second polygon located outside the clipping region; if the first polygon is a triangle, using it as the sub-polygon; otherwise, clipping the first polygon into a plurality of sub-triangles and using the sub-triangles as the sub-polygons.
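A minimal sketch of this clip-then-triangulate step, assuming an axis-aligned rectangular clipping region (the patent's region may be a general polygon) and using the classic Sutherland–Hodgman method; all names are illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Pt { float x, y; };
using Poly = std::vector<Pt>;

// Sutherland-Hodgman: clip `poly` against one half-plane where keep(p) is
// true; `cross` returns the intersection of edge (a, b) with the boundary.
template <class Keep, class Cross>
static Poly clipHalfPlane(const Poly& poly, Keep keep, Cross cross) {
    Poly out;
    for (size_t i = 0; i < poly.size(); ++i) {
        Pt a = poly[i], b = poly[(i + 1) % poly.size()];
        if (keep(a)) {
            out.push_back(a);
            if (!keep(b)) out.push_back(cross(a, b));
        } else if (keep(b)) {
            out.push_back(cross(a, b));
        }
    }
    return out;
}

// Clip a polygon to the rectangle [minX, maxX] x [minY, maxY].
Poly clipToRect(Poly p, float minX, float minY, float maxX, float maxY) {
    auto atX = [](Pt a, Pt b, float x) {
        float t = (x - a.x) / (b.x - a.x); return Pt{x, a.y + t * (b.y - a.y)}; };
    auto atY = [](Pt a, Pt b, float y) {
        float t = (y - a.y) / (b.y - a.y); return Pt{a.x + t * (b.x - a.x), y}; };
    p = clipHalfPlane(p, [&](Pt q){ return q.x >= minX; }, [&](Pt a, Pt b){ return atX(a, b, minX); });
    p = clipHalfPlane(p, [&](Pt q){ return q.x <= maxX; }, [&](Pt a, Pt b){ return atX(a, b, maxX); });
    p = clipHalfPlane(p, [&](Pt q){ return q.y >= minY; }, [&](Pt a, Pt b){ return atY(a, b, minY); });
    p = clipHalfPlane(p, [&](Pt q){ return q.y <= maxY; }, [&](Pt a, Pt b){ return atY(a, b, maxY); });
    return p;
}

// If the clipped polygon is not a triangle, split it into sub-triangles by
// fanning from the first vertex (valid because the result is convex).
std::vector<Poly> triangulate(const Poly& p) {
    std::vector<Poly> tris;
    for (size_t i = 1; i + 1 < p.size(); ++i)
        tris.push_back({p[0], p[i], p[i + 1]});
    return tris;
}
```

Because clipping a convex polygon against a convex region yields a convex polygon, the fan triangulation is sufficient for the "otherwise, clip into sub-triangles" branch above.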
Further, filtering the grid according to the target bounding box further comprises: filtering the grid according to the bounding box of the grid; and filtering triangles in the filtered grids according to the bounding box of the clipping region to obtain the first target grid.
Further, the original material of the grid's map has a corresponding blending mode, the blending mode including a normal blending mode and a superimposed blending mode; the normal blending mode and the superimposed blending mode are each determined by corresponding blending factors, and the blending factors of the two modes are calculated differently.
Further, modifying the material of the second target grid to be the same material to obtain the optimized animation includes: setting the blending factor of grids whose original material uses the normal blending mode to a first blending factor, and setting the blending factor of grids whose original material uses the superimposed blending mode to a second blending factor, thereby obtaining the optimized animation, wherein the calculation formula of the first blending factor is the same as that of the second blending factor; a blending factor is a parameter for blending the source color and the target color of a grid, the source color and the target color determine the display color of the grid on the game display interface, and the first and second blending factors keep the output color of the second target grid unchanged before and after its material is modified to be the same material.
Further, the method further comprises: and setting corresponding identification information for each grid in the UV vertex coordinates of each grid, wherein the identification information is used for indicating the original material of each grid.
Further, performing batch rendering on the grids in the animation after the optimization comprises: reading identification information in UV vertex coordinates of each grid in the animation after optimization, and determining original materials of each grid in the animation after optimization according to the read identification information; determining the mixing factor of each grid in the animation after optimization according to the read original materials; and determining a target output color of each grid in the optimized animation according to the mixing factor of each grid in the optimized animation, and performing batch rendering on each grid in the optimized animation according to the target output color.
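The patent states that the identification information lives in the UV vertex coordinates but does not fix a layout in this text, so the following encoding is purely hypothetical: since valid UVs lie in [0, 1], offsetting U by a constant ≥ 1 can mark a grid whose original material used the superimposed (additive) blending mode, and the renderer can recover both the flag and the plain UV.

```cpp
#include <cassert>
#include <cmath>

// HYPOTHETICAL encoding, for illustration only (not the patent's scheme):
// add +2.0 to U for grids whose original material was additive. Valid UVs
// lie in [0, 1], so the offset is unambiguous and reversible.
const float kAdditiveFlag = 2.0f;

float encodeU(float u, bool additive) {
    return additive ? u + kAdditiveFlag : u;
}

// Read the identification information back at render time.
bool decodeIsAdditive(float encodedU) { return encodedU >= kAdditiveFlag; }

float decodeU(float encodedU) {
    return decodeIsAdditive(encodedU) ? encodedU - kAdditiveFlag : encodedU;
}
```

In practice the decode step would run in the fragment shader, which then picks the appropriate blending factor per pixel; the C++ form here is only to make the round trip testable.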
Further, determining the blending factor of each grid in the optimized animation according to the read original material includes: if the read original material uses the normal blending mode, determining the blending factor of the grid to be the first blending factor, where Alpha_source is the source blending factor and represents the effect of the alpha value on the source color; and if the read original material uses the superimposed blending mode, determining the blending factor of the grid to be the second blending factor.
Further, determining the target output color of each grid in the optimized animation according to its blending factor includes: determining the target output color of each grid according to the formula C_output = C_source × Alpha_source + C_target × (1 − Alpha_source), where C_output represents the target output color, C_source is the source color, C_target is the target color, and Alpha_source is the source blending factor.
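As a concrete check, here is a sketch of the standard source-over blend, C_output = C_source × Alpha_source + C_target × (1 − Alpha_source) — the usual form of a per-grid output-color computation with a source blending factor. The patent's exact formulas are not reproduced in this text, so this is the textbook equation, not necessarily the patent's:

```cpp
#include <cassert>
#include <cmath>

struct Color { float r, g, b; };

// Standard source-over blending with source blending factor alphaSrc:
// C_output = C_source * alphaSrc + C_target * (1 - alphaSrc).
Color blend(Color src, Color dst, float alphaSrc) {
    auto mix = [&](float s, float d) { return s * alphaSrc + d * (1.0f - alphaSrc); };
    return {mix(src.r, dst.r), mix(src.g, dst.g), mix(src.b, dst.b)};
}
```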
In a second aspect, an embodiment of the present invention provides a processing apparatus for a game object, including: the acquisition unit is used for acquiring the animation to be optimized; the animation to be optimized comprises at least one game object, wherein the game object comprises an animation model corresponding to the game object, and the animation model comprises a skeleton and grids bound with the skeleton; the filtering unit is configured to filter the grid according to a target bounding box, so as to obtain a first target grid, where the target bounding box includes: the bounding box of the grid and/or the bounding box of the clipping region, wherein the clipping region represents a region to be displayed on a game display interface in the animation to be optimized; the clipping unit is used for clipping the first target grid to obtain a second target grid, wherein the second target grid is a grid to be rendered.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of any one of the first aspects when the computer program is executed.
In a fourth aspect, embodiments of the present invention provide a computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the steps of the method of any of the first aspects above.
In the embodiments of the application, an animation to be optimized is first obtained; the grids are then filtered according to the target bounding box to obtain a first target grid, and the first target grid is clipped to obtain a second target grid. Because the grids are filtered and clipped using the target bounding box, the method can reduce CPU overhead and performance problems, guarantee animation quality, and maintain the fluency of game display and interaction.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method of processing a game object according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method of processing a game object according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method of processing a further game object according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a processing apparatus for game objects according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Embodiment one:
according to an embodiment of the present invention, there is provided an embodiment of a method of processing a game object, it being noted that the steps shown in the flowcharts of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that herein.
Fig. 1 is a flowchart of a processing method of a game object according to an embodiment of the present invention, as shown in fig. 1, the method including the steps of:
Step S102, obtaining an animation to be optimized; the animation to be optimized comprises at least one game object, the game object comprises an animation model corresponding to the game object, and the animation model comprises a skeleton and grids bound with the skeleton.
In the present application, the mesh is a representation of an object in a game or computer graphics, and generally a triangular mesh is used.
Specifically, for the animation model of a game object, the animation can be created as follows: first construct the skeleton of the game object in the Spine software, bind pictures to the skeleton to complete the skeletal-animation production, and export the skeletal animation as a binary format file. In the present application, the animation model of a game object comprises a skeleton and grids bound to the skeleton, and each grid can comprise a plurality of polygons, for example a plurality of triangles. As a bone moves, the grid bound to it moves with it.
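To illustrate "as a bone moves, the grid bound to it moves with it", the following hypothetical sketch applies a bone's world transform (translation plus rotation) to a vertex bound to it. This is a deliberate simplification of a real skeletal-animation runtime, which also handles bone hierarchies, scale, and weighted multi-bone bindings:

```cpp
#include <cassert>
#include <cmath>

// Illustrative minimal model (not Spine's actual data layout).
struct Bone { float x, y, rotation; };   // world position, rotation in radians

struct Vec2 { float x, y; };

// Transform a vertex given in the bone's local space into world space;
// when the bone's state changes, re-running this moves the bound vertex.
Vec2 applyBone(const Bone& b, Vec2 local) {
    float c = std::cos(b.rotation), s = std::sin(b.rotation);
    return {b.x + c * local.x - s * local.y,
            b.y + s * local.x + c * local.y};
}
```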
Step S104, filtering the grids according to a target bounding box to obtain a first target grid, wherein the target bounding box comprises: the bounding box of the grid and/or the bounding box of a clipping region, wherein the clipping region represents a region to be displayed to a game display interface in the animation to be optimized.
Step S106, clipping the first target grid to obtain a second target grid, wherein the second target grid is the grid to be rendered.
In the application, the purpose of filtering the grids in the animation to be optimized is to keep the grids that are located within the clipping region or intersect the clipping region. The filtered first target grid is then clipped to obtain the clipped grid.
In the embodiments of the application, an animation to be optimized is first obtained; the grids are then filtered according to the target bounding box to obtain a first target grid, and the first target grid is clipped to obtain a second target grid. Because the grids are clipped using the target bounding box, the application can reduce CPU overhead and performance problems, guarantee animation quality, and maintain the fluency of game display and interaction.
In an optional embodiment of the present application, after clipping the first target mesh to obtain a second target mesh, the method further includes: and performing batch rendering on the second target grids.
Specifically, the material of the second target grid can be modified to be the same material to obtain an optimized animation, and the grids in the optimized animation are then batch-rendered. For example, in the present application, after a batch rendering request is received, the grids in the optimized animation may be batch-rendered.
A Draw Call is a rendering request sent to the graphics engine; an excessive number of Draw Calls creates CPU computation pressure, and the Draw Call count is one of the important performance indicators that batch rendering aims to reduce.
For Spine animations with high production precision and large numbers of faces and bones, the CPU overhead of each Spine frame update is large, which directly affects the update frame rate of the game system and the fluency of display and interaction. The present application uses the Visual Studio 2015 Performance Explorer to profile the per-frame performance overhead of Spine animation at run time. Specifically, Visual Studio 2015 is used to open the game source project, select the Performance Explorer, enable the "CPU usage" option, and start profiling. The game is started and the Spine animation performance test case is run while the tool collects running data in real time. After collection is complete, the game is exited, the tool processes the data, and a profiling report is generated. The report shows, for the test case run, the CPU time consumption, percentage, and other data of each function call.
The profiling results show the functions with a relatively high CPU share before optimization, mainly comprising: triangle clipping (clipTriangles), Draw Call batched rendering (drawBatched), and skeletal animation update (update). The CPU shares of the main functions before optimization are shown in Table 1 below.
TABLE 1

| Function name | CPU share (%) |
| --- | --- |
| spSkeletonClipping_clipTriangles | 42.22 |
| drawBatchedQuadsAndTriangles | 7.51 |
| spine::SkeletonAnimation::update | 26.65 |
The factors influencing these three parts of the overhead, and their optimization space, are analyzed in turn below.
1. Triangle clipping: clipTriangles
Triangle clipping removes the regions of the animation that do not need to be displayed. As described above, to control animation production cost, one animation is used for display interfaces of different sizes, and a partial region of the animation is selected for display by clipping. The cost of the clipping algorithm is positively correlated with the face count of the grids, with algorithm complexity O(n), where the face count refers to the number of polygon faces contained in the grids. Clipping can use either a CPU clipping algorithm or a GPU clipping algorithm. The Spine runtime uses a CPU clipping algorithm, and the profiling results show that the implementation provided by Spine accounts for the highest share of the overhead, so optimizing the clipping algorithm is the key to reducing overhead and improving the frame rate. Code analysis shows that the original triangle-clipping implementation in the Spine runtime has considerable room for optimization.
2. Draw Call batched rendering: drawBatched
Reducing Draw Calls reduces the CPU overhead of preparing data, switching rendering states, transferring data, and so on for each Draw Call. Without affecting the display result, Draw Call batching adjusts the rendering order of the displayed objects so that objects with the same material are rendered consecutively as much as possible, reducing the number of Draw Calls and thus the overhead. Analysis shows that this algorithm itself is difficult to optimize further. On the other hand, the automatic batching algorithm can batch displayed objects that share the same material, and this property can be exploited to reduce overhead indirectly by modifying materials.
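The effect of batching by material can be illustrated with a toy model: if adjacent draw items sharing a material collapse into one Draw Call, then sorting items by material (or modifying materials so they match, as the application proposes) reduces the Draw Call count. This is an illustrative sketch, not the engine's actual batcher; note that reordering is only safe when it does not change the visible result:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

struct Item { std::string material; };

// Count Draw Calls under the rule "adjacent items with the same material
// form one batch", after grouping items by material.
int countDrawCalls(std::vector<Item> items) {
    std::stable_sort(items.begin(), items.end(),
                     [](const Item& a, const Item& b) { return a.material < b.material; });
    int calls = 0;
    for (size_t i = 0; i < items.size(); ++i)
        if (i == 0 || items[i].material != items[i - 1].material)
            ++calls;   // a material change starts a new batch
    return calls;
}
```

Without the sort, four items alternating between two materials cost four Draw Calls; grouped by material they cost two — which is why unifying Spine's two materials into one improves batching.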
3. Skeletal animation update: update
The skeletal animation process interpolates the bone states between key frames over time and updates the vertices bound to the bones, where the interpolated bone state is the transition state between two key frames. Skeletal animation generally uses key-frame techniques, with each key frame containing the bone coordinates, rotation, and other information. The key frames are discrete, for example 25 frames per second; when the game plays the skeletal animation, the bone states between key frames need to be calculated. The update overhead is positively correlated with the number of bones and vertices. The profiling results show that the bone update overhead is high, but code analysis shows that the optimization space of this algorithm's implementation in the Spine runtime is small.
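The per-frame bone-state computation between two key frames can be sketched as a simple linear interpolation; real runtimes also support non-linear easing curves, and the names here are illustrative:

```cpp
#include <cassert>
#include <cmath>

// One bone's state at a key-frame time (illustrative layout).
struct BoneKey { float time, x, y, rotation; };

// Compute the transition state at playback time t, with
// a.time <= t <= b.time, by linear interpolation between the two keys.
BoneKey interpolate(const BoneKey& a, const BoneKey& b, float t) {
    float u = (t - a.time) / (b.time - a.time);   // normalized 0..1
    auto lerp = [u](float p, float q) { return p + (q - p) * u; };
    return {t, lerp(a.x, b.x), lerp(a.y, b.y), lerp(a.rotation, b.rotation)};
}
```

Running this for every bone of every animated object each frame is why the update cost grows with the bone and vertex counts.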
In summary, the inventor of the present invention used a profiling tool to find the parts with higher cost in the Spine runtime and analyzed the factors influencing the cost and the room for optimization. The analysis shows that triangle clipping is the main reason for high CPU overhead in the Spine runtime, and the algorithm has large room for optimization, so optimizing the clipping algorithm yields high benefits; as for Draw Call batching, two different materials are commonly used in Spine, which hinders batching, so Draw Call batching can be further improved by modifying the materials. To this end, the invention optimizes the Spine runtime from both the triangle clipping algorithm and the improved Draw Call batching, and the processing method of the game object is specifically described below.
From the above description, in the present application the existing triangle clipping algorithm is optimized: the bounding box of the mesh or the bounding box of the clipping region is used to filter the mesh, where the mesh obtained by filtering is a mesh intersecting the clipping region or located within it. After the filtered first target grid is obtained, it can be clipped to obtain a second target grid, which is the grid rendered on the game display interface. In the present application, obtaining the clipped mesh by filtering based on the target bounding box and then clipping the filtered mesh can be carried out in the following ways.
Mode one:
(11) Filtering the grids through an algorithm based on bounding boxes of the grids to obtain a first target grid, wherein the bounding boxes of the first target grid are located in the clipping region or intersect with the clipping region.
In the application, the grids can be filtered by utilizing an algorithm of a bounding box based on the grids to obtain a first target grid intersected with or positioned in the clipping region. After the first target grid is obtained, the first target grid is cut, and the main cutting object is a grid intersecting with the cutting area, so that a second target grid is obtained, and a specific cutting process will be described in the following process.
Mode two:
(21) And filtering the polygons in the grids through an algorithm based on the bounding box of the clipping region to obtain a first target grid, wherein the first target grid is intersected with the bounding box of the clipping region or is located in the clipping region.
In the present application, the polygons (e.g., triangles) in each mesh may be filtered using a bounding box based on the clipping region to obtain polygons that intersect with or lie within the clipping region, thereby obtaining the first target mesh from the determined polygons. After the first target grid is obtained, the first target grid is cut, the main cutting object is a polygon intersected with the cutting area, so that a cut polygon is obtained, wherein the grid corresponding to the cut polygon is the second target grid, and the specific cutting process is described in the following process.
Mode three:
(31) Filtering the grids according to an algorithm of a bounding box of the grids to obtain the filtered grids;
(32) And filtering the triangles in the grid after filtering according to the algorithm of the bounding box of the clipping region to obtain the first target grid.
In the present application, the mesh may be filtered using a mesh-based bounding box algorithm to obtain a mesh that intersects the clipping region or is located within the clipping region as a post-filter mesh.
After the filtered meshes are obtained, a polygon (e.g., triangle) in each of the filtered meshes may be filtered using a bounding box based on the clipping region to obtain a polygon that intersects with or is located within the clipping region, thereby obtaining the first target mesh according to the determined polygon. After the first target grid is obtained, the first target grid is cut, the main cutting object is a polygon intersected with the cutting area, so that a cut polygon is obtained, wherein the grid corresponding to the cut polygon is the second target grid, and the specific cutting process is described in the following process.
As is clear from the above description, in the existing triangle clipping algorithm, it is necessary to perform intersection calculation on each triangle in the mesh and the clipping region, so that clipping of the triangle is performed according to the calculated intersection relationship. If the number of triangles is large, then the intersection computation can incur significant overhead for the CPU. Based on the method, the grid is filtered by adopting an algorithm based on the target bounding box, the intersection relation between each triangle and the clipping region does not need to be calculated in the filtering process, and the grid is clipped in a mode of calculating the intersection relation after the grid is filtered, so that the CPU cost is reduced.
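A back-of-the-envelope sketch (hypothetical numbers and names) of why the bounding-box filter reduces CPU work:

```python
# Hypothetical numbers illustrating why filtering helps: without the filter,
# every triangle needs an exact triangle-vs-region intersection test; with
# AABB filtering, only triangles of meshes whose bounding box straddles the
# clipping-region border still need the expensive exact test.

def exact_tests_needed(meshes, skipped):
    """meshes: {name: triangle_count}; skipped: mesh names whose AABB is
    wholly inside or wholly outside the clipping region."""
    return sum(count for name, count in meshes.items() if name not in skipped)

meshes = {"body": 400, "hair": 200, "effect": 50}
print(sum(meshes.values()))                            # 650 exact tests unfiltered
print(exact_tests_needed(meshes, {"body", "effect"}))  # 200 remain after filtering
```

The AABB comparisons that replace the skipped exact tests are a handful of coordinate comparisons each, which is where the CPU saving comes from.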
In an alternative embodiment, as shown in fig. 2, the steps are as follows: filtering the grids by an algorithm based on bounding boxes of the grids to obtain a first target grid comprises the following steps:
Step S11, determining a bounding box of the grid according to vertex coordinates of the grid;
Step S12, determining the position relationship between the bounding box of the grid and the clipping region according to the position information of the bounding box of the grid and the position information of the clipping region, so as to obtain a first position relationship;
And step S13, filtering the grid according to the first position relation to obtain the first target grid. The step S13 specifically includes: and filtering a first target grid from the grids according to the first position relation, wherein the first target grid comprises a first type grid and/or a second type grid, a bounding box of the first type grid is positioned in the clipping region, and a bounding box of the second type grid is intersected with the clipping region.
Specifically, in the present embodiment, the mesh is filtered using the bounding box AABB of the mesh, the purpose of the filtering being to discard the mesh outside the clipping region, leaving the mesh inside the clipping region.
It should be noted that, in the present application, a corresponding accessory is set in advance for the animation to be optimized, and the accessory includes: cutting accessories and grid accessories. Wherein, the grid attachment stores triangle grids of the animation model; and the clipping accessory stores the size and position information of the clipping region, and represents rendering only by reserving the triangle in the clipping region when the animation to be optimized is displayed on the game display interface. Corresponding accessory types are set for the clipping accessory and the grid accessory.
Specifically, when the game is running, for each frame of the animation to be optimized, each accessory is read, and whether it is a grid accessory or a clipping accessory is determined according to the accessory type of the read accessory.
If a clipping accessory is traversed, its polygonal area is used and the clipping operation on the skeleton to be optimized is started.
If traversing to a grid attachment, the grid world coordinates may be calculated, where the calculation process is as follows: since the grid is bound to bones, the key frames are first used to calculate the current bone positions and orientations when the animation is played. The world coordinates of the grid vertices are then calculated using the binding weights and the positions of the grid vertices relative to the bones, thereby obtaining the world coordinates of the grid. The weight refers to the degree of binding: a weight of 1 indicates that a grid vertex keeps its position relative to the bone, and a weight of 0 indicates that the vertex coordinates are not controlled by the bone. Next, the bounding box AABB of the grid is determined from the vertex coordinates of the grid, where the AABB may be represented by a lower-left corner and an upper-right corner. While the world coordinates of the grid are calculated, the triangle vertices are traversed; the minimum abscissa and ordinate over all vertices are taken as the lower-left corner, and the maximum abscissa and ordinate over all vertices are taken as the upper-right corner, yielding the bounding box AABB of the grid.
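The AABB construction just described can be sketched as follows (a minimal example with hypothetical names):

```python
# Sketch of the AABB construction described above: the bounding box is given
# by the (min x, min y) lower-left corner and (max x, max y) upper-right
# corner over all grid vertices in world coordinates.

def mesh_aabb(vertices):
    """vertices: list of (x, y) world coordinates -> (lower_left, upper_right)."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return (min(xs), min(ys)), (max(xs), max(ys))

verts = [(2.0, 3.0), (5.0, 1.0), (4.0, 6.0)]
print(mesh_aabb(verts))  # ((2.0, 1.0), (5.0, 6.0))
```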
After the bounding box of the grid is calculated, the position relationship between the bounding box of the grid and the clipping region can be determined according to the position information of the bounding box of the grid and the position information of the clipping region, so that the first position relationship is obtained. After the first position relation is obtained, the grid can be filtered according to the first position relation to obtain a grid located in the clipping region or intersected with the clipping region, namely the first target grid described above, and the specific filtering principle is described as follows:
discarding the whole grid if the bounding box AABB of the grid is outside the clipping region;
If the bounding box AABB of the grid is in the clipping region, traversing each polygon in the grid, and adding all the polygons into an output grid queue to perform batch rendering on the polygons in the output grid queue;
If the bounding box AABB of the grid is intersected with the clipping region, a Spine native clipping algorithm is used for clipping the triangle in the grid, and the specific clipping process is described as follows:
S21, performing intersection calculation on the polygon in the first target grid and the clipping region to obtain an intersection relation between the polygon in the first target grid and the clipping region;
Step S22, if the polygon in the first target grid is determined to be located in the clipping region according to the intersection relation, adding the polygon in the first target grid into an output grid queue, wherein the polygon in the output grid queue is the polygon to be rendered;
Step S23, if it is determined that at least one polygon in the first target mesh intersects the clipping region according to the intersection relationship, clipping the at least one polygon intersecting the clipping region into at least one sub-polygon contained in the clipping region, and adding the sub-polygon into the output mesh queue, where the polygon in the output mesh queue is a polygon to be rendered.
In the present application, in step S23, for each polygon intersecting the clipping region, it is clipped into a first polygon located inside the clipping region and a second polygon located outside the clipping region. If the first polygon is a triangle, the first polygon is used as the sub-polygon; otherwise, the first polygon is cut into a plurality of sub-triangles, and the sub-triangles are taken as the sub-polygons, for example: if the first polygon is a quadrangle, the first polygon is cut into two sub triangles.
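The sub-triangle split described above can be sketched as a fan triangulation (an assumed scheme, consistent with the quadrangle-into-two-triangles example; the patent does not fix the exact triangulation):

```python
# Sketch of the sub-triangle split in step S23 (fan triangulation assumed):
# a convex polygon produced by clipping is fanned from its first vertex, so
# a quadrilateral yields two sub-triangles and a triangle passes through.

def fan_triangulate(polygon):
    """polygon: vertex list of a convex polygon (len >= 3)."""
    return [[polygon[0], polygon[i], polygon[i + 1]]
            for i in range(1, len(polygon) - 1)]

quad = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(len(fan_triangulate(quad)))  # 2
```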
In this embodiment, a polygon is exemplified as a triangle. Specifically, for each frame of the animation to be optimized, each attachment is read, and whether it is a grid attachment or a clipping attachment is determined according to the attachment type. The grid attachment stores the triangle mesh of the animation model; the clipping attachment stores the size and position information of the clipping region, indicating that only triangles within the clipping region are retained for rendering when the animation to be optimized is displayed on the game display interface. Corresponding attachment types are set for the clipping attachment and the grid attachment.
If the clipping accessory is traversed, the clipping accessory polygonal area is used, and clipping is started.
If traversing to the grid attachment, the grid world coordinates may be calculated and then each triangle traversed to clip according to this grid world coordinate. The clipping process is that each triangle in the filtered grid is traversed and intersected with the clipping area, and the intersection relation is calculated, and the intersection relation comprises: the triangle is outside the clipping region, the triangle is inside the clipping region, and the triangle and the clipping region intersect.
If the intersection relationship is that the triangle is outside the clipping region, the triangle is discarded.
If the intersection relation is that the triangle is in the clipping region, adding the triangle into an output grid queue to perform batch rendering on the polygons in the output grid queue;
If the intersection relation is that the triangle and the clipping region intersect, the triangle is split into at least one sub-triangle contained in the clipping region, and the split sub-triangles are added to the output grid queue for batch rendering of the polygons in the output grid queue. Specifically, when the intersection relation is calculated, the intersection points of the clipping region and the triangle may be calculated, and the triangle is then reorganized into a plurality of triangles using these intersection points and the triangle vertices located in the clipping region, thereby splitting the triangle into the sub-triangles contained in the clipping region.
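A hedged sketch of this clipping step using the standard Sutherland-Hodgman algorithm against an axis-aligned rectangular region (the Spine native clipper supports general polygonal regions and may differ in detail):

```python
# Hedged sketch of the clipping step using the standard Sutherland-Hodgman
# algorithm against an axis-aligned rectangular clipping region; the clipped
# convex polygon is then fan-triangulated into sub-triangles.

def clip_polygon_to_rect(polygon, xmin, ymin, xmax, ymax):
    def clip_edge(points, inside, cross):
        out = []
        for i, p in enumerate(points):
            q = points[(i + 1) % len(points)]
            if inside(p):
                out.append(p)
                if not inside(q):
                    out.append(cross(p, q))
            elif inside(q):
                out.append(cross(p, q))
        return out

    def x_cross(p, q, x):  # intersection with a vertical boundary
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def y_cross(p, q, y):  # intersection with a horizontal boundary
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    poly = polygon
    poly = clip_edge(poly, lambda p: p[0] >= xmin, lambda p, q: x_cross(p, q, xmin))
    poly = clip_edge(poly, lambda p: p[0] <= xmax, lambda p, q: x_cross(p, q, xmax))
    poly = clip_edge(poly, lambda p: p[1] >= ymin, lambda p, q: y_cross(p, q, ymin))
    poly = clip_edge(poly, lambda p: p[1] <= ymax, lambda p, q: y_cross(p, q, ymax))
    return poly

def fan_triangulate(polygon):
    return [[polygon[0], polygon[i], polygon[i + 1]]
            for i in range(1, len(polygon) - 1)]

tri = [(0, 0), (4, 0), (0, 4)]
clipped = clip_polygon_to_rect(tri, 0, 0, 2, 3)
print(len(fan_triangulate(clipped)))  # the clipped pentagon yields 3 sub-triangles
```

Each clip stage keeps vertices on the inner side of one region boundary and inserts the boundary intersection points, which matches the reorganize-from-intersection-points description above.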
In an alternative embodiment, as shown in fig. 3, the steps are as follows: filtering the polygons in the mesh by an algorithm based on the bounding box of the clipping region to obtain a first target mesh, wherein the method comprises the following steps of:
step S31, determining a bounding box of the clipping region according to the vertex coordinates of the clipping region;
Step S32, determining the position relation between the bounding box of the clipping region and the polygon according to the position information of the bounding box of the clipping region and the position information of the polygon in the grid to obtain a second position relation;
Step S33, filtering polygons in the grids according to the second position relation to obtain the first target grid. The step S33 specifically includes: filtering out target polygons from the polygons according to the second position relation, and determining a grid formed by the target polygons as the first target grid, wherein the target polygons comprise a first type polygon and/or a second type polygon, the first type polygon being located within the bounding box of the clipping region and the second type polygon intersecting the bounding box of the clipping region.
Specifically, in the present embodiment, the bounding box AABB of the clipping region is used to filter the triangles in the mesh: triangles outside the clipping bounding box AABB are discarded, and triangles inside the clipping bounding box AABB are accepted.
It should be noted that, in the present application, a corresponding accessory is set in advance for the animation to be optimized, and the accessory includes: cutting accessories and grid accessories. Wherein, the grid attachment stores triangle grids of the animation model; and the clipping accessory stores the size and position information of the clipping region, and represents rendering only by reserving the triangle in the clipping region when the animation to be optimized is displayed on the game display interface. Corresponding accessory types are set for the clipping accessory and the grid accessory.
Specifically, at game run time, for each frame of the animation to be optimized, each attachment is read, and whether it is a mesh attachment or a clipping attachment is determined according to the attachment type.
If the clipping accessory is traversed, starting clipping operation on bones to be optimized by using a polygonal area of the clipping accessory, and calculating a bounding box AABB of the clipping area.
If traversing to a grid attachment, the grid world coordinates may be calculated, where the calculation process is as follows: since the grid is bound to bones, the key frames are first used to calculate the current bone positions and orientations when the animation is played. The world coordinates of the grid vertices are then calculated using the binding weights and the positions of the grid vertices relative to the bones, thereby obtaining the world coordinates of the grid. The weight refers to the degree of binding: a weight of 1 indicates that a grid vertex keeps its position relative to the bone, and a weight of 0 indicates that the vertex coordinates are not controlled by the bone. Next, each triangle is traversed according to the grid world coordinates for clipping. The clipping process is as follows: for each triangle, intersection calculation is performed between the triangle and the bounding box AABB of the clipping region to judge their intersection relation, which comprises: the triangle is outside the clipping bounding box AABB, the triangle is inside the clipping bounding box AABB, and the triangle intersects the bounding box of the clipping region.
If the triangle is outside the bounding box AABB of the clipping region, the triangle is discarded.
If the triangle is within the bounding box AABB of the clipping region, the triangle is added to the output mesh queue to render the triangle in the output mesh queue in batch.
If the triangle intersects the bounding box AABB of the clipping region, the triangle is clipped by using a Spine native clipping algorithm, and the specific clipping process is described as follows:
s41, performing intersection calculation on the polygon in the first target grid and the clipping region to obtain an intersection relation between the polygon in the first target grid and the clipping region;
step S42, discarding the polygon in the first target grid if the polygon in the first target grid is determined to be outside the clipping region according to the intersection relation;
step S43, if the polygon in the first target grid is determined to be located in the clipping region according to the intersection relationship, adding the polygon in the first target grid into an output grid queue, wherein the polygon in the output grid queue is the polygon to be rendered;
and step S44, if it is determined that at least one polygon in the first target grid intersects the clipping region according to the intersection relation, clipping the at least one polygon intersected with the clipping region into at least one sub-polygon contained in the clipping region, and adding the sub-polygon into the output grid queue, wherein the polygon in the output grid queue is a polygon to be rendered.
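The per-triangle classification that drives these steps can be sketched as follows; the test below uses the triangle's own AABB against the clipping-region AABB, a conservative assumption (hypothetical names):

```python
# Sketch of the per-triangle filter in mode two: each triangle is classified
# against the bounding box AABB of the clipping region. Disjoint AABBs mean
# "outside" (discard), containment means "inside" (accept whole), everything
# else is "intersect" and is handed to the exact clipper.

def classify_triangle(triangle, clip_min, clip_max):
    xs = [x for x, _ in triangle]
    ys = [y for _, y in triangle]
    if max(xs) < clip_min[0] or min(xs) > clip_max[0] \
            or max(ys) < clip_min[1] or min(ys) > clip_max[1]:
        return "outside"
    if clip_min[0] <= min(xs) and max(xs) <= clip_max[0] \
            and clip_min[1] <= min(ys) and max(ys) <= clip_max[1]:
        return "inside"
    return "intersect"

lo, hi = (0, 0), (10, 10)
print(classify_triangle([(1, 1), (2, 1), (1, 2)], lo, hi))        # inside
print(classify_triangle([(20, 20), (21, 20), (20, 21)], lo, hi))  # outside
print(classify_triangle([(8, 8), (12, 8), (8, 12)], lo, hi))      # intersect
```

Only the "intersect" case falls through to the more expensive Spine native clipping algorithm, which is where the saving over the original per-triangle intersection calculation comes from.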
In this embodiment, a polygon is exemplified as a triangle. Specifically, for each frame of the animation to be optimized, each attachment is read, and whether it is a grid attachment or a clipping attachment is determined according to the attachment type. The grid attachment stores the triangle mesh of the animation model; the clipping attachment stores the size and position information of the clipping region, indicating that only triangles within the clipping region are retained for rendering when the animation to be optimized is displayed on the game display interface. Corresponding attachment types are set for the clipping attachment and the grid attachment.
If the clipping accessory is traversed, the clipping accessory polygonal area is used, and clipping is started.
If traversing to the grid attachment, the grid world coordinates may be calculated and then each triangle traversed to clip according to this grid world coordinate. The clipping process is that each triangle in the filtered grid is traversed and intersected with the clipping area, and the intersection relation is calculated, and the intersection relation comprises: the triangle is outside the clipping region, the triangle is inside the clipping region, and the triangle and the clipping region intersect.
If the intersection relationship is that the triangle is outside the clipping region, the triangle is discarded.
If the intersection relation is that the triangle is in the clipping region, adding the triangle into an output grid queue to perform batch rendering on the polygons in the output grid queue;
If the intersection relation is that the triangle and the clipping region intersect, the triangle is split into a plurality of sub-triangles contained in the clipping region, and the split sub-triangles are added to the output grid queue for batch rendering of the polygons in the output grid queue. Specifically, when the intersection relation is calculated, the intersection points of the clipping region and the triangle may be calculated, and the triangle is then reorganized into a plurality of triangles using these intersection points and the triangle vertices located in the clipping region, thereby splitting the triangle into the sub-triangles contained in the clipping region.
As is clear from the above description, in the existing triangle clipping algorithm, it is necessary to perform intersection calculation on each triangle in the mesh and the clipping region, so that clipping of the triangle is performed according to the calculated intersection relationship. If the number of triangles is large, then the intersection computation can incur significant overhead for the CPU. Based on the method, the grid is filtered by adopting a clipping algorithm based on the target bounding box, the intersection relation between each triangle and the clipping area does not need to be calculated in the filtering process, and the grid is clipped in a mode of calculating the intersection relation after the grid is filtered, so that the CPU cost is reduced.
In the application, after the grid after cutting is obtained, the material of the grid after cutting can be modified to the same material, so that the animation after optimization is obtained.
In an alternative embodiment, if the original material of the grid's texture map has a corresponding mixing mode, the mixing mode includes: the normal mixing mode and the superimposed mixing mode, which are respectively determined by corresponding mixing factors, and the mixing factors corresponding to the normal mixing mode and the superimposed mixing mode are calculated differently.
Based on the above, the step of modifying the material of the mesh after clipping to the same material, thereby obtaining the animation after optimization includes the following procedures:
Step S1061, setting a mixing factor of a grid with the original material being the normal mixing mode in the second target grid as a first mixing factor, and setting a mixing factor of a grid with the original material being the superimposed mixing mode in the grid as a second mixing factor, so as to obtain the animation after optimization, where a calculation formula of the first mixing factor is the same as a calculation formula of the second mixing factor, the mixing factor is a parameter for mixing a source color and a target color of the grid, the source color and the target color are used for determining a display color of the grid on a game display interface, and the first mixing factor and the second mixing factor are such that an output color of the second target grid before and after modification is unchanged after modifying the material of the second target grid to the same material.
By inspecting the Draw Call batching process with the NVIDIA NSIGHT frame capture tool, the inventors found that the number of Draw Calls for the Spine animation is large. The analysis shows that the Spine animation uses both the normal mixing mode and the superimposed mixing mode on the grids it is attached to, so the grids cannot be automatically batched. In the present application, grids whose original materials use the normal mixing mode and the superimposed mixing mode are implemented with one and the same material, the two mixing modes being realized in the shader. After changing to one material, grids with the same material can be merged by automatic batching, effectively reducing the number of Draw Calls.
The blend mode is one way to achieve a translucent effect, and the output color is calculated using the following formula: C_output = C_source * F_source + C_target * F_target.
Wherein C_source represents the source color, the color value of the triangle currently being rendered; C_target is the target color, the color already present at the output location of the triangle mesh currently being rendered; F_source is the source mixing factor value, specifying the influence of the Alpha value on the source color; F_target is the target mixing factor value, specifying the influence of the Alpha value on the target color.
In the normal mixing mode used in Spine, F_source is SRC_ALPHA, i.e. the Alpha component of the C_source color, and F_target is ONE_MINUS_SRC_ALPHA, i.e. 1 minus the Alpha component of C_source, so the triangle output color in the normal mixing mode is: C_output = C_source * Alpha_source + C_target * (1 - Alpha_source).
In the superimposed mixing mode, F_source is SRC_ALPHA and F_target is ONE, i.e. the constant 1, so the output color in the superimposed mixing mode is: C_output = C_source * Alpha_source + C_target * 1.
As is apparent from the above description, the source mixing factor value and the target mixing factor value are different for the normal mixing mode and the superimposed mixing mode, that is, the normal mixing mode and the superimposed mixing mode are respectively determined by the corresponding mixing factors, and the mixing factors corresponding to the normal mixing mode and the superimposed mixing mode are calculated differently.
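The two output-color formulas above can be checked numerically (a per-channel sketch with hypothetical names):

```python
# Numerical check of the two output-color formulas, per color channel:
#   normal:       C_output = C_source * Alpha_source + C_target * (1 - Alpha_source)
#   superimposed: C_output = C_source * Alpha_source + C_target * 1

def blend(c_source, alpha_source, c_target, mode):
    if mode == "normal":        # F_source = SRC_ALPHA, F_target = ONE_MINUS_SRC_ALPHA
        return c_source * alpha_source + c_target * (1.0 - alpha_source)
    if mode == "superimposed":  # F_source = SRC_ALPHA, F_target = ONE
        return c_source * alpha_source + c_target * 1.0
    raise ValueError(mode)

print(blend(1.0, 0.5, 0.25, "normal"))        # 0.625
print(blend(1.0, 0.5, 0.25, "superimposed"))  # 0.75
```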
After the optimization is enabled, the mixing factors of the normal mixing mode and the superimposed mixing mode are modified, the mode information is encoded in the vertex UV coordinates, and the shader decodes the mixing mode from the vertex UV coordinates. In the following formulas a superscript prime denotes modified data; for example, C′_source is the modified source color value. After modifying the material, the mixing factor F_source of a grid whose original material uses the normal mixing mode is changed to ONE, i.e. the constant 1, while F_target remains ONE_MINUS_SRC_ALPHA, i.e. 1 minus the Alpha component Alpha′_source. The shader program is shown in code lines 1 through 9, with isOneOneBlendMode in code line 7 equal to 0.
The output color of a grid in the modified normal mixing mode is: C_output = C′_source * 1 + C_target * (1 - Alpha′_source). Code line 8 gives: C′_source = C_source * Alpha_source and Alpha′_source = Alpha_source * (1 - isOneOneBlendMode). Since the vertex UV encoding the normal mixing mode is less than 1, code lines 3, 7 and 9 give: isOneOneBlendMode = 0, hence Alpha′_source = Alpha_source.
After substituting C′_source = C_source * Alpha_source and Alpha′_source = Alpha_source into the formula C_output = C′_source * 1 + C_target * (1 - Alpha′_source), the output color of a grid in the modified normal mixing mode is obtained as: C_output = C_source * Alpha_source + C_target * (1 - Alpha_source).
For the modified superimposed mixing mode, the same mixing factors as for the modified normal mixing mode are used, i.e. F_source is changed to ONE, the constant 1, and F_target is changed to ONE_MINUS_SRC_ALPHA, i.e. 1 minus Alpha′_source, where isOneOneBlendMode in code line 7 is equal to 1. The output color in the modified superimposed mode is: C_output = C′_source * 1 + C_target * (1 - Alpha′_source). Code line 8 gives: C′_source = C_source * Alpha_source and Alpha′_source = Alpha_source * (1 - isOneOneBlendMode). Since the vertex UV encoding the superimposed mixing mode is greater than 1, code lines 3, 7 and 9 give: isOneOneBlendMode = 1, hence Alpha′_source = 0.
After substituting C′_source = C_source * Alpha_source and Alpha′_source = 0 into the formula C_output = C′_source * 1 + C_target * (1 - Alpha′_source), the output color of a grid in the modified superimposed mixing mode is obtained as: C_output = C_source * Alpha_source + C_target * 1.
The above procedure is implemented by the shader program of code lines 1 through 9.
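Since the shader listing itself is not reproduced in this text, the following is a hedged simulation of the described technique (assumed names): the mode flag decoded from the vertex UV (UV greater than 1 taken to encode the superimposed mode, as in the description), the source color premultiplied by its Alpha, and a single ONE / ONE_MINUS_SRC_ALPHA blend state reproducing both original mixing modes:

```python
# Hedged simulation of the single-material shader described above (the actual
# code lines 1-9 are not reproduced here, so names are assumptions): decode
# the mode from UV, premultiply the source color, zero the alpha in the
# superimposed mode, and apply one fixed blend state for both modes.

def unified_material_output(c_source, alpha_source, c_target, uv):
    is_one_one_blend_mode = 1.0 if uv > 1.0 else 0.0   # decode mode from UV
    c_source_mod = c_source * alpha_source              # premultiplied source color
    alpha_source_mod = alpha_source * (1.0 - is_one_one_blend_mode)
    # fixed blend state shared by both modes: F_source = ONE,
    # F_target = ONE_MINUS_SRC_ALPHA (applied to the modified alpha)
    return c_source_mod * 1.0 + c_target * (1.0 - alpha_source_mod)

print(unified_material_output(1.0, 0.5, 0.25, uv=0.5))  # 0.625, as in normal mode
print(unified_material_output(1.0, 0.5, 0.25, uv=1.5))  # 0.75, as in superimposed mode
```

Because the blend state is now identical for every grid, the automatic batching algorithm sees one material and can merge the Draw Calls.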
As can be seen from the above description, in order to modify the material of the mesh after cutting into the same material, in the present application, the mixing factor of the mesh whose original material is the normal mixing mode is set as the first mixing factor, and the mixing factor of the mesh whose original material is the superimposed mixing mode is set as the second mixing factor. Wherein the calculation formulas of the first mixing factor and the second mixing factor are the same.
That is, in the normal mixing mode F_source is 1 and F_target is 1 - Alpha′_source; in the superimposed mixing mode F_source is 1 and F_target is likewise 1 - Alpha′_source, with Alpha′_source = 0 in that mode. After the first mixing factor and the second mixing factor are set in the above manner, the material of the grids after clipping can be modified to the same material.
In the application, besides modifying the material of the cut grids into the same material, corresponding identification information can be set for each grid in the UV vertex coordinates of each grid, wherein the identification information is used for indicating the original material of each grid.
For example, if the original material is in the normal mixed mode, the identification information may be a value less than 1, and if the original material is in the superimposed mixed mode, the identification information may be a value greater than 1.
In the application, after the batch rendering request is acquired, the batch rendering can be carried out on the grids in the animation after optimization, and the specific process is as follows:
firstly, reading identification information in UV vertex coordinates of each grid in the animation after optimization, and determining original materials of each grid in the animation after optimization according to the read identification information;
Secondly, determining a mixing factor of each grid in the animation after optimization according to the read original materials;
And finally, determining the target output color of each grid in the optimized animation according to the mixing factor of each grid in the optimized animation, and performing batch rendering on each grid in the optimized animation according to the target output color.
In the present application, after the batch rendering request is acquired, the material of each grid in the optimized animation may be identified, and in the case that grids of the same material are identified, the rendering requests (i.e. batch rendering requests) are generated in a merged manner.
After generating the batch rendering request, the identification information in the UV vertex coordinates of each grid in the animation after optimization can be read, and then the original material of the grid is obtained by decoding the identification information. After the original material is decoded, whether the mixing factor of each grid in the animation after optimization is the first mixing factor or the second mixing factor can be determined according to the decoded original material.
Specifically, if the read original material uses the normal mixing mode, the mixing factor of the grid in the optimized animation is determined to be the first mixing factor, whose calculation formula is: F_source = 1, F_target = 1 - Alpha′_source, where Alpha′_source = Alpha_source, and Alpha_source is the source Alpha, representing the effect of the Alpha value on the source color.
If the read original material uses the superimposed mixing mode, the mixing factor of the grid in the optimized animation is determined to be the second mixing factor, whose calculation formula is: F_source = 1, F_target = 1 - Alpha′_source, where Alpha′_source = 0.
After determining the mixing factor of each grid in the optimized animation, determining the target output color of each grid in the optimized animation according to the mixing factor of each grid in the optimized animation, wherein the method specifically comprises the following steps:
According to the formula C_Output = C_Source + (1 - Alpha_Source) × C_Target, the target output color of each grid in the optimized animation is determined, where C_Output represents the target output color, C_Source is the source color, C_Target is the target color, and Alpha_Source is the mixing factor of the source color.
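The equivalence can be checked numerically. The sketch below assumes a premultiplied-alpha unification, with the normal mode using Alpha_Source = Alpha and the superposition mode using Alpha_Source = 0; all function names are illustrative.

```python
def blend_unified(c_tex, alpha, c_dst, additive):
    """One output formula for both modes: C_out = C_src + (1 - A_src) * C_dst.

    The source color is premultiplied by its alpha; the additive mode is
    obtained by forcing the source mixing factor to zero (an assumption
    about the patent's second mixing factor).
    """
    c_src = alpha * c_tex                  # premultiply by alpha
    alpha_src = 0.0 if additive else alpha
    return c_src + (1.0 - alpha_src) * c_dst

def blend_normal(c_tex, alpha, c_dst):
    # classic normal blending: interpolate between destination and source
    return alpha * c_tex + (1.0 - alpha) * c_dst

def blend_additive(c_tex, alpha, c_dst):
    # classic superposition (additive) blending: add scaled source on top
    return alpha * c_tex + c_dst
```

Both classic modes are reproduced exactly by the single formula, which is why one material can replace two without changing the output color.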
In summary, the invention uses the Visual Studio 2015 performance profiling tool and the NVIDIA Nsight frame capture tool to analyze the runtime performance cost of Spine animation in depth, locates the CPU performance bottleneck, and applies a triangle clipping algorithm optimization together with an improved Draw Call batching optimization. The basic idea of the triangle clipping optimization is to first judge the intersection relation between each triangle (or grid) and the clipping region, so that the optimized clipping algorithm produces a result consistent with the original clipping result for every triangle. The basic idea of the improved Draw Call batching optimization is to exploit the main property of the existing automatic batching algorithm, namely that objects with the same material can be batched, and to realize with a single material the effects of the two materials commonly used by designers while guaranteeing that the rendered effect is unchanged.
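The coarse rejection that the triangle clipping optimization relies on can be realized with axis-aligned bounding boxes; the sketch below uses assumed names and 2D points.

```python
def aabb(points):
    """Axis-aligned bounding box of a 2D point list: (min_x, min_y, max_x, max_y)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def aabb_overlaps(a, b):
    """True when two boxes touch or intersect; a cheap pre-test before exact clipping."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1
```

A triangle whose box misses the clipping region's box can be discarded outright, so the expensive exact clipping runs only on candidates that might actually intersect.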
The inventor performed experimental verification of the game object processing method. After analyzing the runtime cost of Spine animation, the CPU computation cost of running Spine animation was optimized from two aspects, the triangle clipping algorithm and the improved Draw Call batching, which raised the running frame rate so that Spine animation can be applied to high-quality games with diverse scenes. Experimental results show that the game object processing method provided by the application does not affect the visual effect of the Spine animation and reduces the overhead by about 72%.
Specifically, the following control experiments were performed to analyze the performance overhead before and after optimization. The experimental environment is as follows:
1. Hardware configuration
(1) Processor: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
(2) Memory (RAM): 16.0GB
(3) System: Win10 Enterprise edition, 64-bit operating system
(4) Graphics card: GeForce GTX 1060 5GB
(5) Graphics card driver version: 417.71
2. Software configuration
(1) Spine runtime version: 3.6.53
3. Single Spine animation resource parameters
(1) Number of bones: 317
(2) Number of faces: about 3593
(3) Number of vertices: about 9075
Control experiment one: CPU usage share of the main functions before and after optimization
Game settings:
(1) Number of card faces: 40;
(2) Frame rate setting: low frame rate (30 FPS);
(3) Picture setting: fine picture quality.
This set of experiments compares the CPU usage share of the main functions during Spine runtime before and after optimization. Comparing the two sets of results below shows that CPU usage decreases overall after the optimization techniques of the invention are applied. As shown in Tables 1 and 2, after the triangle clipping optimization is applied, the CPU share of triangle clipping (clipTriangles) drops from 42.22% to 23.71%, a reduction of about 43%; after the improved Draw Call batching, the cost of the batching algorithm (drawBatched) drops from 7.51% to 4.16%, a reduction of about 45%.
Table 1 CPU usage share of the main functions before optimization
Table 2 CPU usage share of the main functions after optimization
Control experiment two: CPU performance index before and after optimization
This set of experiments compares the system's running frame rate, per-frame overhead, and Draw Call count before and after optimization, under both the standard 30 FPS target and the high-end 60 FPS target.
Overhead result one:
Game settings:
(1) Number of card faces: 40;
(2) Frame rate setting: low frame rate (30 FPS);
(3) Picture setting: fine picture quality.
This set of results shows the performance index changes with 40 card faces after the best optimization scheme is enabled, in a test environment with a target frame rate of 30 FPS. As shown in Table 3, all three indexes improve significantly after optimization. The frame rate rises from the original 10 FPS to the target frame rate of 30 FPS, the overhead drops by about 64%, and the Draw Call count also falls sharply. Note that because the target frame rate is set to 30 FPS, the per-frame overhead cannot drop much below about 33 ms/frame, so the actual performance improvement may be higher than 64%; the test data at 60 FPS below serves as a reference.
Table 3 Performance indexes before and after optimization
Overhead result two:
Game settings:
(1) Number of card faces: 40;
(2) Frame rate setting: high frame rate (60 FPS);
(3) Picture setting: fine picture quality.
This set of results shows, with 40 card faces and a target frame rate of 60 FPS, the performance index changes after several combinations of optimizations are enabled. For the triangle clipping optimization, every scheme improves the performance indexes. The "clipping region bounding box" scheme alone outperforms the combined "grid bounding box plus clipping region bounding box" scheme, because when both levels are enabled the extra cost introduced by the grid-level test exceeds the benefit it brings. Accordingly, as shown in Table 4, optimizing only at the triangle level works best, reducing the overhead from 94.67 ms/frame to 35.44 ms/frame, about 62%. Further enabling the improved Draw Call optimization reduces the overhead from 35.44 ms/frame to 25.82 ms/frame, about a further 27%. Overall, under this group's experimental conditions, the overhead falls from 94.67 ms/frame before optimization to 25.82 ms/frame, a reduction of about 72%, which is a clear performance improvement.
Table 4 Performance indexes under different optimization schemes
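The quoted percentages follow directly from the per-frame overheads reported above; a quick arithmetic check:

```python
def reduction(before_ms, after_ms):
    """Relative per-frame overhead reduction, in percent."""
    return 100.0 * (before_ms - after_ms) / before_ms

# Triangle-level clipping only: 94.67 -> 35.44 ms/frame (about 62%)
# Adding improved Draw Call batching: 35.44 -> 25.82 ms/frame (about 27%)
# Combined: 94.67 -> 25.82 ms/frame (about 72%)
```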
Embodiment two:
The embodiment of the invention further provides a game object processing device, which is mainly used to execute the game object processing method provided by the foregoing embodiment. The game object processing device provided by the embodiment of the invention is described in detail below.
Fig. 4 is a schematic diagram of a processing device for a game object according to an embodiment of the present invention, and as shown in fig. 4, the processing device for a game object mainly includes an acquisition unit 10, a filtering unit 20, and a clipping unit 30, wherein:
An acquisition unit 10 for acquiring an animation to be optimized; the animation to be optimized comprises at least one game object, wherein the game object comprises an animation model corresponding to the game object, and the animation model comprises a skeleton and grids bound with the skeleton;
A filtering unit 20, configured to filter the grids according to a target bounding box, to obtain a first target grid, where the target bounding box includes: the bounding box of the grid and/or the bounding box of the clipping region, wherein the clipping region represents a region to be displayed on a game display interface in the animation to be optimized;
And the clipping unit 30 is configured to clip the first target mesh to obtain a second target mesh, where the second target mesh is a mesh to be rendered.
In the embodiment of the application, the animation to be optimized is first acquired; then the grids are filtered through a clipping algorithm based on the target bounding box, and the filtered grids are clipped to obtain the clipped grids; finally, the material of the clipped grids is modified to the same material to obtain the optimized animation, and after a batch rendering request is obtained, the grids in the optimized animation are batch-rendered. Because the target bounding box is used to filter and clip the grids, and the clipped grids are modified to the same material for batch rendering, the CPU overhead during animation runtime is reduced, performance problems are alleviated, and the quality of the animation is guaranteed while the smoothness of game display and interaction is maintained.
Optionally, the device is further configured to: and after the first target grid is cut to obtain a second target grid, performing batch rendering on the second target grid.
Optionally, the device is further configured to: modifying the material of the second target grid to be the same material to obtain an optimized animation; and performing batch rendering on grids in the animation after the optimization.
Optionally, the filtering unit is further configured to: filtering the grids through an algorithm based on bounding boxes of the grids to obtain a first target grid, wherein the bounding boxes of the first target grid are located in the clipping region or intersect with the clipping region.
Optionally, the mesh comprises a plurality of polygons; the filter unit is also for: and filtering the polygons in the grids through an algorithm based on the bounding box of the clipping region to obtain a first target grid, wherein the first target grid is intersected with the bounding box of the clipping region or is located in the clipping region.
Optionally, the filtering unit is further configured to: determining a bounding box of the grid according to the vertex coordinates of the grid; determining the position relationship between the bounding box of the grid and the clipping region according to the position information of the bounding box of the grid and the position information of the clipping region, and obtaining a first position relationship; and filtering the grid according to the first position relation to obtain the first target grid.
Optionally, the filtering unit is further configured to: and filtering a first target grid from the grids according to the first position relation, wherein the first target grid comprises a first type grid and/or a second type grid, a bounding box of the first type grid is positioned in the clipping region, and a bounding box of the second type grid is intersected with the clipping region.
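The first positional relationship described above reduces to a three-way classification between two axis-aligned boxes; a minimal sketch with assumed names:

```python
INSIDE, INTERSECTS, OUTSIDE = "inside", "intersects", "outside"

def classify_box(box, clip):
    """Relation of a grid's bounding box to the clipping region (both AABBs)."""
    x0, y0, x1, y1 = box
    cx0, cy0, cx1, cy1 = clip
    if x1 < cx0 or cx1 < x0 or y1 < cy0 or cy1 < y0:
        return OUTSIDE            # filtered out, never clipped or rendered
    if cx0 <= x0 and x1 <= cx1 and cy0 <= y0 and y1 <= cy1:
        return INSIDE             # first-type grid: kept whole
    return INTERSECTS             # second-type grid: exact clipping still needed

def filter_grids(boxes, clip):
    """First target grids: those whose boxes lie inside or intersect the region."""
    return [b for b in boxes if classify_box(b, clip) != OUTSIDE]
```

Only the boxes classified as intersecting require exact clipping, which is where the CPU saving comes from.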
Optionally, the filtering unit is further configured to: determining a bounding box of the clipping region according to the vertex coordinates of the clipping region; determining the position relation between the bounding box of the clipping region and the polygon according to the position information of the bounding box of the clipping region and the position information of the polygon in the grid to obtain a second position relation; and filtering polygons in the grids according to the second position relation to obtain the first target grid.
Optionally, the filtering unit is further configured to: filtering out a target polygon from the polygons according to the second position relation, and determining a grid formed by the target polygon as the first target grid, wherein the target polygon comprises a first type polygon and/or a second type polygon, and the first type polygon is positioned in a bounding box of the clipping region or the second type polygon is intersected with the bounding box of the clipping region.
Optionally, the clipping unit is configured to: performing intersection calculation on the polygons in the first target grid and the clipping region to obtain the intersection relation between the polygons in the first target grid and the clipping region; discarding a polygon in the first target grid if the polygon is determined, according to the intersection relation, to be located outside the clipping region; adding a polygon in the first target grid to an output grid queue if the polygon is determined, according to the intersection relation, to be located inside the clipping region, wherein the polygons in the output grid queue are the polygons to be rendered; and if it is determined according to the intersection relation that at least one polygon in the first target grid intersects the clipping region, clipping the at least one polygon intersecting the clipping region into at least one sub-polygon contained in the clipping region, and adding the sub-polygon to the output grid queue.
Optionally, the clipping unit is further configured to: for each polygon intersecting the clipping region, clipping it into a first polygon located inside the clipping region and a second polygon located outside the clipping region; if the first polygon is a triangle, using the first polygon as the sub-polygon; otherwise, clipping the first polygon into a plurality of sub-triangles and using the sub-triangles as the sub-polygons.
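The patent does not name the clipping algorithm. One standard realization of this step, sketched under the assumption of an axis-aligned rectangular clipping region, is Sutherland-Hodgman clipping followed by fan triangulation of the convex result:

```python
def clip_polygon(poly, clip):
    """Sutherland-Hodgman clip of a convex polygon against an AABB region."""
    cx0, cy0, cx1, cy1 = clip
    # one half-plane per rectangle edge: (inside-test, axis, boundary value)
    edges = [
        (lambda p: p[0] >= cx0, 0, cx0),
        (lambda p: p[0] <= cx1, 0, cx1),
        (lambda p: p[1] >= cy0, 1, cy0),
        (lambda p: p[1] <= cy1, 1, cy1),
    ]
    out = list(poly)
    for inside, axis, bound in edges:
        poly, out = out, []
        for i, cur in enumerate(poly):
            prev = poly[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(_cross(prev, cur, axis, bound))
                out.append(cur)
            elif inside(prev):
                out.append(_cross(prev, cur, axis, bound))
        if not out:           # polygon entirely outside: discard
            break
    return out

def _cross(p, q, axis, bound):
    """Point where segment p-q crosses the given axis-aligned boundary."""
    t = (bound - p[axis]) / (q[axis] - p[axis])
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def fan_triangulate(poly):
    """Split a convex polygon into sub-triangles sharing its first vertex."""
    return [(poly[0], poly[i], poly[i + 1]) for i in range(1, len(poly) - 1)]
```

A clipped triangle may come back as a quad or pentagon, which is exactly why the triangulation step is needed before the result can rejoin the triangle mesh.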
Optionally, the filtering unit is further configured to: filtering the grids according to an algorithm of a bounding box of the grids to obtain the filtered grids; and filtering the triangles in the filtered grids according to the algorithm of the bounding box of the clipping region to obtain the first target grid.
Optionally, the original material of the grid's texture map has a corresponding blending mode, the mode including: a normal mixing mode and a superposition mixing mode; the normal mixing mode and the superposition mixing mode are each determined by a corresponding mixing factor, and the mixing factors corresponding to the two modes are calculated differently.
Optionally, the device is further configured to: setting a mixing factor of a grid with an original material being in the normal mixing mode in the second target grid as a first mixing factor, and setting a mixing factor of a grid with an original material being in the superimposed mixing mode in the grid as a second mixing factor, so as to obtain the animation after optimization, wherein a calculation formula of the first mixing factor is the same as a calculation formula of the second mixing factor, the mixing factor is a parameter for mixing a source color and a target color of the grid, the source color and the target color are used for determining a display color of the grid on a game display interface, and the first mixing factor and the second mixing factor enable an output color of the second target grid before and after modification to be unchanged after modifying the material of the second target grid to be the same material.
Optionally, the device is further configured to: and setting corresponding identification information for each grid in the UV vertex coordinates of each grid, wherein the identification information is used for indicating the original material of each grid.
Optionally, the device is further configured to: reading identification information in UV vertex coordinates of each grid in the animation after optimization, and determining original materials of each grid in the animation after optimization according to the read identification information; determining the mixing factor of each grid in the animation after optimization according to the read original materials; and determining a target output color of each grid in the optimized animation according to the mixing factor of each grid in the optimized animation, and performing batch rendering on each grid in the optimized animation according to the target output color.
Optionally, the device is further configured to: if the read original material is in the normal mixing mode, determining that the mixing factor of each grid in the optimized animation is the first mixing factor, wherein the calculation formula of the first mixing factor is Alpha_Source = Alpha, where Alpha_Source is the source mixing factor, representing the effect of the Alpha value on the source color; if the read original material is in the superposition mixing mode, determining that the mixing factor of each grid in the optimized animation is the second mixing factor, wherein the calculation formula of the second mixing factor is Alpha_Source = 0.
Optionally, the device is further configured to: determining a target output color for each grid in the optimized animation according to the formula C_Output = C_Source + (1 - Alpha_Source) × C_Target, wherein C_Output represents the target output color, C_Source is the source color, C_Target is the target color, and Alpha_Source is the mixing factor of the source color.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Finally, it should be noted that: the above examples are only specific embodiments of the present invention, and are not intended to limit the scope of the present invention, but it should be understood by those skilled in the art that the present invention is not limited thereto, and that the present invention is described in detail with reference to the foregoing examples: any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or perform equivalent substitution of some of the technical features, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (20)
1. A method of processing a game object, comprising:
Obtaining an animation to be optimized; the animation to be optimized comprises at least one game object, wherein the game object comprises an animation model corresponding to the game object, and the animation model comprises a skeleton and grids bound with the skeleton;
Filtering the grids according to a target bounding box to obtain a first target grid, wherein the target bounding box comprises: the bounding box of the grid and/or the bounding box of a clipping region, wherein the clipping region represents a region to be displayed on a game display interface in the animation to be optimized;
Cutting the first target grid to obtain a second target grid, wherein the second target grid is a grid to be rendered;
Filtering the grids according to the bounding boxes of the grids to obtain a first target grid comprises:
Filtering the grids by an algorithm based on bounding boxes of the grids to obtain a first target grid, wherein the bounding boxes of the first target grid are located in the clipping region or intersect with the clipping region;
cutting the first target grid to obtain a second target grid comprises the following steps:
performing intersection calculation on the polygon in the first target grid and the clipping region to obtain an intersection relation between the polygon in the first target grid and the clipping region;
if the polygon in the first target grid is determined to be positioned in the clipping region according to the intersection relation, adding the polygon in the first target grid into an output grid queue;
And if it is determined according to the intersection relation that at least one polygon in the first target grid intersects the clipping region, clipping the at least one polygon that intersects the clipping region into at least one sub-polygon contained in the clipping region, and adding the sub-polygon to the output grid queue, wherein the polygons in the output grid queue are the polygons to be rendered.
2. The method of claim 1, wherein after clipping the first target grid to obtain a second target grid, the method further comprises:
and performing batch rendering on the second target grids.
3. The method of claim 2, wherein the batch rendering of the second target grid comprises:
modifying the material of the second target grid to be the same material to obtain an optimized animation;
And performing batch rendering on grids in the animation after the optimization.
4. The method of claim 1, wherein the mesh comprises a plurality of polygons;
Filtering the grid according to the bounding box of the clipping region, wherein obtaining a first target grid comprises:
And filtering the polygons in the grids through an algorithm based on the bounding box of the clipping region to obtain a first target grid, wherein the first target grid is intersected with the bounding box of the clipping region or is located in the clipping region.
5. The method of claim 1, wherein filtering the mesh by an algorithm based on a bounding box of the mesh to obtain a first target mesh comprises:
Determining a bounding box of the grid according to the vertex coordinates of the grid;
Determining the position relationship between the bounding box of the grid and the clipping region according to the position information of the bounding box of the grid and the position information of the clipping region, and obtaining a first position relationship;
and filtering the grid according to the first position relation to obtain the first target grid.
6. The method of claim 5, wherein filtering the grid according to the first positional relationship to obtain the first target grid comprises:
And filtering a first target grid from the grids according to the first position relation, wherein the first target grid comprises a first type grid and/or a second type grid, a bounding box of the first type grid is positioned in the clipping region, and a bounding box of the second type grid is intersected with the clipping region.
7. The method of claim 4, wherein filtering the plurality of polygons in the mesh by an algorithm based on a bounding box of the clipping region, the obtaining a first target mesh comprises:
determining a bounding box of the clipping region according to the vertex coordinates of the clipping region;
Determining the position relation between the bounding box of the clipping region and the polygon according to the position information of the bounding box of the clipping region and the position information of the polygon in the grid to obtain a second position relation;
and filtering polygons in the grids according to the second position relation to obtain the first target grid.
8. The method of claim 7, wherein filtering polygons in the mesh according to the second positional relationship to obtain the first target mesh comprises:
Filtering out a target polygon from the polygons according to the second position relation, and determining a grid formed by the target polygon as the first target grid, wherein the target polygon comprises a first type polygon and/or a second type polygon, and the first type polygon is positioned in a bounding box of the clipping region or the second type polygon is intersected with the bounding box of the clipping region.
9. The method of claim 1, wherein cropping the first target grid further comprises:
And discarding the polygon in the first target grid if the polygon in the first target grid is determined to be positioned outside the clipping region according to the intersection relation.
10. The method of claim 9, wherein clipping at least one polygon that intersects a clipping region into at least one sub-polygon contained within the clipping region comprises:
For each polygon intersecting a clipping region, clipping it into a first polygon located inside the clipping region and a second polygon located outside the clipping region;
If the first polygon is a triangle, the first polygon is used as the sub-polygon;
otherwise, the first polygon is cut into a plurality of sub-triangles, and the sub-triangles are used as the sub-polygons.
11. The method of claim 1, wherein filtering the mesh according to a target bounding box further comprises:
Filtering the grid according to the bounding box of the grid;
and filtering triangles in the filtered grids according to the bounding box of the clipping region to obtain the first target grid.
12. A method according to claim 3, wherein the original material of the map of the grid comprises a pattern corresponding thereto, the pattern comprising: a normal mixing mode and a superposition mixing mode; the normal mixing mode and the superposition mixing mode are respectively determined by corresponding mixing factors, and the mixing factors corresponding to the normal mixing mode and the superposition mixing mode are different in calculation mode.
13. The method of claim 12, wherein modifying the material of the second target mesh to the same material, thereby obtaining the optimized animation comprises:
Setting a mixing factor of a grid with an original material being in the normal mixing mode in the second target grid as a first mixing factor, and setting a mixing factor of a grid with an original material being in the superimposed mixing mode in the grid as a second mixing factor, so as to obtain the animation after optimization, wherein a calculation formula of the first mixing factor is the same as a calculation formula of the second mixing factor, the mixing factor is a parameter for mixing a source color and a target color of the grid, the source color and the target color are used for determining a display color of the grid on a game display interface, and the first mixing factor and the second mixing factor enable an output color of the second target grid before and after modification to be unchanged after modifying the material of the second target grid to be the same material.
14. The method of claim 13, wherein the method further comprises:
and setting corresponding identification information for each grid in the UV vertex coordinates of each grid, wherein the identification information is used for indicating the original material of each grid.
15. The method of claim 14, wherein batch rendering the mesh in the optimized animation comprises:
reading identification information in UV vertex coordinates of each grid in the animation after optimization, and determining original materials of each grid in the animation after optimization according to the read identification information;
determining the mixing factor of each grid in the animation after optimization according to the read original materials;
And determining a target output color of each grid in the optimized animation according to the mixing factor of each grid in the optimized animation, and performing batch rendering on each grid in the optimized animation according to the target output color.
16. The method of claim 15, wherein determining the blending factor of each mesh in the optimized animation from the read original material comprises:
If the read original material is in the normal mixing mode, determining that the mixing factor of each grid in the animation after optimization is the first mixing factor, wherein the calculation formula of the first mixing factor is: Alpha_Source = Alpha, where Alpha_Source is the source mixing factor, representing the effect of the Alpha value on the source color;
If the read original material is in the superposition mixing mode, determining that the mixing factor of each grid in the animation after optimization is the second mixing factor, wherein the calculation formula of the second mixing factor is: Alpha_Source = 0.
17. The method of claim 15, wherein determining the target output color for each grid in the optimized animation based on the blending factor for each grid in the optimized animation comprises:
According to the formula C_Output = C_Source + (1 - Alpha_Source) × C_Target, determining a target output color for each grid in the animation after the optimization, wherein C_Output represents the target output color, C_Source is the source color, C_Target is the target color, and Alpha_Source is the mixing factor of the source color.
18. A game object processing apparatus, comprising:
The acquisition unit is used for acquiring the animation to be optimized; the animation to be optimized comprises at least one game object, wherein the game object comprises an animation model corresponding to the game object, and the animation model comprises a skeleton and grids bound with the skeleton;
The filtering unit is configured to filter the grid according to a target bounding box, so as to obtain a first target grid, where the target bounding box includes: the bounding box of the grid and/or the bounding box of a clipping region, wherein the clipping region represents a region to be displayed on a game display interface in the animation to be optimized;
The clipping unit is configured to clip the first target grid to obtain a second target grid, wherein the second target grid is the grid to be rendered;
wherein, in the filtering unit, filtering the grid according to the bounding box of the grid to obtain the first target grid comprises:
filtering the grid by a filtering algorithm based on the bounding box of the grid to obtain the first target grid, wherein the bounding box of the first target grid is located within the clipping region or intersects the clipping region;
and clipping the first target grid to obtain the second target grid comprises:
performing an intersection calculation between the polygons in the first target grid and the clipping region to obtain an intersection relationship between the polygons in the first target grid and the clipping region;
if it is determined from the intersection relationship that a polygon in the first target grid is located within the clipping region, adding the polygon in the first target grid to an output grid queue;
and if it is determined from the intersection relationship that at least one polygon in the first target grid intersects the clipping region, clipping the at least one polygon intersecting the clipping region into at least one sub-polygon contained within the clipping region, and adding the sub-polygons to the output grid queue, wherein the polygons in the output grid queue are the polygons to be rendered.
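The filter-then-clip pipeline above can be sketched as follows. The helper names are assumptions, and the use of Sutherland-Hodgman clipping against an axis-aligned clipping region is one possible realization; the claim only requires that polygons straddling the clipping region be cut into sub-polygons contained within it.

```python
def aabb_overlaps(box, clip):
    """Bounding-box filter: keep grids whose AABB (x0, y0, x1, y1)
    lies inside or intersects the clipping region."""
    bx0, by0, bx1, by1 = box
    cx0, cy0, cx1, cy1 = clip
    return bx0 <= cx1 and bx1 >= cx0 and by0 <= cy1 and by1 >= cy0

def clip_polygon(poly, clip):
    """Sutherland-Hodgman clip of a polygon (list of (x, y) points)
    against an axis-aligned clipping region; returns the sub-polygon
    contained within the region (empty list if fully outside)."""
    cx0, cy0, cx1, cy1 = clip
    # Each clip edge: an "inside" predicate plus the axis and bound to
    # intersect against when a polygon edge crosses it.
    edges = [
        (lambda p: p[0] >= cx0, 0, cx0),
        (lambda p: p[0] <= cx1, 0, cx1),
        (lambda p: p[1] >= cy0, 1, cy0),
        (lambda p: p[1] <= cy1, 1, cy1),
    ]
    out = list(poly)
    for inside, axis, bound in edges:
        pts, out = out, []
        for i, cur in enumerate(pts):
            prev = pts[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(_intersect(prev, cur, axis, bound))
                out.append(cur)
            elif inside(prev):
                out.append(_intersect(prev, cur, axis, bound))
        if not out:
            break
    return out

def _intersect(p, q, axis, bound):
    """Point where segment p-q crosses the line {coordinate[axis] == bound}."""
    t = (bound - p[axis]) / (q[axis] - p[axis])
    pt = [p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])]
    pt[axis] = bound  # pin to the clip edge to avoid floating-point drift
    return tuple(pt)
```

For instance, a unit square clipping region cuts a 2×2 square polygon down to its overlapping 1×1 sub-polygon, which would then be added to the output grid queue.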
19. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 17 when executing the computer program.
20. A computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the steps of the method of any one of the preceding claims 1 to 17.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010951335.7A CN112057854B (en) | 2020-09-10 | 2020-09-10 | Game object processing method, game object processing device, electronic equipment and computer readable medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010951335.7A CN112057854B (en) | 2020-09-10 | 2020-09-10 | Game object processing method, game object processing device, electronic equipment and computer readable medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112057854A (en) | 2020-12-11 |
CN112057854B (en) | 2024-07-12 |
Family
ID=73696039
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010951335.7A Active CN112057854B (en) | 2020-09-10 | 2020-09-10 | Game object processing method, game object processing device, electronic equipment and computer readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112057854B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112837328B (en) | 2021-02-23 | 2022-06-03 | 中国石油大学(华东) | Rectangular window clipping and drawing method for two-dimensional polygonal primitive |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108744520A (en) * | 2018-06-05 | 2018-11-06 | 网易(杭州)网络有限公司 | Determine the method, apparatus and electronic equipment of game model placement position |
CN109887093A (en) * | 2019-01-17 | 2019-06-14 | 珠海金山网络游戏科技有限公司 | A kind of game level of detail processing method and system |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6700586B1 (en) * | 2000-08-23 | 2004-03-02 | Nintendo Co., Ltd. | Low cost graphics with stitching processing hardware support for skeletal animation |
CN104463934B (en) * | 2014-11-05 | 2017-06-23 | 南京师范大学 | A kind of point-based surface Automatic Generation of Computer Animation method of " mass spring " system drive |
CN110874812B (en) * | 2019-11-15 | 2024-06-04 | 网易(杭州)网络有限公司 | Scene image drawing method and device in game and electronic terminal |
- 2020-09-10: CN application CN202010951335.7A, patent CN112057854B (en), status Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108744520A (en) * | 2018-06-05 | 2018-11-06 | 网易(杭州)网络有限公司 | Determine the method, apparatus and electronic equipment of game model placement position |
CN109887093A (en) * | 2019-01-17 | 2019-06-14 | 珠海金山网络游戏科技有限公司 | A kind of game level of detail processing method and system |
Also Published As
Publication number | Publication date |
---|---|
CN112057854A (en) | 2020-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103180881B (en) | Complex scene sense of reality fast drawing method on the Internet | |
Reshetov | Morphological antialiasing | |
Schütz et al. | Real-time continuous level of detail rendering of point clouds | |
JP4015644B2 (en) | Image processing apparatus and image processing method | |
CN105912234B (en) | The exchange method and device of virtual scene | |
US8520004B2 (en) | Interactive labyrinth curve generation and applications thereof | |
Frey et al. | Interactive progressive visualization with space-time error control | |
TW201737207A (en) | Method and system of graphics processing enhancement by tracking object and/or primitive identifiers, graphics processing unit and non-transitory computer readable medium | |
TW201816724A (en) | Method for efficient construction of high resolution display buffers | |
WO2010043969A1 (en) | System and method for hybrid solid and surface modeling for computer-aided design environments | |
CA2357962C (en) | System and method for the coordinated simplification of surface and wire-frame descriptions of a geometric model | |
CN112057854B (en) | Game object processing method, game object processing device, electronic equipment and computer readable medium | |
KR20150093689A (en) | Method for forming an optimized polygon based shell mesh | |
JP2007102734A (en) | Image processor, image processing method and program | |
CN109243614B (en) | Operation simulation method, device and system | |
US20090201288A1 (en) | Rendering 3D Computer Graphics Using 2D Computer Graphics Capabilities | |
Barringer et al. | High-quality curve rendering using line sampled visibility | |
McReynolds et al. | Programming with opengl: Advanced rendering | |
Willmott | Rapid simplification of multi-attribute meshes | |
CN111951369B (en) | Detail texture processing method and device | |
CN113470153A (en) | Rendering method and device of virtual scene and electronic equipment | |
Barnett et al. | Relating yield models to burn-in fall-out in time | |
CN111402369A (en) | Interactive advertisement processing method and device, terminal equipment and storage medium | |
CN106716500A (en) | Program, information processing device, depth definition method, and recording medium | |
CN116012512A (en) | Foam effect rendering method, rendering device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||