CN113426130B - Batch processing method and device for model - Google Patents
- Publication number: CN113426130B (application CN202110749147.0A)
- Authority: CN (China)
- Prior art keywords: model, sub-model, batch, vertex
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A63F13/60 — Video games: generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editors
- G06T15/005 — 3D [three-dimensional] image rendering: general purpose rendering architectures
- G06T15/04 — 3D [three-dimensional] image rendering: texture mapping
- G06T15/60 — 3D [three-dimensional] image rendering: lighting effects; shadow generation
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
An embodiment of the invention provides a batch processing method and device for a model. The method comprises the following steps: adding a model to be processed to a batch processing queue, the model to be processed comprising at least one sub-model; assigning the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed; sorting all the sub-models in each model group according to the textures and materials of the sub-models to obtain a sorting result; and batching the sub-models in the model group with preset batching logic according to the sorting result to obtain a batching result. By adding the model to be processed to a batch processing queue for processing, the vertex limit can be removed, the sorting strategy of the models can be modified, transforming a model's matrix after merging is supported, and attachment-point models and model special effects under a model are handled, so that the batching of models in the game scene is maximized.
Description
Technical Field
The invention relates to the technical field of games, and in particular to a batch processing method for a model and a batch processing device for a model.
Background
In computer graphics, a batch is, simply put, an operation in which the CPU (Central Processing Unit) submits the data required for drawing to the GPU (Graphics Processing Unit) and calls a graphics API (Application Programming Interface). If a model in the game scene is independent and no batching is performed, that model constitutes one batch. Because each batch requires operations such as submitting data, setting the Shader, and switching rendering states, and these operations are relatively expensive, too many batches in one frame degrade performance. Merging batches is therefore an important means of improving game performance.
In the prior art there are mainly three batching schemes: static batching, dynamic batching, and Instancing batching. Static batching selects the data of objects that use the same material and the same map and are marked Static (i.e. support static batching), integrates the data by computation into one merged buffer, and submits it to the GPU, so that they are drawn as a single batch. Dynamic batching supports batching, dynamically and in real time, those models that meet the batching requirements. Instancing batching submits one copy of model vertex data together with the different world matrix of each model, provided the models share the same material and the same mesh, and the GPU computes the different model positions from the different world matrices.
However, static batching is limited by the number of vertices, its batching order makes it almost impossible to batch different models together, and transforming a model's world matrix after batching is not supported. Dynamic batching requires real-time computation every frame, so for efficiency it imposes strict limits on the number of model vertices, and the per-frame computation cost is excessive. The Instancing batching scheme only supports merging identical models, which is not applicable to games that contain a large number of different models. If a game is characterized by a wide variety of models whose materials and textures are nevertheless almost the same, none of the above schemes is suitable for batching those models.
Disclosure of Invention
In view of the above problem that existing batching schemes are not suitable for game scenes containing many different models whose materials and textures are almost the same, embodiments of the present invention are proposed to provide a batch processing method for a model and a corresponding batch processing device for a model that overcome, or at least partially solve, the above problem.
An embodiment of the invention discloses a batch processing method for a model, comprising the following steps:
adding a model to be processed to a batch processing queue, wherein the model to be processed comprises at least one sub-model;
assigning the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed;
sorting all the sub-models in the model group according to the textures and materials of the sub-models in the model group to obtain a sorting result;
and batching the sub-models in the model group with preset batching logic according to the sorting result to obtain a batching result.
Optionally, batching the sub-models in the model group with preset batching logic according to the sorting result to obtain a batching result comprises:
judging whether the vertex data stored for the sub-models in the model group need to be updated;
when the vertex data stored for the sub-models in the model group need to be updated, updating the vertex data stored for the sub-models in the model group and setting the update bit corresponding to the model group to true;
if the update bit of the model group is true, creating a geometric data structure for the model group and storing the vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result;
and batching the sub-models in the model group with the preset batching logic according to the vertex buffers and index buffers stored in the geometric data structure to obtain a batching result.
Optionally, judging whether the vertex data stored for the sub-models in the model group need to be updated comprises:
judging whether the sub-model is added to the model group for the first time;
if the sub-model is added to the model group for the first time, determining that the vertex data stored for the sub-model in the model group need to be updated;
if the sub-model is not added to the model group for the first time, judging whether the world matrix of the sub-model has changed, and if the world matrix of the sub-model has changed, determining that the vertex data stored for the sub-model in the model group need to be updated.
Optionally, storing the vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result comprises:
transforming the vertex data stored for the sub-models in the model group into world space to obtain target vertex data;
and storing the vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result and the target vertex data.
Optionally, the geometric data structure includes a sub-geometric data structure corresponding to each sub-model, and the sub-geometric data structure is used for recording a start index and an end index of the sub-model in the geometric data structure.
Optionally, batching the sub-models in the model group with the preset batching logic according to the vertex buffers and index buffers stored in the geometric data structure to obtain a batching result comprises:
determining, among all the sub-models, one or more sub-models that belong to the same geometric data structure and are adjacent in position according to the vertex buffers and index buffers stored in the geometric data structure;
determining, with the preset batching logic, the target sub-models among the one or more sub-models that meet the batching conditions, and merging the target sub-models to obtain a batching result; wherein the batching conditions comprise at least one of:
the materials are the same;
the rendering priorities are the same;
the batching flag bit parameters are the same.
Optionally, the attributes of the model to be processed comprise the vertex type and the shader used, and assigning the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed comprises:
determining the currently operated model from the batch processing queue in sequence;
determining the target model group matching the currently operated model according to the vertex type of the currently operated model and the shader used;
and assigning the currently operated model to the target model group.
Optionally, determining the target model group matching the currently operated model according to the vertex type of the currently operated model and the shader used comprises:
determining a first sub-model among the at least one sub-model of the currently operated model;
and determining the target model group matching the currently operated model according to the vertex type of the first sub-model and the shader used.
Optionally, the vertex data comprises at least one of: vertex position, color information, normal direction, and texture coordinates.
Optionally, sorting all the sub-models according to the textures and materials of the sub-models in the model group to obtain a sorting result comprises:
acquiring the main texture parameters and material parameters describing the textures and materials of all sub-models in the model group;
calculating a texture hash value from the main texture parameters and a parameter hash value from the material parameters;
and sorting all the sub-models according to the texture hash values and the parameter hash values to obtain a sorting result.
Optionally, before the step of adding the model to be processed to the batch processing queue, the method further comprises:
loading a model in the game scene, the model carrying a batching flag bit parameter corresponding to its original batching logic;
judging whether the model in the game scene has the dissolve effect enabled;
and if the model in the game scene does not have the dissolve effect enabled, adjusting the batching flag bit parameter to the parameter corresponding to the preset batching logic, and enabling the preset batching logic to batch the model in the game scene according to the adjusted batching flag bit parameter.
Optionally, the method further comprises:
if the model in the game scene has the dissolve effect enabled, batching the model in the game scene according to its original batching logic and rendering the model in the game scene according to the batching result;
and after the dissolve effect of the model in the game scene ends, clearing the rendering data of the model in the game scene, adjusting the batching flag bit parameter to the parameter corresponding to the preset batching logic, and enabling the preset batching logic to batch the model in the game scene according to the adjusted batching flag bit parameter.
Optionally, before the step of adding the model to be processed to the batch processing queue, the method further comprises:
culling the models in the game scene to obtain the models to be processed.
An embodiment of the invention also discloses a batch processing device for a model, comprising:
a model adding module, configured to add a model to be processed to a batch processing queue, wherein the model to be processed comprises at least one sub-model;
a model grouping module, configured to assign the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed;
a model sorting module, configured to sort all the sub-models in the model group according to the textures and materials of the sub-models in the model group to obtain a sorting result;
and a model batching module, configured to batch the sub-models in the model group with preset batching logic according to the sorting result to obtain a batching result.
An embodiment of the invention also discloses an electronic device, comprising:
a processor and a storage medium storing machine-readable instructions executable by the processor, wherein, when the electronic device runs, the processor executes the machine-readable instructions to perform the method according to any one of the embodiments of the invention.
An embodiment of the invention also discloses a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the method according to any one of the embodiments of the invention.
Embodiments of the invention have the following advantages:
In the batch processing method for a model provided by the embodiment of the invention, a model to be processed, which comprises at least one sub-model, is added to a batch processing queue; the models to be processed in the batch processing queue are assigned to different model groups according to their attributes; all the sub-models in each model group are sorted according to the textures and materials of the sub-models to obtain a sorting result; and the sub-models in the model group are batched according to the sorting result to obtain a batching result. The embodiment of the invention can merge models at the engine level, which greatly reduces the number of batches in scenes containing large groups of buildings that use similar materials. Compared with static batching, performing the merge after reordering all the sub-models improves the success rate of merging and supports moving the models. Compared with Instancing batching, the embodiment of the invention groups models by their attributes and can therefore merge different models: as long as their rendering state, materials and other information are consistent, the models can be merged, rather than only models that share the same material and the same mesh. Moreover, by building on the static batching scheme of the game engine and adding the model to be processed to a batch processing queue for processing, the vertex limit can be removed, the sorting strategy of the models is modified, transforming a model's matrix after merging is supported, and attachment-point models and model special effects under a model are handled, so that the batching of models in the game scene is maximized.
Drawings
In order to explain the technical solutions of the present invention more clearly, the drawings needed for the description are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of the steps of a batch processing method for a model according to an embodiment of the present invention;
FIG. 2 is a flow chart of the steps of a batching process according to an embodiment of the present invention;
FIG. 3 is a flow chart of the steps of another batch processing method for a model according to an embodiment of the present invention;
FIG. 4 is a structural block diagram of a batch processing device for a model according to an embodiment of the present invention;
FIG. 5 is a block diagram of an electronic device of the present invention;
FIG. 6 is a block diagram of a storage medium of the present invention.
Detailed Description
In order to make the above objects, features and advantages of the present invention more apparent and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It is apparent that the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
In the prior art, the batching schemes supported by game engines (such as the NeoX engine) are mainly static batching, dynamic batching, and Instancing batching. In some games, such as SLG (Simulation Game) strategy games with a space setting, each player has their own base. Because the buildings of the base can be upgraded, they are split into numerous small models for flexibility, which makes the number of batches for a player's base very high and requires batching to reduce the batch count. The base contains many different models, but the materials and textures they use are almost the same; how to minimize the number of batches for these models, while also being able to handle the models and effect models at the attachment points of the models, is a difficult problem. The batching schemes of the game engine all have shortcomings that prevent them from meeting this requirement: static batching is limited by the number of vertices, its batching order makes it almost impossible to batch different models together, and transforming a model's world matrix after batching is not supported; the base models differ from one another, so the Instancing scheme, which supports merging only identical models, clearly does not help much; and dynamic batching requires real-time computation every frame, imposes strict vertex-count limits for efficiency, and is computationally expensive per frame.
The embodiment of the invention provides a scheme suitable for batching models in a game scene that contains many different models whose textures and materials are nevertheless almost the same. The preset batching logic is MeshPacker, a custom logic system for handling merged models. When batching is required, the model to be processed is added to a batch processing queue, the model to be processed comprising at least one sub-model; the models to be processed in the batch processing queue are assigned to different model groups according to their attributes; all the sub-models in each model group are sorted according to the textures and materials of the sub-models to obtain a sorting result; and the sub-models in the model group are batched with the preset batching logic according to the sorting result to obtain a batching result. The embodiment of the invention can merge models at the engine level, which greatly reduces the number of batches in scenes containing large groups of buildings that use similar materials. Compared with static batching, performing the merge after reordering all the sub-models improves the success rate of merging and supports moving the models. Compared with Instancing batching, the embodiment of the invention groups models by their attributes and can therefore merge different models: as long as their rendering state, materials and other information are consistent, the models can be merged, rather than only models that share the same material and the same mesh.
In addition, the number of batches for CSM (Cascaded Shadow Maps) shadows is reduced after merging. Although tiny differences in material parameters and the like may cause a group of model data to be drawn in different batches during merging, switching of rendering states is avoided in the process, which still brings some efficiency gain. In the end, in the most complex base-building situation in the game, the number of batches can be reduced effectively, the CPU usage stays roughly the same, the GPU usage drops slightly, and the running efficiency of the game is improved significantly.
Referring to FIG. 1, a flow chart of the steps of a batch processing method for a model according to an embodiment of the present invention is shown; the method may specifically comprise the following steps:
Step 101, adding a model to be processed to a batch processing queue, wherein the model to be processed comprises at least one sub-model;
The model to be processed may be a visible model to be rendered in the game scene; for example, it may be a building model of a base, a virtual character model, and so on. In the embodiment of the invention there may be multiple models to be processed, and each model to be processed may comprise at least one sub-model. For example, because the building models of a base can be upgraded, each building is split into several small models to ensure flexibility, and each small model is one sub-model. In a specific implementation, to improve the efficiency of game rendering, all the sub-models contained in the models to be processed need to be batched.
In the embodiment of the invention, a batch processing queue can be created in advance, and the models to be processed are added to the batch processing queue so that they can be reordered and then batched by the preset batching logic. The preset batching logic may be MeshPacker, a custom logic system for handling merged models; it can raise the vertex limit of a model group to the maximum number of vertices supported by the game engine, for example to the maximum model vertex count UINT16_MAX supported by the NeoX engine, so that one merged model group can contain more small models.
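As an illustration only, the following C++ sketch shows what such a batch processing queue with a raised vertex cap might look like; the `SubModel`, `PendingModel` and `MeshPacker` types and the `kMaxVertices` constant are hypothetical stand-ins, not the engine's actual API.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

// Hypothetical stand-in for one model to be processed (one or more sub-models).
struct SubModel { std::uint32_t vertexCount; };
struct PendingModel { std::vector<SubModel> subModels; };

// Sketch of the custom batching logic ("MeshPacker" in the text): models are queued
// first and merged later, so a merged model group may hold up to the maximum vertex
// count addressable by a 16-bit index buffer (UINT16_MAX).
class MeshPacker {
public:
    static constexpr std::uint32_t kMaxVertices = UINT16_MAX;

    void enqueue(PendingModel model) { queue_.push_back(std::move(model)); }
    std::size_t pending() const { return queue_.size(); }

private:
    std::deque<PendingModel> queue_;  // the batch processing queue
};

int main() {
    MeshPacker packer;
    packer.enqueue(PendingModel{{SubModel{128}, SubModel{64}}});
    return packer.pending() == 1 ? 0 : 1;
}
```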
In a preferred embodiment of the present invention, before step 101, the method may further comprise the following step:
culling the models in the game scene to obtain the models to be processed.
Specifically, the models in the game scene can be culled against the view frustum of the main camera to obtain the models to be processed, i.e. the models that need to be rendered in the current game scene.
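A minimal sketch of bounding-sphere view-frustum culling is given below for illustration; the `Plane`, `Sphere` and `Frustum` types and the inward-facing plane convention are assumptions and do not reflect the engine's actual culling code.

```cpp
#include <array>
#include <vector>

// Illustrative types: a frustum as six planes (ax + by + cz + d = 0, normals pointing
// inward) and a model bound approximated by a sphere.
struct Plane  { float a, b, c, d; };
struct Sphere { float x, y, z, radius; };
using Frustum = std::array<Plane, 6>;

// A model survives culling if its bounding sphere is not completely outside any plane.
bool insideFrustum(const Frustum& frustum, const Sphere& s) {
    for (const Plane& p : frustum) {
        float dist = p.a * s.x + p.b * s.y + p.c * s.z + p.d;
        if (dist < -s.radius) return false;  // fully behind this plane
    }
    return true;
}

// Keep only the models visible to the main camera; these become the models to be processed.
std::vector<Sphere> cullScene(const Frustum& frustum, const std::vector<Sphere>& scene) {
    std::vector<Sphere> visible;
    for (const Sphere& s : scene)
        if (insideFrustum(frustum, s)) visible.push_back(s);
    return visible;
}
```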
Step 102, assigning the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed;
The attributes of a model to be processed may be parameters describing certain properties of the model. Specifically, the attributes of the model to be processed include the vertex type of the model and the type of Shader used. The vertex type can be the vertex format; for example, a vertex format whose vertices require lighting computation needs a normal attribute, and a vertex format for textured three-dimensional geometry needs UV texture coordinates. The Shader is used to implement image rendering, and the Shader types used may include vertex shaders, pixel shaders, and the like.
If the vertex types and/or the Shader types used by two models to be processed differ, the models are rendered in different ways and therefore cannot be merged into one batch.
Specifically, a different model group MeshGroup is created for each distinct vertex type and/or Shader type used. The models to be processed are read from the batch processing queue in turn; for each one, its vertex type and the Shader type it uses are obtained, the matching MeshGroup is determined, and the model to be processed is assigned to that MeshGroup. When a model to be processed is assigned to a MeshGroup, all the sub-models it contains are assigned to the same MeshGroup. Models to be processed continue to be read from the batch processing queue until all models to be processed in the batch processing queue have been traversed.
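The grouping step can be pictured with the following sketch, in which a hypothetical `GroupKey` (vertex format plus shader name) selects the MeshGroup and the key is taken from the first sub-model, as described later in this document; all type names here are illustrative assumptions, not the engine's actual classes.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative grouping key: models are merged only when both the vertex format and
// the shader match, so the pair of the two identifies a MeshGroup.
struct GroupKey {
    std::uint32_t vertexFormat;  // e.g. bit flags: position | normal | uv
    std::string   shaderName;
    bool operator==(const GroupKey& o) const {
        return vertexFormat == o.vertexFormat && shaderName == o.shaderName;
    }
};

struct GroupKeyHash {
    std::size_t operator()(const GroupKey& k) const {
        return std::hash<std::uint32_t>{}(k.vertexFormat) ^
               (std::hash<std::string>{}(k.shaderName) << 1);
    }
};

struct SubModel  { std::uint32_t vertexFormat; std::string shaderName; };
struct Model     { std::vector<SubModel> subModels; };
struct MeshGroup { std::vector<const SubModel*> members; };

// Assign each queued model to the MeshGroup matching its vertex format and shader;
// all sub-models of one model land in the same group.
void assignToGroups(const std::vector<Model>& queue,
                    std::unordered_map<GroupKey, MeshGroup, GroupKeyHash>& groups) {
    for (const Model& model : queue) {
        if (model.subModels.empty()) continue;
        const SubModel& first = model.subModels.front();  // key taken from the first sub-model
        MeshGroup& group = groups[GroupKey{first.vertexFormat, first.shaderName}];
        for (const SubModel& sub : model.subModels) group.members.push_back(&sub);
    }
}
```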
Step 103, sorting all the sub-models in the model group according to the textures and materials of the sub-models in the model group to obtain a sorting result;
When rendering or drawing models, only models in adjacent positions are considered for merging into one batch. In the static batching scheme, the ordering of the models in a MeshGroup takes the addresses of the models into account, which also means that different models are very unlikely to be sorted into adjacent positions.
Because most sub-models of the same kind of model in a game (such as the building models of a base) use the same material with only small differences in parameters, in the embodiment of the invention the sorting strategy of the models can be modified to increase the probability of merging: the textures and materials of the sub-models are compared, and sub-models that can be batched together are arranged in adjacent positions purely according to the similarity of their textures and materials. Specifically, the textures and materials of each sub-model can be compared so that all sub-models in a model group are sorted according to their textures and materials to obtain a sorting result, in which sub-models belonging to different models to be processed may end up in adjacent positions.
Step 104, batching the sub-models in the model group with preset batching logic according to the sorting result to obtain a batching result.
Specifically, after sorting, the sorting result and the vertex data of all the sub-models may be packed into Render Node data and submitted to the rendering module. According to the Render Node data and the preset batching logic, the rendering module can determine the one or more sub-models that can be merged into one batch, thereby obtaining the batching result.
In a specific implementation, whether sub-models belong to the same MeshGroup can be determined from the Render Node data. If they belong to one MeshGroup, it can further be determined whether they are located in adjacent positions in the sorting result, and whether the sub-models in adjacent positions meet the batching conditions, for example whether the materials of the adjacent sub-models are the same, whether their rendering priorities are the same, and whether their batching flag bit parameters are the same (the batching flag bit parameter indicates the batching logic used by the sub-model). Each sub-model is traversed in turn: if the sub-models in adjacent positions are judged to meet the batching conditions, they can be merged into one batch for rendering; if they do not meet the batching conditions, the next sub-model is examined, until all the sub-models have been traversed.
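For illustration, the following sketch shows one way the adjacency walk over the sorted Render Node data could be expressed; the `RenderRecord` fields and the `canMerge` / `countBatches` helpers are assumptions, not the actual rendering-module interface.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative per-sub-model render record, as might be packed into the Render Node
// data handed to the rendering module after sorting.
struct RenderRecord {
    const void*   geometry;        // Geometry shared by the owning MeshGroup
    std::uint64_t materialId;      // material identity
    int           renderPriority;  // rendering priority
    std::uint32_t batchFlag;       // batching flag bit parameter
};

// Two adjacent records can be drawn in one batch only if they reference the same
// Geometry (same MeshGroup) and agree on material, priority and batching flag.
bool canMerge(const RenderRecord& a, const RenderRecord& b) {
    return a.geometry == b.geometry &&
           a.materialId == b.materialId &&
           a.renderPriority == b.renderPriority &&
           a.batchFlag == b.batchFlag;
}

// Walk the sorted records once and count the resulting draw batches.
std::size_t countBatches(const std::vector<RenderRecord>& sorted) {
    std::size_t batches = 0;
    for (std::size_t i = 0; i < sorted.size(); ++i) {
        if (i == 0 || !canMerge(sorted[i - 1], sorted[i])) ++batches;
    }
    return batches;
}
```

Because the records are already sorted by texture and material, a single linear pass like this is enough to form the batches.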
In a preferred embodiment of the present invention, as shown in FIG. 2, step 104 may specifically comprise the following sub-steps:
Sub-step S11, judging whether the vertex data stored for the sub-models in the model group need to be updated;
Specifically, after a model to be processed is assigned to a model group MeshGroup, the vertex data of all the sub-models belonging to that model can be stored in the MeshGroup. The vertex data may include vertex positions, color information, normal directions, texture coordinates, and the like.
To avoid the performance cost of repeated batching, in the embodiment of the invention the subsequent batching process is performed only when the vertex data stored for a sub-model are updated; if the vertex data stored for the sub-model are not updated, the model can be rendered directly according to the previously computed batching result.
In a preferred embodiment of the present invention, sub-step S11 may specifically comprise the following sub-steps:
judging whether the sub-model is added to the model group for the first time; if the sub-model is added to the model group for the first time, determining that the vertex data stored for the sub-model in the model group need to be updated; if the sub-model is not added to the model group for the first time, judging whether the world matrix of the sub-model has changed, and if the world matrix of the sub-model has changed, determining that the vertex data stored for the sub-model in the model group need to be updated.
Specifically, it is judged whether the sub-model is added to the MeshGroup for the first time; if so, it is determined that the vertex data stored for the sub-model in the MeshGroup need to be updated.
If the sub-model is not added to the MeshGroup for the first time, it is further judged whether the world matrix of the sub-model has changed; if it has, it is determined that the vertex data stored for the sub-model in the MeshGroup need to be updated. For example, a model special effect may displace, rotate or scale the model, or the world matrix of the model may be adjusted manually; in these cases the world matrix of the model changes, and the vertex data stored for the model in the MeshGroup need to be updated.
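A minimal sketch of this update check is shown below, assuming a hypothetical `SubModelState` cache and an exact world-matrix comparison; the real engine may track this differently.

```cpp
#include <array>

// Illustrative 4x4 world matrix; the comparison here is exact, but an epsilon-based
// comparison could equally be used.
using Matrix4 = std::array<float, 16>;

struct SubModelState {
    bool    everAdded = false;  // has the sub-model been added to the MeshGroup before?
    Matrix4 cachedWorld{};      // world matrix captured at the last update
};

// Vertex data cached for a sub-model must be rebuilt when the sub-model is added to the
// group for the first time, or when its world matrix has changed since the cache was
// filled (displacement, rotation, scaling, manual adjustment, ...).
bool needsVertexUpdate(SubModelState& state, const Matrix4& currentWorld) {
    if (!state.everAdded) {
        state.everAdded  = true;
        state.cachedWorld = currentWorld;
        return true;
    }
    if (state.cachedWorld != currentWorld) {
        state.cachedWorld = currentWorld;
        return true;
    }
    return false;
}
```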
Sub-step S12, when the vertex data stored for the sub-models in the model group need to be updated, updating the vertex data stored for the sub-models in the model group and setting the update bit corresponding to the model group to true;
When the vertex data stored for the sub-models in the model group need to be updated, the vertex data can be updated and the update bit corresponding to the model group set to true. The update bit is a parameter describing whether the sub-models in the model group have changed: when the update bit is true, the sub-models in the model group have changed and the batching computation needs to be performed again; when the update bit is false, the sub-models in the model group have not changed and the batching computation does not need to be repeated.
Sub-step S13, if the update bit of the model group is true, creating a geometric data structure for the model group and storing the vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result;
Specifically, it can be determined whether the update bit corresponding to the MeshGroup is true, and when it is, a geometric data structure Geometry can be created for the MeshGroup. The Geometry is shared, i.e. all sub-models in one MeshGroup reference the same Geometry. In the embodiment of the invention, each sub-model has a corresponding sub-geometric data structure Sub_Geometry within the Geometry, and the Sub_Geometry is used to record the start index and end index of the sub-model within the Geometry. Specifically, a Sub_Geometry can be created for each sub-model and then set to reference the Geometry corresponding to the MeshGroup in which the sub-model is located, so that in the subsequent batching computation it can be determined, from the Geometry referenced by each sub-model's Sub_Geometry, whether sub-models belong to the same MeshGroup: when the Geometries referenced by the Sub_Geometries of the sub-models are the same, the sub-models belong to the same Geometry; when they differ, the sub-models do not belong to the same Geometry.
In the embodiment of the invention, after the Geometry is created for the MeshGroup, the vertex buffers and index buffers of all sub-models can be stored in the Geometry according to the sorting result of all the sub-models. The vertex buffer is used to store vertex data, and the index buffer is used to store the index addresses, within the geometric data structure, of the vertex data corresponding to the vertices.
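For illustration, the shared geometric data structure and the per-sub-model record might be sketched as follows; the `Vertex` layout and the field names are assumptions based on the description above, not the engine's actual definitions.

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// Illustrative vertex layout; the text lists position, color, normal and texture coordinates.
struct Vertex { float px, py, pz; float r, g, b, a; float nx, ny, nz; float u, v; };

// Geometry shared by one MeshGroup: a single vertex buffer and a single 16-bit index
// buffer holding all sub-models in sorted order.
struct Geometry {
    std::vector<Vertex>        vertexBuffer;
    std::vector<std::uint16_t> indexBuffer;
};

// Per-sub-model record: it references the shared Geometry and remembers the range of
// indices that belongs to this sub-model (start index / end index).
struct SubGeometry {
    std::shared_ptr<Geometry> geometry;
    std::uint32_t             indexStart = 0;
    std::uint32_t             indexEnd   = 0;  // one past the last index of the sub-model
};

// Sub-models belong to the same MeshGroup exactly when their SubGeometry records
// reference the same Geometry object.
bool sameGroup(const SubGeometry& a, const SubGeometry& b) {
    return a.geometry == b.geometry;
}
```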
In a preferred embodiment of the present invention, sub-step S13 may specifically comprise the following sub-steps:
transforming the vertex data stored for the sub-models in the model group into world space to obtain target vertex data; and storing the vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result and the target vertex data.
In the embodiment of the invention, the vertex data stored for the sub-models in the model group can be transformed into world space to obtain target vertex data, and then the vertex buffers and index buffers of all the sub-models are stored in the geometric data structure according to the sorting result and the target vertex data.
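A minimal sketch of the world-space transform of vertex positions is given below, assuming a row-major 4x4 matrix; normals would need a separate (inverse-transpose) transform, which is omitted here.

```cpp
#include <array>
#include <vector>

using Matrix4 = std::array<float, 16>;  // row-major 4x4 world matrix
struct Position { float x, y, z; };

// Transform a sub-model's local-space vertex positions into world space before they are
// written into the shared vertex buffer; baking the world matrix in is what makes the
// merged batch independent of per-model transforms at draw time.
std::vector<Position> toWorldSpace(const std::vector<Position>& local, const Matrix4& m) {
    std::vector<Position> world;
    world.reserve(local.size());
    for (const Position& p : local) {
        world.push_back({
            m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3],
            m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7],
            m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11],
        });
    }
    return world;
}
```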
Sub-step S14, batching the sub-models in the model group with the preset batching logic according to the vertex buffers and index buffers stored in the geometric data structure to obtain a batching result.
Specifically, according to the vertex buffers and index buffers, the preset batching logic can judge whether the sub-models belong to one MeshGroup, whether they are located in adjacent positions, whether the materials of the sub-models in adjacent positions are the same, whether their rendering priorities are the same, whether their batching flag bit parameters are the same, and so on, thereby determining the one or more sub-models that need to be merged into one batch and obtaining the batching result.
In a preferred embodiment of the present invention, sub-step S14 may specifically comprise the following sub-steps:
determining, among all the sub-models, one or more sub-models that belong to the same geometric data structure and are adjacent in position according to the vertex buffers and index buffers stored in the geometric data structure; and determining, with the preset batching logic, the target sub-models among the one or more sub-models that meet the batching conditions, and merging the target sub-models to obtain a batching result.
Specifically, according to the vertex buffers and index buffers stored in the geometric data structure, the one or more sub-models that belong to the same geometric data structure and are adjacent in position can be determined among all the sub-models; the preset batching logic then determines, among these sub-models, the target sub-models that meet the batching conditions and merges the target sub-models to obtain the batching result.
In an embodiment of the present invention, the batching conditions comprise at least one of: the materials are the same; the rendering priorities are the same; the batching flag bit parameters are the same.
The rendering priority is a parameter that determines the order of rendering when the rendering module processes multiple models that need to be rendered: models with a higher rendering priority are rendered first, and models with a lower rendering priority are rendered afterwards.
The batching flag bit parameter Flag indicates the batching logic used by the model; when it is determined from the batching flag bit parameter that the preset batching logic is enabled for the model, it indicates that the preset batching logic can batch the model using the scheme of the embodiment of the invention.
In a preferred embodiment of the present invention, the attributes of the model to be processed comprise the vertex type and the shader used, and step 102 may specifically comprise the following sub-steps:
determining the currently operated model from the batch processing queue in sequence; determining the target model group matching the currently operated model according to the vertex type of the currently operated model and the shader used; and assigning the currently operated model to the target model group.
Specifically, several MeshGroups are created, one for each distinct vertex type and/or shader used. A model to be processed is taken from the batch processing queue in turn as the currently operated model, the target MeshGroup matching the currently operated model is determined according to its vertex type and the shader it uses, and the currently operated model is assigned to that target MeshGroup. The next model to be processed is then taken from the batch processing queue as the currently operated model, until all models to be processed in the batch processing queue have been traversed.
In a preferred embodiment of the present invention, determining the target model group matching the currently operated model according to the vertex type of the currently operated model and the shader used comprises:
determining a first sub-model among the at least one sub-model of the currently operated model; and determining the target model group matching the currently operated model according to the vertex type of the first sub-model and the shader used.
Although one model to be processed may comprise multiple sub-models with different vertex types and/or using different shaders, models of the same kind in a game (such as base building models) generally fall into only a few classes, so determining only the vertex type and shader of the first sub-model is, with high probability, enough to put these models together in the same MeshGroup.
Specifically, a first sub-model is determined among the at least one sub-model of the currently operated model, and the target MeshGroup matching the currently operated model is determined according to the vertex type of the first sub-model and the shader it uses. The first sub-model may be determined according to a preset rule; for example, it may be selected randomly, which is not limited in the embodiment of the present invention.
In a preferred embodiment of the present invention, step 103 may specifically comprise the following sub-steps:
acquiring the main texture parameters and material parameters describing the textures and materials of all sub-models in the model group; calculating a texture hash value from the main texture parameters and a parameter hash value from the material parameters; and sorting all the sub-models according to the texture hash values and the parameter hash values to obtain a sorting result.
The main texture parameter may be a parameter describing the texture mainly used by a sub-model, and the material parameter may be a parameter describing the material used by the sub-model.
In the embodiment of the invention, the main texture parameters and material parameters describing the textures and materials of all sub-models in the model group are acquired, a texture hash value is calculated from the main texture parameters, a parameter hash value is calculated from the material parameters, and all the sub-models are sorted according to the texture hash values and parameter hash values to obtain a sorting result. Specifically, sub-models with the same texture hash value and parameter hash value can be arranged in adjacent positions, so that sub-models that can be batched together are sorted into adjacent positions purely according to the similarity of their main texture parameters and material parameters, which facilitates batching the sub-models.
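The hash-based sort can be illustrated as follows; reducing the main texture parameter and the serialized material parameters to `std::hash` values is an assumption made for the sketch, since the actual hash function is not specified in the text.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Illustrative sortable view of a sub-model: a main texture identifier and the
// concatenated material parameters, both reduced to hash values.
struct SortKey {
    std::size_t textureHash;
    std::size_t materialHash;
};

struct SubModel {
    std::string mainTexture;     // main texture parameter (e.g. a texture path)
    std::string materialParams;  // serialized material parameters
};

SortKey makeKey(const SubModel& s) {
    return { std::hash<std::string>{}(s.mainTexture),
             std::hash<std::string>{}(s.materialParams) };
}

// Sort all sub-models of a group by (texture hash, material hash) so that sub-models
// with identical texture and material end up adjacent and can be merged into one batch.
void sortByTextureAndMaterial(std::vector<SubModel>& subModels) {
    std::sort(subModels.begin(), subModels.end(),
              [](const SubModel& a, const SubModel& b) {
                  SortKey ka = makeKey(a), kb = makeKey(b);
                  if (ka.textureHash != kb.textureHash) return ka.textureHash < kb.textureHash;
                  return ka.materialHash < kb.materialHash;
              });
}
```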
In the batch processing method for a model provided by the embodiment of the invention, a model to be processed, which comprises at least one sub-model, is added to a batch processing queue; the models to be processed in the batch processing queue are assigned to different model groups according to their attributes; all the sub-models in each model group are sorted according to the textures and materials of the sub-models to obtain a sorting result; and the sub-models in the model group are batched with preset batching logic according to the sorting result to obtain a batching result. The embodiment of the invention can merge models at the engine level, which greatly reduces the number of batches in scenes containing large groups of buildings that use similar materials. Compared with static batching, performing the merge after reordering all the sub-models improves the success rate of merging and supports moving the models. Compared with Instancing batching, the embodiment of the invention groups models by their attributes and can therefore merge different models: as long as their rendering state, materials and other information are consistent, the models can be merged, rather than only models that share the same material and the same mesh. Moreover, by improving the static batching scheme of the NeoX engine and adding the model to be processed to the batch processing queue for processing, the vertex limit can be removed, the sorting strategy of the models is modified, transforming a model's matrix after merging is supported, and attachment-point models and model special effects under a model are handled, so that the batching of models in the game scene is maximized.
Referring to FIG. 3, a flow chart of the steps of another batch processing method for a model according to an embodiment of the present invention is shown; the method may specifically comprise the following steps:
Step 301, loading a model in the game scene, the model carrying a batching flag bit parameter corresponding to its original batching logic;
Each model has corresponding original batching logic. In the embodiment of the invention, the original batching logic corresponding to the model is not changed; after the model in the game scene is loaded at the script layer, the batching scheme of the embodiment of the invention is enabled by configuring the model, i.e. the preset batching logic is enabled for the model via the batching flag bit parameter.
The original batching logic can be the batching scheme originally enabled for the model, including static batching, dynamic batching, the Instancing batching scheme, and so on. Static batching selects the data of objects that use the same material and the same map and are marked Static (i.e. support static batching), integrates the data by computation into one merged buffer, and submits it to the GPU, so that they are drawn as a single batch. Dynamic batching supports batching, dynamically and in real time, those models that meet the batching requirements. Instancing batching submits one copy of model vertex data together with the different world matrix of each model, provided the models share the same material and the same mesh, and the GPU computes the different model positions from the different world matrices.
The preset batching logic is an improvement on static batching: all the sub-models in the models are reordered before batching, the vertex limit is removed, the sorting strategy of the models is modified, transforming a model's matrix after merging is supported, and attachment-point models and model special effects under a model are handled, so that the batching of models in the game is maximized.
Step 302, judging whether the model in the game scene has the dissolve effect enabled;
Specifically, whether the model in the game scene has the dissolve effect enabled can be judged from the attribute data corresponding to the model.
Step 303, if the model in the game scene does not have the dissolve effect enabled, adjusting the batching flag bit parameter to the parameter corresponding to the preset batching logic, and enabling the preset batching logic to batch the model in the game scene according to the adjusted batching flag bit parameter;
In the embodiment of the invention, if the model in the game scene does not have the dissolve effect enabled, the batching flag bit parameter can be adjusted directly to the parameter corresponding to the preset batching logic, and the preset batching logic is enabled to batch the model in the game scene according to the adjusted batching flag bit parameter. The specific batching process is similar to steps 101-104 above and is not repeated here.
In a specific implementation, the preset batching logic can be enabled for a model in the game scene at the script layer. When the model is rendered, if the engine layer determines that the preset batching logic needs to be enabled for the model in the game scene, the batching flag bit parameter corresponding to the original batching logic can be adjusted to the parameter corresponding to the preset batching logic, so that the preset batching logic is enabled according to the batching flag bit parameter to batch the model in the game scene, and the model is rendered according to the batching result.
Step 304, if the model in the game scene has the dissolve effect enabled, batching the model in the game scene according to its original batching logic and rendering the model in the game scene according to the batching result;
In the embodiment of the invention, if the model in the game scene has the dissolve effect enabled, the original batching logic is enabled according to the batching flag bit parameter to batch the model in the game scene, and the model in the game scene is rendered according to the batching result.
Step 305, after the dissolve effect of the model in the game scene ends, clearing the rendering data of the model in the game scene, adjusting the batching flag bit parameter to the parameter corresponding to the preset batching logic, and enabling the preset batching logic to batch the model in the game scene according to the adjusted batching flag bit parameter.
After the dissolve effect of the model in the game scene ends, the rendering data of the model, including the rendering state of the model and data such as the computed batching result, can be cleared, the batching flag bit parameter is adjusted to the parameter corresponding to the preset batching logic, and then the preset batching logic is enabled according to the adjusted batching flag bit parameter to batch the model in the game scene.
Because some models in games have a dissolve effect (such as the building models of bases), the material parameters of each model are inconsistent while it is dissolving, so the batching schemes built into the game engine, such as the static batching, dynamic batching and Instancing batching schemes of the NeoX engine, cannot successfully batch the models during the dissolve. In the embodiment of the invention, the model keeps its own rendering state during the dissolve; after the dissolve ends, the rendering data of the model, such as the rendering state and the batching result, are cleared, the batching type is modified via the batching flag bit parameter to enable the preset batching logic of the embodiment of the invention, the models are then batched according to the preset batching logic, and the models are re-rendered according to the batching result, so that the preset batching logic takes effect.
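For illustration only, the per-frame decision between the original batching logic and the preset batching logic might look like the sketch below; the `BatchFlag` values and the `SceneModel` fields are hypothetical stand-ins for the engine's actual parameters.

```cpp
#include <cstdint>

// Illustrative batching-flag values; the real engine parameters are not named in the text.
enum class BatchFlag : std::uint8_t { Original, MeshPacker };

struct SceneModel {
    bool      dissolveActive  = false;  // is the dissolve effect currently playing?
    BatchFlag batchFlag       = BatchFlag::Original;
    bool      renderDataDirty = false;  // set when cached render data must be cleared
};

// Decide which batching logic a loaded model uses: while the dissolve effect runs, keep
// the original batching logic so per-model material differences stay intact; once it
// finishes, clear the cached render data and switch to the custom batching logic.
void updateBatchFlag(SceneModel& model, bool dissolveJustFinished) {
    if (model.dissolveActive) {
        model.batchFlag = BatchFlag::Original;
        return;
    }
    if (dissolveJustFinished) {
        model.renderDataDirty = true;  // previous render state / batching result is cleared
    }
    model.batchFlag = BatchFlag::MeshPacker;
}
```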
It should be noted that, for simplicity of description, the method embodiments are described as a series of actions, but those skilled in the art should understand that the embodiments of the invention are not limited by the order of the actions described, because some steps may be performed in other orders or simultaneously according to the embodiments of the invention. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the invention.
Referring to FIG. 4, a structural block diagram of an embodiment of a batch processing device for a model of the present invention is shown; it may specifically comprise the following modules:
a model adding module 401, configured to add a model to be processed to a batch processing queue, wherein the model to be processed comprises at least one sub-model;
a model grouping module 402, configured to assign the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed;
a model sorting module 403, configured to sort all the sub-models in the model group according to the textures and materials of the sub-models in the model group to obtain a sorting result;
and a model batching module 404, configured to batch the sub-models in the model group with preset batching logic according to the sorting result to obtain a batching result.
In a preferred embodiment of the present invention, the model batching module 404 comprises:
a model group update judging sub-module, configured to judge whether the vertex data stored for the sub-models in the model group need to be updated;
a model group updating sub-module, configured to, when the vertex data stored for the sub-models in the model group need to be updated, update the vertex data stored for the sub-models in the model group and set the update bit corresponding to the model group to true;
a geometric data structure storage sub-module, configured to, if the update bit of the model group is true, create a geometric data structure for the model group and store the vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result;
and a model batching sub-module, configured to batch the sub-models in the model group with the preset batching logic according to the vertex buffers and index buffers stored in the geometric data structure to obtain a batching result.
In a preferred embodiment of the present invention, the model group update judging sub-module comprises:
a judging unit, configured to judge whether the sub-model is added to the model group for the first time;
a first determining unit, configured to determine that the vertex data stored for the sub-model in the model group need to be updated if the sub-model is added to the model group for the first time;
and a second determining unit, configured to judge, if the sub-model is not added to the model group for the first time, whether the world matrix of the sub-model has changed, and to determine that the vertex data stored for the sub-model in the model group need to be updated if the world matrix of the sub-model has changed.
In a preferred embodiment of the invention, the geometric data structure storage sub-module comprises:
a vertex data updating unit, configured to transform the vertex data stored for the sub-models in the model group into world space to obtain target vertex data;
and a geometric data structure storage unit, configured to store the vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result and the target vertex data.
In a preferred embodiment of the present invention, the geometric data structure includes a sub-geometric data structure corresponding to each sub-model, and the sub-geometric data structure is used for recording a start index and an end index of the sub-model in the geometric data structure.
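The sketch below illustrates one way the storage unit could work: object-space positions are transformed into world space with the sub-model's world matrix, and each sub-model records the start and end index it occupies in the shared buffers. The column-major matrix layout and the names SubGeometry and AppendSubModel are assumptions of this sketch.

```cpp
// Sketch of per-sub-model bookkeeping inside the group geometry.
#include <array>
#include <cstdint>
#include <vector>

using Matrix4 = std::array<float, 16>; // column-major 4x4 world matrix (assumed layout)

struct SubGeometry {
    uint32_t startIndex = 0; // first index of the sub-model in the shared index buffer
    uint32_t endIndex = 0;   // one past the last index of the sub-model
};

// Transform an object-space position by the world matrix (w assumed to be 1).
std::array<float, 3> ToWorld(const Matrix4& m, const std::array<float, 3>& p) {
    return {
        m[0] * p[0] + m[4] * p[1] + m[8]  * p[2] + m[12],
        m[1] * p[0] + m[5] * p[1] + m[9]  * p[2] + m[13],
        m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14],
    };
}

// Append one sub-model's world-space positions and rebased indices, recording its index range.
SubGeometry AppendSubModel(const Matrix4& world,
                           const std::vector<std::array<float, 3>>& positions,
                           const std::vector<uint32_t>& indices,
                           std::vector<std::array<float, 3>>& vertexBuffer,
                           std::vector<uint32_t>& indexBuffer) {
    SubGeometry range{static_cast<uint32_t>(indexBuffer.size()), 0};
    uint32_t baseVertex = static_cast<uint32_t>(vertexBuffer.size());
    for (const auto& p : positions) vertexBuffer.push_back(ToWorld(world, p));
    for (uint32_t idx : indices) indexBuffer.push_back(baseVertex + idx);
    range.endIndex = static_cast<uint32_t>(indexBuffer.size());
    return range;
}
```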
In a preferred embodiment of the present invention, the model batching sub-module includes:
an adjacent model determining unit, configured to determine, among all the sub-models, one or more sub-models which belong to the same geometric data structure and are adjacent in position, according to the vertex buffers and index buffers stored in the geometric data structure;
a model merging unit, configured to determine, among the one or more sub-models, target sub-models which meet the batching condition with the preset batching logic, and to merge the target sub-models to obtain the batching result (a sketch follows the list below); wherein the batching condition comprises at least one of:
The materials are the same;
The rendering priorities are the same;
The parameters of the batch flag bits are the same.
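A possible implementation of the merge step is sketched below. It interprets "adjacent in position" as contiguity in the shared index buffer and, for simplicity, requires all three batching conditions to hold at once; both choices are assumptions made for illustration, and the names SubDraw, CanBatch and MergeAdjacent are hypothetical.

```cpp
// Sketch of the merge step: walk the sorted sub-models of one geometry data structure
// and fuse runs of adjacent, compatible entries into a single draw range.
#include <cstdint>
#include <vector>

struct SubDraw {
    int materialId = 0;      // identifies the material
    int renderPriority = 0;  // rendering priority
    int batchFlag = 0;       // batch flag bit parameter
    uint32_t startIndex = 0; // range occupied in the shared index buffer
    uint32_t endIndex = 0;
};

static bool CanBatch(const SubDraw& a, const SubDraw& b) {
    return a.materialId == b.materialId &&
           a.renderPriority == b.renderPriority &&
           a.batchFlag == b.batchFlag;
}

// Merge adjacent, compatible sub-models into larger draw ranges.
std::vector<SubDraw> MergeAdjacent(const std::vector<SubDraw>& sorted) {
    std::vector<SubDraw> merged;
    for (const SubDraw& sub : sorted) {
        if (!merged.empty() && CanBatch(merged.back(), sub) &&
            merged.back().endIndex == sub.startIndex) { // adjacent in the index buffer
            merged.back().endIndex = sub.endIndex;      // extend the existing draw
        } else {
            merged.push_back(sub);                      // start a new draw
        }
    }
    return merged;
}
```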
In a preferred embodiment of the present invention, the attributes of the model to be processed include vertex types and shaders used, and the model grouping module 402 includes:
a currently operated model determining sub-module, configured to sequentially determine a currently operated model from the batch processing queue;
A target model grouping determination submodule, configured to determine a target model grouping matched with the currently operated model according to the vertex type of the currently operated model and the shader used;
and the model assignment sub-module is used for assigning the currently operated model to the target model group.
In a preferred embodiment of the present invention, the object model grouping determination submodule includes:
a first sub-model determination unit configured to determine a first sub-model of at least one sub-model of the currently operated model;
And the target model grouping determining unit is used for determining a target model grouping matched with the current operation model according to the vertex type of the first sub-model and the used shader.
In a preferred embodiment of the present invention, the vertex data comprises at least one of: vertex position, color information, normal direction, and texture coordinates.
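As a small illustration, one possible vertex layout carrying these fields could look as follows; the packing and field order are assumptions, not prescribed by the embodiment.

```cpp
// Hypothetical interleaved vertex layout for the fields listed above.
struct BatchedVertex {
    float position[3]; // vertex position
    float color[4];    // color information (RGBA)
    float normal[3];   // normal direction
    float uv[2];       // texture coordinates
};
```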
In a preferred embodiment of the present invention, the model sorting module 403 includes:
a parameter obtaining sub-module, configured to obtain main texture parameters and material parameters describing the textures and materials of all sub-models in the model group;
a hash value calculation sub-module, configured to calculate a texture hash value from the main texture parameters and a parameter hash value from the material parameters;
and a model sorting sub-module, configured to sort all the sub-models according to the texture hash value and the parameter hash value, so as to obtain the sorting result.
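The two hash values used as sort keys could be derived as in the sketch below, where FNV-1a over serialized parameter strings is used purely as an example hash; the serialization format and all names are assumptions of this sketch.

```cpp
// Sketch of the sort keys: hash the main texture parameters and the material parameters
// separately, then order sub-models by (texture hash, material hash).
#include <algorithm>
#include <cstdint>
#include <string>
#include <tuple>
#include <vector>

uint64_t Fnv1a(const std::string& data) {
    uint64_t hash = 14695981039346656037ull; // FNV-1a 64-bit offset basis
    for (unsigned char c : data) {
        hash ^= c;
        hash *= 1099511628211ull;             // FNV-1a 64-bit prime
    }
    return hash;
}

struct SortableSubModel {
    std::string mainTextureParams; // serialized main texture parameters
    std::string materialParams;    // serialized material parameters
    uint64_t textureHash = 0;
    uint64_t materialHash = 0;
};

void SortByTextureAndMaterial(std::vector<SortableSubModel>& subModels) {
    for (auto& s : subModels) {
        s.textureHash = Fnv1a(s.mainTextureParams);
        s.materialHash = Fnv1a(s.materialParams);
    }
    std::sort(subModels.begin(), subModels.end(),
              [](const SortableSubModel& a, const SortableSubModel& b) {
                  return std::tie(a.textureHash, a.materialHash) <
                         std::tie(b.textureHash, b.materialHash);
              });
}
```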
In a preferred embodiment of the present invention, the apparatus further includes:
The model loading module is used for loading a model in the game scene; the model is provided with a batch flag bit parameter corresponding to the original batch logic;
The effect judging module is used for judging whether the model in the game scene starts the fading effect or not;
And the first batch processing module is used for adjusting the batch flag bit parameter to be a parameter corresponding to the preset batch logic if the model in the game scene does not start the fading effect, and starting the preset batch logic to perform batch processing on the model in the game scene according to the adjusted batch flag bit parameter.
In a preferred embodiment of the present invention, the apparatus further includes:
The second batch processing module is used for carrying out batch processing on the models in the game scene according to the original batch logic if the models in the game scene start the fading effect, and rendering the models in the game scene according to the batch result;
And the third batch processing module is used for clearing the rendering state of the model in the game scene after the fading effect of the model in the game scene is finished, adjusting the batch flag bit parameter to a parameter corresponding to the preset batch logic, and starting the preset batch logic to carry out batch processing on the model in the game scene according to the adjusted batch flag bit parameter.
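The interplay between the fading effect and the batch flag bit parameter can be sketched as follows; BatchMode, SceneModel and SelectBatchLogic are assumed names, and the dirty flag standing in for clearing the rendering state is an illustrative simplification.

```cpp
// Sketch of the flag handling around the fading effect: the model keeps the original
// batching logic while fading is active and is switched to the preset batching logic
// otherwise (after loading, or once the fading effect has finished).
enum class BatchMode { Original, Preset };

struct SceneModel {
    bool fadingActive = false;                  // whether the fading effect is running
    BatchMode batchFlag = BatchMode::Original;  // loaded with the original logic's flag
    bool renderStateDirty = false;              // marks cached rendering state for clearing
};

// Called after loading and again when a fading effect finishes.
void SelectBatchLogic(SceneModel& model, bool fadingJustFinished) {
    if (fadingJustFinished) {
        model.renderStateDirty = true;          // clear the cached rendering state first
        model.fadingActive = false;
    }
    if (!model.fadingActive) {
        model.batchFlag = BatchMode::Preset;    // batch with the preset batching logic
    } else {
        model.batchFlag = BatchMode::Original;  // keep the original logic while fading
    }
}
```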
In a preferred embodiment of the present invention, the apparatus further includes:
And the model clipping module is used for clipping the model in the game scene to obtain the model to be processed.
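The clipping step that produces the models to be processed could, for example, be a bounding-sphere frustum test as sketched below; the patent does not fix a particular clipping method, so the plane representation and the test are assumptions of this sketch.

```cpp
// Sketch of the clipping step: only models whose bounding sphere intersects the view
// frustum become "models to be processed".
#include <array>
#include <cstddef>
#include <vector>

struct Plane { float nx, ny, nz, d; };          // nx*x + ny*y + nz*z + d >= 0 is inside
struct BoundingSphere { float x, y, z, radius; };

bool SphereInFrustum(const BoundingSphere& s, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        float dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
        if (dist < -s.radius) return false;     // completely outside one plane
    }
    return true;
}

// Returns the indices of the scene models that survive clipping.
std::vector<int> ClipScene(const std::vector<BoundingSphere>& bounds,
                           const std::array<Plane, 6>& frustum) {
    std::vector<int> visible;
    for (std::size_t i = 0; i < bounds.size(); ++i)
        if (SphereInFrustum(bounds[i], frustum)) visible.push_back(static_cast<int>(i));
    return visible;
}
```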
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
The embodiment of the invention also provides an electronic device, as shown in fig. 5, including:
a processor 501 and a storage medium 502, the storage medium 502 storing machine-readable instructions executable by the processor 501; when the electronic device is running, the processor 501 executes the machine-readable instructions to perform the method according to any one of the embodiments of the present invention. The specific implementation and the technical effects are similar to those of the method embodiments and are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium, as shown in fig. 6, on which a computer program 601 is stored; when executed by a processor, the computer program 601 performs the method according to any one of the embodiments of the present invention. The specific implementation and the technical effects are similar to those of the method embodiments and are not repeated here.
In this specification, each embodiment is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the identical or similar parts of the embodiments may be referred to one another.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or terminal device that comprises the element.
The batch processing method and apparatus for a model provided by the present invention have been described in detail above, and specific examples have been applied to illustrate the principles and embodiments of the invention; the above examples are only intended to help understand the method and its core ideas. Meanwhile, since those skilled in the art may make changes to the specific embodiments and the application scope in accordance with the ideas of the present invention, the contents of this description should not be construed as limiting the present invention.
Claims (16)
1. A method for batch processing of a model, comprising:
adding the model to be processed into a batch processing queue; wherein the model to be processed comprises at least one sub-model;
according to the attributes of the model to be processed, distributing the models to be processed in the batch processing queue to different model groups;
sorting all the sub-models in the model group according to the textures and materials of the sub-models in the model group, respectively, to obtain a sorting result;
batching the sub-models in the model group with a preset batching logic according to the sorting result, to obtain a batching result;
wherein the batching the sub-models in the model group with the preset batching logic according to the sorting result to obtain the batching result comprises:
if the update bit of the model group is true, creating a geometric data structure for the model group, and storing vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result;
and batching the sub-models in the model group with the preset batching logic according to the vertex buffers and index buffers stored in the geometric data structure, to obtain the batching result.
2. The method of claim 1, further comprising, before the creating a geometric data structure for the model group if the update bit of the model group is true and storing vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result:
judging whether the vertex data stored for the sub-model in the model group needs to be updated;
and when the vertex data stored for the sub-model in the model group needs to be updated, updating the vertex data stored for the sub-model in the model group, and setting the update bit corresponding to the model group to be true.
3. The method of claim 2, wherein the judging whether the vertex data stored for the sub-model in the model group needs to be updated comprises:
Judging whether the sub-model is added into the model group for the first time;
If the sub-model is added into the model group for the first time, judging that the vertex data stored for the sub-model in the model group needs to be updated;
If the sub-model is not added into the model group for the first time, judging whether the world matrix of the sub-model is changed or not; and if the world matrix of the sub-model changes, judging that the vertex data stored for the sub-model in the model group needs to be updated.
4. The method of claim 1, wherein the storing vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result comprises:
transforming vertex data stored for the sub-model in the model group into world space to obtain target vertex data;
and storing vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result and the target vertex data.
5. The method of claim 1 or 4, wherein the geometric data structure comprises a sub-geometric data structure corresponding to each sub-model, the sub-geometric data structure being used to record a start index and an end index of the sub-model in the geometric data structure.
6. The method of claim 1, wherein the batching the sub-models in the model group with the preset batching logic according to the vertex buffers and index buffers stored in the geometric data structure to obtain the batching result comprises:
determining, among all the sub-models, one or more sub-models which belong to the same geometric data structure and are adjacent in position, according to the vertex buffers and index buffers stored in the geometric data structure;
determining, among the one or more sub-models, target sub-models which meet the batching condition with the preset batching logic, and merging the target sub-models to obtain the batching result; wherein the batching condition comprises at least one of:
The materials are the same;
The rendering priorities are the same;
The parameters of the batch flag bits are the same.
7. The method of claim 1, wherein the attributes of the model to be processed include vertex types and shaders used, and wherein the distributing the models to be processed in the batch processing queue to different model groups according to the attributes of the model to be processed comprises:
sequentially determining a currently operated model from the batch processing queue;
determining a target model group matched with the currently operated model according to the vertex type of the currently operated model and the shader used;
and distributing the currently operated model into the target model group.
8. The method of claim 7, wherein the determining a target model group matching the currently operated model based on the vertex type and the shader used for the currently operated model comprises:
determining a first sub-model of at least one sub-model of the currently operating model;
A target model group matching the currently operating model is determined based on the vertex type of the first sub-model and the shader used.
9. The method of claim 2, wherein the vertex data comprises at least one of: vertex position, color information, normal direction, and texture coordinates.
10. The method of claim 1, wherein the sorting all the sub-models according to the textures and materials of the sub-models in the model group, respectively, to obtain a sorting result comprises:
Acquiring main texture parameters and material parameters for describing textures and materials of all sub-models in the model group;
Calculating to obtain a texture hash value according to the main texture parameter, and calculating to obtain a parameter hash value according to the material parameter;
and sorting all the sub-models according to the texture hash value and the parameter hash value to obtain the sorting result.
11. The method of claim 1, further comprising, prior to the step of adding the model to be processed to the batch processing queue:
Loading a model in a game scene; the model is provided with a batch flag bit parameter corresponding to the original batch logic;
judging whether the model in the game scene starts the fading effect;
And if the model in the game scene does not start the fading effect, adjusting the batch flag bit parameter to a parameter corresponding to the preset batch logic, and starting the preset batch logic to perform batch processing on the model in the game scene according to the adjusted batch flag bit parameter.
12. The method as recited in claim 11, further comprising:
if the model in the game scene starts the fading effect, carrying out batch processing on the model in the game scene according to the original batch logic, and rendering the model in the game scene according to a batch result;
after the fading effect of the model in the game scene is finished, clearing rendering data of the model in the game scene, adjusting the batch flag bit parameter to be a parameter corresponding to the preset batch logic, and starting the preset batch logic to carry out batch processing on the model in the game scene according to the adjusted batch flag bit parameter.
13. The method of claim 1, further comprising, prior to the step of adding the model to be processed to a processing queue of a preset batching logic:
and cutting the model in the game scene to obtain the model to be processed.
14. A batch processing apparatus for a model, comprising:
The model adding module is used for adding the model to be processed into the batch processing queue; wherein the model to be processed comprises at least one sub-model;
the model grouping module is used for distributing the models to be processed in the batch processing queue to different model groups according to the attribute of the models to be processed;
the model sorting module is used for sorting all the sub-models in the model group according to the textures and materials of the sub-models in the model group, respectively, to obtain a sorting result;
the model batching module is used for batching the sub-models in the model group with a preset batching logic according to the sorting result, so as to obtain a batching result;
the model batching module comprises:
A geometric data structure storage sub-module, configured to create a geometric data structure for the model group if the update bit of the model group is true, and store vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result;
and the model batching sub-module is used for batching the sub-models in the model group with the preset batching logic according to the vertex buffers and index buffers stored in the geometric data structure, so as to obtain the batching result.
15. An electronic device, comprising:
A processor and a storage medium storing machine-readable instructions executable by the processor, the processor executing the machine-readable instructions when the electronic device is running to perform the method of any one of claims 1-13.
16. A computer readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, performs the method according to any of claims 1-13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110749147.0A CN113426130B (en) | 2021-07-01 | 2021-07-01 | Batch processing method and device for model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113426130A CN113426130A (en) | 2021-09-24 |
CN113426130B true CN113426130B (en) | 2024-05-28 |
Family
ID=77758697
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110749147.0A Active CN113426130B (en) | 2021-07-01 | 2021-07-01 | Batch processing method and device for model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113426130B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114816629B (en) * | 2022-04-15 | 2024-03-22 | NetEase (Hangzhou) Network Co., Ltd. | Method and device for drawing display object, storage medium and electronic device |
CN116977537A (en) * | 2022-04-22 | 2023-10-31 | Beijing Zitiao Network Technology Co., Ltd. | Batch rendering method, device, equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109816763A (en) * | 2018-12-24 | 2019-05-28 | Suzhou Snail Digital Technology Co., Ltd. | A graphics rendering method |
CN109840931A (en) * | 2019-01-21 | 2019-06-04 | NetEase (Hangzhou) Network Co., Ltd. | Batch rendering method, device, system and storage medium for skeletal animation |
CN109978981A (en) * | 2019-03-15 | 2019-07-05 | Glodon Co., Ltd. | A batch rendering method for improving building model display efficiency |
CN110570507A (en) * | 2019-09-11 | 2019-12-13 | Zhuhai Kingsoft Online Game Technology Co., Ltd. | Image rendering method and device |
CN110738720A (en) * | 2019-10-08 | 2020-01-31 | Tencent Technology (Shenzhen) Co., Ltd. | Special effect rendering method and device, terminal and storage medium |
CN111145329A (en) * | 2019-12-25 | 2020-05-12 | Beijing Pixel Software Technology Co., Ltd. | Model rendering method and system and electronic device |
CN112057868A (en) * | 2020-09-17 | 2020-12-11 | NetEase (Hangzhou) Network Co., Ltd. | Game model batch processing method and device and electronic equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108339270B (en) * | 2018-02-09 | 2019-03-08 | NetEase (Hangzhou) Network Co., Ltd. | Processing method, rendering method and device for static components in a game scene |
Also Published As
Publication number | Publication date |
---|---|
CN113426130A (en) | 2021-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113426130B (en) | Batch processing method and device for model | |
CN109840931B (en) | Batch rendering method, device and system for skeletal animation and storage medium | |
US9424617B2 (en) | Graphics command generation device and graphics command generation method | |
CN110990516B (en) | Map data processing method, device and server | |
US20080074430A1 (en) | Graphics processing unit with unified vertex cache and shader register file | |
US7733341B2 (en) | Three dimensional image processing | |
US6046747A (en) | Graphics application programming interface avoiding repetitive transfer of texture mapping data | |
US7952588B2 (en) | Graphics processing unit with extended vertex cache | |
GB2542131B (en) | Graphics processing method and system for processing sub-primitives | |
CN113791914B (en) | Object processing method, device, computer equipment, storage medium and product | |
CN101124613A (en) | Increased scalability in the fragment shading pipeline | |
US5325485A (en) | Method and apparatus for displaying primitives processed by a parallel processor system in a sequential order | |
CN113256782B (en) | Three-dimensional model generation method and device, storage medium and electronic equipment | |
US20230298237A1 (en) | Data processing method, apparatus, and device and storage medium | |
CN112837416A (en) | Triangulation-based polygon rendering method and device and storage medium | |
CN114494646A (en) | Scene rendering method and device and electronic equipment | |
CN111476858B (en) | WebGL-based 2d engine rendering method, device and equipment | |
CN115049531B (en) | Image rendering method and device, graphic processing equipment and storage medium | |
JPH0814842B2 (en) | Image processing method and apparatus | |
US20200250790A1 (en) | Graphics processing method and related apparatus, and device | |
CN114519762A (en) | Model normal processing method and device, storage medium and electronic equipment | |
CN118052923B (en) | Object rendering method, device and storage medium | |
CN117808949B (en) | Scene rendering method | |
CN117788674A (en) | Texture map merging method and device, electronic equipment and storage medium | |
CN118736090A (en) | Graphic rendering method and system based on complex scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||