CN113426130A - Batch processing method and device for models

Info

Publication number: CN113426130A
Authority: CN (China)
Prior art keywords: model, batching, models, sub, vertex
Legal status: Granted
Application number: CN202110749147.0A
Other languages: Chinese (zh)
Other versions: CN113426130B
Inventor: 邹星明
Current Assignee: Netease Hangzhou Network Co Ltd
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202110749147.0A
Publication of CN113426130A
Application granted
Publication of CN113426130B
Current legal status: Active

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures
    • G06T 15/04 - Texture mapping
    • G06T 15/50 - Lighting effects
    • G06T 15/60 - Shadow generation
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides a batching method and a batching apparatus for models. The method comprises the following steps: adding a model to be processed to a batching queue, the model to be processed comprising at least one sub-model; assigning the models to be processed in the batching queue to different model groups according to their attributes; sorting all the sub-models in each model group according to the texture and material of the sub-models to obtain a sorting result; and batching the sub-models in the model group with a preset batching logic according to the sorting result to obtain a batching result. By adding the models to be processed to a batching queue for processing, the limit on the number of vertices is removed, the model sorting strategy is modified, transforming a model's world matrix after merging is supported, and hang-point (attachment) models and model special effects attached to a model are handled, so that the batches of models in the game scene are merged to the greatest extent.

Description

Batch processing method and device for models
Technical Field
The invention relates to the technical field of games, in particular to a model batch processing method and a model batch processing device.
Background
In computer graphics, a batch is, briefly, an operation in which the Central Processing Unit (CPU) submits the data required for rendering to the Graphics Processing Unit (GPU) and calls a graphics API (Application Programming Interface). Without any batching, every independent model in a game scene is rendered as its own batch. Because each batch involves relatively expensive operations such as data submission, Shader setting and render-state switching, too many batches in one frame degrade performance, so batching is an important means of improving game performance.
In the prior art, there are three main batching schemes: static batching, dynamic batching, and Instancing (instanced) batching. Static batching takes the data of objects that use the same material and the same texture map and are marked as Static (supporting static batching), merges that data through computation into a combined buffer and submits it to the GPU, so that they can be drawn in one batch. Dynamic batching dynamically merges, in real time, combinations of models that meet its requirements. Instancing batching submits the model's vertex data once together with a different world matrix for each instance, for models that share the same material ball and the same mesh; the GPU then computes the different model positions from the different world matrices.
However, static batching is limited by a vertex-count cap, and because of the order in which batches are processed, different models can rarely be merged; moreover, transforming a model's world matrix after batching is not supported. Dynamic batching must be recomputed every frame, so the number of model vertices is strictly limited for efficiency and the per-frame computation cost is too high. The Instancing scheme only supports merging identical models, which is unsuitable for games containing a large number of different models. For games whose models are varied in type but use almost the same materials and textures, none of the above schemes is suitable for batching the models.
Disclosure of Invention
In view of the above problem that existing batching schemes are unsuitable for game-scene models that are varied in type but almost identical in material and texture, embodiments of the present invention provide a batching method for models and a corresponding batching apparatus that overcome, or at least partially solve, the above problems.
The embodiment of the invention discloses a batch processing method of a model, which comprises the following steps:
adding a model to be processed into a batch processing queue; wherein the model to be processed comprises at least one sub-model;
distributing the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed;
sorting all the submodels in the model group according to the texture and the material of the submodels in the model group respectively to obtain a sorting result;
and according to the sorting result, batching the sub-models in the model group with a preset batching logic so as to obtain a batching result.
Optionally, the batching sub-models in the model group by a preset batching logic according to the sorting result to obtain a batching result includes:
judging whether the vertex data stored for the submodels in the model group needs to be updated or not;
when the vertex data stored aiming at the submodel in the model group needs to be updated, updating the vertex data stored aiming at the submodel in the model group, and setting the updating bit corresponding to the model group as true;
if the update bit of the model group is true, creating a geometric data structure for the model group, and storing vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result;
and carrying out batching processing on the sub models in the model grouping by using a preset batching logic according to the vertex buffer and the index buffer stored in the geometric data structure to obtain a batching result.
Optionally, the determining whether the vertex data stored for the sub-model in the model group needs to be updated includes:
judging whether the submodel is added into the model group for the first time;
if the submodel is added into the model group for the first time, judging that the vertex data stored aiming at the submodel in the model group needs to be updated;
if the submodel is not added into the model group for the first time, judging whether the world matrix of the submodel changes; and if the world matrix of the submodel changes, judging that the vertex data stored aiming at the submodel in the model group needs to be updated.
Optionally, the storing vertex buffers and index buffers of all submodels in the geometry data structure according to the sorting result includes:
converting the vertex data stored in the model group aiming at the submodel into a world space to obtain target vertex data;
and storing vertex buffers and index buffers of all sub-models in the geometric data structure according to the sorting result and the target vertex data.
Optionally, the geometry data structure includes a sub-geometry data structure corresponding to each sub-model, and the sub-geometry data structure is used for recording a start index and an end index of the sub-model in the geometry data structure.
Optionally, the batching sub-models in the model grouping according to the vertex buffer and the index buffer stored in the geometric data structure with a preset batching logic to obtain a batching result includes:
determining one or more submodels which belong to the same geometric data structure and are adjacent in position in all the submodels according to the vertex buffer and the index buffer stored in the geometric data structure;
determining a target sub-model which meets the batching condition in the one or more sub-models by preset batching logic, and merging the target sub-models to obtain a batching result; wherein the batching condition comprises at least one of the following:
the materials are the same;
rendering priorities are the same;
the batching flag bit parameters are the same.
Optionally, the attribute of the to-be-processed model includes a vertex type and a shader used, and the allocating, according to the attribute of the to-be-processed model, the to-be-processed model in the batching processing queue to different model groups includes:
sequentially determining a model of the current operation from the batch processing queue;
determining a target model group matched with the currently operated model according to the vertex type of the currently operated model and the used shader;
assigning the currently operating model to the target model group.
Optionally, the determining, according to the vertex type of the currently operated model and the shader used, a target model group matching the currently operated model includes:
determining a first sub-model of at least one sub-model of the currently operating model;
and determining a target model group matched with the currently operated model according to the vertex type of the first sub model and the used shader.
Optionally, the vertex data comprises at least one of: vertex position, color information, normal direction, and texture coordinates.
Optionally, the sorting all the submodels according to textures and materials of the submodels in the model group respectively to obtain a sorting result includes:
acquiring main texture parameters and material parameters for describing textures and materials of all sub models in the model group;
calculating to obtain texture hash values according to the main texture parameters, and calculating to obtain parameter hash values according to the material parameters;
and sorting all the sub-models according to the texture hash value and the parameter hash value to obtain a sorting result.
Optionally, before the step of adding the model to be processed to the batch processing queue, the method further includes:
loading a model in a game scene; wherein the model has a batching flag bit parameter corresponding to the original batching logic;
judging whether a model in the game scene starts a fading effect or not;
if the model in the game scene does not start the fade-out effect, adjusting the batching flag bit parameter to a parameter corresponding to the preset batching logic, and starting the preset batching logic according to the adjusted batching flag bit parameter to carry out batching processing on the model in the game scene.
Optionally, the method further comprises:
if the model in the game scene starts a fading effect, carrying out batch combination processing on the model in the game scene according to the original batch combination logic, and rendering the model in the game scene according to the batch combination result;
and after the fading effect of the model in the game scene is finished, clearing rendering data of the model in the game scene, adjusting the batching flag bit parameter to a parameter corresponding to the preset batching logic, and starting the preset batching logic according to the adjusted batching flag bit parameter to carry out batching processing on the model in the game scene.
Optionally, before the step of adding the model to be processed to the batch processing queue, the method further includes:
and cutting the model in the game scene to obtain the model to be processed.
The embodiment of the invention also discloses a batch processing device of the model, which comprises:
the model adding module is used for adding the model to be processed into the batch processing queue; wherein the model to be processed comprises at least one sub-model;
the model grouping module is used for distributing the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed;
the model sorting module is used for sorting all the sub-models in the model group according to the texture and the material of each sub-model in the model group, so as to obtain a sorting result;
and the model batching module is used for batching the sub-models in the model group with a preset batching logic according to the sorting result, so as to obtain a batching result.
The embodiment of the invention also discloses an electronic device, which comprises:
a processor and a storage medium storing machine-readable instructions executable by the processor, the processor executing the machine-readable instructions to perform a method according to any one of the embodiments of the invention when the electronic device is operated.
The embodiment of the invention also discloses a computer readable storage medium, wherein a computer program is stored on the storage medium, and when the computer program is executed by a processor, the method of any one of the embodiments of the invention is executed.
The embodiment of the invention has the following advantages:
In the batching method for models provided by the embodiment of the present invention, a model to be processed is added to a batching queue, where the model to be processed includes at least one sub-model; the models to be processed in the batching queue are assigned to different model groups according to their attributes; all the sub-models in a model group are sorted according to the texture and material of each sub-model to obtain a sorting result; and the sub-models in the model group are batched according to the sorting result to obtain a batching result. Compared with static batching, the embodiment of the invention batches the sub-models after reordering all of them, which raises the success rate of model merging and supports moving the models. Compared with the Instancing batching scheme, the embodiment of the invention can merge different models by grouping them according to their attributes: models can be merged as long as their rendering state, material and other information are consistent, rather than only models with the same material ball and the same mesh. Moreover, by improving the static batching scheme in the game engine and adding the models to be processed to a batching queue for processing, the vertex-count limit is removed, the model sorting strategy is modified, transforming a model's world matrix after merging is supported, and hang-point models and model special effects attached to a model are handled, so that the batches of models in the game scene are merged to the greatest extent.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed to be used in the description of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a flowchart illustrating steps of a method for batch processing of a model according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating the steps of a batch process according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps of a method for batch processing of models according to an embodiment of the present invention;
FIG. 4 is a block diagram of a batch processing apparatus for a model according to an embodiment of the present invention;
FIG. 5 is a block diagram of an electronic device of the present invention;
fig. 6 is a block diagram of a storage medium of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the prior art, a game engine (such as the NeoX engine) mainly supports static batching, dynamic batching, and Instancing batching. In some games, such as space-themed SLG (simulation/strategy) games, each player has its own base. Because base buildings can be upgraded, they are split into many small models for flexibility, which makes the number of batches of a player base very high, and batching is needed to reduce the batch count. The base models are varied, but the materials and textures they use are almost the same, so the goal is to reduce their processing batches as much as possible while also handling the models and the special-effect models on their hang points. The batching schemes of the game engine all have problems that prevent them from meeting this requirement: static batching is limited by a vertex-count cap, and because of the order in which batches are processed, different models can rarely be merged; moreover, transforming a model's world matrix after batching is not supported. The base models differ from one another, so the Instancing scheme, which only supports merging identical models, is of little use. Dynamic batching must be recomputed every frame, so the number of model vertices is strictly limited for efficiency and the per-frame computation cost is too high.
The embodiment of the invention provides a scheme suitable for merging and batching models that are varied in type but use almost the same materials and textures in a game scene. The scheme adds a preset batching logic, MeshPacker, a self-defined logic system for handling merged models. When batching is required, the models to be processed are added to a batching queue, where each model to be processed includes at least one sub-model; the models to be processed in the batching queue are assigned to different model groups according to their attributes; all the sub-models in a model group are sorted according to their textures and materials to obtain a sorting result; and the sub-models in the model group are batched with the preset batching logic according to the sorting result to obtain a batching result. Compared with static batching, the embodiment of the invention batches the sub-models after reordering all of them, which raises the success rate of model merging and supports moving the models. Compared with the Instancing batching scheme, the embodiment of the invention can merge different models by grouping them according to their attributes: models can be merged as long as their rendering state, material and other information are consistent, rather than only models with the same material ball and the same mesh.
In addition, the invention can also reduce CSM (Cascaded Shadow Maps) shadow batches after merging. Although a group of model data may still be drawn in different batches because of slight differences in material parameters and the like, the process avoids render-state switching and still brings a certain efficiency improvement. In the most complex base-building scenes in the game, the number of batches after merging is effectively reduced, CPU usage stays roughly the same, GPU usage drops slightly, and the running efficiency of the game is noticeably improved.
Referring to fig. 1, a flowchart illustrating steps of a batch processing method for a model according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 101, adding a model to be processed into a batch processing queue; wherein the model to be processed comprises at least one sub-model;
The model to be processed may be a visual model to be rendered in a game scene; for example, it may be a building model of a base, a virtual character model, or the like. In the embodiment of the present invention, there may be a plurality of models to be processed, and each model to be processed may include at least one sub-model. For example, because the building model of a base can be upgraded, it is split into a plurality of small models, each of which is a sub-model, to ensure flexibility. In a specific implementation, in order to improve the efficiency of game rendering, all the sub-models included in the models to be processed need to be batched.
In the embodiment of the invention, a batching queue can be created in advance and the models to be processed are added to it, so that the models added to the queue can be reordered and then batched by the preset batching logic. The preset batching logic may be MeshPacker, a self-defined logic system for handling merged models. The preset batching logic can open up the vertex cap of the resulting model groups, raising the vertex count of a group to the maximum supported by the game engine, for example the maximum model vertex count UINT16_MAX supported by the NeoX engine, so that one merged model group can contain more small models.
In a preferred embodiment of the present invention, before the step 101, the following steps may be further included:
and cutting the model in the game scene to obtain the model to be processed.
Specifically, the models in the game scene can be culled against the view frustum of the main camera to obtain the models to be processed, that is, the models that need to be rendered in the current game scene.
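As a non-limiting illustration of this culling step, the following C++ sketch tests a model's bounding sphere against the six planes of the main camera's view frustum; the Vec3, Plane, BoundingSphere and Frustum types are hypothetical stand-ins and not the engine's actual API.

```cpp
#include <array>

// Hypothetical minimal types; the real engine's camera/frustum API will differ.
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };                // plane equation: dot(n, p) + d = 0, n points inward
struct BoundingSphere { Vec3 center; float radius; };
struct Frustum { std::array<Plane, 6> planes; };  // left, right, top, bottom, near, far

// Returns true if the sphere is at least partially inside the frustum, i.e. the model
// survives culling and should be added to the batching queue as a model to be processed.
bool IntersectsFrustum(const Frustum& f, const BoundingSphere& s) {
    for (const Plane& p : f.planes) {
        float dist = p.n.x * s.center.x + p.n.y * s.center.y + p.n.z * s.center.z + p.d;
        if (dist < -s.radius) {
            return false;  // completely outside this plane -> culled
        }
    }
    return true;
}
```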
102, distributing the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed;
The attributes of the model to be processed may be parameters that describe certain characteristics of the model. Specifically, the attributes include the model's vertex type and the type of Shader it uses. The vertex type may be a vertex format: if vertex computation requires lighting, the vertex format needs a normal attribute; if a textured three-dimensional object is to be drawn, the vertex format needs UV texture coordinates. Shaders are used to carry out image rendering, and the Shader types used may include vertex shaders, pixel shaders, and so on.
Models to be processed whose vertex types and/or Shader types differ are rendered in different ways and therefore cannot be merged into one batch. For this reason, in the embodiment of the invention, the models to be processed in the batching queue are assigned to different model groups according to their attributes.
Specifically, different model groups, i.e. MeshGroups, are created for different vertex types and/or Shader types. Models to be processed are read from the batching queue in turn; for each one, its vertex type and the Shader type it uses are obtained, the matching MeshGroup is determined, and the model is assigned to that MeshGroup. When a model to be processed is assigned to a MeshGroup, all the sub-models it contains are assigned to the same MeshGroup. Models to be processed continue to be read from the batching queue until every model in the queue has been traversed.
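The following C++ sketch illustrates, under assumed types (SubModel, Model and MeshGroup are illustrative, not the engine's classes), how models pulled from the batching queue might be assigned to MeshGroups keyed by vertex format and shader; as described later, the first sub-model's vertex type and shader may be used to pick the group for the whole model.

```cpp
#include <cstdint>
#include <deque>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-ins for the engine's model types (assumptions, not real classes).
struct SubModel {
    std::uint32_t vertexFormat;  // e.g. bit flags: position | normal | uv
    std::string shaderName;      // shader used by this sub-model
};

struct Model {
    std::vector<SubModel> subModels;  // a model to be processed has at least one sub-model
};

struct MeshGroup {
    std::vector<SubModel> subModels;  // all sub-models assigned to this group
};

using GroupKey = std::pair<std::uint32_t, std::string>;  // (vertex format, shader)

// Drain the batching queue and assign each model (with all of its sub-models)
// to the MeshGroup that matches its vertex format and shader.
void AssignToGroups(std::deque<Model>& batchQueue, std::map<GroupKey, MeshGroup>& groups) {
    while (!batchQueue.empty()) {
        Model model = std::move(batchQueue.front());
        batchQueue.pop_front();
        if (model.subModels.empty()) continue;

        // As described later, the first sub-model's vertex format and shader
        // decide which MeshGroup the whole model joins.
        const SubModel& first = model.subModels.front();
        MeshGroup& group = groups[{first.vertexFormat, first.shaderName}];
        for (SubModel& sub : model.subModels) {
            group.subModels.push_back(std::move(sub));
        }
    }
}
```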
103, sequencing all submodels in the model group according to the texture and the material of the submodels in the model group to obtain a sequencing result;
When models are rendered or drawn, only models at adjacent positions in the sorted order are considered for merging into one batch. When the static batching scheme sorts the models in a MeshGroup by their characteristics, the models' memory addresses are among the criteria, so with high probability models that could be merged are not sorted into adjacent positions.
In the embodiment of the invention, in order to increase the probability that models are merged, the model sorting strategy is modified: by comparing the textures and materials of the sub-models, sub-models that can be merged are placed at adjacent positions purely according to the similarity of their textures and materials. Specifically, the texture and material of each sub-model may be compared so that all the sub-models in a model group are sorted by texture and material to obtain a sorting result, in which sub-models belonging to different models to be processed may end up at adjacent positions.
And 104, carrying out combined batch processing on the sub models in the model group by using a preset combined batch logic according to the sequencing result to obtain a combined batch result.
Specifically, after the sorting, the sorting result and vertex data of all submodels may be packaged to generate Render Node data, and the Render Node data is submitted to the rendering module, and the rendering module may determine, according to the Render Node data, one or more submodels that may be merged into one batch with a preset batching logic, so as to obtain a batching result.
In a specific implementation, the Render Node data is used to judge whether sub-models belong to the same MeshGroup; if they do, it is further judged whether they are at adjacent positions in the sorting result, and then whether the adjacent sub-models meet the batching conditions, for example whether they have the same material, the same rendering priority and the same batching flag bit parameters (the batching flag bit parameter indicates which batching logic a sub-model uses). Each sub-model is traversed in turn: if adjacent sub-models meet the batching conditions, they can be merged into one batch for rendering; if they do not, the next sub-model is examined, until all sub-models have been traversed.
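The following C++ sketch shows one possible form of this merge pass over the sorted sub-models; the SortedSubModel fields and the CanMerge test mirror the batching conditions described above (same group, same material, same rendering priority, same batching flag), but the names are assumptions.

```cpp
#include <cstddef>
#include <vector>

// Illustrative description of one sorted sub-model; field names are assumptions.
struct SortedSubModel {
    int groupId;         // which MeshGroup / shared Geometry it belongs to
    int materialId;      // material actually used
    int renderPriority;  // rendering priority
    int batchFlag;       // which batching logic the sub-model uses
};

struct Batch {
    std::vector<std::size_t> subModelIndices;  // indices into the sorted list
};

static bool CanMerge(const SortedSubModel& a, const SortedSubModel& b) {
    return a.groupId == b.groupId &&
           a.materialId == b.materialId &&
           a.renderPriority == b.renderPriority &&
           a.batchFlag == b.batchFlag;
}

// Walk the sorted sub-models once; adjacent sub-models that meet the batching
// conditions are merged into the same batch, otherwise a new batch is started.
std::vector<Batch> BuildBatches(const std::vector<SortedSubModel>& sorted) {
    std::vector<Batch> batches;
    for (std::size_t i = 0; i < sorted.size(); ++i) {
        if (batches.empty() || !CanMerge(sorted[batches.back().subModelIndices.back()], sorted[i])) {
            batches.push_back(Batch{});  // start a new batch
        }
        batches.back().subModelIndices.push_back(i);
    }
    return batches;
}
```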
In a preferred embodiment of the present invention, as shown in fig. 2, the step 104 may specifically include the following sub-steps:
a substep S11, determining whether the vertex data stored for the submodel in the model grouping needs to be updated;
specifically, after the model to be processed is assigned to a model group MeshGroup, vertex data of all sub-models belonging to the model to be processed may be stored in the MeshGroup, where the vertex data may include data such as a vertex position, color information, a normal direction, and texture coordinates.
In order to avoid consumption of game performance caused by repeated batching, in the embodiment of the invention, the subsequent batching process can be carried out when the vertex data stored aiming at the submodel is updated, and if the vertex data stored aiming at the submodel is not updated, the model can be directly rendered according to the batching result obtained by previous calculation.
In a preferred embodiment of the present invention, the sub-step S11 may specifically include the following sub-steps:
judging whether the submodel is added into the model group for the first time; if the submodel is added into the model group for the first time, judging that the vertex data stored aiming at the submodel in the model group needs to be updated; if the submodel is not added into the model group for the first time, judging whether the world matrix of the submodel changes; and if the world matrix of the submodel changes, judging that the vertex data stored aiming at the submodel in the model group needs to be updated.
Specifically, whether the sub-model is added to the Meshgroup for the first time is judged, and if the sub-model is added to the Meshgroup for the first time, it is judged that vertex data stored in the Meshgroup for the sub-model needs to be updated.
If the sub-model is not being added to the MeshGroup for the first time, it is further judged whether the sub-model's world matrix has changed; if it has, it is judged that the vertex data stored for the sub-model in the MeshGroup needs to be updated. For example, a model may be displaced, rotated or scaled by a model special effect, or its world matrix may be adjusted manually; when the world matrix of a model changes in this way, the vertex data stored for the model in the MeshGroup needs to be updated.
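A minimal C++ sketch of this update check, assuming a hypothetical GroupRecord that caches each sub-model's last world matrix and carries the group's update bit:

```cpp
#include <array>
#include <unordered_map>

using Matrix4 = std::array<float, 16>;  // assumed 4x4 world matrix representation

// Hypothetical per-MeshGroup bookkeeping: a cache of each sub-model's last world
// matrix plus the group's "update bit".
struct GroupRecord {
    std::unordered_map<int, Matrix4> lastWorldMatrix;  // sub-model id -> cached world matrix
    bool dirty = false;                                // the group's update bit
};

// A sub-model's vertex data needs updating when it is added to the group for the
// first time, or when its world matrix has changed since it was last stored.
bool NeedsVertexUpdate(GroupRecord& group, int subModelId, const Matrix4& worldMatrix) {
    auto it = group.lastWorldMatrix.find(subModelId);
    bool needsUpdate = (it == group.lastWorldMatrix.end())  // first time in this group
                    || (it->second != worldMatrix);         // world matrix changed
    if (needsUpdate) {
        group.lastWorldMatrix[subModelId] = worldMatrix;    // remember the new matrix
        group.dirty = true;                                 // set the update bit to true
    }
    return needsUpdate;
}
```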
Substep S12, when the vertex data stored for the submodel in the model grouping needs to be updated, updating the vertex data stored for the submodel in the model grouping, and setting the update bit corresponding to the model grouping to true;
when the vertex data stored in the model group for the sub-model needs to be updated, the vertex data stored in the model group for the sub-model can be updated, and the update bit corresponding to the model group is set to be true. The updating bit is a parameter used for describing whether the submodels in the model grouping change or not, when the updating bit is true, the submodels in the model grouping change and the batching calculation needs to be carried out again, and when the updating bit is false, the submodels in the model grouping do not change and the batching calculation does not need to be carried out again.
A substep S13, if the update bit of the model packet is true, creating a geometric data structure for the model packet, and storing vertex buffers and index buffers of all submodels in the geometric data structure according to the sorting result;
Specifically, it may be determined whether the update bit corresponding to the MeshGroup is true. When it is true, a geometry data structure Geometry may be created for the MeshGroup. The Geometry is shared: all sub-models in one MeshGroup reference the same Geometry. In the embodiment of the invention, each sub-model in the Geometry has a corresponding sub-geometry data structure Sub_Geometry, which records the sub-model's start index and end index inside the Geometry. Specifically, a Sub_Geometry may be created for each sub-model and set to reference the Geometry of the MeshGroup the sub-model belongs to. In the subsequent batching calculation, whether sub-models belong to the same MeshGroup can then be judged from the Geometry referenced by each sub-model's Sub_Geometry: when the referenced Geometries are the same, the sub-models belong to the same Geometry; when they differ, the sub-models do not belong to the same Geometry.
In the embodiment of the invention, after the Geometry has been created for the MeshGroup, the vertex buffers and index buffers of all the sub-models may be stored in the Geometry according to the sorting result of all the sub-models. The vertex buffer stores the vertex data, and the index buffer stores, for each vertex, its index within the geometry data structure.
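The following C++ sketch shows one possible layout of the shared Geometry and the per-sub-model Sub_Geometry described above; the struct names and fields are illustrative assumptions, and the index range is recorded while each sub-model's data is appended in sorted order.

```cpp
#include <cstdint>
#include <vector>

// Illustrative vertex layout; real formats depend on the MeshGroup's vertex type.
struct Vertex {
    float position[3];
    float normal[3];
    float uv[2];
    std::uint32_t color;
};

// The shared Geometry created for one MeshGroup.
struct Geometry {
    std::vector<Vertex>        vertexBuffer;  // vertices of every sub-model, in sorted order
    std::vector<std::uint32_t> indexBuffer;   // indices referencing vertexBuffer
};

// The per-sub-model Sub_Geometry: all sub-models of one MeshGroup reference the same
// Geometry, and each records its start and end index inside it.
struct SubGeometry {
    Geometry*     geometry = nullptr;
    std::uint32_t indexStart = 0;  // first index of this sub-model in the index buffer
    std::uint32_t indexEnd = 0;    // one past the last index of this sub-model
};

// Append one sub-model's data in sorted order while recording its index range.
SubGeometry AppendSubModel(Geometry& geo,
                           const std::vector<Vertex>& vertices,
                           const std::vector<std::uint32_t>& indices) {
    SubGeometry sub;
    sub.geometry = &geo;
    sub.indexStart = static_cast<std::uint32_t>(geo.indexBuffer.size());

    const std::uint32_t baseVertex = static_cast<std::uint32_t>(geo.vertexBuffer.size());
    geo.vertexBuffer.insert(geo.vertexBuffer.end(), vertices.begin(), vertices.end());
    for (std::uint32_t idx : indices) {
        geo.indexBuffer.push_back(baseVertex + idx);  // re-base indices into the shared buffer
    }
    sub.indexEnd = static_cast<std::uint32_t>(geo.indexBuffer.size());
    return sub;
}
```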
In a preferred embodiment of the present invention, the sub-step S13 may specifically include the following sub-steps:
converting the vertex data stored in the model group aiming at the submodel into a world space to obtain target vertex data; and storing vertex buffer and index buffer of all sub models in the geometric data structure according to the sequencing result and the target vertex data.
In the embodiment of the present invention, vertex data stored for the sub-models in the model group may be transformed into the world space to obtain target vertex data, and then vertex buffers and index buffers of all the sub-models may be stored in the geometric data structure according to the sorting result and the target vertex data.
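A small C++ sketch of this transformation step, assuming a column-major 4x4 world matrix and a simple position-only vertex stream (both assumptions made for illustration):

```cpp
#include <array>
#include <vector>

using Matrix4 = std::array<float, 16>;  // assumed column-major 4x4 world matrix
struct Float3 { float x, y, z; };

// Transform one local-space point into world space (w assumed to be 1).
Float3 TransformPoint(const Matrix4& m, const Float3& p) {
    return {
        m[0] * p.x + m[4] * p.y + m[8]  * p.z + m[12],
        m[1] * p.x + m[5] * p.y + m[9]  * p.z + m[13],
        m[2] * p.x + m[6] * p.y + m[10] * p.z + m[14],
    };
}

// Bake the sub-model's world matrix into its vertex positions so that the target
// vertex data stored in the shared geometry already lives in world space.
void BakeToWorldSpace(std::vector<Float3>& positions, const Matrix4& world) {
    for (Float3& p : positions) {
        p = TransformPoint(world, p);
    }
}
```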
And a substep S14, carrying out batching processing on the sub models in the model grouping by using preset batching logic according to the vertex buffer and the index buffer stored in the geometric data structure to obtain a batching result.
Specifically, the preset batching logic may, according to the vertex buffer and the index buffer, judge whether sub-models belong to the same MeshGroup, whether they are at adjacent positions, and whether adjacent sub-models have the same material, the same rendering priority, the same batching flag bit parameters, and so on, thereby determining the one or more sub-models that need to be merged into one batch and obtaining the batching result.
In a preferred embodiment of the present invention, the sub-step S14 may specifically include the following sub-steps:
determining one or more submodels which belong to the same geometric data structure and are adjacent in position in all the submodels according to the vertex buffer and the index buffer stored in the geometric data structure; and determining a target sub-model which meets the batching condition in the one or more sub-models by using a preset batching logic, and merging the target sub-models to obtain a batching result.
Specifically, one or more sub-models that belong to the same geometry data structure and are adjacent in position may be determined from all the sub-models according to the vertex buffer and the index buffer stored in the geometry data structure; the preset batching logic then determines which of those sub-models meet the batching conditions as target sub-models, and the target sub-models are merged to obtain the batching result.
In an embodiment of the present invention, the batching conditions comprise at least one of: the materials are the same; the rendering priorities are the same; the batching flag bit parameters are the same.
The rendering priority is a parameter for determining the rendering priority when the rendering module processes a plurality of models needing to be rendered, wherein the models with high rendering priority are rendered first, and the models with low rendering priority are rendered later.
The batching flag bit parameter Flag indicates which batching logic a model uses; when it is set so that the model enables the preset batching logic, it indicates that the model can be batched with the scheme of the embodiment of the invention.
In a preferred embodiment of the present invention, the attributes of the model to be processed include vertex types and shaders used, and the step 102 may specifically include the following sub-steps:
sequentially determining a model of the current operation from the batch processing queue; determining a target model group matched with the currently operated model according to the vertex type of the currently operated model and the used shader; assigning the currently operating model to the target model group.
Specifically, a plurality of corresponding Meshgroups are created for different vertex types and/or shaders used, a model to be processed is determined from a batch processing queue in sequence to serve as a currently operated model, then a target Meshgroup matched with the currently operated model is determined according to the vertex type and the shaders used of the currently operated model, and the currently operated model is distributed to the target Meshgroup. And then, continuously determining the next model to be processed from the batch processing queue as the model of the current operation until all the models to be processed in the batch processing queue are traversed and completed.
In a preferred embodiment of the present invention, the determining a target model group matching the currently operated model according to the vertex type and the shader used of the currently operated model includes:
determining a first sub-model of at least one sub-model of the currently operating model; and determining a target model group matched with the currently operated model according to the vertex type of the first sub model and the used shader.
Although a model to be processed may include multiple sub-models with different vertex types and/or different shaders, models of the same kind in a game (for example, the building models of a base) use material groups that fall into only a few broad categories, so determining only the vertex type and shader of the first sub-model is enough to place models that can very likely be merged into the same MeshGroup.
Specifically, the first sub-model among the at least one sub-model of the currently operated model is determined, and a target MeshGroup matching the currently operated model is determined according to the vertex type and the Shader used by that first sub-model. The first sub-model may be chosen according to a preset rule, for example at random, which is not limited in this embodiment of the invention.
In a preferred embodiment of the present invention, the step 103 may specifically include the following sub-steps:
acquiring main texture parameters and material parameters for describing textures and materials of all sub models in the model group; calculating to obtain texture hash values according to the main texture parameters, and calculating to obtain parameter hash values according to the material parameters; and sequencing all the sub-models according to the texture hash value and the parameter hash value to obtain a sequencing result.
The main texture parameter may refer to a parameter for describing a texture mainly used by the sub-model, and the material parameter may refer to a parameter for describing a material used by the sub-model.
In the embodiment of the invention, the main texture parameter and the material parameter of every sub-model in the model group are obtained; a texture hash value is computed from the main texture parameter and a parameter hash value is computed from the material parameter, and all the sub-models are sorted according to the texture hash value and the parameter hash value to obtain the sorting result. Specifically, sub-models with the same texture hash value and parameter hash value can be arranged at adjacent positions, so that sub-models that can be batched are sorted next to each other purely according to the similarity of their main texture and material parameters, which makes them easy to batch.
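The following C++ sketch illustrates this hash-based sorting under assumed inputs (the main texture parameter and material parameters are represented here as strings, and std::hash stands in for whatever hash the engine actually uses):

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Illustrative per-sub-model description; the parameters are modeled as strings here.
struct SubModelDesc {
    std::string mainTexture;     // main texture parameter, e.g. a texture path
    std::string materialParams;  // serialized material parameters
    std::size_t textureHash = 0;
    std::size_t paramHash = 0;
};

// Compute a texture hash from the main texture parameter and a parameter hash from
// the material parameters, then sort by the pair so that sub-models with identical
// textures and materials end up at adjacent positions.
void SortByTextureAndMaterial(std::vector<SubModelDesc>& subModels) {
    std::hash<std::string> hasher;
    for (SubModelDesc& s : subModels) {
        s.textureHash = hasher(s.mainTexture);
        s.paramHash = hasher(s.materialParams);
    }
    std::sort(subModels.begin(), subModels.end(),
              [](const SubModelDesc& a, const SubModelDesc& b) {
                  if (a.textureHash != b.textureHash) return a.textureHash < b.textureHash;
                  return a.paramHash < b.paramHash;
              });
}
```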
In the batching method for models provided by the embodiment of the present invention, a model to be processed is added to a batching queue, where the model to be processed includes at least one sub-model; the models to be processed in the batching queue are assigned to different model groups according to their attributes; all the sub-models in a model group are sorted according to the texture and material of each sub-model to obtain a sorting result; and the sub-models in the model group are batched with the preset batching logic according to the sorting result to obtain a batching result. Compared with static batching, the embodiment of the invention batches the sub-models after reordering all of them, which raises the success rate of model merging and supports moving the models. Compared with the Instancing batching scheme, the embodiment of the invention can merge different models by grouping them according to their attributes: models can be merged as long as their rendering state, material and other information are consistent, rather than only models with the same material ball and the same mesh. Moreover, by improving the static batching scheme of the NeoX engine and adding the models to be processed to a batching queue for processing, the vertex-count limit is removed, the model sorting strategy is modified, transforming a model's world matrix after merging is supported, and hang-point models and model special effects attached to a model are handled, so that the batches of models in the game scene are merged to the greatest extent.
Referring to fig. 3, a flowchart illustrating steps of another batch processing method for a model according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 301, loading a model in a game scene; wherein the model has a batching flag bit parameter corresponding to the original batching logic;
A model comes with a corresponding original batching logic. In the embodiment of the invention, the original batching logic of the model is not changed; after the models in the game scene have been loaded at the script level, the models are set to enable the batching scheme of the embodiment of the invention, that is, the preset batching logic is enabled for the model through the batching flag bit parameter.
The original batching logic can be the batching scheme the model originally enables, including static batching, dynamic batching, Instancing batching, and so on. Static batching takes the data of objects that use the same material and the same texture map and are marked as Static (supporting static batching), merges it through computation into a combined buffer and submits it to the GPU, so that they can be drawn in one batch. Dynamic batching dynamically merges, in real time, combinations of models that meet its requirements. Instancing batching submits the model's vertex data once together with a different world matrix for each instance, for models that share the same material ball and the same mesh; the GPU then computes the different model positions from the different world matrices.
The preset batching logic is an improvement on static batching: all the sub-models of a model are reordered before batching, the vertex-count limit is removed, the model sorting strategy is modified, transforming a model's world matrix after merging is supported, and hang-point models and model special effects attached to a model are handled, so that the batches of models in the game are merged to the greatest extent.
Step 302, judging whether a model in the game scene has started a fade-out effect;
specifically, whether the model in the game scene starts the fade-out effect or not can be judged through the attribute data corresponding to the model.
Step 303, if the model in the game scene does not start the fade-out effect, adjusting the batching flag bit parameter to a parameter corresponding to the preset batching logic, and starting the preset batching logic according to the adjusted batching flag bit parameter to carry out batching processing on the model in the game scene;
in the embodiment of the invention, if the model in the game scene does not start the fade-out effect, the batching flag bit parameter can be directly adjusted to the parameter corresponding to the preset batching logic, and the preset batching logic is started to carry out batching processing on the model in the game scene according to the adjusted batching flag bit parameter. The specific batch process is similar to the above-mentioned step 101-104, and is not described herein again.
In specific implementation, a preset batching logic for starting a model in a game scene can be set in a script layer, and when an engine layer renders the model, if the preset batching logic needs to be started for judging the model in the game scene, a batching flag bit parameter corresponding to an original batching logic can be adjusted to a parameter corresponding to the preset batching logic, so that the preset batching logic is started according to the batching flag bit parameter to carry out batching processing on the model in the game scene, and the model is rendered according to a batching result.
304, if the model in the game scene starts a fade-out effect, carrying out batch combination processing on the model in the game scene according to the original batch combination logic, and rendering the model in the game scene according to a batch combination result;
in the embodiment of the invention, if the model in the game scene starts the fade-out effect, the original batch combination logic can be started according to the batch combination flag bit parameter to carry out batch combination processing on the model in the game scene, and the model in the game scene is rendered according to the batch combination result.
Step 305, after the fading effect of the model in the game scene is finished, clearing the rendering data of the model in the game scene, adjusting the batching flag bit parameter to a parameter corresponding to the preset batching logic, and starting the preset batching logic according to the adjusted batching flag bit parameter to carry out batching processing on the model in the game scene.
After the fading effect of the model in the game scene is finished, the rendering data of the model can be cleared, including the data such as the batch combination result obtained by calculation aiming at the rendering state of the model, the batch combination flag bit parameter is adjusted to the parameter corresponding to the preset batch combination logic, and then the preset batch combination logic is started to carry out batch combination processing on the model in the game scene according to the adjusted batch combination flag bit parameter.
Some models in a game also have a fade effect, such as the building models of a base. Because the material parameters of each model differ during the fade, models that are fading cannot be batched successfully by the batching modes built into the game engine, such as the static batching, dynamic batching and Instancing batching schemes of the NeoX engine. In the embodiment of the invention, the model's rendering state is kept unchanged during the fade stage; after the fade finishes, the model's rendering data, such as its rendering state and batching result, is cleared, the batching type is then switched to the preset batching logic of the embodiment of the invention through the batching flag bit parameter, the models are batched according to the preset batching logic, and the models are re-rendered according to the batching result, so that the preset batching logic takes effect.
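The following C++ sketch outlines this fade handling flow under assumed names (BatchFlag, SceneModel and the helper functions are illustrative, not engine API): while the fade is active the original batching logic is kept, and once it ends the rendering data is cleared and the flag is switched to the preset batching logic.

```cpp
// Illustrative enum of batching logics; MeshPacker stands for the preset batching logic.
enum class BatchFlag { Static, Dynamic, Instancing, MeshPacker };

struct SceneModel {
    BatchFlag batchFlag = BatchFlag::Static;  // original batching logic
    bool fading = false;                      // whether the fade effect is still active
    bool hasRenderData = false;               // cached rendering state / batching result
};

void ClearRenderData(SceneModel& m) { m.hasRenderData = false; }

// Called after the model is loaded and then while it may be fading: keep the original
// batching logic during the fade, and switch to the preset batching logic afterwards.
void UpdateBatchingLogic(SceneModel& m) {
    if (m.fading) {
        return;  // keep the original batching logic so the per-model fade material stays intact
    }
    if (m.batchFlag != BatchFlag::MeshPacker) {
        ClearRenderData(m);                   // drop stale rendering data and batching results
        m.batchFlag = BatchFlag::MeshPacker;  // adjust the batching flag bit parameter
        // The model is then pushed back into the batching queue and re-batched.
    }
}
```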
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 4, a block diagram of a batch processing apparatus according to an embodiment of the present invention is shown, and may specifically include the following modules:
the model adding module 401 is used for adding the model to be processed into the batch processing queue; wherein the model to be processed comprises at least one sub-model;
a model grouping module 402, configured to allocate the to-be-processed models in the batch processing queue to different model groups according to the attributes of the to-be-processed models;
a model sorting module 403, configured to sort all the sub-models in the model group according to the texture and material of each sub-model in the model group, so as to obtain a sorting result;
and the model batching module 404 is configured to batch the sub models in the model group by using a preset batching logic according to the sorting result to obtain a batching result.
In a preferred embodiment of the present invention, the model batching module 404 comprises:
the model grouping updating judgment submodule is used for judging whether the vertex data stored aiming at the submodel in the model grouping needs to be updated or not;
the model grouping updating submodule is used for updating the vertex data stored aiming at the submodel in the model grouping when the vertex data stored aiming at the submodel in the model grouping needs to be updated, and setting an updating bit corresponding to the model grouping as true;
a geometric data structure storage submodule, configured to create a geometric data structure for the model group if the update bit of the model group is true, and store vertex buffers and index buffers of all submodels in the geometric data structure according to the sorting result;
and the model batching submodule is used for carrying out batching processing on the submodels in the model grouping by using a preset batching logic according to the vertex buffer and the index buffer stored in the geometric data structure so as to obtain a batching result.
In a preferred embodiment of the present invention, the model grouping update judgment sub-module includes:
the judging unit is used for judging whether the submodel is added into the model group for the first time;
the first judgment unit is used for judging that the vertex data stored aiming at the submodel in the model group needs to be updated if the submodel is added into the model group for the first time;
the second judgment unit is used for judging whether the world matrix of the submodel changes or not if the submodel is not added into the model group for the first time; and if the world matrix of the submodel changes, judging that the vertex data stored aiming at the submodel in the model group needs to be updated.
In a preferred embodiment of the present invention, the geometry data structure storage submodule includes:
the vertex data updating unit is used for converting the vertex data stored in the model group aiming at the submodel into a world space so as to obtain target vertex data;
and the geometric data structure storage unit is used for storing vertex buffer and index buffer of all sub models in the geometric data structure according to the sorting result and the target vertex data.
In a preferred embodiment of the invention, the geometry data structure comprises a sub-geometry data structure corresponding to each sub-model, the sub-geometry data structure being used for recording a start index and an end index of the sub-model in the geometry data structure.
In a preferred embodiment of the present invention, the model batching sub-module includes:
the adjacent model determining unit is used for determining one or more sub-models which belong to the same geometric data structure and are adjacent in position in all the sub-models according to the vertex buffer and the index buffer stored in the geometric data structure;
the model merging unit is used for determining a target sub-model which meets the merging condition in the one or more sub-models by preset merging logic and merging the target sub-models to obtain a merging result; wherein the batching condition comprises at least one of the following:
the materials are the same;
rendering priorities are the same;
the batching flag bit parameters are the same.
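A minimal sketch of the batching condition and the adjacent-merge step is given below; the structure layout and names are assumptions, and the predicate shown checks all three listed conditions together, which is one conservative reading of the "at least one of" formulation.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One sub-model's draw range after sorting, plus the attributes checked for merging.
struct SubModelDraw {
    int           geometryId     = 0; // which shared geometry data structure it lives in
    int           materialId     = 0;
    int           renderPriority = 0;
    int           batchingFlag   = 0; // the batching flag bit parameter
    std::uint32_t startIndex     = 0;
    std::uint32_t endIndex       = 0; // one past the last index of the sub-model
};

// One merged draw call.
struct Batch {
    int           geometryId;
    std::uint32_t startIndex;
    std::uint32_t endIndex;
};

// Batching conditions: same material, same render priority, same batching flag,
// same geometry data structure, and the two index ranges are adjacent.
static bool CanMerge(const SubModelDraw& a, const SubModelDraw& b) {
    return a.geometryId     == b.geometryId &&
           a.materialId     == b.materialId &&
           a.renderPriority == b.renderPriority &&
           a.batchingFlag   == b.batchingFlag &&
           a.endIndex       == b.startIndex;
}

// Walk the sorted sub-models and grow the current batch while merging is allowed.
std::vector<Batch> MergeAdjacent(const std::vector<SubModelDraw>& sorted) {
    std::vector<Batch> batches;
    for (std::size_t i = 0; i < sorted.size(); ++i) {
        const SubModelDraw& sub = sorted[i];
        if (i > 0 && CanMerge(sorted[i - 1], sub)) {
            batches.back().endIndex = sub.endIndex; // extend the current batch
        } else {
            batches.push_back({sub.geometryId, sub.startIndex, sub.endIndex});
        }
    }
    return batches;
}
```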
In a preferred embodiment of the present invention, the attributes of the model to be processed include vertex types and shaders used, and the model grouping module 402 includes:
the currently operated model determining submodule is used for sequentially determining the currently operated model from the batch processing queue;
the target model grouping determination submodule is used for determining a target model grouping matched with the currently operated model according to the vertex type of the currently operated model and the used shader;
and the model distribution submodule is used for distributing the currently operated model to the target model group.
In a preferred embodiment of the present invention, the object model grouping determination sub-module includes:
a first submodel determining unit for determining a first submodel of the at least one submodel of the currently operating model;
and the target model grouping determination unit is used for determining the target model grouping matched with the currently operated model according to the vertex type of the first sub model and the used shader.
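One possible realisation of this grouping is to build a lookup key from the vertex type and shader of the first sub-model, as in the sketch below; the enum values, key layout and container choice are illustrative assumptions, not part of the embodiment.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

enum class VertexType : std::uint16_t { PositionOnly, PositionNormalUV, Skinned };

struct SubModel {
    VertexType    vertexType = VertexType::PositionNormalUV;
    std::uint32_t shaderId   = 0; // identifier of the shader program in use
};

struct Model {
    std::vector<SubModel> subModels; // at least one sub-model
};

struct ModelGroup {
    std::vector<const Model*> models;
};

// Key combining the vertex type and shader of the model's first sub-model.
static std::uint64_t GroupKey(const Model& model) {
    const SubModel& first = model.subModels.front();
    return (static_cast<std::uint64_t>(first.shaderId) << 16) |
            static_cast<std::uint64_t>(first.vertexType);
}

// Assign every queued model to the model group matching its key.
void AssignToGroups(const std::vector<Model>& queue,
                    std::unordered_map<std::uint64_t, ModelGroup>& groups) {
    for (const Model& model : queue)
        groups[GroupKey(model)].models.push_back(&model);
}
```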
In a preferred embodiment of the invention, the vertex data comprises at least one of: vertex position, color information, normal direction, and texture coordinates.
In a preferred embodiment of the present invention, the model ranking module 403 includes:
the texture and material parameter acquiring submodule is used for acquiring the main texture parameters and material parameters describing the textures and materials of all the sub-models in the model group;
the hash value calculation submodule is used for calculating a texture hash value according to the main texture parameter and calculating a parameter hash value according to the material parameter;
and the model sorting submodule is used for sorting all the submodels according to the texture hash value and the parameter hash value so as to obtain a sorting result.
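The sorting step can be sketched as computing the two hash values per sub-model and ordering by them, as below; the string-based parameters and the use of std::hash are assumptions made only for illustration.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct SubModel {
    std::string   mainTexture;    // main texture parameter (e.g. a texture path or GUID)
    std::string   materialParams; // serialized material parameters
    std::uint64_t textureHash  = 0;
    std::uint64_t materialHash = 0;
};

// Compute the texture hash value and the parameter hash value for one sub-model.
static void ComputeHashes(SubModel& sub) {
    std::hash<std::string> hasher;
    sub.textureHash  = hasher(sub.mainTexture);
    sub.materialHash = hasher(sub.materialParams);
}

// Sort so that sub-models sharing texture and material become neighbours,
// which lets the later merge step combine them into a single batch.
void SortByTextureAndMaterial(std::vector<SubModel>& subModels) {
    for (SubModel& sub : subModels)
        ComputeHashes(sub);
    std::sort(subModels.begin(), subModels.end(),
              [](const SubModel& a, const SubModel& b) {
                  if (a.textureHash != b.textureHash)
                      return a.textureHash < b.textureHash;
                  return a.materialHash < b.materialHash;
              });
}
```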
In a preferred embodiment of the present invention, the apparatus further comprises:
the model loading module is used for loading a model in a game scene; wherein the model has a batching flag bit parameter corresponding to the original batching logic;
the effect judgment module is used for judging whether the model in the game scene starts the fade effect;
and the first batching processing module is used for, if the model in the game scene does not start the fade effect, adjusting the batching flag bit parameter to the parameter corresponding to the preset batching logic, and starting the preset batching logic according to the adjusted batching flag bit parameter to carry out batching processing on the model in the game scene.
In a preferred embodiment of the present invention, the apparatus further comprises:
the second batching processing module is used for, if the model in the game scene starts the fade effect, carrying out batching processing on the model in the game scene according to the original batching logic and rendering the model in the game scene according to the batching result;
and the third batching processing module is used for, after the fade effect of the model in the game scene is finished, clearing the rendering state of the model in the game scene, adjusting the batching flag bit parameter to the parameter corresponding to the preset batching logic, and starting the preset batching logic according to the adjusted batching flag bit parameter to carry out batching processing on the model in the game scene.
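A compact sketch of this switching between the original and the preset batching logic is shown below; the enum values and the hook functions (BatchWithOriginalLogic, BatchWithPresetLogic, ClearRenderState) are hypothetical and stand in for whatever the engine actually provides.

```cpp
enum class BatchingLogic { Original, Preset };

struct SceneModel {
    bool          fading       = false;                   // fade effect currently running
    BatchingLogic batchingFlag = BatchingLogic::Original;  // flag set at load time
};

// Hypothetical hooks assumed to exist elsewhere in the engine.
void BatchWithOriginalLogic(SceneModel&);
void BatchWithPresetLogic(SceneModel&);
void ClearRenderState(SceneModel&);

// Called when the model is loaded and again when its fade effect finishes.
void SelectBatchingLogic(SceneModel& model, bool fadeJustFinished) {
    if (model.fading && !fadeJustFinished) {
        // While fading, keep the original batching logic so the effect renders correctly.
        BatchWithOriginalLogic(model);
        return;
    }
    if (fadeJustFinished)
        ClearRenderState(model); // drop the rendering state left over from the fade
    // Switch the flag and batch with the preset (merging) logic from now on.
    model.batchingFlag = BatchingLogic::Preset;
    BatchWithPresetLogic(model);
}
```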
In a preferred embodiment of the present invention, the apparatus further comprises:
and the model culling module is used for culling the model in the game scene to obtain the model to be processed.
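As a simple illustration of this culling step (the embodiment does not fix a particular culling algorithm), the sketch below keeps only the models whose bounding spheres are inside the view frustum; the surviving models form the models to be processed that enter the batch processing queue.

```cpp
#include <array>
#include <vector>

struct Sphere { std::array<float, 3> center{}; float radius = 0.f; };
struct Plane  { std::array<float, 3> normal{}; float d = 0.f; }; // normal points inward

struct SceneModel { Sphere bounds{}; };

// Signed distance from a point to the plane.
static float Distance(const Plane& p, const std::array<float, 3>& c) {
    return p.normal[0] * c[0] + p.normal[1] * c[1] + p.normal[2] * c[2] + p.d;
}

// A model survives culling if its bounding sphere is not fully outside any frustum plane.
static bool Visible(const SceneModel& m, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum)
        if (Distance(p, m.bounds.center) < -m.bounds.radius)
            return false;
    return true;
}

// Cull the scene: the visible models become the models to be processed for batching.
std::vector<const SceneModel*> CullScene(const std::vector<SceneModel>& scene,
                                         const std::array<Plane, 6>& frustum) {
    std::vector<const SceneModel*> toProcess;
    for (const SceneModel& m : scene)
        if (Visible(m, frustum))
            toProcess.push_back(&m);
    return toProcess;
}
```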
Since the apparatus embodiment is substantially similar to the method embodiment, its description is relatively brief; for the relevant details, reference may be made to the corresponding parts of the description of the method embodiment.
An embodiment of the present invention further provides an electronic device, as shown in fig. 5, including:
a processor 501 and a storage medium 502, wherein the storage medium 502 stores machine-readable instructions executable by the processor 501, and when the electronic device runs, the processor 501 executes the machine-readable instructions to perform the method according to any one of the embodiments of the present invention. The specific implementation and technical effects are similar, and are not described herein again.
An embodiment of the present invention further provides a computer-readable storage medium, as shown in fig. 6, where the storage medium stores a computer program 601, and the computer program 601 is executed by a processor to perform the method according to any one of the embodiments of the present invention. The specific implementation and technical effects are similar, and are not described herein again.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The batch processing method for models and the batch processing apparatus for models provided by the present invention have been described in detail above, and specific examples are used herein to illustrate the principle and implementation of the present invention; the description of the above embodiments is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (16)

1. A method for batch processing of models, comprising:
adding a model to be processed into a batch processing queue; wherein the model to be processed comprises at least one sub-model;
distributing the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed;
sorting all the submodels in the model group according to the texture and the material of the submodels in the model group respectively to obtain a sorting result;
and according to the sequencing result, carrying out batch combination processing on the sub-models in the model group by using a preset batch combination logic so as to obtain a batch combination result.
2. The method of claim 1, wherein the batching sub-models in the model grouping with a preset batching logic according to the sorting result to obtain a batching result comprises:
judging whether the vertex data stored for the submodels in the model group needs to be updated or not;
when the vertex data stored for the submodel in the model group needs to be updated, updating the vertex data stored for the submodel in the model group, and setting the update bit corresponding to the model group to true;
if the update bit of the model group is true, establishing a geometric data structure aiming at the model group, and storing vertex buffer and index buffer of all sub models in the geometric data structure according to the sequencing result;
and carrying out batching processing on the sub models in the model grouping by using a preset batching logic according to the vertex buffer and the index buffer stored in the geometric data structure to obtain a batching result.
3. The method of claim 2, wherein the determining whether the vertex data stored for the sub-model in the model group needs to be updated comprises:
judging whether the submodel is added to the model group for the first time;
if the submodel is added to the model group for the first time, judging that the vertex data stored for the submodel in the model group needs to be updated;
if the submodel is not added to the model group for the first time, judging whether the world matrix of the submodel has changed; and if the world matrix of the submodel has changed, judging that the vertex data stored for the submodel in the model group needs to be updated.
4. The method of claim 2, wherein storing vertex buffers and index buffers for all submodels in the geometry data structure according to the ordering result comprises:
converting the vertex data stored for the submodel in the model group into world space to obtain target vertex data;
and storing vertex buffer and index buffer of all sub models in the geometric data structure according to the sequencing result and the target vertex data.
5. The method of claim 2 or 4, wherein the geometry data structure comprises a sub-geometry data structure corresponding to each sub-model, and the sub-geometry data structure is used for recording a start index and an end index of the sub-model in the geometry data structure.
6. The method of claim 2, wherein the batching sub-models in the model grouping with preset batching logic to obtain batching results according to vertex buffers and index buffers stored in the geometry data structure comprises:
determining, among all the submodels, one or more submodels which belong to the same geometric data structure and are adjacent in position, according to the vertex buffer and the index buffer stored in the geometric data structure;
determining a target sub-model which meets the batching condition in the one or more sub-models by preset batching logic, and merging the target sub-models to obtain a batching result; wherein the batching condition comprises at least one of the following:
the materials are the same;
rendering priorities are the same;
the batching flag bit parameters are the same.
7. The method according to claim 1, wherein the attributes of the model to be processed include vertex type and shader used, and the distributing the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed comprises:
sequentially determining a model of the current operation from the batch processing queue;
determining a target model group matched with the currently operated model according to the vertex type of the currently operated model and the used shader;
assigning the currently operating model to the target model group.
8. The method of claim 7, wherein determining a target model group matching the currently operating model according to the vertex type and the shader used for the currently operating model comprises:
determining a first sub-model of at least one sub-model of the currently operating model;
and determining a target model group matched with the currently operated model according to the vertex type of the first sub model and the used shader.
9. The method of claim 2, wherein the vertex data comprises at least one of: vertex position, color information, normal direction, and texture coordinates.
10. The method of claim 1, wherein the sorting all the submodels in the model group according to the texture and the material of the submodels in the model group, respectively, to obtain a sorting result comprises:
acquiring main texture parameters and material parameters for describing textures and materials of all sub models in the model group;
calculating to obtain texture hash values according to the main texture parameters, and calculating to obtain parameter hash values according to the material parameters;
and sequencing all the sub-models according to the texture hash value and the parameter hash value to obtain a sequencing result.
11. The method of claim 1, further comprising, prior to the step of adding the model to be processed to the batch processing queue:
loading a model in a game scene; wherein the model has a batching flag bit parameter corresponding to the original batching logic;
judging whether a model in the game scene starts a fade effect;
if the model in the game scene does not start the fade effect, adjusting the batching flag bit parameter to a parameter corresponding to the preset batching logic, and starting the preset batching logic according to the adjusted batching flag bit parameter to carry out batching processing on the model in the game scene.
12. The method of claim 11, further comprising:
if the model in the game scene starts the fade effect, carrying out batching processing on the model in the game scene according to the original batching logic, and rendering the model in the game scene according to the batching result;
and after the fade effect of the model in the game scene is finished, clearing the rendering data of the model in the game scene, adjusting the batching flag bit parameter to the parameter corresponding to the preset batching logic, and starting the preset batching logic according to the adjusted batching flag bit parameter to carry out batching processing on the model in the game scene.
13. The method of claim 1, further comprising, prior to the step of adding the model to be processed to the batch processing queue:
culling the model in the game scene to obtain the model to be processed.
14. An apparatus for batch processing of models, comprising:
the model adding module is used for adding the model to be processed into the batch processing queue; wherein the model to be processed comprises at least one sub-model;
the model grouping module is used for distributing the models to be processed in the batch processing queue to different model groups according to the attributes of the models to be processed;
the model sorting module is used for sorting all the submodels in the model group according to the texture and the material of the submodels in the model group, respectively, to obtain a sorting result;
and the model batching module is used for batching the sub models in the model group by using a preset batching logic according to the sequencing result so as to obtain a batching result.
15. An electronic device, comprising:
a processor and a storage medium storing machine-readable instructions executable by the processor, the processor executing the machine-readable instructions to perform the method of any one of claims 1-13 when the electronic device is run.
16. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the method according to any one of claims 1-13.
CN202110749147.0A 2021-07-01 2021-07-01 Batch processing method and device for model Active CN113426130B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110749147.0A CN113426130B (en) 2021-07-01 2021-07-01 Batch processing method and device for model

Publications (2)

Publication Number Publication Date
CN113426130A true CN113426130A (en) 2021-09-24
CN113426130B CN113426130B (en) 2024-05-28

Family

ID=77758697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110749147.0A Active CN113426130B (en) 2021-07-01 2021-07-01 Batch processing method and device for model

Country Status (1)

Country Link
CN (1) CN113426130B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210106913A1 (en) * 2018-02-09 2021-04-15 Netease (Hangzhou) Network Co.,Ltd. Processing Method, Rendering Method and Device for Static Component in Game Scene
CN109816763A (en) * 2018-12-24 2019-05-28 苏州蜗牛数字科技股份有限公司 A kind of method for rendering graph
CN109840931A (en) * 2019-01-21 2019-06-04 网易(杭州)网络有限公司 Conjunction batch render method, apparatus, system and the storage medium of skeleton cartoon
CN109978981A (en) * 2019-03-15 2019-07-05 广联达科技股份有限公司 A kind of batch rendering method improving buildings model display efficiency
CN110570507A (en) * 2019-09-11 2019-12-13 珠海金山网络游戏科技有限公司 Image rendering method and device
CN110738720A (en) * 2019-10-08 2020-01-31 腾讯科技(深圳)有限公司 Special effect rendering method and device, terminal and storage medium
CN111145329A (en) * 2019-12-25 2020-05-12 北京像素软件科技股份有限公司 Model rendering method and system and electronic device
CN112057868A (en) * 2020-09-17 2020-12-11 网易(杭州)网络有限公司 Game model batch processing method and device and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114816629A (en) * 2022-04-15 2022-07-29 网易(杭州)网络有限公司 Method and device for drawing display object, storage medium and electronic device
CN114816629B (en) * 2022-04-15 2024-03-22 网易(杭州)网络有限公司 Method and device for drawing display object, storage medium and electronic device
WO2023202023A1 (en) * 2022-04-22 2023-10-26 北京字跳网络技术有限公司 Batch rendering method, apparatus, device and storage medium

Also Published As

Publication number Publication date
CN113426130B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN110990516B (en) Map data processing method, device and server
CN109840931B (en) Batch rendering method, device and system for skeletal animation and storage medium
US7733341B2 (en) Three dimensional image processing
CN113426130B (en) Batch processing method and device for model
US20080074430A1 (en) Graphics processing unit with unified vertex cache and shader register file
CN109448094B (en) Texture atlas scheduling method
US20130141448A1 (en) Graphics Command Generation Device and Graphics Command Generation Method
CN113256782B (en) Three-dimensional model generation method and device, storage medium and electronic equipment
CN113791914B (en) Object processing method, device, computer equipment, storage medium and product
CN107507262A (en) A kind of three-dimensional rendering method and system of large scene
CN111063032B (en) Model rendering method, system and electronic device
US20230298237A1 (en) Data processing method, apparatus, and device and storage medium
CN111583378A (en) Virtual asset processing method and device, electronic equipment and storage medium
CN114494646A (en) Scene rendering method and device and electronic equipment
CN115049531B (en) Image rendering method and device, graphic processing equipment and storage medium
CN111476858A (en) 2d engine rendering method, device and equipment based on WebG L
KR100693134B1 (en) Three dimensional image processing
CN114596195A (en) Topographic data processing method, system, device and computer storage medium
CN114494623A (en) LOD-based terrain rendering method and device
JP2007087283A (en) Graphic drawing device and graphic drawing program
CN112837416A (en) Triangulation-based polygon rendering method and device and storage medium
JPH1173527A (en) Compression and expansion of three-dimensional geometric data representing regularly tiled surface part of graphical object
CN117953181B (en) Vertex layering and incremental LOD method and system for WEB3D
CN117788674A (en) Texture map merging method and device, electronic equipment and storage medium
JP4759109B2 (en) Multi-resolution geometrical arrangement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant