CN111063032A - Model rendering method and system and electronic device - Google Patents

Model rendering method and system and electronic device

Info

Publication number
CN111063032A
Authority
CN
China
Prior art keywords
model
models
merging
rendering
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911373358.8A
Other languages
Chinese (zh)
Other versions
CN111063032B (en)
Inventor
吕天胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Pixel Software Technology Co Ltd
Original Assignee
Beijing Pixel Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Pixel Software Technology Co Ltd filed Critical Beijing Pixel Software Technology Co Ltd
Priority to CN201911373358.8A priority Critical patent/CN111063032B/en
Publication of CN111063032A publication Critical patent/CN111063032A/en
Application granted granted Critical
Publication of CN111063032B publication Critical patent/CN111063032B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/10: Geometric effects
    • G06T15/20: Perspective computation
    • G06T15/205: Image-based rendering
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a model rendering method, a model rendering system and an electronic device, and relates to the technical field of model rendering. The method first groups models by type to obtain a plurality of classification groups, then merges the models within each classification group to obtain a plurality of merging results, and finally performs rendering calculation on the models in each merging result. By merging the models in the merging results in real time, the method reduces the number of coincident model surfaces, and therefore the number of model triangle faces and DrawCalls that need to be rendered, improving rendering efficiency and helping improve the smoothness of block-building games on mobile devices.

Description

Model rendering method and system and electronic device
Technical Field
The present invention relates to the field of model rendering technologies, and in particular, to a method, a system, and an electronic device for model rendering.
Background
At present, in three-dimensional games with block-building scenes, a player constructs a complex model out of many regular block components. The model contains the surface information of all of the block components, so rendering it requires calling a large number of DrawCalls and involves a large number of triangle faces. When the model contains many blocks, rendering occupies a large share of the game device's resources, seriously affecting running efficiency, lowering the rendering frame rate, and degrading the user experience.
In the prior art, models are rendered in batches, but this approach must reorganize the batch data every frame, which hurts running efficiency. On a mobile device, limited performance caps the number of models that can be merged per batch, and performance problems tend to appear when handling a large number of batches. Moreover, batching does not reduce the number of triangle faces, and coincident triangle faces are still rendered repeatedly.
In summary, the prior art lacks a method that can effectively reduce the number of DrawCalls and triangle faces for block-building games.
Disclosure of Invention
In view of the above, the present invention provides a model rendering method, a model rendering system, and an electronic device. By grouping and merging models in real time, the number of DrawCalls and triangle faces required for rendering is reduced and model rendering efficiency is improved.
In a first aspect, an embodiment of the present invention provides a model rendering method, where the method includes:
grouping the models according to the types of the models to obtain a plurality of classification groups;
respectively merging the models in each classification group to obtain one or more merging results;
generating a merging model according to a merging result;
and rendering and displaying the merged model.
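The four steps above can be sketched in a few lines. The sketch below is illustrative only (the type names and the one-dimensional integer cube positions are assumptions, not part of the patent): it groups unit cubes by a type key, merges each run of consecutive cubes within a group into one interval, and returns one merged model, i.e. one DrawCall, per run.

```python
from collections import defaultdict

def pipeline(models):
    """S101-S104 sketch for unit cubes on an integer axis.

    `models` is a list of (type, x) pairs.  Step S101 groups by type;
    steps S102/S103 merge each run of consecutive cubes into one
    half-open interval; step S104 would issue one DrawCall per interval."""
    groups = defaultdict(list)
    for type_, x in models:              # S101: build classification groups
        groups[type_].append(x)
    merged = []
    for type_, xs in groups.items():     # S102/S103: merge runs per group
        xs = sorted(set(xs))
        start = prev = xs[0]
        for x in xs[1:]:
            if x == prev + 1:            # adjacent cube extends the run
                prev = x
            else:                        # gap: close the current run
                merged.append((type_, start, prev + 1))
                start = prev = x
        merged.append((type_, start, prev + 1))
    return merged
```

With four cubes `[("brick", 0), ("brick", 1), ("brick", 3), ("stone", 0)]` this yields three merged models instead of four, and the saving grows with run length.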
In some embodiments, the step of grouping the models according to the types of the models to obtain a plurality of classification groups includes:
obtaining the type of each model;
respectively establishing corresponding classification groups for each type;
and traversing all models needing to be combined, and dividing each model into corresponding classification groups according to different types of information to obtain a plurality of classification groups.
In some embodiments, the step of traversing all models that need to be merged and dividing each model into a corresponding classification group according to different types of information to obtain a plurality of classification groups includes:
traversing all models to be combined to obtain the types of all the models;
and if a newly added model exists among the models, adding a classification group corresponding to the newly added model.
In some embodiments, the type of the model is determined by the name of the model and the orientation of the model.
In some embodiments, the step of merging the models in each classification group to obtain one or more merging results includes:
initializing a coordinate system;
obtaining the world coordinates and normal vectors of the models in the same group;
and respectively merging the same group of models in each direction according to the direction of the coordinate system to obtain a plurality of merging results.
In some embodiments, the step of merging the same set of models in each direction according to the directions of the coordinate system includes:
acquiring surface information of the same group of models;
and if two models in the same group have identical surfaces that abut each other on opposite sides, combining the two models into one model.
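The merge condition can be illustrated for axis-aligned boxes. The dictionary layout (`x`/`dx` and so on) is an assumed representation for this sketch: two boxes are mergeable along the x axis exactly when one begins where the other ends and their y/z extents, i.e. the shared face, are identical.

```python
def can_merge_x(a, b):
    """True when boxes a and b share an identical face and abut along x:
    b starts exactly where a ends, and the y and z extents that define
    the shared face are equal."""
    return (a["x"] + a["dx"] == b["x"]
            and (a["y"], a["dy"]) == (b["y"], b["dy"])
            and (a["z"], a["dz"]) == (b["z"], b["dz"]))

def merge_x(a, b):
    """Replace the two boxes by one box spanning both; the two coincident
    interior faces disappear from the geometry to be rendered."""
    merged = dict(a)
    merged["dx"] = a["dx"] + b["dx"]
    return merged
```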
In some embodiments, the step of merging the models of the same group in each direction according to the directions of the coordinate system to obtain a plurality of merging results includes:
newly building a temporary merging group; the temporary merging group is used for storing merging results in each direction;
combining the models in each direction in the coordinate axes of the coordinate system in sequence to obtain a combined result in each direction in the coordinate axes;
and sequentially storing the merging results in the temporary merging groups to obtain a plurality of merging results.
In some embodiments, the step of generating the merging model according to the merging result includes:
acquiring vertex coordinates and normal vectors of the model to be rendered in the merging result;
calculating texture coordinates of the model to be rendered according to the vertex coordinates of the model to be rendered;
and generating the model to be rendered according to the vertex coordinates, the normal vector and the texture coordinates.
In a second aspect, an embodiment of the present invention provides a model rendering system, including:
the model grouping module is used for grouping the models according to the types of the models to obtain a plurality of classification groups;
the grouping and merging module is used for respectively merging the models in each classification group to obtain one or more merging results;
the model merging module is used for generating a merging model according to a merging result;
and the model rendering module is used for rendering and displaying each combined model.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor executes the computer program to implement the steps of the method in any one of the possible embodiments of the first aspect.
The embodiments of the invention have the following beneficial effects. An embodiment of the invention provides a model rendering method, a model rendering system, and an electronic device. Models are first grouped by type to obtain a plurality of classification groups; the models in each classification group are then merged to obtain one or more merging results; a merged model is generated from each merging result; and finally the merged models are rendered and displayed. By merging the models in the classification groups in real time, the method reduces the number of model triangle faces and DrawCalls to be rendered, improves rendering efficiency, and helps improve the smoothness of block-building games on mobile devices.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of a model rendering method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a certain three-dimensional building block construction game provided by an embodiment of the invention;
fig. 3 is a flowchart of step S101 in the model rendering method according to the embodiment of the present invention;
fig. 4 is a flowchart of step S303 in the model rendering method according to the embodiment of the present invention;
fig. 5 is a flowchart of step S102 in the model rendering method according to the embodiment of the present invention;
fig. 6 is a schematic diagram of a model coordinate system in the model rendering method according to the embodiment of the present invention;
fig. 7 is a flowchart of step S503 in the model rendering method according to the embodiment of the present invention;
fig. 8 is a flowchart of step S103 in the model rendering method according to the embodiment of the present invention;
FIG. 9 is a diagram illustrating a specific implementation process of model rendering according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a model rendering system according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Icon:
1001-model grouping module; 1002-a grouping and merging module; 1003-model merge module; 1004-model rendering module; 101-a processor; 102-a memory; 103-a bus; 104-communication interface.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In block-building games, a player builds a complex model out of simple block components and then uses it during play. For example, a player may build a stair-like model by adding blocks and then climb it step by step to cross an obstacle. Building proceeds from minimum-unit block components, each of which comprises several surfaces, and each surface carries the data required by the rendering process. Rendering the assembled model therefore calls a large number of DrawCalls and renders a large number of triangle faces.
A DrawCall is an operation in which the CPU calls a graphics programming interface to command the GPU to render. Before each DrawCall, the CPU must complete various preparations and then send a considerable amount of data to the GPU; once the GPU receives the rendering instruction, it begins the current render. Because the GPU's rendering capability is strong, rendering speed often depends on how efficiently the CPU submits commands. If the number of DrawCalls is too large, the CPU spends a significant amount of time submitting them and becomes overloaded. Consequently, when a model contains many blocks, rendering it calls many DrawCalls, the game's rendering frame rate drops, the interface stutters, and the user experience suffers.
The prior-art way to reduce the number of DrawCalls is to render the model in batches; the essential idea is to merge many small DrawCalls into one large DrawCall and execute it once. Batching generally suits static objects, such as non-moving terrain and rocks, which need to be batched only once. Dynamic objects must be re-batched every frame before a DrawCall completes the rendering. When the game device is a mobile device, its limited capability makes performance problems likely when dynamically handling large numbers of batches. Moreover, existing batching cannot reduce the number of triangle faces, and coincident triangle faces are still rendered repeatedly.
Considering that current block-building games lack a way to effectively reduce the number of DrawCalls and triangle faces, the present invention aims to provide a model rendering method, a model rendering system, and an electronic device. The technique can be applied to block-building games and can be implemented in associated software or hardware, as described by way of example below.
To facilitate understanding of the embodiment, a detailed description is first given of a model rendering method disclosed in the embodiment of the present invention, and a flowchart of the method is shown in fig. 1, and includes the following steps:
and S101, grouping the models according to the types of the models to obtain a plurality of classification groups.
In a block-building game, a player builds a complex structure out of block components; these components are the models mentioned in the steps above. The player combines models of different types and sizes into the structure he needs, which is finally used together with the game scene. Because later models must stack on earlier ones, the models are usually regular three-dimensional shapes: typically hexahedra that stack in the up, down, front, back, left, and right directions, the most common being cuboids. Special scenes may contain other models with complex three-dimensional structures; blocks are usually not stacked on these, and they tend to appear at the finishing stage of a build.
Models are generally grouped by type and size, with identical models placed in one class. Depending on the situation, block models with different cross-sections but the same height may also form a group and be arranged in sequence: although such models cannot match exactly, their equal height means that after arrangement their tops lie in the same plane, so further blocks can be stacked on them.
A classification group's result may record the basis of the classification. For example, if identical models form one class, the group records the models' shape, height, cross-section size, and so on; if models of equal height form one class, the group records the height value.
The purpose of grouping is the subsequent merging, so grouping must consider whether mergeable attributes exist between models. For example, two models with the same height and the same cross-section can, when placed in the same plane, match their identical cross-sections to form a whole; such models are preferentially grouped together.
And step S102, respectively merging the models in each classification group to obtain one or more merging results.
The models contained in each classification group are merged, and the merging follows the basis used for classification. For example, if identical models were placed in one group in the classification step, they can be stacked in sequence during merging so that the two identical surfaces between adjacent models coincide exactly; those coincident surfaces are then simply ignored in the subsequent rendering.
If the classification step used a different rule, merging still follows the idea of maximizing coincident regions; for example, when merging models of equal height, two models with the same cross-section are preferentially chosen for merging.
And step S103, generating a merging model according to the merging result.
After the models in each classification group have been merged, one model or several models may result, but the number of merged models is necessarily smaller than the number before merging. Because coincident cross-sections are eliminated during merging, the number of surfaces within each group's merged models is greatly reduced, which improves efficiency in the subsequent rendering.
And step S104, rendering and displaying the combined model.
When the models in every merging group have finished merging, the merged models are rendered and displayed. The rendering mentioned in this embodiment belongs to the field of computer graphics and animation: rendering is the process of generating an image from a model using three-dimensional rendering software. A model here is a description of a three-dimensional object in a well-defined language or data structure, including geometry, viewpoint, texture, and lighting information.
The rendering and display process involves environmental parameters of the scene the model sits in, such as lighting, materials, and rendering-related parameters, as well as parameters of the model itself, such as vertex data, normal data, and texture data. In a block-building game scene, rendering and display are performed by the CPU invoking the GPU. The computation is carried out through DrawCalls, each of which is an operation in which the CPU calls the system graphics programming interface to command the GPU to render.
In a specific implementation, the three-dimensional rendering software or rendering engine first performs a visibility test to determine which models the main camera (the first-person viewpoint) can see. It then initializes the required models' vertices, indices, transforms, associated light sources, textures, render states, and other data, and notifies the system graphics programming interface, which invokes the GPU to draw the models according to those parameters. In the GPU drawing process, triangle faces are drawn to produce the final three-dimensional rendered image.
Because the merging in steps S102 and S103 eliminates coincident cross-sections, and the number of merged models to be rendered is much smaller than the number before merging, both the number of DrawCalls and the number of triangle faces drop greatly during rendering, which benefits the speed and efficiency of model rendering.
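The scale of the saving is easy to quantify under the simple assumption that each axis-aligned cuboid is drawn as 12 triangles (2 per face, 6 faces) with one DrawCall per model:

```python
def render_cost(n_models, tris_per_model=12):
    """Rendering cost for n separate models: one DrawCall per model and
    12 triangles per cuboid (6 faces x 2 triangles each)."""
    return {"draw_calls": n_models, "triangles": n_models * tris_per_model}

# A solid 10 x 10 x 10 wall of unit cubes versus the single merged cuboid:
before = render_cost(10 * 10 * 10)   # 1000 DrawCalls, 12000 triangles
after = render_cost(1)               # 1 DrawCall, 12 triangles
```

The single-cuboid figure is the ideal case; in practice one merged model remains per classification group, but the reduction is still of this order for large regular builds.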
In the model rendering method provided by this embodiment of the invention, the models in the classification groups are merged in real time, reducing the coincident surfaces of the models and therefore the number of model triangle faces and DrawCalls to be rendered, improving rendering efficiency and helping improve the smoothness of block-building games on mobile devices.
Examples of practical applications are given here for ease of understanding. In a certain three-dimensional building block building game, a complex model is built through a simple model, and the complex model is rendered finally, wherein the structure of the model is shown in FIG. 2.
As fig. 2 shows, when the model to be rendered is a complex model built from simple models, rendering it involves many DrawCalls and many coincident faces, which lowers rendering efficiency. On a mobile device with limited performance in particular, performance can easily fall short during rendering and the game stutters.
In order to solve the above problem, the model rendering method mentioned in the above embodiment mode may be adopted. As shown in fig. 3, in an actual implementation process, step S101 further includes:
step S301, obtaining the type of each model.
The model types are obtained by traversing all models and are determined during model initialization. In this example all model types are fixed at build time: all models are cubes, and the texture (UV) is four-sided continuous (tileable).
Step S302, establishing corresponding classification groups for each type respectively.
Since this example only contains one classification group, only identical models are placed in the classification group that is created. If multiple model types are obtained in step S301, a corresponding classification group is established for each type in this step.
Step S303, traversing all models to be combined, and dividing each model into corresponding classification groups according to different types of information to obtain a plurality of classification groups.
Grouping proceeds according to the models' type information, which includes not only their size and shape but also their orientation and placement. In this example, if the blocks the player adds are regular blocks with the same orientation, they are added to the same merge group.
As shown in fig. 4, in some embodiments, step S303 further includes:
step S401, traversing all models needing to be combined to obtain the types of all models.
In the example above, when the player adds a block and the added block is a regular model, the already-created classification groups are traversed to obtain all model types, and it is then determined whether a new classification group is needed. A model's type is determined by its name and its orientation.
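A type key built from name plus orientation, as just described, might look like the following sketch (the field names `name` and `orientation` are assumptions for illustration):

```python
def model_type(name, orientation):
    """A model's type is determined by its name and its orientation, so
    identical meshes facing different ways fall into different groups."""
    return (name, tuple(orientation))

def classify(models):
    """Traverse the models, creating a classification group the first time
    a new type key appears and appending to the existing group otherwise."""
    groups = {}
    for m in models:
        key = model_type(m["name"], m["orientation"])
        groups.setdefault(key, []).append(m)
    return groups
```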
And step S402, if all the models have the new models, adding classification groups corresponding to the new models.
If the user adds a block whose type belongs to no existing classification group, a classification group corresponding to that model is added; if the type belongs to an existing group, the model is added to that group.
If the user deletes a block, the model is removed from the classification group recorded on it, provided that group is not empty; if the group becomes empty after the deletion, the group itself is also deleted.
As shown in fig. 5, in some embodiments, the step S102 further includes:
in step S501, a coordinate system is initialized.
After classification is complete, the coordinate system is initialized first, as shown in fig. 6. The choice of coordinate system depends on the model's shape. In the example the model is a regular cuboid: the origin is placed at one vertex A, and the coordinate axes lie along the three edges meeting at that vertex, whose opposite endpoints are B, C, and D respectively.
Step S502, the world coordinates and the normal vector of the same model group are obtained.
After the coordinate system is chosen, the world coordinates and corresponding normal vectors of the other models in the same group are obtained. In this embodiment, each model is represented in its group by the three vectors AB, AC, and AD.
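Representing each cuboid by its origin vertex A and the three edge vectors AB, AC, AD, as step S502 describes, could look like this sketch (pure-Python tuple vectors; the helper names are assumptions):

```python
def make_box(a, ab, ac, ad):
    """Represent one cuboid by its origin vertex A and the three edge
    vectors AB, AC, AD; every other vertex and each face-pair normal
    can be derived from these four quantities."""
    add = lambda p, v: tuple(pi + vi for pi, vi in zip(p, v))
    length = lambda v: sum(vi * vi for vi in v) ** 0.5
    unit = lambda v: tuple(vi / length(v) for vi in v)
    return {
        "A": tuple(a),
        "B": add(a, ab),   # endpoint of edge AB
        "C": add(a, ac),   # endpoint of edge AC
        "D": add(a, ad),   # endpoint of edge AD
        # unit normals of the three pairs of parallel faces
        "normals": [unit(ab), unit(ac), unit(ad)],
    }
```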
Step S503, respectively combining the same group of models in each direction according to the direction of the coordinate system to obtain a plurality of combined results.
As shown in fig. 7, in some embodiments, the step S503 further includes:
step S701, creating a temporary merge group; the temporary merge group is used to save the merge result in each direction.
Merging is performed along the three directions AB, AC, and AD, with a corresponding merge group created for each stage. For example, let the original group be G1, holding all models to be merged. After the models are merged along the AD direction, the result is recorded as R1 and stored in a newly created temporary merge group. R1 is then merged along the AB direction, and the result R2 is stored in a newly created temporary merge group. Similarly, R2 is merged along the AC direction, and the result R3 is stored in a newly created temporary merge group.
Step S702, models in each direction in the coordinate axis of the coordinate system are combined in sequence to obtain a combination result in each direction in the coordinate axis.
Take the AD-direction merge as an example. The models in G1 are traversed; for convenience of description, denote the current model in G1 as S0. S0 is compared with the model data in R1 one by one; if R1 is empty, S0 is placed directly into R1. If R1 is not empty, the models in R1 are traversed one by one to find a model S1 that adjoins S0 on the left and a model S2 that adjoins it on the right. Let the model currently being traversed in R1 be S3. If the A-point coordinate of S3 equals the D-point coordinate of S0, the AB length of S3 equals the AB length of S0, and the AC length of S3 equals the AC length of S0, then S3 is taken as the right-adjoining model S2 of S0. Similarly, if the D-point coordinate of S3 equals the A-point coordinate of S0, with the same AB and AC lengths, then S3 is the left-adjoining model S1 of S0.
After the search, if neither S1 nor S2 exists, a copy of S0 is placed into R1. If both S1 and S2 exist, the D point of S1 is modified to the D point of S2 and S2 is deleted; that is, S1 now represents the merger of the three cuboids S0, S1, and S2. If only S1 exists, the D point of S1 is modified to the D point of S0, so that S1 represents the merger of S0 and S1. If only S2 exists, the A-point coordinate of S2 is modified to the A-point coordinate of S0, so that S2 represents the merger of S0 and S2.
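Reducing each cuboid to an interval along the current merge axis plus a cross-section key, the left/right neighbour search and the four merge cases read as follows. The tuple layout `(section, a, d)` is an assumed simplification: the `section` key stands in for the AB-length and AC-length equality checks, and `a`/`d` for the A- and D-point coordinates along the merge axis.

```python
def merge_into(r1, s0):
    """Insert cuboid s0 = (section, a, d) into the partial result list r1.

    A model in r1 is the left neighbour S1 when its D point equals s0's
    A point, and the right neighbour S2 when its A point equals s0's
    D point, with the same cross-section in both cases."""
    section, a0, d0 = s0
    s1 = next((m for m in r1 if m[0] == section and m[2] == a0), None)
    s2 = next((m for m in r1 if m[0] == section and m[1] == d0), None)
    if s1 is not None and s2 is not None:   # S1 + S0 + S2 fuse into one
        r1.remove(s1); r1.remove(s2)
        r1.append((section, s1[1], s2[2]))
    elif s1 is not None:                    # extend S1 rightwards over S0
        r1.remove(s1); r1.append((section, s1[1], d0))
    elif s2 is not None:                    # extend S2 leftwards over S0
        r1.remove(s2); r1.append((section, a0, s2[2]))
    else:                                   # no neighbour: copy S0 into R1
        r1.append(s0)
    return r1
```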
Step S703, sequentially storing the merged results in the temporary merged group to obtain a plurality of merged results.
In this step, the data of group G1 are merged along the AD direction to obtain the merging result R1. The stored R1 data are then merged along the AB direction to obtain R2, and the stored R2 data are merged along the AC direction to obtain the final merging result R3.
Once R3 has been obtained, the intermediate results R1 and R2 may be cleared. Clearing may also be interleaved with merging; for example, R1 may be cleared as soon as R2 has been obtained.
After merging, the models in the merged group are rendered. As shown in fig. 8, the rendering step S103 further includes:
step S801, obtaining vertex coordinates and a normal vector of the model to be rendered in the merging result.
After merging, a cube is constructed from the merged data of each entry in R3. The normal and tangent reuse the normal and corresponding tangent data obtained in step S502.
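Constructing the cube geometry from a merged entry can be sketched as below. The axis assignment (AD → x, AB → y, AC → z) is an illustrative assumption; as stated above, the patent reuses the normals and tangents from step S502 rather than recomputing them.

```python
def cube_vertices(a, ab, ac, ad):
    """Eight corners of an axis-aligned cuboid with origin corner A and
    mutually perpendicular edge lengths AB, AC, AD.

    Mapping AD -> x, AB -> y, AC -> z is an arbitrary illustrative choice.
    """
    x, y, z = a
    return [(x + i * ad, y + j * ab, z + k * ac)
            for i in (0, 1) for j in (0, 1) for k in (0, 1)]
```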
Step S802, calculating texture coordinates of the model to be rendered according to the vertex coordinates of the model to be rendered.
Since the texture is four-sided continuous (tiles seamlessly), the UV values can be calculated from the side lengths of the newly created cube model. For example, in the original model the UV value at point A is (0,0) and the UV value at point D is (1,0). Denote the corresponding merged vertices as A1 and D1, compute the length L between A and D and the length L1 between A1 and D1, and let N = L1 / L. The UV value at A1 is then (0,0) and the UV value at D1 is (N, 0). The UV values of the other vertices are calculated in the same way.
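The UV scaling above can be written out directly; because the texture tiles seamlessly, U is allowed to exceed 1 (the function name is illustrative):

```python
import math

def merged_edge_uv(a, d, a1, d1):
    """UV values for the merged edge A1 -> D1, given that the original
    edge A -> D maps to U in [0, 1].

    N = |A1 D1| / |A D|, so UV(A1) = (0, 0) and UV(D1) = (N, 0);
    N > 1 simply repeats the seamlessly tiling texture.
    """
    n = math.dist(a1, d1) / math.dist(a, d)
    return (0.0, 0.0), (n, 0.0)
```

For a merged edge three times the original length, D1 receives U = 3, so the texture repeats three times at its original density instead of being stretched.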
And S803, generating the model to be rendered according to the vertex coordinates, the normal vector and the texture coordinates, and finishing the rendering of the model through a three-dimensional rendering engine.
Fig. 9A shows the model rendered in the style of fig. 2. A coordinate system is first established on the model, with A as the origin and the mutually perpendicular edges AB, AC and AD as the three coordinate axes. After the models are merged along the AD direction, the result is as shown in fig. 9B. Merging then proceeds along the AB direction; since the model has only one sub-model in that direction, the merged model does not change. Finally, merging along the AC direction yields the final result shown in fig. 9C.
Comparing fig. 9C with fig. 9A, the number of coincident surfaces in fig. 9C is much smaller, so far fewer triangle faces need to be rendered for the model of fig. 9C than for that of fig. 9A.
In this embodiment, the model rendering method reduces the coincident surfaces between models by merging a plurality of regular models, thereby reducing the number of triangle faces and DrawCalls to be rendered, improving rendering efficiency, and helping block-building games run smoothly on mobile devices.
Corresponding to the above embodiment of the model rendering method, a model rendering system, described with reference to fig. 10, includes the following modules:
a model grouping module 1001, configured to group multiple models according to types of the models to obtain one or more classification groups;
a grouping and merging module 1002, configured to merge the models in each classification group respectively to obtain multiple merging group results;
a model merging module 1003, configured to generate a merging model according to a merging result;
and a model rendering module 1004, configured to render and display each merged model.
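The four modules can be sketched as a thin pipeline; the class and its injected callables (`type_key`, `merge_group`, `render`) are illustrative stand-ins, not an API from the patent or any engine:

```python
class ModelRenderingSystem:
    """Illustrative skeleton of the four modules in fig. 10.

    The grouping key, merge routine, and renderer are injected so the
    sketch stays independent of any concrete engine.
    """

    def __init__(self, type_key, merge_group, render):
        self.type_key = type_key        # model -> type (e.g. name + orientation)
        self.merge_group = merge_group  # list of models -> merged result
        self.render = render            # merged result -> drawn output

    def run(self, models):
        groups = {}                     # model grouping module 1001
        for m in models:
            groups.setdefault(self.type_key(m), []).append(m)
        # grouping-and-merging / model merging modules 1002-1003
        merged = [self.merge_group(g) for g in groups.values()]
        # model rendering module 1004
        return [self.render(r) for r in merged]
```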
The model rendering system provided by this embodiment of the present invention has the same implementation principle and technical effect as the foregoing embodiment of the model rendering method; for brevity, where this embodiment omits a detail, reference may be made to the corresponding content in the foregoing method embodiment.
This embodiment further provides an electronic device, whose schematic structural diagram is shown in fig. 11. The electronic device includes a processor 101 and a memory 102; the memory 102 stores one or more computer instructions that are executed by the processor 101 to implement the model rendering method described above.
The electronic device shown in fig. 11 further includes a bus 103 and a communication interface 104, and the processor 101, the communication interface 104, and the memory 102 are connected through the bus 103.
The memory 102 may include a high-speed random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The bus 103 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in fig. 11, but this does not indicate only one bus or one type of bus.
The communication interface 104 is configured to connect with at least one user terminal and other network units through a network interface, and to send the packaged IPv4 message to the user terminal through the network interface.
The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated hardware logic circuits or by software-form instructions in the processor 101. The processor 101 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present disclosure. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present disclosure may be directly implemented by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 102, and the processor 101 reads the information in the memory 102 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be covered by it. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of model rendering, the method comprising:
grouping the models according to the types of the models to obtain a plurality of classification groups;
respectively merging the models in each classification group to obtain one or more merging results;
generating a merging model according to the merging result;
and rendering and displaying the merged model.
2. The method of claim 1, wherein the step of grouping the plurality of models according to the types of the models to obtain a plurality of classification groups comprises:
obtaining the type of each model;
establishing a corresponding classification group aiming at each type;
traversing all the models that need to be merged, and assigning each model to the corresponding classification group according to its type information, to obtain a plurality of classification groups.
3. The method according to claim 2, wherein the step of traversing all the models that need to be merged and dividing each model into the corresponding classification groups according to different types of information to obtain a plurality of classification groups comprises:
traversing all the models to be combined to obtain the types of all the models;
and if a newly added model exists among all the models, adding a classification group corresponding to the newly added model.
4. The method of claim 1, wherein the type of the model is determined by a name of the model and an orientation of the model.
5. The method of claim 4, wherein the step of merging the models in each of the classification groups to obtain one or more merged results comprises:
initializing a coordinate system;
obtaining world coordinates and normal vectors of the same group of models;
and respectively combining the same group of models in each direction according to the direction of the coordinate system to obtain a plurality of combined results.
6. The method of claim 5, wherein the step of merging the same set of models in each direction according to the direction of the coordinate system comprises:
obtaining surface information of the same group of models;
if two models in the same group have an identical surface and are located on opposite sides of that surface, merging the two models into one model.
7. The method of claim 5, wherein the step of combining the same set of models in each direction according to the directions of the coordinate system to obtain a plurality of combined results comprises:
newly building a temporary merging group; the temporary merging group is used for storing merging results in each direction;
combining the models in each direction in the coordinate axes of the coordinate system in sequence to obtain a combined result in each direction in the coordinate axes;
and sequentially storing the merging results in the temporary merging groups to obtain a plurality of merging results.
8. The method of claim 1, wherein the step of generating a merged model from the merged results comprises:
obtaining vertex coordinates and normal vectors of the model to be rendered in the merging result;
calculating texture coordinates of the model to be rendered according to the vertex coordinates of the model to be rendered;
and generating the model to be rendered according to the vertex coordinates, the normal vector and the texture coordinates.
9. A model rendering system, the system comprising:
the model grouping module is used for grouping the models according to the types of the models to obtain a plurality of classification groups;
the grouping and merging module is used for respectively merging the models in each classification group to obtain one or more merging results;
the model merging module is used for generating a merging model according to the merging result;
and the model rendering module is used for rendering and displaying the combined model.
10. An electronic device comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and wherein the processor implements the steps of the method according to any of the preceding claims 1 to 8 when executing the computer program.
CN201911373358.8A 2019-12-26 2019-12-26 Model rendering method, system and electronic device Active CN111063032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911373358.8A CN111063032B (en) 2019-12-26 2019-12-26 Model rendering method, system and electronic device


Publications (2)

Publication Number Publication Date
CN111063032A true CN111063032A (en) 2020-04-24
CN111063032B CN111063032B (en) 2024-02-23

Family

ID=70303894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911373358.8A Active CN111063032B (en) 2019-12-26 2019-12-26 Model rendering method, system and electronic device

Country Status (1)

Country Link
CN (1) CN111063032B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111710020A (en) * 2020-06-18 2020-09-25 腾讯科技(深圳)有限公司 Animation rendering method and device and storage medium
CN112473127A (en) * 2020-11-24 2021-03-12 杭州电魂网络科技股份有限公司 Method, system, electronic device and storage medium for large-scale same object rendering
CN114042311A (en) * 2021-11-15 2022-02-15 中国联合网络通信集团有限公司 Information processing method, edge server, electronic device, and computer medium
CN114818097A (en) * 2022-07-01 2022-07-29 中国建筑西南设计研究院有限公司 Construction engineering design three-dimensional model dynamic description method based on operation rule
WO2023202023A1 (en) * 2022-04-22 2023-10-26 北京字跳网络技术有限公司 Batch rendering method, apparatus, device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015188749A1 (en) * 2014-06-10 2015-12-17 Tencent Technology (Shenzhen) Company Limited 3d model rendering method and apparatus and terminal device
CN106780686A (en) * 2015-11-20 2017-05-31 网易(杭州)网络有限公司 The merging rendering system and method, terminal of a kind of 3D models
WO2019052371A1 (en) * 2017-09-12 2019-03-21 阿里巴巴集团控股有限公司 3d model data processing method, device and system
CN109816762A (en) * 2019-01-30 2019-05-28 网易(杭州)网络有限公司 A kind of image rendering method, device, electronic equipment and storage medium
CN109816763A (en) * 2018-12-24 2019-05-28 苏州蜗牛数字科技股份有限公司 A kind of method for rendering graph
CN109949413A (en) * 2019-03-27 2019-06-28 武汉数文科技有限公司 Model display method, system and electronic equipment





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant