CN112669418A - Model rendering method and device - Google Patents

Model rendering method and device Download PDF

Info

Publication number
CN112669418A
CN112669418A (application number CN202011532448.XA)
Authority
CN
China
Prior art keywords
data
rendering
model
models
vertex
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011532448.XA
Other languages
Chinese (zh)
Inventor
吕天胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Pixel Software Technology Co Ltd
Original Assignee
Beijing Pixel Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Pixel Software Technology Co Ltd filed Critical Beijing Pixel Software Technology Co Ltd
Priority to CN202011532448.XA priority Critical patent/CN112669418A/en
Publication of CN112669418A publication Critical patent/CN112669418A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present application provides a model rendering method and device. The method includes: respectively obtaining model data sets of at least two different types of models to be rendered, where each data set at least includes vertex data and is used to represent the target state of its type of model to be rendered; merging the data sets of the models to obtain merged data, where the merged data includes the data sets of the models and index data corresponding to those data sets; and sending a rendering instruction and the merged data to an image processor, where the rendering instruction at least includes the vertex data and the index data in the merged data. This reduces the operation burden of the image processor and improves the user experience.

Description

Model rendering method and device
Technical Field
The embodiment of the application relates to the field of rendering, in particular to a method and a device for rendering a model.
Background
In the related art, characters are an indispensable part of an animation. Each character is fitted with equipment through attachment points, and each piece of equipment requires its own rendering submission before the animation can advance to the next frame. An animation contains many characters, and the amount of attached equipment keeps growing; as the number of rendering submissions increases, the frame rate of the game drops, the burden on the processor grows, and problems such as stuttering occur.
Therefore, how to reduce the operation burden of the processor and improve the user experience has become an urgent problem to be solved.
Disclosure of Invention
The embodiment of the application provides a model rendering method and device that can at least batch-render different types of skeletal animation models and reduce the number of rendering submissions, thereby improving the running efficiency of the animation, reducing the operation burden of the processor, and improving the user experience.
In a first aspect, an embodiment of the present application provides a method for rendering a model, where the method includes: respectively obtaining model data sets of at least two different types of models to be rendered, where each data set at least includes vertex data and is used to represent the target state of its type of model to be rendered; merging the data sets of the models to obtain merged data, where the merged data includes the data sets of the models and index data corresponding to those data sets; and sending a rendering instruction to an image processor, where the rendering instruction at least includes the vertex data and the index data in the merged data.
Therefore, through this model rendering method, the embodiment of the application can batch-render different types of skeletal animation models and reduce the number of rendering submissions, thereby improving the operation efficiency of the animation, reducing the operation burden of the processor, and improving the user experience.
With reference to the first aspect, in one implementation, the index data is used to index the vertex data to correspond to the models.
Therefore, in the embodiment of the application, by establishing the index data, the vertex data can be made to correspond to each model in the rendering process.
With reference to the first aspect, in one embodiment, the vertex data is obtained by copying each vertex data of the models.
Therefore, the embodiment of the present application places the vertex data of each model into the merged data by copying the vertex data.
With reference to the first aspect, in an implementation manner, the merged data further includes an index number corresponding to each vertex; the index numbers are obtained as follows: the vertices of the first model are ordered, and that ordering is used directly as the first model's index numbers in the vertex data; from the second model onward, the total number of vertices of all models preceding the current model is recorded as N, and the index number of each vertex of the current model is that vertex's ordering plus N, where N is an integer greater than or equal to 1.
Therefore, according to the embodiment of the application, the corresponding index numbers are established for the vertex data, the vertex data corresponding to each model can be quickly found in the merged data, the searching times of the processor are reduced, the operation burden is reduced, and the operation speed is accelerated.
With reference to the first aspect, in one implementation, the merged data includes the vertex data and the index data corresponding to the vertex data. After merging the data sets of the models to obtain the merged data, the method further includes: recording the model serial number of each model in the merged data; creating a map array, storing the maps of each model in the map array, and recording the map serial number corresponding to each model; creating a bone matrix array, calculating new bone matrix data for each model from its coordinates, orientation, and scaling in local space, storing the new bone matrix data in the bone matrix array, and recording the bone serial number corresponding to each model; and creating a parameter array, storing the material parameters corresponding to each model in the parameter array, and recording the parameter serial number corresponding to each model.
Therefore, by establishing arrays for storing the data of each model and establishing serial numbers, the embodiment of the application can quickly find the bone matrix data, maps, material parameters, and the like corresponding to each model in the merged data, reducing the number of lookups by the processor, reducing the operation burden, and speeding up the operation.
With reference to the first aspect, in an embodiment, the vertex data includes a batching serial number corresponding to each vertex, obtained as follows: a batching serial number consisting of the model serial number, the map serial number, the starting number among the bone serial numbers, and the parameter serial number is stored in each vertex. Sending rendering instructions to the image processor includes: sending the rendering instruction, the merged data, and the batching serial numbers to the image processor.
Therefore, by establishing batching serial numbers for the vertices in the vertex data, the embodiment of the application can quickly find the rendering data in each array from a vertex's batching serial number after rendering is submitted, and render multiple types of models at one time, thereby improving the running efficiency of the animation, reducing the running load of the processor, and improving the user experience.
With reference to the first aspect, in an embodiment, before the obtaining the data sets of the models of the at least two different types of models to be rendered, the method further includes: and creating rendering material parameters for the at least two different types of models to be rendered.
Therefore, according to the embodiment of the application, the standard of the rendering material can be unified by unifying the rendering material parameters of each model of the model to be rendered, so that the rendering process can be accelerated.
In a second aspect, an embodiment of the present application provides a method for rendering a model, where the method includes: receiving a rendering instruction, where the rendering instruction at least includes the vertex data and the index data in the merged data; obtaining the rendering data in the data set of each model according to the vertex data and the batching serial numbers corresponding to the vertex data, where the rendering data at least includes bone matrix data; and batch-rendering the at least two different types of models to be rendered at one time using the rendering data to obtain the rendered models.
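As an illustrative simulation of the second-aspect flow (written in Python with hypothetical names; the actual lookup would run on the image processor), each vertex's batching serial number can select its bone matrices, map, and material parameters from the merged arrays:

```python
# Hypothetical sketch: resolve one vertex's rendering data from its
# batching serial number (model, bone start, map serial, parameter serial).
# Serial numbers for maps/parameters are 1-based, as in the text's examples.

def fetch_rendering_data(batch_sn, bone_array, map_array, param_array,
                         bones_per_model):
    model_sn, bone_start, map_sn, param_sn = batch_sn
    return {
        "model": model_sn,
        # slice this model's bone matrices out of the merged bone array
        "bones": bone_array[bone_start:bone_start + bones_per_model],
        "map": map_array[map_sn - 1],
        "params": param_array[param_sn - 1],
    }

# Model A contributes 5 bone matrices, model B contributes 3.
bone_array = ["a"] * 5 + ["b"] * 3
data = fetch_rendering_data(("B", 5, 2, 2), bone_array,
                            ["mapA", "mapB"], ["pA", "pB"],
                            bones_per_model=3)
print(data["bones"], data["map"])  # ['b', 'b', 'b'] mapB
```

This mirrors the lookup the image processor performs once per vertex during the single batched draw.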
In a third aspect, an embodiment of the present application provides an apparatus for model rendering, including an obtaining module, a merging module, and a sending module. The obtaining module is configured to respectively obtain model data sets of at least two different types of models to be rendered, where each data set at least includes vertex data and is used to represent the target state of its type of model to be rendered. The merging module is configured to merge the data sets of the models to obtain merged data, where the merged data includes the data sets of the models and index data corresponding to those data sets. The sending module is configured to send a rendering instruction and the merged data to an image processor, where the rendering instruction at least includes the vertex data and the index data in the merged data.
With reference to the third aspect, in one embodiment, the index data is used to index the vertex data to correspond to each model.
With reference to the third aspect, in one embodiment, the vertex data is obtained by copying each vertex data of the models.
With reference to the third aspect, in an embodiment, the merged data further includes an index number corresponding to each vertex; the index numbers are obtained as follows: the vertices of the first model are ordered, and that ordering is used directly as the first model's index numbers in the vertex data; from the second model onward, the total number of vertices of all models preceding the current model is recorded as N, and the index number of each vertex of the current model is that vertex's ordering plus N, where N is an integer greater than or equal to 1.
With reference to the third aspect, in one embodiment, the merged data includes the vertex data and the index data corresponding to the vertex data. The merging module is specifically configured to: record the model serial number of each model in the merged data; create a map array, store the maps of each model in the map array, and record the map serial number corresponding to each model; create a bone matrix array, calculate new bone matrix data for each model from its coordinates, orientation, and scaling in local space, store the new bone matrix data in the bone matrix array, and record the bone serial number corresponding to each model; and create a parameter array, store the material parameters corresponding to each model in the parameter array, and record the parameter serial number corresponding to each model.
With reference to the third aspect, in an embodiment, the vertex data includes a batching serial number corresponding to each vertex, obtained as follows: a batching serial number consisting of the model serial number, the map serial number, the starting number among the bone serial numbers, and the parameter serial number is stored in each vertex. Sending rendering instructions to the image processor includes: sending the rendering instruction, the merged data, and the batching serial numbers to the image processor.
With reference to the third aspect, in one embodiment, the obtaining module is further configured to create rendering material parameters for the at least two different types of models to be rendered.
In a fourth aspect, an embodiment of the present application provides an apparatus for model rendering, including: the device comprises a receiving module, a reading module and a rendering module; the receiving module is configured to receive a rendering instruction, wherein the rendering instruction at least comprises vertex data and index data in the merged data; the reading module is configured to read rendering data in a data set of each model according to the vertex data and the batching serial number corresponding to the vertex data, wherein the rendering data at least comprises bone matrix data; and the rendering module is configured to perform batch rendering on at least two different types of models to be rendered at one time by using the rendering data to obtain a rendering model.
In a fifth aspect, an embodiment of the present application provides a central processing unit, including: a processing module, a storage module and a bus, wherein the processing module is connected to the storage module through the bus, and the storage module stores computer readable instructions, and when the computer readable instructions are executed by the processing module, the processing module is configured to implement the method according to the first aspect and all embodiments thereof.
In a sixth aspect, an embodiment of the present application provides an image processor, including: a processing module, a storage module and a bus, wherein the processing module is connected to the storage module through the bus, and the storage module stores computer readable instructions for implementing the method according to the second aspect when the computer readable instructions are executed by the processing module.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a server, the computer program implements any one of the methods described above.
Drawings
FIG. 1 is a system diagram of model rendering according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for rendering a model according to an embodiment of the present application;
FIG. 3 is a flow chart of another method for rendering a model according to an embodiment of the present application;
FIG. 4 shows the internal modules of a model rendering device according to an embodiment of the present application;
FIG. 5 is a block diagram of an apparatus for rendering a model according to an embodiment of the present application;
FIG. 6 shows the internal modules of a central processing unit according to an embodiment of the present application;
FIG. 7 shows the internal modules of an image processor according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
The method steps in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The embodiments of the application can be applied to various model rendering scenarios, for example, three-dimensional games in which each moving character carries several weapons and each weapon is rendered from a weapon model. When too many game characters cause more and more weapons to need rendering, the processor submits at least one rendering per weapon type while rendering weapons with different types of weapon models (that is, it respectively obtains the model data sets of at least two different types of models to be rendered); with too many characters, the game frame rate drops and the user experience suffers. In such a weapon-rendering scenario, the method in the embodiment of the present application can use the merged data to batch-render at least two different types of models to be rendered at one time to obtain the rendered animation. It is to be understood that the application scenarios of the embodiments of the present application are not limited thereto.
In the related art, characters are an indispensable part of an animation. Each character is fitted with equipment through attachment points, and each piece of equipment requires its own rendering submission before the animation can advance to the next frame. An animation contains many characters, and the amount of attached equipment keeps growing; as the number of rendering submissions increases, the frame rate of the game drops, the burden on the processor grows, and problems such as stuttering occur. Therefore, how to reduce the operation burden of the processor and improve the user experience has become an urgent problem to be solved.
In view of the foregoing problems, an embodiment of the present application provides a method and an apparatus for model rendering. The method respectively obtains the model data sets of at least two different types of models to be rendered, where each data set at least includes vertex data and is used to represent the target state of its type of model to be rendered; merges the data sets of the models to obtain merged data, where the merged data includes the data sets of the models and index data corresponding to those data sets; and sends a rendering instruction and the merged data to an image processor, where the rendering instruction at least includes the vertex data and the index data in the merged data.
A system for model rendering is described below with reference to FIG. 1, which shows a model rendering system in an embodiment of the present application, including a data obtaining server 110 and a rendering server 140. The data obtaining server 110 uses the central processing unit 120 to execute a program that obtains the merged data, and the rendering server 140 may use the image processor 130 to batch-render the merged data obtained from the data obtaining server 110. The central processing unit is used to respectively obtain the data sets of the models of at least two different types of models to be rendered, merge the data sets of the models to obtain merged data, and send rendering instructions to the image processor. The image processor is used to receive the rendering instruction sent by the central processor, obtain the rendering data in the data set of each model according to the vertex data and the batching serial numbers corresponding to the vertex data, and batch-render the at least two different types of models to be rendered at one time using the rendering data to obtain the rendered models.
The following describes in detail, with reference to fig. 2, implementation steps of a method for processing model rendering performed by the data acquisition server 110, where the steps shown in fig. 2 include:
s210, respectively obtaining model data sets of at least two different types of models to be rendered.
In one embodiment, rendering material parameters are created for the at least two different types of models to be rendered.
Before obtaining the data sets of the models of at least two different types of models to be rendered, the central processing unit unifies the rendering material parameters required for rendering the weapons: the number of maps and the number of parameters in the rendering material are fixed, the rendering material used by each animation has the same maps and parameters, and the map sizes are the same. For example, the base maps of all types are 512 × 512 in size, the normal maps are 256 × 256 in size, and the total number of base maps does not exceed 256.
It should be noted that the base map represents the color or pattern of the model to be rendered, and the normal map represents the bump information, flatness, and lighting intensity of the animation to be rendered.
It should be noted that the rendering material parameters may involve various materials: they may include a base map and a normal map, and may further include a highlight map; the embodiment of the present application is not limited thereto.
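The unified-material constraints above (fixed map counts, uniform sizes, a cap on the total number of base maps) can be sketched as a pre-batching check. This is a minimal illustration: the sizes and the 256-map cap come from the example in the text, while all function and field names are hypothetical.

```python
# Sketch of the unified rendering-material check described above.
# Constants follow the text's example; names are illustrative.

BASE_MAP_SIZE = (512, 512)
NORMAL_MAP_SIZE = (256, 256)
MAX_BASE_MAPS = 256

def validate_materials(materials):
    """materials: list of dicts with 'base_map_size' and 'normal_map_size'.
    Raises ValueError if the materials cannot be batched together."""
    if len(materials) > MAX_BASE_MAPS:
        raise ValueError("too many base maps to batch")
    for m in materials:
        if m["base_map_size"] != BASE_MAP_SIZE:
            raise ValueError("base map size must be uniform")
        if m["normal_map_size"] != NORMAL_MAP_SIZE:
            raise ValueError("normal map size must be uniform")
    return True

mats = [{"base_map_size": (512, 512), "normal_map_size": (256, 256)}] * 3
print(validate_materials(mats))  # True
```

Unifying sizes up front is what later allows all maps to live in shared arrays indexed by serial number.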
Therefore, according to the embodiment of the application, the standard of the rendering material can be unified by unifying the rendering material parameters of each model of the model to be rendered, so that the rendering process can be accelerated.
In one embodiment, model data sets of at least two different types of models to be rendered are obtained respectively, where each data set at least includes vertex data and is used to represent the target state of its type of model to be rendered.
When the game runs, the central processor creates a base map array, a normal map array, and a rendering material array for storing the corresponding data in the subsequent data-merging step.
After the arrays are created, the vertex data, bone matrix data, and index data of the at least two different types of models to be rendered are obtained. The obtained data set of each model may further include the base map, the normal map, and the like, and may vary according to the actual situation; the embodiment of the present application is not limited in this respect.
The above describes a process in which the central processing unit respectively obtains data sets of models of at least two different types of models to be rendered, and the following describes a process in which the data sets of the models are merged to obtain merged data.
And S220, merging the data sets of the models to obtain merged data.
In one embodiment, the data sets of the models are merged to obtain merged data, where the merged data includes the data sets of the models and the index data corresponding to those data sets. After the data sets of the models are obtained, they are merged; in the merging process, the index data is used to index the vertex data so that it corresponds to each model.
Therefore, in the embodiment of the application, by establishing the index data, the vertex data can be made to correspond to each model in the rendering process.
In one embodiment, the vertex data in the data set for each model is obtained by copying the vertex data for each model.
In one embodiment, the merged data further includes an index number corresponding to each vertex; the index numbers are obtained as follows: the vertices of the first model are ordered, and that ordering is used directly as the first model's index numbers in the vertex data; from the second model onward, the total number of vertices of all models preceding the current model is recorded as N, and the index number of each vertex of the current model is that vertex's ordering plus N, where N is an integer greater than or equal to 1.
In the process of merging the data, the central processing unit creates a total rendering model and copies the vertex data of each model in its data set directly into the total rendering model. The vertices of the first model are ordered first, and that ordering is used directly as the first model's index numbers in the vertex data; from the second model onward, the total number of vertices of all models preceding the current model is recorded as N, and the index number of each vertex of the current model is that vertex's ordering plus N.
For example, suppose there are two models A and B, where the vertex data of model A is 1, 2, 3 and the vertex data of model B is P, D, F. A total rendering model C is created, and the vertex data of models A and B is first put into model C, so the vertex data in model C is 1, 2, 3, P, D, F. Index numbers are then assigned to the vertex data in model C: model A has 3 vertices, so 3 is added to the ordering of the vertex data of model B, and the ordering corresponding to the vertex data in model C is 1, 2, 3, 4, 5, 6. Thus the vertices at positions 1, 2, 3 belong to model A, and the vertices at positions 4, 5, 6 belong to model B.
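The merge-and-offset step above can be sketched in a few lines of Python. This is an illustrative sketch, not the patent's implementation: it uses 0-based indices (the text counts vertices from 1), and the model names and vertex labels follow the A/B example.

```python
# Sketch of merging vertex data and offsetting index numbers, as described
# above: each model's indices are shifted by N, the total vertex count of
# all models merged before it.

def merge_models(models):
    """models: list of (name, vertices, indices) triples.
    Returns the merged vertex list and the merged, offset index list."""
    merged_vertices = []
    merged_indices = []
    for name, vertices, indices in models:
        n = len(merged_vertices)          # N: vertices already merged
        merged_vertices.extend(vertices)  # copy this model's vertex data
        merged_indices.extend(i + n for i in indices)
    return merged_vertices, merged_indices

model_a = ("A", ["1", "2", "3"], [0, 1, 2])
model_b = ("B", ["P", "D", "F"], [0, 1, 2])
verts, idx = merge_models([model_a, model_b])
print(verts)  # ['1', '2', '3', 'P', 'D', 'F']
print(idx)    # [0, 1, 2, 3, 4, 5]
```

The offset is what lets a single index buffer address every model's vertices in the total rendering model.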
It should be noted that, in the process of merging data, a total rendering model may be newly created and submitted for batch rendering, or merged data may be directly submitted for batch rendering.
Therefore, according to the embodiment of the application, the corresponding index numbers are established for the vertex data, the vertex data corresponding to each model can be quickly found in the merged data, the searching times of the processor are reduced, the operation burden is reduced, and the operation speed is accelerated.
In one embodiment, the merged data includes the vertex data and the index data corresponding to the vertex data. After merging the data sets of the models to obtain the merged data, the method further includes: recording the model serial number of each model in the merged data; creating a map array, storing the maps of each model in the map array, and recording the map serial number corresponding to each model; creating a bone matrix array, calculating new bone matrix data for each model from its coordinates, orientation, and scaling in local space, storing the new bone matrix data in the bone matrix array, and recording the bone serial number corresponding to each model; and creating a parameter array, storing the material parameters corresponding to each model in the parameter array, and recording the parameter serial number corresponding to each model.
A map array, comprising a base map array and a normal map array, is created; the maps of each model are put into the created map array, the map serial numbers are recorded, and the map elements in the map array are matched to the corresponding models according to those serial numbers.
For example, model A includes map 1 and map 2, and model B includes map 3 and map 4. Map 1 and map 2 are added to the map array and marked as 1, indicating that they are the maps of model A; map 3 and map 4 are marked as 2, indicating that they are the maps of model B.
A bone matrix array is created, whose size is the sum of the bone counts of all models. New bone matrix data is calculated from the coordinates, orientation, and scaling in local space and stored in the bone matrix array, and the bone count corresponding to each model is recorded. The starting index of the first model's n bone matrices is marked as 0; from the second model onward, the starting index of each model's bone matrices is the sum of the bone counts of all preceding models.
For example, model A is influenced by 5 bones, so it occupies 5 elements in the bone matrix array; the value recorded for the vertices of model A is 0, and the value for the vertices of model B is 5. The starting serial number of each model's bone matrices serves as the bone serial number linking that model to its bone matrices, and so on: with more models, each model's starting serial number is the sum of the bone counts of all preceding models.
A parameter array is created for storing the base color, normal intensity, and the like of the rendering material, and may also store other parameters such as highlight intensity.
Therefore, according to the embodiment of the application, the arrays for storing the data of the models are established, the serial numbers are established, the bone matrix data, the mapping, the material parameters and the like corresponding to the models can be quickly found in the merged data, the searching times of the processor are reduced, the operation burden is reduced, and the operation speed is accelerated.
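The array-building steps above can be sketched together. This is an illustrative Python sketch (all field names are hypothetical): one pass over the models fills the map array, the bone matrix array, and the parameter array, while recording each model's serial numbers — in particular the bone starting number as a cumulative sum, matching the A-marked-0 / B-marked-5 example.

```python
# Sketch of building the per-model arrays and serial numbers described above.
# Map and parameter serial numbers are 1-based, as in the text's examples;
# the bone starting number is the sum of all preceding models' bone counts.

def build_arrays(models):
    """models: list of dicts with 'name', 'maps', 'bone_matrices', 'params'."""
    map_array, bone_array, param_array, records = [], [], [], []
    for m in models:
        map_sn = len(records) + 1        # map serial number for this model
        map_array.extend(m["maps"])
        bone_start = len(bone_array)     # cumulative bone count so far
        bone_array.extend(m["bone_matrices"])
        param_sn = len(param_array) + 1
        param_array.append(m["params"])
        records.append({"model": m["name"], "map_sn": map_sn,
                        "bone_start": bone_start, "param_sn": param_sn})
    return map_array, bone_array, param_array, records

models = [
    {"name": "A", "maps": ["map1", "map2"], "bone_matrices": ["a"] * 5,
     "params": {"base_color": (1, 1, 1)}},
    {"name": "B", "maps": ["map3", "map4"], "bone_matrices": ["b"] * 3,
     "params": {"base_color": (0, 0, 0)}},
]
_, bones, _, recs = build_arrays(models)
print(recs[0]["bone_start"], recs[1]["bone_start"])  # 0 5
```

These records are exactly what the batching serial numbers in the next step are assembled from.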
In one embodiment, the vertex data includes a batching serial number corresponding to each vertex, obtained as follows: a batching serial number consisting of the model serial number, the map serial number, the starting number among the bone serial numbers, and the parameter serial number is stored in each vertex. Sending rendering instructions to the image processor includes: sending the rendering instruction, the merged data, and the batching serial numbers to the image processor.
In the total rendering model, a batching serial number is established for each vertex. It may include 4 values: the first value is the model serial number corresponding to each model, the second is the starting serial number of the bone matrices corresponding to each model, the third is the map serial number corresponding to each model, and the fourth is the parameter serial number corresponding to each model.
It should be noted that the batching serial number may include the serial number corresponding to each model, the serial number corresponding to the bone matrix array, and the serial number corresponding to the map array, and may also include the serial number corresponding to the parameter array, and the like, determined according to the actual rendering situation; the embodiment of the present application is not limited thereto.
For example, suppose two models, model A and model B, need to be rendered in one batch. The index numbers of model A's vertices in the total rendering model are 1, 2, and 3, and those of model B's vertices are 4, 5, and 6; a batching number is now established for each of the 6 vertices. Taking vertex 1 as an example, its batching number has four values. Since vertex 1 belongs to model A, the first value is "A". The second value is the starting sequence number of model A's skeleton matrices; because model A is the first model and its skeleton matrices are recorded starting at 0, the second value is "0". The third value is the map sequence number; referring to the map array created by the method described above, model A's map is the first entry, so the third value is "1". The fourth value is the parameter sequence number and, similarly, is "1". The batching number of vertex 1 of model A is therefore (A, 0, 1, 1).
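The batch-number construction in the example above can be sketched as follows. This is a hedged illustration: the dictionary field names, bone counts, and the use of plain Python lists are assumptions made for demonstration, not details taken from the embodiment itself.

```python
def build_batch_numbers(models):
    """Build one batching number per vertex: (model number,
    bone-matrix start number, map number, parameter number)."""
    batch_numbers = []
    bone_start = 0  # running start index into the merged skeleton matrix array
    for model in models:
        for _ in range(model["vertex_count"]):
            batch_numbers.append(
                (model["name"], bone_start, model["map_no"], model["param_no"])
            )
        bone_start += model["bone_count"]  # next model's matrices start here
    return batch_numbers

# Models A and B with 3 vertices each, as in the example above; the bone
# counts (20 and 25) are illustrative assumptions.
models = [
    {"name": "A", "vertex_count": 3, "bone_count": 20, "map_no": 1, "param_no": 1},
    {"name": "B", "vertex_count": 3, "bone_count": 25, "map_no": 2, "param_no": 2},
]
print(build_batch_numbers(models)[0])  # batching number of vertex 1: ('A', 0, 1, 1)
```

Note that vertex 4, the first vertex of model B, would then receive ("B", 20, 2, 2), since model A contributed 20 skeleton matrices before it.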
Therefore, in the embodiment of the present application, batching numbers are established for the vertices in the vertex data; after rendering is submitted, the rendering data can be quickly found in the arrays using each vertex's batching number, and multiple types of models are rendered in one pass, which improves the running efficiency of the animation, reduces the processor's operation burden, and improves the user experience.
S230, sending a rendering instruction to the image processor.
In one embodiment, a rendering instruction is sent to an image processor, wherein the rendering instruction includes at least the vertex data and the index data in the merged data.
After the central processing unit finishes executing S210 and S220, it submits a rendering instruction to the image processor. The submitted rendering instruction may include the vertex data, index data, batching numbers, map array, skeleton matrix array, and parameter array in the merged data; alternatively, the rendering instruction may include only the vertex data and batching numbers in the merged data, with the remaining arrays kept in registers of the central processing unit.
Therefore, through this model rendering method, the embodiment of the present application can batch-render different types of skeletal animation models and reduce the number of rendering passes, thereby improving the running efficiency of the animation, reducing the processor's operation burden, and improving the user experience. One model rendering method has been described above in detail; another model rendering method is described below.
S310, receiving a rendering instruction sent by the central processing unit.
In one embodiment, a rendering instruction sent by the central processing unit is received, wherein the rendering instruction comprises at least the vertex data and the index data in the merged data.
The image processor receives the rendering instruction sent by the central processing unit. The received rendering instruction may include the vertex data, index data, batching numbers, map array, skeleton matrix array, and parameter array in the merged data; alternatively, it may include only the vertex data and batching numbers, with the remaining arrays kept in registers of the central processing unit.
S320, acquiring rendering data in the data set of each model according to the vertex data and the batching numbers corresponding to the vertex data.
In one embodiment, the rendering data in the data set of each model is obtained according to the vertex data and the batching number corresponding to each vertex, wherein the rendering data comprises at least bone matrix data.
After receiving the rendering instruction sent by the central processing unit, the image processor obtains the rendering data corresponding to each vertex from the rendering material array, map array, skeleton matrix array, and parameter array according to the batching number of each vertex in the vertex data.
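The per-vertex lookup described above can be sketched as follows. The array names, 1-based sequence numbers, and sample contents are illustrative assumptions; a real image processor would perform the equivalent lookups in a shader against GPU buffers rather than Python dictionaries.

```python
def fetch_render_data(batch_number, map_array, bone_array, param_array):
    """Resolve one vertex's rendering data from the merged arrays
    using the four values of its batching number."""
    model_no, bone_start, map_no, param_no = batch_number
    return {
        "model": model_no,
        "bones": bone_array[bone_start:],  # this model's matrices start here
        "map": map_array[map_no],
        "params": param_array[param_no],
    }

map_array = {1: "map_A", 2: "map_B"}  # map sequence number -> map
param_array = {1: {"base_color": "red"}, 2: {"base_color": "green"}}
bone_array = ["A_matrix_0", "A_matrix_1", "B_matrix_0"]

# Vertex 1 of model A carries the batching number (A, 0, 1, 1):
data = fetch_render_data(("A", 0, 1, 1), map_array, bone_array, param_array)
print(data["map"])  # map_A
```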
S330, performing batch rendering on the at least two different types of models to be rendered by using the rendering data, to obtain the rendered models.
After acquiring the rendering data of the different types of models to be rendered, the image processor batch-renders them in a single pass to obtain the rendered models.
Therefore, through this model rendering method, the embodiment of the present application can batch-render different types of skeletal animation models and reduce the number of rendering passes, thereby improving the running efficiency of the animation, reducing the processor's operation burden, and improving the user experience.
The methods of model rendering are described above in detail; an apparatus for model rendering is described below with reference to fig. 4 and fig. 5.
As shown in fig. 4, an apparatus for model rendering includes: an acquisition module 410, a merging module 420, and a sending module 430.
In an implementation manner, an embodiment of the present application provides an apparatus for model rendering, comprising an acquisition module, a merging module, and a sending module. The acquisition module is configured to respectively obtain the model data sets of at least two different types of models to be rendered, wherein the data sets comprise at least vertex data, and each data set is used for representing the target state of one of the at least two different types of models to be rendered. The merging module is configured to merge the data sets of the models to obtain merged data, wherein the merged data comprises the data sets of the models and the index data corresponding to those data sets. The sending module is configured to send a rendering instruction and the merged data to an image processor, wherein the rendering instruction comprises at least the vertex data and the index data in the merged data.
In one embodiment, the index data is used to index the vertex data to correspond to the models.
In one embodiment, the vertex data is obtained by copying the vertex data of the models.
In one embodiment, the merged data further includes an index number corresponding to each vertex. The index numbers are obtained as follows: the vertices of the first model are numbered in order, and this ordering is taken directly as the first model's index numbers in the vertex data; from the second model onward, the total number of vertices of all models preceding the current model is denoted N, and the index number of each vertex of the current model is its order within the current model plus N, where N is an integer greater than or equal to 1.
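A minimal sketch of this index-number scheme, assuming 1-based vertex ordering within each model as in the A/B example earlier in the description:

```python
def merged_index_numbers(vertex_counts):
    """The first model keeps its own ordering; each later model's index
    numbers are offset by N, the total vertex count of all models before it."""
    indices = []
    offset = 0  # N for the current model
    for count in vertex_counts:
        indices.append([offset + i for i in range(1, count + 1)])
        offset += count
    return indices

# Two models with 3 vertices each:
print(merged_index_numbers([3, 3]))  # [[1, 2, 3], [4, 5, 6]]
```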
In one embodiment, the merged data comprises the vertex data and the index data corresponding to the vertex data, and the merging module is specifically configured to: record the model sequence number of each model in the merged data; create a map array, store each model's maps in the map array, and record the map sequence number corresponding to each model; create a skeleton matrix array, recompute each model's skeleton matrix data from its coordinates, orientation, and scale in local space to obtain new skeleton matrix data, store the new skeleton matrix data in the skeleton matrix array, and record the bone sequence numbers corresponding to each model; and create a parameter array, store the material parameters corresponding to each model in the parameter array, and record the parameter sequence number corresponding to each model.
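The merging steps listed above can be sketched as follows. The field names and the use of plain lists are assumptions made for illustration, and the local-space recomputation of the skeleton matrices is reduced to a comment:

```python
def merge_models(models):
    """Build the map, skeleton matrix, and parameter arrays and record
    each model's sequence numbers in the merged data."""
    merged = {"map_array": [], "bone_array": [], "param_array": [], "records": []}
    for model_no, m in enumerate(models, start=1):
        merged["map_array"].append(m["map"])
        bone_start = len(merged["bone_array"])
        # The embodiment recomputes each matrix from the model's coordinates,
        # orientation, and scale in local space; here we store them as given.
        merged["bone_array"].extend(m["bone_matrices"])
        merged["param_array"].append(m["params"])
        merged["records"].append({
            "model_no": model_no,
            "map_no": len(merged["map_array"]),    # 1-based, as in the example
            "bone_start": bone_start,
            "param_no": len(merged["param_array"]),
        })
    return merged

merged = merge_models([
    {"map": "map_A", "bone_matrices": ["A0", "A1"], "params": {"base_color": "red"}},
    {"map": "map_B", "bone_matrices": ["B0"], "params": {"base_color": "green"}},
])
print(merged["records"][1])  # model B's matrices start at bone index 2
```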
In one embodiment, the vertex data includes a batching number corresponding to each vertex, obtained by storing, in each vertex's data, a batching number consisting of the model sequence number, the map sequence number, the starting sequence number among the bone sequence numbers, and the parameter sequence number; the sending of the rendering instruction to the image processor then comprises sending the rendering instruction, the merged data, and the batching numbers to the image processor.
In one embodiment, the acquisition module is configured to: and creating rendering material parameters for the at least two different types of models to be rendered.
In the embodiment of the present application, the modules shown in fig. 4 can implement the processes in the method embodiments shown in fig. 1 and fig. 2; the operations and/or functions of the respective modules in fig. 4 implement the corresponding flows of those method embodiments. Reference may be made to the description of the method embodiments above; a detailed description is omitted here to avoid redundancy.
As shown in fig. 5, an apparatus for model rendering includes: a receiving module 510, a reading module 520, and a rendering module 530.
In an implementation manner, an embodiment of the present application provides an apparatus for model rendering, comprising a receiving module, a reading module, and a rendering module. The receiving module is configured to receive a rendering instruction, wherein the rendering instruction comprises at least the vertex data and the index data in the merged data. The reading module is configured to read the rendering data in the data set of each model according to the vertex data and the batching number corresponding to each vertex, wherein the rendering data comprises at least bone matrix data. The rendering module is configured to batch-render the at least two different types of models to be rendered in one pass using the rendering data, to obtain the rendered models. In the embodiment of the present application, the modules shown in fig. 5 can implement the processes in the method embodiments of fig. 1 and fig. 3; the operations and/or functions of the respective modules in fig. 5 implement the corresponding flows of those method embodiments. Reference may be made to the description of the method embodiments above; a detailed description is omitted here to avoid redundancy.
The above describes a model rendering apparatus, and a central processor and an image processor are described below with reference to fig. 6 and 7.
As shown in fig. 6, a central processing unit includes a processing module 610, a memory module 620, and a bus 630.
In one embodiment, an embodiment of the present application provides a central processing unit comprising a processing module, a storage module, and a bus, wherein the processing module is connected to the storage module through the bus, the storage module stores computer-readable instructions, and the computer-readable instructions, when executed by the processing module, implement the method of any of the embodiments applied to the central processing unit.
As shown in fig. 7, an image processor includes a processing module 710, a memory module 720, and a bus 730.
In one implementation, an embodiment of the present application provides an image processor comprising a processing module, a storage module, and a bus, wherein the processing module is connected to the storage module through the bus, the storage module stores computer-readable instructions, and the computer-readable instructions, when executed by the processing module, implement the method of any of the embodiments applied to the image processor.
Embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a server, the above method is implemented. Reference may be made to the description in the foregoing method embodiments; a detailed description is omitted here to avoid repetition.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of model rendering, the method comprising:
respectively obtaining model data sets of at least two different types of models to be rendered, wherein the data sets comprise at least vertex data, and each data set is used for representing the target state of one of the at least two different types of models to be rendered;
merging the data sets of the models to obtain merged data, wherein the merged data comprises the data sets of the models and the index data corresponding to the data sets of the models;
sending a rendering instruction to an image processor, wherein the rendering instruction comprises at least the vertex data and the index data in the merged data.
2. The method of claim 1, wherein the index data is used to index the vertex data to correspond to the models.
3. The method of claim 2, wherein the vertex data is obtained by copying the vertex data of the models.
4. The method of claim 3, wherein the merged data further comprises an index number corresponding to each vertex data;
the index number is obtained by:
sequencing the vertexes of the first model, and directly taking the sequencing as an index number of the first model in the vertex data;
and starting from the second model, recording the sum of the number of vertexes of all models in front of the current model as N, wherein the index number of the current model is the sum of the sequence of the vertexes of the current model and N, and N is an integer greater than or equal to 1.
5. The method of claim 4, wherein merging the data comprises: the vertex data and the index data corresponding to the vertex data;
after the merging the data sets of the models to obtain merged data, the method further includes:
recording model serial numbers of the models in the merged data;
creating a map array, storing the maps of the models in the map array, and recording the map serial numbers corresponding to the models;
creating a skeleton matrix array, calculating the skeleton matrix data corresponding to each model according to the coordinates, the orientation and the scaling in a local space to obtain new skeleton matrix data, storing the new skeleton matrix data in the skeleton matrix array, and recording the skeleton serial numbers corresponding to each model;
and creating a parameter array, storing the material parameters corresponding to the models in the parameter array, and recording the parameter serial numbers corresponding to the models.
6. The method of claim 5, wherein the vertex data comprises a batching number corresponding to each vertex data, the batching number being obtained by:
storing, in each vertex's data, the batching serial number consisting of the model serial number, the map serial number, the starting serial number among the bone serial numbers, and the parameter serial number;
the sending rendering instructions to an image processor, comprising:
and sending the rendering instruction, the merging data and the batching serial number to an image processor.
7. The method of claim 1, wherein prior to said obtaining the data sets for each of the at least two different types of models to be rendered, the method further comprises:
and creating rendering material parameters for the at least two different types of models to be rendered.
8. A method of model rendering, the method comprising:
receiving a rendering instruction, wherein the rendering instruction at least comprises vertex data and index data in the merged data;
obtaining rendering data in a data set of each model according to the vertex data and the batching serial number corresponding to the vertex data, wherein the rendering data at least comprises bone matrix data;
and performing batch rendering on the at least two different types of models to be rendered in one pass using the rendering data, to obtain the rendered models.
9. An apparatus for model rendering, comprising: the device comprises an acquisition module, a merging module and a sending module;
the obtaining module is configured to respectively obtain model data sets of at least two different types of models to be rendered, wherein the data sets comprise at least vertex data, and each data set is used for representing the target state of one of the at least two different types of models to be rendered;
the merging module is configured to merge the data sets of the models to obtain merged data, where the merged data includes the data sets of the models and the index data corresponding to the data sets of the models;
the sending module is configured to send a rendering instruction and merging data to an image processor, wherein the rendering instruction at least comprises the vertex data and the index data in the merging data.
10. An apparatus for model rendering, comprising: the device comprises a receiving module, a reading module and a rendering module;
the receiving module is configured to receive a rendering instruction, wherein the rendering instruction at least comprises vertex data and index data in the merged data;
the reading module is configured to read rendering data in a data set of each model according to the vertex data and the batching serial number corresponding to the vertex data, wherein the rendering data at least comprises bone matrix data;
and the rendering module is configured to perform batch rendering on at least two different types of models to be rendered at one time by using the rendering data to obtain a rendering model.
CN202011532448.XA 2020-12-22 2020-12-22 Model rendering method and device Pending CN112669418A (en)

Publications (1)

Publication Number Publication Date
CN112669418A true CN112669418A (en) 2021-04-16


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113350785A (en) * 2021-05-08 2021-09-07 广州三七极创网络科技有限公司 Virtual character rendering method and device and electronic equipment
EP4287131A4 (en) * 2022-04-22 2024-03-27 Beijing Zitiao Network Technology Co., Ltd. Batch rendering method, apparatus, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7209139B1 (en) * 2005-01-07 2007-04-24 Electronic Arts Efficient rendering of similar objects in a three-dimensional graphics engine
US20120154409A1 (en) * 2010-12-15 2012-06-21 Microsoft Corporation Vertex-baked three-dimensional animation augmentation
CN103606180A (en) * 2013-11-29 2014-02-26 广州菲动软件科技有限公司 Rendering method and device of 3D skeletal animation
CN106075909A (en) * 2016-07-15 2016-11-09 珠海金山网络游戏科技有限公司 A kind of system and method that changes the outfit of playing
CN109840931A (en) * 2019-01-21 2019-06-04 网易(杭州)网络有限公司 Conjunction batch render method, apparatus, system and the storage medium of skeleton cartoon


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANGZI DONG等: "Real-Time Large Crowd Rendering with Efficient Character andInstance Management on GPU", INTERNATIONAL JOURNAL OF COMPUTER GAMES TECHNOLOGY, vol. 2019, 26 March 2019 (2019-03-26), pages 1 - 15 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination