CN116012507A - Rendering data processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116012507A
Authority
CN
China
Prior art keywords
rendered
sub
grid data
rendering
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211666816.9A
Other languages
Chinese (zh)
Inventor
朱雨乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xingzhen Technology Shanghai Co ltd
Original Assignee
Xingzhen Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xingzhen Technology Shanghai Co ltd filed Critical Xingzhen Technology Shanghai Co ltd
Priority to CN202211666816.9A
Publication of CN116012507A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Generation (AREA)

Abstract

The present disclosure relates to a rendering data processing method, apparatus, electronic device, and storage medium. The method comprises: determining a plurality of objects to be rendered from a scene to be rendered, where every one of the objects to be rendered has the same rendering code identifier and at least two of them have different texture maps; merging the texture maps of the objects to be rendered to generate a merged map; acquiring the mesh data of each object to be rendered, where the mesh data of an object includes the triangle patches that make up the object; and, in response to a call instruction to the rendering interface, batch-rendering the mesh data of the objects to be rendered based on the merged map. By merging the texture maps of objects with different materials, objects with different materials can be batched using the single material of the merged map, which reduces the number of rendering interface calls, saves CPU overhead, and improves rendering speed.

Description

Rendering data processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technology, and in particular to a rendering data processing method, a rendering data processing apparatus, an electronic device, and a storage medium.
Background
Currently, rendering a game scene is done by calling a rendering interface: the central processing unit (CPU) of the terminal calls the underlying graphics rendering interface to instruct the graphics processing unit (GPU) to perform the rendering operations.
However, because a game scene contains many objects using a wide variety of materials, objects with different materials cannot be rendered within a single call to the rendering interface. The CPU therefore has to call the rendering interface many times, which overloads the CPU and slows down rendering.
Disclosure of Invention
The disclosure provides a rendering data processing method, a rendering data processing device, electronic equipment and a storage medium, and the technical scheme of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a rendering data processing method, including:
determining a plurality of objects to be rendered from a scene to be rendered, wherein the rendering code identifier of each of the plurality of objects to be rendered is the same, and the texture maps of at least two of the plurality of objects to be rendered are different;
merging the texture maps of the objects to be rendered to generate a merged map;
acquiring mesh data of each object to be rendered, wherein the mesh data of each object to be rendered includes the triangle patches constituting that object;
and, in response to a call instruction to the rendering interface, batch-rendering the mesh data of each object to be rendered based on the merged map.
In some possible embodiments, after acquiring the mesh data of each object to be rendered, the method further comprises:
the method comprises the steps of performing segmentation processing on grid data of each object to be rendered to obtain a plurality of sub-grid data sets corresponding to each object to be rendered; each of the plurality of sub-grid data sets includes the same preset number of triangular patches.
In some possible embodiments, splitting the mesh data of each object to be rendered to obtain a plurality of sub-mesh data sets corresponding to each object to be rendered includes:
in response to a call instruction to a mesh splitting interface, equally dividing the triangle patches constituting each object to be rendered according to the preset number of triangle patches, so as to obtain a plurality of sub-mesh data sets corresponding to each object to be rendered;
and when the number of triangle patches in a sub-mesh data set does not reach the preset number, padding that sub-mesh data set with triangle patches to obtain a padded sub-mesh data set.
In some possible embodiments, before batch rendering the mesh data of each object to be rendered, the method includes:
merging and storing the vertex data of each triangle patch in each sub-mesh data set corresponding to each object to be rendered into the same vertex buffer in the graphics processor;
and merging and storing the vertex index data of each triangle patch in each sub-mesh data set corresponding to each object to be rendered into the same index buffer in the graphics processor.
In some possible embodiments, batch rendering the mesh data of each object to be rendered includes:
for each object to be rendered, reading vertex index data of each triangle patch in each sub-grid data group corresponding to the object to be rendered from an index buffer;
based on the vertex index data of each triangular patch, reading vertex data of each triangular patch in each sub-grid data group corresponding to the object to be rendered from a vertex buffer area;
performing a visibility check on each sub-mesh data set corresponding to the object to be rendered to obtain a visibility check result for each sub-mesh data set;
performing occlusion culling on the mesh data of the object to be rendered based on the visibility check result of each sub-mesh data set, to obtain the target mesh data of the object to be rendered;
and rendering the target mesh data of the object to be rendered to obtain the rendered object.
In some possible embodiments, performing visibility verification on each sub-grid data set corresponding to the object to be rendered to obtain a visibility verification result of each sub-grid data set, including:
determining, for each sub-mesh data set, a cubic bounding box of the sub-mesh data set and size information of the cubic bounding box, wherein the size information comprises the position of the center point of the cubic bounding box and the vector from a vertex of the cubic bounding box to its center point;
and determining a visibility verification result of the sub-grid data set based on the size information of the cubic bounding box of the sub-grid data set and viewpoint information of the scene to be rendered.
In some possible embodiments, performing occlusion culling on the mesh data of the object to be rendered based on the visibility check result of each sub-mesh data set includes:
for each sub-mesh data set of the object to be rendered, culling the sub-mesh data set if its visibility check result indicates that it is invisible;
or retaining the sub-mesh data set if its visibility check result indicates that it is visible.
According to a second aspect of embodiments of the present disclosure, there is provided a rendering data processing apparatus, comprising:
a determining module configured to determine a plurality of objects to be rendered from a scene to be rendered, wherein the rendering code identifier of each of the plurality of objects to be rendered is the same, and the texture maps of at least two of the plurality of objects to be rendered are different;
a merging module configured to merge the texture maps of the objects to be rendered to generate a merged map;
an acquisition module configured to acquire mesh data of each object to be rendered, wherein the mesh data of each object to be rendered includes the triangle patches constituting that object;
and a batch processing module configured to batch-render, in response to a call instruction to the rendering interface, the mesh data of each object to be rendered based on the merged map.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute instructions to implement the rendering data processing method of the first aspect of the embodiments of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium whose instructions, when executed by a processor of an electronic device, cause the electronic device to perform the rendering data processing method of the first aspect of embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
A plurality of objects to be rendered are determined from the scene to be rendered, where every object to be rendered has the same rendering code identifier and at least two of the objects have different texture maps; the texture maps of the objects to be rendered are merged to generate a merged map; the mesh data of each object to be rendered, which includes the triangle patches constituting the object, is acquired; and, in response to a call instruction to the rendering interface, the mesh data of the objects to be rendered is batch-rendered based on the merged map. By merging the texture maps of objects with different materials, objects with different materials can be batched using the single material of the merged map, which reduces the number of rendering interface calls, saves CPU overhead, and improves rendering speed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram of an application environment shown in accordance with an exemplary embodiment;
FIG. 2 is a flow chart illustrating a method of rendering data processing according to an exemplary embodiment;
FIG. 3 is a flow chart illustrating one method of deriving a plurality of sub-grid data sets for each object to be rendered, in accordance with an exemplary embodiment;
FIG. 4 is a flowchart illustrating a method of rendering data processing, according to an example embodiment;
FIG. 5 is a flow diagram illustrating a batch rendering of mesh data for each object to be rendered, according to an example embodiment;
FIG. 6 is a flowchart illustrating a visibility check in accordance with an exemplary embodiment;
FIG. 7 is a block diagram of a rendering data processing apparatus, shown according to an exemplary embodiment;
FIG. 8 is a block diagram of an electronic device for rendering data processing, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The user information (including but not limited to user equipment information, user personal information, etc.) related to the present disclosure is information authorized by the user or sufficiently authorized by each party.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment according to an exemplary embodiment, as shown in fig. 1, including a server 01 and a terminal 02. Alternatively, the server 01 and the terminal 02 may be connected through a wireless link, or may be connected through a wired link, which is not limited herein.
Referring to fig. 1, the terminal 02 runs a 3D game and needs to render the game screen while the game is running; the server 01 provides the background services of the 3D game for the terminal 02, including the various data services required when rendering the game screen. Specifically, during rendering of a game frame, the terminal 02 mainly uses a central processing unit (CPU) and a graphics processing unit (GPU): the CPU prepares the mesh data and map data of the frame to be rendered; once the rendering data is ready, the CPU calls the rendering interface, submits draw instructions to the GPU, and transmits the data to be rendered; the GPU then completes the rendering of the frame.
In some possible embodiments, the server 01 may be a stand-alone physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and basic cloud computing services such as big data and artificial intelligence platforms. The operating system running on the server may include, but is not limited to, an Android system, an iOS system, Linux, Windows, Unix, and the like.
In some possible embodiments, the terminal 02 described above may include, but is not limited to, a smart phone, a desktop computer, a tablet computer, a notebook computer, a smart speaker, a digital assistant, an augmented reality (augmented reality, AR)/Virtual Reality (VR) device, a smart wearable device, and the like. Or may be software running on the client, such as an application, applet, etc. Alternatively, the operating system running on the client may include, but is not limited to, an android system, an IOS system, linux, windows, unix, and the like.
In addition, it should be noted that the application environment shown in fig. 1 is merely an example. In practical applications, the rendering data processing method of the embodiment of the present disclosure may be executed by the terminal and the server in cooperation, or the rendering data processing method of the embodiment of the present disclosure may be executed independently by the terminal, which is not limited to a specific application environment.
Fig. 2 is a flowchart illustrating a rendering data processing method according to an exemplary embodiment, and as shown in fig. 2, the rendering data processing method may be applied to a terminal, including the steps of:
in step S201, a plurality of objects to be rendered are determined from the scene to be rendered; the rendering code identification of each object to be rendered in the plurality of objects to be rendered is the same, and the texture maps of at least two objects to be rendered in the plurality of objects to be rendered are different.
In step S203, the texture maps of each object to be rendered are merged to generate a merged map.
In the embodiment of the disclosure, a scene to be rendered is generated by a computer to describe a virtual scene environment and provide a multimedia virtual world. A user can control an operable virtual object in the virtual scene through a terminal device or an operation interface, observe objects, animals, people, landscapes, and other virtual items in the three-dimensional virtual scene from the perspective of the virtual object, or interact with those virtual items or with other virtual objects in the scene. For example, games contain many designed game scenes; once rendering is completed, a scene to be rendered can be presented on the terminal display device as a game scene, and a player can view the various scenes of the game world through the terminal display device.
The scene to be rendered is composed of individual models that contain only points and lines, each model is composed of a mesh (Mesh), and each model represents some virtual item or virtual object in the scene; the smallest basic unit of a mesh is the triangle patch. The mesh can be understood as the skeleton of the object model, while the material is its skin, where a material may include a texture map (Texture) and other rendering parameters.
The embodiment of the disclosure mainly describes how the terminal processes the data to be rendered; as described above, the main computing components involved in rendering on the terminal are the CPU and the GPU. As rendering algorithms and scenes grow more complex, the GPU, a parallel processor, consumes less and computes more efficiently than the CPU, a serial processor, so tasks formerly done by the CPU before rendering, such as culling invisible objects, are moved to the GPU to preserve CPU performance. Accordingly, in the embodiment of the disclosure, the terminal CPU mainly preprocesses the data of the objects to be rendered and calls the rendering interface based on the preprocessed data, and the subsequent work is handed to the GPU to complete the rendering of the scene to be rendered.
Each time the CPU calls the rendering interface, a great deal of preparation work is required, which imposes a significant performance cost on the CPU. In the related art, the models of some objects are combined so that those objects are rendered with one draw call, reducing the number of rendering interface calls and saving CPU overhead. This operation is called "batching". However, only objects with the same material can be batched; objects with different materials still require multiple rendering interface calls.
Based on this, for objects whose materials differ only in their texture maps, the embodiments of the present disclosure merge the texture maps (Texture) referenced by those objects into one larger texture, so that the objects can be batched by submitting a single rendering instruction and calling the rendering interface once.
Therefore, in the embodiment of the present disclosure, a plurality of objects to be rendered are first determined from the scene to be rendered; the texture maps of at least two of these objects are different, but all of them use the same rendering parameters, where the rendering parameters mainly refer to the rendering code identifier, i.e., the objects are all rendered with the same rendering code. In an actual game scene, unimportant objects with similar rendering styles can be rendered with the same rendering code.
Specifically, all objects corresponding to a rendering code identifier can be determined from the scene to be rendered according to that identifier, and each such object is taken as an object to be rendered. The texture maps of the objects to be rendered are then merged to obtain a merged map, so that the objects can be batched using the single material of the merged map even though their original materials differ. For the large number of unimportant objects in a complex scene, this greatly reduces the number of rendering interface calls and saves CPU overhead.
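The map merging step can be illustrated with a minimal sketch. Assuming all source texture maps are the same size, they can be packed into a grid atlas and each object's UV coordinates remapped into its cell; the function names and the simple grid layout are illustrative assumptions, not part of the disclosed method:

```python
import math

def build_atlas_layout(num_textures):
    """Choose a near-square (cols, rows) grid with at least num_textures cells."""
    cols = math.ceil(math.sqrt(num_textures))
    rows = math.ceil(num_textures / cols)
    return cols, rows

def remap_uv(uv, texture_index, cols, rows):
    """Map a (u, v) in [0, 1]^2 of texture `texture_index` into the merged map."""
    col = texture_index % cols
    row = texture_index // cols
    u, v = uv
    return ((col + u) / cols, (row + v) / rows)
```

With four textures the layout is a 2x2 grid; after remapping, every object samples the single merged map, so one material serves all of them.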
In step S205, mesh data of each object to be rendered is acquired; the mesh data of each object to be rendered includes triangle patches constituting each object to be rendered.
In the embodiment of the disclosure, the terminal CPU first acquires the mesh data of each object to be rendered. The mesh data of each object to be rendered includes the triangle patches constituting the object, as well as the number of triangle vertices, the vertex positions, vertex colors, vertex normal directions, the number of triangle patches, and so on.
In some possible embodiments, the rendering data processing method of the embodiments of the present disclosure further includes, after acquiring the mesh data of each object to be rendered, the steps of:
splitting the mesh data of each object to be rendered to obtain a plurality of sub-mesh data sets corresponding to each object to be rendered, wherein each of the plurality of sub-mesh data sets includes the same preset number of triangle patches.
Here, the terminal can either load the mesh data of all objects to be rendered at once and split it all together, or load and split the mesh data of each object to be rendered separately. The purpose of the mesh splitting is to let the subsequent GPU stage cull invisible parts more efficiently.
In a specific embodiment, the foregoing splitting processing is performed on the mesh data of each object to be rendered to obtain a plurality of sub-mesh data sets corresponding to each object to be rendered, which may specifically include the following steps as shown in fig. 3:
in step S301, in response to a call instruction to the mesh segmentation interface, based on a preset number of triangle patches, the triangle patches forming each object to be rendered are equally divided, so as to obtain a plurality of sub-mesh data sets corresponding to each object to be rendered.
In this step, the terminal can call an underlying mesh splitting interface to split the mesh data; here, the mesh splitting may be implemented using the open-source meta algorithm. Specifically, the mesh data of each object to be rendered, i.e., the triangle patches constituting the object, is equally divided according to a preset number of triangle patches; for example, the preset number may be 64. Every 64 triangle patches in the mesh data are then divided into one sub-mesh data set, so the mesh data is split into a plurality of sub-mesh data sets; the number Num of sub-mesh data sets corresponding to each object to be rendered is recorded, and each sub-mesh data set is marked with a consecutive sequence number, denoted Ni, where Ni ranges from 0 to Num-1.
In step S303, when the number of triangle patches in a sub-mesh data set does not reach the preset number, that sub-mesh data set is padded with triangle patches to obtain a padded sub-mesh data set.
In this step, when the mesh data of each object to be rendered is equally divided by the preset number, the last sub-mesh data set may contain fewer triangle patches than the preset number. For any sub-mesh data set whose number of triangle patches does not reach the preset number, the terminal therefore pads the set with triangle patches to obtain a padded sub-mesh data set, so that every sub-mesh data set contains exactly the preset number of triangle patches. This satisfies the requirement that every object must use the same topology when the subsequent GPU stage renders multiple different objects.
In the above embodiment, the terminal equally divides the mesh data of each object to be rendered by the same preset number of triangle patches, so that objects with originally different structures are all represented by sub-mesh data sets with the same topology, which makes it easier for the subsequent GPU stage to achieve finer-grained culling for each object to be rendered.
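The splitting and padding steps above can be sketched as follows. Using degenerate (zero-area) triangles as padding is an illustrative assumption, since such triangles rasterize to nothing:

```python
def split_mesh(triangles, group_size=64):
    """Split a list of triangles (vertex-index triples) into sub-mesh data
    sets of exactly group_size patches, padding the last set with degenerate
    triangles so every set shares the same topology."""
    groups = []
    for start in range(0, len(triangles), group_size):
        group = list(triangles[start:start + group_size])
        if len(group) < group_size:
            # Degenerate triangle: all three indices identical, zero area.
            pad = (group[-1][0],) * 3
            group.extend([pad] * (group_size - len(group)))
        groups.append(group)
    return groups
```

Each resulting group can then carry its consecutive sequence number Ni from 0 to Num-1, as described above.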
In step S207, in response to the call instruction to the rendering interface, the mesh data of each object to be rendered is batch-rendered based on the merge map.
In the embodiment of the disclosure, the terminal CPU calls the rendering interface and initiates a rendering request to the GPU; the terminal GPU, in response to the call instruction of the rendering interface, batch-renders the mesh data of each object to be rendered based on the merged map.
In some possible embodiments, before the batch rendering is performed on the mesh data of each object to be rendered, the rendering data processing method according to the embodiment of the present disclosure further includes the following steps as shown in fig. 4:
in step S401, vertex data of each triangle patch in each sub-grid data set corresponding to each object to be rendered is merged and stored in the same vertex buffer in the graphics processor.
In this step, the GPU opens up a Vertex Buffer (Vertex Buffer) for a plurality of objects to be rendered in the memory in response to a call instruction of the rendering interface, and merges and stores Vertex data of each triangle patch in each sub-grid data set corresponding to each object to be rendered in the Vertex Buffer.
In step S403, vertex index data of each triangle patch in each sub-grid data group corresponding to each object to be rendered is merged and stored in the same index buffer in the graphics processor.
An index buffer (Index Buffer) holds the rendering order of the vertices. Similarly, in this step, the GPU opens up an index buffer in memory for the plurality of objects to be rendered, and merges and stores the vertex index data of each triangle patch in each sub-mesh data set corresponding to each object to be rendered into that index buffer.
In the above embodiment, for a plurality of objects to be rendered that are drawn with the same pipeline state, the split sub-mesh data are combined into the same vertex buffer and submitted to the GPU rendering pipeline. This reduces memory allocation overhead and the number of GPU pipeline state switches and call instructions, and makes full use of the GPU's highly concurrent data processing capability.
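A minimal sketch of the buffer merging: each object's indices are rebased by the number of vertices already in the shared buffer, and a per-object draw range is recorded. The data structures are hypothetical; a real implementation would upload these arrays to GPU vertex and index buffers:

```python
def merge_buffers(meshes):
    """Merge (vertices, indices) pairs into one shared vertex buffer and one
    shared index buffer, returning per-object (first_index, index_count)
    draw ranges."""
    vertex_buffer, index_buffer, draw_ranges = [], [], []
    for vertices, indices in meshes:
        base_vertex = len(vertex_buffer)   # rebase this object's indices
        first_index = len(index_buffer)
        vertex_buffer.extend(vertices)
        index_buffer.extend(i + base_vertex for i in indices)
        draw_ranges.append((first_index, len(indices)))
    return vertex_buffer, index_buffer, draw_ranges
```

The recorded draw ranges let the pipeline locate each object's triangles inside the shared buffers without per-object buffer binds.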
In some possible embodiments, the batch rendering of the mesh data of each object to be rendered may specifically include the following steps as shown in fig. 5:
in step S501, for each object to be rendered, vertex index data of each triangle patch in each sub-grid data set corresponding to the object to be rendered is read from the index buffer.
In this step, during rendering, the vertex index data of each triangle patch in each sub-mesh data set corresponding to the object to be rendered, i.e., the vertex rendering order of each triangle patch, is first read from the index buffer.
In step S503, vertex data of each triangle patch in each sub-grid data group corresponding to the object to be rendered is read from the vertex buffer based on the vertex index data of each triangle patch.
In this step, the corresponding vertex data is read from the vertex buffer in the order given by the vertex index data read in the preceding step, until every sub-mesh data set corresponding to all objects to be rendered has been read.
In step S505, the visibility verification is performed on each sub-grid data set corresponding to the object to be rendered, so as to obtain a visibility verification result of each sub-grid data set.
In this step, consider that a game scene may contain tens of thousands of objects to render, with scene complexity often on the order of thousands of faces, while thousands of lights and hundreds of materials need to be processed. To effectively reduce unnecessary drawing, the terminal therefore performs a visibility check on each sub-mesh data set corresponding to each object to be rendered to obtain a visibility check result for each sub-mesh data set, and subsequent drawing is performed based on those results.
In a specific embodiment, the step S505 may specifically include the following steps as shown in fig. 6:
In step S601, for each sub-grid data set, a cubic bounding box of the sub-grid data set and size information of the cubic bounding box are determined; the size information includes position information of the center point of the cubic bounding box and the vector from a vertex of the cubic bounding box to the center point.
In this step, the terminal performs the visibility check on each sub-grid data set rather than on each object to be rendered as a whole. Since some objects may be partially visible and partially invisible, checking visibility per sub-grid data set avoids culling the visible part of an object along with the invisible part, achieving finer-grained culling.
Here, since each sub-grid data set tends to be geometrically complex, it is difficult to compute its intersection with the view frustum exactly; therefore, the bounding box of each sub-grid data set is used for the intersection test in place of the sub-grid data itself. That is, a bounding box is first determined for each sub-grid data set; the bounding box may be an axis-aligned bounding box (AABB), an oriented bounding box (OBB), or a bounding sphere. In this embodiment, an AABB, also referred to as a cubic bounding box, is employed.
In this step, after the cubic bounding box of each sub-grid data set is determined, the size information of the corresponding cubic bounding box can be obtained; the size information may include position information of the center point of the cubic bounding box and the vector from a vertex of the cubic bounding box to the center point.
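The size information in step S601 can be computed directly from the sub-grid's vertices: the center is the midpoint of the per-axis minima and maxima, and the half-extent is the vector from the center to the maximal corner (the disclosure's vertex-to-center vector is this vector up to sign). A minimal sketch, with function names of our own choosing:

```python
# Hypothetical sketch of step S601: computing the cubic (axis-aligned)
# bounding box of one sub-grid data set as a (center, half_extent) pair.

def aabb_of(vertices):
    """Return the AABB center point and center-to-corner half-extent."""
    mins = [min(v[axis] for v in vertices) for axis in range(3)]
    maxs = [max(v[axis] for v in vertices) for axis in range(3)]
    center = tuple((mn + mx) / 2 for mn, mx in zip(mins, maxs))
    half_extent = tuple((mx - mn) / 2 for mn, mx in zip(mins, maxs))
    return center, half_extent

center, half_extent = aabb_of([(0, 0, 0), (2, 4, 6), (1, 1, 1)])
```

The (center, half-extent) form is convenient for the checks in step S603, because both the distance test and the plane tests consume exactly these two quantities.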
In step S603, a visibility check result of the sub-grid data set is determined based on the size information of the cubic bounding box of the sub-grid data set and viewpoint information of the scene to be rendered.
In this step, when the visibility check is performed based on the cubic bounding box of each sub-grid data set, the visibility check result may be determined by combining multiple culling techniques. Specifically, based on the size information of the cubic bounding box of each sub-grid data set and the viewpoint information of the scene to be rendered (camera position, view angle, and the like), a sub-grid data set may be culled according to its distance from the camera (distance culling) or according to whether it lies within the camera's view frustum (frustum culling); for sub-grid data sets within the camera's visible range, further culling may be performed by determining whether a set is occluded by other sub-grid data sets (occlusion culling).
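The distance and frustum tests above can be sketched against the (center, half-extent) bounding-box form. The snippet below shows a distance test and a single frustum-plane test; a full frustum check would repeat the plane test for all six planes, and the plane, camera, and threshold values here are invented for illustration.

```python
import math

# Illustrative culling tests for one cubic bounding box. A return value of
# True means "cull this sub-grid data set".

def distance_cull(center, camera_pos, max_distance):
    """Distance culling: drop sets farther than max_distance from the camera."""
    return math.dist(center, camera_pos) > max_distance

def aabb_outside_plane(center, half_extent, plane):
    """One frustum-plane test; plane = (nx, ny, nz, d), inside means n.p + d >= 0."""
    n, d = plane[:3], plane[3]
    s = sum(ni * ci for ni, ci in zip(n, center)) + d        # signed center distance
    r = sum(abs(ni) * ei for ni, ei in zip(n, half_extent))  # projected box radius
    return s < -r   # box lies wholly on the negative side of the plane

near_plane = (0.0, 0.0, 1.0, -1.0)   # example plane keeping everything with z >= 1
```

The projected-radius trick (summing |n_i| * e_i) is the standard way to test an axis-aligned box against a plane without visiting all eight corners.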
In step S507, based on the visibility check result of each sub-grid data set, occlusion culling processing is performed on the grid data of the object to be rendered, obtaining target grid data of the object to be rendered.
In this step, the visibility check result indicates whether the sub-grid data set is visible or invisible; after the visibility check result of each sub-grid data set is determined, the terminal performs occlusion culling on the grid data of the object to be rendered, obtaining the target grid data of the object to be rendered.
In a specific embodiment, step S507 includes: through a compute shader of the GPU, for each sub-grid data set of each object to be rendered, if the visibility check result of the sub-grid data set indicates that it is invisible, the sub-grid data set is culled; otherwise, if the visibility check result indicates that it is visible, the sub-grid data set is retained. After this operation has been performed on each sub-grid data set of each object to be rendered, the target grid data of each object to be rendered is obtained. The target grid data of each object to be rendered can then be cached in an indirect buffer and read from that buffer when drawing.
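The cull-and-cache step can be pictured as compacting the visible sub-grid data sets into a list of per-set draw arguments for the indirect buffer. The following CPU-side Python sketch stands in for the GPU compute shader; the argument field names are modeled loosely on common indexed-indirect-draw command layouts and are our assumption, not the disclosure's format.

```python
# Hypothetical sketch of step S507: keep only sub-grid data sets whose
# visibility check passed, emitting one draw-argument record per survivor.

def build_indirect_args(sub_meshes, visibility):
    """sub_meshes: list of (first_index, index_count); visibility: list of bool."""
    args = []
    for (first_index, index_count), visible in zip(sub_meshes, visibility):
        if visible:   # invisible sets are culled; visible sets are retained
            args.append({"index_count": index_count,
                         "first_index": first_index,
                         "instance_count": 1})
    return args

# Three equal-sized sub-grid data sets laid out back-to-back in one index buffer.
sets = [(0, 192), (192, 192), (384, 192)]
args = build_indirect_args(sets, [True, False, True])
```

Because the output list is written by the culling pass and consumed by the indirect draw without a CPU round trip, the CPU never needs to know how many sets survived, which is the point of driving the draw from an indirect buffer.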
In step S509, the target mesh data of the object to be rendered is rendered, so as to obtain a rendered object.
In this step, the terminal calls an indirect draw instruction to read the target grid data of each object to be rendered from the indirect buffer, and then renders each object to be rendered based on its target grid data, obtaining the rendered objects.
In the above embodiment, the terminal performs the visibility check on each sub-grid data set rather than on each object to be rendered as a whole; since some objects may be partially visible and partially invisible, checking visibility per sub-grid data set avoids culling the visible part of an object, achieving finer-grained culling. Meanwhile, combining different culling techniques such as distance culling, frustum culling, and occlusion culling can effectively reduce rendering cost; the number of vertices and faces involved in rendering can be greatly reduced without affecting the rendering effect, thereby improving rendering efficiency.
In summary, the rendering data processing method provided in the embodiments of the present disclosure merges the texture maps of objects to be rendered that have different materials, so that different objects to be rendered can be batched using the single material of the merged map, thereby reducing the number of rendering-interface calls, saving CPU performance overhead, and improving rendering speed. In addition, combining multiple culling techniques at the sub-grid data set level to cull vertices not on screen can greatly reduce the number of vertices and faces involved in rendering, further improving rendering efficiency without affecting the rendering effect.
FIG. 7 is a block diagram of a rendering data processing apparatus, according to an example embodiment. Referring to fig. 7, the apparatus includes a determining module 701, a merging module 702, an obtaining module 703, and a batch processing module 704;
a determining module 701 configured to determine a plurality of objects to be rendered from a scene to be rendered, wherein the rendering code identification of each of the plurality of objects to be rendered is the same, and the texture maps of at least two of the plurality of objects to be rendered are different;
a merging module 702 configured to merge the texture maps of each object to be rendered to generate a merged map;
an acquisition module 703 configured to acquire grid data of each object to be rendered, the grid data of each object to be rendered including the triangle patches constituting that object;
and a batch processing module 704 configured to, in response to a call instruction to the rendering interface, batch render the grid data of each object to be rendered based on the merged map.
In some possible embodiments, the apparatus further comprises: a segmentation module configured to perform segmentation processing on the grid data of each object to be rendered, obtaining a plurality of sub-grid data sets corresponding to each object to be rendered; each of the plurality of sub-grid data sets includes the same preset number of triangular patches.
In some possible embodiments, the segmentation module is further configured to, in response to a call instruction to the grid segmentation interface, divide the triangular patches constituting each object to be rendered into equal parts based on the preset number of triangular patches, obtaining a plurality of sub-grid data sets corresponding to each object to be rendered; and, when the number of triangular patches in a sub-grid data set does not meet the preset number, perform filling processing on the triangular patches in the sub-grid data sets that do not meet the preset number, obtaining filled sub-grid data sets.
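The equal-division-with-filling behavior described for the segmentation module can be sketched as chunking the triangle list into fixed-size groups and padding the short final group. The padding value below is a degenerate (zero-area) triangle, which is one plausible filler since it contributes no visible geometry; the function name and filler choice are ours, not the disclosure's.

```python
# Hypothetical sketch of the segmentation module: split a triangle-patch
# list into sub-grid data sets of a fixed preset size, padding the last
# set so every set holds the same number of patches.

def split_with_padding(triangles, group_size, pad_tri=((0, 0, 0),) * 3):
    groups = []
    for i in range(0, len(triangles), group_size):
        group = list(triangles[i:i + group_size])
        group += [pad_tri] * (group_size - len(group))   # fill a short last set
        groups.append(group)
    return groups

tris = [f"t{i}" for i in range(7)]      # 7 patches, preset size 3
groups = split_with_padding(tris, 3)    # -> 3 sets of exactly 3 patches
```

Uniform set sizes are what let all sets share one buffer layout and one dispatch shape, so the culling pass can index any set by a simple offset multiplication.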
In some possible embodiments, the apparatus further comprises:
the storage module is configured to merge and store the vertex data of each triangle patch in each sub-grid data set corresponding to each object to be rendered into the same vertex buffer in the graphics processor, and to merge and store the vertex index data of each triangle patch in each sub-grid data set corresponding to each object to be rendered into the same index buffer in the graphics processor.
In some possible embodiments, the batch processing module 704 is configured to execute, for each object to be rendered, reading vertex index data of each triangle patch in each sub-grid data set corresponding to the object to be rendered from the index buffer; based on the vertex index data of each triangular patch, reading vertex data of each triangular patch in each sub-grid data group corresponding to the object to be rendered from a vertex buffer area; performing visibility verification on each sub-grid data set corresponding to the object to be rendered to obtain a visibility verification result of each sub-grid data set; based on the visibility verification result of each sub-grid data set, carrying out shielding and eliminating processing on grid data of an object to be rendered to obtain target grid data of the object to be rendered; rendering the target grid data of the object to be rendered to obtain the rendered object.
In some possible embodiments, the batch processing module 704 is further configured to perform determining, for each sub-grid data set, a cubic bounding box of the sub-grid data set and size information of the cubic bounding box; the size information comprises position information of a central point of the cubic bounding box and a vector from a vertex to the central point of the cubic bounding box; and determining a visibility verification result of the sub-grid data set based on the size information of the cubic bounding box of the sub-grid data set and viewpoint information of the scene to be rendered.
In some possible embodiments, the batch processing module 704 is further configured to, for each sub-grid data set of the object to be rendered, cull the sub-grid data set if its visibility check result indicates that it is invisible, or retain the sub-grid data set if its visibility check result indicates that it is visible.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be repeated here.
Fig. 8 is a block diagram illustrating an electronic device for rendering data processing according to an exemplary embodiment; the electronic device may be a terminal, and its internal structure may be as shown in fig. 8. The electronic device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the electronic device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a rendering data processing method. The display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic device may be a touch layer covering the display screen, keys, a trackball, or a touch pad arranged on the housing of the electronic device, or an external keyboard, touch pad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of a portion of the structure associated with the disclosed aspects and is not limiting of the electronic device to which the disclosed aspects apply, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an exemplary embodiment, there is also provided an electronic device including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement a rendering data processing method as in an embodiment of the present disclosure.
In an exemplary embodiment, a computer-readable storage medium is also provided; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the rendering data processing method in the embodiments of the present disclosure.
In an exemplary embodiment, there is also provided a computer program product containing instructions, the computer program product including a computer program stored in a readable storage medium, at least one processor of the computer device reading and executing the computer program from the readable storage medium, causing the computer device to perform the rendering data processing method of the embodiments of the present disclosure.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A rendering data processing method, comprising:
determining a plurality of objects to be rendered from the scene to be rendered; the rendering code identification of each object to be rendered in the plurality of objects to be rendered is the same, and the texture maps of at least two objects to be rendered in the plurality of objects to be rendered are different;
merging the texture maps of each object to be rendered to generate a merged map;
Acquiring grid data of each object to be rendered; the grid data of each object to be rendered comprises triangle patches forming each object to be rendered;
and responding to a call instruction of a rendering interface, and carrying out batch rendering on the grid data of each object to be rendered based on the merging map.
2. The rendering data processing method according to claim 1, wherein after the mesh data of each object to be rendered is acquired, the method further comprises:
performing segmentation processing on the grid data of each object to be rendered to obtain a plurality of sub-grid data sets corresponding to each object to be rendered; each of the plurality of sub-grid data sets includes the same preset number of triangular patches.
3. The method for processing rendering data according to claim 2, wherein the performing segmentation processing on the mesh data of each object to be rendered to obtain a plurality of sub-mesh data sets corresponding to each object to be rendered includes:
responding to a call instruction of a grid segmentation interface, and carrying out equal division processing on triangle patches forming each object to be rendered based on the preset number of triangle patches to obtain a plurality of sub-grid data sets corresponding to each object to be rendered;
When the number of the triangular patches in the plurality of sub-grid data sets does not meet the preset number of sub-grid data sets, performing filling processing on the triangular patches in the sub-grid data sets which do not meet the preset number, and obtaining the filled sub-grid data sets.
4. A rendering data processing method according to claim 2 or 3, wherein before the batch rendering of the mesh data of each object to be rendered, the method comprises:
combining and storing the vertex data of each triangle patch in each sub-grid data group corresponding to each object to be rendered into the same vertex buffer area in the graphic processor;
and merging and storing vertex index data of each triangle patch in each sub-grid data group corresponding to each object to be rendered into the same index buffer zone in the graphic processor.
5. The method for processing the rendering data according to claim 4, wherein the batch rendering the mesh data of each object to be rendered comprises:
for each object to be rendered, reading vertex index data of each triangle patch in each sub-grid data group corresponding to the object to be rendered from the index buffer;
Based on the vertex index data of each triangular patch, reading vertex data of each triangular patch in each sub-grid data group corresponding to the object to be rendered from the vertex buffer;
performing visibility verification on each sub-grid data set corresponding to the object to be rendered to obtain a visibility verification result of each sub-grid data set;
based on the visibility verification result of each sub-grid data set, performing occlusion culling processing on the grid data of the object to be rendered to obtain target grid data of the object to be rendered;
and rendering the target grid data of the object to be rendered to obtain a rendered object.
6. The method for processing rendering data according to claim 5, wherein the performing the visibility check on each sub-grid data set corresponding to the object to be rendered to obtain the visibility check result of each sub-grid data set includes:
determining a cubic bounding box of each sub-grid data set and size information of the cubic bounding box for the sub-grid data set; the size information includes position information of a center point of the cubic bounding box and a vector from a vertex of the cubic bounding box to the center point;
And determining a visibility check result of the sub-grid data set based on the size information of the cubic bounding box of the sub-grid data set and the viewpoint information of the scene to be rendered.
7. The rendering data processing method according to claim 5 or 6, wherein the performing, based on the visibility check result of each sub-grid data set, occlusion culling processing on the grid data of the object to be rendered comprises:
for each sub-grid data set of the object to be rendered, if the visibility check result of the sub-grid data set indicates that it is invisible, culling the sub-grid data set; or, if the visibility check result of the sub-grid data set indicates that it is visible, retaining the sub-grid data set.
8. A rendering data processing apparatus, comprising:
a determining module configured to perform determining a plurality of objects to be rendered from a scene to be rendered; the rendering code identification of each object to be rendered in the plurality of objects to be rendered is the same, and the texture maps of at least two objects to be rendered in the plurality of objects to be rendered are different;
the merging module is configured to perform merging processing on the texture maps of each object to be rendered to generate merging maps;
An acquisition module configured to perform acquisition of mesh data of each object to be rendered; the grid data of each object to be rendered comprises triangle patches forming each object to be rendered;
and the batch processing module is configured to execute batch processing rendering on the grid data of each object to be rendered based on the merging map in response to the call instruction of the rendering interface.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the rendering data processing method of any one of claims 1-7.
10. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the rendering data processing method of any one of claims 1-7.
CN202211666816.9A 2022-12-23 2022-12-23 Rendering data processing method and device, electronic equipment and storage medium Pending CN116012507A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211666816.9A CN116012507A (en) 2022-12-23 2022-12-23 Rendering data processing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116012507A true CN116012507A (en) 2023-04-25



Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117011492A (en) * 2023-09-18 2023-11-07 腾讯科技(深圳)有限公司 Image rendering method and device, electronic equipment and storage medium
CN117011492B (en) * 2023-09-18 2024-01-05 腾讯科技(深圳)有限公司 Image rendering method and device, electronic equipment and storage medium
CN117541744A (en) * 2024-01-10 2024-02-09 埃洛克航空科技(北京)有限公司 Rendering method and device for urban live-action three-dimensional image
CN117541744B (en) * 2024-01-10 2024-04-26 埃洛克航空科技(北京)有限公司 Rendering method and device for urban live-action three-dimensional image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination