CN118037914A - Three-dimensional object rendering method, three-dimensional object rendering device, computer equipment and storage medium - Google Patents


Info

Publication number
CN118037914A
CN118037914A (application number CN202211425727.5A)
Authority
CN
China
Prior art keywords
node
vertex
nodes
detection
index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211425727.5A
Other languages
Chinese (zh)
Inventor
陈玉钢
郑榕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211425727.5A priority Critical patent/CN118037914A/en
Publication of CN118037914A publication Critical patent/CN118037914A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a three-dimensional object rendering method and apparatus, a computer device, and a storage medium, belonging to the technical field of virtual scenes. The method comprises the following steps: in response to displaying a three-dimensional object in a virtual scene at a specified view angle, reading an index binary tree of a three-dimensional model of the three-dimensional object, each node in the index binary tree containing an index of at least one vertex triangle in the three-dimensional model; performing visibility detection on the three-dimensional object based on the specified view angle and the index binary tree to obtain visible nodes in the index binary tree, the visible nodes being nodes whose contained indexes correspond to vertex triangles visible at the specified view angle; and rendering and displaying the three-dimensional object based on the indexes contained in the visible nodes.

Description

Three-dimensional object rendering method, three-dimensional object rendering device, computer equipment and storage medium
Technical Field
The present application relates to the field of virtual scene technologies, and in particular, to a three-dimensional object rendering method, apparatus, computer device, and storage medium.
Background
At present, in the field of three-dimensional graphics rendering, view frustum culling and occlusion culling are two common culling methods.
In the related art, view frustum culling determines whether an object needs to be rendered by judging whether the bounding box of the tested object fully or partially overlaps the view frustum; software-rasterized occlusion culling tests, on the processor side, whether the axis-aligned bounding box of the model is covered by the depth buffer of the current occluders, so as to conservatively estimate the visibility of the model.
In the above culling methods, the visibility test is performed on the bounding box of the whole model. That is, even if only a small part of the model is visible, or the model is completely invisible, the whole model still participates in rendering as long as its bounding box is visible, which wastes processing resources.
Disclosure of Invention
The embodiment of the application provides a three-dimensional object rendering method and apparatus, a computer device, and a storage medium, which can reduce resource waste in a three-dimensional graphics rendering scene. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a three-dimensional object rendering method, where the method includes:
in response to displaying a three-dimensional object in a virtual scene at a specified view angle, reading an index binary tree of a three-dimensional model of the three-dimensional object, wherein each node in the index binary tree contains an index of at least one vertex triangle in the three-dimensional model, and a child node in the index binary tree is obtained by clustering the vertex triangles corresponding to its parent node;
performing visibility detection on the three-dimensional object based on the specified view angle and the index binary tree to obtain visible nodes in the index binary tree, the visible nodes being nodes whose contained indexes correspond to vertex triangles visible at the specified view angle;
rendering and displaying the three-dimensional object based on the index contained in the visible node.
In another aspect, an embodiment of the present application provides a three-dimensional object rendering apparatus, including:
a reading module, configured to, in response to displaying a three-dimensional object in a virtual scene at a specified view angle, read an index binary tree of a three-dimensional model of the three-dimensional object, wherein each node in the index binary tree contains an index of at least one vertex triangle in the three-dimensional model, and a child node in the index binary tree is obtained by clustering the vertex triangles corresponding to its parent node;
a detection module, configured to perform visibility detection on the three-dimensional object based on the specified view angle and the index binary tree to obtain visible nodes in the index binary tree, the visible nodes being nodes whose contained indexes correspond to vertex triangles visible at the specified view angle;
and a rendering module, configured to render and display the three-dimensional object based on the indexes contained in the visible nodes.
In another aspect, embodiments of the present application provide a computer device, where the computer device includes a processor and a memory, where at least one computer instruction is stored, where the at least one computer instruction is loaded and executed by the processor to implement the three-dimensional object rendering method as described in the above aspect.
In another aspect, embodiments of the present application provide a computer-readable storage medium having stored therein at least one computer instruction that is loaded and executed by a processor to implement the three-dimensional object rendering method as described in the above aspects.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the three-dimensional object rendering method provided in various alternative implementations of the above aspects.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
When a three-dimensional object in a virtual scene is rendered, visibility detection is performed through the index binary tree of the three-dimensional model of the three-dimensional object to obtain the visible nodes where the visible vertex triangles in the three-dimensional model are located, and the three-dimensional object is rendered based on the visible nodes. In this way, invisible vertex triangles do not participate in rendering even when the three-dimensional object as a whole is visible, which reduces the waste of processing resources in three-dimensional graphics rendering.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a display interface of a virtual scene provided by an exemplary embodiment of the present application;
FIG. 3 is a flow chart of a three-dimensional object rendering method provided by an exemplary embodiment of the present application;
FIG. 4 is a flow chart of a three-dimensional object rendering method provided by an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of binary tree generation for the embodiment of FIG. 4;
FIG. 6 is a flowchart of an algorithm for generating an indexed binary tree according to the embodiment of FIG. 4;
FIG. 7 is a flow of the K-Means algorithm involved in the embodiment of FIG. 4;
FIG. 8 is a schematic diagram of the ordering of indexes in memory in an index binary tree according to the embodiment of FIG. 4;
FIG. 9 is a flowchart of the traversal process algorithm of the binary tree involved in the embodiment of FIG. 4;
FIG. 10 is a graph of traversal results involved in the embodiment of FIG. 4;
FIG. 11 is a block diagram of a three-dimensional object rendering apparatus according to an exemplary embodiment of the present application;
FIG. 12 is a block diagram of a computer device provided in accordance with an exemplary embodiment of the present application;
Fig. 13 is a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
It should be understood that references herein to "a number" mean one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
In order to facilitate understanding, several terms related to the present application are explained below.
1) Virtual scene
A virtual scene is a scene that an application program displays (or provides) while running on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. In the present application, the virtual scene may be a three-dimensional virtual scene. Optionally, the virtual scene may also be used for a virtual scene battle between at least two virtual characters. Optionally, the virtual scene may also be used for a battle between at least two virtual characters using virtual props. Optionally, the virtual scene may further be used for a battle between at least two virtual characters using virtual props within a target area that keeps shrinking over time in the virtual scene.
A virtual scene is typically generated by an application program in a computer device such as a terminal and presented based on hardware (such as a screen) in the terminal. The terminal may be a mobile terminal such as a smart phone, a tablet computer, or an electronic book reader; or the terminal may be a notebook computer or a stationary desktop computer.
2) Three-dimensional object
Three-dimensional objects are objects in a three-dimensional virtual scene, including movable objects and non-movable objects. A movable object may be at least one of a virtual character, a virtual animal, and a virtual vehicle; a non-movable object may be a virtual building, virtual terrain, or the like.
3) Rendering optimization
Rendering optimization refers to finding the performance bottlenecks that exist in graphics rendering and optimizing them in a targeted manner to improve program efficiency. Common optimization methods include model optimization, culling, multithreading, caching, and the like.
4) Occlusion culling
Occlusion culling is a culling method that discards occluded geometric objects based on occlusion information in the scene. It has various implementations, each with its own applicable range, advantages, and disadvantages.
5) Graphics Processing Unit (GPU)
A specialized chip for processing graphics and images.
6) Back-face culling
Back-face culling is a culling method that typically occurs in the GPU stage. During rasterization, the GPU performs a calculation based on the angle between a triangle's normal and the current view direction; if the triangle faces away from the view angle, it is not rendered.
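As a rough illustration of the dot-product test this paragraph describes (the sign convention, with the view direction pointing from the camera into the scene, is an assumption, not taken from the embodiment):

```python
def is_back_facing(normal, view_dir):
    # A triangle faces away from the camera when its normal points roughly
    # along the view ray (non-negative dot product), so it can be skipped.
    dot = sum(n * v for n, v in zip(normal, view_dir))
    return dot >= 0.0

# view_dir points from the camera into the scene (assumed convention).
print(is_back_facing((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # front-facing: False
print(is_back_facing((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # back-facing: True
```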
7) Overdraw
Overdraw refers to excessive drawing. Because the rendering order of objects is inconsistent and the GPU shades pixels out of order, pixels near the view angle may be drawn last, so some pixels in the picture are repeatedly erased and rewritten; this extra drawing is overdraw.
8) Offline stage
In rendering optimization, many schemes involve large-scale, intensive computation that is pre-computed before actual operation; this process is the offline stage of the optimization scheme.
9) Real-time rendering
Real-time rendering refers to the real-time run phase of a graphics product, in which the device typically needs to complete picture rendering and submit it to the display in a very short time. The number of pictures submitted per second is called the frame rate; 30 frames, 60 frames, or even more are typically required to ensure the smoothness of dynamic pictures.
10) Index buffer
When the GPU draws a three-dimensional model, the model vertex data is input into a vertex buffer; the index buffer arranges the vertex indexes into another array, triangle by triangle, which is also input into the GPU to wait for the rendering draw call. The content of the index buffer is vertex index sequence numbers; generally, every three sequence numbers represent one triangle, and the triangles may be stored in any order.
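The "every three sequence numbers represent one triangle" layout can be illustrated with a toy buffer (the contents are made up):

```python
# Hypothetical index buffer: each run of three entries names the vertices
# of one triangle; the triangles themselves may be stored in any order.
index_buffer = [0, 1, 2, 2, 1, 3, 4, 0, 2]

triangles = [tuple(index_buffer[i:i + 3]) for i in range(0, len(index_buffer), 3)]
print(triangles)  # [(0, 1, 2), (2, 1, 3), (4, 0, 2)]
```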
11) Rendering pipeline
The graphics rendering flow running in the GPU. Generally, the vertex shader, rasterizer, and pixel shader stages are discussed. By writing shader code, the GPU can be flexibly controlled to render the drawing of the rendering components.
12) Vertex shader
A mandatory stage of the GPU rendering pipeline, in which the program processes the vertexes of the model one by one according to the code and outputs the result to the next stage.
13) Pixel shader
A mandatory stage of the GPU rendering pipeline, in which the program performs shading calculation on the rasterized pixels according to the code and, after the pixels pass the tests, outputs them to the frame buffer to complete one rendering pipeline flow.
14) View frustum
A view frustum is a geometry determined by the position, angle, and aspect ratio of the camera rendering the scene, together with the field-of-view (FOV) angle and the far/near clipping planes.
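Frustum tests against axis-aligned bounding boxes, as mentioned in the background, are commonly done plane by plane; a conservative sketch follows (the plane convention and the single-plane example are assumptions, not taken from the patent):

```python
def aabb_outside_plane(aabb_min, aabb_max, plane):
    """plane = (a, b, c, d) for ax + by + cz + d = 0, with the normal
    pointing toward the inside of the frustum. The box is fully outside
    when even its corner farthest along the normal lies behind the plane."""
    a, b, c, d = plane
    px = aabb_max[0] if a >= 0 else aabb_min[0]
    py = aabb_max[1] if b >= 0 else aabb_min[1]
    pz = aabb_max[2] if c >= 0 else aabb_min[2]
    return a * px + b * py + c * pz + d < 0

def aabb_in_frustum(aabb_min, aabb_max, planes):
    # Conservative: report visible unless some plane rejects the box outright.
    return not any(aabb_outside_plane(aabb_min, aabb_max, p) for p in planes)

# Single plane x >= 0 standing in for one frustum plane (hypothetical values).
print(aabb_in_frustum((-2.0, -1.0, -1.0), (-1.0, 1.0, 1.0), [(1.0, 0.0, 0.0, 0.0)]))  # False
print(aabb_in_frustum((1.0, -1.0, -1.0), (2.0, 1.0, 1.0), [(1.0, 0.0, 0.0, 0.0)]))    # True
```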
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application. The implementation environment may include: a first terminal 110, a server 120, and a second terminal 130.
The first terminal 110 has installed and running on it an application 111 supporting a virtual environment, and the application 111 may be a multiplayer online battle program. When the first terminal runs the application 111, a user interface of the application 111 is displayed on the screen of the first terminal 110. The application 111 may be any one of a Multiplayer Online Battle Arena (MOBA) game, a battle-royale game, and a simulation strategy game (SLG). In the present embodiment, the application 111 is illustrated as a First-Person Shooter (FPS) game. The first terminal 110 is a terminal used by the first user 112, and the first user 112 uses the first terminal 110 to control a first virtual object located in the virtual environment to perform activities, where the first virtual object may be referred to as a master virtual object of the first user 112. The activities of the first virtual object include, but are not limited to: at least one of adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, throwing, and releasing skills. Illustratively, the first virtual object is a first virtual character, such as a simulated character or a cartoon character.
The second terminal 130 has installed and running on it an application 131 supporting a virtual environment, and the application 131 may be a multiplayer online battle program. When the second terminal 130 runs the application 131, a user interface of the application 131 is displayed on the screen of the second terminal 130. The client may be any of a MOBA game, a battle-royale game, and an SLG game; in this embodiment, the application 131 is illustrated as an FPS game. The second terminal 130 is a terminal used by the second user 132, and the second user 132 uses the second terminal 130 to control a second virtual object located in the virtual environment to perform activities, where the second virtual object may be referred to as a master virtual character of the second user 132. Illustratively, the second virtual object is a second virtual character, such as a simulated character or a cartoon character.
Optionally, the first virtual object and the second virtual object are in the same virtual world. Optionally, the first virtual object and the second virtual object may belong to the same camp, the same team, the same organization, have a friend relationship, or have temporary communication rights. Alternatively, the first virtual object and the second virtual object may belong to different camps, different teams, different organizations, or have hostile relationships.
Alternatively, the applications installed on the first terminal 110 and the second terminal 130 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms (Android or iOS). The first terminal 110 may refer broadly to one of a plurality of terminals, and the second terminal 130 may refer broadly to another of the plurality of terminals; the present embodiment is illustrated with only the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include: at least one of a smart phone, a tablet computer, an electronic book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in fig. 1, but in different embodiments there are a number of other terminals that can access the server 120. Optionally, there is one or more terminals corresponding to the developer, on which a development and editing platform for supporting the application program of the virtual environment is installed, the developer may edit and update the application program on the terminal, and transmit the updated application program installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 may download the application program installation package from the server 120 to implement the update of the application program.
The first terminal 110, the second terminal 130, and other terminals are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a server, a server cluster formed by a plurality of servers, a cloud computing platform and a virtualization center. The server 120 is used to provide background services for applications supporting a three-dimensional virtual environment. Optionally, the server 120 takes on primary computing work and the terminal takes on secondary computing work; or the server 120 takes on secondary computing work and the terminal takes on primary computing work; or the server 120 and the terminal use a distributed computing architecture for collaborative computing.
In one illustrative example, server 120 includes memory 121, processor 122, user account database 123, combat service module 124, and user-oriented Input/Output Interface (I/O Interface) 125. Wherein the processor 122 is configured to load instructions stored in the server 120, process data in the user account database 123 and the combat service module 124; the user account database 123 is configured to store data of user accounts used by the first terminal 110, the second terminal 130, and other terminals, such as an avatar of the user account, a nickname of the user account, and a combat index of the user account, where the user account is located; the combat service module 124 is configured to provide a plurality of combat rooms for users to combat, such as 1V1 combat, 3V3 combat, 5V5 combat, etc.; the user-oriented I/O interface 125 is used to establish communication exchanges of data with the first terminal 110 and/or the second terminal 130 via a wireless network or a wired network.
The virtual scene may be a three-dimensional virtual scene, or the virtual scene may be a two-dimensional virtual scene. Taking an example that the virtual scene is a three-dimensional virtual scene, please refer to fig. 2, which illustrates a schematic diagram of a display interface of the virtual scene provided in an exemplary embodiment of the present application. As shown in fig. 2, the display interface of the virtual scene includes a scene screen 200, and the scene screen 200 includes a virtual object 210 currently controlled, an environment screen 220 of the three-dimensional virtual scene, and a virtual object 240. Wherein, the virtual object 240 may be a virtual object controlled by a corresponding user of other terminals or a virtual object controlled by an application program. The virtual object may be a three-dimensional object.
In fig. 2, the currently controlled virtual object 210 and the virtual object 240 are three-dimensional models in the three-dimensional virtual scene, and the environment picture of the three-dimensional virtual scene displayed in the scene picture 200 contains the objects observed from the perspective of the currently controlled virtual object 210. Illustratively, the environment picture 220 of the three-dimensional virtual scene displayed from the perspective of the currently controlled virtual object 210 includes the ground 224, the sky 225, the horizon 223, the hill 221, and the factory building 222.
The currently controlled virtual object 210 may, under the control of the user, release skills or use virtual props, move, and perform specified actions, and the virtual objects in the virtual scene may exhibit different three-dimensional models under the control of the user. For example, when the screen of the terminal supports touch operation and the scene picture 200 of the virtual scene includes a virtual control, the currently controlled virtual object 210 may, when the user touches the virtual control, perform the specified action in the virtual scene and exhibit the currently corresponding three-dimensional model.
Fig. 3 illustrates a flowchart of a three-dimensional object rendering method according to an exemplary embodiment of the present application. The three-dimensional object rendering method may be performed by a computer device, which may be a terminal, a server, or the computer device may also include the terminal and the server. As shown in fig. 3, the three-dimensional object rendering method includes:
In step 310, in response to displaying the three-dimensional object in the virtual scene at the specified view angle, an index binary tree of the three-dimensional model of the three-dimensional object is read, each node in the index binary tree respectively contains an index of at least one vertex triangle in the three-dimensional model, and child nodes in the index binary tree are obtained by clustering vertex triangles corresponding to parent nodes of the child nodes.
The three-dimensional object may be any three-dimensional object that needs to be displayed in the three-dimensional virtual scene, such as a virtual object in the virtual scene (e.g., a virtual character or a virtual monster controlled by a player or by artificial intelligence), a virtual prop (e.g., virtual equipment), a virtual vehicle, a virtual building, or virtual terrain (e.g., a virtual tree or a virtual hill).
The above specified view is a view of the currently displayed virtual scene, that is, a view of the current frame.
The index binary tree may be obtained in advance, in an offline stage, by clustering according to the positional relationship of each vertex triangle in the three-dimensional object. Each parent node in the index binary tree includes two child nodes, and the set of vertex triangle indexes contained in the parent node is the union of the index sets contained in its two child nodes.
Each node in the index binary tree containing an index of at least one vertex triangle in the three-dimensional model may mean that the nodes share one set of indexes of the three-dimensional model in memory, and each node in the index binary tree corresponds to all or part of the address interval of that index set. That is, the index of the same vertex triangle maps to the same memory address in nodes of different levels of the index binary tree.
Alternatively, each node containing an index of at least one vertex triangle in the three-dimensional model may mean that each node corresponds to an independent address interval in memory, and the memory addresses in that interval store the indexes of the vertex triangles of the corresponding node; that is, the index of the same vertex triangle appears repeatedly in the address intervals corresponding to nodes of different levels of the index binary tree.
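The parent-union relationship described above can be sketched as a minimal node structure (the names and layout are illustrative, not the embodiment's actual memory scheme):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndexNode:
    # Vertex-triangle indices this node covers; a parent's set is the
    # union of its two children's sets.
    triangle_indices: frozenset
    left: Optional["IndexNode"] = None
    right: Optional["IndexNode"] = None

leaf_a = IndexNode(frozenset({0, 1}))
leaf_b = IndexNode(frozenset({2, 3}))
root = IndexNode(leaf_a.triangle_indices | leaf_b.triangle_indices, leaf_a, leaf_b)
print(root.triangle_indices == frozenset({0, 1, 2, 3}))  # True
```

Either memory layout described above (shared address interval or per-node copies) can sit behind such an interface; only the union invariant matters for traversal.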
Step 320, performing visibility detection on the three-dimensional object based on the specified view angle and the index binary tree to obtain visible nodes in the index binary tree; the visible node is a node in which the vertex triangle corresponding to the contained index is visible under the specified view angle.
The visibility detection may include view frustum detection and occlusion detection.
The visible node may be a node where the vertex triangle corresponding to the index is partially visible or fully visible under the specified viewing angle.
Step 330 renders and displays the three-dimensional object based on the index contained in the visible node.
In the embodiment of the application, when displaying a three-dimensional object in a virtual scene, the computer device may traverse the index binary tree of the three-dimensional object level by level to determine the nodes whose contained indexes correspond to vertex triangles visible at the current view angle, and render only the vertex triangles corresponding to the indexes contained in those visible nodes. In this way, when the three-dimensional object is visible at the current view angle in the virtual scene, not all vertex triangles in the three-dimensional object need to be rendered, which reduces resource waste in the three-dimensional graphics rendering scene.
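The pruning behavior this step relies on can be sketched as a depth-first traversal over a toy tree (the dict layout and the visibility predicate are hypothetical stand-ins for the frustum/occlusion tests):

```python
def collect_visible(node, is_visible):
    # A node that fails the visibility test prunes its entire subtree,
    # so its triangles never reach the renderer.
    if node is None or not is_visible(node):
        return []
    children = node.get("children") or []
    if not children:
        return [node]
    out = []
    for child in children:
        out += collect_visible(child, is_visible)
    return out

# Toy tree: indices 0-3 on the left, 4-7 on the right (hypothetical layout).
tree = {
    "indices": [0, 1, 2, 3, 4, 5, 6, 7],
    "children": [
        {"indices": [0, 1, 2, 3], "children": []},
        {"indices": [4, 5, 6, 7], "children": []},
    ],
}
# Suppose only triangles 0-3 pass the visibility tests.
visible = collect_visible(tree, lambda n: min(n["indices"]) < 4)
print([n["indices"] for n in visible])  # [[0, 1, 2, 3]]
```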
In summary, according to the scheme disclosed in the embodiment of the application, when a three-dimensional object in a virtual scene is rendered, visibility detection is performed through the index binary tree of the three-dimensional model of the three-dimensional object to obtain the visible nodes where the visible vertex triangles in the three-dimensional model are located, and the three-dimensional object is rendered based on the visible nodes, so that invisible vertex triangles are excluded from rendering and processing resources are saved.
Fig. 4 illustrates a flowchart of a three-dimensional object rendering method according to an exemplary embodiment of the present application. The three-dimensional object rendering method may be performed by a computer device, which may be a terminal, a server, or the computer device may also include the terminal and the server. As shown in fig. 4, the three-dimensional object rendering method includes:
Step 401, obtaining vertex data of a three-dimensional model; the vertex data includes position data of each vertex in the three-dimensional model, and an index of a vertex triangle in which each vertex in the three-dimensional model is located.
In the embodiment of the application, after the development and design of the three-dimensional model are completed by a developer in an offline stage, the computer equipment can output vertex data of the three-dimensional model, wherein the vertex data can comprise position data of each vertex in the three-dimensional model and indexes of vertex triangles where each vertex in the three-dimensional model is located.
The position data of each vertex in the three-dimensional model may include any data that can determine position information of the vertex triangle, such as coordinates of each vertex in a specified coordinate system, coordinates of a bounding box of each vertex triangle in the specified coordinate system, and coordinates of a center point of each vertex triangle in the specified coordinate system.
Alternatively, the specified coordinate system may be a coordinate system with respect to the three-dimensional model (for example, an origin of the specified coordinate system may be a specified point corresponding to the three-dimensional model, for example, a center point or a vertex on an external frame, and directions of three coordinate axes of the specified coordinate system may be specified directions, or directions with respect to an orientation of the three-dimensional model), or the specified coordinate system may also be a coordinate system with respect to the virtual scene.
For example, when the three-dimensional model is a model fixedly set in the virtual scene, the specified coordinate system may be a coordinate system with respect to the three-dimensional model or a coordinate system with respect to the virtual scene; when the three-dimensional model is a movable model in a virtual scene, the specified coordinate system may be a coordinate system with respect to the virtual scene.
Step 402, classifying each vertex triangle in the three-dimensional model as a root node based on the vertex data.
In an embodiment of the present application, during the offline phase, the computer device may first categorize each vertex triangle in the three-dimensional model into a root node in an index binary tree.
Step 403, starting from the root node, clustering each vertex triangle in the three-dimensional model step by step according to the position of each vertex triangle in the three-dimensional model, and obtaining an index binary tree.
In the embodiment of the application, after the computer device classifies each vertex triangle into the root node of the index binary tree, the two child nodes corresponding to each node in the index binary tree can be generated level by level, starting from the root node, according to the relative position relationship between the vertex triangles. That is, for any node existing in the index binary tree, the vertex triangles corresponding to the indexes contained in the node are divided into two groups according to how close their relative positions are, and the two groups are classified into the two child nodes of the node respectively. This process is executed level by level in the index binary tree until the establishment of the index binary tree is completed.
Wherein the condition for completing the establishment of the index binary tree may include at least one of the following conditions:
the number of vertex triangle indexes contained in the last layer of nodes in the index binary tree is smaller than an index number threshold; the number of node layers in the index binary tree reaches a layer number threshold.
In one possible implementation, from the root node, clustering each vertex triangle in the three-dimensional model step by step according to the position of each vertex triangle in the three-dimensional model to obtain an index binary tree, including:
Acquiring two clustering center points in the current clustering node;
clustering the vertex triangles in the current clustering node into two vertex triangle groups according to the distances between the vertex triangles and the two clustering center points, based on the positions of the vertex triangles in the current clustering node;
when the stopping condition is met, constructing two child nodes of the current clustering node based on the two vertex triangle groups;
And respectively taking the two child nodes of the current clustering node as new current clustering nodes.
In the embodiment of the application, for a current clustering node, the computer device first acquires two clustering center points in the current clustering node, and then clusters the vertex triangles corresponding to the indexes in the current clustering node according to the two clustering center points and the positions of the vertex triangles, obtaining two vertex triangle groups. If the stopping condition is met, the two vertex triangle groups are classified into the two child nodes of the current clustering node respectively; then the two child nodes of the current clustering node are each taken as a new current clustering node, and the above steps continue to be executed.
In one possible implementation, when the stop condition is not satisfied, the two cluster center points are updated based on the two vertex triangle groups.
Wherein, the stopping condition may include: the number of recursions reaches a threshold number of recursions. The number of recursions may refer to the number of times that each vertex triangle corresponding to the index in the current cluster node is clustered according to the two cluster center points and the positions of the vertex triangles in the current cluster node. Or the number of recursions may refer to the number of updates of two cluster centers.
Optionally, the stopping condition may also include: the difference between the two vertex triangle groups acquired this time and the two vertex triangle groups of the current clustering node acquired last time is smaller than a difference threshold; for example, the difference may include the number, or the proportion, of vertex triangles that do not match between the two vertex triangle groups obtained this time and those obtained last time.
The two stopping conditions may be used in combination or alone.
In one possible implementation, updating two cluster center points based on two vertex triangle groups when the stop condition is not satisfied includes:
when the stopping condition is not met, respectively acquiring the center points of the bounding boxes of the two vertex triangle groups;
and taking the center points of the bounding boxes of the two vertex triangle groups as the new two clustering center points.
In the embodiment of the application, when the computer device clusters the vertex triangles corresponding to the indexes contained in the current clustering node, two initial clustering center points can first be determined according to the positions of those vertex triangles; for example, the two farthest-apart vertices of the bounding box of the vertex triangles corresponding to the indexes contained in the current clustering node can be used as the initial two clustering center points. Then, at each recursion, the center points of the two vertex triangle groups obtained by the current clustering (that is, the center points of the bounding boxes of the vertex triangle groups) are taken as the new clustering center points. This process is executed recursively until the stopping condition is met, at which point the clustering of the current clustering node is completed.
In the embodiment of the present application, after the above-mentioned indexing binary tree is obtained, the computer device may reorder the indexes of the vertex triangles in the indexing binary tree, so that the indexes of the vertex triangles contained in each node in each layer of the indexing binary tree are continuous in the memory.
In the embodiment of the application, in the offline stage, the program first acquires the vertex and triangle data of the model, divides all triangles into a binary tree structure according to their relative spatial positions, and finally sorts all nodes of the binary tree in a certain order and stores them in the additional data of the model; the whole flow is shown in fig. 5.
As shown in FIG. 5, a schematic diagram of binary tree generation is shown in accordance with an embodiment of the present application. In fig. 5, the program first gathers model vertices and triangle data for use in subsequent steps. Wherein, the vertex data comprises information such as vertex position, triangle index where the vertex is positioned, and the like; triangle data contains the positions of three vertices, bounding box, center point position, etc.
In the index binary tree generation step in fig. 5, all triangles of the model are divided into nodes at different levels of the index binary tree according to their relative spatial positions, so that culling can be performed efficiently at runtime. A more reasonable index binary tree generation algorithm makes runtime culling cheaper and its effect better.
An algorithm flow diagram for generating the index binary tree may be as shown in fig. 6.
In fig. 6, the process of generating the index binary tree is the following recursive process:
1) Creating a root node and distributing all triangles of the model to the root node;
2) Pointing the pointer of the current cluster node to the root node, and starting a recursion process;
3) If the number of vertex triangles corresponding to the current clustering nodes is lower than the leaf node triangle number target value set by the parameters, stopping recursion, otherwise, continuing the next step;
4) Creating left and right child nodes for the current cluster node, dividing all triangles contained in the current cluster node into two subsets according to their relative spatial positions using the K-Means algorithm (described in detail below), and distributing them to the two child nodes respectively;
5) Directing the pointer of the current cluster node to the left child node, jumping to step 3), and continuing the recursion process;
6) Directing the pointer of the current cluster node to the right child node, jumping to step 3), and continuing the recursion process;
7) Outputting all node information of the index binary tree, including vertex triangle information, bounding box information, and the indexes of the left and right child nodes of each node, for use in the subsequent node ordering step.
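The recursion in steps 1) to 7) above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the dictionary node layout, the `leaf_target` parameter, and the placeholder `split_triangles` (which stands in for the K-Means split described below) are all assumed names.

```python
# Minimal sketch of the recursive index-binary-tree build (steps 1-7 above).
# All names here are illustrative, not from the patent.

def split_triangles(tris):
    # Placeholder for the K-Means split described later: here we simply
    # sort by centroid x-coordinate and halve the list.
    tris = sorted(tris, key=lambda t: t["center"][0])
    mid = len(tris) // 2
    return tris[:mid], tris[mid:]

def build_tree(tris, leaf_target=4):
    node = {"tris": tris, "left": None, "right": None}
    if len(tris) <= leaf_target:          # step 3): stop recursion at leaves
        return node
    left, right = split_triangles(tris)   # step 4): two spatial subsets
    node["left"] = build_tree(left, leaf_target)    # step 5)
    node["right"] = build_tree(right, leaf_target)  # step 6)
    return node
```

With a real spatial split in place of the placeholder, each node ends up holding a spatially coherent subset of triangles, which is what makes per-node culling effective.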
The flow of the K-Means algorithm may be as shown in FIG. 7.
The K-Means algorithm in fig. 7 is an iterative process, the algorithm presets an iteration number according to the number of triangles contained in the current cluster node, and when the iteration number reaches a preset value, the algorithm exits:
1) Calculating a minimum bounding box for all vertex triangles contained in the current clustering node, taking the pair of vertices farthest from each other among the 8 vertices of the bounding box, setting them respectively as the initial center points of the left and right child nodes of the current clustering node, and starting the following iterative process.
2) Calculating, for all vertex triangles contained in the current cluster node, the distances to the center points of the two child nodes. The distance may be the distance from the vertex triangle's center point (the arithmetic mean of its three vertices) to the child node's center point. The distance between two 3D coordinates can be obtained by summing the squares of the differences of their components in the X, Y and Z directions and then taking the square root of the sum.
3) First clearing the vertex triangles contained in the left and right child nodes of the current cluster node, and then distributing all triangles of the current cluster node, according to the distances calculated in the previous step, to whichever of the two child nodes is closer (has the smaller distance value).
4) Recalculating the new center point positions of the left and right child nodes according to the vertex triangles they currently contain; the calculation method may be to take the arithmetic mean of the centers of all the triangles (summing them and dividing by their number).
5) Judging whether the current iteration count exceeds the preset value: if so, ending the iteration and executing the next step; otherwise, jumping back to step 2) to continue the iterative process.
6) Saving the left and right child node information of the current cluster node, and continuing the index binary tree construction process.
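The K-Means iteration in steps 1) to 6) above can be sketched as follows, operating directly on triangle center points. The function name `kmeans2_split` and its `iters` parameter are illustrative; the fixed loop budget stands in for the preset iteration count.

```python
import itertools
import math

def kmeans2_split(centers, iters=8):
    """Split triangle center points into two spatial groups (steps 1-6 above).
    `centers` is a list of (x, y, z) tuples; names are illustrative."""
    lo = tuple(min(c[i] for c in centers) for i in range(3))
    hi = tuple(max(c[i] for c in centers) for i in range(3))
    # Step 1): seed with the two bounding-box corners farthest apart.
    corners = list(itertools.product(*zip(lo, hi)))
    mean_a, mean_b = max(itertools.combinations(corners, 2),
                         key=lambda pair: math.dist(*pair))
    for _ in range(iters):                      # step 5): fixed iteration budget
        group_a, group_b = [], []
        for c in centers:                       # steps 2)-3): assign to nearer center
            (group_a if math.dist(c, mean_a) <= math.dist(c, mean_b)
             else group_b).append(c)
        if group_a and group_b:                 # step 4): recompute arithmetic means
            mean_a = tuple(sum(v) / len(group_a) for v in zip(*group_a))
            mean_b = tuple(sum(v) / len(group_b) for v in zip(*group_b))
    return group_a, group_b
```

Seeding from the farthest bounding-box corners, as the flow describes, makes the first assignment already roughly split the triangles along the node's longest spatial extent, so few iterations are needed.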
In the node ordering step, the vertex triangles of all leaf nodes of the generated index binary tree are arranged in sequence and recombined into an index buffer, ensuring that the indexes of all vertex triangles contained in a node at any level are contiguous in memory. A schematic diagram of the ordering of the indexes of the index binary tree in memory may be as shown in fig. 8.
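One way to realize this node ordering step is a depth-first walk that concatenates the leaf nodes' triangle indexes into a new buffer and records, for every node, the contiguous range it occupies. The sketch below assumes the same illustrative dictionary node layout as elsewhere; `first`/`count` are assumed field names.

```python
def flatten_indices(node, out=None):
    """Rebuild the index buffer depth-first so that every node's triangles
    occupy one contiguous [first, first + count) range, as described above.
    The node layout ({"tris", "left", "right"}) is illustrative."""
    if out is None:
        out = []
    node["first"] = len(out)
    if node["left"] is None:                  # leaf: append its triangle indexes
        out.extend(node["tris"])
    else:                                     # internal: children are contiguous,
        flatten_indices(node["left"], out)    # so the parent range is too
        flatten_indices(node["right"], out)
    node["count"] = len(out) - node["first"]
    return out
```

Because children are emitted back to back, a parent's range is exactly the union of its children's ranges, which is what later allows a whole subtree to be drawn with a single indexed draw over one buffer range.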
The steps 401 to 403 may be performed by a developer terminal or a server in the computer device.
After the steps 401 to 403 are completed, the three-dimensional model of the three-dimensional object and the index binary tree may be stored in a scene file of the virtual scene, so as to be used in a subsequent display process of the virtual scene.
Step 404, in response to displaying the three-dimensional object in the virtual scene at the specified perspective, reading an indexed binary tree of the three-dimensional model of the three-dimensional object.
Each node in the index binary tree comprises an index of at least one vertex triangle in the three-dimensional model, and the child nodes in the index binary tree are obtained by clustering vertex triangles corresponding to parent nodes of the child nodes.
In the process of displaying a frame of picture of a virtual scene, when a computer device (such as a terminal or a server running the virtual scene in the computer device) displays the virtual scene according to the display view angle (i.e. the specified view angle) of the current frame of the virtual scene, the index binary tree of the three-dimensional model of the three-dimensional object in the virtual scene can be read.
Step 405, performing visibility detection on the three-dimensional object based on the specified view angle and the index binary tree to obtain visible nodes in the index binary tree; the visible node is a node in which the vertex triangle corresponding to the contained index is visible under the specified view angle.
In the embodiment of the application, when visibility detection is performed on the three-dimensional object, the visibility of the vertex triangles contained in each node of the index binary tree is detected level by level from the root node, with nodes as the unit of detection, so as to determine the completely visible nodes and the partially visible leaf nodes in the index binary tree.
In one possible implementation, performing visibility detection on a three-dimensional object based on a specified perspective and an index binary tree to obtain visible nodes in the index binary tree, including:
Traversing each node in the index binary tree from the root node;
Performing view cone detection and shielding detection on the current detection node in the index binary tree to obtain a detection result;
Responding to the detection result that the vertex triangle corresponding to the index in the current detection node is completely visible under the appointed visual angle, and outputting the current detection node as a visible node;
Responding to the detection result that the vertex triangle corresponding to the index in the current detection node is partially visible under the appointed visual angle, and respectively taking two child nodes of the current detection node as new current detection nodes to perform view cone detection and shielding detection; optionally, the process of performing the view cone detection and the occlusion detection by using the two child nodes of the current detection node as the new current detection node respectively may include: responding to the detection result that the vertex triangle corresponding to the index in the current detection node is partially visible under the appointed view angle, wherein the current detection node is a non-leaf node in the index binary tree, and respectively taking two child nodes of the current detection node as new current detection nodes to perform view cone detection and shielding detection; responding to the detection result that the vertex triangle corresponding to the index in the current detection node is partially visible under the appointed view angle, wherein the current detection node is a leaf node in the index binary tree, and outputting the current detection node as a visible node;
and stopping the cone detection and the shielding detection on the two child nodes of the current detection node in response to the detection result that the vertex triangle corresponding to the index in the current detection node is completely invisible under the appointed visual angle.
In one possible implementation manner, performing view cone detection and shielding detection on the current detection node to obtain a detection result includes:
and performing view cone detection and shielding detection on the bounding box of the vertex triangle corresponding to the index in the current detection node to obtain a detection result.
In the embodiment of the present application, since the current detection node may include indexes of a plurality of vertex triangles, when the current detection node performs view cone detection and occlusion detection, a bounding box of vertex triangles corresponding to each index in the current detection node may be calculated, that is, the bounding box includes all vertex triangles corresponding to each index in the current detection node, and then the bounding box is subjected to view cone detection and occlusion detection.
In one possible implementation manner, performing view cone detection and shielding detection on the current detection node to obtain a detection result includes:
in response to the current detection node being an invisible node during the display of the previous i frames of pictures, performing view cone detection and shielding detection on the current detection node to obtain the detection result; i is greater than or equal to 1, and i is an integer.
For the case that the current detection node is a visible node in the display process of the previous i-frame picture, the computer equipment can also directly output the current detection node as the visible node.
Because the above visibility detection process based on the index binary tree consumes a certain amount of computing resources and time, executing the complete visibility detection process for every displayed frame would lower the efficiency of picture display and bring larger resource consumption. In this regard, in the embodiment of the present application, when a frame of picture is displayed, the computer device can skip the detection of some of the nodes.
Specifically, because the change of the view angle in the virtual scene is continuous, adjacent frames generally have a high degree of overlap; that is, the content in the current frame and the content in the previous i frames generally overlap to a large extent. Therefore, in the embodiment of the application, if the current detection node was a visible node during the display of the previous i frames, the probability that it is still a visible node is high; in this case the visibility detection step can be skipped and the current detection node rendered directly as a visible node. If the current detection node was not a visible node during the display of the previous i frames, the visibility detection process is performed on it. Through this process, node visibility detection can be effectively reduced, unnecessary visibility detection steps are avoided, and detection efficiency is improved.
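The inter-frame reuse described above can be sketched as follows; the cache structure, function names, and the policy of refreshing the cache only on a real test are assumptions for illustration, not the patent's implementation.

```python
def cull_with_cache(nodes, frame, test_visibility, last_visible, i=1):
    """Per-frame culling that reuses results from the previous i frames.
    `test_visibility(node)` is the full view-cone + occlusion test;
    `last_visible` maps node -> last frame the node tested visible.
    All names here are illustrative."""
    visible = []
    for node in nodes:
        cached = last_visible.get(node)
        if cached is not None and frame - cached <= i:
            visible.append(node)          # visible recently: skip the test
        elif test_visibility(node):
            last_visible[node] = frame    # newly visible: refresh the cache
            visible.append(node)
    return visible
```

Not refreshing the timestamp on a cache hit means every node is re-tested at least once every i+1 frames, bounding how long a stale "visible" verdict can survive.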
Step 406, rendering and displaying the three-dimensional object based on the index contained in the visible node.
In one possible implementation, the number of visible nodes is n, n being an integer greater than or equal to 1; rendering and displaying the three-dimensional object based on the index contained in the visible node, comprising:
In response to n being greater than m, merging vertex triangles corresponding to indexes contained in the n visible nodes into m rendering operations for rendering; m is greater than or equal to 2, and m is an integer;
and respectively rendering vertex triangles corresponding to indexes contained in the n visible nodes through n rendering operations in response to n being not greater than m.
The rendering operation may be a draw call operation.
Since too many fragmented draw calls may burden rendering, in the embodiment of the present application, when the number of visible nodes is large, the computer device may merge some of the visible nodes to reduce the number of draw call operations.
In one possible implementation, in each level of nodes in the index binary tree, the indices of the vertex triangles contained by the respective nodes are contiguous in memory;
Merging vertex triangles corresponding to indexes contained in the n visible nodes into m rendering operations to render, wherein the method comprises the following steps:
In response to a merge condition being satisfied between two visible nodes in the n visible nodes, rendering vertex triangles corresponding to indexes contained in the two visible nodes and vertex triangles corresponding to other indexes of intervals between the indexes contained in the two visible nodes through a single rendering operation;
the merging conditions include: the number of other indexes of the interval between indexes contained by the two visible nodes is smaller than the number threshold.
The interval between indexes contained in the two visible nodes may refer to an address interval in the memory of the indexes contained in the two visible nodes.
In the embodiment of the application, because the indexes of the vertex triangles contained in each node of each layer of the index binary tree are contiguous in memory, for two of the n visible nodes whose address ranges in memory are close or contiguous, the vertex triangles corresponding to their indexes, together with the vertex triangles corresponding to the other indexes in the interval between them, occupy similar positions in the three-dimensional model. Rendering all of these vertex triangles through a single rendering operation therefore only requires additionally rendering a small number of vertex triangles belonging to non-visible nodes, and saves one draw call operation.
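The merging condition can be sketched over the contiguous (first index, index count) ranges of the visible nodes; the function name and the threshold value below are illustrative.

```python
def merge_ranges(ranges, gap_threshold=32):
    """Merge visible nodes' contiguous index ranges into fewer draw calls,
    per the condition above: two ranges whose gap in the index buffer is
    below the threshold are drawn in one call (the few gap triangles are
    rendered as well). `ranges` holds (first, count) pairs; names and the
    threshold are illustrative."""
    merged = []
    for first, count in sorted(ranges):
        if merged:
            last_first, last_count = merged[-1]
            gap = first - (last_first + last_count)
            if gap < gap_threshold:       # small gap: extend the previous call
                merged[-1] = (last_first, first + count - last_first)
                continue
        merged.append((first, count))
    return merged
```

Each tuple in the result can then back one indexed draw over that buffer range, trading a few wasted triangles in the gaps for fewer draw call submissions.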
In another possible implementation manner of the embodiment of the present application, no matter whether the value of n is greater than m, the computer device may merge vertex triangles corresponding to indexes included in the visible nodes satisfying the merge condition in n visible nodes into the same rendering operation for rendering.
In one possible implementation manner, performing view cone detection and shielding detection on the current detection node to obtain a detection result includes:
in response to the current detection node being a node that was not covered, or not completely covered, by the vertex triangles contained in a rendering operation during the display of the previous j frames of pictures, performing view cone detection and shielding detection on the current detection node to obtain the detection result; j is greater than or equal to 1, and j is an integer.
For the problem that the visibility detection process based on the index binary tree consumes certain computing resources and time, in the embodiment of the application, when the current detection node is a node that was covered by a draw call operation during the display of the previous j frames of pictures, the visibility detection step can be skipped and the current detection node rendered directly as a visible node; if the current detection node was not covered by a draw call operation during the display of the previous j frames of pictures, the visibility detection process is performed on it. Through this process, node visibility detection can be effectively reduced, unnecessary visibility detection steps are avoided, and detection efficiency is improved.
In the runtime stage, after traditional bounding-box-based view cone and occlusion culling is applied, the method is further applied to the models that culling judged visible, for finer-grained rejection. The process can be broadly divided into two steps:
step one: for the traversal process of the binary tree shown above, which nodes of the binary tree need to submit a rendering, the algorithm flow may be as shown in fig. 9.
The algorithm shown in fig. 9 may include the steps of:
1) Pointing the pointer of the current detection node to the root node, and starting a recursion process;
2) View cone detection and occlusion detection are carried out on the bounding box of the current detection node; the result may be one of the following three cases:
a) If the current detection node is completely visible, submitting the current detection node to a subsequent draw call generation step, and ending recursion;
b) If the current detection node is completely invisible, the current detection node is ignored, and the recursion is ended;
c) The current detection node is partially visible, and the subsequent steps are continued;
3) Judging whether the current detection node is a leaf node: if so, submitting the current detection node as in step 2)a) and ending the recursion; otherwise, continuing;
4) Directing the pointer of the current detection node to the left child node, jumping to the second step, and continuing the recursion process;
5) Directing the pointer of the current detection node to the right child node, jumping to the second step, and continuing the recursion process;
6) And finishing binary tree traversal, and continuing the subsequent draw call generation step.
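The traversal in steps 1) to 6) above can be sketched as follows, with the view cone and occlusion tests abstracted into a `classify` callback; the node layout and all names are assumptions for illustration.

```python
def collect_visible(node, classify, out=None):
    """Recursive culling pass of steps 1-6 above. `classify(node)` stands in
    for the view-cone + occlusion test on the node's bounding box and
    returns "visible", "invisible", or "partial". Names are illustrative."""
    if out is None:
        out = []
    state = classify(node)
    if state == "visible":                 # 2)a): fully visible, submit whole node
        out.append(node)
    elif state == "partial":
        if node["left"] is None:           # 3): partially visible leaf, submit
            out.append(node)
        else:                              # 4)-5): recurse into both children
            collect_visible(node["left"], classify, out)
            collect_visible(node["right"], classify, out)
    # 2)b): "invisible" nodes are skipped along with their whole subtree
    return out
```

The key property is that a fully visible or fully invisible verdict stops the recursion, so only the boundary of the visible region is expanded down to the leaves.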
Step two: and (3) re-splicing all the child nodes which are output in the step one and need to be rendered into a draw call.
As shown in fig. 10, the 3rd leaf node is culled and two draw calls are finally generated. In practical application of the method, the more accurately child nodes are culled, the more draw calls are generated; therefore, to avoid the rendering burden caused by too many small draw calls, two or more draw calls that are close together can be combined into one, balancing the triangle culling effect against the number of draw calls finally generated.
Through testing, with a leaf node vertex triangle count of 512 and a draw call merge distance of less than 512, the scheme shown in the embodiment of the application achieves a triangle culling rate of 13%-38% and a reduction of 5-24 draw calls. The optimization is highest indoors, better near buildings (with occluders), and still effective in open areas. The test results are shown in table 1 below.
TABLE 1

| Case | Time consuming | Culled faces | DC change | Total faces | Total DC | Culling rate |
|---|---|---|---|---|---|---|
| Gate | 0.04 | 55.9k | -16 | 205.7k | 277 | 27.2% |
| Building side | 0.11 | 90k | -14 | 304.4k | 375 | 29.6% |
| Indoor | 0.07 | 94k | -24 | 246k | 305 | 38.2% |
| Outdoor building group | 0.09 | 78k | -12 | 271k | 339 | 28% |
| Building group side | 0.10 | 59k | -11 | 234k | 344 | 25.2% |
| Open area | 0.10 | 65k | -17 | 263k | 385 | 24.7% |
| Open area top view 1 | 0.10 | 39k | -9 | 291k | 434 | 13.4% |
| Open area top view 2 | 0.09 | 38k | -5 | 205k | 320 | 18.5% |
The CPU (Central Processing Unit) overhead of the scheme mainly lies in traversing the binary tree and testing the visibility of the nodes: the deeper the leaf nodes, or the fewer triangles a leaf node contains, the better the culling effect, with the side effect that the CPU overhead increases correspondingly. The application balances CPU time consumption and culling effect in the following two ways.
1) The culling result is very likely to remain unchanged across several consecutively rendered frames, which opens up an inter-frame data caching optimization. For example, a node judged visible in a certain frame can be considered still visible in the following frames (the number of frames is configurable), and only invisible nodes need to be checked for becoming visible; this removes a very large proportion of the runtime view cone and occlusion test overhead.
2) By caching the triangle list ranges corresponding to the draw calls finally generated in the previous frame, based on the assumption that those draw calls are likely still valid in subsequent frames, nodes completely covered by the cached triangle list ranges need not be expanded and tested, saving considerable binary tree traversal time.
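The coverage test described above can be sketched with the same assumed (first, count) range representation used for draw calls; the function name is illustrative.

```python
def covered_by_cached_calls(node_first, node_count, cached_ranges):
    """Sketch of the triangle-list-range cache above: a node whose whole
    index range [node_first, node_first + node_count) lies inside one draw
    call range cached from a previous frame need not be expanded or
    re-tested. Names are illustrative."""
    end = node_first + node_count
    return any(first <= node_first and end <= first + count
               for first, count in cached_ranges)
```

Because the node-ordering step made every node's indexes contiguous, this containment check is a pair of comparisons per cached range, far cheaper than descending the subtree.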
Through the two optimizations, about 40% of CPU overhead optimization effect can be obtained in a typical scene, the rendering speed can be increased in practical application, and the power consumption of the mobile equipment is reduced; or realizing deeper binary tree clipping under the condition of equal cost by adjusting parameters, thereby obtaining better clipping effect.
In the application, other methods may be tried for the spatial division of child nodes; in addition, different draw call merging strategies can be adopted according to the draw call overhead of the actual operating platform, so as to maximize the optimization effect of the method.
The application provides an index-buffer-based culling method: a geometric body is divided into hierarchical clusters, the index buffer is rearranged according to the hierarchical clusters, and during real-time rendering the ranges of the hierarchical clusters to submit are selected according to the culling result, thereby reducing the number of triangles drawn in real time. The method can be widely applied to world scene rendering on mobile devices or high-performance console/PC devices, achieving the goal of rendering performance optimization.
In summary, according to the scheme disclosed by the embodiment of the application, when the three-dimensional object in the virtual scene is rendered, the visibility detection is performed through the index binary tree of the three-dimensional model of the three-dimensional object to obtain the visible node where the visible vertex triangle in the three-dimensional model is located, and the rendering of the three-dimensional object is performed based on the visible node.
Fig. 11 illustrates a block diagram of a three-dimensional object rendering apparatus according to an exemplary embodiment of the present application. The three-dimensional object rendering apparatus may be applied in a computer device to perform all or part of the steps in the method as shown in fig. 3 or fig. 4. As shown in fig. 11, the three-dimensional object rendering apparatus includes:
A reading module 1101, configured to, in response to displaying a three-dimensional object in a virtual scene at a specified perspective, read an index binary tree of a three-dimensional model of the three-dimensional object, where each node in the index binary tree includes an index of at least one vertex triangle in the three-dimensional model, and child nodes in the index binary tree are obtained by clustering vertex triangles corresponding to parent nodes of the child nodes;
A detection module 1102, configured to perform visibility detection on the three-dimensional object based on the specified perspective and the index binary tree, to obtain visible nodes in the index binary tree; the visible nodes are nodes in which the vertex triangles corresponding to the contained indexes are visible under the appointed view angle;
and a rendering module 1103, configured to render and display the three-dimensional object based on the index included in the visible node.
In one possible implementation, the detection module 1102 is configured to,
Traversing each node in the index binary tree from a root node;
Performing view cone detection and shielding detection on the current detection node in the index binary tree to obtain a detection result;
Responding to the detection result that the vertex triangle corresponding to the index in the current detection node is completely visible under the appointed view angle, and outputting the current detection node as the visible node;
And responding to the detection result that the vertex triangle corresponding to the index in the current detection node is partially visible under the appointed visual angle, and respectively taking two child nodes of the current detection node as new current detection nodes to perform view cone detection and shielding detection.
In a possible implementation manner, the detection module 1102 is configured to perform view frustum detection and occlusion detection on the bounding box of the vertex triangles corresponding to the indexes in the current detection node, so as to obtain the detection result.
In a possible implementation manner, the detection module 1102 is configured to, in response to the current detection node being an invisible node during presentation of the previous i frames, perform view frustum detection and occlusion detection on the current detection node to obtain the detection result; i is greater than or equal to 1, and i is an integer.
In one possible implementation, the rendering module 1103 is configured to,
In response to n being greater than m, merging vertex triangles corresponding to indexes contained in n visible nodes into m rendering operations for rendering; m is greater than or equal to 2, and m is an integer;
And respectively rendering vertex triangles corresponding to indexes contained in the n visible nodes through n rendering operations in response to n being not greater than m.
In one possible implementation, in each level of nodes in the index binary tree, the indexes of the vertex triangles contained in the respective nodes are contiguous in memory; the rendering module 1103 is configured to, in response to a merge condition being satisfied between two visible nodes among the n visible nodes, render, through a single rendering operation, the vertex triangles corresponding to the indexes contained in the two visible nodes together with the vertex triangles corresponding to the other indexes in the interval between them;
the merge condition includes: the number of other indexes in the interval between the indexes contained in the two visible nodes is less than a number threshold.
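Because the indexes contained in sibling nodes are contiguous in memory, each visible node corresponds to a contiguous index range, and two ranges can be drawn by one rendering operation when the gap between them is small. The following sketch illustrates the merge condition; the half-open range representation and the `gap_threshold` parameter are assumptions for illustration, not the patent's interface.

```python
def merge_draw_ranges(ranges, gap_threshold):
    """Merge half-open index ranges (start, end) of visible nodes.
    Two consecutive ranges are merged when the number of other indexes
    between them is below `gap_threshold`: a few invisible triangles
    are drawn anyway in exchange for one fewer rendering operation."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start - merged[-1][1] < gap_threshold:
            merged[-1][1] = max(merged[-1][1], end)  # merge condition met
        else:
            merged.append([start, end])  # start a new rendering operation
    return [tuple(r) for r in merged]
```

With `gap_threshold = 5`, the ranges `(0, 10)` and `(12, 20)` (gap of 2) collapse into the single draw range `(0, 20)`, while a distant range such as `(40, 50)` remains a separate rendering operation.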
In a possible implementation manner, the detection module 1102 is configured to, in response to the current detection node being a node that is not covered, or not completely covered, by the vertex triangles included in the rendering operations during presentation of the previous j frames, perform view frustum detection and occlusion detection on the current detection node to obtain the detection result; j is greater than or equal to 1, and j is an integer.
In one possible implementation, the apparatus further includes:
The data acquisition module is used for acquiring vertex data of the three-dimensional model; the vertex data comprises position data of each vertex in the three-dimensional model and indexes of vertex triangles where each vertex in the three-dimensional model is located;
The classification module is used for grouping all vertex triangles in the three-dimensional model into a root node based on the vertex data;
And the clustering module is used for clustering each vertex triangle in the three-dimensional model step by step from the root node according to the position of each vertex triangle in the three-dimensional model to obtain the index binary tree.
In one possible implementation, the clustering module is configured to,
Acquiring two clustering center points in the current clustering node;
Clustering the vertex triangles in the current clustering node into two vertex triangle groups according to the positions of the vertex triangles in the current clustering node and the two clustering center points;
when a stopping condition is met, constructing two child nodes of the current cluster node based on the two vertex triangle groups;
and respectively taking the two child nodes of the current clustering node as new current clustering nodes.
In one possible implementation, the apparatus further includes:
And the updating module is used for updating the two clustering center points based on the two vertex triangle groups when the stopping condition is not met.
In one possible implementation, the updating module is configured to,
when the stopping condition is not met, respectively acquire the center points of the bounding boxes of the two vertex triangle groups;
and take the center points of the bounding boxes of the two vertex triangle groups as the new cluster center points.
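One split step of the clustering loop above might look like the following sketch, where each vertex triangle is represented by a 3D position (e.g. its centroid). The seed choice, the iteration cap, and the function names are assumptions introduced here; only the center-update rule, moving each center to the center of its group's bounding box, follows the text above.

```python
def bbox_center(points):
    """Center point of the axis-aligned bounding box of a point group."""
    lo = [min(p[i] for p in points) for i in range(3)]
    hi = [max(p[i] for p in points) for i in range(3)]
    return tuple((a + b) / 2.0 for a, b in zip(lo, hi))

def dist2(a, b):
    """Squared Euclidean distance (sufficient for nearest-center tests)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def split_cluster_node(points, c0, c1, max_iters=10):
    """Cluster the triangles of the current node into two groups around
    the two cluster center points; while the stop condition is not met,
    move each center to the center of its group's bounding box."""
    prev, g0, g1 = None, [], []
    for _ in range(max_iters):
        g0 = [p for p in points if dist2(p, c0) <= dist2(p, c1)]
        g1 = [p for p in points if dist2(p, c0) > dist2(p, c1)]
        if (g0, g1) == prev or not g0 or not g1:
            break  # stop condition: stable (or degenerate) assignment
        prev = (g0, g1)
        c0, c1 = bbox_center(g0), bbox_center(g1)  # update cluster centers
    return g0, g1  # the two vertex triangle groups -> two child nodes
```

Applying this split recursively, with each group becoming a new current clustering node, yields the index binary tree described above.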
In summary, according to the solution disclosed in the embodiments of the present application, when a three-dimensional object in a virtual scene is rendered, visibility detection is performed through the index binary tree of the three-dimensional model of the three-dimensional object to obtain the visible nodes, that is, the nodes whose indexes correspond to visible vertex triangles in the three-dimensional model, and the three-dimensional object is rendered based on the visible nodes.
Fig. 12 shows a block diagram of a computer device 1200 according to an exemplary embodiment of the present application. The computer device 1200 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The computer device 1200 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, the computer device 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor.
Memory 1202 may include one or more computer-readable storage media, which may be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1202 is used to store at least one computer instruction for execution by processor 1201 to implement the three-dimensional object rendering methods provided by the method embodiments of the present application.
In some embodiments, the computer device 1200 may also optionally include: a peripheral interface 1203, and at least one peripheral. The processor 1201, the memory 1202, and the peripheral interface 1203 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1203 via buses, signal lines, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, a display 1205, a camera assembly 1206, audio circuitry 1207, and a power supply 1209.
In some embodiments, computer device 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: an acceleration sensor 1211, a gyro sensor 1212, a pressure sensor 1213, an optical sensor 1215, and a proximity sensor 1216.
Those skilled in the art will appreciate that the architecture shown in fig. 12 is not limiting as to the computer device 1200, and may include more or fewer components than shown, or may combine certain components, or employ a different arrangement of components.
Fig. 13 shows a block diagram of a computer device 1300 according to an exemplary embodiment of the present application. The computer device may be implemented as the computer device in the above-described aspects of the present application. The computer device 1300 includes a central processing unit (CPU) 1301, a system memory 1304 including a random access memory (RAM) 1302 and a read-only memory (ROM) 1303, and a system bus 1305 connecting the system memory 1304 and the central processing unit 1301. The computer device 1300 also includes a basic input/output (I/O) system 1306 to facilitate the transfer of information between the various devices within the computer, and a mass storage device 1307 for storing an operating system 1313, application programs 1314, and other program modules 1315.
The basic input/output system 1306 includes a display 1308 for displaying information, and an input device 1309, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 1308 and the input device 1309 are connected to the central processing unit 1301 through an input output controller 1310 connected to the system bus 1305. The basic input/output system 1306 may also include an input/output controller 1310 for receiving and processing input from a keyboard, mouse, or electronic stylus, among a plurality of other devices. Similarly, the input output controller 1310 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1307 is connected to the central processing unit 1301 through a mass storage controller (not shown) connected to the system bus 1305. The mass storage device 1307 and its associated computer-readable media provide non-volatile storage for the computer device 1300. That is, the mass storage device 1307 may include a computer-readable medium (not shown) such as a hard disk or a compact disc read-only memory (CD-ROM) drive.
The computer-readable medium may include computer storage media and communication media without loss of generality. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to the above. The system memory 1304 and the mass storage device 1307 described above may be collectively referred to as memory.
According to various embodiments of the disclosure, the computer device 1300 may also operate by being connected to a remote computer on a network, such as the internet. I.e., the computer device 1300 may be connected to the network 1312 via a network interface unit 1311 coupled to the system bus 1305, or alternatively, the network interface unit 1311 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further stores at least one computer instruction, and the central processing unit 1301 implements all or part of the steps of the three-dimensional object rendering method shown in the above embodiments by loading and executing the at least one computer instruction.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as a memory, comprising at least one computer instruction executable by a processor to perform all or part of the steps of the method shown in any of the embodiments of fig. 3 or 4 described above. For example, the non-transitory computer readable storage medium may be ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium and executes the computer instructions to cause the computer device to perform all or part of the steps of the method shown in any of the embodiments of fig. 3 or fig. 4 described above.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. A method of rendering a three-dimensional object, the method comprising:
In response to displaying a three-dimensional object in a virtual scene at a specified perspective, reading an index binary tree of a three-dimensional model of the three-dimensional object, wherein each node in the index binary tree contains an index of at least one vertex triangle in the three-dimensional model, and child nodes in the index binary tree are obtained by clustering the vertex triangles corresponding to the parent nodes of the child nodes;
performing visibility detection on the three-dimensional object based on the specified perspective and the index binary tree to obtain visible nodes in the index binary tree; the visible nodes are nodes in which the vertex triangles corresponding to the contained indexes are visible at the specified perspective;
rendering and displaying the three-dimensional object based on the index contained in the visible node.
2. The method of claim 1, wherein the performing visibility detection on the three-dimensional object based on the specified perspective and the index binary tree to obtain visible nodes in the index binary tree comprises:
traversing each node in the index binary tree from a root node;
Performing view frustum detection and occlusion detection on the current detection node in the index binary tree to obtain a detection result;
In response to the detection result indicating that the vertex triangles corresponding to the indexes in the current detection node are completely visible at the specified perspective, outputting the current detection node as the visible node;
And in response to the detection result indicating that the vertex triangles corresponding to the indexes in the current detection node are partially visible at the specified perspective, taking the two child nodes of the current detection node respectively as new current detection nodes for view frustum detection and occlusion detection.
3. The method according to claim 2, wherein the performing the view frustum detection and the occlusion detection on the current detection node to obtain the detection result comprises:
performing view frustum detection and occlusion detection on the bounding box of the vertex triangles corresponding to the indexes in the current detection node to obtain the detection result.
4. The method according to claim 2, wherein the performing the view frustum detection and the occlusion detection on the current detection node to obtain the detection result comprises:
in response to the current detection node being an invisible node during presentation of the previous i frames, performing view frustum detection and occlusion detection on the current detection node to obtain the detection result; i is greater than or equal to 1, and i is an integer.
5. The method of claim 1, wherein the number of visible nodes is n, n being an integer greater than or equal to 1; the rendering the three-dimensional object based on the index contained in the visible node includes:
In response to n being greater than m, merging vertex triangles corresponding to indexes contained in n visible nodes into m rendering operations for rendering; m is greater than or equal to 2, and m is an integer;
And respectively rendering vertex triangles corresponding to indexes contained in the n visible nodes through n rendering operations in response to n being not greater than m.
6. The method of claim 5, wherein in each level of nodes in the index binary tree, the indexes of the vertex triangles contained in the respective nodes are contiguous in memory;
the merging vertex triangles corresponding to indexes contained in n visible nodes into m rendering operations for rendering comprises:
in response to a merge condition being satisfied between two visible nodes among the n visible nodes, rendering, through a single rendering operation, the vertex triangles corresponding to the indexes contained in the two visible nodes together with the vertex triangles corresponding to the other indexes in the interval between them;
the merge condition includes: the number of other indexes in the interval between the indexes contained in the two visible nodes is less than a number threshold.
7. The method according to claim 5 or 6, wherein the performing the view frustum detection and the occlusion detection on the current detection node to obtain the detection result comprises:
in response to the current detection node being a node that is not covered, or not completely covered, by the vertex triangles included in the rendering operations during presentation of the previous j frames, performing view frustum detection and occlusion detection on the current detection node to obtain the detection result; j is greater than or equal to 1, and j is an integer.
8. The method of any of claims 1 to 7, further comprising, before reading the index binary tree of the three-dimensional model of the three-dimensional object in response to displaying the three-dimensional object in the virtual scene at the specified perspective:
Obtaining vertex data of the three-dimensional model; the vertex data comprises position data of each vertex in the three-dimensional model and indexes of vertex triangles where each vertex in the three-dimensional model is located;
grouping all vertex triangles in the three-dimensional model into a root node based on the vertex data;
And from the root node, clustering each vertex triangle in the three-dimensional model step by step according to the position of each vertex triangle in the three-dimensional model to obtain the index binary tree.
9. The method of claim 8, wherein starting from the root node, clustering each vertex triangle in the three-dimensional model step by step according to the position of each vertex triangle in the three-dimensional model, to obtain the index binary tree, comprises:
Acquiring two clustering center points in the current clustering node;
Clustering the vertex triangles in the current clustering node into two vertex triangle groups according to the distances from the positions of the vertex triangles in the current clustering node to the two cluster center points;
when a stopping condition is met, constructing two child nodes of the current cluster node based on the two vertex triangle groups;
and respectively taking the two child nodes of the current clustering node as new current clustering nodes.
10. The method according to claim 9, wherein the method further comprises:
And when the stopping condition is not met, updating the two clustering center points based on the two vertex triangle groups.
11. The method of claim 10, wherein updating two of the cluster center points based on two of the vertex triangle groups when a stop condition is not satisfied comprises:
when the stopping condition is not met, respectively acquiring the center points of the bounding boxes of the two vertex triangle groups;
and taking the center points of the bounding boxes of the two vertex triangle groups as the new cluster center points.
12. A three-dimensional object rendering apparatus, the apparatus comprising:
The reading module is used for responding to the three-dimensional object in the virtual scene displayed at the appointed visual angle, reading an index binary tree of a three-dimensional model of the three-dimensional object, wherein each node in the index binary tree respectively comprises an index of at least one vertex triangle in the three-dimensional model, and child nodes in the index binary tree are obtained by clustering vertex triangles corresponding to father nodes of the child nodes;
The detection module is used for carrying out visibility detection on the three-dimensional object based on the appointed visual angle and the index binary tree to obtain visible nodes in the index binary tree; the visible nodes are nodes in which the vertex triangles corresponding to the contained indexes are visible under the appointed view angle;
And the rendering module is used for rendering and displaying the three-dimensional object based on the index contained in the visible node.
13. A computer device, comprising a processor and a memory, wherein the memory stores at least one computer instruction, and the at least one computer instruction is loaded and executed by the processor to implement the three-dimensional object rendering method according to any one of claims 1 to 11.
14. A computer readable storage medium having stored therein at least one computer instruction that is loaded and executed by a processor to implement the three-dimensional object rendering method of any one of claims 1 to 11.
15. A computer program product, characterized in that the computer program product comprises computer instructions that are read and executed by a processor of a computer device, so that the computer device performs the three-dimensional object rendering method according to any of claims 1 to 11.
CN202211425727.5A 2022-11-14 2022-11-14 Three-dimensional object rendering method, three-dimensional object rendering device, computer equipment and storage medium Pending CN118037914A (en)


Publication Number: CN118037914A; Publication Date: 2024-05-14; Family ID: 90983003; Country: CN


Legal Events
PB01 Publication