CN112933599B - Three-dimensional model rendering method, device, equipment and storage medium - Google Patents

Three-dimensional model rendering method, device, equipment and storage medium Download PDF

Info

Publication number
CN112933599B
CN112933599B (application CN202110377711.0A)
Authority
CN
China
Prior art keywords
target
dimensional model
triangle
rendering
visible
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110377711.0A
Other languages
Chinese (zh)
Other versions
CN112933599A (en)
Inventor
陈玉钢
王钦佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110377711.0A priority Critical patent/CN112933599B/en
Publication of CN112933599A publication Critical patent/CN112933599A/en
Application granted granted Critical
Publication of CN112933599B publication Critical patent/CN112933599B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 - Changing parameters of virtual cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/02 - Non-photorealistic rendering
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 - Details of the user interface
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 - Methods for processing data by generating or executing the game program
    • A63F2300/66 - Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a three-dimensional model rendering method, a three-dimensional model rendering device, three-dimensional model rendering equipment and a storage medium, and belongs to the field of computer graphics. The method comprises the following steps: in the off-line calculation stage, acquiring user input data; according to the user input data, dividing a view angle range aiming at a target three-dimensional model in a three-dimensional space to obtain at least two view angle areas; acquiring a triangular visible set of the target three-dimensional model in different visual angle areas; constructing an index buffer area according to the triangular visible sets in different visual angle areas; wherein the index buffer is used for storing the vertex index of each triangle in the triangle visible set; in the real-time rendering stage, determining a target subinterval in the index buffer area according to the current visual angle; and rendering the target three-dimensional model according to the triangular visible set corresponding to the target subinterval. The method and the device can optimize the rendering performance of the graphic product.

Description

Three-dimensional model rendering method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer graphics, and in particular, to a method, an apparatus, a device, and a storage medium for rendering a three-dimensional model.
Background
In a graphics rendering scene, Occlusion Culling (Occlusion Culling) refers to cancelling rendering of a three-dimensional model when the three-dimensional model is obscured by other parts (such as other three-dimensional models) in the scene and thus cannot be seen in a visual field of a virtual camera. For example, occlusion relationships generally exist among three-dimensional models in a game scene, and invisible three-dimensional models can be selected to be removed by using the occlusion relationships among the three-dimensional models, rendering of the invisible three-dimensional models is cancelled, and only visible three-dimensional models are rendered, so that the rendering amount of each frame of game picture is reduced, the overhead of a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU) is reduced, and the purpose of optimizing the rendering performance of graphic products is achieved.
At present, in graphics rendering and its industrial applications, occlusion culling schemes are mainly divided into two categories. One is model-granularity occlusion culling performed on the CPU side; the other is pixel-granularity occlusion culling implemented by modifying the GPU rendering pipeline. However, in today's graphics application scenarios, the number of on-screen vertices and triangles in a scene can reach several million or even tens of millions, and in such cases neither of the above occlusion culling schemes can effectively reduce the rendering burden without introducing a larger computational load of its own. Therefore, how to render three-dimensional models while further optimizing the rendering performance of graphics products has become an urgent problem for those skilled in the art.
Disclosure of Invention
The embodiment of the application provides a three-dimensional model rendering method, a three-dimensional model rendering device, three-dimensional model rendering equipment and a storage medium, and the rendering performance of a graphic product can be optimized. The technical scheme is as follows:
in one aspect, a three-dimensional model rendering method is provided, and the method includes:
in the off-line calculation stage, acquiring user input data; according to the user input data, dividing a view angle range aiming at a target three-dimensional model in a three-dimensional space to obtain at least two view angle areas;
in an off-line calculation stage, acquiring a triangular visible set of the target three-dimensional model in different view angle areas; constructing an index buffer area according to the triangular visible sets of the different visual angle areas; wherein the index buffer is used for storing the vertex index of each triangle in the triangle visible set;
in the real-time rendering stage, determining a target subinterval in the index buffer area according to the current visual angle; and rendering the target three-dimensional model according to the triangular visible set corresponding to the target subinterval.
In another aspect, there is provided a three-dimensional model rendering apparatus, the apparatus including:
the dividing module is configured to acquire user input data in an offline computing stage; according to the user input data, dividing a view angle range aiming at a target three-dimensional model in a three-dimensional space to obtain at least two view angle areas;
the acquisition module is configured to acquire a triangular visible set of the target three-dimensional model in different view angle areas in an offline calculation stage;
the construction module is configured to construct an index buffer area according to the triangular visible sets of different view angle areas in an offline calculation stage; the index buffer area is used for storing the vertex index of each triangle in the triangle visible set;
a rendering module configured to determine a target subinterval in the index buffer according to a current perspective during a real-time rendering phase; and rendering the target three-dimensional model according to the triangular visible set corresponding to the target subinterval.
In some embodiments, the range of viewing angles is spherical; the partitioning module configured to:
determining an axis alignment bounding box of the target three-dimensional model according to the vertex data of the target three-dimensional model; determining a unit spherical surface by taking the central point of the axis alignment bounding box as the central point of the target three-dimensional model; and carrying out area division on the unit spherical surface according to the longitude and the latitude to obtain at least two visual angle areas.
In some embodiments, the acquisition module is configured to:
for any view angle area, uniformly sampling at least two view angles in the view angle area;
acquiring a triangular visible set of the target three-dimensional model under the at least two visual angles;
and determining a union of the triangle visible sets of the at least two visual angles as a triangle visible set of the visual angle area.
In some embodiments, the acquisition module is configured to:
numbering all triangles forming the target three-dimensional model in sequence for any visual angle;
assigning unique colors to each triangle forming the target three-dimensional model according to the numbering sequence;
rendering the target three-dimensional model to a frame buffer at the view angle;
reading back and analyzing the frame buffer result; in response to the analyzed color corresponding to the triangle number, determining the triangle indicated by the triangle number as a visible triangle at the view angle;
wherein all visible triangles under the view angle constitute a visible set of triangles for the view angle.
In some embodiments, the acquisition module is configured to:
setting each triangle constituting the target three-dimensional model to be invisible;
setting the depth test as a near principle, and starting depth writing; under any visual angle, obtaining depth information of each pixel of the target three-dimensional model by drawing the target three-dimensional model; writing the depth information into a depth buffer;
setting the depth test to be in the principle of equal value, and closing the depth writing; for any one of the triangles constituting the target three-dimensional model, in response to the triangle passing the depth test requiring equal values and the triangle having a drawing pixel amount greater than zero, setting the triangle from invisible to visible;
wherein all visible triangles under the view angle constitute a visible set of triangles for the view angle.
In some embodiments, the proximity principle refers to: in response to the occurrence of pixel coincidence at the same pixel position, storing the depth value with the minimum value of the pixel position in the depth buffer area; wherein, the smaller the depth value, the closer the distance to the virtual camera in the three-dimensional space;
the values being equal means that the current depth value is consistent with the corresponding depth value stored in the depth buffer.
In some embodiments, the construction module is configured to:
numbering the at least two view angle regions in sequence;
and sequentially arranging the vertex indexes of the triangular visible sets of each visual angle area according to the numbering sequence to obtain the index buffer area.
In some embodiments, the construction module is configured to:
numbering the at least two view angle regions in sequence;
arranging the vertex indexes of the triangular visible sets of all the visual angle areas in sequence according to the numbering sequence to obtain the index buffer area; wherein, a visible set of triangles corresponds to a sub-interval of the index buffer;
and reordering the vertex indexes of the triangles of the subintervals, and merging the vertex indexes of the triangles which repeatedly appear in the adjacent subintervals to obtain the compressed index buffer area.
In some embodiments, the apparatus further comprises:
a storage module configured to store the index buffer into a model file during an offline computation phase.
In some embodiments, the rendering module is configured to:
reading the index buffer area from the model file in a real-time rendering stage;
acquiring a rotation matrix of the target three-dimensional model relative to a world coordinate system;
determining a direction vector of the current visual angle relative to the central point of the target three-dimensional model according to the rotation matrix, the current visual angle and the central point position of the target three-dimensional model;
determining a target visual angle area where the direction vector is located;
and determining a subinterval corresponding to the target view angle area in the index buffer area as the target subinterval.
In some embodiments, the rendering module is configured to:
in a real-time rendering stage, acquiring the offset of the target subinterval in the index buffer area; acquiring the drawing quantity of triangles corresponding to the target subintervals;
and submitting a drawing instruction to a graphic processing unit, wherein the drawing instruction comprises the offset and the triangle drawing number, and the graphic processing unit determines the triangle visible set according to the offset and the triangle drawing number to finish rendering.
In some embodiments, the partitioning module is configured to:
dividing two areas of which the latitude values on the unit spherical surface are larger than a target threshold value into two independent visual angle areas;
and uniformly dividing the area with the latitude value not greater than the target threshold into N viewing angle areas according to longitude, wherein N is a positive integer not less than 2.
In another aspect, a computer apparatus is provided, the apparatus comprising a processor and a memory, the memory having stored therein at least one program code, the at least one program code being loaded into and executed by the processor to implement the three-dimensional model rendering method described above.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, and the at least one program code is loaded and executed by a processor to implement the above-mentioned three-dimensional model rendering method.
In another aspect, a computer program product or a computer program is provided, the computer program product or the computer program comprising computer program code, the computer program code being stored in a computer-readable storage medium, the computer program code being read by a processor of a computer device from the computer-readable storage medium, the computer program code being executed by the processor to cause the computer device to perform the three-dimensional model rendering method described above.
In the offline calculation stage, the view angle range of a three-dimensional model is divided into a plurality of view angle regions, the triangle visible sets of the three-dimensional model in these view angle regions are queried, and a view-angle-dependent index buffer is then constructed based on the triangle visible sets of the different view angle regions. In the real-time rendering stage, the corresponding subinterval is queried in the view-angle-dependent index buffer according to the current view angle, and a DrawCall is formed and submitted to the GPU for rendering. Because the view-angle-dependent index buffer is constructed in the offline calculation stage, rendering can be completed in the real-time rendering stage simply by querying the corresponding subinterval of the index buffer according to the current view angle. With this scheme, fine-grained occlusion culling is achieved in the offline computing stage, the number of vertices and triangles processed during model rendering is reduced in the real-time rendering stage, GPU overhead is reduced, and rendering performance is improved. To sum up, the present application guarantees the fineness of occlusion culling without introducing extra performance consumption, and can optimize the rendering performance of graphics products.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a schematic diagram of an implementation environment related to a three-dimensional model rendering method provided in an embodiment of the present application;
FIG. 2 is an interface schematic of a tool blueprint provided by an embodiment of the present application;
FIG. 3 is a diagram illustrating a frame buffering result according to an embodiment of the present disclosure;
FIG. 4 is a schematic view of a visualization interface for self-occlusion culling according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of an offline computation phase according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of an online rendering stage according to an embodiment of the present disclosure;
FIG. 7 is a flowchart of a three-dimensional model rendering method according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating a region division of a viewing angle range according to an embodiment of the present disclosure;
FIG. 9 is a schematic flow chart of an occlusion query provided in an embodiment of the present application;
FIG. 10 is a diagram of an index buffer according to an embodiment of the present disclosure;
FIG. 11 is a diagram of another index buffer provided in an embodiment of the present application;
FIG. 12 is a diagram of another index buffer provided in an embodiment of the present application;
FIG. 13 is a diagram of another index buffer provided in an embodiment of the present application;
FIG. 14 is a diagram of another index buffer provided in an embodiment of the present application;
FIG. 15 is a diagram of another index buffer provided in an embodiment of the present application;
FIG. 16 is a diagram of another index buffer provided in an embodiment of the present application;
FIG. 17 is a graph illustrating a neighboring repetition rate and compression ratio provided by an embodiment of the present application;
fig. 18 is a schematic structural diagram of a three-dimensional model rendering apparatus according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," and the like, in this application, are used for distinguishing between similar items and items that have substantially the same function or similar functionality, and it should be understood that "first," "second," and "nth" do not have any logical or temporal dependency, nor do they define a quantity or order of execution. It will be further understood that, although the following description uses the terms first, second, etc. to describe various elements, these elements should not be limited by these terms.
These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of various examples. The first element and the second element may both be elements, and in some cases, may be separate and distinct elements.
In this application, at least one means one or more; for example, at least one element may be any integer number of elements greater than or equal to one, such as one element, two elements, three elements, and the like. At least two means two or more; for example, at least two elements may be any integer number of elements greater than or equal to two, such as two elements, three elements, etc.
The embodiment of the application provides a three-dimensional model rendering scheme. Illustratively, the solution relates to the field of Cloud Technology.
The cloud technology is a hosting technology for unifying series resources such as hardware, software, network and the like in a wide area network or a local area network to realize calculation, storage, processing and sharing of data. In addition, the cloud technology can also be a general term of a network technology, an information technology, an integration technology, a management platform technology, an application technology and the like based on cloud computing business model application, can form a resource pool, is used as required, and is flexible and convenient. Cloud computing technology will become an important support. Background services of technical network systems require a large amount of computing and storage resources, such as video websites, picture-like websites and more portal websites. With the high development and application of the internet industry, each article may have an own identification mark and needs to be transmitted to a background system for logic processing, data of different levels can be processed separately, and various industry data need strong system background support and can be realized only through cloud computing.
In some embodiments, the embodiments of the present application may relate to data transmission in the cloud technology field, that is, bidirectional data transmission may be performed between a terminal and a server, which is not specifically limited in the embodiments of the present application.
In other embodiments, the embodiments of the present application may also relate to Cloud Gaming (Cloud Gaming) in the Cloud technology field. The cloud game may also be called a game On Demand (Gaming), which is an online game technology based On a cloud computing technology. Cloud gaming technology enables light-end devices (Thin clients) with relatively limited graphics processing and data computing capabilities to run high-quality games. In a cloud game scene, a game is not executed at a game terminal of a player, but is executed in a cloud server, the cloud server renders the game scene into a video and audio stream, and the video and audio stream is transmitted to the game terminal of the player through a network. The player game terminal does not need to have strong graphic operation and data processing capacity, and only needs to have basic streaming media playing capacity and capacity of acquiring player input instructions and sending the instructions to the cloud server.
Abbreviations or key terms that may be mentioned in the embodiments of the present application are described below.
Rendering: the key technology of three-dimensional graphics is to correctly display a three-dimensional object model on an electronic screen.
Three-dimensional model: composed of vertex data, triangle indices, and textures. Wherein, three non-collinear points can form a plane, and the plane is a triangle. Illustratively, one face of a cube may consist of 2 triangles, so one face has 2 triangle indices.
Optionally, the three-dimensional model refers to an object model appearing in a game scene in the embodiment of the present application.
Optionally, the object models comprise static object models and dynamic object models; static object models refer to objects that are at rest at all times in a game scene, such as hills, houses, walls, factories, streets, and so on; the dynamic object model refers to an object capable of moving in a game scene, for example, a player-controlled character object is the dynamic object model.
Rendering Optimization (Rendering Optimization): in the graphic rendering, a lot of performance bottlenecks exist, and the rendering optimization refers to finding the performance bottlenecks and performing targeted optimization so as to improve the rendering efficiency.
Occlusion Culling (Occlusion Culling): a method for eliminating the three-dimensional model by finding out the shielding information in the game scene is disclosed. In short, occlusion culling is the rendering of one three-dimensional model when it is occluded from view by other three-dimensional models.
In other words, the occlusion elimination refers to canceling the rendering of the three-dimensional model when the three-dimensional model is blocked by other parts in the game scene, such as other three-dimensional models, and thus the three-dimensional model cannot be seen in the visual field of the virtual camera. Illustratively, occlusion relations exist among three-dimensional models in a game scene, invisible three-dimensional models are removed by utilizing the occlusion relations among the three-dimensional models, and rendering operation is not carried out.
Self-occlusion culling: finding occlusion relationships at the granularity of the model's own triangles or pixels and culling accordingly.
Visual field: the visible range of the virtual camera (which corresponds to the player-controlled character object) in a game scene, i.e., the range that can be seen.
GPU: also called a display core, visual processor, or display chip; a microprocessor dedicated to image and graphics related operations on personal computers, game consoles, and mobile devices (such as tablet computers and smartphones).
Back-face culling: a culling method that typically occurs at the GPU stage. During rasterization, the GPU judges whether to cull a triangle according to the angle between the triangle's normal and the current view direction; a triangle is not rendered if it faces away from the current view. In short, back-face culling refers to discarding fragments that face away from the viewer.
Overdraw: because the rendering order of objects is not constrained and the GPU shades pixels in no particular order, pixels close to the current view angle may be drawn last, so some pixels in the picture are repeatedly erased and rewritten, causing redundant drawing; such redundant drawing is called overdraw. For example, when multiple objects overlap in the same region, multiple fragments are written to one pixel, while only the front-most object is ultimately visible.
Offline calculation stage: in rendering optimization, the large-scale intensive computation of many schemes is pre-computed before the actual rendering runs; this is the offline computation stage of the optimization scheme.
Real-time rendering stage: the real-time running stage of a graphics product, in which the computer device usually needs to complete the rendering of a frame and submit it to the display in a very short time. The number of frames submitted per second is called the frame rate; in general, to ensure the smoothness of dynamic pictures, the frame rate may be 30 frames, 60 frames, or more.
Rendering pipeline: the graphics rendering process running in the GPU. Generally, the vertex shader, rasterization, and pixel shader stages of a rendering pipeline are discussed. By writing shader code, developers can flexibly control how the GPU renders.
Vertex shader: a required stage of the rendering pipeline, which processes the vertices of the three-dimensional model one by one according to the shader code and outputs the results to the next stage.
Pixel shader: a required stage of the rendering pipeline, which performs shading calculations on the rasterized pixels according to the shader code and outputs them to a frame buffer after the pixels pass their tests, completing one pass of the rendering pipeline.
Depth buffer: a buffer in the GPU that records the depth information corresponding to the pixels in the frame buffer; it is generally used to determine the front-to-back order of pixels during rendering so as to ensure a correct rendering result. A developer can specify the depth buffer switch, the depth testing method, and the depth writing switch through instructions, flexibly meeting rendering requirements.
Occlusion Query: a query function supported on the GPU side that can query the actual number of pixels produced by the draw calls in a certain interval and return the result to the CPU side. In detail, when rendering a three-dimensional model with the occlusion query method, the CPU first sends an occlusion query command to the GPU and then waits for the query result to return; if the query result shows that the number of rendered pixels is greater than 0, the three-dimensional model is rendered, otherwise it is not rendered.
Index Buffer Object (EBO): when the GPU draws a three-dimensional model, the vertex data of the model are transferred into a vertex buffer, while the index buffer arranges the vertex indices into another array triangle by triangle and transfers them to the GPU to await the rendering draw calls.
As for the vertex buffer, a series of vertex data is stored in a container, and this container is called the vertex buffer. In addition, in order to process vertex data using the GPU, the following steps are required: 1. create a vertex buffer; 2. store the vertex data into the vertex buffer.
The information stored in the index buffer is the vertex indices corresponding to the triangles of the three-dimensional model. Generally, every three index values represent one triangle, and the triangles may be stored out of order. Illustratively, each triangle corresponds to three unsigned integer data variables.
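As an illustration of the vertex buffer / index buffer relationship described above (the struct layout and values below are illustrative assumptions, not part of the patent), one cube face can be stored as 4 shared vertices and 2 triangles:

```cpp
#include <cstdint>
#include <vector>

// One quad (e.g. a single cube face): 4 shared vertices in the vertex buffer,
// and an index buffer of 2 triangles (3 unsigned integers per triangle).
struct Vertex { float x, y, z; };

int main() {
    std::vector<Vertex> vertexBuffer = {
        {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}   // the 4 corners of the face
    };
    std::vector<uint32_t> indexBuffer = {
        0, 1, 2,   // triangle 1
        0, 2, 3    // triangle 2 (reuses vertices 0 and 2)
    };
    // Triangle count = indexBuffer.size() / 3; triangles may be stored out of order.
    return 0;
}
```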
The following describes an implementation environment related to a rendering scheme of an object model provided in the present application.
Fig. 1 is a schematic diagram of an implementation environment related to a three-dimensional model rendering method provided by the present application. Referring to fig. 1, the implementation environment includes: terminal 110, server 120.
The terminal 110 is installed and operated with a client 111 supporting a virtual environment, and the client 111 may be a game application. When the terminal runs the client 111, a user interface of the client 111 is displayed on a screen of the terminal 110.
Terminal 110 is a terminal used by user 112. Optionally, the user 112 uses the terminal 110 to control a virtual character located in the virtual environment to perform an activity, and the virtual character may be referred to as a master virtual character of the user 112.
Optionally, the terminal 110 may refer to one of multiple terminals, and the embodiment of the present application is illustrated by the terminal 110. The device types of the terminal 110 may include: a smart phone, a tablet computer, an e-book reader, an MP3(Moving Picture Experts Group Audio Layer III, motion Picture Experts compressed standard Audio Layer 3) player, an MP4(Moving Picture Experts Group Audio Layer IV, motion Picture Experts compressed standard Audio Layer 4) player, a laptop portable computer, a desktop computer, a smart speaker, a smart watch, etc., but not limited thereto.
Only one terminal is shown in fig. 1, but there are a plurality of other terminals 130 that may access the server 120 in different embodiments. Optionally, there are one or more terminals 130 corresponding to the developer, a development and editing platform for supporting the client in the virtual environment is installed on the terminal 130, the developer can edit and update the client on the terminal 130, and transmit the updated installation package of the client to the server 120 through a wired or wireless network, and the terminal 110 can download the client installation package from the server 120 to update the client.
In addition, the server 120 may be an independent physical server, a server cluster or a distributed system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content Delivery Network), a big data and artificial intelligence platform, and the like. The terminal and the server 120 may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.
The server 120 is configured to provide a background service for a client supporting a three-dimensional virtual environment. Alternatively, the server 120 undertakes primary computational work and the terminal undertakes secondary computational work; alternatively, the server 120 undertakes the secondary computing work and the terminal undertakes the primary computing work; alternatively, the server 120 and the terminal perform cooperative computing by using a distributed computing architecture.
An application scenario of the three-dimensional model rendering method provided by the present application is introduced below.
Optionally, the three-dimensional model rendering method provided by the embodiment of the application provides support for a graphics rendering development process in a tool plug-in form of a rendering engine.
Optionally, the developer selects a model in the rendering engine and calls a tool blueprint to send a pre-calculation command to the tool plugin; the tool plugin reads the model data and culling settings, performs pre-calculation for a period of time, outputs the self-occlusion culling parameters, and stores them in the model file. The tool blueprint interface is shown in figure 2. Optionally, various parameters, including the number of latitude divisions, the latitude range, the number of longitude divisions, the baking resolution, the number of baking samples, and the baking method, may be set on the interface of the tool blueprint.
Optionally, the picture shown in fig. 3 is a frame buffer result of model baking under a certain viewing angle region. The picture is pieced together from the results of multiple sampling views, e.g., a visible triangle is rendered onto the picture at 16 views and the picture is read back. In addition, the color of each triangle on the sphere shown in fig. 3 is different, i.e., each triangle is assigned a unique color. Optionally, the color of each triangle is a solid color rendering result after encoding the corresponding triangle number.
Optionally, the self-occlusion culling parameters are synchronized into the rendering component when the model is placed during scene construction. Illustratively, a self-occlusion culling visualization interface in a rendering component is shown in FIG. 4. The interface may provide the view angle rotation offset, a debugging rotation angle, a query of the current view angle rotation angle, the self-occlusion culling baking parameters and baking results, and the like, which is not limited herein. During real-time rendering, the rendering component can cull the three-dimensional model on the CPU side according to the self-occlusion culling parameters and the current view angle. The culling results are submitted to the GPU and eventually rendered to a screen buffer. The whole rendering process involves no other human-computer interaction, so the degree of automation is high.
It should be noted that the three-dimensional model rendering scheme provided in the embodiment of the present application may be widely applied to a scene rendering process of a mobile device, a high-performance host, a personal computer, or the like, and can achieve the purpose of optimizing the rendering performance. In addition, the application scenarios described above are only used for illustrating the embodiments of the present application and are not limiting. In practical implementation, the technical scheme of the embodiment of the application can be flexibly applied according to actual needs.
The following embodiments are provided to describe the three-dimensional model rendering scheme in detail.
The three-dimensional model rendering scheme may be technically divided into an offline computing stage (referred to as an offline stage) shown in fig. 5 and an online rendering stage or a real-time rendering stage (referred to as a rendering stage) shown in fig. 6.
Optionally, referring to fig. 5, the flow of the offline computation phase includes, but is not limited to, the following steps: 51. acquiring user input data to obtain the model data for which culling is to be computed and the culling settings; 52. dividing the view angle range to obtain a plurality of view angle regions; 53. calculating the triangle visible set for each view angle region; 54. constructing a view-dependent index buffer; 55. compressing the view-dependent index buffer and storing it to the model file.
Optionally, referring to fig. 6, the flow of the online rendering phase includes, but is not limited to, the following steps: 61. reading a view angle related index buffer area of the model and a current rotation matrix of the model relative to a world coordinate system; 62. calculating a direction vector of the current visual angle to the center point of the model; 63. searching a sub-interval of the view angle related index buffer area according to the direction vector; 64. and submitting drawing commands to the GPU.
The detailed implementation of the above steps is described by the following examples.
Fig. 7 is a flowchart of a three-dimensional model rendering method according to an embodiment of the present disclosure. An execution main body of the method is computer equipment, and the computer equipment is taken as a terminal as an example, referring to fig. 7, a method flow provided by the embodiment of the application includes:
701. in the off-line calculation stage, acquiring user input data; and dividing the view angle range of the target three-dimensional model in a three-dimensional space according to the user input data to obtain at least two view angle areas.
Optionally, the target three-dimensional model is a static object model or a dynamic object model appearing in the game scene, and the embodiments of the present application are not limited herein.
Optionally, the user input data includes model data and culling settings. Wherein the model data includes, but is not limited to: which triangles are included on the model to be processed, the distribution of the triangles on the model to be processed, and the like; culling settings include, but are not limited to: the number of latitude divisions, latitude range, number of longitude divisions, baking resolution, etc., and the present application is not limited thereto.
The target three-dimensional model comprises vertex data and a triangle vertex index, and an axis alignment bounding box AABB of the target three-dimensional model can be obtained according to the upper and lower limits of coordinates of the vertex data of the target three-dimensional model, wherein the central point of the axis alignment bounding box is the central point C of the target three-dimensional model. Wherein the axis-aligned bounding box is defined as the smallest hexahedron containing the target three-dimensional model with edges parallel to the coordinate axes.
Optionally, the direction of the line connecting any view angle E in the three-dimensional space and the center point C of the target three-dimensional model is a view angle direction, and the view angle range in the three-dimensional space is represented as the unit sphere shown in fig. 8. In the embodiment of the application, the view angle range of the target three-dimensional model is divided into a plurality of view angle regions. Optionally, for convenience of indexing, the tool plugin can read the culling settings and divide the unit sphere into regions by longitude and latitude; the detailed steps are as follows.
701-1, determining an axis alignment bounding box of the target three-dimensional model according to vertex data of the target three-dimensional model; and determining the unit spherical surface by taking the central point of the axis alignment bounding box as the central point of the target three-dimensional model.
701-2, performing area division on the unit spherical surface according to the longitude and latitude to obtain at least two visual angle areas.
Optionally, the unit spherical surface is divided into regions according to the longitude and latitude to obtain at least two view angle regions, including but not limited to: dividing two areas of which the unit spherical surface latitude values are larger than a target threshold into two independent visual angle areas; and uniformly dividing the area of which the latitude value is not more than the target threshold into N visual angle areas according to longitude. Wherein N is a positive integer not less than 2, and the target threshold is 60 degrees of south latitude and 60 degrees of north latitude, which is not limited herein.
Illustratively, two high-latitude regions are separately divided into two viewing angle regions, corresponding to viewing angle region 1 and viewing angle region 10 in fig. 8; the medium latitude area and the low latitude area are uniformly divided into a plurality of areas in terms of longitude, for example, into viewing angle areas 2 to 9 shown in fig. 8. Where the numbers in fig. 8 are numbers for the relevant viewing angle regions.
The first point to be noted is that the latitude is the angle between the vertical line of the direction of gravity on the earth and the equatorial plane. The value is between 0 and 90 degrees. The latitude of a point located north of the equator is called north latitude; the latitude of a point located south of the equator is called south latitude. In addition, for convenience of study of the problem, the latitudes are divided into a low latitude, a medium latitude and a high latitude. Wherein, 0 degree to 30 degrees is low latitude, 30 degrees to 60 degrees is medium latitude, and 60 degrees to 90 degrees is high latitude.
The second point to be described is that the division number of the view angle area is related to the index buffer space overhead, the offline calculation amount, and the occlusion rejection rate, and the specific division rule can be adjusted according to the actual requirement in the engineering, which is not limited herein.
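As a minimal sketch of steps 701-1 and 701-2 (assuming the z axis as "up", a 60-degree latitude threshold, and helper names that are not part of the patent), computing the bounding-box center and mapping a unit direction to a view angle region could look like this:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Center of the axis-aligned bounding box (AABB) of the model's vertices.
Vec3 AabbCenter(const std::vector<Vec3>& verts) {
    Vec3 lo = verts[0], hi = verts[0];
    for (const Vec3& v : verts) {
        lo = {std::min(lo.x, v.x), std::min(lo.y, v.y), std::min(lo.z, v.z)};
        hi = {std::max(hi.x, v.x), std::max(hi.y, v.y), std::max(hi.z, v.z)};
    }
    return {(lo.x + hi.x) * 0.5f, (lo.y + hi.y) * 0.5f, (lo.z + hi.z) * 0.5f};
}

// Map a unit view direction to a region index: regions 0 and N+1 are the two
// polar caps above the latitude threshold; the remaining band is split into
// N equal longitude sectors (numbered 1..N).
int ViewRegionIndex(const Vec3& dir, int N, float latThresholdDeg = 60.0f) {
    const float pi = 3.14159265358979f;
    float lat = std::asin(dir.z) * 180.0f / pi;      // latitude in degrees (z assumed "up")
    if (lat >  latThresholdDeg) return 0;             // north cap
    if (lat < -latThresholdDeg) return N + 1;         // south cap
    float lon = std::atan2(dir.y, dir.x);             // longitude in [-pi, pi]
    int sector = static_cast<int>((lon + pi) / (2.0f * pi) * N);
    if (sector == N) sector = N - 1;                   // guard the upper boundary
    return 1 + sector;
}
```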
702. And in the off-line calculation stage, acquiring a triangular visible set of the target three-dimensional model under different visual angle areas.
Optionally, acquiring a visible set of triangles of the target three-dimensional model in different view angle regions includes, but is not limited to, the following steps: for any one view angle area, uniformly sampling at least two view angles in the view angle area; acquiring a triangular visible set of the target three-dimensional model under at least two visual angles; and determining the union of the triangle visible sets of at least two visual angles as the triangle visible set of the visual angle area.
In other words, this step requires querying the visible set of triangles of the target three-dimensional model under each view angle region. For example, for a view angle region A, a plurality of view angles E are uniformly sampled within the region A, the triangle visible set S(E) of the target three-dimensional model is queried from each view angle E, and the triangle visible set S(A) of the view angle region A is obtained by taking the union of the triangle visible sets of these view angles E.
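A minimal sketch of this per-region union (the container choice and function name are assumptions); the per-view visible sets themselves come from either of the two query methods described next:

```cpp
#include <unordered_set>
#include <vector>

// Triangle visible set of one view angle region = union of the visible sets
// queried at the view angles uniformly sampled inside that region.
std::unordered_set<int> RegionVisibleSet(
        const std::vector<std::unordered_set<int>>& perViewVisibleSets) {
    std::unordered_set<int> regionSet;
    for (const auto& viewSet : perViewVisibleSets)
        regionSet.insert(viewSet.begin(), viewSet.end());
    return regionSet;
}
```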
Optionally, querying the visible set of triangles of the target three-dimensional model at a single view angle may be accomplished in either of two ways: render readback or occlusion query.
Render readback
Aiming at a rendering read-back mode, acquiring a triangular visible set of a target three-dimensional model under a single visual angle, wherein the method comprises the following steps of:
7021. for any one view angle, the triangles constituting the target three-dimensional model are numbered in order.
For example, assuming that the number of triangles forming the target three-dimensional model is 800 in total, the 800 triangles may be numbered sequentially according to the numbers 1 to 800, and the present application is not limited herein.
7022. Unique colors are assigned to each triangle that constitutes the target three-dimensional model in order of numbering.
Optionally, the color of each triangle is different. Taking RGB (Red, Green, Blue) color as an example, a unique color can be assigned to each triangle by performing an encoding operation on each triangle number. Illustratively, the encoding operation may take the remainder of the triangle number modulo 255, and the application is not limited herein.
7023. And rendering the target three-dimensional model to a frame buffer under the view angle.
Optionally, this step renders the target three-dimensional model to the frame buffer at the perspective using a GPU rendering pipeline. The frame buffer area is also called a color buffer area, and color information of pixels of the object model in the game scene is written into the frame buffer area and then rendered on a screen for display.
7024. Reading back and analyzing the frame buffer result; in response to the analyzed color corresponding to the triangle number, determining the triangle indicated by the triangle number as a visible triangle at the view angle; wherein, all visible triangles at the current view angle constitute a visible set of triangles at the view angle.
The frame buffer result takes the form of a picture file as shown in fig. 3. In this step, the frame buffer result is read back and each color is decoded into a triangle number; the triangle corresponding to that number is a visible triangle at the current view angle. In addition, since reading back the frame buffer result is slow, a smaller buffer resolution is usually set, chosen so as not to affect the correctness of the occlusion culling.
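The encode/decode step of the render-readback method can be sketched as follows (the 24-bit packing is one possible encoding chosen for this sketch rather than the modulo-255 example mentioned above, and the clear-color convention is likewise an assumption):

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// Pack a triangle number into a unique 24-bit RGB color and decode it back.
// Rendering the model with these flat colors and reading the frame buffer back
// tells us which triangle produced each surviving pixel.
struct Rgb { uint8_t r, g, b; };

Rgb EncodeTriangleId(uint32_t id) {
    return { uint8_t(id & 0xFF), uint8_t((id >> 8) & 0xFF), uint8_t((id >> 16) & 0xFF) };
}

uint32_t DecodeTriangleId(Rgb c) {
    return uint32_t(c.r) | (uint32_t(c.g) << 8) | (uint32_t(c.b) << 16);
}

// After rendering at one view angle, collect the visible set from the read-back pixels.
std::unordered_set<uint32_t> VisibleSetFromReadback(const std::vector<Rgb>& pixels) {
    std::unordered_set<uint32_t> visible;
    for (const Rgb& p : pixels) {
        if (p.r == 0 && p.g == 0 && p.b == 0) continue;  // clear color = no triangle (assumed)
        visible.insert(DecodeTriangleId(p));
    }
    return visible;
}
```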
Occlusion querying
Fig. 9 shows an overall flow of the occlusion query manner. Aiming at the occlusion query mode, a triangular visible set of a target three-dimensional model under a single visual angle is obtained, and the method comprises the following steps of:
7025. the respective triangles constituting the target three-dimensional model are set to be invisible.
This step is an initialization step, i.e. all triangles are initialized to be invisible.
7026. Setting the depth test as a near principle, and starting depth writing; under any visual angle, obtaining the depth information of each pixel of the target three-dimensional model by drawing the target three-dimensional model; the depth information is written to a depth buffer.
In the embodiments of the present application, the principle of proximity refers to: if pixel coincidence occurs at the same pixel position (that is, before occlusion culling is performed, multiple fragments may need to be rendered at the same pixel position), then, in order to avoid overdraw, the smallest depth value at that pixel position is stored in the depth buffer; a smaller depth value indicates a closer distance to the virtual camera in three-dimensional space.
It should be noted that only in the case of the on-depth writing, the depth value can be written into the depth buffer. Wherein a Depth Buffer (DB) corresponds to a Color Buffer (CB), the Color Buffer storing Color information of pixels, and the Depth Buffer storing Depth information of pixels.
7027. Setting the depth test to the equal-value principle, closing depth writing, and no longer writing depth information.
7028. For any one triangle forming the target three-dimensional model, in response to the triangle passing a depth test requiring equal values and the triangle having a drawing pixel quantity greater than zero, setting the triangle from invisible to visible; wherein all visible triangles at the current view constitute a visible set of triangles at the view.
In the embodiment of the present application, the equal value means that the current depth value is consistent with the corresponding depth value stored in the depth buffer. After the depth test is modified to be equal in value, each triangle of the target three-dimensional model is drawn respectively, and if the pixels can be output in the screen buffer area through the depth test which requires to be equal in value, the triangle can be seen from the current visual angle. In other words, whether each triangle forming the target three-dimensional model is visible at the current view angle can be known by using the occlusion query technology of the GPU, and then a triangle visible set at the current view angle is obtained.
In addition, because the occlusion query mode does not require reading back the frame buffer, the main computational expense is the large number of draw calls and queries; therefore, a high-resolution buffer can be used to improve the calculation precision.
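A sketch of this two-pass occlusion query flow, written against desktop OpenGL as one possible API (the API choice, the assumption that the model's vertex array and index buffer are already bound, and the one-query-per-triangle loop are all assumptions, not the patent's specific implementation):

```cpp
#include <GL/glew.h>
#include <vector>

std::vector<bool> QueryVisibleTriangles(GLsizei triangleCount) {
    std::vector<bool> visible(triangleCount, false);   // initialize all triangles as invisible

    // Pass 1: depth pre-pass, nearest depth wins, depth writes enabled.
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDrawElements(GL_TRIANGLES, triangleCount * 3, GL_UNSIGNED_INT, nullptr);

    // Pass 2: equal-value depth test, depth writes disabled; query each triangle.
    glDepthFunc(GL_EQUAL);
    glDepthMask(GL_FALSE);
    GLuint query;
    glGenQueries(1, &query);
    for (GLsizei t = 0; t < triangleCount; ++t) {
        glBeginQuery(GL_SAMPLES_PASSED, query);
        const void* offset = reinterpret_cast<const void*>(t * 3 * sizeof(GLuint));
        glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, offset);
        glEndQuery(GL_SAMPLES_PASSED);
        GLuint samples = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples); // blocks until the result is ready
        visible[t] = (samples > 0);                            // drawn pixel count > 0 => visible
    }
    glDeleteQueries(1, &query);
    return visible;
}
```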
703. In the off-line calculation stage, an index buffer area is constructed according to the triangular visible sets in different visual angle areas; the index buffer is used for storing the vertex indexes of all triangles in the triangle visible set.
And the information stored in the index buffer area of the target three-dimensional model is a vertex index. Optionally, the vertex indices of each triangle correspond to three unsigned integer data variables.
Optionally, when a drawing instruction is submitted to the GPU, a certain sub-interval of the index buffer is specified, and the upper and lower limits of model drawing are determined by the offset and the drawing number of triangles. Illustratively, in the default model rendering pipeline, the model rendering is typically the entire index buffer, i.e., the offset is 0 and the number of triangle renderings is the buffer size, as shown in FIG. 10.
In the foregoing step 702, the triangle visible sets Set(A_k) of a plurality of (for example, n) viewing angle regions A_k have been obtained. These triangle visible sets are arranged sequentially in the order Set(A_1), Set(A_2), ..., Set(A_n) to obtain the view-angle-dependent index buffer, referred to simply as the index buffer. Put another way, constructing the index buffer according to the visible sets of triangles in different view angle regions includes the following step 7031.
7031. Numbering the divided visual angle areas in sequence; the vertex indices of the visible set of triangles for each view region are arranged in order of number, resulting in a view-dependent index buffer as shown in fig. 11.
This simple construction results in excessive space being occupied by the index buffer. In fact, a large number of repeated triangles exist across different viewing angle regions. For example, fig. 12 shows a view-dependent index buffer comprising four subintervals, where each visible set of triangles determined in step 702 corresponds to one subinterval of the view-dependent index buffer; the triangle visible sets corresponding to the four subintervals contain a large number of repetitions, and the white elements in the figure are the elements repeated with the adjacent subintervals.
In order to avoid excessive occupation of space, after the index buffer is initially constructed, the embodiment of the application may further compress the index buffer. That is, the index buffer is constructed according to the visible set of triangles under different view angle regions, and the following step 7032 is further included.
7032. Numbering the divided visual angle areas in sequence; sequentially arranging the vertex indexes of the triangular visible sets of each visual angle area according to the numbering sequence to obtain an index buffer area; and carrying out triangle vertex index reordering on subintervals included in the index buffer area, and merging vertex indexes of triangles which repeatedly appear in adjacent subintervals to obtain the compressed index buffer area.
Utilizing the fact that the triangle vertex indices in the index buffer may be stored in any order, the embodiment of the present application performs triangle-based reordering on each pair of subintervals in fig. 12, so that triangles appearing repeatedly in each pair of subintervals become adjacent to each other, which yields fig. 13. For example, in fig. 13, the repeated elements a, d, h, i in subinterval 1 and in subinterval 2 are ordered adjacently, and the repeated elements a, t, w, x, z in subinterval 3 and in subinterval 4 are ordered adjacently. Thereafter, the result shown in fig. 14 is obtained by merging the adjacent subintervals, i.e., merging elements a, d, h, i of subinterval 1 with the repeated elements a, d, h, i of subinterval 2, and merging elements a, t, w, x, z of subinterval 3 with the repeated elements a, t, w, x, z of subinterval 4. In a similar way, the remaining repeated elements of subinterval 2 and subinterval 3 are then merged, resulting in the final compression result shown in fig. 15.
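The following is a minimal sketch of the pairwise merging described above for two adjacent subintervals (container types, names, and the triangle-number representation are assumptions; merging the remaining elements across pairs, as in fig. 15, repeats the same idea):

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

struct SubInterval { size_t offset; size_t triangleCount; };

// Greedy pairwise compression of two consecutive subintervals A and B: triangles
// shared by A and B are reordered to the tail of A / head of B so they are stored
// only once. 'out' receives triangle numbers; expanding each number to its 3
// vertex indices afterwards yields the actual index buffer.
void AppendCompressedPair(const std::vector<uint32_t>& A,
                          const std::vector<uint32_t>& B,
                          std::vector<uint32_t>& out,
                          SubInterval& rangeA, SubInterval& rangeB) {
    std::unordered_set<uint32_t> inB(B.begin(), B.end());

    rangeA.offset = out.size();
    for (uint32_t t : A) if (!inB.count(t)) out.push_back(t);   // unique to A
    size_t sharedStart = out.size();
    for (uint32_t t : A) if (inB.count(t))  out.push_back(t);   // shared block, stored once
    rangeA.triangleCount = out.size() - rangeA.offset;

    rangeB.offset = sharedStart;                                 // B starts at the shared block
    std::unordered_set<uint32_t> inA(A.begin(), A.end());
    for (uint32_t t : B) if (!inA.count(t)) out.push_back(t);   // unique to B
    rangeB.triangleCount = out.size() - rangeB.offset;
}
```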
The first point to be noted is that, in the off-line computing stage, the embodiment of the present application stores the compressed index buffer area into the model file.
The second point to be noted is that, assuming that the average repetition rate between adjacent subintervals is α, the compression ratio x between two consecutive subintervals can be expressed as the following formula:
x = (2 - α) / 2 = 1 - α/2
next, the remaining adjacent subintervals are compressed; at this point the adjacent repetition rate is α(1 - α), and the final compression ratio x can be expressed as the following equation:
x = 1 - α/2 - α(1 - α)/2 = (1 + (1 - α)²) / 2
optionally, the value of α is in the range of 0 to 1, and the curve y = x(α) is shown in fig. 16. As can be seen from fig. 16, when α is 50%, the compression ratio x is already 62.5%. Since the adjacent repetition rate increases as the number of view angle region divisions grows, the size of the index buffer does not increase linearly with the number of divided view angle regions. In addition, the number of view angle region divisions can be flexibly adjusted according to the actual requirements of the product, and the application is not limited herein.
A third point to be described is that the foregoing steps 701 to 703 are all performed at the CPU end, that is, the present embodiment provides fine-grained occlusion culling that can be performed at the CPU end, so that no additional performance consumption is caused while the fineness of culling is ensured.
704. In the real-time rendering stage, determining a target subinterval in an index buffer area according to a current visual angle; and rendering the target three-dimensional model according to the triangular visible set corresponding to the target subinterval.
When a draw call is issued in the online rendering stage, the target sub-interval of the index buffer is looked up according to the current view angle, and the rendering can be submitted to the GPU using the offset of the target sub-interval within the index buffer and the triangle drawing quantity (the sub-interval length).
Optionally, in the real-time rendering stage, determining the target sub-interval in the index buffer according to the current view angle, and rendering the target three-dimensional model according to the triangle visible set corresponding to the target sub-interval, includes but is not limited to the following steps (also illustrated by the sketch after step 7045):
7041. In the real-time rendering stage, reading the compressed index buffer from the model file.
7042. Acquiring a rotation matrix of the target three-dimensional model relative to a world coordinate system; and determining a direction vector of the current visual angle relative to the central point of the target three-dimensional model according to the rotation matrix, the current visual angle and the central point position of the target three-dimensional model and the following formula.
D = norm(R⁻¹ · (P - C))
Wherein, D refers to the obtained direction vector, P refers to the current view angle, C refers to the central point of the target three-dimensional model, R refers to the rotation matrix, and norm() refers to computing the norm, i.e. normalizing the direction vector to unit length.
7043. Determining a target visual angle area where the direction vector is located; and determining the subinterval corresponding to the target view angle area in the index buffer area as a target subinterval.
7044. Acquiring the offset of the target subinterval in the index buffer; and acquiring the drawing quantity of the triangles corresponding to the target subintervals.
7045. Submitting a drawing instruction to a graphics processing unit (GPU), wherein the drawing instruction comprises the acquired offset and the triangle drawing quantity; the GPU then determines the triangle visible set according to the offset and the triangle drawing quantity, thereby finishing the rendering.
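Steps 7041 to 7045 can be pictured with the following C++ sketch. It is illustrative only: the vector and matrix types, the RegionOfDirection helper and the DrawIndexed callback are assumptions standing in for whatever math library and graphics API the product actually uses, and the use of the inverse (transposed) rotation to express the view direction in model space is one reasonable reading of the formula in step 7042.

#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };                   // rotation of the model relative to world space

struct SubIntervalRange { uint32_t firstIndex; uint32_t indexCount; };   // offset + length in the index buffer

Vec3 MulInverseRotation(const Mat3& r, const Vec3& v) {   // R^T * v equals R^-1 * v for a pure rotation
    return { r.m[0][0]*v.x + r.m[1][0]*v.y + r.m[2][0]*v.z,
             r.m[0][1]*v.x + r.m[1][1]*v.y + r.m[2][1]*v.z,
             r.m[0][2]*v.x + r.m[1][2]*v.y + r.m[2][2]*v.z };
}

Vec3 Normalize(const Vec3& v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Steps 7042 to 7045: pick the sub-interval for the current view and submit a single draw.
void DrawWithViewDependentIndexBuffer(
        const Vec3& viewPos,                          // P: current view position (world space)
        const Vec3& modelCenter,                      // C: model centre point (world space)
        const Mat3& modelRotation,                    // R: model rotation relative to world space
        const std::vector<SubIntervalRange>& ranges,  // one range per view angle region
        int (*RegionOfDirection)(const Vec3&),        // maps a model-space direction to a region number
        void (*DrawIndexed)(uint32_t first, uint32_t count)) {  // stand-in for the GPU draw call
    Vec3 worldDir = { viewPos.x - modelCenter.x, viewPos.y - modelCenter.y, viewPos.z - modelCenter.z };
    Vec3 dir = Normalize(MulInverseRotation(modelRotation, worldDir));   // step 7042: direction vector
    int region = RegionOfDirection(dir);              // step 7043: target view angle region
    const SubIntervalRange& r = ranges[region];       // step 7044: offset and triangle drawing quantity
    DrawIndexed(r.firstIndex, r.indexCount);          // step 7045: one drawing instruction to the GPU
}

Only one draw call per model is needed at runtime: the offset and count select the sub-interval, so no index data has to be rebuilt when the view angle changes.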
The method provided by the embodiment of the application has at least the following beneficial effects:
In the offline calculation stage, the view angle range of the three-dimensional model is divided into a plurality of view angle regions, the triangle visible sets of the three-dimensional model over that view angle range are then queried, and a view-dependent index buffer is constructed based on the triangle visible sets of the different view angle regions. In the real-time rendering stage, the corresponding sub-interval is looked up in the view-dependent index buffer according to the current view angle, and a DrawCall is formed and submitted to the GPU for rendering. Because the view-dependent index buffer is constructed in the offline calculation stage, the rendering can be completed in the real-time rendering stage simply by looking up the sub-interval of the index buffer corresponding to the current view angle. In this scheme, fine-grained occlusion culling is achieved in the offline calculation stage, and submitting only the offset and triangle drawing quantity of the triangle visible set corresponding to the sub-interval to the GPU in the real-time rendering stage reduces the number of vertices and triangles processed during model rendering, thereby reducing GPU overhead and improving rendering performance. In addition, while the self-occlusion culling of the model is completed, the back faces of the model can also be culled in advance on the CPU side, further reducing GPU consumption.
To sum up, this application guarantees the fineness of occlusion culling without introducing extra performance consumption, and can optimize the rendering performance of graphics products. The scheme has the advantages of low cost, controllable rendering quality and strong flexibility, and can be applied to related products on various platforms.
In addition, the application provides a rendering optimization scheme that obtains a high culling rate with extremely low runtime consumption and acceptable extra space, is compatible with most existing graphics production pipelines, scales to different runtime platforms, and can be applied to different graphics rendering products in the industry.
In addition, this application addresses problems that existing schemes in the industry leave unsolved: in the related art, schemes that perform occlusion culling in the CPU stage all use the whole model as the granularity, so occlusion culling cannot be performed at a finer level; schemes that perform fine-grained occlusion culling on the model do so in the GPU stage, where the pixel shader consumption remains large, leading to inefficient or even negative optimization in scenes where the vertex count is the bottleneck. This application provides fine-grained occlusion culling that can be executed on the CPU side, so that the fineness of culling is guaranteed without bringing extra performance consumption.
In addition, the application optimizes a problem that existing schemes in the industry cannot: by default, model back-face culling occurs after vertex shading in the GPU stage, so the vertex computations spent on vertices that fail the facing test are wasted, and the industry currently has no general optimization for this. With the method and device of this application, while self-occlusion culling is handled, most back-face culling can also be performed in advance on the CPU, which reduces the number of invalid primitives submitted to the GPU and optimizes back-face culling performance.
Illustratively, the theoretical optimization effect of the present application was tested in a UE4 engine experimental environment. For an original building model with 11832 triangles, the region from 30 degrees south latitude to 30 degrees north latitude was divided into 4 view angle regions, and the view-dependent index buffer was computed with the occlusion query method at a resolution of 512x512; the pre-calculation time was 35 seconds. The resulting view-dependent index buffer requires 86.70% additional space. The runtime rendering results are shown in fig. 17.
After occlusion culling, the number of triangles submitted to the GPU for rendering is between 1990 and 2690, a culling rate of 77.3% to 83.2%. The drawing time is also reduced from 83.30 microseconds to 61.92 microseconds, an improvement of 25.67% in drawing efficiency. Moreover, as the view angle zooms, the pixel-shader cost of drawing the model decreases while the vertex-shader cost stays unchanged, so the efficiency gain brought by the view-dependent index buffer increases accordingly. Tested on multiple sets of models, the view-dependent index buffer improves performance by between 22% and 32%.
It should be noted that, besides the sphere-based view angle region division method, other view angle region division methods may also be adopted. Additionally, the above aspects may also be implemented in engines, platforms, or products other than UE4. In addition, the scheme can be applied to similar model occlusion culling scenarios other than games.
Fig. 18 is a schematic structural diagram of a three-dimensional model rendering apparatus according to an embodiment of the present application. Referring to fig. 18, the apparatus includes:
a dividing module 1801 configured to obtain user input data in an offline computing stage; according to the user input data, dividing a view angle range aiming at a target three-dimensional model in a three-dimensional space to obtain at least two view angle areas;
an obtaining module 1802 configured to, in an offline calculation stage, obtain a visible set of triangles of the target three-dimensional model in different view angle areas;
a constructing module 1803 configured to, in an offline computing stage, construct an index buffer according to the triangle visible sets of different view angle regions; wherein the index buffer is used for storing the vertex index of each triangle in the triangle visible set;
a rendering module 1804 configured to determine a target subinterval in the index buffer according to a current perspective during a real-time rendering phase; and rendering the target three-dimensional model according to the triangular visible set corresponding to the target subinterval.
In the offline calculation stage, the view angle range of the three-dimensional model is divided into a plurality of view angle regions, the triangle visible sets of the three-dimensional model over that view angle range are then queried, and a view-dependent index buffer is constructed based on the triangle visible sets of the different view angle regions. In the real-time rendering stage, the corresponding sub-interval is looked up in the view-dependent index buffer according to the current view angle, and a DrawCall is formed and submitted to the GPU for rendering. Because the view-dependent index buffer is constructed in the offline calculation stage, the rendering can be completed in the real-time rendering stage by looking up the sub-interval of the index buffer corresponding to the current view angle. In this scheme, fine-grained occlusion culling is achieved in the offline calculation stage, the number of vertices and triangles processed during model rendering can be reduced in the real-time rendering stage, GPU overhead is lowered, and rendering performance is improved. To sum up, this application guarantees the fineness of occlusion culling without introducing extra performance consumption, and can optimize the rendering performance of graphics products.
In some embodiments, the range of viewing angles is spherical; the partitioning module configured to:
determining an axis alignment bounding box of the target three-dimensional model according to the vertex data of the target three-dimensional model; determining a unit spherical surface by taking the central point of the axis-aligned bounding box as the central point of the target three-dimensional model; and carrying out area division on the unit spherical surface according to the longitude and latitude to obtain the at least two visual angle areas.
In some embodiments, the acquisition module is configured to:
for any view angle area, uniformly sampling at least two view angles in the view angle area;
acquiring a triangular visible set of the target three-dimensional model under the at least two visual angles;
and determining the union of the triangular visible sets of the at least two visual angles as the triangular visible set of the visual angle area.
In some embodiments, the acquisition module is configured to:
numbering all triangles forming the target three-dimensional model in sequence for any visual angle;
assigning unique colors to each triangle forming the target three-dimensional model according to the numbering sequence;
rendering the target three-dimensional model to a frame buffer at the view angle;
reading back and analyzing the frame buffer result; in response to the analyzed color corresponding to the triangle number, determining the triangle indicated by the triangle number as a visible triangle at the view angle;
wherein all visible triangles under the view angle constitute a visible set of triangles for the view angle.
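The color-numbering idea above can be sketched as follows. This is illustrative C++ only: how the model is actually drawn with one flat, unique colour per triangle and how the frame buffer is read back depend on the rendering API, and are assumed to be handled elsewhere.

#include <cstdint>
#include <unordered_set>
#include <vector>

// Encode a triangle number as a unique 24-bit colour and decode it again.
// Assumes fewer than 2^24 triangles; colour 0 is reserved for the background.
uint32_t TriangleToColor(uint32_t triangleNumber) { return triangleNumber + 1; }
uint32_t ColorToTriangle(uint32_t rgb)            { return rgb - 1; }

// After the model has been rendered with each triangle in its unique colour,
// scan the read-back 8-bit RGBA pixels and collect the numbers of the visible triangles.
std::unordered_set<uint32_t> VisibleTriangles(const std::vector<uint8_t>& rgba,
                                              uint32_t width, uint32_t height) {
    std::unordered_set<uint32_t> visible;
    for (uint32_t i = 0; i < width * height; ++i) {
        uint32_t rgb = rgba[4 * i] | (rgba[4 * i + 1] << 8) | (rgba[4 * i + 2] << 16);
        if (rgb != 0) visible.insert(ColorToTriangle(rgb));   // a colour maps back to a triangle number
    }
    return visible;
}

With 8 bits per colour channel, up to 16777215 triangles can be distinguished in a single pass, which is far more than a typical single model contains.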
In some embodiments, the acquisition module is configured to:
setting each triangle constituting the target three-dimensional model to be invisible;
setting the depth test as a near principle, and starting depth writing; under any visual angle, obtaining depth information of each pixel of the target three-dimensional model by drawing the target three-dimensional model; writing the depth information into a depth buffer;
setting the depth test to be in the principle of equal value, and closing the depth writing; for any one of the triangles forming the target three-dimensional model, in response to the triangle passing a depth test requiring equal values and the triangle having a drawing pixel quantity greater than zero, setting the triangle from invisible to visible;
wherein all visible triangles under the view angle constitute a visible set of triangles for the view angle.
In some embodiments, the proximity principle refers to: in response to the occurrence of pixel coincidence at the same pixel position, storing the depth value with the minimum value of the pixel position in the depth buffer area; wherein the smaller the depth value, the closer the distance to the virtual camera in the three-dimensional space;
the values being equal means that the current depth value is consistent with the corresponding depth value stored in the depth buffer.
In some embodiments, the construction module is configured to:
numbering the at least two view angle regions in sequence;
and sequentially arranging the vertex indexes of the triangular visible sets of all the visual angle areas according to the numbering sequence to obtain the index buffer area.
In some embodiments, the construction module is configured to:
numbering the at least two view angle regions in sequence;
arranging the vertex indexes of the triangular visible sets of each visual angle area in sequence according to the numbering sequence to obtain the index buffer area; wherein a visible set of triangles corresponds to a sub-interval of the index buffer;
and reordering the vertex indexes of the triangles of the subintervals, and merging the vertex indexes of the triangles which repeatedly appear in the adjacent subintervals to obtain the compressed index buffer area.
In some embodiments, the apparatus further comprises:
a storage module configured to store the index buffer into a model file during an offline computation phase.
In some embodiments, the rendering module is configured to:
reading the index buffer area from the model file in a real-time rendering stage;
acquiring a rotation matrix of the target three-dimensional model relative to a world coordinate system;
determining a direction vector of the current visual angle relative to the central point of the target three-dimensional model according to the rotation matrix, the current visual angle and the central point position of the target three-dimensional model;
determining a target visual angle area where the direction vector is located;
and determining a subinterval corresponding to the target view angle area in the index buffer area as the target subinterval.
In some embodiments, the rendering module is configured to:
in a real-time rendering stage, acquiring the offset of the target subinterval in the index buffer area; acquiring the drawing quantity of triangles corresponding to the target subintervals;
and submitting a drawing instruction to a graphic processing unit, wherein the drawing instruction comprises the offset and the triangle drawing quantity, and the graphic processing unit determines the triangle visible set according to the offset and the triangle drawing quantity to finish rendering.
In some embodiments, the partitioning module is configured to:
dividing two areas of which the latitude values on the unit spherical surface are larger than a target threshold value into two independent visual angle areas;
and uniformly dividing the area with the latitude value not greater than the target threshold into N viewing angle areas according to longitude, wherein N is a positive integer not less than 2.
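A minimal sketch of this latitude/longitude mapping, assuming a Y-up model-space direction vector and an illustrative numbering in which regions 0 and 1 are the two polar caps:

#include <cmath>

// Map a unit direction vector (model space, Y up) to a view angle region number.
// Regions 0 and 1 are the areas with latitude above +T / below -T degrees;
// regions 2 .. N+1 split the remaining band uniformly by longitude.
int RegionOfDirection(float x, float y, float z,
                      float latitudeThresholdDeg /* e.g. 30 */, int N /* e.g. 4 */) {
    const float kPi = 3.14159265358979f;
    float latitude  = std::asin(y) * 180.0f / kPi;       // -90 .. +90 degrees
    float longitude = std::atan2(z, x) * 180.0f / kPi;   // -180 .. +180 degrees

    if (latitude >  latitudeThresholdDeg) return 0;       // north polar cap
    if (latitude < -latitudeThresholdDeg) return 1;       // south polar cap

    int slice = static_cast<int>((longitude + 180.0f) / (360.0f / N));
    if (slice >= N) slice = N - 1;                         // guard against longitude == +180
    return 2 + slice;                                      // one of the N equatorial regions
}

With latitudeThresholdDeg = 30 and N = 4, the band between 30 degrees south and 30 degrees north is split into 4 regions, matching the configuration of the experiment described above.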
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described in detail herein.
It should be noted that: in the three-dimensional model rendering device provided in the above embodiment, when rendering a three-dimensional model, only the division of the functional modules is exemplified, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the three-dimensional model rendering device provided in the above embodiment and the three-dimensional model rendering method embodiment belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiment, and are not described herein again.
FIG. 19 is a block diagram illustrating an architecture of a computer device 1900 provided in an exemplary embodiment of the application. The computer device 1900 may be a portable mobile terminal, such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer or a desktop computer. Computer device 1900 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so on.
Generally, computer device 1900 includes: a processor 1901 and a memory 1902.
The processor 1901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1901 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in a wake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1901 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1902 may include one or more computer-readable storage media, which may be non-transitory. The memory 1902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1902 is used to store at least one program code for execution by the processor 1901 to implement the three-dimensional model rendering method provided by the method embodiments herein.
In some embodiments, computer device 1900 may also optionally include: a peripheral device interface 1903 and at least one peripheral device. The processor 1901, memory 1902, and peripherals interface 1903 may be coupled via buses or signal lines. Various peripheral devices may be connected to peripheral interface 1903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1904, a display screen 1905, a camera assembly 1906, an audio circuit 1907, a positioning assembly 1908, and a power supply 1909.
The peripheral interface 1903 may be used to connect at least one peripheral associated with an I/O (Input/Output) to the processor 1901 and the memory 1902. In some embodiments, the processor 1901, memory 1902, and peripherals interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1901, the memory 1902, and the peripheral interface 1903 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The Radio Frequency circuit 1904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1904 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 1904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1904 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1904 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1905 is used for displaying a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1905 is a touch display screen, the display screen 1905 also has the ability to capture touch signals on or above the surface of the display screen 1905. The touch signal may be input to the processor 1901 as a control signal for processing. At this point, the display 1905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1905 may be one, disposed on a front panel of computer device 1900; in other embodiments, display 1905 may be at least two, each disposed on a different surface of computer device 1900 or in a folded design; in other embodiments, display 1905 may be a flexible display disposed on a curved surface or on a folding surface of computer device 1900. Even more, the display 1905 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1905 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1906 is used to capture images or video. Optionally, camera assembly 1906 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of a terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, the main camera and the wide-angle camera are fused to realize panoramic shooting and a VR (Virtual Reality) shooting function or other fusion shooting functions. In some embodiments, camera head assembly 1906 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1901 for processing or inputting the electric signals into the radio frequency circuit 1904 to achieve voice communication. The microphones may be multiple and placed at different locations on the computer device 1900 for stereo sound capture or noise reduction purposes. The microphone may also be an array microphone or an omni-directional acquisition microphone. The speaker is used to convert electrical signals from the processor 1901 or the radio frequency circuitry 1904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1907 may also include a headphone jack.
The positioning component 1908 is used to locate the current geographic location of the computer device 1900 for navigation or LBS (Location Based Service). The positioning component 1908 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
Power supply 1909 is used to supply power to the various components in computer device 1900. Power source 1909 can be alternating current, direct current, disposable battery, or rechargeable battery. When power supply 1909 includes a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 1900 also includes one or more sensors 1910. The one or more sensors 1910 include, but are not limited to: acceleration sensor 1911, gyro sensor 1912, pressure sensor 1913, fingerprint sensor 1914, optical sensor 1915, and proximity sensor 1916.
The acceleration sensor 1911 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the computer apparatus 1900. For example, the acceleration sensor 1911 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1901 may control the display screen 1905 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1911. The acceleration sensor 1911 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1912 may detect a body direction and a rotation angle of the computer device 1900, and the gyro sensor 1912 may acquire a 3D motion of the user on the computer device 1900 in cooperation with the acceleration sensor 1911. From the data collected by the gyro sensor 1912, the processor 1901 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1913 may be disposed on a side bezel of computer device 1900 and/or on a lower layer of display 1905. When the pressure sensor 1913 is disposed on the side frame of the computer device 1900, the user can detect a holding signal of the computer device 1900, and the processor 1901 can perform right-left hand recognition or quick operation based on the holding signal collected by the pressure sensor 1913. When the pressure sensor 1913 is disposed at a lower layer of the display 1905, the processor 1901 controls the operability control on the UI interface according to the pressure operation of the user on the display 1905. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1914 is configured to collect a fingerprint of the user, and the processor 1901 identifies the user according to the fingerprint collected by the fingerprint sensor 1914, or the fingerprint sensor 1914 identifies the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 1901 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying for, and changing settings, etc. Fingerprint sensor 1914 may be disposed on the front, back, or side of computer device 1900. When a physical button or vendor Logo is provided on computer device 1900, fingerprint sensor 1914 may be integrated with the physical button or vendor Logo.
The optical sensor 1915 is used to collect the ambient light intensity. In one embodiment, the processor 1901 may control the display brightness of the display screen 1905 based on the ambient light intensity collected by the optical sensor 1915. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1905 is increased; when the ambient light intensity is low, the display brightness of the display screen 1905 is adjusted down. In another embodiment, the processor 1901 may also dynamically adjust the shooting parameters of the camera assembly 1906 according to the intensity of the ambient light collected by the optical sensor 1915.
Proximity sensor 1916, also known as a distance sensor, is typically disposed on the front panel of computer device 1900. Proximity sensor 1916 is used to capture the distance between the user and the front of computer device 1900. In one embodiment, the display 1905 is controlled by the processor 1901 to switch from the screen-on state to the screen-off state when the proximity sensor 1916 detects that the distance between the user and the front surface of the computer device 1900 gradually decreases; when the proximity sensor 1916 detects that the distance between the user and the front of the computer device 1900 gradually increases, the display 1905 is controlled by the processor 1901 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the architecture illustrated in FIG. 19 does not constitute a limitation of computer device 1900, and may include more or fewer components than those illustrated, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, a computer readable storage medium, such as a memory including program code, executable by a processor in a computer device to perform the three-dimensional model rendering method in the above embodiments is also provided. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or the computer program comprising computer program code, the computer program code being stored in a computer readable storage medium, the computer program code being read by a processor of a computer device from the computer readable storage medium, the processor executing the computer program code, such that the computer device performs the three-dimensional model rendering method described above.
In some embodiments, the computer program according to the embodiments of the present application may be deployed to be executed on one computer device or on multiple computer devices located at one site, or may be executed on multiple computer devices distributed at multiple sites and interconnected by a communication network, and the multiple computer devices distributed at the multiple sites and interconnected by the communication network may constitute a block chain system.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk.
The above description is intended only to illustrate the alternative embodiments of the present application, and should not be construed as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (13)

1. A method of rendering a three-dimensional model, the method comprising:
in the off-line calculation stage, acquiring user input data; according to the user input data, dividing a view angle range aiming at a target three-dimensional model in a three-dimensional space to obtain at least two view angle areas;
in an off-line calculation stage, for any one of the at least two view angle areas, uniformly sampling at least two view angles in the view angle area;
numbering all triangles forming the target three-dimensional model in sequence for any one of the at least two visual angles; assigning unique colors to each triangle forming the target three-dimensional model according to the numbering sequence; rendering the target three-dimensional model to a frame buffer under the view angle; reading back and analyzing the frame buffer result; in response to the analyzed color corresponding to the triangle number, determining the triangle indicated by the triangle number as a visible triangle at the viewing angle; wherein all visible triangles at the perspective constitute a visible set of triangles of the target three-dimensional model at the perspective;
determining a union set of the triangle visible sets of the at least two visual angles as a triangle visible set of the target three-dimensional model in the visual angle area;
constructing an index buffer area according to the triangular visible sets in different visual angle areas; wherein the index buffer is used for storing the vertex index of each triangle in the triangle visible set;
in the real-time rendering stage, determining a target subinterval in the index buffer area according to the current visual angle; and rendering the target three-dimensional model according to the triangular visible set corresponding to the target subinterval.
2. The method of claim 1, wherein the range of viewing angles is spherical;
dividing a view angle range aiming at a target three-dimensional model in a three-dimensional space according to the user input data to obtain at least two view angle areas, wherein the steps comprise:
determining an axis alignment bounding box of the target three-dimensional model according to the vertex data of the target three-dimensional model; determining a unit spherical surface by taking the central point of the axis alignment bounding box as the central point of the target three-dimensional model;
and carrying out area division on the unit spherical surface according to the longitude and latitude to obtain the at least two visual angle areas.
3. The method of claim 1, further comprising:
setting each triangle constituting the target three-dimensional model to be invisible;
setting the depth test as a near principle, and starting depth writing; under any visual angle, obtaining depth information of each pixel of the target three-dimensional model by drawing the target three-dimensional model; writing the depth information into a depth buffer;
setting the depth test to be in the principle of equal value, and closing the depth writing; for any one of the triangles constituting the target three-dimensional model, in response to the triangles passing the depth test requiring equal values and the triangle having a drawing pixel amount greater than zero, setting the triangle to be visible from invisible;
wherein all visible triangles at the perspective constitute a visible set of triangles at the perspective.
4. The method of claim 3, wherein the proximity principle is: in response to the occurrence of pixel coincidence at the same pixel position, storing the depth value with the minimum value of the pixel position in the depth buffer area; wherein the smaller the depth value, the closer the distance to the virtual camera in the three-dimensional space;
the values being equal means that the current depth value is consistent with the corresponding depth value stored in the depth buffer.
5. The method according to claim 1, wherein constructing an index buffer according to the visible set of triangles under different view angle regions comprises:
numbering the at least two view angle regions in sequence;
and sequentially arranging the vertex indexes of the triangular visible sets of each visual angle area according to the numbering sequence to obtain the index buffer area.
6. The method according to claim 1, wherein constructing an index buffer according to the visible set of triangles under different view angle regions comprises:
numbering the at least two view angle regions in sequence;
arranging the vertex indexes of the triangular visible sets of each visual angle area in sequence according to the numbering sequence to obtain the index buffer area; wherein a visible set of triangles corresponds to a sub-interval of the index buffer;
and reordering the vertex indexes of the triangles of the subintervals, and merging the vertex indexes of the triangles which repeatedly appear in the adjacent subintervals to obtain the compressed index buffer area.
7. The method of claim 1, further comprising:
and storing the index buffer into a model file in an offline computing stage.
8. The method of claim 7, wherein determining a target sub-interval in the index buffer according to a current view during the real-time rendering stage comprises:
reading the index buffer area from the model file in a real-time rendering stage;
acquiring a rotation matrix of the target three-dimensional model relative to a world coordinate system;
determining a direction vector of the current visual angle relative to the central point of the target three-dimensional model according to the rotation matrix, the current visual angle and the central point position of the target three-dimensional model;
determining a target visual angle area where the direction vector is located;
and determining a subinterval corresponding to the target view angle area in the index buffer area as the target subinterval.
9. The method of claim 1, wherein the rendering the target three-dimensional model according to the triangle visible set corresponding to the target subinterval comprises:
in a real-time rendering stage, acquiring the offset of the target subinterval in the index buffer area; acquiring the drawing quantity of triangles corresponding to the target subintervals;
and submitting a drawing instruction to a graphic processing unit, wherein the drawing instruction comprises the offset and the triangle drawing quantity, and the graphic processing unit determines the triangle visible set according to the offset and the triangle drawing quantity to finish rendering.
10. The method of claim 2, wherein the dividing the unit sphere into the at least two view angle areas according to the longitude and latitude comprises:
dividing two areas of which the latitude values on the unit spherical surface are larger than a target threshold into two independent visual angle areas;
and uniformly dividing the area with the latitude value not greater than the target threshold into N viewing angle areas according to longitude, wherein N is a positive integer not less than 2.
11. An apparatus for rendering a three-dimensional model, the apparatus comprising:
the dividing module is configured to acquire user input data in an offline calculation stage; according to the user input data, dividing a view angle range aiming at a target three-dimensional model in a three-dimensional space to obtain at least two view angle areas;
the acquisition module is configured to uniformly sample at least two view angles in any one of the at least two view angle areas in an offline calculation stage; numbering the triangles forming the target three-dimensional model in sequence for any one of the at least two visual angles; assigning unique colors to each triangle forming the target three-dimensional model according to the numbering sequence; rendering the target three-dimensional model to a frame buffer under the view angle; reading back and analyzing a frame buffer result; in response to the analyzed color corresponding to the triangle number, determining the triangle indicated by the triangle number as a visible triangle at the view angle; wherein all visible triangles at the viewing angle constitute a visible set of triangles of the target three-dimensional model at the viewing angle; determining a union set of the triangle visible sets of the at least two visual angles as a triangle visible set of the target three-dimensional model in the visual angle area;
the construction module is configured to construct an index buffer area according to the triangular visible sets of different view angle areas in an offline calculation stage; wherein the index buffer is used for storing the vertex index of each triangle in the triangle visible set;
a rendering module configured to determine a target subinterval in the index buffer according to a current view angle during a real-time rendering phase; and rendering the target three-dimensional model according to the triangular visible set corresponding to the target subinterval.
12. A computer apparatus, characterized in that the apparatus comprises a processor and a memory, in which at least one program code is stored, which is loaded and executed by the processor to implement the three-dimensional model rendering method according to any of claims 1 to 10.
13. A computer-readable storage medium, having stored therein at least one program code, which is loaded and executed by a processor, to implement the three-dimensional model rendering method according to any one of claims 1 to 10.
CN202110377711.0A 2021-04-08 2021-04-08 Three-dimensional model rendering method, device, equipment and storage medium Active CN112933599B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110377711.0A CN112933599B (en) 2021-04-08 2021-04-08 Three-dimensional model rendering method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112933599A CN112933599A (en) 2021-06-11
CN112933599B true CN112933599B (en) 2022-07-26

Family

ID=76231173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110377711.0A Active CN112933599B (en) 2021-04-08 2021-04-08 Three-dimensional model rendering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112933599B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546005A (en) * 2021-06-30 2022-12-30 华为技术有限公司 Instruction processing method and related equipment thereof
CN114494570A (en) * 2021-10-18 2022-05-13 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment
WO2023115408A1 (en) * 2021-12-22 2023-06-29 华为技术有限公司 Image processing apparatus and method
CN114470766A (en) * 2022-02-14 2022-05-13 网易(杭州)网络有限公司 Model anti-penetration method and device, electronic equipment and storage medium
CN116777731A (en) * 2022-03-11 2023-09-19 腾讯科技(成都)有限公司 Method, apparatus, device, medium and program product for soft rasterization
CN116843811A (en) * 2022-03-23 2023-10-03 腾讯科技(成都)有限公司 Three-dimensional model rendering method, device, equipment and storage medium
CN114494024B (en) * 2022-04-13 2022-08-02 腾讯科技(深圳)有限公司 Image rendering method, device and equipment and storage medium
CN114742956B (en) * 2022-06-09 2022-09-13 腾讯科技(深圳)有限公司 Model processing method, device, equipment and computer readable storage medium
CN115168112B (en) * 2022-09-07 2022-12-27 中建三局信息科技有限公司 Method, device, equipment and medium for restoring section data under dynamic section change
CN116433818B (en) * 2023-03-22 2024-04-16 宝钢工程技术集团有限公司 Cloud CPU and GPU parallel rendering method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446351A (en) * 2016-08-31 2017-02-22 郑州捷安高科股份有限公司 Real-time drawing-oriented large-scale scene organization and scheduling technology and simulation system
CN106909640A (en) * 2017-02-16 2017-06-30 杭州新迪数字工程系统有限公司 Threedimensional model lightweight display technique based on webgl
CN112233048A (en) * 2020-12-11 2021-01-15 成都成电光信科技股份有限公司 Spherical video image correction method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8902228B2 (en) * 2011-09-19 2014-12-02 Qualcomm Incorporated Optimizing resolve performance with tiling graphics architectures

Also Published As

Publication number Publication date
CN112933599A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN112933599B (en) Three-dimensional model rendering method, device, equipment and storage medium
US20230143323A1 (en) Shadow rendering method and apparatus, computer device, and storage medium
CN109754454B (en) Object model rendering method and device, storage medium and equipment
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
CN111932664B (en) Image rendering method and device, electronic equipment and storage medium
CN105051792B (en) Equipment for using depth map and light source to synthesize enhancing 3D rendering
EP3960261A1 (en) Object construction method and apparatus based on virtual environment, computer device, and readable storage medium
US8970587B2 (en) Five-dimensional occlusion queries
CN109615686B (en) Method, device, equipment and storage medium for determining potential visual set
CN112370784B (en) Virtual scene display method, device, equipment and storage medium
CN111489378A (en) Video frame feature extraction method and device, computer equipment and storage medium
CN111932463B (en) Image processing method, device, equipment and storage medium
CN111258467A (en) Interface display method and device, computer equipment and storage medium
CN111738914A (en) Image processing method, image processing device, computer equipment and storage medium
EP4290464A1 (en) Image rendering method and apparatus, and electronic device and storage medium
CN112245926A (en) Virtual terrain rendering method, device, equipment and medium
CN112907716A (en) Cloud rendering method, device, equipment and storage medium in virtual environment
CN113384880A (en) Virtual scene display method and device, computer equipment and storage medium
CN108492339B (en) Method and device for acquiring resource compression packet, electronic equipment and storage medium
CN112750190B (en) Three-dimensional thermodynamic diagram generation method, device, equipment and storage medium
CN112116681A (en) Image generation method and device, computer equipment and storage medium
CN112053360A (en) Image segmentation method and device, computer equipment and storage medium
CN116828207A (en) Image processing method, device, computer equipment and storage medium
CN113762054A (en) Image recognition method, device, equipment and readable storage medium
CN113018865A (en) Climbing line generation method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40046026

Country of ref document: HK

GR01 Patent grant