CN114581596A - GPU-driven fast geometry rendering method


Info

Publication number
CN114581596A
Authority
CN
China
Prior art keywords
rendering
gpu
geometric
target scene
shader
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210178311.1A
Other languages
Chinese (zh)
Inventor
马恩成
张晓龙
杨广剑
于贵友
蔡欢
王新洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Construction Technology Co ltd
Original Assignee
Beijing Construction Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Construction Technology Co ltd filed Critical Beijing Construction Technology Co ltd
Priority to CN202210178311.1A
Publication of CN114581596A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/80 Shading
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

Embodiments of the present disclosure provide methods, electronic devices, and computer program products for graphics processing unit (GPU)-driven fast rendering of geometries. The method includes determining, in a cache of the GPU, respective buffers according to the respective types and rendering attributes of a plurality of geometries in a target scene. The method also includes filling the rendering parameters of each geometry into the corresponding buffer according to a predetermined priority. The method also includes generating a view frustum based on the plurality of geometries and perspective parameters associated with the target scene, and removing, using the GPU, the rendering parameters of geometries outside the view frustum from the corresponding buffers. The method further includes generating a rendered target scene using a shader based on the rendering parameters remaining in the corresponding buffers. With embodiments of the present disclosure, the rendering frame rate can be increased, the load on the central processing unit (CPU) can be reduced, and the amount of data communicated between the GPU and the CPU can be reduced.

Description

GPU-driven fast geometry rendering method
Technical Field
Embodiments of the present disclosure relate to the field of computers, and more particularly, to a method, electronic device, apparatus, medium, and computer program product for graphics processing unit (GPU)-driven fast rendering of geometries.
Background
In the prior art, modeling software products generally discretize geometry data on the CPU side (e.g., into triangular patches), and only then are the triangular patches rendered on the GPU side by the corresponding GPU rendering pipeline. As a result, the amount of data communicated between the CPU and the GPU is large, the CPU bears a heavy computational load while discretizing the geometry data, the computing power of the GPU is wasted during that time, the final rendering frame rate is low, the presented visual effect is poor, and the user experience suffers. There is therefore an urgent need for a rendering method that reduces the load on the central processing unit (CPU), reduces the amount of data communicated between the GPU and the CPU, and increases the rendering frame rate.
Disclosure of Invention
Embodiments of the present disclosure provide a method, electronic device, apparatus, medium, and computer program product for GPU-driven fast rendering of geometries.
According to a first aspect of the present disclosure, a method for GPU-driven fast geometry rendering is provided. The method includes determining, in a cache of the GPU, respective buffers according to the respective types and rendering attributes of a plurality of geometries in a target scene, the respective buffers corresponding to respective ones of a plurality of GPU rendering pipelines in the GPU. The method also includes filling the rendering parameters of each of the plurality of geometries into the corresponding buffer according to a predetermined priority. The method also includes generating a view frustum based on the plurality of geometries in the target scene and perspective parameters associated with the target scene, and removing, using the GPU, the rendering parameters of geometries outside the view frustum from the corresponding buffers. The method further includes generating a rendered target scene using at least one shader in the GPU rendering pipeline based on the rendering parameters remaining in the corresponding buffers, where the remaining rendering parameters represent the geometries to be rendered.
In some embodiments, generating the rendered target scene using at least one shader in the GPU rendering pipeline includes discretizing the remaining rendering parameters using a tessellation shader in the GPU rendering pipeline to generate a first plurality of triangular patch data, and rendering the target scene based on the first plurality of triangular patch data.
In some embodiments, generating the rendered target scene using at least one shader in the GPU rendering pipeline includes discretizing the remaining rendering parameters using a geometry shader in the GPU rendering pipeline to generate a second plurality of triangular patch data, and rendering the target scene based on the second plurality of triangular patch data.
In some embodiments, generating the rendered target scene using at least one shader in the GPU rendering pipeline includes discretizing the remaining rendering parameters using a tessellation shader in the GPU rendering pipeline to generate a third plurality of triangular patch data; generating, using a geometry shader in the GPU rendering pipeline, a fourth plurality of triangular patch data based on the third plurality, where the number of the fourth plurality of triangular patch data is greater than the number of the third plurality; and rendering the target scene based on the fourth plurality of triangular patch data.
In some embodiments, the view frustum is determined by: generating, using a hull shader in the GPU rendering pipeline, a bounding box for each of the plurality of geometries based on the rendering parameters remaining in the corresponding buffer; transforming the bounding box of each geometry into standard device space through a matrix transformation that satisfies the display characteristics of normalized device coordinates; and determining the boundary of the transformation as the boundary of the view frustum.
In some embodiments, the plurality of geometries are classified into respective types by a scene management tree, the scene management tree indicating the types of the geometries and the relationships between the sub-types of each type.
In some embodiments, the types of geometries include: an entity, where a geometry belonging to the entity has a shape that can be represented using parameters; a curved surface, where a geometry belonging to the curved surface has a surface that can be expressed using an equation; and a triangular patch, to which any geometry belonging to neither an entity nor a curved surface belongs.
In some embodiments, Building Information Model (BIM) data is used to represent the plurality of geometries.
According to a second aspect of the present disclosure, an electronic device is also provided. The electronic device includes a processor, a graphics processing unit (GPU), and a memory coupled, jointly or separately, to the processor and the GPU, the memory having instructions stored therein. The instructions, when executed by the processor, cause the electronic device to perform the following actions: determining, in a cache of the GPU, respective buffers according to the respective types and rendering attributes of a plurality of geometries in a target scene, the respective buffers corresponding to respective ones of a plurality of GPU rendering pipelines in the GPU; and filling the rendering parameters of each of the plurality of geometries into the corresponding buffer according to a predetermined priority. The instructions, when executed by the GPU, cause the electronic device to perform the following actions: generating a view frustum based on the plurality of geometries in the target scene and perspective parameters associated with the target scene, and removing the rendering parameters of geometries outside the view frustum from the corresponding buffers; and generating a rendered target scene using at least one shader in the GPU rendering pipeline based on the rendering parameters remaining in the corresponding buffers, where the remaining rendering parameters represent the geometries to be rendered.
In some embodiments, generating the rendered target scene using at least one shader in the GPU rendering pipeline includes discretizing the remaining rendering parameters using a tessellation shader in the GPU rendering pipeline to generate a first plurality of triangular patch data, and rendering the target scene based on the first plurality of triangular patch data.
In some embodiments, generating the rendered target scene using at least one shader in the GPU rendering pipeline includes discretizing the remaining rendering parameters using a geometry shader in the GPU rendering pipeline to generate a second plurality of triangular patch data, and rendering the target scene based on the second plurality of triangular patch data.
In some embodiments, generating the rendered target scene using at least one shader in the GPU rendering pipeline includes discretizing the remaining rendering parameters using a tessellation shader in the GPU rendering pipeline to generate a third plurality of triangular patch data; generating, using a geometry shader in the GPU rendering pipeline, a fourth plurality of triangular patch data based on the third plurality, where the number of the fourth plurality of triangular patch data is greater than the number of the third plurality; and rendering the target scene based on the fourth plurality of triangular patch data.
In some embodiments, the view frustum is determined by: generating, using a hull shader in the GPU rendering pipeline, a bounding box for each of the plurality of geometries based on the rendering parameters remaining in the corresponding buffer; transforming the bounding box of each geometry into standard device space through a matrix transformation that satisfies the display characteristics of normalized device coordinates; and determining the boundary of the transformation as the boundary of the view frustum.
In some embodiments, the plurality of geometries are classified into respective types by a scene management tree, the scene management tree indicating the types of the geometries and the relationships between the sub-types of each type.
In some embodiments, the types of geometries include: an entity, where a geometry belonging to the entity has a shape that can be represented using parameters; a curved surface, where a geometry belonging to the curved surface has a surface that can be expressed using an equation; and a triangular patch, to which any geometry belonging to neither an entity nor a curved surface belongs.
In some embodiments, Building Information Model (BIM) data is used to represent the plurality of geometries.
According to a third aspect of the present disclosure, an apparatus for GPU-driven fast geometry rendering is provided. The apparatus includes a buffer determination module configured to determine, in a cache of the GPU, respective buffers according to the respective types and rendering attributes of a plurality of geometries in a target scene, the respective buffers corresponding to respective ones of a plurality of GPU rendering pipelines in the GPU. The apparatus also includes a filling module configured to fill the rendering parameters of each of the plurality of geometries into the corresponding buffer according to a predetermined priority. The apparatus also includes a rendering parameter removal module configured to generate a view frustum based on the plurality of geometries in the target scene and perspective parameters associated with the target scene, and to remove, using the GPU, the rendering parameters of geometries outside the view frustum from the corresponding buffers. The apparatus further includes a rendering module configured to generate a rendered target scene using at least one shader in the GPU rendering pipeline based on the rendering parameters remaining in the corresponding buffers, where the remaining rendering parameters represent the geometries to be rendered.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium comprising machine executable instructions which, when executed by an apparatus, cause the apparatus to perform a method according to the first aspect of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a computer program product, tangibly stored on a computer-readable medium and comprising machine executable instructions that, when executed by an apparatus, cause the apparatus to perform the method according to the first aspect.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Drawings
The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 2 schematically shows a schematic diagram of a scene management tree according to an embodiment of the disclosure;
FIG. 3 schematically shows a schematic diagram of a GPU rendering pipeline according to an embodiment of the present disclosure;
fig. 4 schematically shows a schematic view of a view frustum according to an embodiment of the disclosure;
FIG. 5A schematically illustrates a schematic diagram of a shader, according to an exemplary implementation of the present disclosure;
FIG. 5B schematically illustrates a schematic diagram of a tessellation shader in accordance with an exemplary implementation of the present disclosure;
FIG. 6 schematically illustrates a flow chart of a method of GPU-driven geometry rendering according to an exemplary implementation of the present disclosure;
FIG. 7 schematically illustrates a block diagram of an apparatus for GPU-driven geometry rendering according to an exemplary implementation of the present disclosure; and
fig. 8 schematically illustrates a block diagram of an apparatus for GPU-driven geometry rendering according to an exemplary implementation of the present disclosure.
Throughout the drawings, the same or similar reference numbers refer to the same or similar elements.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the terms "include" and its derivatives should be understood as open-ended, i.e., "including but not limited to. The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same object. Other explicit and implicit definitions are also possible below.
Additionally, all specific values herein are examples only to aid understanding and are in no way intended to limit the scope.
The inventors have noted that existing modeling software products generally discretize geometry data on the CPU side (e.g., into triangular patches), and then render the triangular patch data on the GPU side through the corresponding GPU rendering pipeline. This requires communication between the CPU and the GPU, and the amount of data involved is large. The CPU bears a heavy computational load while discretizing the geometry data, during which the GPU sits idle, so the parallel computing power of the CPU and the GPU is wasted. Consequently, the final rendering frame rate is low, the presented visual effect is poor, and the user experience suffers. There is therefore an urgent need for a rendering method that reduces the load on the CPU, reduces the amount of data communicated between the GPU and the CPU, and increases the rendering frame rate.
In view of the above, the present disclosure provides a GPU-driven geometry rendering method. With this method, part of the processing originally performed on the CPU is moved to the GPU, so that the CPU and the GPU can compute in parallel, the CPU load is reduced, computational efficiency is improved, the amount of data communicated between the CPU and the GPU is reduced, and the rendering frame rate is increased.
In the following description, certain embodiments will be discussed with reference to specific examples. It should be understood that they are intended only to aid understanding of the principles and concepts of the embodiments of the disclosure, and are not intended to limit the scope of the disclosure in any way.
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure can be implemented. As shown in FIG. 1, a computing resource 101 (e.g., a computer system, a server, etc.) obtains (e.g., receives) a computerized representation of a target scene, e.g., data of the target scene described with a Building Information Model (BIM); examples of such data include model pixels, temporary pixels, graphical model data, non-graphical parameter data, and result data in other forms (text, pictures, XML, etc.). These data reflect the characteristics of the corresponding geometries in the target scene, each being a separate parameterized geometric representation. The computing resource 101 includes a central processing unit (CPU) 102 and a graphics processing unit (GPU) 103. Both the CPU 102 and the GPU 103 may have multiple cores (not shown).
It should be understood that the example environment 100 shown in FIG. 1 is illustrative only and is not intended to limit the scope of the present disclosure. Various additional devices, apparatuses, and/or modules may also be included in the example environment 100. Moreover, the modules shown in fig. 1 are also merely illustrative and are not intended to limit the scope of the present disclosure. In some embodiments, some modules may be integrated into one physical entity or further split into more modules.
Fig. 2 schematically shows a schematic diagram of a scene management tree 200 according to an embodiment of the present disclosure.
As shown, the scene management tree indicates the types of geometries and the relationships between the sub-types of each type. Using the scene management tree, the target scene may be divided into different geometries. The first layer of the scene management tree includes entities, curved surfaces, and triangular patches. An entity is a geometry whose shape can be represented using parameters; for example, a sphere can be expressed by a radius and the mathematical formula of a sphere. A curved surface is a geometry whose surface can be expressed using an equation, for example a Bezier surface or a B-spline surface. A triangular patch is any geometry that belongs to neither an entity nor a curved surface; triangular patches cannot be expressed structurally using parameters.
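To make the taxonomy concrete, the following is a minimal C++ sketch of how the first two layers of such a scene management tree might be modeled. It is an illustrative sketch only: all type, enum, and field names are assumptions of this description, not identifiers from the patent.

```cpp
#include <memory>
#include <string>
#include <vector>

// Top-level geometry categories from the first layer of the scene management tree.
enum class GeometryType { Entity, CurvedSurface, TrianglePatch };

// Illustrative sub-types for the two parametric categories (hypothetical names).
enum class EntitySubType  { Face, Sphere, Cylinder, Torus, Extrusion, Revolution, Sweep };
enum class SurfaceSubType { Bezier, BSpline, RationalBezier };

// A node of the scene management tree: a type, an optional sub-type tag,
// and child nodes refining the classification.
struct SceneNode {
    GeometryType type;
    std::string subType;                       // empty at the root of a category
    std::vector<std::unique_ptr<SceneNode>> children;
};

// A sphere is an "entity": fully described by parameters (center + radius),
// so no triangle data needs to cross the CPU-to-GPU boundary for it.
struct SphereParams { float center[3]; float radius; };
```

The key design point is that entities such as the sphere travel as a handful of parameters rather than as triangle lists, which is what keeps CPU-to-GPU traffic small.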
Each type may be further divided into sub-types. For example, sub-types of the entity type include faces, spheres, cylinders, tori, extruded bodies, bodies of revolution, swept bodies, and the like. Sub-types of the curved surface type include Bezier surfaces, B-spline surfaces, rational Bezier surfaces, and the like. Since a triangular patch cannot be represented with a parametric structure, it has no sub-types.
Fig. 3 schematically illustrates a schematic diagram of a GPU rendering pipeline 300 according to an embodiment of the present disclosure. As shown, the GPU rendering pipeline 300 may include: a vertex stage 301, in which coordinate transformations are performed and the data required by subsequent stages is output; a tessellation stage 302, in which the geometric fidelity of the geometries is enhanced; a geometry stage 303, in which the geometry data is refined; and a fragment stage 304, in which data describing how the triangle mesh covers each pixel is generated.
Fig. 4 schematically illustrates a schematic view of a viewing frustum 400 according to an embodiment of the present disclosure.
As shown, the view frustum 400 can be obtained through three successive coordinate transformations starting from the coordinate system of the target scene itself. First, the coordinate system of the target scene is transformed into the world coordinate system. Then, the world coordinate system is transformed into the view space coordinate system, the resulting view space coordinates are projected into clip space, and perspective division yields the view frustum 400, which satisfies the display characteristics of normalized device coordinates; a minimal sketch of this transformation chain follows. It will be appreciated that this process "compresses" the target scene in three-dimensional space onto a two-dimensional screen as viewed from the perspective of the human eye, so the resulting volume is referred to as a "view frustum". Geometries outside the view frustum 400 need not be rendered. This process, also referred to as "culling", prevents invalid geometries from being rendered and degrading rendering performance.
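The following is a minimal sketch of this three-step transformation chain using the GLM mathematics library; the camera position, field of view, and clip planes are illustrative values, not parameters from the patent.

```cpp
// Scene/object coordinates -> world -> view -> clip -> normalized device coordinates.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 toNormalizedDeviceCoords(const glm::vec3& localPos,
                                   const glm::mat4& model)   // scene -> world
{
    // 1. world -> view: place the camera (illustrative eye/target/up).
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),
                                 glm::vec3(0.0f),
                                 glm::vec3(0.0f, 1.0f, 0.0f));
    // 2. view -> clip: the perspective projection defines the view frustum.
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f,
                                      0.1f, 1000.0f);        // near/far planes
    glm::vec4 clip = proj * view * model * glm::vec4(localPos, 1.0f);
    // 3. perspective division yields normalized device coordinates in [-1, 1].
    return glm::vec3(clip) / clip.w;
}
```

A point whose normalized device coordinates fall outside [-1, 1] on any axis lies outside the view frustum 400.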
Fig. 5A schematically illustrates a schematic diagram of a shader in accordance with an exemplary implementation of the present disclosure.
As shown, the shaders may include a vertex shader 501, which may be used to transform vertex data. The tessellation shader 502 is an optional stage of the GPU rendering pipeline; by enriching mesh details through an increased number of triangles, it can increase the fidelity of the geometry. The geometry shader 503, also an optional stage of the GPU rendering pipeline, expands input points or lines into polygons to complete the geometry data. The fragment shader 504 determines the final color of the pixels on the screen; lighting calculations and shading are performed at this stage, which is where the advanced effects of the GPU rendering pipeline are produced.
Fig. 5B schematically illustrates a schematic diagram of a tessellation shader in accordance with an exemplary implementation of the present disclosure.
As shown in the figure, data such as the subdivision parameters and internal subdivision factors are defined in the hull shader 505, and the bounding boxes of the triangular patches are generated there for the culling operation. The tessellator 506 is not programmable; it is processed by hardware according to the parameters configured in the hull shader 505 and performs the data subdivision. In the domain shader 507, the actual vertex coordinates are computed from the subdivision factors and the vertex UV data produced by the tessellation, transformed into the homogeneous clip space, and the display data required by the program is generated for subsequent rendering.
Fig. 6 schematically illustrates a flow diagram of a method 600 for GPU-driven geometry-based fast rendering in accordance with an exemplary implementation of the present disclosure.
At block 602, respective buffers are determined in a cache of the GPU according to respective types and respective rendering attributes of the plurality of geometries in the target scene, the respective buffers corresponding to respective ones of a plurality of GPU rendering pipelines in the GPU.
In some embodiments, the geometries included in the target scene are extracted according to the scene management tree, and the type of each geometry together with its corresponding rendering parameters is stored as an entry; entries may be updated. Rendering parameters may include colors, graphs, numerical values, and the like. For each geometry, a buffer in the GPU cache is determined to store its entry. Each buffer corresponds to a respective GPU rendering pipeline and is rendered in its own rendering stage.
At block 604, rendering parameters for each of the plurality of geometries are filled to a respective buffer according to a predetermined priority.
In some embodiments, respective priorities are determined for the different types of geometries based on the principle of maximizing rendering performance, and each geometry type and its corresponding rendering parameters are filled into the corresponding buffer. Then, for each buffer, the geometries belonging to that buffer are traversed and the graphics API is invoked for rendering; a sketch of this bookkeeping follows.
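The following is a hedged C++ sketch of this bookkeeping on the CPU side; the entry layout, the priority convention (lower value first), and the `gpuBufferHandle` field are illustrative assumptions, not the patent's data structures.

```cpp
#include <algorithm>
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical per-geometry entry: type, priority, and rendering parameters.
struct GeometryEntry {
    int type;                          // index into the scene management tree types
    int priority;                      // lower value = filled (and rendered) first
    std::vector<float> renderParams;   // colors, numeric shape parameters, ...
};

// Each buffer feeds one GPU rendering pipeline; gpuBufferHandle stands in
// for whatever buffer object the graphics API provides.
struct TypedBuffer {
    std::uint32_t gpuBufferHandle = 0;
    std::vector<GeometryEntry> entries;
};

// Determine one buffer per geometry type and fill it by priority.
void fillBuffers(std::vector<GeometryEntry> scene,
                 std::map<int, TypedBuffer>& buffersByType)
{
    std::stable_sort(scene.begin(), scene.end(),
                     [](const GeometryEntry& a, const GeometryEntry& b) {
                         return a.priority < b.priority;
                     });
    for (auto& e : scene)
        buffersByType[e.type].entries.push_back(std::move(e));
    // Each buffer would then be uploaded once and consumed by its own pipeline.
}
```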
In this way, in the subsequent steps, processing originally performed on the CPU can be moved to the GPU, reducing the data communication between the CPU and the GPU and the consumption of rendering bandwidth.
At block 606, a view frustum is generated based on the plurality of geometries in the target scene and perspective parameters associated with the target scene, and the rendering parameters of geometries outside the view frustum are removed from the corresponding buffers using the GPU.
It will be appreciated that the removed geometries (represented by their rendering parameters) are content that cannot be seen from the viewing perspective and therefore do not need to be rendered; a minimal sketch of the test is shown below.
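As a minimal sketch, assuming bounding boxes already transformed to normalized device coordinates, the removal test can be expressed as follows; in the described method this test runs on the GPU (e.g., driven by the hull shader), and the C++ form here is only for readability.

```cpp
#include <glm/glm.hpp>

// A geometry whose bounding box lies entirely outside the NDC cube [-1,1]^3
// cannot appear on screen, so its rendering parameters can be removed.
struct BoundingBoxNDC { glm::vec3 min; glm::vec3 max; };

bool outsideViewFrustum(const BoundingBoxNDC& b)
{
    // Outside if the whole box is beyond any single face of the NDC cube.
    return b.max.x < -1.0f || b.min.x > 1.0f ||
           b.max.y < -1.0f || b.min.y > 1.0f ||
           b.max.z < -1.0f || b.min.z > 1.0f;
}
```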
At block 608, a rendered target scene is generated using at least one shader in the GPU rendering pipeline based on the remaining rendering parameters in the respective buffer, where the remaining rendering parameters represent a geometric form to be rendered.
In some embodiments, generating the rendered target scene using at least one shader in the GPU rendering pipeline includes discretizing the remaining rendering parameters using a tessellation shader in the GPU rendering pipeline to generate a first plurality of triangular patch data, and rendering the target scene based on the first plurality of triangular patch data.
In some embodiments, generating the rendered target scene using at least one shader in the GPU rendering pipeline includes discretizing the remaining rendering parameters using a geometry shader in the GPU rendering pipeline to generate a second plurality of triangular patch data, and rendering the target scene based on the second plurality of triangular patch data.
In some embodiments, generating the rendered target scene using at least one shader in the GPU rendering pipeline includes discretizing the remaining rendering parameters using a tessellation shader in the GPU rendering pipeline to generate a third plurality of triangular patch data; generating, using a geometry shader in the GPU rendering pipeline, a fourth plurality of triangular patch data based on the third plurality, where the number of the fourth plurality of triangular patch data is greater than the number of the third plurality; and rendering the target scene based on the fourth plurality of triangular patch data.
For example, if the generated third plurality of triangular patch data includes 10 patches and 4 patches are generated from each of the 10, a total of 4 × 10 = 40 patches is produced. In this way, by generating the fourth plurality of triangular patch data based on the third plurality, a finer rendering result can be obtained, which improves the user experience.
In some embodiments, a polygon decomposition algorithm or a vertex normal vector interpolation algorithm may be used to generate triangular patch data of finer granularity, and a tessellation shader may be used to generate the rendered target scene.
By using a polygon decomposition algorithm or a vertex normal vector interpolation algorithm, the surface of a geometry can be approximated with finer primitives that simulate the underlying curved surface, so the rendered target scene improves the user experience; a sketch of normal-interpolation refinement follows.
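The following is a sketch of one refinement step based on vertex normal vector interpolation, in the spirit of Phong tessellation; the 1-to-4 split and the projection formula are illustrative choices of this description, not the patent's algorithm, and unit-length normals are assumed.

```cpp
#include <array>
#include <vector>
#include <glm/glm.hpp>

struct Vertex { glm::vec3 pos; glm::vec3 normal; };
using Tri = std::array<Vertex, 3>;

// Edge midpoint with interpolated (then renormalized) vertex normal; the
// chord midpoint is projected onto the tangent planes at both endpoints and
// averaged, which nudges it toward the underlying curved surface.
static Vertex midpoint(const Vertex& a, const Vertex& b)
{
    Vertex m;
    m.normal = glm::normalize(a.normal + b.normal);
    glm::vec3 mid = 0.5f * (a.pos + b.pos);
    glm::vec3 pa = mid - glm::dot(mid - a.pos, a.normal) * a.normal;
    glm::vec3 pb = mid - glm::dot(mid - b.pos, b.normal) * b.normal;
    m.pos = 0.5f * (pa + pb);
    return m;
}

// One refinement step: each triangle becomes four
// (matching the 10 -> 40 amplification example above).
std::vector<Tri> refine(const std::vector<Tri>& in)
{
    std::vector<Tri> out;
    out.reserve(in.size() * 4);
    for (const Tri& t : in) {
        Vertex m01 = midpoint(t[0], t[1]);
        Vertex m12 = midpoint(t[1], t[2]);
        Vertex m20 = midpoint(t[2], t[0]);
        out.push_back({t[0], m01, m20});
        out.push_back({m01, t[1], m12});
        out.push_back({m20, m12, t[2]});
        out.push_back({m01, m12, m20});   // the center triangle
    }
    return out;
}
```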
In some embodiments, the view frustum is determined by: generating, using a hull shader in the GPU rendering pipeline, a bounding box for each of the plurality of geometries based on the rendering parameters remaining in the corresponding buffer; transforming the bounding box of each geometry into standard device space through a matrix transformation that satisfies the display characteristics of Normalized Device Coordinates; and determining the boundary of the transformation as the boundary of the view frustum.
In some embodiments, the plurality of geometric forms are classified into respective types by a scenario management tree indicating types of geometric forms and relationships between sub-types of each type.
In some embodiments, the types of geometric features include: an entity, wherein a geometric form belonging to said entity has a geometric shape that can be represented using parameters; a curved surface, wherein a geometric shape belonging to the curved surface has a curved surface that can be expressed using an equation; a triangular patch to which a geometric shape that does not belong to the entity or the curvilinear surface belongs.
In some embodiments, the plurality of geometric forms are represented using Building Information Model (BIM) data.
Pseudo code for a computer program implementing embodiments of the present disclosure is provided below.
The GPU-driven rendering algorithm architecture used in embodiments of the invention is as follows, with the main steps described in the code comments.
(The original pseudo code listing is provided in the patent only as images, Figures BDA0003521271260000111 to BDA0003521271260000131, and cannot be reproduced as text.)
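Since the original listing is unavailable, the following is a hedged C++ sketch of what such a GPU-driven frame could look like; `uploadIfDirty`, `dispatchFrustumCulling`, and `drawIndirect` are placeholder names standing in for graphics-API calls, and the overall structure is an assumption of this description, not the patent's code.

```cpp
#include <map>

// Minimal stand-ins; real code would hold API buffer/pipeline handles.
struct TypedBuffer { unsigned handle = 0; bool dirty = true; };
struct Pipeline   { unsigned handle = 0; };

// Placeholder hooks for the graphics API (names are illustrative only).
void uploadIfDirty(TypedBuffer&) {}
void dispatchFrustumCulling(TypedBuffer&) {}
void drawIndirect(Pipeline&, TypedBuffer&) {}

void renderFrame(std::map<int, TypedBuffer>& buffers,
                 std::map<int, Pipeline>& pipelines)
{
    // 1. CPU side: only parameter upload, no discretization.
    for (auto& [type, buf] : buffers)
        uploadIfDirty(buf);
    // 2. GPU side: cull whole geometries against the view frustum,
    //    compacting each buffer in place (e.g., via a compute pass).
    for (auto& [type, buf] : buffers)
        dispatchFrustumCulling(buf);
    // 3. GPU side: each surviving entry is discretized by the tessellation
    //    and/or geometry shader of its pipeline and then shaded.
    for (auto& [type, buf] : buffers)
        drawIndirect(pipelines.at(type), buf);
}
```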
The GPU-driven sphere generation algorithm used in embodiments of the invention is as follows, with the main steps described in the code comments:
(The original pseudo code listing is provided in the patent only as images, Figures BDA0003521271260000132 to BDA0003521271260000141, and cannot be reproduced as text.)
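In its place, the following is an illustrative C++ version of the discretization that a tessellation/domain shader could perform for a parametric sphere; the stack/slice scheme and the function signature are assumptions of this sketch. In the method itself only the sphere parameters (center, radius) cross the CPU-to-GPU boundary.

```cpp
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

// Generate the vertex grid of a sphere from its parameters; indexing the
// grid as stacks x slices quads (two triangles each) yields the patches.
std::vector<glm::vec3> sphereVertices(glm::vec3 center, float radius,
                                      int stacks, int slices)
{
    const float kPi = 3.14159265358979f;
    std::vector<glm::vec3> verts;
    verts.reserve((stacks + 1) * (slices + 1));
    for (int i = 0; i <= stacks; ++i) {
        float phi = kPi * float(i) / float(stacks);               // polar angle
        for (int j = 0; j <= slices; ++j) {
            float theta = 2.0f * kPi * float(j) / float(slices);  // azimuth
            verts.push_back(center + radius * glm::vec3(
                std::sin(phi) * std::cos(theta),
                std::cos(phi),
                std::sin(phi) * std::sin(theta)));
        }
    }
    return verts;
}
```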
The GPU-driven rational Bezier surface generation algorithm used in embodiments of the invention is as follows, with the main steps described in the code comments.
(The original pseudo code listing is provided in the patent only as images, Figures BDA0003521271260000142 to BDA0003521271260000161, and cannot be reproduced as text.)
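In its place, the following is an illustrative C++ evaluation of a point on a rational Bezier surface, which is the quantity a domain shader would emit per tessellated vertex; the flattened row-major array layout and the function signature are assumptions of this sketch, not the patent's code.

```cpp
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

// Bernstein polynomial B_{i,n}(t).
static float bernstein(int i, int n, float t)
{
    float c = 1.0f;                          // binomial coefficient C(n, i)
    for (int k = 0; k < i; ++k) c = c * float(n - k) / float(k + 1);
    return c * std::pow(t, float(i)) * std::pow(1.0f - t, float(n - i));
}

// Evaluate a rational Bezier surface point:
//   S(u,v) = sum_ij w_ij P_ij B_{i,n}(u) B_{j,m}(v)
//          / sum_ij w_ij        B_{i,n}(u) B_{j,m}(v)
// ctrl and w are (n+1) x (m+1) arrays flattened row-major.
glm::vec3 rationalBezier(const std::vector<glm::vec3>& ctrl,
                         const std::vector<float>& w,
                         int n, int m, float u, float v)
{
    glm::vec3 num(0.0f);
    float den = 0.0f;
    for (int i = 0; i <= n; ++i)
        for (int j = 0; j <= m; ++j) {
            float b = w[i * (m + 1) + j] * bernstein(i, n, u) * bernstein(j, m, v);
            num += b * ctrl[i * (m + 1) + j];
            den += b;
        }
    return num / den;   // the domain shader would emit this as a vertex position
}
```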
It can be appreciated that, with the method 600, part of the processing originally performed on the CPU (e.g., the processing shown in blocks 606 and 608) can be moved to the GPU, so that the CPU and the GPU can compute in parallel, the CPU load is reduced, computational efficiency is improved, the load between the CPU and the GPU is better balanced, the amount of data communicated between the CPU and the GPU is reduced, and the rendering frame rate is increased. In addition, by using different shaders in each rendering stage, the rendering result is more refined and the user experience is improved.
Fig. 7 schematically illustrates a block diagram of an apparatus for GPU-driven geometry-based fast rendering according to an exemplary implementation of the present disclosure.
A buffer determination module 702 configured to determine, in accordance with the respective types and the respective rendering attributes of the plurality of geometric forms in the target scene, a respective buffer in a cache of the GPU, the respective buffer corresponding to a respective GPU rendering pipeline of a plurality of GPU rendering pipelines in the GPU.
A filling module 704 configured to fill rendering parameters of each of the plurality of geometries to a respective buffer according to a predetermined priority.
A rendering parameter removal module 706 configured to generate a view frustum based on the plurality of geometric shapes in the target scene and the perspective parameters associated with the target scene, and remove rendering parameters for geometric shapes outside the view frustum range from the respective buffers using the GPU.
A rendering module 708 configured to generate a rendered target scene using at least one shader in the GPU rendering pipeline based on the remaining rendering parameters in the respective buffer, wherein the remaining rendering parameters represent geometric shapes to be rendered.
It is to be appreciated that apparatus 700 can also achieve beneficial technical effects as can be achieved by method 600.
Fig. 8 shows a schematic block diagram of a device 800 that may be used to implement embodiments of the present disclosure, the device 800 may be a device or apparatus as described by embodiments of the present disclosure. As shown in fig. 8, device 800 includes a Central Processing Unit (CPU)801 that may perform various appropriate actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM)802 or loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804. Although not shown in fig. 8, device 800 may also include a coprocessor.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Various methods or processes described above may be performed by the processing unit 801. For example, in some embodiments, the methods may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 808. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 800 via ROM 802 and/or communications unit 809. When loaded into RAM 803 and executed by CPU 801, a computer program may perform one or more steps or actions of the methods or processes described above.
In some embodiments, the methods and processes described above may be implemented as a computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for carrying out various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language, as well as conventional procedural programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, the electronic circuitry that can execute the computer-readable program instructions implements aspects of the present disclosure by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA).
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Although the disclosure has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (19)

1. A GPU-driven fast geometry rendering method, comprising the following steps:
determining, in a cache of the GPU, respective buffers according to respective types and respective rendering attributes of a plurality of geometric shapes in a target scene, the respective buffers corresponding to respective ones of a plurality of GPU rendering pipelines in the GPU;
filling rendering parameters of each geometric body in the plurality of geometric bodies into the corresponding buffer area according to the priority determined in advance;
generating a view frustum based on the plurality of geometric shapes in the target scene and perspective parameters associated with the target scene, rendering parameters for geometric shapes outside the view frustum range being removed from the respective buffers using the GPU; and
generating a rendered target scene using at least one shader in the GPU rendering pipeline based on remaining rendering parameters in a respective buffer, wherein the remaining rendering parameters represent a geometric form to be rendered.
2. The method of claim 1, wherein generating a rendered target scene using at least one shader in the GPU rendering pipeline comprises:
discretizing the remaining rendering parameters using a tessellation shader in the GPU rendering pipeline to generate a first plurality of triangular patch data; and
rendering the target scene based on the first plurality of triangular patch data.
3. The method of claim 1, wherein generating a rendered target scene using at least one shader in the GPU rendering pipeline comprises:
discretizing the remaining rendering parameters using a geometry shader in the GPU rendering pipeline to generate a second plurality of triangular patch data; and
rendering the target scene based on the second plurality of triangular patch data.
4. The method of claim 1, wherein generating a rendered target scene using at least one shader in the GPU rendering pipeline comprises:
discretizing the remaining rendering parameters using a tessellation shader in the GPU rendering pipeline to generate a third plurality of triangular patch data;
generating, using a geometry shader in the GPU rendering pipeline, a fourth plurality of triangular patch data based on the third plurality of triangular patch data, wherein a number of the fourth plurality of triangular patch data is greater than a number of the third plurality of triangular patch data; and
rendering the target scene based on the fourth plurality of triangular patch data.
5. The method of claim 1, the view frustum being determined by:
generating, using a hull shader in the GPU rendering pipeline, bounding boxes for each geometry in the plurality of geometries based on rendering parameters remaining in a respective buffer;
transforming the bounding box of each geometric shape to a standard device space by matrix transformation, the transformation satisfying display characteristics of normalized device coordinates; and
determining the boundary of the transformation as the boundary of the view frustum.
6. The method of claim 1, wherein the plurality of geometric forms are classified into respective types by a scenario management tree indicating types of geometric forms and relationships between sub-types of each type.
7. The method of claim 1, the type of geometric form comprising:
an entity, wherein a geometric form belonging to said entity has a geometric shape that can be represented using parameters;
a curved surface, wherein a geometric shape belonging to the curved surface has a curved surface that can be expressed using an equation; and
a triangular patch, to which a geometric shape that belongs to neither the entity nor the curved surface belongs.
8. The method of claim 1, wherein the plurality of geometric forms are represented using Building Information Model (BIM) data.
9. An electronic device, comprising:
a processor;
a Graphics Processing Unit (GPU); and
a memory coupled, either in common with or separately from the processor and the GPU, the memory having instructions stored therein that, when executed by the processor, cause the electronic device to perform the actions of:
determining, in a cache of the GPU, respective buffers according to respective types and respective rendering attributes of a plurality of geometric shapes in a target scene, the respective buffers corresponding to respective ones of a plurality of GPU rendering pipelines in the GPU; and
filling rendering parameters of each geometric body in the plurality of geometric bodies into the corresponding buffer area according to the priority determined in advance;
the instructions, when executed by the GPU, cause the electronic device to perform the following:
generating a view frustum based on the plurality of geometric shapes in the target scene and perspective parameters associated with the target scene, rendering parameters for geometric shapes outside the view frustum range being removed from the respective buffers using the GPU; and
generating a rendered target scene using at least one shader in the GPU rendering pipeline based on remaining rendering parameters in a respective buffer, wherein the remaining rendering parameters represent a geometric form to be rendered.
10. The electronic device of claim 9, wherein generating a rendered target scene using at least one shader in the GPU rendering pipeline comprises:
discretizing the remaining rendering parameters using a tessellation shader in the GPU rendering pipeline to generate a first plurality of triangular patch data; and
rendering the target scene based on the first plurality of triangular patch data.
11. The electronic device of claim 9, wherein generating a rendered target scene using at least one shader in the GPU rendering pipeline comprises:
discretizing the remaining rendering parameters using a geometry shader in the GPU rendering pipeline to generate a second plurality of triangular patch data; and
rendering the target scene based on the second plurality of triangular patch data.
12. The electronic device of claim 9, wherein generating a rendered target scene using at least one shader in the GPU rendering pipeline comprises:
discretizing the remaining rendering parameters using a tessellation shader in the GPU rendering pipeline to generate a third plurality of triangular patch data;
generating, using a geometry shader in the GPU rendering pipeline, a fourth plurality of triangular patch data based on the third plurality of triangular patch data, wherein a number of the fourth plurality of triangular patch data is greater than a number of the third plurality of triangular patch data; and
rendering the target scene based on the fourth plurality of triangular patch data.
13. The electronic device of claim 9, the view frustum being determined by:
generating, using a hull shader in the GPU rendering pipeline, bounding boxes for each geometry in the plurality of geometries based on the rendering parameters remaining within the respective buffer;
transforming the bounding box of each geometric shape to a standard device space by matrix transformation, the transformation satisfying display characteristics of normalized device coordinates; and
determining the boundary of the transformation as the boundary of the view frustum.
14. The electronic device of claim 9, wherein the plurality of geometric forms are classified into respective types by a scene management tree that indicates types of geometric forms and relationships between sub-types of each type.
15. The electronic device of claim 9, the types of geometric forms comprising:
an entity, wherein a geometric form belonging to said entity has a geometric shape that can be represented using parameters;
a curved surface, wherein a geometric shape belonging to the curved surface has a curved surface that can be expressed using an equation; and
a triangular patch, to which a geometric shape that belongs to neither the entity nor the curved surface belongs.
16. The electronic device of claim 9, wherein the plurality of geometric forms are represented using Building Information Model (BIM) data.
17. An apparatus for graphics processing unit (GPU)-driven fast geometry rendering, comprising:
a buffer determination module configured to determine, in accordance with respective types and respective rendering attributes of a plurality of geometric shapes in a target scene, a respective buffer in a cache of the GPU, the respective buffer corresponding to a respective GPU rendering pipeline of a plurality of GPU rendering pipelines in the GPU;
a filling module configured to fill rendering parameters of each of the plurality of geometric shapes to the respective buffer according to a predetermined priority;
a rendering parameter removal module configured to generate a view frustum based on the plurality of geometric shapes in the target scene and perspective parameters associated with the target scene, the rendering parameters for geometric shapes outside the view frustum range being removed from the respective buffers using the GPU; and
a rendering module configured to generate a rendered target scene using at least one shader in the GPU rendering pipeline based on remaining rendering parameters in a respective buffer, wherein the remaining rendering parameters represent geometric shapes to be rendered.
18. A computer readable storage medium having one or more computer instructions stored thereon, wherein the one or more computer instructions are executed by a processor to implement the method of any one of claims 1 to 8.
19. A computer program product comprising one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement the method of any one of claims 1 to 8.
CN202210178311.1A (priority 2022-02-25, filed 2022-02-25): GPU-driven fast geometry rendering method. Status: Pending. Publication: CN114581596A (en).

Priority Applications (1)

Application Number: CN202210178311.1A · Priority Date: 2022-02-25 · Filing Date: 2022-02-25 · Title: GPU-driven fast geometry rendering method


Publications (1)

Publication Number: CN114581596A · Publication Date: 2022-06-03

Family

ID=81770239

Family Applications (1)

Application Number: CN202210178311.1A · Priority/Filing Date: 2022-02-25 · Title: GPU-driven fast geometry rendering method (Pending)

Country Status (1)

Country Link
CN (1) CN114581596A (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130127858A1 (en) * 2009-05-29 2013-05-23 Luc Leroy Interception of Graphics API Calls for Optimization of Rendering
US20140043341A1 (en) * 2012-08-09 2014-02-13 Qualcomm Incorporated Gpu-accelerated path rendering
US20140320523A1 (en) * 2013-04-30 2014-10-30 Microsoft Corporation Tessellation of two-dimensional curves using a graphics pipeline
CN109978751A (en) * 2017-12-28 2019-07-05 辉达公司 More GPU frame renderings
CN113012269A (en) * 2019-12-19 2021-06-22 中国科学院深圳先进技术研究院 Three-dimensional image data rendering method and equipment based on GPU
CN112509108A (en) * 2020-12-03 2021-03-16 杭州群核信息技术有限公司 GPU-based vertex ambient light shading generation method and image rendering method
CN112614041A (en) * 2020-12-29 2021-04-06 完美世界(北京)软件科技发展有限公司 Data driving method and device for sparse rendering, storage medium and electronic device
CN112614042A (en) * 2020-12-29 2021-04-06 完美世界(北京)软件科技发展有限公司 Data driving method and device for delayed rendering of map
CN113178014A (en) * 2021-05-27 2021-07-27 网易(杭州)网络有限公司 Scene model rendering method and device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023241210A1 (en) * 2022-06-17 2023-12-21 腾讯科技(深圳)有限公司 Method and apparatus for rendering virtual scene, and device and storage medium
CN117893663A (en) * 2024-03-13 2024-04-16 北京大学 Web graphic rendering performance optimization method based on WebGPU
CN117893663B (en) * 2024-03-13 2024-06-07 北京大学 WebGPU-based Web graphic rendering performance optimization method

Similar Documents

Publication Publication Date Title
US10089774B2 (en) Tessellation in tile-based rendering
CN114581596A (en) GPU-driven fast geometry rendering method
JP2011238213A (en) Hierarchical bounding of displaced parametric curves
US8576225B2 (en) Seamless fracture in a production pipeline
US8471852B1 (en) Method and system for tessellation of subdivision surfaces
CN104933749B (en) Clipping of graphics primitives
US11087511B1 (en) Automated vectorization of a raster image using a gradient mesh with arbitrary topology
CN109934893B (en) Method and device for displaying any cross section of geometric body and electronic equipment
EP3866119B1 (en) Data structures, methods and primitive block generators for storing primitives in a graphics processing system
US9858708B2 (en) Convex polygon clipping during rendering
Chen et al. An improved texture-related vertex clustering algorithm for model simplification
US9607435B2 (en) Method for rendering an image synthesis and corresponding device
CN108010113B (en) Deep learning model execution method based on pixel shader
US11417058B2 (en) Anti-aliasing two-dimensional vector graphics using a multi-vertex buffer
CN110502305B (en) Method and device for realizing dynamic interface and related equipment
CA2847865C (en) System and method for rendering virtual contaminants
US11869123B2 (en) Anti-aliasing two-dimensional vector graphics using a compressed vertex buffer
US11217005B1 (en) Techniques for rendering 2D vector graphics on mobile devices
US8274513B1 (en) System, method, and computer program product for obtaining a boundary attribute value from a polygon mesh, during voxelization
US7268788B2 (en) Associative processing for three-dimensional graphics
Ivo et al. Improved silhouette rendering and detection of splat-based models
WO2022120800A1 (en) Graphics processing method and apparatus, and device and medium
CA2847863C (en) System and method for modeling virtual contaminants
US11586788B2 (en) Efficient shape-accurate finite element mesh visualization
Amor et al. A new architecture for efficient hybrid representation of terrains

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination