CN114612579A - Image rendering method and device, computer equipment and readable storage medium - Google Patents


Info

Publication number
CN114612579A
CN114612579A
Authority
CN
China
Prior art keywords
rendering
target scene
channel
data
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210187253.9A
Other languages
Chinese (zh)
Inventor
孙思远
王月
冯星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202210187253.9A
Publication of CN114612579A


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/90 — Determination of colour characteristics
    • G06T 15/00 — 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Generation (AREA)

Abstract

The application discloses an image rendering method, an image rendering apparatus, computer equipment, and a readable storage medium, relating to the technical field of image processing. A plurality of rendering commands for rendering a target scene are packaged to obtain a rendering command instruction set containing call sequence identifiers; the rendering command instruction set, preset target scene rendering channel data, and preset target scene frame cache data are sent to a graphics processor; the graphics processor obtains target scene rendering data by sequentially calling the rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data; and the target scene rendering data is sent to a memory or a video memory. The method and apparatus can effectively reduce the interaction workload between the CPU and the GPU, thereby effectively reducing rendering power consumption.

Description

Image rendering method and device, computer equipment and readable storage medium
The application is a divisional application of Chinese patent application No. 202011508323.3, entitled "Image rendering method and device, computer equipment and readable storage medium", filed with the Chinese Patent Office on December 18, 2020.
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image rendering method and apparatus, a computer device, and a readable storage medium.
Background
On current mainstream mobile devices running the Android platform, there are two main schemes for designing a multisample anti-aliasing rendering process. In the first technical scheme, rendering is implemented with OpenGL ES on the Android platform: a three-dimensional scene is rendered by calling the function interfaces provided by OpenGL ES, and adding multisample anti-aliasing to the rendering process requires the OpenGL ES extension functions glFramebufferTexture2DMultisampleEXT or glFramebufferTexture2DMultisampleIMG. In the second technical scheme, the scene is rendered with Vulkan on the Android platform, and rendering with multisample anti-aliasing comprises three steps: rendering the opaque objects of the three-dimensional scene, performing multisample blending on depth to obtain a single-sample depth map, and rendering the transparent objects, where each step needs to record and submit its rendering commands separately.
In the related prior art, the applicant found at least the following problems. For the first technical scheme, which renders with OpenGL ES: compared with Vulkan, OpenGL ES has weaker rendering performance and caching capability, and the multisample anti-aliasing technique is more difficult to implement. For the second technical scheme, which renders the scene in three steps: the CPU records the rendering commands of each step and submits them to the GPU separately, so there is more interaction between the CPU and the GPU, rendering power consumption is higher, and the utilization of the on-chip cache of a mobile-platform GPU is lower.
Disclosure of Invention
In view of this, the present application provides an image rendering method, an image rendering apparatus, a computer device, and a readable storage medium, mainly aiming to solve the following technical problems: existing scene rendering with OpenGL ES suffers from weak rendering performance and caching capability, and in particular the multisample anti-aliasing technique is difficult to implement; and existing scene rendering with Vulkan involves heavy interaction between the CPU and the GPU and high rendering power consumption.
According to an aspect of the present application, there is provided an image rendering method including:
packaging a plurality of rendering commands for rendering a target scene to obtain a rendering command instruction set containing a calling sequence identifier, wherein the calling sequence identifier is used for representing the calling sequence of each rendering command;
sending the rendering command instruction set, preset target scene rendering channel data and preset target scene frame cache data to a graphics processor;
the graphics processor obtains target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data;
and sending the target scene rendering data to a memory or a video memory.
According to another aspect of the present application, there is provided an image rendering apparatus including:
the packaging module, used for packaging a plurality of rendering commands for rendering a target scene to obtain a rendering command instruction set containing call sequence identifiers, where the call sequence identifiers are used to represent the call order of each rendering command;
the first sending module is used for sending the rendering command instruction set, preset target scene rendering channel data and preset target scene frame cache data to a graphics processor;
the rendering module is used for the graphics processor to obtain target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data;
and the second sending module is used for sending the target scene rendering data to a memory or a video memory.
According to yet another aspect of the present application, there is provided a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of the image rendering method described above when executing the computer program.
According to yet another aspect of the present application, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the image rendering method described above.
By means of the above technical solution, the image rendering method, image rendering apparatus, computer device, and readable storage medium provided by the application package the rendering commands for rendering a target scene to obtain a rendering command instruction set containing call sequence identifiers, where the call sequence identifiers represent the call order of each rendering command. The rendering command instruction set, the preset target scene rendering channel data, and the preset target scene frame cache data are then sent to the graphics processor, so that the graphics processor can obtain the target scene rendering data by sequentially calling the rendering commands in the instruction set according to the target scene rendering channel data and the target scene frame cache data, and send the target scene rendering data to the memory or the video memory. Compared with the existing way of rendering a scene with OpenGL ES or Vulkan, this improves on the approach in which the CPU submits the rendering commands recorded in each step to the GPU separately: the CPU sends the rendering commands for rendering the target scene, the target scene rendering channel data, and the target scene frame cache data to the GPU at one time, in the form of a packaged rendering command instruction set, so that the GPU can render the target scene according to the rendering commands in the instruction set, the target scene rendering channel data, and the target scene frame cache data, obtain the target scene rendering data, and send it to a memory or video memory. While avoiding the weak rendering performance and caching capability of OpenGL ES-based scene rendering, the optimization of the rendering engine architecture effectively reduces the interaction workload between the CPU and the GPU, thereby effectively reducing rendering power consumption.
The foregoing description is only an overview of the technical solutions of the present application. In order to make the technical means of the present application more clearly understood, so that it can be implemented according to the content of the description, and in order to make the above and other objects, features, and advantages of the present application more comprehensible, a detailed description of the present application is given below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a flowchart of an image rendering method provided in an embodiment of the present application;
FIG. 2 is a flowchart illustrating another image rendering method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram illustrating an image rendering apparatus according to an embodiment of the present application;
fig. 4 shows a schematic structural diagram of another image rendering apparatus provided in an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
This embodiment addresses the technical problems that existing scene rendering with OpenGL ES has weak rendering performance and caching capability, and that existing scene rendering with Vulkan involves heavy interaction between the CPU and the GPU and high rendering power consumption. The embodiment provides an image rendering method which, by optimizing the rendering engine architecture on the Android platform, can effectively reduce the workload of interaction between the CPU and the GPU and thereby effectively reduce rendering power consumption. As shown in fig. 1, the method includes:
101. Package a plurality of rendering commands for rendering a target scene to obtain a rendering command instruction set containing call sequence identifiers.
In this embodiment, the CPU records a plurality of rendering commands for rendering a target scene in a single Command Buffer and packages the rendering commands in that Command Buffer to obtain a rendering command instruction set containing call sequence identifiers, where a call sequence identifier represents the call order of each rendering command. In other words, the CPU records all the rendering commands for one target scene rendering in one Command Buffer, so that they can be packaged and sent to the GPU together. This differs from the prior art, in which the CPU stores the recorded rendering commands for rendering the target scene in different Command Buffers, and the rendering commands in the corresponding Command Buffer are transmitted to the GPU in turn during rendering to execute each rendering step, resulting in a large amount of information interaction between the CPU and the GPU. Packaging the rendering commands effectively reduces this interaction workload; under the optimized rendering engine architecture, rendering efficiency is improved, the work of the CPU is simplified, and the running loss of the mobile device is effectively reduced, which in turn alleviates the problem of the mobile device heating up too quickly.
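The packaging step above can be sketched as a simple data structure: each recorded command carries a call-sequence identifier, and the whole set is handed to the GPU side in one submission. This is a schematic model only; the type and function names below are hypothetical illustrations, not Vulkan API identifiers.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_COMMANDS 16

/* One recorded rendering command, stamped with its call sequence identifier. */
typedef struct {
    int call_order;       /* call sequence identifier */
    const char *name;     /* e.g. "draw_opaque", "resolve_depth" */
} RenderCommand;

/* The packaged "rendering command instruction set". */
typedef struct {
    RenderCommand commands[MAX_COMMANDS];
    size_t count;
} CommandSet;

/* Record one command into the set, stamping it with the next order id. */
void record_command(CommandSet *set, const char *name) {
    if (set->count < MAX_COMMANDS) {
        set->commands[set->count].call_order = (int)set->count;
        set->commands[set->count].name = name;
        set->count++;
    }
}

/* "Submit" replays every command in call order with a single hand-off,
 * instead of one CPU-to-GPU submission per command. Returns the number
 * of commands replayed in order. */
size_t submit_packaged(const CommandSet *set) {
    size_t replayed = 0;
    for (size_t i = 0; i < set->count; i++) {
        if (set->commands[i].call_order == (int)i)
            replayed++;
    }
    return replayed;
}
```

The point of the sketch is the single `submit_packaged` call: however many commands were recorded, there is exactly one hand-off, which is the interaction reduction the embodiment claims.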
102. Send the rendering command instruction set, the preset target scene rendering channel data, and the preset target scene frame cache data to the graphics processor.
In this embodiment, image rendering is mainly used to render a target scene in one frame of image. The CPU sets the corresponding rendering channel attribute information and frame buffer attribute information by creating a Vulkan rendering channel VkRenderPass used for scene rendering and a Vulkan frame buffer VkFramebuffer used for rendering the three-dimensional scene. VkRenderPass and VkFramebuffer are Vulkan types; VkFramebuffer is the Vulkan type created under its frame buffer mechanism, which lets the CPU manage these resources through abstract logic and send the result to the GPU to achieve the corresponding rendering effect.
103. The graphics processor obtains the target scene rendering data by sequentially calling the rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data.
In this embodiment, the rendering command instruction set includes a plurality of rendering commands for rendering the target scene. According to the rendering command instruction set received at one time, the GPU obtains the target scene rendering data by sequentially calling the corresponding rendering commands in the instruction set while rendering the target scene with the target scene rendering channel data and the target scene frame buffer data.
According to the requirements of the actual application scenario, the CPU calls the vkCmdBeginRenderPass command to trigger the start of the Vulkan rendering process and sets the VkRenderPass and Vulkan frame buffer used in rendering the target scene, so that the configured target scene rendering channel data and target scene frame buffer data are sent to the GPU together with the rendering command instruction set to render the target scene. Using Vulkan's multi-rendering-channel MultiRenderPass mechanism, the GPU then sequentially executes, according to all the rendering commands for the target scene: a first rendering pass that renders the opaque objects to obtain a first rendering result (containing a multisample depth rendering result and anti-aliased color information); a second rendering pass that performs multisample information fusion on the multisample depth rendering result in the first rendering result to obtain a second rendering result (containing the anti-aliased depth information, i.e., a single-sample depth rendering result); and a third rendering pass that renders the transparent objects on top of the second rendering result to obtain a third rendering result, which serves as the target scene rendering data.
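The three rendering passes above can be modeled schematically. The sketch below fuses the multisample depth samples by taking the nearest (minimum) value; the real fusion rule is engine- and driver-specific, and all names here are hypothetical.

```c
#include <assert.h>

#define SAMPLES 4

/* Pass 1 (opaque geometry) yields, per pixel, SAMPLES depth samples.
 * Pass 2 fuses them into one single-sample depth value; this sketch
 * keeps the nearest sample as an illustrative fusion rule. */
float resolve_depth(const float samples[SAMPLES]) {
    float d = samples[0];
    for (int i = 1; i < SAMPLES; i++)
        if (samples[i] < d)
            d = samples[i];
    return d;
}

/* Pass 3 (transparent geometry): a transparent fragment is kept only if
 * it lies in front of the resolved opaque depth, i.e. a standard depth
 * test against the single-sample depth map produced by pass 2. */
int transparent_visible(float frag_depth, float resolved_depth) {
    return frag_depth < resolved_depth;
}
```

This makes the data dependency explicit: pass 3 cannot run until pass 2 has produced the single-sample depth map, which is exactly the ordering the subpass dependencies enforce later in the document.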
104. Send the target scene rendering data to a memory or a video memory.
In this embodiment, the target scene rendering data includes the color rendering information and depth rendering information of the target scene. After rendering of the target scene based on the multisample anti-aliasing technique is completed, the corresponding target scene rendering data is written into the memory or the video memory: the color rendering information of the target scene is stored in the non-multisample color render target resource ColorTarget of the memory or video memory, and the depth rendering information of the target scene is stored in the non-multisample depth render target resource DepthTarget.
By applying the technical solution of this embodiment, a rendering command instruction set for rendering a target scene, the preset target scene rendering channel data, and the preset target scene frame cache data are sent to the graphics processor; the graphics processor obtains the target scene rendering data by sequentially calling the rendering commands in the instruction set according to the target scene rendering channel data and the target scene frame cache data; and the target scene rendering data is sent to a memory or a video memory. Compared with the existing methods of rendering a scene with OpenGL ES or Vulkan, this embodiment, on the basis of rendering the scene with the multisample anti-aliasing technique, has the CPU send the rendering commands for rendering the target scene to the GPU at one time as a packaged rendering command instruction set, so that the GPU can render the target scene according to the rendering commands in the instruction set, the target scene rendering channel data, and the target scene frame cache data, obtain the target scene rendering data, and send it to the video memory. While avoiding the weak rendering performance and caching capability of implementing the multisample anti-aliasing technique on top of OpenGL ES-based scene rendering, and the high difficulty of that implementation, the optimized rendering engine architecture effectively reduces the interaction workload between the CPU and the GPU, thereby effectively reducing rendering power consumption.
Based on the foregoing principle, as a refinement and an extension of the above specific implementation of the embodiment shown in fig. 1, the present embodiment further provides another image rendering method, as shown in fig. 2, where the method includes:
201. and packaging the rendering commands for rendering the target scene to obtain a rendering command instruction set containing the calling sequence identifier.
202. Create, in the on-chip cache of the graphics processor, a cache resource used for storing the rendering results generated in the rendering process, where the rendering results include a first rendering result output by a first sub-rendering channel used for rendering the opaque objects, a second rendering result output by a second sub-rendering channel used for performing multisample information fusion processing on the multisample depth rendering result in the first rendering result, and a third rendering result output by a third sub-rendering channel used for rendering the transparent objects.
In a specific implementation, the on-chip cache of the GPU refers to the cache memory located on the GPU chip. By caching the rendering results generated during the whole target scene rendering process in the created cache resource, requests to the memory controller are filtered, accesses to the video memory are reduced, and video memory bandwidth consumption is decreased.
203. The central processing unit creates a Vulkan rendering channel and a Vulkan frame buffer for setting the target scene rendering channel data and the target scene frame cache data, which specifically includes: the central processing unit creates the Vulkan rendering channel according to a preset attachment description array, and creates the Vulkan frame buffer according to the Vulkan rendering channel and its attachment description array, where the attachment description array corresponds one-to-one to the Vulkan frame buffer format.
Further, as an optional mode, the method specifically includes: the target scene rendering channel data comprises attribute information for performing attribute setting on a Vulkan rendering channel by using a multi-rendering flow mechanism, wherein the Vulkan rendering channel comprises a first sub-rendering channel for rendering opaque objects, a second sub-rendering channel for performing multi-sampling information fusion processing on multi-sampling depth rendering results in first rendering results output by the first sub-rendering channel, and a third sub-rendering channel for rendering transparent objects.
Further, as an optional mode, the method specifically includes: the target scene rendering channel data comprises an attachment description array of a Vulkan rendering channel created in a central processing unit, and index relations among the first sub-rendering channel, the second sub-rendering channel and the third sub-rendering channel are established according to element index information in the attachment description array.
In a specific implementation, the CPU uses the multi-rendering-flow mechanism and Vulkan functions to create a Vulkan rendering channel VkRenderPass for target scene rendering. The index relationship includes the index relationship of the attachment elements and of their layout attributes; specifically, a rendering channel for rendering the target scene is created with a Vulkan function according to the attachment description array and used as the target scene rendering channel. Setting the rendering channel data specifically includes creating an attachment description VkAttachmentDescription array of the Vulkan rendering channel in the CPU, denoted vkAttachments, which contains 4 elements with element indices 0, 1, 2, and 3. Setting the element index information in vkAttachments specifically includes the following.
the index relationship of the attachment element is set so that a subsequently created child rendering channel can invoke corresponding data information based on the index attribute value of the attachment element. Specifically, the member loadOp and the member stenilloadop of the 4 elements are both set to VK _ ATTACHMENT _ LOAD _ OP _ don _ CARE to set the operation behavior of the pre-rendering data and the template data at the corresponding attachment, i.e., the existing content is undefined, allowing the driver to be discarded or deleted without saving the content, and the member stenilstoreop is set to VK _ ATTACHMENT _ STORE _ OP _ don _ CARE to set the operation behavior of the post-rendering template data at the corresponding attachment, i.e., the existing content is undefined, allowing the driver to be discarded or deleted without saving the content. Further, the member StoreOp attribute of the index 0 and index 1 elements is set to VK _ ATTACHMENT _ STORE _ OP _ LOAD to set to save the content already existing in the current attachment, and the samples attribute value refers to the number of sample points being 1. The member StoreOp attributes of the index 2 and index 3 elements are set to be VK-ATTACHMENT-STORE-OP-DONT-CARE, so that the operation behavior of the rendered data in the corresponding attachment is set, namely the existing content is undefined, the driver is allowed to discard or delete without saving the content, the samples attribute value is the number n of sampling points, and n can be set to be 2 or 4.
The index relationship of the layout attributes is set so that a subsequently created sub-rendering channel can invoke the corresponding data information based on the index attribute values of the layout attributes. Specifically, the initialLayout attribute of the index-0 and index-2 elements is set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, and their format attribute is set to VK_FORMAT_R8G8B8A8_UNORM. The initialLayout attribute of the index-1 and index-3 elements is set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL, and their format attribute is set to VK_FORMAT_D32_SFLOAT. The finalLayout attribute of the index-0 and index-1 elements is set to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, the finalLayout attribute of the index-2 element is set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, and the finalLayout attribute of the index-3 element is set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
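The attachment setup enumerated above can be sketched in code. Stand-in type and enum definitions are used so the sketch compiles without the Vulkan SDK; a real application would take all of these from <vulkan/vulkan.h>, where the actual enum values differ.

```c
#include <assert.h>

/* Stand-ins for the Vulkan SDK definitions; values are placeholders. */
typedef enum { VK_ATTACHMENT_LOAD_OP_DONT_CARE } VkAttachmentLoadOp;
typedef enum { VK_ATTACHMENT_STORE_OP_STORE,
               VK_ATTACHMENT_STORE_OP_DONT_CARE } VkAttachmentStoreOp;
typedef enum { VK_FORMAT_R8G8B8A8_UNORM, VK_FORMAT_D32_SFLOAT } VkFormat;
typedef enum { VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
               VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
               VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL } VkImageLayout;

typedef struct {
    VkFormat format;
    int samples;
    VkAttachmentLoadOp loadOp, stencilLoadOp;
    VkAttachmentStoreOp storeOp, stencilStoreOp;
    VkImageLayout initialLayout, finalLayout;
} VkAttachmentDescription;

VkAttachmentDescription vkAttachments[4];

/* Fill the four attachments as described above:
 * 0: single-sample color, 1: single-sample depth,
 * 2: n-sample color,      3: n-sample depth (n = 2 or 4). */
void build_attachments(int n) {
    for (int i = 0; i < 4; i++) {
        vkAttachments[i].loadOp         = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
        vkAttachments[i].stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
        vkAttachments[i].stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
    }
    vkAttachments[0].storeOp = vkAttachments[1].storeOp = VK_ATTACHMENT_STORE_OP_STORE;
    vkAttachments[0].samples = vkAttachments[1].samples = 1;
    vkAttachments[2].storeOp = vkAttachments[3].storeOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
    vkAttachments[2].samples = vkAttachments[3].samples = n;

    vkAttachments[0].format = vkAttachments[2].format = VK_FORMAT_R8G8B8A8_UNORM;
    vkAttachments[0].initialLayout = vkAttachments[2].initialLayout =
        VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    vkAttachments[1].format = vkAttachments[3].format = VK_FORMAT_D32_SFLOAT;
    vkAttachments[1].initialLayout = vkAttachments[3].initialLayout =
        VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;

    vkAttachments[0].finalLayout = vkAttachments[1].finalLayout =
        VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
    vkAttachments[2].finalLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    vkAttachments[3].finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
}
```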
Further, the description information of the three sub-rendering channels (SubRenderPass) required for rendering the target scene is obtained. Specifically, a Vulkan subpass description VkSubpassDescription array is created, containing 3 elements, each of which describes one sub-rendering channel. Setting the description information of each sub-rendering channel specifically includes the following.
vkSubpassDescs[0] is the element describing the rendering of the opaque objects of the scene, i.e. the first sub-rendering channel. Specifically, the colorAttachmentCount attribute of vkSubpassDescs[0] is set to 1; its pColorAttachments attribute contains 1 VkAttachmentReference element whose attachment attribute value is 2, pointing to the attachment at the specified index position (element index 2 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL. The pResolveAttachments attribute of vkSubpassDescs[0], used for the multisample anti-aliasing processing of the color attachment, contains a VkAttachmentReference element whose attachment attribute value is 0, pointing to the attachment at the specified index position (element index 0 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL. The pDepthStencilAttachment attribute of vkSubpassDescs[0], used for depth and stencil data, contains a VkAttachmentReference element whose attachment attribute value is 3, pointing to the attachment at the specified index position, with the layout attribute VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
vkSubpassDescs[1] is the element describing the blending of the multisample depth rendering result (the first rendering result output by the first sub-rendering channel) into a single-sample depth result, i.e. the second sub-rendering channel, which fuses the multisample depth rendering result into the single-sample depth rendering result to obtain the second rendering result. Specifically, the inputAttachmentCount attribute value of vkSubpassDescs[1] is set to 1; its pInputAttachments attribute, for the multisample depth rendering result read from the shader, contains a VkAttachmentReference element whose attachment attribute value is 3, pointing to the attachment at the specified index position, with the layout attribute set to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL. The pDepthStencilAttachment attribute of vkSubpassDescs[1], for depth and stencil data, contains a VkAttachmentReference element whose attachment attribute value is 1, pointing to the attachment at the specified index position (element index 1 in vkAttachments), with the layout attribute VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
vkSubpassDescs[2] is the element describing the rendering of the transparent objects of the scene, i.e. the third sub-rendering channel. Specifically, the colorAttachmentCount attribute of vkSubpassDescs[2] is set to 1; its pColorAttachments attribute contains 1 VkAttachmentReference element whose attachment attribute value is 0, pointing to the attachment at the specified index position (element index 0 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, so that this attachment serves as the color buffer. The pDepthStencilAttachment attribute of vkSubpassDescs[2], for depth and stencil data, contains a VkAttachmentReference element whose attachment attribute value is 1, pointing to the attachment at the specified index position (element index 1 in vkAttachments), with the layout attribute VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
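The three subpass descriptions above can be sketched as follows, again with stand-in type definitions so the sketch compiles without the Vulkan SDK; real code uses the types from <vulkan/vulkan.h>.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the Vulkan SDK definitions; values are placeholders. */
typedef enum { VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
               VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
               VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL } VkImageLayout;

typedef struct { unsigned attachment; VkImageLayout layout; } VkAttachmentReference;

typedef struct {
    unsigned colorAttachmentCount, inputAttachmentCount;
    const VkAttachmentReference *pColorAttachments;
    const VkAttachmentReference *pResolveAttachments;
    const VkAttachmentReference *pInputAttachments;
    const VkAttachmentReference *pDepthStencilAttachment;
} VkSubpassDescription;

/* Attachment references, indexing into the vkAttachments array:
 * 2 = multisample color, 0 = single-sample (resolve target) color,
 * 3 = multisample depth, 1 = single-sample depth. */
static const VkAttachmentReference msColor   = {2, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};
static const VkAttachmentReference resolve0  = {0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};
static const VkAttachmentReference msDepth   = {3, VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL};
static const VkAttachmentReference msDepthIn = {3, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL};
static const VkAttachmentReference ssDepth   = {1, VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL};
static const VkAttachmentReference ssColor   = {0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};

VkSubpassDescription vkSubpassDescs[3];

void build_subpasses(void) {
    /* Subpass 0: opaque geometry; multisample color + depth, resolved to 0. */
    vkSubpassDescs[0].colorAttachmentCount = 1;
    vkSubpassDescs[0].pColorAttachments = &msColor;
    vkSubpassDescs[0].pResolveAttachments = &resolve0;
    vkSubpassDescs[0].pDepthStencilAttachment = &msDepth;
    /* Subpass 1: read multisample depth as input, write single-sample depth. */
    vkSubpassDescs[1].inputAttachmentCount = 1;
    vkSubpassDescs[1].pInputAttachments = &msDepthIn;
    vkSubpassDescs[1].pDepthStencilAttachment = &ssDepth;
    /* Subpass 2: transparent geometry against resolved color and depth. */
    vkSubpassDescs[2].colorAttachmentCount = 1;
    vkSubpassDescs[2].pColorAttachments = &ssColor;
    vkSubpassDescs[2].pDepthStencilAttachment = &ssDepth;
}
```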
Further, as an optional mode, the method specifically further includes: and establishing a rendering sequence among the first sub-rendering channel, the second sub-rendering channel and the third sub-rendering channel by establishing a sub-rendering channel dependency array in a central processing unit.
In a specific implementation, the resource dependency relationships among the sub-rendering channels can be specified through VkSubpassDependency structures. This embodiment specifies that the rendering order among the first, second, and third sub-rendering channels is sequential: after the first rendering pass corresponding to the first sub-rendering channel is completed, the second rendering pass corresponding to the second sub-rendering channel is executed, and after the second rendering pass is completed, the third rendering pass corresponding to the third sub-rendering channel is executed. Specifically, a subpass dependency VkSubpassDependency array is created, denoted vkSubDependencies, containing 2 elements.
The settings for the VkSubpassDependency array are as follows. The srcStageMask and dstStageMask attributes specify which pipeline stages generate data and which use it. For both elements, the srcStageMask attribute is set to VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, designating the dependency's source pipeline stage as the color attachment output stage; the dstStageMask attribute is set to VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, designating the dependency's target pipeline stage as the fragment shader, i.e. the fragment shader stage must wait until the previous sub-rendering channel finishes executing the color attachment output stage before it can continue. The srcAccessMask and dstAccessMask attributes specify how the source and target sub-rendering channels access the data. For both elements, the srcAccessMask attribute is set to VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT and the dstAccessMask attribute is set to VK_ACCESS_SHADER_READ_BIT, i.e. the shader read operation is executed after the color attachment write operation is completed.
Further, the dependencyFlags attribute is set to VK_DEPENDENCY_BY_REGION_BIT, specifying that this dependency occurs in frame buffer space. The srcSubpass and dstSubpass attributes are indexes into the sub-rendering channel array that composes the rendering channel; that is, the srcSubpass attribute of vkSubDependencies[0] is set to 0 and its dstSubpass attribute to 1, so that the attachment transitions from a color attachment write to a shader input attachment read; the srcSubpass attribute of vkSubDependencies[1] is set to 1 and its dstSubpass attribute to 2.
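The dependency settings above can be sketched in C as follows; the wrapper function and the use of a loop are illustrative assumptions, since both elements differ only in their subpass indexes.

```c
#include <vulkan/vulkan.h>

/* Dependency chain: subpass 0 -> 1 and subpass 1 -> 2.  Each edge makes
 * the consumer's fragment shader wait for the producer's color
 * attachment output, per-region within the frame buffer. */
void fillSubpassDependencies(VkSubpassDependency vkSubDependencies[2])
{
    for (int i = 0; i < 2; ++i) {
        vkSubDependencies[i].srcSubpass      = (uint32_t)i;
        vkSubDependencies[i].dstSubpass      = (uint32_t)i + 1;
        vkSubDependencies[i].srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
        vkSubDependencies[i].dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
        vkSubDependencies[i].srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
        vkSubDependencies[i].dstAccessMask   = VK_ACCESS_SHADER_READ_BIT;
        vkSubDependencies[i].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
    }
}
```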
Further, according to the attachment description array, a rendering channel for rendering the target scene is created and used as the target scene rendering channel. Specifically, the values of a VkRenderPassCreateInfo element, denoted vkRpInfo, are set as follows: its sType attribute value is VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO, its pNext attribute value is nullptr, its attachmentCount attribute value is 4, its pAttachments attribute value is vkAttachments, its subpassCount attribute value is 3, its pSubpasses attribute value is vkSubpassDescs, its dependencyCount attribute value is 2, and its pDependencies attribute value is vkSubDependencies. A rendering channel VkRenderPass for scene rendering is then created based on the attribute information set in vkRpInfo, and recorded as the target scene rendering channel sceneRenderPass.
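A sketch of the render pass creation step, assuming a valid VkDevice handle; error handling of the VkResult is omitted for brevity.

```c
#include <vulkan/vulkan.h>

VkRenderPass createSceneRenderPass(VkDevice device,
                                   const VkAttachmentDescription vkAttachments[4],
                                   const VkSubpassDescription vkSubpassDescs[3],
                                   const VkSubpassDependency vkSubDependencies[2])
{
    VkRenderPassCreateInfo vkRpInfo = {
        .sType           = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO,
        .pNext           = NULL,  /* nullptr in the C++ description */
        .attachmentCount = 4,
        .pAttachments    = vkAttachments,
        .subpassCount    = 3,
        .pSubpasses      = vkSubpassDescs,
        .dependencyCount = 2,
        .pDependencies   = vkSubDependencies,
    };
    VkRenderPass sceneRenderPass = VK_NULL_HANDLE;
    /* A real implementation would check the returned VkResult. */
    vkCreateRenderPass(device, &vkRpInfo, NULL, &sceneRenderPass);
    return sceneRenderPass;
}
```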
In a specific implementation, a Vulkan frame buffer VkFramebuffer for target scene rendering is created. Specifically, a VkFramebuffer for rendering the three-dimensional scene is created, that is, a frame buffer compatible with the rendering channel RenderPass, with the same number and types of attachments. The width and height of the target scene to be rendered (generally the screen resolution of the mobile device) are recorded as OriginW and OriginH, respectively. With width OriginW and height OriginH, multisampled rendering target resources are created, including a color multisampled rendering target resource MSColorTarget and a depth multisampled rendering target resource MSDepthTarget, so that the color multisampling rendering results are stored into the created MSColorTarget and the depth multisampling rendering results into the created MSDepthTarget. Since the multisampling resources are intermediate data generated during rendering, the MTLStorageMode parameter corresponding to MSColorTarget and MSDepthTarget is set to memoryless, and the number of sampling points is set to n, which may be 2 or 4.
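The text refers to a memoryless storage mode via MTLStorageMode, which is a Metal term; in Vulkan, the closest analogue (an assumption on our part, not stated in the text) is a transient attachment backed by lazily allocated memory. A hedged sketch of creating such a multisampled color target, with an assumed format and n = 4 samples:

```c
#include <string.h>
#include <vulkan/vulkan.h>

/* Sketch: a transient multisampled color target of size OriginW x OriginH.
 * VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT, paired with memory allocated
 * with VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT, approximates Metal's
 * memoryless storage mode mentioned in the text. */
void fillMSColorTargetInfo(VkImageCreateInfo *info,
                           uint32_t OriginW, uint32_t OriginH)
{
    memset(info, 0, sizeof(*info));
    info->sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
    info->imageType     = VK_IMAGE_TYPE_2D;
    info->format        = VK_FORMAT_R8G8B8A8_UNORM;   /* assumed format */
    info->extent        = (VkExtent3D){ OriginW, OriginH, 1 };
    info->mipLevels     = 1;
    info->arrayLayers   = 1;
    info->samples       = VK_SAMPLE_COUNT_4_BIT;      /* n = 4 */
    info->tiling        = VK_IMAGE_TILING_OPTIMAL;
    info->usage         = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
                          VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT;
    info->sharingMode   = VK_SHARING_MODE_EXCLUSIVE;
    info->initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
}
```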
Further, with width OriginW and height OriginH, non-multisampled rendering target resources are created, including a non-multisampled color rendering target resource ColorTarget and a non-multisampled depth rendering target resource DepthTarget, so that the non-multisampled color rendering result and the non-multisampled depth rendering result produced by the multisampling anti-aliasing processing are stored into ColorTarget and DepthTarget, respectively.
Further, a VkImageView array is created, denoted attachments, comprising 4 elements: attachments[0] is the VkImageView of the non-multisampled color rendering target resource ColorTarget, attachments[1] is the VkImageView of the non-multisampled depth rendering target resource DepthTarget, attachments[2] is the VkImageView of the multisampled color rendering target resource MSColorTarget, and attachments[3] is the VkImageView of the multisampled depth rendering target resource MSDepthTarget.
Further, a Vulkan frame buffer VkFramebuffer for target scene rendering is created according to the attachment description array and serves as the target scene frame buffer, where the attachment description array corresponds to the frame buffer format one to one. Specifically, a variable of the VkFramebufferCreateInfo type, denoted frameBufferInfo, is set as follows: its sType attribute value is VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO, its pNext attribute value is nullptr, its renderPass attribute value is sceneRenderPass, its pAttachments attribute value is attachments, its attachmentCount attribute value is 4, its layers attribute value is 1, its width attribute value is OriginW, and its height attribute value is OriginH; that is, the attachment indexes used by the different sub-rendering channels follow the same index relationship as the array used when the frame buffer is created. The VkFramebuffer is then created based on the information set in frameBufferInfo, and this frame buffer is taken as the target scene frame buffer sceneFrameBuffer.
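The frame buffer creation step can be sketched in C as follows, assuming a valid VkDevice and the sceneRenderPass and attachments array described above; the wrapper function is illustrative.

```c
#include <vulkan/vulkan.h>

VkFramebuffer createSceneFrameBuffer(VkDevice device,
                                     VkRenderPass sceneRenderPass,
                                     const VkImageView attachments[4],
                                     uint32_t OriginW, uint32_t OriginH)
{
    VkFramebufferCreateInfo frameBufferInfo = {
        .sType           = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
        .pNext           = NULL,
        .renderPass      = sceneRenderPass,  /* must be compatible */
        .attachmentCount = 4,
        .pAttachments    = attachments,      /* same order as vkAttachments */
        .width           = OriginW,
        .height          = OriginH,
        .layers          = 1,
    };
    VkFramebuffer sceneFrameBuffer = VK_NULL_HANDLE;
    /* A real implementation would check the returned VkResult. */
    vkCreateFramebuffer(device, &frameBufferInfo, NULL, &sceneFrameBuffer);
    return sceneFrameBuffer;
}
```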
204. And sending the rendering command instruction set, preset target scene rendering channel data and preset target scene frame cache data to a graphics processor.
In the foregoing embodiment, as an optional manner, the rendering commands include a first rendering command and a first vkCmdNextSubpass command for rendering the opaque objects corresponding to the first sub-rendering channel, a second rendering command and a second vkCmdNextSubpass command for performing multi-sampling information fusion processing on the multi-sampling depth rendering result corresponding to the second sub-rendering channel, and a third rendering command and a vkCmdEndRenderPass command for rendering the transparent objects corresponding to the third sub-rendering channel.
In a specific implementation, when the CPU calls the vkCmdBeginRenderPass command, the VkRenderPass and VkFramebuffer in the rendering flow are correspondingly set using the obtained target scene rendering channel sceneRenderPass and target scene frame buffer sceneFrameBuffer, completing the preparation for target scene rendering. The recording process of the target scene rendering commands specifically includes:
executing the first sub-rendering process, recording the rendering commands for rendering the opaque objects, and recording a vkCmdNextSubpass command after the first sub-rendering process is executed; executing the second sub-rendering process, blending the multisampled depth rendering results to obtain a single-sample depth result (in the shading-language implementation, the subpassLoad function is called and the sample value at index 0 of the multisampled depth rendering target resource MSDepthTarget is taken as the blended depth value), and recording a vkCmdNextSubpass command after the second sub-rendering process is executed; and executing the third sub-rendering process, recording the rendering commands for rendering the transparent objects, and recording a vkCmdEndRenderPass command after the third sub-rendering process is executed.
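The recording sequence above can be sketched in C as follows. The helpers drawOpaque, resolveDepth, and drawTransparent are hypothetical placeholders for the per-subpass draw recording; only the begin/next/end structure is taken from the text.

```c
#include <vulkan/vulkan.h>

/* Hypothetical per-subpass recording helpers (not part of the text). */
void drawOpaque(VkCommandBuffer cmd);
void resolveDepth(VkCommandBuffer cmd);   /* fullscreen pass using subpassLoad */
void drawTransparent(VkCommandBuffer cmd);

/* Record the three sub-rendering processes into one command buffer. */
void recordSceneCommands(VkCommandBuffer cmd, VkRenderPass sceneRenderPass,
                         VkFramebuffer sceneFrameBuffer, VkRect2D area)
{
    VkRenderPassBeginInfo beginInfo = {
        .sType       = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
        .renderPass  = sceneRenderPass,
        .framebuffer = sceneFrameBuffer,
        .renderArea  = area,
    };
    vkCmdBeginRenderPass(cmd, &beginInfo, VK_SUBPASS_CONTENTS_INLINE);
    drawOpaque(cmd);        /* first sub-rendering process */
    vkCmdNextSubpass(cmd, VK_SUBPASS_CONTENTS_INLINE);
    resolveDepth(cmd);      /* second sub-rendering process */
    vkCmdNextSubpass(cmd, VK_SUBPASS_CONTENTS_INLINE);
    drawTransparent(cmd);   /* third sub-rendering process */
    vkCmdEndRenderPass(cmd);
}
```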
205. And the graphics processor performs opaque object rendering on the target scene by using the first sub-rendering channel according to the first rendering command to obtain a first rendering result.
206. And performing multi-sampling information fusion processing on the target scene by using a second sub-rendering channel according to the second rendering command and the obtained multi-sampling depth rendering result in the first rendering result in the cache resource to obtain a second rendering result.
207. And performing transparent object rendering on the target scene by using a third sub-rendering channel according to the third rendering command and the obtained second rendering result in the cache resource, wherein the obtained third rendering result is used as target scene rendering data.
208. And the graphics processor sends the third rendering result as target scene rendering data to a memory or a video memory.
In specific implementation, after the GPU completes rendering of the target scene, the color rendering data of the target scene subjected to the multi-sampling antialiasing processing is saved in the non-multi-sampling color rendering target resource ColorTarget, and the depth rendering data of the target scene is saved in the non-multi-sampling depth rendering target resource DepthTarget.
According to the requirements of the actual application scene, the color rendering data and depth rendering data of the target scene in the non-multisampled color rendering target resource ColorTarget and the non-multisampled depth rendering target resource DepthTarget can be used as the rendering target of a subsequent rendering process, with rendering continuing from there; the anti-aliased color and depth rendering data in ColorTarget and DepthTarget may also be used as texture resources to be read in subsequent rendering processes. After all rendering operations are completed, the rendered image stored in ColorTarget may be output, for example displayed on the screen of the mobile device.
Therefore, by using the Vulkan multi-rendering-flow RenderPass mechanism, the target scene can be rendered by sequentially calling the corresponding rendering commands in the rendering command instruction set together with the corresponding sub-rendering channels, combined with the rendering results generated during rendering and read from the on-chip GPU cache. This makes full use of the on-chip cache characteristic of mobile-platform GPUs while reducing data interaction between the CPU and the GPU and accesses to the video memory, achieving the purposes of improving rendering efficiency and reducing bandwidth resource overhead.
By applying the technical scheme of this embodiment, a rendering command instruction set for rendering a target scene, together with target scene rendering channel data and target scene frame cache data from the CPU, is used to render the target scene, and the resulting target scene rendering data is sent to the memory or video memory. Compared with existing scene rendering using OpenGLES or Vulkan, in this embodiment, on the basis of scene rendering with the multisampling anti-aliasing technique, the CPU obtains the rendering command instruction set by packaging and sends the plurality of rendering commands for rendering the target scene to the GPU at one time, so that the GPU can render the target scene according to the plurality of rendering commands in the instruction set, the target scene rendering channel data, and the target scene frame cache data, obtain the target scene rendering data, and send it to the memory or video memory. Whereas OpenGLES-based scene rendering suffers from poor rendering performance and caching capability and high implementation difficulty of the multisampling anti-aliasing technique (i.e., weak functionality), this embodiment, by implementing the multisampling anti-aliasing technique on Vulkan-based scene rendering and optimizing the rendering engine architecture, effectively reduces the interaction workload between the CPU and the GPU, and thereby effectively reduces rendering power consumption.
Further, as a specific implementation of the method shown in fig. 1, the present embodiment provides an image rendering apparatus, as shown in fig. 3, the apparatus including: a packaging module 33, a first sending module 34, a rendering module 35, and a second sending module 36.
The packing module 33 may be configured to pack a plurality of rendering commands for rendering a target scene to obtain a rendering command instruction set including a call sequence identifier, where the call sequence identifier is used to represent a call sequence of each rendering command.
The first sending module 34 may be configured to send the rendering command instruction set, and preset target scene rendering channel data and target scene frame buffer data to the graphics processor.
The rendering module 35 may be configured to, by the graphics processor, obtain target scene rendering data by sequentially invoking multiple rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data.
A second sending module 36, configured to send the target scene rendering data to a memory or a video memory.
In a specific application scenario, as shown in fig. 4, the apparatus may further include: a caching module 31 and a creating module 32.
In a specific application scene, the target scene rendering channel data includes attribute information for performing attribute setting on a Vulkan rendering channel by using a multi-rendering flow mechanism, and the Vulkan rendering channel includes a first sub-rendering channel for rendering an opaque object, a second sub-rendering channel for performing multi-sampling information fusion processing on a multi-sampling depth rendering result, and a third sub-rendering channel for rendering a transparent object.
In a specific application scenario, the cache module 31 may be configured to create, in an on-chip cache of a graphics processor, a cache resource for storing rendering results generated in a rendering process, where the rendering results include a first rendering result output by a first sub-rendering channel for rendering an opaque object, a second rendering result output by a second sub-rendering channel for performing multi-sampling information fusion processing on a multi-sampling depth rendering result in the first rendering result, and a third rendering result output by a third sub-rendering channel for rendering a transparent object.
The second sending module 36 is specifically configured to send, by the graphics processor, the third rendering result as target scene rendering data to the memory or the video memory.
In a specific application scenario, the rendering commands include a first rendering command and a first vkCmdNextSubpass command for rendering the opaque objects corresponding to the first sub-rendering channel, a second rendering command and a second vkCmdNextSubpass command for performing multi-sampling information fusion processing on the multi-sampling depth rendering result corresponding to the second sub-rendering channel, and a third rendering command and a vkCmdEndRenderPass command for rendering the transparent objects corresponding to the third sub-rendering channel.
In a specific application scenario, the rendering module 35 includes: a first rendering unit 351, a second rendering unit 352, and a third rendering unit 353.
The first rendering unit 351 may be configured to, by the graphics processor, perform opaque object rendering on the target scene by using the first sub-rendering channel according to the first rendering command, so as to obtain a first rendering result.
The second rendering unit 352 may be configured to perform multi-sampling information fusion processing on the target scene by using a second sub-rendering channel according to the second rendering command and the obtained multi-sampling depth rendering result in the first rendering result in the cache resource, so as to obtain a second rendering result.
The third rendering unit 353 may be configured to perform transparent object rendering on the target scene by using a third sub-rendering channel according to the third rendering command and the obtained second rendering result in the cache resource, and use the obtained third rendering result as target scene rendering data.
In a specific application scenario, the creating module 32 may be configured to create, by the central processing unit, a Vulkan rendering channel and a Vulkan frame buffer for setting the target scene rendering channel data and the target scene frame buffer data.
In a specific application scenario, the creating module 32 includes: a first creating unit 321 and a second creating unit 322.
The first creating unit 321 may be configured to create, by the central processor, a Vulkan rendering channel according to a preset accessory description array.
A second creating unit 322, configured to create a Vulkan frame buffer according to the Vulkan rendering channel and the accessory description array thereof; wherein the attachment description array corresponds to the Vulkan frame buffer format one to one.
In a specific application scene, the target scene rendering channel data includes an attachment description array of a Vulkan rendering channel created in a central processing unit, and index relationships with the first sub-rendering channel, the second sub-rendering channel, and the third sub-rendering channel are established according to element index information in the attachment description array.
In a specific application scene, a sub-rendering channel dependency array is created in a central processing unit, and a rendering sequence among the first sub-rendering channel, the second sub-rendering channel and the third sub-rendering channel is established.
It should be noted that other corresponding descriptions of the functional units related to the image rendering apparatus provided in the embodiment of the present application may refer to the corresponding descriptions in fig. 1 and fig. 2, and are not repeated herein.
Based on the method shown in fig. 1 and fig. 2, correspondingly, the embodiment of the present application further provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the image rendering method shown in fig. 1 and fig. 2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the implementation scenarios of the present application.
Based on the foregoing methods shown in fig. 1 and fig. 2 and the virtual device embodiment shown in fig. 3, to achieve the foregoing object, an embodiment of the present application further provides a computer device, which may specifically be a personal computer, a server, a network device, and the like, where the entity device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the image rendering method as shown in fig. 1 and 2.
Optionally, the computer device may further include a user interface, a network interface, a camera, Radio Frequency (RF) circuitry, a sensor, audio circuitry, a WI-FI module, and so forth. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a bluetooth interface, WI-FI interface), etc.
It will be understood by those skilled in the art that the present embodiment provides a computer device structure that is not limited to the physical device, and may include more or less components, or some components in combination, or a different arrangement of components.
The storage medium may further include an operating system and a network communication module. An operating system is a program that manages the hardware and software resources of a computer device, supporting the operation of information handling programs, as well as other software and/or programs. The network communication module is used for realizing communication among components in the storage medium and other hardware and software in the entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, and can also be implemented by hardware. By applying the technical scheme of the application, compared with existing scene rendering using OpenGLES or Vulkan, on the basis of scene rendering with the multisampling anti-aliasing technique, the CPU obtains a rendering command instruction set by packaging and sends the plurality of scene rendering commands to the GPU at one time, so that the GPU can render the target scene according to the plurality of rendering commands in the instruction set, the target scene rendering channel data, and the target scene frame cache data, obtain the target scene rendering data, and send it to the memory or video memory. This avoids the weak rendering performance and caching capability of OpenGLES-based scene rendering and the high implementation difficulty of its multisampling anti-aliasing technique; by implementing the multisampling anti-aliasing technique on Vulkan-based scene rendering and optimizing the rendering engine architecture, the interaction workload between the CPU and the GPU is effectively reduced, and rendering power consumption is thereby effectively reduced.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (11)

1. An image rendering method, comprising:
packaging a plurality of rendering commands for rendering a target scene to obtain a rendering command instruction set containing a calling sequence identifier, wherein the calling sequence identifier is used for representing the calling sequence of each rendering command;
sending the rendering command instruction set, preset target scene rendering channel data and preset target scene frame cache data to a graphics processor;
the graphics processor obtains target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data;
and sending the target scene rendering data to a memory or a video memory.
2. The method of claim 1, wherein the target scene rendering channel data comprises attribute information for setting attributes of a Vulkan rendering channel by using a multi-rendering flow mechanism, and the Vulkan rendering channel comprises a first sub-rendering channel for rendering opaque objects, a second sub-rendering channel for performing multi-sampling information fusion processing on multi-sampling depth rendering results, and a third sub-rendering channel for rendering transparent objects.
3. The method according to claim 1 or 2, wherein a cache resource for storing rendering results generated in the rendering process is created in a graphics processor on-chip cache, and the rendering results include a first rendering result output by a first sub-rendering channel for rendering an opaque object, a second rendering result output by a second sub-rendering channel for performing multi-sampling information fusion processing on a multi-sampling depth rendering result in the first rendering result, and a third rendering result output by a third sub-rendering channel for rendering a transparent object;
further comprising: and the graphics processor sends the third rendering result as target scene rendering data to a memory or a video memory.
4. The method of claim 3, wherein the plurality of rendering commands comprises a first rendering command and a first vkCmdNextSubpass command for rendering opaque objects corresponding to a first sub-rendering channel, a second rendering command and a second vkCmdNextSubpass command for performing multi-sampling information fusion processing on multi-sampling depth rendering results corresponding to a second sub-rendering channel, and a third rendering command and a vkCmdEndRenderPass command for rendering transparent objects corresponding to a third sub-rendering channel.
5. The method of claim 4, wherein the obtaining, by the graphics processor, the target scene rendering data by sequentially invoking the plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame buffer data comprises:
the graphics processor performs opaque object rendering on a target scene by using a first sub-rendering channel according to the first rendering command to obtain a first rendering result;
performing multi-sampling information fusion processing on the target scene by using a second sub-rendering channel according to the second rendering command and the obtained multi-sampling depth rendering result in the first rendering result in the cache resource to obtain a second rendering result;
and performing transparent object rendering on the target scene by using a third sub-rendering channel according to the third rendering command and the obtained second rendering result in the cache resource, wherein the obtained third rendering result is used as target scene rendering data.
6. The method of claim 2, further comprising: the method for setting the Vulkan rendering channel and the Vulkan frame cache comprises the following steps that a central processing unit creates a Vulkan rendering channel and a Vulkan frame cache for setting target scene rendering channel data and target scene frame cache data, and specifically comprises the following steps:
the central processing unit creates a Vulkan rendering channel according to a preset accessory description array;
creating a Vulkan frame buffer according to the Vulkan rendering channel and the accessory description array thereof;
wherein the attachment description array corresponds to the Vulkan frame buffer format one to one.
7. The method according to claim 2 or 6, wherein the target scene rendering channel data comprises an attachment description array of the Vulkan rendering channel created in a central processor, and the index relationship with the first sub-rendering channel, the second sub-rendering channel and the third sub-rendering channel is established according to the element index information in the attachment description array.
8. The method of claim 2 or 6, wherein the rendering order among the first sub-rendering pass, the second sub-rendering pass, and the third sub-rendering pass is established by creating a sub-rendering pass dependency array in a central processor.
9. An image rendering apparatus, comprising:
the system comprises a packaging module, a processing module and a processing module, wherein the packaging module is used for packaging a plurality of rendering commands for rendering a target scene to obtain a rendering command instruction set containing calling sequence identifiers, and the calling sequence identifiers are used for representing the calling sequence of each rendering command;
the first sending module is used for sending the rendering command instruction set, preset target scene rendering channel data and preset target scene frame cache data to a graphics processor;
the rendering module is used for the graphics processor to obtain target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data;
and the second sending module is used for sending the target scene rendering data to a memory or a video memory.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of the image rendering method of any of claims 1 to 8.
11. A readable storage medium on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the image rendering method of any one of claims 1 to 8.
CN202210187253.9A 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium Pending CN114612579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210187253.9A CN114612579A (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011508323.3A CN112652025B (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium
CN202210187253.9A CN114612579A (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202011508323.3A Division CN112652025B (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114612579A true CN114612579A (en) 2022-06-10

Family

ID=75355349

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210187253.9A Pending CN114612579A (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium
CN202011508323.3A Active CN112652025B (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011508323.3A Active CN112652025B (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (2) CN114612579A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116185640A (en) * 2023-04-20 2023-05-30 上海励驰半导体有限公司 Image command processing method and device based on multiple GPUs, storage medium and chip

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113835890A (en) * 2021-09-24 2021-12-24 厦门雅基软件有限公司 Rendering data processing method, device, equipment and storage medium
CN113934491B (en) * 2021-09-30 2023-08-22 阿里云计算有限公司 Big data processing method and device
CN114760526A (en) * 2022-03-31 2022-07-15 北京百度网讯科技有限公司 Video rendering method and device, electronic equipment and storage medium
CN115908678B (en) * 2023-02-25 2023-05-30 深圳市益玩网络科技有限公司 Bone model rendering method and device, electronic equipment and storage medium

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101295408A (en) * 2007-04-27 2008-10-29 新奥特硅谷视频技术有限责任公司 3D videotext rendering method and system
US8537166B1 (en) * 2007-12-06 2013-09-17 Nvidia Corporation System and method for rendering and displaying high-resolution images
GB0810311D0 (en) * 2008-06-05 2008-07-09 Advanced Risc Mach Ltd Graphics processing systems
US8675000B2 (en) * 2008-11-07 2014-03-18 Google, Inc. Command buffers for web-based graphics rendering
US8659616B2 (en) * 2010-02-18 2014-02-25 Nvidia Corporation System, method, and computer program product for rendering pixels with at least one semi-transparent surface
EP2616954B1 (en) * 2010-09-18 2021-03-31 Google LLC A method and mechanism for rendering graphics remotely
CN102722861A (en) * 2011-05-06 2012-10-10 新奥特(北京)视频技术有限公司 CPU-based graphic rendering engine and realization method
US8830246B2 (en) * 2011-11-30 2014-09-09 Qualcomm Incorporated Switching between direct rendering and binning in graphics processing
CN102810199B (en) * 2012-06-15 2015-03-04 成都平行视野科技有限公司 Image processing method based on GPU (Graphics Processing Unit)
US9582848B2 (en) * 2012-12-28 2017-02-28 Apple Inc. Sprite Graphics rendering system
CN103106680B (en) * 2013-02-16 2015-05-06 赞奇科技发展有限公司 Implementation method for three-dimensional figure render based on cloud computing framework and cloud service system
CN105023234B (en) * 2015-06-29 2018-02-23 嘉兴慧康智能科技有限公司 Figure accelerated method based on embedded system storage optimization
CN105279253B (en) * 2015-10-13 2018-12-14 上海联彤网络通讯技术有限公司 Promote the system and method for webpage painting canvas rendering speed
US10134103B2 (en) * 2015-10-23 2018-11-20 Qualcomm Incorporated GPU operation algorithm selection based on command stream marker
US10853118B2 (en) * 2015-12-21 2020-12-01 Intel Corporation Apparatus and method for pattern-driven page table shadowing for graphics virtualization
EP3441877A3 (en) * 2017-08-09 2019-03-20 Daniel Herring Systems and methods for using egl with an opengl api and a vulkan graphics driver
WO2019071600A1 (en) * 2017-10-13 2019-04-18 华为技术有限公司 Image processing method and apparatus
CN109669739A (en) * 2017-10-16 2019-04-23 阿里巴巴集团控股有限公司 A kind of interface rendering method, device, terminal device and storage medium
CN108711182A (en) * 2018-05-03 2018-10-26 广州爱九游信息技术有限公司 Render processing method, device and mobile terminal device
CN110163943A (en) * 2018-11-21 2019-08-23 深圳市腾讯信息技术有限公司 The rendering method and device of image, storage medium, electronic device
CN111400024B (en) * 2019-01-03 2023-10-10 百度在线网络技术(北京)有限公司 Resource calling method and device in rendering process and rendering engine
CN111508055B (en) * 2019-01-30 2023-04-11 华为技术有限公司 Rendering method and device
CN110471701B (en) * 2019-08-12 2021-09-10 Oppo广东移动通信有限公司 Image rendering method and device, storage medium and electronic equipment
CN110992462A (en) * 2019-12-25 2020-04-10 重庆文理学院 Batch processing drawing method for 3D simulation scene image based on edge calculation
CN111798372B (en) * 2020-06-10 2021-07-13 完美世界(北京)软件科技发展有限公司 Image rendering method, device, equipment and readable medium
CN111798365B (en) * 2020-06-12 2023-09-01 完美世界(北京)软件科技发展有限公司 Deep antialiasing data reading method, device, equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116185640A (en) * 2023-04-20 2023-05-30 上海励驰半导体有限公司 Image command processing method and device based on multiple GPUs, storage medium and chip
CN116185640B (en) * 2023-04-20 2023-08-08 上海励驰半导体有限公司 Image command processing method and device based on multiple GPUs, storage medium and chip

Also Published As

Publication number Publication date
CN112652025B (en) 2022-03-22
CN112652025A (en) 2021-04-13

Similar Documents

Publication Publication Date Title
CN112652025B (en) Image rendering method and device, computer equipment and readable storage medium
US11344806B2 (en) Method for rendering game, and method, apparatus and device for generating game resource file
US8149242B2 (en) Graphics processing apparatus, graphics library module and graphics processing method
US8269782B2 (en) Graphics processing apparatus
JP6073533B1 (en) Optimized multi-pass rendering on tile-based architecture
JP5242789B2 (en) Mapping of graphics instructions to related graphics data in performance analysis
JP2015520881A (en) Drawing method, apparatus, and terminal
US11727632B2 (en) Shader binding management in ray tracing
CN112801855A (en) Method and device for scheduling rendering task based on graphics primitive and storage medium
CN110750664A (en) Picture display method and device
WO2023197762A1 (en) Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN114461406A (en) DMA OpenGL optimization method
WO2024007293A1 (en) Graphics processing system and method and gpu based on bitmap primitives
WO2021248706A1 (en) Depth anti-aliasing data reading method and device, computer program and readable medium
CN111145074B (en) Full liquid crystal instrument image rendering method
US8203567B2 (en) Graphics processing method and apparatus implementing window system
JP5242788B2 (en) Partition-based performance analysis for graphics imaging
CN114331808A (en) Action posture storage method, device, medium and electronic equipment
CN111243069B (en) Scene switching method and system of Unity3D engine
CN113835890A (en) Rendering data processing method, device, equipment and storage medium
CN116348904A (en) Optimizing GPU kernels with SIMO methods for downscaling with GPU caches
CN108897537A (en) Document display method, computer-readable medium and a kind of computer
CN112348934A (en) Game map display method, device and medium based on large linked list
CN117369820B (en) Rendering flow chart generation method, device and equipment
WO2022161199A1 (en) Image editing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20220610

Assignee: Beijing Xuanguang Technology Co.,Ltd.

Assignor: Perfect world (Beijing) software technology development Co.,Ltd.

Contract record no.: X2022990000514

Denomination of invention: Image rendering method, apparatus, computer device, and readable storage medium

License type: Exclusive License

Record date: 20220817
