CN112652025A - Image rendering method and device, computer equipment and readable storage medium - Google Patents


Info

Publication number: CN112652025A (granted as CN112652025B)
Application number: CN202011508323.3A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Granted; Active
Prior art keywords: rendering, target scene, channel, data, sub
Inventors: 孙思远, 王月, 冯星
Assignee (original and current): Perfect World Beijing Software Technology Development Co Ltd
Related application: CN202210187253.9A (published as CN114612579A)

Classifications

    • G06T 7/90 — Physics; Computing; Image data processing or generation; Image analysis; Determination of colour characteristics
    • G06T 15/00 — Physics; Computing; Image data processing or generation; 3D [three-dimensional] image rendering

Abstract

The application discloses an image rendering method and device, computer equipment, and a readable storage medium, relating to the technical field of image processing. A rendering command instruction set obtained through recording, together with preset target scene rendering channel data and target scene frame cache data, is sent to a graphics processor. The graphics processor obtains target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data, and the target scene rendering data is then sent to a memory or a video memory. The method and the device can effectively reduce the interaction workload between the CPU and the GPU, thereby effectively reducing rendering power consumption.

Description

Image rendering method and device, computer equipment and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image rendering method and apparatus, a computer device, and a readable storage medium.
Background
On existing mainstream mobile devices using the Android platform, there are two main schemes for designing a multi-sampling anti-aliasing rendering process. In the first scheme, the rendering process is implemented with OpenGL ES on the Android platform, and a three-dimensional scene is rendered by calling the function interfaces provided by OpenGL ES. If multi-sampling anti-aliasing is added to the rendering process, it must be implemented through the OpenGL ES extension functions glFramebufferTexture2DMultisampleEXT or glFramebufferTexture2DMultisampleIMG. In the second scheme, the scene is rendered with Vulkan on the Android platform, and rendering with multi-sampling anti-aliasing comprises three steps: rendering the opaque objects of the three-dimensional scene, performing multi-sampling blending on the depth to obtain a single-sampling depth map, and rendering the transparent objects, where each step needs to record and submit a rendering command independently.
In the related prior art, the applicant found at least the following problems. Regarding the first scheme, which renders with OpenGL ES: compared with Vulkan, OpenGL ES is weaker in rendering performance and cache capability, and the multi-sampling anti-aliasing technique is more difficult to implement on it. Regarding the second scheme, which renders the scene in three steps: the CPU records a rendering command for each step and submits it to the GPU separately, so there is more interaction between the CPU and the GPU, the rendering power consumption is higher, and the utilization rate of the on-chip cache of a mobile-platform GPU is lower.
Disclosure of Invention
In view of this, the present application provides an image rendering method and device, a computer device, and a readable storage medium, mainly aiming to solve the following technical problems: existing scene rendering with OpenGL ES is weak in rendering performance and cache capability, and the multi-sampling anti-aliasing technique is particularly difficult to implement on it; and existing scene rendering with Vulkan involves frequent interaction between the CPU and the GPU, resulting in high rendering power consumption.
According to an aspect of the present application, there is provided an image rendering method including:
sending a rendering command instruction set obtained through recording, together with preset target scene rendering channel data and target scene frame cache data, to a graphics processor;
the graphics processor obtains target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data;
and sending the target scene rendering data to a memory or a video memory.
According to another aspect of the present application, there is provided an image rendering apparatus including:
the first sending module is used for sending a rendering command instruction set obtained through recording, together with preset target scene rendering channel data and target scene frame cache data, to the graphics processor;
the rendering module is used for the graphics processor to obtain target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data;
and the second sending module is used for sending the target scene rendering data to a memory or a video memory.
According to yet another aspect of the present application, there is provided a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of the image rendering method described above when executing the computer program.
According to yet another aspect of the present application, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the image rendering method described above.
By means of the above technical solution, the image rendering method and device, computer equipment, and readable storage medium provided by the present application send a rendering command instruction set obtained through recording, together with preset target scene rendering channel data and target scene frame cache data, to the graphics processor; the graphics processor obtains target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data; and the target scene rendering data is sent to a memory or a video memory. Compared with the existing scene rendering modes using OpenGL ES or Vulkan, this improves on the approach in which the CPU submits the rendering commands recorded in each step to the GPU separately. By recording a rendering command instruction set, the CPU sends the plurality of rendering commands for rendering the target scene, the target scene rendering channel data, and the target scene frame cache data to the GPU at one time, so that the GPU can render the target scene according to these inputs, obtain the target scene rendering data, and send it to a memory or a video memory. Given that scene rendering based on OpenGL ES is weak in rendering performance and cache capability, this optimization of the rendering engine architecture effectively reduces the interaction workload between the CPU and the GPU, thereby effectively reducing rendering power consumption.
The foregoing description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be more clearly understood and implemented according to the content of this description, and in order that the above and other objects, features, and advantages of the present application may become more readily apparent, detailed embodiments of the present application are given below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a flowchart of an image rendering method provided in an embodiment of the present application;
FIG. 2 is a flowchart illustrating another image rendering method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram illustrating an image rendering apparatus according to an embodiment of the present application;
fig. 4 shows a schematic structural diagram of another image rendering apparatus provided in an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The present embodiment addresses the technical problems that existing scene rendering with OpenGL ES is weak in rendering performance and cache capability, and that existing scene rendering with Vulkan involves frequent interaction between the CPU and the GPU and high rendering power consumption. This embodiment provides an image rendering method which, by optimizing the rendering engine architecture on the Android platform, can effectively reduce the interaction workload between the CPU and the GPU and thus effectively reduce rendering power consumption. As shown in fig. 1, the method includes:
101. Sending a rendering command instruction set obtained through recording, together with preset target scene rendering channel data and target scene frame cache data, to the graphics processor.
In this embodiment, image rendering is mainly used for rendering the target scene in one frame of image. The CPU sets the corresponding rendering channel attribute information and frame buffer attribute information by creating a Vulkan rendering channel VkRenderPass used for scene rendering and a Vulkan frame buffer VkFramebuffer used for rendering the three-dimensional scene. VkRenderPass and VkFramebuffer are Vulkan types, with VkFramebuffer being the Vulkan type corresponding to the frame buffer mechanism. This allows the CPU to manage these objects with abstract logic and send the result to the GPU to achieve the corresponding rendering effect.
In addition, the CPU records the plurality of rendering commands for rendering the target scene into one command buffer (CommandBuffer), so that these rendering commands can be packaged and sent to the GPU together. This differs from the prior art, in which, when rendering the target scene, the CPU stores the recorded rendering commands in different command buffers and sends the commands in each buffer to the GPU in sequence during rendering to execute each rendering step, resulting in a large information-interaction workload between the CPU and the GPU. By packaging the plurality of rendering commands, the information-interaction workload between the CPU and the GPU can be effectively reduced, the work of the CPU is simplified under the optimized rendering engine architecture, and the rendering efficiency is improved.
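The interaction-count difference described above can be modeled with a minimal Python sketch (the helper names and step labels are hypothetical, introduced only for illustration): in the prior approach each rendering step is recorded and submitted separately, while here all steps are packaged into one command buffer and submitted once.

```python
# Hypothetical model of CPU->GPU submissions; each inner list is one submission.
STEPS = ["opaque_pass", "depth_resolve", "transparent_pass"]

def submits_prior_art(steps):
    # Prior art: one command buffer, and hence one submission, per step.
    return [[s] for s in steps]

def submits_packaged(steps):
    # This application: all steps recorded into a single command buffer,
    # submitted to the GPU in one interaction.
    return [list(steps)]
```

Under this model the prior art performs three CPU-to-GPU submissions per frame while the packaged scheme performs one, which is the interaction reduction the embodiment claims.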
102. The graphics processor obtains target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data.
In this embodiment, the rendering command instruction set includes a plurality of rendering commands for rendering the target scene. According to the rendering command instruction set received at one time, the GPU obtains the target scene rendering data by sequentially calling the corresponding rendering commands in the instruction set while rendering the target scene with the target scene rendering channel data and the target scene frame cache data.
According to the requirements of the actual application scene, the CPU calls the vkCmdBeginRenderPass command to trigger the start of the Vulkan rendering process, and sets the Vulkan rendering channel VkRenderPass and the Vulkan frame buffer VkFramebuffer used in the target scene rendering process with the target scene rendering channel sceneRenderPass and the target scene frame buffer, so that the configured target scene rendering channel data and target scene frame cache data are sent to the GPU together with the rendering command instruction set to render the target scene. Using Vulkan's multi-rendering-channel (MultiRenderPass) mechanism, the GPU sequentially executes, according to all the rendering commands for the target scene: a first rendering process that renders the opaque objects to obtain a first rendering result (including a multi-sampling depth rendering result and anti-aliased color information); a second rendering process that performs multi-sampling information fusion processing on the multi-sampling depth rendering result in the first rendering result to obtain a second rendering result (including the anti-aliased depth information, i.e. a single-sampling depth rendering result); and a third rendering process that renders the transparent objects on the basis of the second rendering result to obtain a third rendering result, which is used as the target scene rendering data.
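The single-submission, three-subpass recording order described above can be sketched as follows. This is an illustrative model, not callable Vulkan code: the string labels name the real Vulkan commands (vkCmdBeginRenderPass, vkCmdNextSubpass, vkCmdEndRenderPass) and the draw work of each sub-rendering channel, and record_scene_commands is a hypothetical helper.

```python
def record_scene_commands():
    """Return the command sequence the CPU records once into one command buffer."""
    commands = []
    commands.append("vkCmdBeginRenderPass")      # enter sceneRenderPass, subpass 0
    commands.append("draw_opaque_objects")       # subpass 0: multisampled opaque pass
    commands.append("vkCmdNextSubpass")          # advance to subpass 1
    commands.append("resolve_depth")             # subpass 1: MS depth -> single-sample depth
    commands.append("vkCmdNextSubpass")          # advance to subpass 2
    commands.append("draw_transparent_objects")  # subpass 2: transparent pass
    commands.append("vkCmdEndRenderPass")        # finish; results go to the render targets
    return commands

# The whole frame is carried by one submission instead of three.
SUBMISSIONS = [record_scene_commands()]
```

Because the three rendering processes are subpasses of one render pass, the two vkCmdNextSubpass transitions replace what would otherwise be two extra command-buffer submissions.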
103. Sending the target scene rendering data to a memory or a video memory.
In this embodiment, the target scene rendering data includes color rendering information and depth rendering information of the target scene, and after the target scene rendering is completed based on the multi-sampling anti-aliasing technique, the corresponding target scene rendering data is written into the memory or the video memory, that is, the color rendering information of the target scene is stored in the non-multi-sampling color rendering target resource ColorTarget of the memory or the video memory, and the depth rendering information of the target scene is stored in the non-multi-sampling depth rendering target resource DepthTarget.
By applying the technical solution of this embodiment, a rendering command instruction set obtained through recording, together with preset target scene rendering channel data and target scene frame cache data, is sent to the graphics processor; the graphics processor obtains target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data; and the target scene rendering data is sent to a memory or a video memory. Compared with the existing methods of rendering a scene with OpenGL ES or Vulkan, in this embodiment, on the basis of scene rendering with the multi-sampling anti-aliasing technique, the CPU sends the plurality of rendering commands for rendering the target scene to the GPU at one time by recording a rendering command instruction set, so that the GPU can render the target scene according to the plurality of rendering commands, the target scene rendering channel data, and the target scene frame cache data, obtain the target scene rendering data, and send it to the video memory. Given that scene rendering based on OpenGL ES is weak in rendering performance and cache capability and the multi-sampling anti-aliasing technique is difficult to implement on it, implementing multi-sampling anti-aliasing on Vulkan-based scene rendering with an optimized rendering engine architecture effectively reduces the interaction workload between the CPU and the GPU, thereby effectively reducing rendering power consumption.
Based on the foregoing principle, as a refinement and an extension of the above specific implementation of the embodiment shown in fig. 1, the present embodiment further provides another image rendering method, as shown in fig. 2, where the method includes:
201. and creating a cache resource used for storing rendering results generated in the rendering process in an on-chip cache of the graphics processor, wherein the rendering results comprise a first rendering result output by a first sub-rendering channel used for rendering the opaque object, a second rendering result output by a second sub-rendering channel used for carrying out multi-sampling information fusion processing on the multi-sampling depth rendering result in the first rendering result, and a third rendering result output by a third sub-rendering channel used for rendering the transparent object.
In a specific implementation, the on-chip cache of the GPU refers to the cache located on the GPU chip. By caching the rendering results generated during the whole target scene rendering process in the created cache resource, requests to the memory controller are filtered, accesses to the video memory are reduced, and video memory bandwidth consumption is lowered.
202. The central processing unit creates a Vulkan rendering channel and a Vulkan frame buffer for setting the target scene rendering channel data and the target scene frame cache data, which specifically includes: the central processing unit creates the Vulkan rendering channel according to a preset attachment description array, and creates the Vulkan frame buffer according to the Vulkan rendering channel and its attachment description array, where the attachment description array corresponds one-to-one to the Vulkan frame buffer format.
Further, as an optional mode, the method specifically includes: the target scene rendering channel data comprises attribute information for performing attribute setting on a Vulkan rendering channel by using a multi-rendering flow mechanism, wherein the Vulkan rendering channel comprises a first sub-rendering channel for rendering opaque objects, a second sub-rendering channel for performing multi-sampling information fusion processing on multi-sampling depth rendering results in first rendering results output by the first sub-rendering channel, and a third sub-rendering channel for rendering transparent objects.
Further, as an optional mode, the method specifically includes: the target scene rendering channel data comprises an attachment description array of a Vulkan rendering channel created in a central processing unit, and index relations among the first sub-rendering channel, the second sub-rendering channel and the third sub-rendering channel are established according to element index information in the attachment description array.
In a specific implementation, the CPU creates a Vulkan rendering channel VkRenderPass for target scene rendering using the multi-rendering-flow mechanism and the Vulkan functions. The index relationships include the index relationships of the attachment elements and of their layout attributes. Specifically, a rendering channel for rendering the target scene is created with a Vulkan function according to the attachment description array and used as the target scene rendering channel. Setting the rendering channel data specifically includes creating, in the CPU, an attachment description (VkAttachmentDescription) array of the Vulkan rendering channel, denoted vkAttachments, which contains 4 elements with element indexes 0, 1, 2, and 3 respectively. Setting the element index information in vkAttachments specifically includes the following steps:
the index relationship of the attachment element is set so that a subsequently created child rendering channel can invoke corresponding data information based on the index attribute value of the attachment element. Specifically, the member loadOp and the member stenilloadop of the 4 elements are both set to VK _ ATTACHMENT _ LOAD _ OP _ don _ CARE to set the operation behavior of the pre-rendering data and the template data at the corresponding attachment, i.e., the existing content is undefined, allowing the driver to be discarded or deleted without saving the content, and the member stenilstoreop is set to VK _ ATTACHMENT _ STORE _ OP _ don _ CARE to set the operation behavior of the post-rendering template data at the corresponding attachment, i.e., the existing content is undefined, allowing the driver to be discarded or deleted without saving the content. Further, the member StoreOp attribute of the index 0 and index 1 elements is set to VK _ ATTACHMENT _ STORE _ OP _ LOAD to set to save the content already existing in the current attachment, and the samples attribute value refers to the number of sample points being 1. The member StoreOp attributes of the index 2 and index 3 elements are set to be VK-ATTACHMENT-STORE-OP-DONT-CARE, so that the operation behavior of the rendered data in the corresponding attachment is set, namely the existing content is undefined, the driver is allowed to discard or delete without saving the content, the samples attribute value is the number n of sampling points, and n can be set to be 2 or 4.
The index relationship of the layout attributes is set so that a subsequently created sub-rendering channel can invoke the corresponding data information based on the index attribute value of a layout attribute. Specifically, the initial layout (initialLayout) attribute of the elements at indexes 0 and 2 is set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, and their format attribute is set to VK_FORMAT_R8G8B8A8_UNORM. The initialLayout attribute of the elements at indexes 1 and 3 is set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL, and their format attribute is set to VK_FORMAT_D32_SFLOAT. The final layout (finalLayout) attribute of the elements at indexes 0 and 1 is set to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, the finalLayout attribute of the element at index 2 is set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, and the finalLayout attribute of the element at index 3 is set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
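The four-element attachment description array set up above can be sketched as plain Python data (dict fields mirror the VkAttachmentDescription members named in the text; MSAA_SAMPLES stands in for the sample count n of 2 or 4, and this is an illustrative model rather than callable Vulkan code):

```python
MSAA_SAMPLES = 4  # n, the number of sample points (2 or 4 per the text)

vkAttachments = [
    # index 0: non-multisampled color target (resolve destination), stored
    dict(samples=1, storeOp="STORE", format="VK_FORMAT_R8G8B8A8_UNORM",
         finalLayout="VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL"),
    # index 1: non-multisampled depth target, stored
    dict(samples=1, storeOp="STORE", format="VK_FORMAT_D32_SFLOAT",
         finalLayout="VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL"),
    # index 2: multisampled color attachment, transient (not stored)
    dict(samples=MSAA_SAMPLES, storeOp="DONT_CARE",
         format="VK_FORMAT_R8G8B8A8_UNORM",
         finalLayout="VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL"),
    # index 3: multisampled depth attachment, transient (not stored)
    dict(samples=MSAA_SAMPLES, storeOp="DONT_CARE",
         format="VK_FORMAT_D32_SFLOAT",
         finalLayout="VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL"),
]
```

The pattern to notice is that only the single-sample attachments (indexes 0 and 1) are stored; the multisampled attachments are intermediate data that never needs to leave the GPU's on-chip cache.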
Further, the description information of the three sub-rendering channels (SubRenderPass) required for target scene rendering is obtained. Specifically, a Vulkan sub-rendering-channel description (VkSubpassDescription) array is created, containing 3 elements, each of which describes one sub-rendering channel. Setting the description information of each sub-rendering channel specifically includes:
vkSubpassDescs[0] is the subpass description for rendering the opaque objects of the scene, i.e. the first sub-rendering channel. Specifically, the color attachment count (colorAttachmentCount) attribute of vkSubpassDescs[0] is set to 1; its pColorAttachments attribute contains 1 VkAttachmentReference element whose attachment attribute value is 2, pointing to the attachment at the specified index position (element index 2 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL. The pResolveAttachments attribute of vkSubpassDescs[0], used for the multi-sampling anti-aliasing processing of the color attachment, contains a VkAttachmentReference element whose attachment attribute value is 0, pointing to the attachment at the specified index position (element index 0 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL. The pDepthStencilAttachment attribute of vkSubpassDescs[0], used for depth and stencil data, contains a VkAttachmentReference element whose attachment attribute value is 3, pointing to the attachment at the specified index position, with the layout attribute set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
vkSubpassDescs[1] is the subpass description for blending the multi-sampling depth rendering result (from the first rendering result output by the first sub-rendering channel) into a single-sampling depth result, i.e. the second sub-rendering channel, which fuses the multi-sampling depth rendering result into a single-sampling depth rendering result to obtain the second rendering result. Specifically, the input attachment count (inputAttachmentCount) attribute of vkSubpassDescs[1] is set to 1; its pInputAttachments attribute, for the multi-sampling depth rendering result read from the shader, contains a VkAttachmentReference element whose attachment attribute value is 3, pointing to the attachment at the specified index position, with the layout attribute set to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL. The pDepthStencilAttachment attribute of vkSubpassDescs[1], used for depth and stencil data, contains a VkAttachmentReference element whose attachment attribute value is 1, pointing to the attachment at the specified index position (element index 1 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
vkSubpassDescs[2] is the subpass description for rendering the transparent objects of the scene, i.e. the third sub-rendering channel. Specifically, the colorAttachmentCount attribute of vkSubpassDescs[2] is set to 1; its pColorAttachments attribute contains 1 VkAttachmentReference element whose attachment attribute value is 0, pointing to the attachment at the specified index position (element index 0 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, so that this attachment serves as the color buffer. The pDepthStencilAttachment attribute of vkSubpassDescs[2], used for depth and stencil data, contains a VkAttachmentReference element whose attachment attribute value is 1, pointing to the attachment at the specified index position (element index 1 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
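The attachment indices used by the three subpass descriptions can be summarized as a small Python model (an illustrative sketch, with each dict recording which vkAttachments index the subpass uses for its color, resolve, depth, and input attachments; None means the subpass has no such attachment):

```python
vkSubpassDescs = [
    # subpass 0 (opaque): MS color (2) + MS depth (3), color resolved to 0
    dict(color=2, resolve=0, depth=3, input=None),
    # subpass 1 (depth resolve): reads MS depth (3) as input, writes depth (1)
    dict(color=None, resolve=None, depth=1, input=3),
    # subpass 2 (transparent): writes color (0), depth-tests against (1)
    dict(color=0, resolve=None, depth=1, input=None),
]
```

The cross-references are the point: subpass 1 reads the very attachment subpass 0 wrote its multisampled depth to, and subpass 2 draws into the resolved color target of subpass 0 while testing against the single-sample depth produced by subpass 1.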
Further, as an optional mode, the method specifically further includes: and establishing a rendering sequence among the first sub-rendering channel, the second sub-rendering channel and the third sub-rendering channel by establishing a sub-rendering channel dependency array in a central processing unit.
In a specific implementation, the resource dependency relationships among the sub-rendering channels may be specified with the VkSubpassDependency structure. This embodiment specifies that the first, second, and third sub-rendering channels render in sequence: after the first rendering process corresponding to the first sub-rendering channel is completed, the second rendering process corresponding to the second sub-rendering channel is executed, and after the second rendering process is completed, the third rendering process corresponding to the third sub-rendering channel is executed. Specifically, a sub-rendering-channel dependency (VkSubpassDependency) array, denoted vkSubDependencies, is created and contains 2 elements.
Setting the VkSubpassDependency array specifically includes the following. srcStageMask and dstStageMask specify which pipeline stages generate data and which use it: the srcStageMask attribute of both elements is set to VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, designating the dependency's source pipeline stage as the color attachment output stage; the dstStageMask attribute is set to VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, designating the dependency's target pipeline stage as the fragment shader, i.e. the fragment shader stage must wait until the previous sub-rendering channel has finished the color attachment output stage before it can continue. srcAccessMask and dstAccessMask specify how each source and target sub-rendering channel accesses the data: the srcAccessMask attribute of both elements is set to VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT, and the dstAccessMask attribute to VK_ACCESS_SHADER_READ_BIT, i.e. the shader read operation is executed after the color attachment write operation completes.
Further, the dependencyFlags attribute is set to VK_DEPENDENCY_BY_REGION_BIT, specifying that the dependency occurs per region of the frame buffer space. srcSubpass and dstSubpass are the indexes into the sub-rendering-channel array of the combined rendering channel: the srcSubpass attribute of vkSubDependencies[0] is set to 0 and its dstSubpass attribute to 1, so that the input attachment transitions from being written as a color attachment to being read in the shader; the srcSubpass attribute of vkSubDependencies[1] is set to 1 and its dstSubpass attribute to 2.
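The two-element dependency array then amounts to the following Python sketch (an illustrative model; the shared stage, access, and flag values mirror the Vulkan enum names in the text, and the two entries chain subpass 0 to 1 and subpass 1 to 2):

```python
# Fields common to both VkSubpassDependency elements.
COMMON = dict(
    srcStageMask="VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT",
    dstStageMask="VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT",
    srcAccessMask="VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT",
    dstAccessMask="VK_ACCESS_SHADER_READ_BIT",
    dependencyFlags="VK_DEPENDENCY_BY_REGION_BIT",
)

vkSubDependencies = [
    dict(COMMON, srcSubpass=0, dstSubpass=1),  # opaque pass -> depth resolve
    dict(COMMON, srcSubpass=1, dstSubpass=2),  # depth resolve -> transparent pass
]
```

The by-region flag is what lets a tile-based mobile GPU satisfy each dependency tile by tile in on-chip cache, instead of flushing whole attachments to video memory between subpasses.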
Further, according to the attachment description array, a rendering channel for rendering the target scene is created and used as the target scene rendering channel. Specifically, the values of a VkRenderPassCreateInfo element, denoted vkRpInfo, are set as follows: the sType attribute value is VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO, the pNext attribute value is nullptr, the attachmentCount attribute value is 4, the pAttachments attribute value is vkAttachments, the subpassCount attribute value is 3, the pSubpasses attribute value is vkSubpassDescs, the dependencyCount attribute value is 2, and the pDependencies attribute value is vkSubDependencies. Based on the attribute information set in vkRpInfo, a rendering channel VkRenderPass for scene rendering is created and recorded as the target scene rendering channel sceneRenderPass.
In a specific implementation, a Vulkan frame buffer VkFramebuffer for target scene rendering is created. Specifically, a VkFramebuffer for rendering the three-dimensional scene is created, that is, a frame buffer compatible with the rendering channel RenderPass: the number and types of its attachments are the same, and the width and height of the target scene to be rendered (generally the screen resolution of the mobile device) are recorded as OriginW and OriginH, respectively. Specifically, with width OriginW and height OriginH, multisampled rendering target resources are created, including a color multisampled rendering target resource MSColorTarget and a depth multisampled rendering target resource MSDepthTarget, so that the color multisampled rendering results are stored in the created MSColorTarget and the depth multisampled rendering results in the created MSDepthTarget. Because the multisampled resources are intermediate data generated during rendering, the MTLStorageMode parameter corresponding to MSColorTarget and MSDepthTarget is set to memoryless, and the number of sampling points is set to n, where n may be 2 or 4.
Further, with width OriginW and height OriginH, non-multisampled rendering target resources are created, including a non-multisampled color rendering target resource ColorTarget and a non-multisampled depth rendering target resource DepthTarget, so that after the multisample antialiasing processing the resolved color rendering result and depth rendering result are stored in ColorTarget and DepthTarget, respectively.
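The four render-target resources can be sketched as follows. RenderTargetDesc, make_target, create_targets, and the example OriginW/OriginH values are illustrative stand-ins for the real image-creation parameters (extent, sample count, storage mode), not the patent's actual API.

```c
#include <stdint.h>

/* Simplified stand-in for the parameters passed to render-target creation. */
typedef struct {
    uint32_t width, height;
    uint32_t samples;    /* n sampling points; 1 means non-multisampled    */
    int      memoryless; /* kept on-chip only, never flushed to memory     */
} RenderTargetDesc;

static RenderTargetDesc make_target(uint32_t w, uint32_t h, uint32_t samples) {
    /* Multisampled targets are intermediate data, so they are memoryless. */
    RenderTargetDesc t = { w, h, samples, samples > 1 };
    return t;
}

/* Example mobile screen resolution; assumed values for illustration. */
enum { OriginW = 1080, OriginH = 2340 };

static RenderTargetDesc MSColorTarget, MSDepthTarget, ColorTarget, DepthTarget;

/* n may be 2 or 4, as in the text. */
static void create_targets(uint32_t n) {
    MSColorTarget = make_target(OriginW, OriginH, n);  /* multisampled color */
    MSDepthTarget = make_target(OriginW, OriginH, n);  /* multisampled depth */
    ColorTarget   = make_target(OriginW, OriginH, 1);  /* resolved color     */
    DepthTarget   = make_target(OriginW, OriginH, 1);  /* resolved depth     */
}
```

All four targets share the frame extent; only the sample count and storage mode differ between the multisampled intermediates and the resolved outputs.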
Further, a VkImageView array denoted attachments is created, comprising 4 elements, where attachments[0] is the VkImageView of the non-multisampled color rendering target resource ColorTarget, attachments[1] is the VkImageView of the non-multisampled depth rendering target resource DepthTarget, attachments[2] is the VkImageView of the multisampled color rendering target resource MSColorTarget, and attachments[3] is the VkImageView of the multisampled depth rendering target resource MSDepthTarget.
Further, a Vulkan frame buffer VkFramebuffer for target scene rendering is created according to the attachment description array and serves as the target scene frame buffer, where the attachment description array corresponds one-to-one with the frame buffer format. Specifically, a variable of the VkFramebufferCreateInfo type, denoted frameBufferInfo, is set: its sType attribute is VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO, its pNext attribute is nullptr, its renderPass attribute is sceneRenderPass, its pAttachments attribute is the attachments array, its attachmentCount attribute is 4, its layers attribute is 1, its width attribute is OriginW, and its height attribute is OriginH; that is, the index relationship of the attachment description array across the different sub-rendering channels is the index relationship of the array used when creating the frame buffer. The VkFramebuffer is then created based on the information set in frameBufferInfo, and this frame buffer is taken as the target scene frame buffer sceneFrameBuffer.
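A sketch of filling frameBufferInfo with the attachment order fixed above. FramebufferCreateInfo, the view handles, and make_frameBufferInfo are simplified stand-ins so the snippet compiles without vulkan.h; the structure-type value is hard-coded only for self-containment.

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in structure-type tag; real code uses the vulkan.h constant. */
enum { VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO = 37 };

/* Simplified mirror of VkFramebufferCreateInfo. */
typedef struct {
    int         sType;
    const void *pNext;
    const void *renderPass;      /* sceneRenderPass                        */
    uint32_t    attachmentCount;
    const void *pAttachments;    /* the 4-element VkImageView array        */
    uint32_t    width, height, layers;
} FramebufferCreateInfo;

/* Stand-ins for the four image views, in the order fixed by the attachment
 * description array: ColorTarget, DepthTarget, MSColorTarget, MSDepthTarget. */
static int viewColor, viewDepth, viewMSColor, viewMSDepth;
static const void *attachments[4];

static FramebufferCreateInfo make_frameBufferInfo(const void *sceneRenderPass,
                                                  uint32_t originW,
                                                  uint32_t originH) {
    FramebufferCreateInfo info;
    attachments[0] = &viewColor;    /* non-multisampled color */
    attachments[1] = &viewDepth;    /* non-multisampled depth */
    attachments[2] = &viewMSColor;  /* multisampled color     */
    attachments[3] = &viewMSDepth;  /* multisampled depth     */

    info.sType           = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
    info.pNext           = NULL;
    info.renderPass      = sceneRenderPass;
    info.attachmentCount = 4;
    info.pAttachments    = attachments;
    info.width           = originW;
    info.height          = originH;
    info.layers          = 1;
    return info;
}
```

The attachment index order here must match the attachment description array used when creating sceneRenderPass, which is the compatibility requirement the text describes.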
203. The rendering command instruction set obtained through recording, together with the preset target scene rendering channel data and target scene frame buffer data, is sent to the graphics processor.
In the above embodiment, as an optional manner, the rendering command instruction set includes a plurality of rendering commands for rendering the target scene, and a call order identifier for characterizing the call order of each of the rendering commands. The rendering commands include a first rendering command and a first vkCmdNextSubpass command corresponding to the first sub-rendering channel for rendering opaque objects, a second rendering command and a second vkCmdNextSubpass command corresponding to the second sub-rendering channel for performing multi-sampling information fusion processing on the multi-sampling depth rendering result, and a third rendering command and a vkCmdEndRenderPass command corresponding to the third sub-rendering channel for rendering transparent objects.
In specific implementation, the CPU calls the vkCmdBeginRenderPass command and correspondingly sets the VkRenderPass and VkFramebuffer of the rendering process using the obtained target scene rendering channel sceneRenderPass and target scene frame buffer sceneFrameBuffer, so as to finish the preparation for target scene rendering. The recording process of the target scene rendering commands is specifically as follows:
executing the first sub-rendering process, recording the rendering commands for rendering opaque objects, and recording a vkCmdNextSubpass command after the first sub-rendering process is executed; executing the second sub-rendering process, which blends the multi-sampling depth rendering results into a single-sampling depth result by calling the subpassLoad function in the shading-language implementation and taking the sample value with index 0 of the multi-sampling depth rendering target resource MSDepthTarget as the blended depth value, and recording a vkCmdNextSubpass command after the second sub-rendering process is executed; and executing the third sub-rendering process, recording the rendering commands for rendering transparent objects, and recording a vkCmdEndRenderPass command after the third sub-rendering process is executed.
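The recording sequence above can be sketched with logging stand-ins. The helper names (drawOpaqueObjects, fuseMultisampleDepth, drawTransparentObjects) are hypothetical; real code would record the identically named Vulkan commands into a VkCommandBuffer, and the depth fusion would happen in a fragment shader via subpassLoad rather than on the CPU.

```c
#include <string.h>

/* Tiny stand-ins that append each recorded command's name to a log so the
 * recording order can be checked; no actual rendering happens here. */
static const char *log_[16];
static int log_n = 0;
static void record(const char *name) { log_[log_n++] = name; }

static void vkCmdBeginRenderPass(void)  { record("begin"); }
static void vkCmdNextSubpass(void)      { record("next"); }
static void vkCmdEndRenderPass(void)    { record("end"); }
static void drawOpaqueObjects(void)     { record("opaque"); }
/* In the shader: subpassLoad of MSDepthTarget, sample index 0. */
static void fuseMultisampleDepth(void)  { record("fuse"); }
static void drawTransparentObjects(void){ record("transparent"); }

static void recordTargetSceneCommands(void) {
    vkCmdBeginRenderPass();    /* bind sceneRenderPass + sceneFrameBuffer */
    drawOpaqueObjects();       /* first sub-rendering process             */
    vkCmdNextSubpass();
    fuseMultisampleDepth();    /* second sub-rendering process            */
    vkCmdNextSubpass();
    drawTransparentObjects();  /* third sub-rendering process             */
    vkCmdEndRenderPass();
}
```

The whole sequence is recorded once on the CPU and submitted to the GPU in one batch, which is the single-submission point the embodiment emphasizes.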
204. The graphics processor performs opaque-object rendering on the target scene by using the first sub-rendering channel according to the first rendering command, obtaining a first rendering result.
205. Multi-sampling information fusion processing is performed on the target scene by using the second sub-rendering channel according to the second rendering command and the multi-sampling depth rendering result, obtained from the cache resource, in the first rendering result, obtaining a second rendering result.
206. Transparent-object rendering is performed on the target scene by using the third sub-rendering channel according to the third rendering command and the second rendering result obtained from the cache resource, and the resulting third rendering result is used as the target scene rendering data.
207. The graphics processor sends the third rendering result, as the target scene rendering data, to the memory or the video memory.
In specific implementation, after the GPU completes rendering of the target scene, the color rendering data of the target scene subjected to the multi-sampling antialiasing processing is saved in the non-multi-sampling color rendering target resource ColorTarget, and the depth rendering data of the target scene is saved in the non-multi-sampling depth rendering target resource DepthTarget.
Depending on the requirements of the actual application scenario, the color rendering data and depth rendering data of the target scene in the non-multisampled color rendering target resource ColorTarget and the non-multisampled depth rendering target resource DepthTarget can be used as the rendering targets of subsequent rendering processes, which then continue; the multisample-antialiased color and depth rendering data in ColorTarget and DepthTarget can also be read as texture resources in subsequent rendering processes. After all rendering operations are completed, the rendered image stored in ColorTarget can be output, for example displayed on the screen of the mobile device.
Therefore, by using the Vulkan multi-subpass RenderPass mechanism, the target scene can be rendered by calling, in order, the rendering commands in the rendering command instruction set together with the corresponding sub-rendering channels, combined with reading the intermediate rendering results cached on the GPU chip during the rendering process. This makes full use of the on-chip cache characteristics of mobile-platform GPUs while reducing data interaction between the CPU and the GPU and accesses to the video memory, thereby improving rendering efficiency and reducing bandwidth resource overhead.
By applying the technical scheme of this embodiment, target scene rendering is performed using the recorded rendering command instruction set together with the target scene rendering channel data and target scene frame buffer data from the CPU, and the resulting target scene rendering data is sent to the memory or the video memory. Compared with existing scene rendering using OpenGLES or Vulkan, in this embodiment, on the basis of scene rendering with the multisample antialiasing technique, the CPU obtains the rendering command instruction set by recording and sends the plurality of rendering commands for target scene rendering to the GPU at one time, so that the GPU can render the target scene according to the plurality of rendering commands in the instruction set and the CPU's target scene rendering channel data and target scene frame buffer data, obtain the target scene rendering data, and send it to the memory or the video memory. Given that OpenGLES-based scene rendering has poor rendering performance and caching capability and that the multisample antialiasing technique is difficult to implement there (that is, the functionality is weak), this embodiment, through optimization of the rendering engine architecture on the basis of the Vulkan-based multisample antialiasing technique for scene rendering, effectively reduces the interaction workload between the CPU and the GPU and thereby effectively reduces rendering power consumption.
Further, as a specific implementation of the method shown in fig. 1, the present embodiment provides an image rendering apparatus, as shown in fig. 3, the apparatus including: a first sending module 33, a rendering module 34, and a second sending module 35.
The first sending module 33 may be configured to send the rendering command instruction set obtained through recording, and the preset target scene rendering channel data and target scene frame buffer data to the graphics processor.
The rendering module 34 may be configured to, by the graphics processor, obtain target scene rendering data by sequentially invoking a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame buffer data.
A second sending module 35, configured to send the target scene rendering data to a memory or a video memory.
In a specific application scenario, as shown in fig. 4, the apparatus may further include: a caching module 31 and a creating module 32.
In a specific application scene, the target scene rendering channel data includes attribute information for performing attribute setting on a Vulkan rendering channel by using a multi-rendering flow mechanism, and the Vulkan rendering channel includes a first sub-rendering channel for rendering an opaque object, a second sub-rendering channel for performing multi-sampling information fusion processing on a multi-sampling depth rendering result, and a third sub-rendering channel for rendering a transparent object.
In a specific application scenario, the cache module 31 may be configured to create, in an on-chip cache of a graphics processor, a cache resource for storing rendering results generated in a rendering process, where the rendering results include a first rendering result output by a first sub-rendering channel for rendering an opaque object, a second rendering result output by a second sub-rendering channel for performing multi-sampling information fusion processing on a multi-sampling depth rendering result in the first rendering result, and a third rendering result output by a third sub-rendering channel for rendering a transparent object.
The second sending module 35 is specifically configured to send, by the graphics processor, the third rendering result as target scene rendering data to the memory or the video memory.
In a specific application scenario, the rendering command instruction set includes a plurality of rendering commands for rendering the target scene and a call order identifier for characterizing the call order of each rendering command; the rendering commands include a first rendering command and a first vkCmdNextSubpass command corresponding to the first sub-rendering channel for rendering opaque objects, a second rendering command and a second vkCmdNextSubpass command corresponding to the second sub-rendering channel for performing multi-sampling information fusion processing on the multi-sampling depth rendering result, and a third rendering command and a vkCmdEndRenderPass command corresponding to the third sub-rendering channel for rendering transparent objects.
In a specific application scenario, the rendering module 34 includes: a first rendering unit 341, a second rendering unit 342, and a third rendering unit 343.
The first rendering unit 341 may be configured to, by the graphics processor, perform opaque object rendering on the target scene by using the first sub-rendering channel according to the first rendering command, to obtain a first rendering result.
The second rendering unit 342 may be configured to perform multi-sampling information fusion processing on the target scene by using a second sub-rendering channel according to the second rendering command and the obtained multi-sampling depth rendering result in the first rendering result in the cache resource, so as to obtain a second rendering result.
The third rendering unit 343 may be configured to perform transparent object rendering on the target scene by using a third sub-rendering channel according to the third rendering command and the obtained second rendering result in the cache resource, and use the obtained third rendering result as target scene rendering data.
In a specific application scenario, the creating module 32 may be configured to create, by the central processing unit, a Vulkan rendering channel and a Vulkan frame buffer for setting the target scene rendering channel data and the target scene frame buffer data.
In a specific application scenario, the creating module 32 includes: a first creating unit 321 and a second creating unit 322.
The first creating unit 321 may be configured to create, by the central processor, a Vulkan rendering channel according to a preset attachment description array.
A second creating unit 322, configured to create a Vulkan frame buffer according to the Vulkan rendering channel and its attachment description array; wherein the attachment description array corresponds one-to-one with the Vulkan frame buffer format.
In a specific application scene, the target scene rendering channel data includes an attachment description array of a Vulkan rendering channel created in a central processing unit, and index relationships with the first sub-rendering channel, the second sub-rendering channel, and the third sub-rendering channel are established according to element index information in the attachment description array.
In a specific application scene, a sub-rendering channel dependency array is created in a central processing unit, and a rendering sequence among the first sub-rendering channel, the second sub-rendering channel and the third sub-rendering channel is established.
It should be noted that other corresponding descriptions of the functional units related to the image rendering apparatus provided in the embodiment of the present application may refer to the corresponding descriptions in fig. 1 and fig. 2, and are not repeated herein.
Based on the method shown in fig. 1 and fig. 2, correspondingly, the embodiment of the present application further provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the image rendering method shown in fig. 1 and fig. 2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the implementation scenarios of the present application.
Based on the foregoing methods shown in fig. 1 and fig. 2 and the virtual device embodiment shown in fig. 3, to achieve the foregoing object, an embodiment of the present application further provides a computer device, which may specifically be a personal computer, a server, a network device, and the like, where the entity device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the image rendering method as shown in fig. 1 and 2.
Optionally, the computer device may further include a user interface, a network interface, a camera, Radio Frequency (RF) circuitry, a sensor, audio circuitry, a WI-FI module, and so forth. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a bluetooth interface, WI-FI interface), etc.
It will be understood by those skilled in the art that the present embodiment provides a computer device structure that is not limited to the physical device, and may include more or less components, or some components in combination, or a different arrangement of components.
The storage medium may further include an operating system and a network communication module. An operating system is a program that manages the hardware and software resources of a computer device, supporting the operation of information handling programs, as well as other software and/or programs. The network communication module is used for realizing communication among components in the storage medium and other hardware and software in the entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, or by hardware. By applying the technical scheme of the application, compared with existing scene rendering using OpenGLES or Vulkan, on the basis of scene rendering with the multisample antialiasing technique, the CPU sends a plurality of scene rendering commands to the GPU at one time by recording a rendering command instruction set, so that the GPU can render the target scene according to the plurality of rendering commands in the instruction set and the CPU's target scene rendering channel data and target scene frame buffer data, obtain the target scene rendering data, and send it to the memory or the video memory. Given the weak rendering performance and caching capability of OpenGLES-based scene rendering and the high difficulty of implementing the multisample antialiasing technique there, the optimization of the rendering engine architecture on the basis of the Vulkan-based multisample antialiasing implementation effectively reduces the interaction workload between the CPU and the GPU, thereby effectively reducing rendering power consumption.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (11)

1. An image rendering method, comprising:
sending a rendering command instruction set obtained through recording, and preset target scene rendering channel data and target scene frame buffer data, to a graphics processor;
the graphics processor obtains target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame buffer data;
and sending the target scene rendering data to a memory or a video memory.
2. The method of claim 1, wherein the target scene rendering channel data comprises attribute information for setting attributes of a Vulkan rendering channel by using a multi-rendering flow mechanism, and the Vulkan rendering channel comprises a first sub-rendering channel for rendering opaque objects, a second sub-rendering channel for performing multi-sampling information fusion processing on multi-sampling depth rendering results, and a third sub-rendering channel for rendering transparent objects.
3. The method according to claim 1 or 2, wherein a cache resource for storing rendering results generated in the rendering process is created in a graphics processor on-chip cache, and the rendering results include a first rendering result output by a first sub-rendering channel for rendering an opaque object, a second rendering result output by a second sub-rendering channel for performing multi-sampling information fusion processing on a multi-sampling depth rendering result in the first rendering result, and a third rendering result output by a third sub-rendering channel for rendering a transparent object;
further comprising: and the graphics processor sends the third rendering result as target scene rendering data to a memory or a video memory.
4. The method of claim 3, wherein the set of rendering command instructions includes a plurality of rendering commands for rendering a target scene, and a call order identification for characterizing a call order of each of the rendering commands;
the rendering commands comprise a first rendering command and a first vkCmdNextSubpass command corresponding to the first sub-rendering channel for rendering opaque objects, a second rendering command and a second vkCmdNextSubpass command corresponding to the second sub-rendering channel for performing multi-sampling information fusion processing on the multi-sampling depth rendering result, and a third rendering command and a vkCmdEndRenderPass command corresponding to the third sub-rendering channel for rendering transparent objects.
5. The method of claim 4, wherein the obtaining, by the graphics processor, of the target scene rendering data by sequentially invoking the plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame buffer data comprises:
the graphics processor performs opaque object rendering on a target scene by using a first sub-rendering channel according to the first rendering command to obtain a first rendering result;
performing multi-sampling information fusion processing on the target scene by using a second sub-rendering channel according to the second rendering command and the obtained multi-sampling depth rendering result in the first rendering result in the cache resource to obtain a second rendering result;
and performing transparent object rendering on the target scene by using a third sub-rendering channel according to the third rendering command and the obtained second rendering result in the cache resource, wherein the obtained third rendering result is used as target scene rendering data.
6. The method of claim 2, further comprising: creating, by a central processing unit, a Vulkan rendering channel and a Vulkan frame buffer for setting the target scene rendering channel data and the target scene frame buffer data, which specifically comprises:
the central processing unit creates a Vulkan rendering channel according to a preset attachment description array;
creating a Vulkan frame buffer according to the Vulkan rendering channel and the attachment description array thereof;
wherein the attachment description array corresponds to the Vulkan frame buffer format one to one.
7. The method according to claim 2 or 6, wherein the target scene rendering channel data comprises an attachment description array of the Vulkan rendering channel created in a central processor, and the index relationship with the first sub-rendering channel, the second sub-rendering channel and the third sub-rendering channel is established according to the element index information in the attachment description array.
8. The method of claim 2 or 6, wherein a rendering order among the first sub-rendering channel, the second sub-rendering channel, and the third sub-rendering channel is established by creating a sub-rendering channel dependency array in a central processor.
9. An image rendering apparatus, comprising:
the first sending module is used for sending the rendering command instruction set obtained through recording, and the preset target scene rendering channel data and target scene frame buffer data, to the graphics processor;
the rendering module is used for the graphics processor to obtain target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame buffer data;
and the second sending module is used for sending the target scene rendering data to a memory or a video memory.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of the image rendering method of any of claims 1 to 8.
11. A readable storage medium on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the image rendering method of any one of claims 1 to 8.
CN202011508323.3A 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium Active CN112652025B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210187253.9A CN114612579A (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium
CN202011508323.3A CN112652025B (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011508323.3A CN112652025B (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210187253.9A Division CN114612579A (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN112652025A true CN112652025A (en) 2021-04-13
CN112652025B CN112652025B (en) 2022-03-22

Family

ID=75355349

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011508323.3A Active CN112652025B (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium
CN202210187253.9A Pending CN114612579A (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210187253.9A Pending CN114612579A (en) 2020-12-18 2020-12-18 Image rendering method and device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (2) CN112652025B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113835890A (en) * 2021-09-24 2021-12-24 厦门雅基软件有限公司 Rendering data processing method, device, equipment and storage medium
CN113934491A (en) * 2021-09-30 2022-01-14 阿里云计算有限公司 Big data processing method and device
CN114760526A (en) * 2022-03-31 2022-07-15 北京百度网讯科技有限公司 Video rendering method and device, electronic equipment and storage medium
CN115908678A (en) * 2023-02-25 2023-04-04 深圳市益玩网络科技有限公司 Skeleton model rendering method and device, electronic equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116185640B (en) * 2023-04-20 2023-08-08 上海励驰半导体有限公司 Image command processing method and device based on multiple GPUs, storage medium and chip

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101295408A (en) * 2007-04-27 2008-10-29 Xin'aote Silicon Valley Video Technology Co., Ltd. 3D videotext rendering method and system
CN101639929A (en) * 2008-06-05 2010-02-03 Arm Ltd. Graphics processing systems
US20100118039A1 (en) * 2008-11-07 2010-05-13 Google Inc. Command buffers for web-based graphics rendering
CN102163337A (en) * 2010-02-18 2011-08-24 Nvidia Corp. System and method for rendering pixels with at least one semi-transparent surface
US20120069036A1 (en) * 2010-09-18 2012-03-22 Makarand Dharmapurikar Method and mechanism for delivering applications over a WAN
CN102722861A (en) * 2011-05-06 2012-10-10 Xin'aote (Beijing) Video Technology Co., Ltd. CPU-based graphics rendering engine and implementation method
CN102810199A (en) * 2012-06-15 2012-12-05 Chengdu Parallel Vision Technology Co., Ltd. Image processing method based on GPU (Graphics Processing Unit)
CN103106680A (en) * 2013-02-16 2013-05-15 Zanqi Technology Development Co., Ltd. Implementation method for three-dimensional graphics rendering based on a cloud computing framework, and cloud service system
US20130135322A1 (en) * 2011-11-30 2013-05-30 Qualcomm Incorporated Switching between direct rendering and binning in graphics processing using an overdraw tracker
US8537166B1 (en) * 2007-12-06 2013-09-17 Nvidia Corporation System and method for rendering and displaying high-resolution images
CN104823215A (en) * 2012-12-28 2015-08-05 Apple Inc. Sprite graphics rendering system
CN105023234A (en) * 2015-06-29 2015-11-04 Jiaxing Huikang Intelligent Technology Co., Ltd. Graphics acceleration method based on storage optimization for embedded systems
CN105279253A (en) * 2015-10-13 2016-01-27 Shanghai Liantong Network Communication Technology Co., Ltd. System and method for increasing the canvas rendering speed of web pages
CN108140234A (en) * 2015-10-23 2018-06-08 Qualcomm Inc. GPU operation algorithm selection based on command stream marking
CN108711182A (en) * 2018-05-03 2018-10-26 Guangzhou Aijiuyou Information Technology Co., Ltd. Rendering processing method, device and mobile terminal device
US20180373556A1 (en) * 2015-12-21 2018-12-27 Intel Corporation Apparatus and method for pattern-driven page table shadowing for graphics virtualization
CA3013624A1 (en) * 2017-08-09 2019-02-09 Daniel Herring Systems and methods for using EGL with an OpenGL API and a Vulkan graphics driver
CN109669739A (en) * 2017-10-16 2019-04-23 Alibaba Group Holding Ltd. Interface rendering method, device, terminal device and storage medium
CN109891388A (en) * 2017-10-13 2019-06-14 Huawei Technologies Co., Ltd. Image processing method and device
CN110163943A (en) * 2018-11-21 2019-08-23 Shenzhen Tencent Information Technology Co., Ltd. Image rendering method and device, storage medium, electronic device
CN110471701A (en) * 2019-08-12 2019-11-19 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image rendering method, apparatus, storage medium and electronic device
CN110992462A (en) * 2019-12-25 2020-04-10 Chongqing University of Arts and Sciences Batch drawing method for 3D simulation scene images based on edge computing
CN111400024A (en) * 2019-01-03 2020-07-10 Baidu Online Network Technology (Beijing) Co., Ltd. Resource calling method and device in rendering process, and rendering engine
CN111508055A (en) * 2019-01-30 2020-08-07 Huawei Technologies Co., Ltd. Rendering method and device
CN111798365A (en) * 2020-06-12 2020-10-20 Perfect World (Beijing) Software Technology Development Co., Ltd. Depth anti-aliasing data reading method, device, equipment and storage medium
CN111798372A (en) * 2020-06-10 2020-10-20 Perfect World (Beijing) Software Technology Development Co., Ltd. Image rendering method, device, equipment and readable medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SIMON_MIAOV: "Vulkan Multi-threaded Rendering", https://www.jianshu.com/p/70731b49beab *
Liu Yangguang: "Photorealistic Real-time Rendering of 3D Face Models in Augmented Reality", China Master's Theses Full-text Database, Information Science and Technology *
沉默的舞台剧: "Vulkan Learning Notes, Day 12: Render Passes", https://blog.csdn.net/qq_35312463/article/details/103981577 *
Han Gaining et al.: "Dynamic Simulation of Aerial Flying Targets and Processing of Flight Images", Computer Development & Applications *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113835890A (en) * 2021-09-24 2021-12-24 Xiamen Yaji Software Co., Ltd. Rendering data processing method, device, equipment and storage medium
CN113934491A (en) * 2021-09-30 2022-01-14 Alibaba Cloud Computing Ltd. Big data processing method and device
CN113934491B (en) * 2021-09-30 2023-08-22 Alibaba Cloud Computing Ltd. Big data processing method and device
CN114760526A (en) * 2022-03-31 2022-07-15 Beijing Baidu Netcom Science and Technology Co., Ltd. Video rendering method and device, electronic equipment and storage medium
CN115908678A (en) * 2023-02-25 2023-04-04 Shenzhen Yiwan Network Technology Co., Ltd. Skeleton model rendering method and device, electronic equipment and storage medium
CN115908678B (en) * 2023-02-25 2023-05-30 Shenzhen Yiwan Network Technology Co., Ltd. Skeleton model rendering method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114612579A (en) 2022-06-10
CN112652025B (en) 2022-03-22

Similar Documents

Publication Publication Date Title
CN112652025B (en) Image rendering method and device, computer equipment and readable storage medium
US8149242B2 (en) Graphics processing apparatus, graphics library module and graphics processing method
US20080278509A1 (en) Graphics Processing Apparatus
JP6073533B1 (en) Optimized multi-pass rendering on tile-based architecture
JP5963940B2 (en) Drawing method, apparatus, and terminal
JP5242789B2 (en) Mapping of graphics instructions to related graphics data in performance analysis
CN112801855B (en) Method and device for scheduling rendering task based on graphics primitive and storage medium
WO2021248705A1 (en) Image rendering method and apparatus, computer program and readable medium
CN110750664B (en) Picture display method and device
US11727632B2 (en) Shader binding management in ray tracing
WO2023197762A1 (en) Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN111145074B (en) Full liquid crystal instrument image rendering method
CN111080761A (en) Method and device for scheduling rendering tasks and computer storage medium
CN106504305A (en) Animation processing method and device
JP5242788B2 (en) Partition-based performance analysis for graphics imaging
CN110647377A (en) Picture processing system, device and medium for human-computer interaction interface
KR102645239B1 (en) GPU kernel optimization with SIMO approach for downscaling using GPU cache
CN114461406A (en) DMA OpenGL optimization method
CN114331808A (en) Action posture storage method, device, medium and electronic equipment
CN113835890A (en) Rendering data processing method, device, equipment and storage medium
US8988444B2 (en) System and method for configuring graphics register data and recording medium
CN115809956B (en) Graphics processor performance analysis method, device, computer equipment and storage medium
CN117369820B (en) Rendering flow chart generation method, device and equipment
WO2022161199A1 (en) Image editing method and device
CN114329046A (en) Dynamic video storage management method, device, medium and electronic equipment based on map

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant
EE01  Entry into force of recordation of patent licensing contract
      Application publication date: 20210413
      Assignee: Beijing Xuanguang Technology Co., Ltd.
      Assignor: Perfect World (Beijing) Software Technology Development Co., Ltd.
      Contract record no.: X2022990000254
      Denomination of invention: Image rendering method, device, computer device and readable storage medium
      Granted publication date: 20220322
      License type: Exclusive License
      Record date: 20220610