WO2023051590A1 - A rendering format selection method and related device

A rendering format selection method and related device

Info

Publication number
WO2023051590A1
Authority
WO
WIPO (PCT)
Prior art keywords
rendering, rendering format, frame, format, channel
Prior art date
Application number
PCT/CN2022/122060
Other languages
English (en)
French (fr)
Inventor
姜泽成
邓一鑫
尚钦
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN202280066021.6A, published as CN118043842A
Priority to EP22874974.3A, published as EP4379647A1
Publication of WO2023051590A1

Classifications

    • G06T 1/00 General purpose image data processing; G06T 1/60 Memory management
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 15/00 3D [Three Dimensional] image rendering; G06T 15/005 General purpose rendering architectures
    • G06T 2210/00 Indexing scheme for image generation or computer graphics; G06T 2210/62 Semi-transparency
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The present application relates to the field of computers, and in particular to a rendering format selection method and a related device.
  • The bottlenecks of game performance and power consumption include the central processing unit (CPU), the graphics processing unit (GPU), and the double data rate synchronous dynamic random access memory (DDR).
  • The render target format (which can be referred to simply as Render Format, or rendering format) describes how many bits are allocated to each pixel and how they are divided among the red channel (R channel), green channel (G channel), blue channel (B channel) and alpha transparency channel (A channel). For example, in RGBA8 each pixel is allocated 32 bits, with red, green, blue and alpha each occupying 8 bits.
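  • As an illustrative aside (not part of the application), the following OpenGL ES sketch shows how such render formats map onto internal formats when allocating a color attachment; the helper name is an assumption, and using float formats as color attachments assumes support such as EXT_color_buffer_float:

        #include <GLES3/gl3.h>

        // GL_RGBA8          -> 32 bits/pixel, R/G/B/A each 8 bits
        // GL_RGBA16F        -> 64 bits/pixel, R/G/B/A each 16-bit float
        // GL_R11F_G11F_B10F -> 32 bits/pixel, no transparency (A) channel
        GLuint createColorAttachment(GLenum internalFormat, GLsizei width, GLsizei height) {
            GLuint rbo = 0;
            glGenRenderbuffers(1, &rbo);
            glBindRenderbuffer(GL_RENDERBUFFER, rbo);
            // The internal format decides how many bits each pixel occupies in memory (DDR).
            glRenderbufferStorage(GL_RENDERBUFFER, internalFormat, width, height);
            return rbo;
        }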
  • In a first aspect, the present application provides a rendering format selection method. The method is applied to a terminal device that includes a graphics processing unit (GPU), and the method includes:
  • acquiring a first rendering instruction set, where the first rendering instruction set is used to draw the objects of a first frame; and, based on the first rendering instruction set not containing an instruction for drawing a transparent object and the current rendering format being a first rendering format, transmitting first rendering format change information to the GPU, so that the GPU draws the objects of a second frame according to the first rendering format change information, wherein the first rendering format includes a transparency channel, the first rendering format change information is used to indicate changing the first rendering format to a second rendering format, the second rendering format does not include a transparency channel, and the second frame is a frame after the first frame.
  • In one possible implementation, the current rendering format is the first rendering format. The first rendering format may be composed of an R channel, a G channel, a B channel, and an A channel; for example, the first rendering format may be RGBA8 or RGBA16F.
  • In a possible implementation, the transparency feature of the rendering instructions in the first rendering instruction set can be detected. If the first rendering instruction set does not contain instructions for drawing transparent objects, it can be considered that transparent objects do not currently need to be rendered, and therefore a rendering format that includes a transparency channel is not needed; conversely, continuing to render with a format that includes a transparency channel would increase the DDR overhead of the rendering task without improving image quality.
  • Specifically, the transparency feature of the rendering commands in the first rendering command set can be determined from command features, for example whether commands such as Disable(GL_BLEND) appear in the main-scene RenderPass commands, which indicates that no transparent object is drawn.
  • A decision is then made from this feature information; for example, if Disable(GL_BLEND) is called in frame N (that is, the first frame), it can be considered that the first rendering instruction set does not contain instructions for drawing transparent objects.
  • In this embodiment of the application, the transparency feature of the rendering instructions of the first frame is used as a reference for selecting the rendering format of the second frame: since adjacent frames are usually similar, if the first rendering instruction set does not contain instructions for drawing transparent objects, it can be considered that the second frame also does not need to render transparent objects.
  • Meanwhile, the current rendering format is the first rendering format (which includes the transparency channel); continuing to render the second frame based on the first rendering format would add unnecessary DDR overhead.
  • The second rendering format may be R11G11B10F.
  • Therefore, the CPU needs to transmit to the GPU the first rendering format change information indicating that the first rendering format is to be changed to the second rendering format, so that the rendering format adopted by the GPU when rendering the second frame is changed from the first rendering format to the second rendering format.
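  • A speculative C++ sketch of this CPU-side decision is shown below; the structure and function names are invented for illustration and are not defined by the application:

        #include <GLES3/gl3.h>

        // Hypothetical per-frame record filled in by an instruction-interception layer.
        struct FrameFeatures {
            bool   blendDisabled;    // Disable(GL_BLEND) was seen in the main-scene RenderPass of frame N
            GLenum currentFormat;    // render format used for frame N, e.g. GL_RGBA16F
        };

        // Choose the render format for frame N+1 (the second frame) from the features of frame N.
        GLenum selectFormatForNextFrame(const FrameFeatures& frameN) {
            // No transparent objects were drawn, yet the current format carries an alpha channel:
            // switch to a format without a transparency channel to cut DDR traffic.
            if (frameN.blendDisabled &&
                (frameN.currentFormat == GL_RGBA8 || frameN.currentFormat == GL_RGBA16F)) {
                return GL_R11F_G11F_B10F;
            }
            return frameN.currentFormat;   // otherwise keep the current rendering format
        }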
  • the second frame is an adjacent frame after the first frame.
  • For example, the CPU can detect the rendering format and transparency features of frame N (that is, the first frame above). Whether a transparent object is drawn can be decided from command features such as whether Disable/Enable(GL_BLEND), BlendEquation, BlendFunc and other instructions appear in the main-scene RenderPass commands, and a decision is made from this feature information. For example, if frame N calls Disable(GL_BLEND) and the rendering format is RGBA16F, the rendering format is dynamically switched for frame N+1 (that is, the second frame above) to a format without a transparency channel, such as R11G11B10F, and frame N+1 is drawn based on the new format.
  • An embodiment of the present application provides a rendering format selection method, including: acquiring a first rendering instruction set, where the first rendering instruction set is used to draw the objects of a first frame; and, based on the first rendering instruction set not containing an instruction for drawing a transparent object and the current rendering format being a first rendering format, transmitting first rendering format change information to the GPU, so that the GPU draws the objects of a second frame according to the first rendering format change information, wherein the first rendering format includes a transparency channel, the first rendering format change information is used to indicate changing the first rendering format to a second rendering format, the second rendering format does not include a transparency channel, and the second frame is a frame after the first frame.
  • In this way, the transparency feature of the rendering instructions of the first frame is used as a reference for selecting the rendering format of the second frame. If the first rendering instruction set does not contain instructions for drawing transparent objects, it can be considered that the second frame also does not need to render transparent objects. Meanwhile, the current rendering format is the first rendering format (which includes the transparency channel), and continuing to render the second frame based on the first rendering format would add unnecessary DDR overhead. Therefore, the GPU adopts the second rendering format, which does not include a transparency channel, when rendering the second frame, which reduces DDR overhead.
  • In a second aspect, the present application provides a rendering format selection method. The method is applied to a terminal device that includes a graphics processing unit (GPU), and the method includes:
  • acquiring a first rendering instruction set, where the first rendering instruction set is used to draw the objects of a first frame; and, based on the first rendering instruction set including an instruction for drawing a transparent object and the current rendering format being a second rendering format, transmitting second rendering format change information to the GPU, so that the GPU draws the objects of a second frame according to the second rendering format change information, wherein the second rendering format does not include a transparency channel, the second rendering format change information is used to indicate changing the second rendering format to a first rendering format, the first rendering format includes a transparency channel, and the second frame is a frame after the first frame.
  • In one possible implementation, the current rendering format is the second rendering format, which does not include a transparency channel. The second rendering format may be composed of an R channel, a G channel, and a B channel; for example, the second rendering format may be R11G11B10F.
  • In a possible implementation, the transparency feature of the rendering instructions in the first rendering instruction set can be detected. If the first rendering instruction set includes instructions for drawing transparent objects, it can be considered that transparent objects currently need to be rendered, and therefore a rendering format that includes a transparency channel is required; conversely, rendering with a format that does not include a transparency channel would greatly reduce the rendering quality.
  • Specifically, the transparency feature of the rendering commands in the first rendering command set can be determined from command features, for example whether commands such as Enable(GL_BLEND), BlendEquation, and BlendFunc appear in the main-scene RenderPass commands, which indicates that a transparent object is drawn.
  • A decision is then made from this feature information; for example, if frame N (that is, the first frame) calls Enable(GL_BLEND), it can be considered that the first rendering instruction set contains instructions for drawing transparent objects.
  • In this embodiment of the application, the transparency feature of the rendering instructions of the first frame is used as a reference for selecting the rendering format of the second frame: if the first rendering instruction set contains instructions for drawing transparent objects, it can be considered that the second frame also needs to render transparent objects.
  • Meanwhile, the current rendering format is the second rendering format (which does not include the transparency channel); continuing to render the second frame based on the second rendering format would greatly reduce the rendering quality.
  • the first rendering format may be composed of an R channel, a G channel, a B channel, and an A channel.
  • the first rendering format can be RGBA8 or RGBA16F.
  • Therefore, the CPU needs to transmit to the GPU the second rendering format change information indicating that the second rendering format is to be changed to the first rendering format, so that the rendering format adopted by the GPU when rendering the second frame is changed from the second rendering format to the first rendering format.
  • the second frame is an adjacent frame after the first frame.
  • For example, the CPU can detect the rendering format and transparency features of frame N (that is, the first frame above). Whether a transparent object is drawn can be decided from command features such as whether Disable/Enable(GL_BLEND), BlendEquation, BlendFunc and other instructions appear in the main-scene RenderPass commands, and a decision is made from this feature information. For example, if frame N calls Enable(GL_BLEND) and the rendering format is R11G11B10, a decision to dynamically switch the rendering format is made for frame N+1 (that is, the second frame above): R11G11B10 is switched to the RGBA16F format, and frame N+1 is drawn based on the newly substituted RGBA16F rendering format.
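  • A speculative counterpart to the earlier downgrade sketch, again with invented names, would make the reverse decision when blending is enabled:

        #include <GLES3/gl3.h>

        // If frame N enables blending while rendering into an alpha-less format,
        // frame N+1 is switched back to a format with a transparency channel.
        GLenum selectFormatOnBlendEnabled(bool blendEnabled, GLenum currentFormat) {
            if (blendEnabled && currentFormat == GL_R11F_G11F_B10F) {
                return GL_RGBA16F;   // first rendering format, includes the transparency channel
            }
            return currentFormat;
        }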
  • An embodiment of the present application provides a rendering format selection method. The method is applied to a terminal device that includes a graphics processing unit (GPU), and the method includes: acquiring a first rendering instruction set, where the first rendering instruction set is used to draw the objects of the first frame; and, based on the first rendering instruction set including instructions for drawing transparent objects and the current rendering format being the second rendering format, transmitting second rendering format change information to the GPU, so that the GPU draws the objects of the second frame according to the second rendering format change information, wherein the second rendering format does not include a transparency channel, the second rendering format change information is used to indicate that the second rendering format is changed to a first rendering format, the first rendering format includes a transparency channel, and the second frame is a frame after the first frame.
  • In this way, the transparency feature of the rendering instructions of the first frame is used as a reference for selecting the rendering format of the second frame. If the first rendering instruction set contains instructions for drawing transparent objects, it can be considered that the second frame also needs to render transparent objects. Meanwhile, the current rendering format is the second rendering format (which does not include the transparency channel), and continuing to render the second frame based on the second rendering format would greatly reduce the rendering quality. Therefore, the present application adopts the first rendering format, which includes the transparency channel, when rendering the second frame, which improves the rendering quality.
  • RGBA8 allocates 32 bits to each pixel, with red, green, blue and transparency each occupying 8 bits, while RGBA16F allocates 64 bits to each pixel, with red, green, blue and transparency each occupying 16 bits; that is, the pixel value representation precision supported by RGBA16F is greater than the pixel value representation precision supported by RGBA8.
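  • For a rough sense of the DDR cost difference, here is an illustrative back-of-the-envelope calculation (the 1920x1080 resolution and figures are not from the application):

        // Size of one 1920x1080 color buffer in each render format.
        constexpr long long kPixels          = 1920LL * 1080LL;  // 2,073,600 pixels
        constexpr long long kBytesRGBA8      = kPixels * 4;      // ~8.3 MB: 4 x 8-bit channels
        constexpr long long kBytesRGBA16F    = kPixels * 8;      // ~16.6 MB: 4 x 16-bit float channels
        constexpr long long kBytesR11G11B10F = kPixels * 4;      // ~8.3 MB: 32 bits, no alpha channel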
  • In a possible implementation, the pixel value representation precision required by the rendering may be higher than the pixel value representation precision supported by the current rendering format. In that case, the rendering format needs to be changed to a rendering format that supports a higher pixel value representation precision, so as to improve the rendering quality.
  • Specifically, the drawing result of the second frame can be obtained. The drawing result can include the value of each channel of each pixel on the rendering interface; alternatively, the rendering interface can be divided into multiple tile regions (as shown in FIG. 5), and the drawing result can include the values of each channel of one or more pixel points (such as the center point) in each tile on the rendering interface.
  • the CPU may determine whether the current rendering format meets the pixel value representation precision required by the second frame based on the rendering result.
  • If the drawing result indicates that the second frame has pixels whose required pixel value representation precision exceeds the upper limit of the pixel value representation precision supported by the first rendering format, third rendering format change information is transmitted to the GPU, so that the GPU draws the objects of a third frame according to the third rendering format change information, wherein the third rendering format change information is used to indicate that the first rendering format is changed to a third rendering format, the upper limit of pixel value representation precision supported by the third rendering format is greater than the upper limit of pixel value representation precision supported by the first rendering format, and the third frame is a frame after the second frame.
  • the second rendering format is RGBA8, and the third rendering format is RGBA16F.
  • For example, the rendering format can be switched from RGBA8 to RGBA16F.
  • For example, frame N maps its Render Buffer (for example, a 1920*1080 Render Buffer), divides it into 253 tiles of 128*64 pixels, reads the center value of each tile from GPU memory, and checks whether the value equals 255. If values of 255 are present, it is considered that there is a highlight area, the RGBA8 rendering format cannot meet the rendering quality requirement, and it is necessary to switch from RGBA8 to RGBA16F.
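  • A minimal sketch of this tile-center check, assuming the frame has been resolved into the currently bound read framebuffer in an 8-bit format (the glReadPixels read-back is an assumption; the application only states that tile center values are read from GPU memory):

        #include <GLES3/gl3.h>

        // Sample the center pixel of each tile and report whether any channel is saturated at 255.
        // e.g. hasSaturatedTileCenter(1920, 1080, 128, 64) for the example above.
        bool hasSaturatedTileCenter(GLsizei width, GLsizei height, GLsizei tileW, GLsizei tileH) {
            for (GLsizei y = tileH / 2; y < height; y += tileH) {
                for (GLsizei x = tileW / 2; x < width; x += tileW) {
                    unsigned char rgba[4] = {0, 0, 0, 0};
                    glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
                    if (rgba[0] == 255 || rgba[1] == 255 || rgba[2] == 255) {
                        return true;   // a highlight area: RGBA8 clips here, so switch to RGBA16F
                    }
                }
            }
            return false;   // no saturated centers: RGBA8 precision is sufficient
        }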
  • In a possible implementation, the rendering format needs to be changed to a rendering format that supports a lower pixel value representation precision, so as to reduce DDR overhead.
  • In a possible implementation, the rendering result of the second frame is acquired, and based on the rendering result indicating that there is no pixel in the second frame whose required pixel value representation precision exceeds the upper limit of the pixel value representation precision supported by the first rendering format, fourth rendering format change information is transmitted to the GPU, so that the GPU draws the objects of the third frame according to the fourth rendering format change information, wherein the fourth rendering format change information is used to indicate that the first rendering format is changed to a fourth rendering format, the upper limit of pixel value representation precision supported by the fourth rendering format is smaller than the upper limit of pixel value representation precision supported by the first rendering format, and the third frame is a frame after the second frame.
  • the third rendering format is RGBA16F
  • the second rendering format is RGBA8.
  • In another aspect, the present application provides a rendering format selection apparatus. The apparatus is applied to a terminal device that includes a graphics processing unit (GPU), and the apparatus includes:
  • An instruction acquiring module configured to acquire a first rendering instruction set, the first rendering instruction set being used to draw the object of the first frame;
  • An instruction format conversion module configured to, based on the first rendering instruction set not including an instruction for drawing a transparent object and the current rendering format being the first rendering format, transmit the first rendering format change information to the GPU, so that the GPU draws the objects of the second frame according to the first rendering format change information, wherein the first rendering format includes a transparency channel, the first rendering format change information is used to indicate that the first rendering format is changed to a second rendering format, the second rendering format does not include a transparency channel, and the second frame is a frame after the first frame.
  • the first rendering format is composed of R channel, G channel, B channel and A channel
  • the second rendering format is composed of R channel, G channel and B channel.
  • the first rendering format is RGBA8 or RGBA16F
  • the second rendering format is R11G11B10F.
  • the second frame is an adjacent frame after the first frame.
  • In another aspect, the present application provides a rendering format selection apparatus. The apparatus is applied to a terminal device that includes a graphics processing unit (GPU), and the apparatus includes:
  • An instruction acquiring module configured to acquire a first rendering instruction set, the first rendering instruction set being used to draw the object of the first frame;
  • An instruction format conversion module configured to, based on the first rendering instruction set including instructions for drawing transparent objects and the current rendering format being the second rendering format, transmit the second rendering format change information to the GPU, so that the GPU draws the objects of the second frame according to the second rendering format change information, wherein the second rendering format does not include a transparency channel, the second rendering format change information is used to indicate that the second rendering format is changed to a first rendering format, the first rendering format includes a transparency channel, and the second frame is a frame after the first frame.
  • the first rendering format is composed of R channel, G channel, B channel and A channel
  • the second rendering format is composed of R channel, G channel and B channel.
  • the first rendering format is RGBA8 or RGBA16F
  • the second rendering format is R11G11B10F.
  • the instruction acquiring module is further configured to acquire the rendering result of the second frame
  • The instruction format conversion module is further configured to, based on the rendering result indicating that there are pixels in the second frame whose required pixel value representation precision exceeds the upper limit of the pixel value representation precision supported by the first rendering format, transmit third rendering format change information to the GPU, so that the GPU draws the objects of the third frame according to the third rendering format change information, wherein the third rendering format change information is used to indicate that the first rendering format is changed to a third rendering format, the upper limit of pixel value representation precision supported by the third rendering format is greater than the upper limit of pixel value representation precision supported by the first rendering format, and the third frame is a frame after the second frame.
  • the second rendering format is RGBA8, and the third rendering format is RGBA16F.
  • the instruction acquiring module is further configured to acquire the rendering result of the second frame
  • The instruction format conversion module is further configured to, based on the rendering result indicating that there is no pixel in the second frame whose required pixel value representation precision exceeds the upper limit of the pixel value representation precision supported by the first rendering format, transmit fourth rendering format change information to the GPU, so that the GPU draws the objects of the third frame according to the fourth rendering format change information, wherein the fourth rendering format change information is used to indicate that the first rendering format is changed to a fourth rendering format, the upper limit of pixel value representation precision supported by the fourth rendering format is smaller than the upper limit of pixel value representation precision supported by the first rendering format, and the third frame is a frame after the second frame.
  • the third rendering format is RGBA16F
  • the second rendering format is RGBA8.
  • In another aspect, the present application provides a terminal device. The terminal device includes a processor and a memory, and the processor obtains the code stored in the memory to execute the method of any one of the first aspect and its optional implementations, or any one of the second aspect and its optional implementations.
  • In another aspect, the present application provides a non-volatile computer-readable storage medium. The non-volatile computer-readable storage medium contains computer instructions for performing the method of any one of the above first aspect and its optional implementations, or any one of the second aspect and its optional implementations.
  • The present application also provides a computer program product, which includes computer instructions; the processor of a host device executes the computer instructions to perform the operations performed by the processor in any of the possible implementations of the embodiments.
  • An embodiment of the present application provides a rendering format selection method, including: acquiring a first rendering instruction set, where the first rendering instruction set is used to draw the objects of a first frame; and, based on the first rendering instruction set not containing an instruction for drawing a transparent object and the current rendering format being a first rendering format, transmitting first rendering format change information to the GPU, so that the GPU draws the objects of a second frame according to the first rendering format change information, wherein the first rendering format includes a transparency channel, the first rendering format change information is used to indicate changing the first rendering format to a second rendering format, the second rendering format does not include a transparency channel, and the second frame is a frame after the first frame.
  • In this way, the transparency feature of the rendering instructions of the first frame is used as a reference for selecting the rendering format of the second frame. If the first rendering instruction set does not contain instructions for drawing transparent objects, it can be considered that the second frame also does not need to render transparent objects. Meanwhile, the current rendering format is the first rendering format (which includes the transparency channel), and continuing to render the second frame based on the first rendering format would add unnecessary DDR overhead. Therefore, the GPU adopts the second rendering format, which does not include a transparency channel, when rendering the second frame, which reduces DDR overhead.
  • Figure 1a is a schematic diagram of a system architecture
  • Figure 1b is a schematic diagram of a system architecture
  • Figure 1c is a schematic diagram of a system architecture
  • FIG. 2 is a block diagram of a computing device of the technology described in the embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for selecting a rendering format provided by an embodiment of the present application
  • FIG. 4 is a schematic flowchart of a method for selecting a rendering format provided by an embodiment of the present application
  • FIG. 5 is a schematic diagram of a rendering result provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a method for selecting a rendering format provided by an embodiment of the present application
  • FIG. 7 is a schematic diagram of an apparatus for selecting a rendering format provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an apparatus for selecting a rendering format provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a terminal device provided by the present application.
  • the client program needs to call the API interface to achieve 3D rendering.
  • The rendering commands and data are first cached in random access memory (RAM); under certain conditions, these commands and data are sent via the CPU clock to the video random access memory (VRAM).
  • The data and commands in the VRAM are then used to complete the rendering of the graphics, and the results are stored in the frame buffer; the frames in the frame buffer are eventually sent to the display, showing the result.
  • It is also possible to send data directly from RAM to VRAM, or directly from the frame buffer to RAM, without going through the CPU clock (for example, via VBO and PBO objects in OpenGL).
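  • As a small illustrative OpenGL ES sketch of the buffer objects mentioned here (names and usage pattern are assumptions, not part of the application):

        #include <GLES3/gl3.h>

        // Upload vertex data to GPU-accessible memory once through a vertex buffer object (VBO),
        // so subsequent draw calls do not resend it from system RAM every frame.
        GLuint uploadVertices(const float* vertices, GLsizeiptr sizeInBytes) {
            GLuint vbo = 0;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeInBytes, vertices, GL_STATIC_DRAW);
            return vbo;
        }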
  • FIG. 1b is a schematic diagram of the software and hardware modules and their positions in the technology stack of the embodiment of the application.
  • The architecture diagram shows a typical smart-device game rendering scene. It mainly includes a feature recognition module, a decision-making module, and a rendering instruction reconstruction module.
  • the program code of the embodiment of the present application may exist in the Framework layer of the Android platform software, between the Framework and the DDK.
  • This application first performs feature recognition along multiple dimensions, for example: whether the RenderPass renders transparent objects, and whether the image quality features rendered in real time match the Render Format. After the rendering instruction stream of the game/APP is intercepted, the feature recognition module performs feature analysis, a decision is made from the analyzed features, and the instructions are reorganized according to the decision result.
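  • A schematic C++ sketch of how these three stages might chain together; the application names the modules but does not define their interfaces, so the types and constants below are invented purely for illustration:

        #include <vector>

        struct GlCommand { int id; };              // one intercepted rendering command
        constexpr int kEnableBlend = 1;            // stand-in for an Enable(GL_BLEND) command

        struct Features { bool drawsTransparentObjects; };
        struct Decision { bool switchFormat; };

        // Feature recognition: scan the intercepted instruction stream of one frame.
        Features recognizeFeatures(const std::vector<GlCommand>& cmds) {
            Features f{false};
            for (const GlCommand& c : cmds) {
                if (c.id == kEnableBlend) f.drawsTransparentObjects = true;
            }
            return f;
        }

        // Decision-making: compare the recognized features with the current render format.
        Decision decide(const Features& f, bool currentFormatHasAlpha) {
            Decision d{false};
            d.switchFormat = (f.drawsTransparentObjects != currentFormatHasAlpha);
            return d;
        }

        // Instruction reconstruction: reorganize the stream according to the decision
        // (a real implementation would rewrite the render-target allocation commands).
        std::vector<GlCommand> rebuild(std::vector<GlCommand> cmds, const Decision& d) {
            if (d.switchFormat) { /* rewrite format-related commands here */ }
            return cmds;
        }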
  • Figure 1c is a schematic diagram of the implementation form of this application.
  • FIG. 2 is a block diagram illustrating a computing device 30 that may implement the techniques described in this disclosure.
  • the computing device 30 may be the rendering instruction processing device in the embodiment of the present application.
  • Examples of the computing device 30 include but are not limited to wireless devices, mobile or cellular phones (including so-called smart phones), personal digital assistants (personal digital assistant, PDA) , video game consoles including video displays, mobile video game devices, mobile video conferencing units, laptop computers, desktop computers, television set-top boxes, tablet computing devices, e-book readers, fixed or mobile media players, and the like.
  • Computing device 30 includes a central processing unit (CPU) 32 with CPU memory 34, a graphics processing unit (GPU) 36 with GPU memory 38 and one or more shader units 40, a display unit 42, a display buffer unit 44, a user interface unit 46, and a storage unit 48.
  • Storage unit 48 may store GPU driver 50 with compiler 54, GPU program 52, and natively compiled GPU program 56.
  • CPU 32 examples include, but are not limited to, digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuits.
  • CPU 32 and GPU 36 are illustrated as separate units in the example of FIG. 2, in some examples, CPU 32 and GPU 36 may be integrated into a single unit.
  • CPU 32 may execute one or more application programs. Examples of applications include web browsers, email applications, spreadsheets, video games, audio and/or video capture, playback, or editing applications, or other programs that initiate the generation of image data to be presented via display unit 42.
  • CPU 32 includes CPU memory 34.
  • CPU memory 34 may represent on-chip storage or memory used when executing machine or object code.
  • CPU memories 34 may each include hardware memory registers capable of storing a fixed number of digital bits.
  • However, CPU 32 may be able to read values from or write values to the local CPU memory 34 more rapidly than it can read values from or write values to the storage unit 48, which may be accessed, for example, via a system bus.
  • GPU 36 represents one or more special purpose processors for performing graphics operations. That is, GPU 36 may be, for example, a dedicated hardware unit with fixed functionality and programmable components for rendering graphics and executing GPU applications. GPU 36 may also include a DSP, general purpose microprocessor, ASIC, FPGA, or other equivalent integrated or discrete logic circuitry.
  • GPU 36 also includes GPU memory 38, which may represent on-chip storage or memory used when executing machine or object code.
  • GPU memories 38 may each include hardware memory registers capable of storing a fixed number of digital bits.
  • Likewise, GPU 36 may be able to read values from or write values to the local GPU memory 38 more rapidly than it can read values from or write values to the storage unit 48, which may be accessed, for example, via the system bus.
  • GPU 36 also includes shader unit 40.
  • shading unit 40 may be configured as a programmable pipeline of processing components.
  • shading unit 40 may be referred to as a "shader processor" or “unified shader,” and may perform geometry, vertex, pixel, or other shading operations to render graphics.
  • Shading unit 40 may include one or more components not specifically shown in FIG. 2 for clarity, such as components for fetching and decoding instructions, one or more arithmetic logic units (ALUs) for performing arithmetic calculations, and one or more memories, caches, or registers.
  • Display unit 42 represents a unit capable of displaying video data, images, text or any other type of data.
  • the display unit 42 may include a liquid crystal display (liquid crystal display, LCD), a light emitting diode (light emitting diode, LED) display, an organic LED (organic light-emitting diode, OLED), an active matrix OLED (active-matrix organic light-emitting diode, AMOLED) display and so on.
  • Display buffer unit 44 represents a memory or storage device dedicated to storing data for display unit 42 for rendering images, such as photographs or video frames.
  • Display buffer unit 44 may represent a two-dimensional buffer containing multiple storage locations. The number of storage locations within display buffer unit 44 may be substantially similar to the number of pixels to be displayed on display unit 42 . For example, if display unit 42 is configured to include 640x480 pixels, display buffer unit 44 may include 640x480 storage locations.
  • Display buffer unit 44 may store the final pixel value for each of the pixels processed by GPU 36.
  • Display unit 42 may retrieve the final pixel values from display buffer unit 44 and display the final image based on the pixel values stored in display buffer unit 44 .
  • User interface unit 46 represents a unit that a user may use to interact with or otherwise interface to communicate with other units of computing device 30 (e.g., CPU 32). Examples of user interface unit 46 include, but are not limited to, trackballs, mice, keyboards, and other types of input devices. The user interface unit 46 may also be a touch screen and may be incorporated as part of the display unit 42 .
  • Storage unit 48 may include one or more computer-readable storage media. Examples of storage unit 48 include, but are not limited to, random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer or processor.
  • the memory unit 48 may contain instructions for causing the CPU 32 and/or the GPU 36 to execute the functions of the CPU 32 and the GPU 36 in the present invention.
  • storage unit 48 may be considered a non-transitory storage medium.
  • the term "non-transitory” may indicate that the storage medium is not embodied in a carrier wave or propagated signal. However, the term “non-transitory” should not be interpreted to mean that storage unit 48 is not removable.
  • storage unit 48 may be removed from computing device 30 and moved to another device.
  • a storage unit substantially similar to storage unit 48 may be inserted into computing device 30.
  • non-transitory storage media may store data (eg, in RAM) that may change over time.
  • the storage unit 48 stores a GPU driver 50 and a compiler 54 , a GPU program 52 and a natively compiled GPU program 56 .
  • GPU driver 50 represents a computer program or executable code that provides an interface to access GPU 36.
  • CPU 32 executes GPU driver 50 or portions thereof to interface with GPU 36, and for this reason GPU driver 50 is shown in the example of FIG. 2 as GPU driver 50 within CPU 32 marked with a dashed box.
  • GPU driver 50 may access programs executed by CPU 32 or other executable files, including GPU program 52.
  • GPU program 52 may comprise (eg, using an application programming interface (API)) code written in a high-level (HL) programming language.
  • An example of an API includes Open Graphics Library (OpenGL).
  • Generally, an API consists of a predetermined, standardized set of commands executed by associated hardware. API commands allow a user to instruct the hardware components of the GPU to execute commands without the user needing to know the details of those hardware components.
  • GPU program 52 may call or otherwise incorporate one or more functions provided by GPU driver 50 .
  • the CPU 32 generally executes the program in which the GPU program 52 is embedded, and upon encountering the GPU program 52, passes the GPU program 52 to the GPU driver 50 (eg, in the form of a command stream).
  • CPU 32 executes GPU driver 50 to process GPU program 52 in this context.
  • GPU driver 50 may process GPU program 52 by compiling the GPU program into object or machine code executable by GPU 36. This object code is shown in the example of FIG. 2 as a natively compiled GPU program 56 .
  • compiler 54 may operate in real-time or near real-time to compile GPU program 52 during execution of the program in which GPU program 52 is embedded.
  • Compiler 54 generally represents a module that compiles HL instructions defined in accordance with the HL programming language into LL instructions of a low-level (LL) programming language. After compilation, these LL instructions can be executed by a particular type of processor or other hardware such as an FPGA or ASIC (including, for example, CPU 32 and GPU 36).
  • compiler 54 may receive GPU program 52 from CPU 32 while executing HL code that includes GPU program 52.
  • Compiler 54 may compile GPU program 52 into a natively compiled GPU program 56 conforming to the LL programming language.
  • Compiler 54 then outputs a natively compiled GPU program 56 containing LL instructions.
  • The GPU 36 generally receives the natively compiled GPU program 56 (as shown by the dashed box within the GPU 36 labeled "Natively Compiled GPU Program 56"), after which, in some instances, the GPU 36 immediately renders the image and outputs the rendered portion of the image to display buffer unit 44.
  • GPU 36 may generate a plurality of primitives to be displayed at display unit 42.
  • Primitives can contain one or more lines (including curves, splines, etc.), points, circles, ellipses, polygons (where a polygon is typically defined as a collection of one or more triangles), or any other two-dimensional (2D) primitive .
  • Primitives may also refer to three-dimensional (3D) primitives, such as cubes, cylinders, spheres, cones, pyramids, tori, and the like.
  • the term “primitive” refers to any geometric shape or element that is rendered by GPU 36 for display as an image (or frame in the context of video data) via display unit 42.
  • GPU 36 may transform the primitive or other state data for the primitive (e.g., data that defines the texture, brightness, camera configuration, or other aspects of the primitive) by applying one or more model transformations (which may also be specified in the state data) into the so-called "world space". Once transformed, GPU 36 may apply the active camera's view transformation (which may also be specified in the state data defining the camera) to transform the coordinates of primitives and lights into camera or eye space. GPU 36 can also perform vertex shading to render the primitive's appearance under any available light. GPU 36 may perform vertex shading in one or more of the aforementioned model, world, or view spaces (although vertex shading is typically performed in world space).
  • the GPU 36 can perform projection to project the image into (as an example) a unit cube with poles at (-1,-1,-1) and (1,1,1). This unit cube is often called the canonical view volume.
  • GPU 36 may perform clipping to remove any primitives that do not at least partially reside in the view volume. In other words, the GPU 36 can remove any primitives that are not within the camera frame.
  • the GPU 36 can then map the primitive's coordinates from the view volume to screen space, effectively reducing the primitive's 3D primitives to the screen's 2D coordinates.
  • GPU 36 may then rasterize the primitive. For example, GPU 36 can calculate and set the color of the pixels of the screen covered by the primitive. During rasterization, GPU 36 may apply any textures (where textures may include state data) associated with the primitive. GPU 36 may also perform Z-buffer algorithms (also known as depth testing) during rasterization to determine if any primitives and/or objects are obscured by any other objects. The Z-buffer algorithm orders the primitives according to their depth so that the GPU 36 knows the order in which to draw each primitive to the screen. GPU 36 outputs the rendered pixels to display buffer unit 44.
  • Display buffer unit 44 may temporarily store rendered pixels of a rendered image until the entire image has been rendered.
  • display buffer unit 44 may be considered an image frame buffer.
  • Display buffer unit 44 may then transmit the rendered image to be displayed on display unit 42 .
  • GPU 36 may output the rendered portion of the image directly to display unit 42 for display, rather than temporarily storing the image in display buffer unit 44.
  • Display unit 42 may then display the images stored in display buffer unit 78 .
  • Texture: one or several two-dimensional images representing the surface details of an object, also known as texture mapping. When a texture is mapped onto the surface of an object in a specific way, it can make the object look more realistic.
  • RenderBuffer: a memory buffer that stores a series of bytes, integers, pixels, and so on; all rendering data can be stored directly in a RenderBuffer without any format conversion.
  • Render Target: in computer 3D graphics, a render target is a feature of modern graphics processing units (GPUs) that allows a 3D scene to be rendered into an intermediate memory buffer (a RenderBuffer) or a Render Target Texture instead of the framebuffer or back buffer; the Render Target Texture can then be processed by a pixel shader to apply additional effects to the final image before it is displayed.
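  • A minimal OpenGL ES render-to-texture sketch of this idea (illustrative only; the texture size, format and function name are arbitrary assumptions):

        #include <GLES3/gl3.h>

        // Create a framebuffer whose color attachment is a texture, so the scene is rendered
        // into the texture (the render target) rather than the default framebuffer.
        GLuint createRenderTargetTexture(GLsizei w, GLsizei h, GLenum internalFormat, GLuint* outFbo) {
            GLuint tex = 0, fbo = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexStorage2D(GL_TEXTURE_2D, 1, internalFormat, w, h);   // e.g. GL_RGBA8 or GL_RGBA16F

            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

            *outFbo = fbo;
            return tex;   // sample this texture later in a pixel shader for post effects
        }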
  • Render Target Format: the render target format describes how many bits are allocated to each pixel and how they are divided among red, green, blue and alpha transparency; for example, in RGBA8 each pixel is allocated 32 bits, with red, green, blue and alpha each occupying 8 bits. It is referred to as Render Format, or rendering format, for short.
  • RenderPass: a rendering pass, usually referring to multi-pass rendering, in which an object is rendered multiple times and the result of each pass contributes to the final rendering result.
  • Graphics API: graphics application programming interface (graphics rendering programming interface); a game engine renders the 3D scene to the screen by calling the graphics API.
  • the most widely used graphics APIs on mobile platforms include: OpenGL ES and Vulkan; OpenGL ES is a traditional graphics API, and Vulkan is a new generation of graphics API.
  • FIG. 3 is a schematic flowchart of a rendering format selection method provided by the embodiment of the present application.
  • the rendering format selection method provided by the embodiment of the present application includes:
  • The processor may acquire the rendering instructions of each frame in the game, where the first rendering instruction set is the rendering instructions of one frame (the first frame). Specifically, the processor may obtain the main-scene drawing instructions of the first frame of the game.
  • Based on the first rendering instruction set not including an instruction for drawing transparent objects and the current rendering format being the first rendering format, the first rendering format change information is transmitted to the GPU, so that the GPU draws the objects of the second frame according to the first rendering format change information, wherein the first rendering format includes a transparency channel, the first rendering format change information is used to indicate that the first rendering format is changed to a second rendering format, the second rendering format does not include a transparency channel, and the second frame is a frame after the first frame.
  • In one possible implementation, the current rendering format is the first rendering format. The first rendering format may be composed of an R channel, a G channel, a B channel, and an A channel; for example, the first rendering format may be RGBA8 or RGBA16F.
  • In a possible implementation, the transparency feature of the rendering instructions in the first rendering instruction set can be detected. If the first rendering instruction set does not contain instructions for drawing transparent objects, it can be considered that transparent objects do not currently need to be rendered, and therefore a rendering format that includes a transparency channel is not needed; conversely, continuing to render with a format that includes a transparency channel would increase the DDR overhead of the rendering task without improving image quality.
  • Specifically, the transparency feature of the rendering commands in the first rendering command set can be determined from command features, for example whether commands such as Disable(GL_BLEND) appear in the main-scene RenderPass commands, which indicates that no transparent object is drawn.
  • A decision is then made from this feature information; for example, if Disable(GL_BLEND) is called in frame N (that is, the first frame), it can be considered that the first rendering instruction set does not contain instructions for drawing transparent objects.
  • In this embodiment of the application, the transparency feature of the rendering instructions of the first frame is used as a reference for selecting the rendering format of the second frame: since adjacent frames are usually similar, if the first rendering instruction set does not contain instructions for drawing transparent objects, it can be considered that the second frame also does not need to render transparent objects.
  • Meanwhile, the current rendering format is the first rendering format (which includes the transparency channel); continuing to render the second frame based on the first rendering format would add unnecessary DDR overhead.
  • the second rendering format may be R11G11B10F.
  • Therefore, the CPU needs to transmit to the GPU the first rendering format change information indicating that the first rendering format is to be changed to the second rendering format, so that the rendering format adopted by the GPU when rendering the second frame is changed from the first rendering format to the second rendering format.
  • the second frame is an adjacent frame after the first frame.
  • For example, the CPU can detect the rendering format and transparency features of frame N (that is, the first frame above). Whether a transparent object is drawn can be decided from command features such as whether Disable/Enable(GL_BLEND), BlendEquation, BlendFunc and other instructions appear in the main-scene RenderPass commands, and a decision is made from this feature information. For example, if frame N calls Disable(GL_BLEND) and the rendering format is RGBA16F, the rendering format is dynamically switched for frame N+1 (that is, the second frame above) to a format without a transparency channel, such as R11G11B10F, and frame N+1 is drawn based on the new format.
  • An embodiment of the present application provides a rendering format selection method, including: acquiring a first rendering instruction set, where the first rendering instruction set is used to draw the objects of a first frame; and, based on the first rendering instruction set not containing an instruction for drawing a transparent object and the current rendering format being a first rendering format, transmitting first rendering format change information to the GPU, so that the GPU draws the objects of a second frame according to the first rendering format change information, wherein the first rendering format includes a transparency channel, the first rendering format change information is used to indicate changing the first rendering format to a second rendering format, the second rendering format does not include a transparency channel, and the second frame is a frame after the first frame.
  • In this way, the transparency feature of the rendering instructions of the first frame is used as a reference for selecting the rendering format of the second frame. If the first rendering instruction set does not contain instructions for drawing transparent objects, it can be considered that the second frame also does not need to render transparent objects. Meanwhile, the current rendering format is the first rendering format (which includes the transparency channel), and continuing to render the second frame based on the first rendering format would add unnecessary DDR overhead. Therefore, the GPU adopts the second rendering format, which does not include a transparency channel, when rendering the second frame, which reduces DDR overhead.
  • FIG. 4 is a schematic flowchart of a rendering format selection method provided by the embodiment of the present application.
  • the rendering format selection method provided by the embodiment of the present application includes:
  • The processor may acquire the rendering instructions of each frame in the game, where the first rendering instruction set is the rendering instructions of one frame (the first frame). Specifically, the processor may obtain the main-scene drawing instructions of the first frame of the game.
  • Based on the first rendering instruction set including instructions for drawing transparent objects and the current rendering format being the second rendering format, the second rendering format change information is transmitted to the GPU, so that the GPU draws the objects of the second frame according to the second rendering format change information, wherein the second rendering format does not include a transparency channel, the second rendering format change information is used to indicate that the second rendering format is changed to the first rendering format, the first rendering format includes a transparency channel, and the second frame is a frame after the first frame.
  • In one possible implementation, the current rendering format is the second rendering format, which does not include a transparency channel. The second rendering format may be composed of an R channel, a G channel, and a B channel; for example, the second rendering format may be R11G11B10F.
  • In a possible implementation, the transparency feature of the rendering instructions in the first rendering instruction set can be detected. If the first rendering instruction set includes instructions for drawing transparent objects, it can be considered that transparent objects currently need to be rendered, and therefore a rendering format that includes a transparency channel is required; conversely, rendering with a format that does not include a transparency channel would greatly reduce the rendering quality.
  • Specifically, the transparency feature of the rendering commands in the first rendering command set can be determined from command features, for example whether commands such as Enable(GL_BLEND), BlendEquation, and BlendFunc appear in the main-scene RenderPass commands, which indicates that a transparent object is drawn.
  • A decision is then made from this feature information; for example, if frame N (that is, the first frame) calls Enable(GL_BLEND), it can be considered that the first rendering instruction set contains instructions for drawing transparent objects.
  • In this embodiment of the application, the transparency feature of the rendering instructions of the first frame is used as a reference for selecting the rendering format of the second frame: if the first rendering instruction set contains instructions for drawing transparent objects, it can be considered that the second frame also needs to render transparent objects.
  • Meanwhile, the current rendering format is the second rendering format (which does not include the transparency channel); continuing to render the second frame based on the second rendering format would greatly reduce the rendering quality.
  • the first rendering format may be composed of an R channel, a G channel, a B channel, and an A channel.
  • the first rendering format can be RGBA8 or RGBA16F.
  • Therefore, the CPU needs to transmit to the GPU the second rendering format change information indicating that the second rendering format is to be changed to the first rendering format, so that the rendering format adopted by the GPU when rendering the second frame is changed from the second rendering format to the first rendering format.
  • the second frame is an adjacent frame after the first frame.
  • For example, the CPU can detect the rendering format and transparency features of frame N (that is, the first frame above). Whether a transparent object is drawn can be decided from command features such as whether Disable/Enable(GL_BLEND), BlendEquation, BlendFunc and other instructions appear in the main-scene RenderPass commands, and a decision is made from this feature information. For example, if frame N calls Enable(GL_BLEND) and the rendering format is R11G11B10, a decision to dynamically switch the rendering format is made for frame N+1 (that is, the second frame above): R11G11B10 is switched to the RGBA16F format, and frame N+1 is drawn based on the newly substituted RGBA16F rendering format.
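  • One way the corresponding instruction reconstruction could look in practice is sketched below; this is speculative, as the application does not prescribe which allocation call is rewritten or how:

        #include <GLES3/gl3.h>

        // When the decision for frame N+1 is to upgrade the format, the intercepted allocation
        // call for the main-scene color attachment is re-issued with the substituted format.
        GLenum rewriteInternalFormat(GLenum requested, bool upgradeForTransparency) {
            if (upgradeForTransparency && requested == GL_R11F_G11F_B10F) {
                return GL_RGBA16F;   // frame N+1 then draws with the replaced RGBA16F format
            }
            return requested;
        }

        void allocateMainSceneTarget(GLsizei w, GLsizei h, GLenum requestedFormat, bool upgrade) {
            GLuint rbo = 0;
            glGenRenderbuffers(1, &rbo);
            glBindRenderbuffer(GL_RENDERBUFFER, rbo);
            glRenderbufferStorage(GL_RENDERBUFFER, rewriteInternalFormat(requestedFormat, upgrade), w, h);
        }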
  • An embodiment of the present application provides a method for selecting a rendering format, the method is applied to a terminal device, the terminal device includes a graphics processor GPU, and the method includes: acquiring a first rendering instruction set, the first rendering instruction set For drawing the object of the first frame; based on the first rendering instruction set including instructions for drawing transparent objects, and the current rendering format is the second rendering format, passing the second rendering format change information to the GPU, so that the GPU draws the object of the second frame according to the second rendering format change information, wherein the second rendering format does not include a transparency channel, and the second rendering format change information is used to indicate that the second rendering The format is changed to a first rendering format, the first rendering format includes a transparency channel, and the second frame is a frame after the first frame.
  • the transparency feature of the rendering instruction of the first frame is used as a reference for selecting the rendering format of the second frame.
  • When the first rendering instruction set contains instructions for drawing transparent objects, it can be considered that the second frame also needs to perform transparent-object rendering.
  • However, the current rendering format is the second rendering format (which does not include a transparency channel); if the second frame continues to be rendered based on the second rendering format, the rendering quality will be greatly reduced. Therefore, when rendering the second frame the present application adopts the first rendering format, which includes a transparency channel, and this can improve the rendering quality.
  • It should be understood that rendering formats that include a transparency channel differ in pixel value representation precision: RGBA8 allocates 32 bits to each pixel (red, green, blue, and transparency each occupy 8 bits), whereas RGBA16F allocates 64 bits to each pixel (each of the four channels occupies 16 bits); that is, the pixel value representation precision supported by RGBA16F is greater than that supported by RGBA8.
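  • As a back-of-the-envelope illustration of that difference (simple arithmetic for a 1920*1080 target, assumed here only for illustration and not taken from the patent), the per-frame color-buffer footprint doubles when moving from RGBA8 to RGBA16F:

    #include <cstdio>

    int main() {
        const long long pixels = 1920LL * 1080LL;  // one full-screen render target
        printf("RGBA8  : %lld bytes per frame (32 bits/pixel, 8-bit channels)\n",
               pixels * 4);
        printf("RGBA16F: %lld bytes per frame (64 bits/pixel, 16-bit channels)\n",
               pixels * 8);
        return 0;
    }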
  • In one possible implementation, the pixel value representation precision required by the rendered image quality may be higher than the precision supported by the current rendering format; in this case, the rendering format needs to be changed to one that supports a higher pixel value representation precision so as to improve the rendering quality.
  • The drawing result of the second frame can be obtained. The drawing result may include the value of each channel of each pixel on the rendering interface; alternatively, the rendering interface may be divided into multiple tile regions (see FIG. 5 for details), and the drawing result may include the values of each channel of one or more pixels (such as the center point) in each tile on the rendering interface.
  • the CPU can determine whether the current rendering format meets the pixel value representation precision required by the second frame based on the drawing result.
  • Specifically, it can be checked whether the drawing result contains pixels whose value equals the upper limit representable by the format; if one or more such pixels exist, the drawing result can be considered to indicate that the second frame contains pixels whose required pixel value representation precision exceeds the upper limit of the pixel value representation precision supported by the first rendering format.
  • In that case, third rendering format change information is transmitted to the GPU, so that the GPU draws the object of the third frame according to the third rendering format change information, wherein the third rendering format change information is used to indicate that the first rendering format is changed to a third rendering format, the upper limit of pixel value representation precision supported by the third rendering format is greater than that supported by the first rendering format, and the third frame is a frame after the second frame.
  • the second rendering format is RGBA8, and the third rendering format is RGBA16F.
  • For example, to switch from RGBA8 to RGBA16F, frame N maps the Render Buffer (for example a 1920*1080 Render Buffer), divides it into 253 tiles of 128*64, reads the center value of each tile from GPU memory, and judges whether that value equals 255; if multiple values equal 255, a highlight area is considered to exist, the RGBA8 rendering format cannot meet the image-quality requirements, and RGBA8 needs to be switched to RGBA16F (a combined sketch of both precision checks follows the RGBA16F-to-RGBA8 example below).
  • Conversely, the pixel value representation precision required by the rendered image quality may be lower than the precision supported by the current rendering format; in this case, the rendering format needs to be changed to one that supports a lower pixel value representation precision so as to reduce DDR overhead.
  • In one possible implementation, the drawing result of the second frame is acquired; based on the drawing result indicating that the second frame contains no pixel whose required pixel value representation precision exceeds the upper limit supported by the first rendering format, fourth rendering format change information is transmitted to the GPU, so that the GPU draws the object of the third frame according to the fourth rendering format change information, wherein the fourth rendering format change information is used to indicate that the first rendering format is changed to a fourth rendering format, the upper limit of pixel value representation precision supported by the fourth rendering format is smaller than that supported by the first rendering format, and the third frame is a frame after the second frame.
  • In one possible implementation, the third rendering format is RGBA16F and the second rendering format is RGBA8.
  • For example, to switch from RGBA16F to RGBA8, frame N maps the Render Buffer (for example a 1920*1080 Render Buffer), divides it into 253 tiles of 128*64, and reads the center value of each tile, as shown in Figure 5; it is judged whether all of the values are less than 1.0 (the normalized result), and if so, RGBA16F is reduced to RGBA8.
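  • The two read-back checks above can be sketched together as follows. This is a minimal sketch under several assumptions: the main-scene framebuffer is bound for reading, glReadPixels with GL_RGBA/GL_FLOAT is supported for that buffer (otherwise GL_IMPLEMENTATION_COLOR_READ_FORMAT/TYPE would have to be queried), the two thresholds (a byte value of 255 for RGBA8 and 1.0 for RGBA16F) are folded into one normalized comparison, and the helper names are illustrative rather than taken from the patent; glReadPixels stands in for the render-buffer mapping described in the text.

    #include <GLES3/gl3.h>

    enum class PrecisionDecision { RaiseToRGBA16F, LowerToRGBA8, Keep };

    // Sample the center of each 128*64 tile of the bound read framebuffer and
    // decide whether the next frame needs more or less pixel-value precision.
    PrecisionDecision inspectTileCenters(GLsizei width, GLsizei height) {
        const GLsizei tileW = 128, tileH = 64;
        int saturatedTiles = 0;   // centers at the representable upper limit
        bool allBelowOne = true;  // every normalized center value < 1.0
        for (GLsizei ty = 0; ty + tileH <= height; ty += tileH) {
            for (GLsizei tx = 0; tx + tileW <= width; tx += tileW) {
                GLfloat rgba[4] = {0.0f, 0.0f, 0.0f, 0.0f};
                glReadPixels(tx + tileW / 2, ty + tileH / 2, 1, 1,
                             GL_RGBA, GL_FLOAT, rgba);
                GLfloat maxChannel = rgba[0];
                if (rgba[1] > maxChannel) maxChannel = rgba[1];
                if (rgba[2] > maxChannel) maxChannel = rgba[2];
                if (maxChannel >= 1.0f) { ++saturatedTiles; allBelowOne = false; }
            }
        }
        if (saturatedTiles > 1) return PrecisionDecision::RaiseToRGBA16F; // highlight area
        if (allBelowOne)        return PrecisionDecision::LowerToRGBA8;   // headroom unused
        return PrecisionDecision::Keep;
    }

  • A production implementation would avoid issuing one glReadPixels call per tile (for example by mapping or reading the buffer once, as the mapping-based read in the text suggests), since a read-back stall per tile would undercut the performance gain being pursued.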
  • An application flow of the embodiment of the present application may refer to FIG. 6 .
  • Based on performance test data for a top game, the average frame rate with the solution of this embodiment of the application increases from 40.65 fps to 53.16 fps, a gain of 12.51 fps; power consumption drops from 1636.7 mA to 1591.86 mA, a reduction of 44.54 mA; the number of freezes per hour drops from 206 to 179, with the worst-case frame loss unchanged; and single-frame power drops from 40.26 mA to 29.94 mA, a per-frame power optimization of 10.32 mA.
  • FIG. 7 provides a schematic structural diagram of an apparatus for selecting a rendering format according to the present application. As shown in FIG. 7, the apparatus 700 may be applied to a terminal device that includes a graphics processing unit GPU, and the apparatus 700 includes:
  • An instruction acquisition module 701, configured to acquire a first rendering instruction set, where the first rendering instruction set is used to draw the object of the first frame;
  • An instruction format conversion module 702, configured to: when the first rendering instruction set does not include an instruction for drawing a transparent object and the current rendering format is the first rendering format, transmit first rendering format change information to the GPU, so that the GPU draws the object of the second frame according to the first rendering format change information, wherein the first rendering format includes a transparency channel, the first rendering format change information is used to indicate that the first rendering format is changed to a second rendering format, the second rendering format does not include a transparency channel, and the second frame is a frame after the first frame (a GL-side sketch of applying such a format change follows this list).
  • For the description of the instruction format conversion module 702, reference may be made to the description of step 302 in the foregoing embodiment; details are not repeated here.
  • the first rendering format is composed of R channel, G channel, B channel and A channel
  • the second rendering format is composed of R channel, G channel and B channel.
  • the first rendering format is RGBA8 or RGBA16F
  • the second rendering format is R11G11B10F.
  • the second frame is an adjacent frame after the first frame.
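  • Once a format-change decision such as the one issued by the instruction format conversion module 702 reaches the GL side, the color attachment of the main-scene pass has to be re-created with the newly selected internal format before the next frame is drawn. The sketch below shows one plain OpenGL ES 3.0 way of doing so; it assumes the chosen formats are color-renderable on the target GPU (the float formats additionally need EXT_color_buffer_float), and the function and variable names are illustrative, not taken from the patent.

    #include <GLES3/gl3.h>

    // Re-create the main-scene color attachment with the newly selected format,
    // e.g. GL_R11F_G11F_B10F, GL_RGBA8 or GL_RGBA16F, before drawing frame N+1.
    void recreateColorAttachment(GLuint fbo, GLuint* rbo,
                                 GLenum internalFormat, GLsizei w, GLsizei h) {
        glDeleteRenderbuffers(1, rbo);
        glGenRenderbuffers(1, rbo);
        glBindRenderbuffer(GL_RENDERBUFFER, *rbo);
        glRenderbufferStorage(GL_RENDERBUFFER, internalFormat, w, h);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, *rbo);
    }

    // Example: drop the alpha channel for an opaque-only next frame.
    // recreateColorAttachment(mainSceneFbo, &mainSceneRbo, GL_R11F_G11F_B10F, 1920, 1080);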
  • FIG. 8 provides a schematic structural diagram of an apparatus for selecting a rendering format according to the present application. As shown in FIG. 8, the apparatus 800 may be applied to a terminal device that includes a graphics processing unit GPU, and the apparatus 800 includes:
  • An instruction acquisition module 801, configured to acquire a first rendering instruction set, where the first rendering instruction set is used to draw the object of the first frame;
  • An instruction format conversion module 802, configured to: when the first rendering instruction set includes an instruction for drawing a transparent object and the current rendering format is the second rendering format, transmit second rendering format change information to the GPU, so that the GPU draws the object of the second frame according to the second rendering format change information, wherein the second rendering format does not include a transparency channel, the second rendering format change information is used to indicate that the second rendering format is changed to the first rendering format, the first rendering format includes a transparency channel, and the second frame is a frame after the first frame.
  • the first rendering format is composed of R channel, G channel, B channel and A channel
  • the second rendering format is composed of R channel, G channel and B channel.
  • the first rendering format is RGBA8 or RGBA16F
  • the second rendering format is R11G11B10F.
  • the instruction acquiring module is further configured to acquire the rendering result of the second frame
  • The instruction format conversion module is further configured to: when the drawing result indicates that the second frame contains pixels whose required pixel value representation precision exceeds the upper limit supported by the first rendering format, transmit third rendering format change information to the GPU, so that the GPU draws the object of the third frame according to the third rendering format change information, wherein the third rendering format change information is used to indicate that the first rendering format is changed to a third rendering format, the upper limit of pixel value representation precision supported by the third rendering format is greater than that supported by the first rendering format, and the third frame is a frame after the second frame.
  • the second rendering format is RGBA8, and the third rendering format is RGBA16F.
  • the instruction acquiring module is further configured to acquire the rendering result of the second frame
  • The instruction format conversion module is further configured to: when the drawing result indicates that the second frame contains no pixel whose required pixel value representation precision exceeds the upper limit supported by the first rendering format, transmit fourth rendering format change information to the GPU, so that the GPU draws the object of the third frame according to the fourth rendering format change information, wherein the fourth rendering format change information is used to indicate that the first rendering format is changed to a fourth rendering format, the upper limit of pixel value representation precision supported by the fourth rendering format is smaller than that supported by the first rendering format, and the third frame is a frame after the second frame.
  • the third rendering format is RGBA16F
  • the second rendering format is RGBA8.
  • FIG. 9 is a schematic structural diagram of a terminal device 900 provided by the present application.
  • The terminal device includes a processor 901 and a memory 903; the processor 901 includes a central processing unit CPU and a graphics processing unit GPU, and the CPU is configured to fetch code from the memory to execute:
  • acquiring a first rendering instruction set, the first rendering instruction set being used to draw the object of the first frame; and, based on the first rendering instruction set not including an instruction for drawing a transparent object and the current rendering format being the first rendering format, transmitting first rendering format change information to the GPU, so that the GPU draws the object of the second frame according to the first rendering format change information, wherein the first rendering format includes a transparency channel, the first rendering format change information is used to indicate that the first rendering format is changed to a second rendering format, the second rendering format does not include a transparency channel, and the second frame is a frame after the first frame.
  • the first rendering format is composed of R channel, G channel, B channel and A channel
  • the second rendering format is composed of R channel, G channel and B channel.
  • the first rendering format is RGBA8 or RGBA16F
  • the second rendering format is R11G11B10F.
  • the second frame is an adjacent frame after the first frame.
  • The CPU is further configured to fetch code from the memory to execute:
  • acquiring a first rendering instruction set, the first rendering instruction set being used to draw the object of the first frame; and, based on the first rendering instruction set including an instruction for drawing a transparent object and the current rendering format being the second rendering format, transmitting second rendering format change information to the GPU, so that the GPU draws the object of the second frame according to the second rendering format change information, wherein the second rendering format does not include a transparency channel, the second rendering format change information is used to indicate that the second rendering format is changed to the first rendering format, the first rendering format includes a transparency channel, and the second frame is a frame after the first frame.
  • the first rendering format is composed of R channel, G channel, B channel and A channel
  • the second rendering format is composed of R channel, G channel and B channel.
  • the first rendering format is RGBA8 or RGBA16F
  • the second rendering format is R11G11B10F.
  • the method also includes:
  • acquiring the drawing result of the second frame; and, based on the drawing result indicating that the second frame contains pixels whose required pixel value representation precision exceeds the upper limit supported by the first rendering format, transmitting third rendering format change information to the GPU, so that the GPU draws the object of the third frame according to the third rendering format change information, wherein the third rendering format change information is used to indicate that the first rendering format is changed to a third rendering format, the upper limit of pixel value representation precision supported by the third rendering format is greater than that supported by the first rendering format, and the third frame is a frame after the second frame.
  • the second rendering format is RGBA8, and the third rendering format is RGBA16F.
  • the method also includes:
  • acquiring the drawing result of the second frame; and, based on the drawing result indicating that the second frame contains no pixel whose required pixel value representation precision exceeds the upper limit supported by the first rendering format, transmitting fourth rendering format change information to the GPU, so that the GPU draws the object of the third frame according to the fourth rendering format change information, wherein the fourth rendering format change information is used to indicate that the first rendering format is changed to a fourth rendering format, the upper limit of pixel value representation precision supported by the fourth rendering format is smaller than that supported by the first rendering format, and the third frame is a frame after the second frame.
  • the third rendering format is RGBA16F
  • the second rendering format is RGBA8.
  • the disclosed system, device and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of units is only a logical function division. In actual implementation, there may be other division methods.
  • Multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or another network device) to execute all or part of the steps of the method described in the embodiment of FIG. 3 of the present application.
  • The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

A rendering format selection method, comprising: acquiring a first rendering instruction set; and, based on the first rendering instruction set not including an instruction for drawing a transparent object and the current rendering format being a first rendering format, passing first rendering format change information to the GPU, so that the GPU draws the object of a second frame according to the first rendering format change information, wherein the first rendering format includes a transparency channel, the first rendering format change information is used to indicate that the first rendering format is changed to a second rendering format, and the second rendering format does not include a transparency channel. With this method, the GPU adopts the second rendering format, which does not include a transparency channel, when rendering the second frame, thereby reducing DDR overhead.

Description

一种渲染格式选择方法及其相关设备
本申请要求于2021年9月29日提交中国专利局、申请号为202111155620.9、发明名称为“一种渲染格式选择方法及其相关设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机领域,尤其涉及一种渲染格式选择方法及其相关设备。
背景技术
游戏性能和功耗的瓶颈点有中央处理器(central processing unit,CPU)、图形处理器(graphics processing unit,GPU)、双倍速率同步动态随机存储器(double data rate synchronous dynamic random access memory,DDR)共三方面。目前游戏画质越来越精致,而精致画质渲染需要较多的渲染通道(Render Pass),如渲染各种特效等,同时游戏渲染分辨率也越来越大,两方面都会导致DDR带宽需求越来越大,很容易造成DDR是性能瓶颈点。
渲染目标格式(render target format,可以简称为Render Format,或者渲染格式)用于描述每个像素分配多少位,以及它们如何在红色通道(R通道)、绿色通道(G通道)、蓝色通道(B通道)和alpha透明度通道(A通道)之间进行划分,如RGBA8,每个像素分配32位,红绿蓝和alpha各占8位。
目前针对渲染格式的选择,一般有两种选择技术,一是游戏引擎是在游戏开发时,选择一种适合本款游戏的渲染格式,二是一些应用程序(application,APP)针对处理的图形质量选择匹配的渲染格式。然而,无论是在游戏开发时选择,还是在游戏运行前选择,在游戏运行时都会固定按照一种渲染格式,导致了游戏的画质或性能较差。
发明内容
第一方面,本申请提供了一种渲染格式选择方法,所述方法应用于终端设备,所述终端设备包括图形处理器GPU,所述方法包括:
获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;
基于所述第一渲染指令集合不包含用于绘制透明物体的指令,且当前的渲染格式为第一渲染格式,将第一渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第一渲染格式变更信息绘制第二帧的对象,其中,所述第一渲染格式包括透明度通道,所述第一渲染格式变更信息用于指示将所述第一渲染格式变更为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二帧为所述第一帧之后的帧。
在一种可能的实现中,当前的渲染格式为第一渲染格式,所述第一渲染格式可以由R通道、G通道、B通道以及A通道组成,例如所述第一渲染格式可以为RGBA8或RGBA16F。
在一种可能的实现中,可以检测第一渲染指令集合中渲染指令的透明特征,若第一渲染指令集合中不包含用于绘制透明物体的指令,则可以认为当前不需要进行透明物体的渲 染,进而也不需要包含透明度通道的渲染格式,反之,如果用包含透明度通道的渲染格式进行渲染,在没有增加画质的前提下,会增加执行渲染任务时的DDR开销。
例如,第一渲染指令集合中渲染指令的透明特征可以根据如下指令特征确定,比如主场景RenderPass指令中是否存在Disable(GL_BLEND)等指令,决定是否绘制透明物体。根据特征信息,做出决策结果,比如Frame N帧(即第一帧)调用Disable(GL_BLEND),则可以认为第一渲染指令集合中不包含用于绘制透明物体的指令。
本申请实施例中,将第一帧的渲染指令的透明特征作为进行第二帧的渲染格式选择的参考,在第一渲染指令集合中不包含用于绘制透明物体的指令时,可以认为第二帧也不需要进行透明物体的渲染。然而,当前渲染格式为第一渲染格式(包括透明度通道),若继续基于第一渲染格式进行第二帧的渲染,会增加不必要的DDR开销。
因此,在进行第二帧的渲染时需要采用不包括透明度通道的第二渲染格式,例如,所述第二渲染格式可以为R11G11B10F。
在一种可能的实现中,CPU需要可以将指示将所述第一渲染格式变更为第二渲染格式的第一渲染格式变更信息传递至GPU,以便GPU在进行第二帧的渲染时采用的渲染格式由第一渲染格式变更为第二渲染格式。
在一种可能的实现中,所述第二帧为所述第一帧之后相邻的帧。
示例性的,CPU可以检测Frame N帧(即上述第一帧)的渲染格式和透明特征,是否透明可以根据如下指令特征,比如主场景RenderPass指令中是否存在Disable/Enable(GL_BLEND)、BlendEquation、BlendFunc等指令,决定是否绘制透明物体。根据特征信息,做出决策结果,比如Frame N帧调用Disable(GL_BLEND),同时渲染格式是RGBA16F,则给出Frame N+1帧(即上述第二帧)动态切换渲染格式决策,将RGBA16F切换为R11G11B10格式,Frame N+1基于新替换的R11G11B10渲染格式进行绘制。
本申请实施例提供了一种渲染格式选择方法,包括:获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;基于所述第一渲染指令集合不包含用于绘制透明物体的指令,且当前的渲染格式为第一渲染格式,将第一渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第一渲染格式变更信息绘制第二帧的对象,其中,所述第一渲染格式包括透明度通道,所述第一渲染格式变更信息用于指示将所述第一渲染格式变更为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二帧为所述第一帧之后的帧。将第一帧的渲染指令的透明特征作为进行第二帧的渲染格式选择的参考,在第一渲染指令集合中不包含用于绘制透明物体的指令时,可以认为第二帧也不需要进行透明物体的渲染。然而,当前渲染格式为第一渲染格式(包括透明度通道),若继续基于第一渲染格式进行第二帧的渲染,会增加不必要的DDR开销。因此,在进行第二帧的渲染时GPU采用不包括透明度通道的第二渲染格式,降低了DDR开销。
第二方面,本申请提供了一种渲染格式选择方法,所述方法应用于终端设备,所述终端设备包括图形处理器GPU,所述方法包括:
获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;
基于所述第一渲染指令集合包含用于绘制透明物体的指令,且当前的渲染格式为第二渲染格式,将第二渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第二渲染格式变更信息绘制第二帧的对象,其中,所述第二渲染格式不包括透明度通道,所述第二渲染格式变更信息用于指示将所述第二渲染格式变更为第一渲染格式,所述第一渲染格式包括透明度通道,所述第二帧为所述第一帧之后的帧。
在一种可能的实现中,当前的渲染格式为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二渲染格式可以由R通道、G通道以及B通道组成,例如所述第二渲染格式可以为R11G11B10F。
在一种可能的实现中,可以检测第一渲染指令集合中渲染指令的透明特征,若第一渲染指令集合中包含用于绘制透明物体的指令,则可以认为当前需要进行透明物体的渲染,进而也需要包含透明度通道的渲染格式,反之,如果用不包含透明度通道的渲染格式进行渲染,会大大降低渲染的画质。
例如,第一渲染指令集合中渲染指令的透明特征可以根据如下指令特征确定,比如主场景RenderPass指令中是否存在Enable(GL_BLEND)、BlendEquation、BlendFunc等指令,决定是否绘制透明物体。根据特征信息,做出决策结果,比如Frame N帧(即第一帧)调用Enable(GL_BLEND),则可以认为第一渲染指令集合中包含用于绘制透明物体的指令。
本申请实施例中,将第一帧的渲染指令的透明特征作为进行第二帧的渲染格式选择的参考,在第一渲染指令集合中包含用于绘制透明物体的指令时,可以认为第二帧也需要进行透明物体的渲染。然而,当前渲染格式为第二渲染格式(不包括透明度通道),若继续基于第二渲染格式进行第二帧的渲染,会大大降低渲染的画质。
因此,在进行第二帧的渲染时需要采用包括透明度通道的第一渲染格式,所述第一渲染格式可以由R通道、G通道、B通道以及A通道组成,例如,所述第一渲染格式可以为RGBA8或RGBA16F。
在一种可能的实现中,CPU需要可以将指示将所述第二渲染格式变更为第一渲染格式的第二渲染格式变更信息传递至GPU,以便GPU在进行第二帧的渲染时采用的渲染格式由第二渲染格式变更为第一渲染格式。
在一种可能的实现中,所述第二帧为所述第一帧之后相邻的帧。
示例性的,CPU可以检测Frame N帧(即上述第一帧)的渲染格式和透明特征,是否透明可以根据如下指令特征,比如主场景RenderPass指令中是否存在Disable/Enable(GL_BLEND)、BlendEquation、BlendFunc等指令,决定是否绘制透明物体。根据特征信息,做出决策结果,比如Frame N帧调用Enable(GL_BLEND),同时渲染格式是R11G11B10,则给出Frame N+1帧(即上述第二帧)动态切换渲染格式决策,将R11G11B10切换为RGBA16F格式,Frame N+1基于新替换的RGBA16F渲染格式进行绘制。
本申请实施例提供了一种渲染格式选择方法,所述方法应用于终端设备,所述终端设备包括图形处理器GPU,所述方法包括:获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;基于所述第一渲染指令集合包含用于绘制透明物体的指令,且当前的渲染格式为第二渲染格式,将第二渲染格式变更信息传递至所述GPU,以便所述GPU 根据所述第二渲染格式变更信息绘制第二帧的对象,其中,所述第二渲染格式不包括透明度通道,所述第二渲染格式变更信息用于指示将所述第二渲染格式变更为第一渲染格式,所述第一渲染格式包括透明度通道,所述第二帧为所述第一帧之后的帧。将第一帧的渲染指令的透明特征作为进行第二帧的渲染格式选择的参考,在第一渲染指令集合中包含用于绘制透明物体的指令时,可以认为第二帧也需要进行透明物体的渲染。然而,当前渲染格式为第二渲染格式(不包括透明度通道),若继续基于第二渲染格式进行第二帧的渲染,会大大降低渲染的画质。因此,本申请在进行第二帧的渲染时需要采用包括透明度通道的第一渲染格式,可以提高渲染的画质。
应理解,包括透明度通道的不同渲染格式之间存在着像素值表示精度的差异,例如,RGBA8中为每个像素分配32位,红绿蓝和透明度各占8位,RGBA16F中为每个像素分配64位,红绿蓝和透明度各占16位,也就是RGBA16F支持的像素值表示精度大于RGBA8支持的像素值表示精度。
在一种可能的实现中,可能存在渲染的画质所需的像素值表示精度高于当前渲染格式支持的像素值表示精度的情况,在这种情况下,需要将渲染格式更改为支持更高像素值表示精度的渲染格式,以提高渲染的画质。
在一种可能的实现中,可以获取所述第二帧的绘制结果,绘制结果可以包括渲染界面上各个像素点的各个通道的值,或者可以将渲染界面划分为多个瓦片区域(tile区域,具体可以参照图5所示),绘制结果可以包括渲染界面上各个tile中一个或多个像素点(例如中心点)的各个通道的值。CPU可以基于绘制结果来确定当前的渲染格式是否满足第二帧所需的像素值表示精度。具体的,可以判断绘制结果中是否存在像素值等于像素值表示精度上限值的像素点,如果存在(或者存在多个),则可以认为所述绘制结果指示所述第二帧中存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点。
在一种可能的实现中,若所述绘制结果指示所述第二帧中存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,则可以将第三渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第三渲染格式变更信息绘制第三帧的对象,其中,所述第三渲染格式变更信息用于指示将所述第一渲染格式变更为第三渲染格式,所述第三渲染格式支持的像素值表示精度上限大于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
在一种可能的实现中,所述第二渲染格式为RGBA8,所述第三渲染格式为RGBA16F。
示例性的,可以由RGBA8切换到RGBA16F,Frame N映射Render Buffer,比如1920*1080的Render Buffer,划分253个128*64个tile,从GPU内存读取每个tile中心值,判断值是否等于255,如果存在多个等于255,则认为存在高光区域,RGBA8渲染格式无法满足画质渲染要求,需要将RGBA8切换为RGBA16F。
在一种可能的实现中,可能存在渲染的画质所需的像素值表示精度低于当前渲染格式支持的像素值表示精度的情况,在这种情况下,需要将渲染格式更改为支持更低像素值表示精度的渲染格式,以降低DDR开销。
在一种可能的实现中,获取所述第二帧的绘制结果,基于所述绘制结果指示所述第二帧中不存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,将第四渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第四渲染格式变更信息绘制第三帧的对象,其中,所述第四渲染格式变更信息用于指示将所述第一渲染格式变更为第四渲染格式,所述第四渲染格式支持的像素值表示精度上限小于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
在一种可能的实现中,所述第三渲染格式为RGBA16F,所述第二渲染格式为RGBA8。
第三方面,本申请提供了一种渲染格式选择装置,所述装置应用于终端设备,所述终端设备包括图形处理器GPU,所述装置包括:
指令获取模块,用于获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;
指令格式转换模块,用于基于所述第一渲染指令集合不包含用于绘制透明物体的指令,且当前的渲染格式为第一渲染格式,将第一渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第一渲染格式变更信息绘制第二帧的对象,其中,所述第一渲染格式包括透明度通道,所述第一渲染格式变更信息用于指示将所述第一渲染格式变更为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二帧为所述第一帧之后的帧。
在一种可能的实现中,所述第一渲染格式由R通道、G通道、B通道以及A通道组成,所述第二渲染格式由R通道、G通道以及B通道组成。
在一种可能的实现中,所述第一渲染格式为RGBA8或RGBA16F,所述第二渲染格式为R11G11B10F。
在一种可能的实现中,所述第二帧为所述第一帧之后相邻的帧。
第四方面,本申请提供了一种渲染格式选择装置,所述装置应用于终端设备,所述终端设备包括图形处理器GPU,所述装置包括:
指令获取模块,用于获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;
指令格式转换模块,用于基于所述第一渲染指令集合包含用于绘制透明物体的指令,且当前的渲染格式为第二渲染格式,将第二渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第二渲染格式变更信息绘制第二帧的对象,其中,所述第二渲染格式不包括透明度通道,所述第二渲染格式变更信息用于指示将所述第二渲染格式变更为第一渲染格式,所述第一渲染格式包括透明度通道,所述第二帧为所述第一帧之后的帧。
在一种可能的实现中,所述第一渲染格式由R通道、G通道、B通道以及A通道组成, 所述第二渲染格式由R通道、G通道以及B通道组成。
在一种可能的实现中,所述第一渲染格式为RGBA8或RGBA16F,所述第二渲染格式为R11G11B10F。
在一种可能的实现中,所述指令获取模块,还用于获取所述第二帧的绘制结果;
所述指令格式转换模块,还用于基于所述绘制结果指示所述第二帧中存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,将第三渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第三渲染格式变更信息绘制第三帧的对象,其中,所述第三渲染格式变更信息用于指示将所述第一渲染格式变更为第三渲染格式,所述第三渲染格式支持的像素值表示精度上限大于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
在一种可能的实现中,所述第二渲染格式为RGBA8,所述第三渲染格式为RGBA16F。
在一种可能的实现中,所述指令获取模块,还用于获取所述第二帧的绘制结果;
所述指令格式转换模块,还用于基于所述绘制结果指示所述第二帧中不存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,将第四渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第四渲染格式变更信息绘制第三帧的对象,其中,所述第四渲染格式变更信息用于指示将所述第一渲染格式变更为第四渲染格式,所述第四渲染格式支持的像素值表示精度上限小于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
在一种可能的实现中,所述第三渲染格式为RGBA16F,所述第二渲染格式为RGBA8。
第五方面,本申请提供了一种终端设备,所述终端设备包括处理器和存储器,所述处理器获取存储器中存储的代码,以执行第一方面以及其可选的实现方式中的任意一种,以及第二方面以及其可选的实现方式中的任意一种。
第六方面,本申请提供了一种非易失性计算机可读存储介质,所述非易失性计算机可读存储介质包含计算机指令用于执行上述第一方面及其可选的实现中任一所述的方法,以及第二方面以及其可选的实现方式中的任意一种。
第七方面,本申请还提供了一种计算机程序产品,包含计算机指令,主机设备的处理器执行该计算机指令,用于执行本实施例可能实现方式中任一种实现方式中处理器执行的操作。
本申请实施例提供了一种渲染格式选择方法,包括:获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;基于所述第一渲染指令集合不包含用于绘制透明物体的指令,且当前的渲染格式为第一渲染格式,将第一渲染格式变更信息传递至所述GPU, 以便所述GPU根据所述第一渲染格式变更信息绘制第二帧的对象,其中,所述第一渲染格式包括透明度通道,所述第一渲染格式变更信息用于指示将所述第一渲染格式变更为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二帧为所述第一帧之后的帧。将第一帧的渲染指令的透明特征作为进行第二帧的渲染格式选择的参考,在第一渲染指令集合中不包含用于绘制透明物体的指令时,可以认为第二帧也不需要进行透明物体的渲染。然而,当前渲染格式为第一渲染格式(包括透明度通道),若继续基于第一渲染格式进行第二帧的渲染,会增加不必要的DDR开销。因此,在进行第二帧的渲染时GPU采用不包括透明度通道的第二渲染格式,降低了DDR开销。
附图说明
图1a为一种系统的架构示意;
图1b为一种系统的架构示意;
图1c为一种系统的架构示意;
图2为本申请实施例中描述的技术的计算装置的框图;
图3为本申请实施例提供的一种渲染格式选择方法的流程示意;
图4为本申请实施例提供的一种渲染格式选择方法的流程示意;
图5为本申请实施例提供的一种渲染结果的示意;
图6为本申请实施例提供的一种渲染格式选择方法的流程示意;
图7为本申请实施例提供的一种渲染格式选择装置的示意;
图8为本申请实施例提供的一种渲染格式选择装置的示意;
图9为本申请提供的一种终端设备的结构示意。
具体实施方式
下面结合附图,对本申请的实施例进行描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。本领域普通技术人员可知,随着技术的发展和新场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或模块的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或模块,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或模块。在本申请中出现的对步骤进行的命名或者编号,并不意味着必须按照命名或者编号所指示的时间/逻辑先后顺序执行方法流程中的步骤,已经命名或者编号的流程步骤可以根据要实现的技术目的变更执行次序,只要能达到相同或者相类似的技术效果即可。
如图1a所示,客户端程序需要调用API接口实现3D渲染,渲染命令和数据会缓存在随机存取存储器(random access memory,RAM)中,在一定条件下,会将这些命令和数据 通过CPU时钟发送到影像随机接达记忆器(video random access memory,VRAM),在GPU的控制下,使用VRAM中的数据和命令,完成图形的渲染,并将结果存入帧缓冲区中,帧缓冲区中的帧最终会被发送到显示器上,显示出结果。在现代的图形硬件系统中,还支持不通过CPU时钟直接将数据由RAM发送至VRAM或直接将数据由帧缓冲区发送至RAM(例如OpenGL中的VBO,PBO)。
本申请可以应用于游戏应用的画面渲染过程,参照图1b,图1b为本申请实施例的软件和硬件模块以及其在技术栈中的位置的示意。该架构图是一个典型的智能设备游戏渲染的场景。主要包括:特征识别、决策模块、渲染指令重构模块。
本申请实施例的程序代码可以存在于Android平台软件的Framework层,位于Framework和DDK之间。
本申请首先会基于多个维度进行特征识别,如:Render Pass是否渲染透明物体、实时渲染出的画质特征是否和Render Format匹配等;游戏/APP的渲染指令流被截获之后,由特征识别模块进行特征分析,分析符合特征后进行决策,根据决策结果进行指令重组,图1c为本申请的实现形态的一个示意。
接下来描述本申请实施例的一种系统架构示意。
图2是说明可以实施本发明中描述的技术的计算装置30的框图。计算装置30可以为本申请实施例中的渲染指令处理装置,计算装置30的实例包含但不限于无线装置、移动或蜂窝电话(包含所谓的智能手机)、个人数字助理(personal digital assistant,PDA)、包含视频显示器的视频游戏控制台、移动视频游戏装置、移动视频会议单元、膝上型计算机、台式计算机、电视机顶盒、平板计算装置、电子书阅读器、固定或移动媒体播放器等等。
在图2的实例中,计算装置30包含具有CPU存储器34的中央处理单元(central processing unit,CPU)32、具有GPU存储器38和一或多个着色单元40的图形处理单元(graphics processing unit,GPU)36、显示器单元42、显示器缓冲单元44、用户接口单元46和存储单元48。此外,存储单元48可以存储具有编译器54的GPU驱动器50、GPU程序52和本机编译的GPU程序56。
CPU 32的实例包含但不限于数字信号处理器(DSP)、通用微处理器、专用集成电路(ASIC)、现场可编程逻辑阵列(FPGA)或其它等效的集成或离散逻辑电路。虽然CPU 32和GPU 36在图2的实例中被说明成分开的单元,但是在一些实例中,CPU 32和GPU 36可以集成为单个单元。CPU 32可以执行一或多个应用程序。应用程序的实例可以包含网络浏览器、电子邮件应用程序、电子表格、视频游戏、音频和/或视频捕获、回放或编辑应用程序或其它起始有待经由显示器单元42呈现的图像数据的产生的应用程序。
在图2所示的实例中,CPU 32包含CPU存储器34。CPU存储器34可以表示在执行机器或对象代码时使用的芯片上存储设备或存储器。CPU存储器34可以各自包括能够存储固定数目个数字位的硬件存储器寄存器。CPU 32可以能够比从存储单元48(其可例如经由系统总线存取)读取值或者向存储单元48写入值更迅速地从本机CPU存储器34读取值或者向本机CPU存储器34写入值。
GPU 36表示用于执行图形操作的一或多个专用处理器。也就是说,举例来说,GPU 36 可以是具有固定功能和用于渲染图形和执行GPU应用程序的可编程组件的专用硬件单元。GPU 36还可包含DSP、通用微处理器、ASIC、FPGA或其它等效的集成或离散逻辑电路。
GPU 36还包含GPU存储器38,其可以表示在执行机器或对象代码时使用的芯片上存储设备或存储器。GPU存储器38可以各自包括能够存储固定数目个数字位的硬件存储器寄存器。GPU 36可以能够比从存储单元48(其可例如经由系统总线存取)读取值或者向存储单元48写入值更迅速地从本机GPU存储器38读取值或者向本机GPU存储器38写入值。
GPU 36还包含着色单元40。如下文更详细地描述,着色单元40可以配置成处理组件的可编程管线。在一些实例中,着色单元40可以称为“着色器处理器”或“统一着色器”,并且可以执行几何形状、顶点、像素或其它着色操作以渲染图形。着色单元40可以包含图2中为了清晰起见未具体展示的一或多个组件,例如用于取出和解码指令的组件、用于实行算术计算的一或多个算术逻辑单元(arithmetic and logic unit,ALU)和一或多个存储器、高速缓存或寄存器。
显示器单元42表示能够显示视频数据、图像、文本或任何其它类型的数据的单元。显示器单元42可以包含液晶显示器(liquid crystal display,LCD)、发光二极管(light emitting diode,LED)显示器、有机LED(organic light-emitting diode,OLED)、有源矩阵OLED(active-matrix organic light-emitting diode,AMOLED)显示器等等。
显示器缓冲单元44表示专用于为显示器单元42存储数据以供呈现图像(例如照片或视频帧)的存储器或存储装置。显示器缓冲单元44可以表示包含多个存储位置的二维缓冲器。显示器缓冲单元44内的存储位置的数目可以基本上类似于有待在显示器单元42上显示的像素的数目。举例来说,如果显示器单元42经配置以包含640x480个像素,那么显示器缓冲单元44可以包含640x480个存储位置。显示器缓冲单元44可以存储由GPU 36处理的像素中的每一个的最终像素值。显示器单元42可以从显示器缓冲单元44检索最终像素值,并且基于显示器缓冲单元44中存储的像素值显示最终图像。
用户接口单元46表示用户可以用来与计算装置30的其它单元(例如,CPU 32)交互或者以其它方式介接以与计算装置30的其它单元通信的单元。用户接口单元46的实例包含但不限于轨迹球、鼠标、键盘和其它类型的输入装置。用户接口单元46还可以是触摸屏,并且可以并入为显示器单元42的一部分。
存储单元48可以包括一或多个计算机可读存储媒体。存储单元48的实例包含但不限于随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、CD-ROM或其它光盘存储装置、磁盘存储装置或其它磁性存储装置、快闪存储器或可以用于以指令或数据结构的形式存储期望的程序代码并且可以由计算机或处理器存取的任何其它媒体。
在一些实例实施方案中,存储单元48可以包含使得CPU 32和/或GPU 36执行本发明中用于实现CPU 32和GPU 36的功能的指令。在一些实例中,存储单元48可以被视为非暂时性存储媒体。术语“非暂时性”可以指示存储媒体不是体现在载波或传播信号中。然而,术语“非暂时性”不应解释为意味着存储单元48是不能移动的。作为一个实例,存储单元48可以从计算装置30中移除,并且移动到另一装置。作为另一实例,基本上类似于存储单元 48的存储单元可以插入到计算装置30中。在某些实例中,非暂时性存储媒体可以存储可能随时间而改变的数据(例如,在RAM中)。
存储单元48存储GPU驱动器50和编译器54、GPU程序52和本机编译的GPU程序56。GPU驱动器50表示提供存取GPU 36的接口的计算机程序或可执行代码。CPU 32执行GPU驱动器50或其若干部分以与GPU 36连接,并且出于此原因,GPU驱动器50在图2的实例中展示为CPU 32内的用虚线框标记的GPU驱动器50。GPU驱动器50可以存取CPU 32执行的程序或其它可执行文件,包含GPU程序52。
GPU程序52可以包含(例如,使用应用程序编程接口(API))用高级(HL)编程语言编写的代码。API的实例包含开放图形库(OpenGL)。总地来说,API包含由相关联的硬件执行的预定的标准化的成组命令。API命令允许用户指令GPU的硬件组件执行命令,而无需用户知道硬件组件的具体情况。
GPU程序52可以调用或者以其它方式包含GPU驱动器50提供的一或多个功能。CPU 32总体上执行其中嵌入着GPU程序52的程序,并且在遇到GPU程序52后,即刻将GPU程序52传递给GPU驱动器50(例如,以命令流的形式)。CPU 32在这个上下文中执行GPU驱动器50以处理GPU程序52。举例来说,GPU驱动器50可以通过将GPU程序编译成GPU 36可执行的对象或机器代码而处理GPU程序52。这个对象代码在图2的实例中展示为本机编译的GPU程序56。
在一些实例中,编译器54可以实时或近实时地操作,以在执行其中嵌入着GPU程序52的程序期间编译GPU程序52。举例来说,编译器54总体上表示将根据HL编程语言定义的HL指令精简成低级(LL)编程语言的LL指令的模块。在编译之后,这些LL指令能够由特定类型的处理器或其它类型的硬件(例如FPGA、ASIC等等(包含例如CPU 32和GPU 36)来执行。
在图2的实例中,编译器54可以在执行包含GPU程序52的HL代码时从CPU 32接收GPU程序52。编译器54可以将GPU程序52编译成符合LL编程语言的本机编译的GPU程序56。编译器54接着输出包含LL指令的本机编译的GPU程序56。
GPU 36总体上接收本机编译的GPU程序56(如通过GPU 36内的虚线框标记的“本机编译的GPU程序56”所展示),在这之后,在一些例子中,GPU 36即刻渲染图像并且将图像的经渲染部分输出到显示器缓冲单元44。举例来说,GPU 36可以产生有待在显示器单元42处显示的多个基元。基元可以包含一或多条线(包含曲线、样条等)、点、圆、椭圆、多边形(其中通常将多边形定义为一或多个三角形的集合)或任何其它二维(2D)基元。术语“基元”还可以指代三维(3D)基元,例如立方体、圆柱体、球体、圆锥体、金字塔、圆环等等。总地来说,术语“基元”是指任何被GPU 36渲染以供经由显示器单元42作为图像(或在视频数据的上下文中的帧)显示的几何形状或要素。
GPU 36可以通过应用一或多个模型变换(其也可以在状态数据中指定)将基元或基元的其它状态数据(例如,其定义基元的纹理、亮度、相机配置或其它方面)变换成所谓的“世界空间”。一旦经过变换,GPU 36就可以应用有效相机的视图变换(其同样也可以在定义相机的状态数据中指定)以将基元和光的坐标变换到相机或眼睛空间中。GPU36还可以执行顶点着色以在任何有效光的视图中渲染基元的外观。GPU 36可以在上述模型、世界或视图空间 中的一或多个中执行顶点着色(虽然顶点着色通常是在世界空间中执行的)。
一旦基元经过着色,GPU 36就可以执行投影以将图像投影到(作为一个实例)在(-1,-1,-1)和(1,1,1)处具有极点的单位立方体中。这个单位立方体通常称为典型视图体。在将模型从眼睛空间变换到典型视图体之后,GPU 36可以执行裁剪以移除任何不至少部分地驻留在视图体中的基元。换句话说,GPU 36可以移除任何不在相机帧内的基元。GPU 36可以接着将基元的坐标从视图体映射到屏幕空间,从而有效地将基元的3D基元精简成屏幕的2D坐标。
在给定用其相关联的着色数据定义基元的经变换和投影的顶点的情况下,GPU 36可以接着使基元光栅化。举例来说,GPU 36可以计算和设置基元所覆盖的屏幕的像素的颜色。在光栅化期间,GPU 36可以应用与基元相关联的任何纹理(其中纹理可以包括状态数据)。GPU 36还可以在光栅化期间执行Z缓冲器算法(也称为深度测试)以确定是否有任何基元和/或对象被任何其它对象遮蔽。Z缓冲器算法根据基元的深度将基元排序,使得GPU 36知道将每一基元绘制到屏幕上时的次序。GPU 36将经渲染的像素输出到显示器缓冲单元44。
显示器缓冲单元44可以暂时存储经渲染的图像的经渲染的像素,直到整个图像都被渲染了为止。在这个上下文中,可以将显示器缓冲单元44视为图像帧缓冲器。显示器缓冲单元44可以接着发射有待在显示器单元42上显示的经渲染的图像。在一些替代的实例中,GPU 36可以将图像的经渲染的部分直接输出到显示器单元42以供显示,而不是将图像暂时存储在显示器缓冲单元44中。显示器单元42可以接着显示在显示器缓冲器单元78中存储的图像。
接下来介绍本申请实施例涉及的一些概念:
1)、纹理:表示物体表面细节的一幅或几幅二维图形,也称纹理贴图(texture mapping),当把纹理按照特定的方式映射到物体表面上的时候能使物体看上去更加真实。
2)、RenderBuffer:是一个内存缓冲区,即存储一系列的字节、整数、像素等,可以直接将所有渲染数据存储到RenderBuffer中,同时不会对数据做任何格式转变。
3)、Render Target:在计算机3D图形领域,渲染目标是现代图形处理单元(GPU)的一项功能,它允许将3D场景渲染到中间内存缓冲区RenderBuffer或渲染目标纹理(Render Target Texture),而不是帧缓冲区或后台缓冲区,然后可以通过像素着色器处理Render Target Texture,以便在显示最终图像之前将附加效果应用于最终图像。
4)、Render Target Format:渲染目标格式,该格式描述每个像素分配多少位,以及它们如何在红色、绿色、蓝色和alpha透明度之间进行划分,如RGBA8,每个像素分配32位,红绿蓝和alpha各占8位,简称Render Format,或者渲染格式。
5)、RenderPass:渲染通道,它通常指的是多通道渲染技术,在多通道渲染技术中,一个物体需要多次渲染,每个渲染过程的结果会被累加到最终的呈现结果上。
6)、图形应用程序接口(application programming interface,API):图形渲染编程接口;游戏引擎通过调用图形API将3D场景渲染呈现至屏幕,移动平台上使用最广泛的图形API包括:OpenGL ES和Vulkan;OpenGL ES是传统的图形API,Vulkan是新一代图形API。
接下来对本申请实施例提供的一种渲染格式选择方法进行详细描述。
参照图3,图3为本申请实施例提供的一种渲染格式选择方法的流程示意,如图3中示出的那样,本申请实施例提供的渲染格式选择方法,包括:
301、获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象。
本申请实施例中,在游戏启动时,处理器可以获取到该游戏中各个帧的渲染指令;其中,第一渲染指令集合是其中一帧(第一帧)的的渲染指令。具体的,处理器可以获取到该游戏中第一帧的主场景绘制指令。
302、基于所述第一渲染指令集合不包含用于绘制透明物体的指令,且当前的渲染格式为第一渲染格式,将第一渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第一渲染格式变更信息绘制第二帧的对象,其中,所述第一渲染格式包括透明度通道,所述第一渲染格式变更信息用于指示将所述第一渲染格式变更为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二帧为所述第一帧之后的帧。
在一种可能的实现中,当前的渲染格式为第一渲染格式,所述第一渲染格式可以由R通道、G通道、B通道以及A通道组成,例如所述第一渲染格式可以为RGBA8或RGBA16F。
在一种可能的实现中,可以检测第一渲染指令集合中渲染指令的透明特征,若第一渲染指令集合中不包含用于绘制透明物体的指令,则可以认为当前不需要进行透明物体的渲染,进而也不需要包含透明度通道的渲染格式,反之,如果用包含透明度通道的渲染格式进行渲染,在没有增加画质的前提下,会增加执行渲染任务时的DDR开销。
例如,第一渲染指令集合中渲染指令的透明特征可以根据如下指令特征确定,比如主场景RenderPass指令中是否存在Disable(GL_BLEND)等指令,决定是否绘制透明物体。根据特征信息,做出决策结果,比如Frame N帧(即第一帧)调用Disable(GL_BLEND),则可以认为第一渲染指令集合中不包含用于绘制透明物体的指令。
本申请实施例中,将第一帧的渲染指令的透明特征作为进行第二帧的渲染格式选择的参考,在第一渲染指令集合中不包含用于绘制透明物体的指令时,可以认为第二帧也不需要进行透明物体的渲染。然而,当前渲染格式为第一渲染格式(包括透明度通道),若继续基于第一渲染格式进行第二帧的渲染,会增加不必要的DDR开销。
因此,在进行第二帧的渲染时需要采用不包括透明度通道的第二渲染格式,例如,所述第二渲染格式可以为R11G11B10F。
在一种可能的实现中,CPU需要可以将指示将所述第一渲染格式变更为第二渲染格式的第一渲染格式变更信息传递至GPU,以便GPU在进行第二帧的渲染时采用的渲染格式由第一渲染格式变更为第二渲染格式。
在一种可能的实现中,所述第二帧为所述第一帧之后相邻的帧。
示例性的,CPU可以检测Frame N帧(即上述第一帧)的渲染格式和透明特征,是否透明可以根据如下指令特征,比如主场景RenderPass指令中是否存在Disable/Enable(GL_BLEND)、BlendEquation、BlendFunc等指令,决定是否绘制透明物体。根据特征信息,做出决策结果,比如Frame N帧调用Disable(GL_BLEND),同时渲染格式是RGBA16F,则给出Frame N+1帧(即上述第二帧)动态切换渲染格式决策,将RGBA16F切换为R11G11B10格式,Frame N+1基于新替换的R11G11B10渲染格式进行绘制。
本申请实施例提供了一种渲染格式选择方法,包括:获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;基于所述第一渲染指令集合不包含用于绘制透明物体的指令,且当前的渲染格式为第一渲染格式,将第一渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第一渲染格式变更信息绘制第二帧的对象,其中,所述第一渲染格式包括透明度通道,所述第一渲染格式变更信息用于指示将所述第一渲染格式变更为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二帧为所述第一帧之后的帧。将第一帧的渲染指令的透明特征作为进行第二帧的渲染格式选择的参考,在第一渲染指令集合中不包含用于绘制透明物体的指令时,可以认为第二帧也不需要进行透明物体的渲染。然而,当前渲染格式为第一渲染格式(包括透明度通道),若继续基于第一渲染格式进行第二帧的渲染,会增加不必要的DDR开销。因此,在进行第二帧的渲染时GPU采用不包括透明度通道的第二渲染格式,降低了DDR开销。
参照图4,图4为本申请实施例提供的一种渲染格式选择方法的流程示意,如图4中示出的那样,本申请实施例提供的渲染格式选择方法,包括:
401、获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象。
本申请实施例中,在游戏启动时,处理器可以获取到该游戏中各个帧的渲染指令;其中,第一渲染指令集合是其中一帧(第一帧)的的渲染指令。具体的,处理器可以获取到该游戏中第一帧的主场景绘制指令。
402、基于所述第一渲染指令集合包含用于绘制透明物体的指令,且当前的渲染格式为第二渲染格式,将第二渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第二渲染格式变更信息绘制第二帧的对象,其中,所述第二渲染格式不包括透明度通道,所述第二渲染格式变更信息用于指示将所述第二渲染格式变更为第一渲染格式,所述第一渲染格式包括透明度通道,所述第二帧为所述第一帧之后的帧。
在一种可能的实现中,当前的渲染格式为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二渲染格式可以由R通道、G通道以及B通道组成,例如所述第二渲染格式可以为R11G11B10F。
在一种可能的实现中,可以检测第一渲染指令集合中渲染指令的透明特征,若第一渲染指令集合中包含用于绘制透明物体的指令,则可以认为当前需要进行透明物体的渲染,进而也需要包含透明度通道的渲染格式,反之,如果用不包含透明度通道的渲染格式进行渲染,会大大降低渲染的画质。
例如,第一渲染指令集合中渲染指令的透明特征可以根据如下指令特征确定,比如主场景RenderPass指令中是否存在Enable(GL_BLEND)、BlendEquation、BlendFunc等指令,决定是否绘制透明物体。根据特征信息,做出决策结果,比如Frame N帧(即第一帧)调用Enable(GL_BLEND),则可以认为第一渲染指令集合中包含用于绘制透明物体的指令。
本申请实施例中,将第一帧的渲染指令的透明特征作为进行第二帧的渲染格式选择的参考,在第一渲染指令集合中包含用于绘制透明物体的指令时,可以认为第二帧也需要进行透明物体的渲染。然而,当前渲染格式为第二渲染格式(不包括透明度通道),若继续 基于第二渲染格式进行第二帧的渲染,会大大降低渲染的画质。
因此,在进行第二帧的渲染时需要采用包括透明度通道的第一渲染格式,所述第一渲染格式可以由R通道、G通道、B通道以及A通道组成,例如,所述第一渲染格式可以为RGBA8或RGBA16F。
在一种可能的实现中,CPU需要可以将指示将所述第二渲染格式变更为第一渲染格式的第二渲染格式变更信息传递至GPU,以便GPU在进行第二帧的渲染时采用的渲染格式由第二渲染格式变更为第一渲染格式。
在一种可能的实现中,所述第二帧为所述第一帧之后相邻的帧。
示例性的,CPU可以检测Frame N帧(即上述第一帧)的渲染格式和透明特征,是否透明可以根据如下指令特征,比如主场景RenderPass指令中是否存在Disable/Enable(GL_BLEND)、BlendEquation、BlendFunc等指令,决定是否绘制透明物体。根据特征信息,做出决策结果,比如Frame N帧调用Enable(GL_BLEND),同时渲染格式是R11G11B10,则给出Frame N+1帧(即上述第二帧)动态切换渲染格式决策,将R11G11B10切换为RGBA16F格式,Frame N+1基于新替换的RGBA16F渲染格式进行绘制。
本申请实施例提供了一种渲染格式选择方法,所述方法应用于终端设备,所述终端设备包括图形处理器GPU,所述方法包括:获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;基于所述第一渲染指令集合包含用于绘制透明物体的指令,且当前的渲染格式为第二渲染格式,将第二渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第二渲染格式变更信息绘制第二帧的对象,其中,所述第二渲染格式不包括透明度通道,所述第二渲染格式变更信息用于指示将所述第二渲染格式变更为第一渲染格式,所述第一渲染格式包括透明度通道,所述第二帧为所述第一帧之后的帧。将第一帧的渲染指令的透明特征作为进行第二帧的渲染格式选择的参考,在第一渲染指令集合中包含用于绘制透明物体的指令时,可以认为第二帧也需要进行透明物体的渲染。然而,当前渲染格式为第二渲染格式(不包括透明度通道),若继续基于第二渲染格式进行第二帧的渲染,会大大降低渲染的画质。因此,本申请在进行第二帧的渲染时需要采用包括透明度通道的第一渲染格式,可以提高渲染的画质。
应理解,包括透明度通道的不同渲染格式之间存在着像素值表示精度的差异,例如,RGBA8中为每个像素分配32位,红绿蓝和透明度各占8位,RGBA16F中为每个像素分配64位,红绿蓝和透明度各占16位,也就是RGBA16F支持的像素值表示精度大于RGBA8支持的像素值表示精度。
在一种可能的实现中,可能存在渲染的画质所需的像素值表示精度高于当前渲染格式支持的像素值表示精度的情况,在这种情况下,需要将渲染格式更改为支持更高像素值表示精度的渲染格式,以提高渲染的画质。
在一种可能的实现中,可以获取所述第二帧的绘制结果,绘制结果可以包括渲染界面上各个像素点的各个通道的值,或者可以将渲染界面划分为多个瓦片区域(tile区域,具体可以参照图5所示),绘制结果可以包括渲染界面上各个tile中一个或多个像素点(例如中心点)的各个通道的值。CPU可以基于绘制结果来确定当前的渲染格式是否满足第二帧 所需的像素值表示精度。具体的,可以判断绘制结果中是否存在像素值等于像素值表示精度上限值的像素点,如果存在(或者存在多个),则可以认为所述绘制结果指示所述第二帧中存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点。
在一种可能的实现中,若所述绘制结果指示所述第二帧中存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,则可以将第三渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第三渲染格式变更信息绘制第三帧的对象,其中,所述第三渲染格式变更信息用于指示将所述第一渲染格式变更为第三渲染格式,所述第三渲染格式支持的像素值表示精度上限大于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
在一种可能的实现中,所述第二渲染格式为RGBA8,所述第三渲染格式为RGBA16F。
示例性的,可以由RGBA8切换到RGBA16F,Frame N映射Render Buffer,比如1920*1080的Render Buffer,划分253个128*64个tile,从GPU内存读取每个tile中心值,判断值是否等于255,如果存在多个等于255,则认为存在高光区域,RGBA8渲染格式无法满足画质渲染要求,需要将RGBA8切换为RGBA16F。
在一种可能的实现中,可能存在渲染的画质所需的像素值表示精度低于当前渲染格式支持的像素值表示精度的情况,在这种情况下,需要将渲染格式更改为支持更低像素值表示精度的渲染格式,以降低DDR开销。
在一种可能的实现中,获取所述第二帧的绘制结果,基于所述绘制结果指示所述第二帧中不存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,将第四渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第四渲染格式变更信息绘制第三帧的对象,其中,所述第四渲染格式变更信息用于指示将所述第一渲染格式变更为第四渲染格式,所述第四渲染格式支持的像素值表示精度上限小于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
在一种可能的实现中,所述第三渲染格式为RGBA16F,所述第二渲染格式为RGBA8。
示例性的,以由RGBA16F切换到RGBA8为例,Frame N映射Render Buffer,比如1920*1080的Render Buffer,划分253个128*64个tile,读取每个tile中心值,如图5,判断值是否都小于1.0(规一化后的结果),如果是,则将RGBA16F降低为RGBA8。
本申请实施例的一个应用流程可以参照图6所示。
基于某款Top游戏性能测试数据,应用本申请实施例的方案的平均帧率由40.65fps提升到53.16fps,帧率提升12.51fps;功耗由1636.7mA降低到1591.86mA,功耗降低44.54mA;每小时卡顿次数由206降低到179,最差丢帧持平;单帧功率由40.26mA降低到29.94mA,单帧功耗优化10.32mA。
参照图7,图7为本申请提供了一种渲染格式选择装置的结构示意,如图7所示,所述装置700可以应用于终端设备,所述终端设备包括图形处理器GPU,所述装置700包括:
指令获取模块701,用于获取第一渲染指令集合,所述第一渲染指令集合用于绘制第 一帧的对象;
关于指令获取模块701的描述,可以参照上述实施例中步骤301的描述,这里不再赘述。
指令格式转换模块702,用于基于所述第一渲染指令集合不包含用于绘制透明物体的指令,且当前的渲染格式为第一渲染格式,将第一渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第一渲染格式变更信息绘制第二帧的对象,其中,所述第一渲染格式包括透明度通道,所述第一渲染格式变更信息用于指示将所述第一渲染格式变更为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二帧为所述第一帧之后的帧。
关于指令格式转换模块702的描述,可以参照上述实施例中步骤302的描述,这里不再赘述。
在一种可能的实现中,所述第一渲染格式由R通道、G通道、B通道以及A通道组成,所述第二渲染格式由R通道、G通道以及B通道组成。
在一种可能的实现中,所述第一渲染格式为RGBA8或RGBA16F,所述第二渲染格式为R11G11B10F。
在一种可能的实现中,所述第二帧为所述第一帧之后相邻的帧。
参照图8,图8为本申请提供了一种渲染格式选择装置的结构示意,如图8所示,所述装置800可以应用于终端设备,所述终端设备包括图形处理器GPU,所述装置800包括:
指令获取模块801,用于获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;
关于指令获取模块801的描述,可以参照上述实施例中步骤401的描述,这里不再赘述。
指令格式转换模块802,用于基于所述第一渲染指令集合包含用于绘制透明物体的指令,且当前的渲染格式为第二渲染格式,将第二渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第二渲染格式变更信息绘制第二帧的对象,其中,所述第二渲染格式不包括透明度通道,所述第二渲染格式变更信息用于指示将所述第二渲染格式变更为第一渲染格式,所述第一渲染格式包括透明度通道,所述第二帧为所述第一帧之后的帧。
关于指令格式转换模块802的描述,可以参照上述实施例中步骤402的描述,这里不再赘述。
在一种可能的实现中,所述第一渲染格式由R通道、G通道、B通道以及A通道组成,所述第二渲染格式由R通道、G通道以及B通道组成。
在一种可能的实现中,所述第一渲染格式为RGBA8或RGBA16F,所述第二渲染格式为R11G11B10F。
在一种可能的实现中,所述指令获取模块,还用于获取所述第二帧的绘制结果;
所述指令格式转换模块,还用于基于所述绘制结果指示所述第二帧中存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,将第三渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第三渲染格式变更信息绘制第三帧的 对象,其中,所述第三渲染格式变更信息用于指示将所述第一渲染格式变更为第三渲染格式,所述第三渲染格式支持的像素值表示精度上限大于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
在一种可能的实现中,所述第二渲染格式为RGBA8,所述第三渲染格式为RGBA16F。
在一种可能的实现中,所述指令获取模块,还用于获取所述第二帧的绘制结果;
所述指令格式转换模块,还用于基于所述绘制结果指示所述第二帧中不存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,将第四渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第四渲染格式变更信息绘制第三帧的对象,其中,所述第四渲染格式变更信息用于指示将所述第一渲染格式变更为第四渲染格式,所述第四渲染格式支持的像素值表示精度上限小于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
在一种可能的实现中,所述第三渲染格式为RGBA16F,所述第二渲染格式为RGBA8。
参照图9,图9为本申请提供的一种终端设备900的结构示意,如图9示出的那样,所述终端设备包括处理器901和存储器903,所述处理器901包括中央处理器CPU和图形处理器GPU,所述CPU用于获取所述存储器的代码以执行:
获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;
基于所述第一渲染指令集合不包含用于绘制透明物体的指令,且当前的渲染格式为第一渲染格式,将第一渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第一渲染格式变更信息绘制第二帧的对象,其中,所述第一渲染格式包括透明度通道,所述第一渲染格式变更信息用于指示将所述第一渲染格式变更为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二帧为所述第一帧之后的帧。
在一种可能的实现中,所述第一渲染格式由R通道、G通道、B通道以及A通道组成,所述第二渲染格式由R通道、G通道以及B通道组成。
在一种可能的实现中,所述第一渲染格式为RGBA8或RGBA16F,所述第二渲染格式为R11G11B10F。
在一种可能的实现中,所述第二帧为所述第一帧之后相邻的帧。
其中,所述CPU用于获取所述存储器的代码以执行:
获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;
基于所述第一渲染指令集合包含用于绘制透明物体的指令,且当前的渲染格式为第二渲染格式,将第二渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第二渲染格式变更信息绘制第二帧的对象,其中,所述第二渲染格式不包括透明度通道,所述第二渲染格式变更信息用于指示将所述第二渲染格式变更为第一渲染格式,所述第一渲染格式包括透明度通道,所述第二帧为所述第一帧之后的帧。
在一种可能的实现中,所述第一渲染格式由R通道、G通道、B通道以及A通道组成,所述第二渲染格式由R通道、G通道以及B通道组成。
在一种可能的实现中,所述第一渲染格式为RGBA8或RGBA16F,所述第二渲染格式 为R11G11B10F。
在一种可能的实现中,所述方法还包括:
获取所述第二帧的绘制结果;
基于所述绘制结果指示所述第二帧中存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,将第三渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第三渲染格式变更信息绘制第三帧的对象,其中,所述第三渲染格式变更信息用于指示将所述第一渲染格式变更为第三渲染格式,所述第三渲染格式支持的像素值表示精度上限大于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
在一种可能的实现中,所述第二渲染格式为RGBA8,所述第三渲染格式为RGBA16F。
在一种可能的实现中,所述方法还包括:
获取所述第二帧的绘制结果;
基于所述绘制结果指示所述第二帧中不存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,将第四渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第四渲染格式变更信息绘制第三帧的对象,其中,所述第四渲染格式变更信息用于指示将所述第一渲染格式变更为第四渲染格式,所述第四渲染格式支持的像素值表示精度上限小于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
在一种可能的实现中,所述第三渲染格式为RGBA16F,所述第二渲染格式为RGBA8。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出 来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者其他网络设备等)执行本申请图3实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (25)

  1. 一种渲染格式选择方法,其特征在于,所述方法应用于终端设备,所述终端设备包括图形处理器GPU,所述方法包括:
    获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;
    基于所述第一渲染指令集合不包含用于绘制透明物体的指令,且当前的渲染格式为第一渲染格式,将第一渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第一渲染格式变更信息绘制第二帧的对象,其中,所述第一渲染格式包括透明度通道,所述第一渲染格式变更信息用于指示将所述第一渲染格式变更为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二帧为所述第一帧之后的帧。
  2. 根据权利要求1所述的方法,其特征在于,所述第一渲染格式由R通道、G通道、B通道以及A通道组成,所述第二渲染格式由R通道、G通道以及B通道组成。
  3. 根据权利要求1或2所述的方法,其特征在于,所述第一渲染格式为RGBA8或RGBA16F,所述第二渲染格式为R11G11B10F。
  4. 根据权利要求1至3任一所述的方法,其特征在于,所述第二帧为所述第一帧之后相邻的帧。
  5. 一种渲染格式选择方法,其特征在于,所述方法应用于终端设备,所述终端设备包括图形处理器GPU,所述方法包括:
    获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;
    基于所述第一渲染指令集合包含用于绘制透明物体的指令,且当前的渲染格式为第二渲染格式,将第二渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第二渲染格式变更信息绘制第二帧的对象,其中,所述第二渲染格式不包括透明度通道,所述第二渲染格式变更信息用于指示将所述第二渲染格式变更为第一渲染格式,所述第一渲染格式包括透明度通道,所述第二帧为所述第一帧之后的帧。
  6. 根据权利要求5所述的方法,其特征在于,所述第一渲染格式由R通道、G通道、B通道以及A通道组成,所述第二渲染格式由R通道、G通道以及B通道组成。
  7. 根据权利要求5或6所述的方法,其特征在于,所述第一渲染格式为RGBA8或RGBA16F,所述第二渲染格式为R11G11B10F。
  8. 根据权利要求5至7任一所述的方法,其特征在于,所述方法还包括:
    获取所述第二帧的绘制结果;
    基于所述绘制结果指示所述第二帧中存在所需的像素值表示精度超过所述第一渲染格 式支持的像素值表示精度上限的像素点,将第三渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第三渲染格式变更信息绘制第三帧的对象,其中,所述第三渲染格式变更信息用于指示将所述第一渲染格式变更为第三渲染格式,所述第三渲染格式支持的像素值表示精度上限大于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
  9. 根据权利要求8所述的方法,其特征在于,所述第二渲染格式为RGBA8,所述第三渲染格式为RGBA16F。
  10. 根据权利要求5至7任一所述的方法,其特征在于,所述方法还包括:
    获取所述第二帧的绘制结果;
    基于所述绘制结果指示所述第二帧中不存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,将第四渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第四渲染格式变更信息绘制第三帧的对象,其中,所述第四渲染格式变更信息用于指示将所述第一渲染格式变更为第四渲染格式,所述第四渲染格式支持的像素值表示精度上限小于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
  11. 根据权利要求10所述的方法,其特征在于,所述第三渲染格式为RGBA16F,所述第二渲染格式为RGBA8。
  12. 一种渲染格式选择装置,其特征在于,所述装置应用于终端设备,所述终端设备包括图形处理器GPU,所述装置包括:
    指令获取模块,用于获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;
    指令格式转换模块,用于基于所述第一渲染指令集合不包含用于绘制透明物体的指令,且当前的渲染格式为第一渲染格式,将第一渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第一渲染格式变更信息绘制第二帧的对象,其中,所述第一渲染格式包括透明度通道,所述第一渲染格式变更信息用于指示将所述第一渲染格式变更为第二渲染格式,所述第二渲染格式不包括透明度通道,所述第二帧为所述第一帧之后的帧。
  13. 根据权利要求12所述的装置,其特征在于,所述第一渲染格式由R通道、G通道、B通道以及A通道组成,所述第二渲染格式由R通道、G通道以及B通道组成。
  14. 根据权利要求12或13所述的装置,其特征在于,所述第一渲染格式为RGBA8或RGBA16F,所述第二渲染格式为R11G11B10F。
  15. 根据权利要求12至14任一所述的装置,其特征在于,所述第二帧为所述第一帧之后相邻的帧。
  16. 一种渲染格式选择装置,其特征在于,所述装置应用于终端设备,所述终端设备包括图形处理器GPU,所述装置包括:
    指令获取模块,用于获取第一渲染指令集合,所述第一渲染指令集合用于绘制第一帧的对象;
    指令格式转换模块,用于基于所述第一渲染指令集合包含用于绘制透明物体的指令,且当前的渲染格式为第二渲染格式,将第二渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第二渲染格式变更信息绘制第二帧的对象,其中,所述第二渲染格式不包括透明度通道,所述第二渲染格式变更信息用于指示将所述第二渲染格式变更为第一渲染格式,所述第一渲染格式包括透明度通道,所述第二帧为所述第一帧之后的帧。
  17. 根据权利要求16所述的装置,其特征在于,所述第一渲染格式由R通道、G通道、B通道以及A通道组成,所述第二渲染格式由R通道、G通道以及B通道组成。
  18. 根据权利要求16或17所述的装置,其特征在于,所述第一渲染格式为RGBA8或RGBA16F,所述第二渲染格式为R11G11B10F。
  19. 根据权利要求16至18任一所述的装置,其特征在于,所述指令获取模块,还用于获取所述第二帧的绘制结果;
    所述指令格式转换模块,还用于基于所述绘制结果指示所述第二帧中存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,将第三渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第三渲染格式变更信息绘制第三帧的对象,其中,所述第三渲染格式变更信息用于指示将所述第一渲染格式变更为第三渲染格式,所述第三渲染格式支持的像素值表示精度上限大于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
  20. 根据权利要求19所述的装置,其特征在于,所述第二渲染格式为RGBA8,所述第三渲染格式为RGBA16F。
  21. 根据权利要求16至18任一所述的装置,其特征在于,所述指令获取模块,还用于获取所述第二帧的绘制结果;
    所述指令格式转换模块,还用于基于所述绘制结果指示所述第二帧中不存在所需的像素值表示精度超过所述第一渲染格式支持的像素值表示精度上限的像素点,将第四渲染格式变更信息传递至所述GPU,以便所述GPU根据所述第四渲染格式变更信息绘制第三帧的对象,其中,所述第四渲染格式变更信息用于指示将所述第一渲染格式变更为第四渲染 格式,所述第四渲染格式支持的像素值表示精度上限小于所述第一渲染格式支持的像素值表示精度上限,所述第三帧为所述第二帧之后的帧。
  22. 根据权利要求21所述的装置,其特征在于,所述第三渲染格式为RGBA16F,所述第二渲染格式为RGBA8。
  23. 一种非易失性计算机可读存储介质,其特征在于,所述非易失性可读存储介质包含计算机指令,用于执行权利要求1至11任一所述的渲染格式选择方法。
  24. 一种运算设备,其特征在于,所述运算设备包括存储器和处理器,所述存储器中存储有代码,所述处理器用于获取所述代码,以执行权利要求1至11任一所述的渲染格式选择方法。
  25. 一种计算机程序产品,其特征在于,包括计算机可读指令,当所述计算机可读指令在计算机设备上运行时,使得所述计算机设备执行如权利要求1至11任一所述的方法。
PCT/CN2022/122060 2021-09-29 2022-09-28 一种渲染格式选择方法及其相关设备 WO2023051590A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202280066021.6A CN118043842A (zh) 2021-09-29 2022-09-28 一种渲染格式选择方法及其相关设备
EP22874974.3A EP4379647A1 (en) 2021-09-29 2022-09-28 Render format selection method and device related thereto

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111155620.9 2021-09-29
CN202111155620.9A CN115880127A (zh) 2021-09-29 2021-09-29 一种渲染格式选择方法及其相关设备

Publications (1)

Publication Number Publication Date
WO2023051590A1 true WO2023051590A1 (zh) 2023-04-06

Family

ID=85756524

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/122060 WO2023051590A1 (zh) 2021-09-29 2022-09-28 一种渲染格式选择方法及其相关设备

Country Status (3)

Country Link
EP (1) EP4379647A1 (zh)
CN (2) CN115880127A (zh)
WO (1) WO2023051590A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117743195B (zh) * 2024-02-20 2024-04-26 北京麟卓信息科技有限公司 基于渲染时间差异度量的图形接口层次化实现验证方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150334403A1 (en) * 2014-05-15 2015-11-19 Disney Enterprises, Inc. Storage and compression methods for animated images
US20170358050A1 (en) * 2016-06-10 2017-12-14 Apple Inc. Dynamic Selection of Image Rendering Formats
CN108876887A (zh) * 2017-05-16 2018-11-23 北京京东尚科信息技术有限公司 渲染方法和装置
CN113313802A (zh) * 2021-05-25 2021-08-27 完美世界(北京)软件科技发展有限公司 图像渲染方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN115880127A (zh) 2023-03-31
EP4379647A1 (en) 2024-06-05
CN118043842A (zh) 2024-05-14

Similar Documents

Publication Publication Date Title
EP4198909A1 (en) Image rendering method and apparatus, and computer device and storage medium
US10776997B2 (en) Rendering an image from computer graphics using two rendering computing devices
US9087410B2 (en) Rendering graphics data using visibility information
US10055883B2 (en) Frustum tests for sub-pixel shadows
US10049486B2 (en) Sparse rasterization
CN109785417B (zh) 一种实现OpenGL累积操作的方法及装置
US9224227B2 (en) Tile shader for screen space, a method of rendering and a graphics processing unit employing the tile shader
KR102006584B1 (ko) 레이트 심도 테스팅과 컨서버티브 심도 테스팅 간의 동적 스위칭
CN105550973B (zh) 图形处理单元、图形处理系统及抗锯齿处理方法
US11120591B2 (en) Variable rasterization rate
US10319068B2 (en) Texture not backed by real mapping
WO2023051590A1 (zh) 一种渲染格式选择方法及其相关设备
US9530237B2 (en) Interpolation circuitry and techniques for graphics processing
US20150015574A1 (en) System, method, and computer program product for optimizing a three-dimensional texture workflow
CN116563083A (zh) 渲染图像的方法和相关装置
JP7160495B2 (ja) 画像前処理方法、装置、電子機器及び記憶媒体
CN113838180A (zh) 一种渲染指令处理方法及其相关设备
KR20220164484A (ko) 섀도우 정보를 이용한 렌더링
CN112581575A (zh) 一种外视频做纹理系统
WO2022089504A1 (zh) 一种数据处理方法及相关装置
US20200380730A1 (en) Encoding positional coordinates based on multiple channel color values
CN117670641A (zh) 数据处理方法、装置、设备及存储介质
CN115779418A (zh) 图像渲染方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22874974

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022874974

Country of ref document: EP

Effective date: 20240228

NENP Non-entry into the national phase

Ref country code: DE