WO2021012257A1 - Methods and apparatus to facilitate a unified framework of post-processing for gaming - Google Patents
- Publication number
- WO2021012257A1 (application PCT/CN2019/097687)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- buffer
- image data
- image
- queue
- processing
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/363—Graphics controllers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/39—Control of the bit-mapped memory
- G09G5/393—Arrangements for updating the contents of the bit-mapped memory
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/39—Control of the bit-mapped memory
- G09G5/395—Arrangements specially adapted for transferring the contents of the bit-mapped memory to the screen
- G09G5/397—Arrangements specially adapted for transferring the contents of two or more bit-mapped memories to the screen simultaneously, e.g. for mixing or overlay
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/10—Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/12—Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/06—Use of more than one graphics processor to process data before displaying to one or more screens
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/12—Frame memory handling
Definitions
- a buffer consumer may be one or more components that retrieve populated buffers from the buffer queue, make use of the buffer data, and then return the buffer to the buffer queue.
- a buffer consumer may retrieve a buffer from the buffer queue that includes image data rendered from the gaming application (sometimes referred to as “acquiring a buffer” ) and make use of the buffer data by compositing the image data.
- the compositing of the image data may include facilitating the displaying of the image data via a GPU processing the image data.
- the buffer consumer may then return the buffer to the buffer queue (sometimes referred to as “releasing a buffer” ) .
- the example VPF 330 may include aspects of the graphics post-processing component 198 of FIG. 1. While the example gaming application graphics pipeline 300 illustrates different components, it should be appreciated that one or more of the components may be implemented by a same component and/or two or more components may be combined.
- the VPF engines handler 340 may determine which image engine 380 to provide the image data for performing the specialized image post-processing. For example, the VPF engines handler 340 may determine workloads for the different hardware components 390 and select the image engine 380 based on the respective workloads. In some examples, the VPF engines handler 340 may select the image engine 380 (and the hardware component 390) based on characteristics associated with the image data and/or the gaming application. For example, the VPF engines handler 340 may select the HQV engine 380c (and the DSP 390c) for performing the specialized image post-processing of image data associated with the gaming application 302.
- the performing of the image post-processing of the image data associated with the second frame may include the VPF engines handler 340 acquiring a populated buffer from the VPF buffer queue 336, and processing the image data via a specialized image post-processing engine, such as the HQV engine, to generate improved quality image data.
- the VPF engines handler 340 and/or the VPF buffers handler 334 may populate a buffer with the improved quality image data.
- the performing of the compositing of image data associated with the third frame may include the buffer consumer 320 acquiring a buffer including improved quality image data from the compositing buffer queue 322, compositing the improved quality image data, and releasing the buffer to the compositing buffer queue 322.
- the order of the frames displayed may be the third frame displayed first, the second frame displayed second, and the first frame displayed third.
- the apparatus may queue the second buffer to an image engine, as described in connection with the examples in FIGs. 2 and/or 3.
- the VPF buffer producer 338 may queue the free buffer 346c to the VPF engines handler 340 and/or an image engine, such as the HQV engine.
- the apparatus may acquire the second image data from the second buffer queue, as described in connection with the examples in FIGs. 2 and/or 3.
- the buffer consumer 320 may acquire the populated buffer 346b from the compositing buffer queue 322 and retrieve the second image data from the populated buffer 346b.
- the apparatus may release the second buffer to the second buffer queue, as described in connection with the examples in FIGs. 2 and/or 3.
- the buffer consumer 320 may release the second buffer 346b to the compositing buffer queue 322.
- the dequeuing of the second buffer at 402, the queueing of the second buffer at 404, the dequeuing of the first buffer at 406, the rendering of the first image data at 408, the queueing of the first buffer at 410, and the queueing of the first buffer at 412 may correspond to a first stage of the gaming application graphics pipeline 300 of FIG. 3.
- each of the three stages may be performed in parallel.
- the first stage may be operating on a first frame
- the second stage may be operating on a second frame
- the third stage may be operating on a third frame.
- the third frame may be displayed first
- the second frame may be displayed second
- the first frame may be displayed third.
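The staged timing above can be sketched in a few lines. This is a hypothetical illustration (the function and frame names are not from the disclosure): at a given snapshot, the first stage holds the first frame, the second stage the second frame, and the third stage the third frame, and frames drain from the last stage first.

```python
def display_order(frames_in_stages):
    """Given a snapshot {stage_number: frame}, frames leave the pipeline
    from the highest-numbered (compositing/display) stage first."""
    return [frames_in_stages[s] for s in sorted(frames_in_stages, reverse=True)]

# Snapshot from the text: stage 1 -> first frame, stage 2 -> second frame,
# stage 3 -> third frame; all three stages operate in parallel.
snapshot = {1: "first frame", 2: "second frame", 3: "third frame"}
print(display_order(snapshot))
# ['third frame', 'second frame', 'first frame']
```

The third frame, being furthest through the pipeline, is displayed first, matching the ordering described above.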
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- Image Processing (AREA)
- Image Generation (AREA)
Abstract
The present disclosure relates to methods and apparatus for graphics processing including facilitating a unified framework of post-processing for gaming. In some aspects, an apparatus may queue, from a first buffer queue, a first buffer with first image data associated with a first image frame to an image engine. The apparatus may also perform, by the image engine, post-processing on the first image data to generate second image data on a second buffer. The apparatus may also queue the second buffer at a second buffer queue. The apparatus may also composite, from the second buffer, the second image data associated with the first image frame.
Description
The present disclosure relates generally to processing systems and, more particularly, to one or more techniques for graphics processing.
INTRODUCTION
Computing devices often utilize a graphics processing unit (GPU) to accelerate the rendering of graphical data for display. Such computing devices may include, for example, computer workstations, mobile phones such as so-called smartphones, embedded systems, personal computers, tablet computers, and video game consoles. GPUs execute a graphics processing pipeline that includes one or more processing stages that operate together to execute graphics processing commands and output a frame. A central processing unit (CPU) may control the operation of the GPU by issuing one or more graphics processing commands to the GPU. Modern day CPUs are typically capable of concurrently executing multiple applications, each of which may need to utilize the GPU during execution.
An electronic device may execute a program to present graphics content on a display. For example, an electronic device may execute a video game application.
SUMMARY
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In an aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. In some aspects, the apparatus may be configured to queue, from a first buffer queue, a first buffer with first image data associated with a first image frame to an image engine. The apparatus may also be configured to perform, by the image engine, post-processing on the first image data to generate second image data on a second buffer. Additionally, the apparatus may be configured to queue the second buffer at a second buffer queue. Also, the apparatus may be configured to composite, from the second buffer, the second image data associated with the first image frame.
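The four operations in the preceding paragraph can be sketched as a minimal, single-threaded flow. The code below is an illustrative sketch only; the queue names, buffer layout, and the stand-in post-processing function are assumptions, not details taken from the disclosure.

```python
import queue

first_buffer_queue = queue.Queue()   # holds buffers with rendered image data
second_buffer_queue = queue.Queue()  # holds buffers with post-processed data

def image_engine_post_process(image_data):
    # Stand-in for the image engine's post-processing (e.g., enhancement).
    return [min(255, px + 1) for px in image_data]

# A first buffer with first image data associated with a first image frame.
first_buffer = {"frame": 1, "data": [10, 20, 30]}
first_buffer_queue.put(first_buffer)

# Queue the first buffer, from the first buffer queue, to the image engine.
buf = first_buffer_queue.get()

# The image engine generates second image data on a second buffer.
second_buffer = {"frame": buf["frame"],
                 "data": image_engine_post_process(buf["data"])}

# Queue the second buffer at a second buffer queue.
second_buffer_queue.put(second_buffer)

# Composite, from the second buffer, the second image data for the frame.
composited = second_buffer_queue.get()
print(composited["data"])  # [11, 21, 31]
```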
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a block diagram that illustrates an example content generation system, in accordance with one or more techniques of this disclosure.
FIG. 2 illustrates an example GPU, in accordance with one or more techniques of this disclosure.
FIG. 3 illustrates an example implementation of a gaming application graphics pipeline, in accordance with one or more techniques of this disclosure.
FIG. 4 illustrates an example flowchart of an example method, in accordance with one or more techniques of this disclosure.
Example techniques disclosed herein provide a unified framework of post-processing for gaming. In some examples, techniques disclosed herein enable gaming application data from a gaming application to be post-processed prior to the compositing of image data based on the gaming application data. For example, a visual post-processing framework (VPF) disclosed herein facilitates redirecting image data from a buffer producer to one or more post-processing image engines instead of to a buffer consumer that performs the compositing of the image data. In some examples, the post-processing image engines utilize specialized techniques for improving the image quality of the image data. In certain such examples, the specialized techniques may correspond to the type of image engine, may correspond to the type of image data (e.g., an image, graphics, video, etc. ) , and/or may correspond to the different hardware components (e.g., a CPU, a GPU, a DSP, etc. ) executing the respective image engine (s) . The example VPF may then provide the improved image data to the buffer consumer for performing the compositing.
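One way an engines handler might choose among post-processing engines, as described above, is by the current workload of each engine's hardware component. The sketch below assumes a simple lowest-load policy; the engine names, load values, and selection rule are all illustrative, not specified by the disclosure.

```python
def select_engine(engines, workloads):
    """Pick the engine whose hardware component currently has the lowest load."""
    return min(engines, key=lambda e: workloads[e["hardware"]])

engines = [
    {"name": "cpu_engine", "hardware": "CPU"},
    {"name": "gpu_engine", "hardware": "GPU"},
    {"name": "hqv_engine", "hardware": "DSP"},
]
workloads = {"CPU": 0.7, "GPU": 0.9, "DSP": 0.2}  # hypothetical utilizations

print(select_engine(engines, workloads)["name"])  # hqv_engine
```

A real handler could combine workload with the characteristics of the image data (image, graphics, or video) when selecting the engine, as the description notes.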
Various aspects of systems, apparatuses, computer program products, and methods are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of this disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of this disclosure is intended to cover any aspect of the systems, apparatuses, computer program products, and methods disclosed herein, whether implemented independently of, or combined with, other aspects of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. Any aspect disclosed herein may be embodied by one or more elements of a claim.
Although various aspects are described herein, many variations and permutations of these aspects fall within the scope of this disclosure. Although some potential benefits and advantages of aspects of this disclosure are mentioned, the scope of this disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of this disclosure are intended to be broadly applicable to different technologies, system configurations, networks, and processing protocols, some of which are illustrated by way of example in the figures and in the following description. The detailed description and drawings are merely illustrative of this disclosure rather than limiting, the scope of this disclosure being defined by the appended claims and equivalents thereof.
Several aspects are presented with reference to various apparatus and methods. These apparatus and methods are described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, and the like (collectively referred to as “elements” ) . These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors (which may also be referred to as processing units) . Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs) , general purpose GPUs (GPGPUs) , central processing units (CPUs) , application processors, digital signal processors (DSPs) , reduced instruction set computing (RISC) processors, systems-on-chip (SOC) , baseband processors, application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , programmable logic devices (PLDs) , state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software can be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The term application may refer to software. As described herein, one or more techniques may refer to an application, i.e., software, being configured to perform one or more functions. In such examples, the application may be stored on a memory, e.g., on-chip memory of a processor, system memory, or any other memory. Hardware described herein, such as a processor may be configured to execute the application. For example, the application may be described as including code that, when executed by the hardware, causes the hardware to perform one or more techniques described herein. 
As an example, the hardware may access the code from a memory and execute the code accessed from the memory to perform one or more techniques described herein. In some examples, components are identified in this disclosure. In such examples, the components may be hardware, software, or a combination thereof. The components may be separate components or sub-components of a single component.
Accordingly, in one or more examples described herein, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random access memory (RAM) , a read-only memory (ROM) , an electrically erasable programmable ROM (EEPROM) , optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
In general, this disclosure describes techniques for having a graphics processing pipeline in a single device or multiple devices, improving the rendering of graphical content, and/or reducing the load of a processing unit (e.g., any processing unit configured to perform one or more techniques described herein, such as a GPU) . For example, this disclosure describes techniques for graphics processing in any device that utilizes graphics processing. Other example benefits are described throughout this disclosure.
As used herein, instances of the term “content” may refer to “graphical content, ” “image, ” and vice versa. This is true regardless of whether the terms are being used as an adjective, noun, or other parts of speech. In some examples, as used herein, the term “graphical content” may refer to a content produced by one or more processes of a graphics processing pipeline. In some examples, as used herein, the term “graphical content” may refer to a content produced by a processing unit configured to perform graphics processing. In some examples, as used herein, the term “graphical content” may refer to a content produced by a graphics processing unit.
In some examples, as used herein, the term “display content” may refer to content generated by a processing unit configured to perform displaying processing. In some examples, as used herein, the term “display content” may refer to content generated by a display processing unit. Graphical content may be processed to become display content. For example, a graphics processing unit may output graphical content, such as a frame, to a buffer (which may be referred to as a framebuffer) . A display processing unit may read the graphical content, such as one or more frames from the buffer, and perform one or more display processing techniques thereon to generate display content. For example, a display processing unit may be configured to perform composition on one or more rendered layers to generate a frame. As another example, a display processing unit may be configured to compose, blend, or otherwise combine two or more layers together into a single frame. A display processing unit may be configured to perform scaling, e.g., upscaling or downscaling, on a frame. In some examples, a frame may refer to a layer. In other examples, a frame may refer to two or more layers that have already been blended together to form the frame, i.e., the frame includes two or more layers, and the frame that includes two or more layers may subsequently be blended.
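Composition of rendered layers into a single frame, as described in the preceding paragraph, can be illustrated with a per-pixel source-over blend. This is a minimal sketch over one color channel; the layer contents and the fixed alpha are assumptions for illustration.

```python
def blend_layers(bottom, top, alpha):
    """Source-over blend: result = alpha*top + (1-alpha)*bottom, per pixel."""
    return [round(alpha * t + (1 - alpha) * b) for b, t in zip(bottom, top)]

bottom_layer = [100, 100, 100]  # e.g., rendered game-scene pixels
top_layer = [200, 0, 50]        # e.g., UI overlay pixels
print(blend_layers(bottom_layer, top_layer, alpha=0.5))  # [150, 50, 75]
```

A display processing unit performing composition would typically also handle per-pixel alpha, multiple layers, and scaling, which are omitted here for brevity.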
FIG. 1 is a block diagram that illustrates an example content generation system 100 configured to implement one or more techniques of this disclosure. The content generation system 100 includes a device 104. The device 104 may include one or more components or circuits for performing various functions described herein. In some examples, one or more components of the device 104 may be components of an SOC. The device 104 may include one or more components configured to perform one or more techniques of this disclosure. In the example shown, the device 104 may include a processing unit 120, and a system memory 124. In some aspects, the device 104 can include a number of optional components, e.g., a communication interface 126, a transceiver 132, a receiver 128, a transmitter 130, a display processor 127, and one or more displays 131. Reference to the display 131 may refer to the one or more displays 131. For example, the display 131 may include a single display or multiple displays. The display 131 may include a first display and a second display. The first display may be a left-eye display and the second display may be a right-eye display. In some examples, the first and second display may receive different frames for presentment thereon. In other examples, the first and second display may receive the same frames for presentment thereon. In further examples, the results of the graphics processing may not be displayed on the device, e.g., the first and second display may not receive any frames for presentment thereon. Instead, the frames or graphics processing results may be transferred to another device. In some aspects, this can be referred to as split-rendering.
The processing unit 120 may include an internal memory 121. The processing unit 120 may be configured to perform graphics processing, such as in a graphics processing pipeline 107. In some examples, the device 104 may include a display processor, such as the display processor 127, to perform one or more display processing techniques on one or more frames generated by the processing unit 120 before presentment by the one or more displays 131. The display processor 127 may be configured to perform display processing. For example, the display processor 127 may be configured to perform one or more display processing techniques on one or more frames generated by the processing unit 120. The one or more displays 131 may be configured to display or otherwise present frames processed by the display processor 127. In some examples, the one or more displays 131 may include one or more of: a liquid crystal display (LCD) , a plasma display, an organic light emitting diode (OLED) display, a projection display device, an augmented reality display device, a virtual reality display device, a head-mounted display, or any other type of display device.
Memory external to the processing unit 120, such as system memory 124, may be accessible to the processing unit 120. For example, the processing unit 120 may be configured to read from and/or write to external memory, such as the system memory 124. The processing unit 120 may be communicatively coupled to the system memory 124 over a bus. In some examples, the processing unit 120 and the system memory 124 may be communicatively coupled to each other over the bus or a different connection.
The internal memory 121 or the system memory 124 may include one or more volatile or non-volatile memories or storage devices. In some examples, internal memory 121 or the system memory 124 may include RAM, SRAM, DRAM, erasable programmable ROM (EPROM) , electrically erasable programmable ROM (EEPROM) , flash memory, a magnetic data media or an optical storage media, or any other type of memory.
The internal memory 121 or the system memory 124 may be a non-transitory storage medium according to some examples. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that internal memory 121 or the system memory 124 is non-movable or that its contents are static. As one example, the system memory 124 may be removed from the device 104 and moved to another device. As another example, the system memory 124 may not be removable from the device 104.
The processing unit 120 may be a central processing unit (CPU) , a graphics processing unit (GPU) , a general purpose GPU (GPGPU) , or any other processing unit that may be configured to perform graphics processing. In some examples, the processing unit 120 may be integrated into a motherboard of the device 104. In some examples, the processing unit 120 may be present on a graphics card that is installed in a port in a motherboard of the device 104, or may be otherwise incorporated within a peripheral device configured to interoperate with the device 104. The processing unit 120 may include one or more processors, such as one or more microprocessors, GPUs, application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , arithmetic logic units (ALUs) , digital signal processors (DSPs) , discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combinations thereof. If the techniques are implemented partially in software, the processing unit 120 may store instructions for the software in a suitable, non-transitory computer-readable storage medium, e.g., internal memory 121, and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., may be considered to be one or more processors. For example, an example processing unit 120 may include one or more CPUs, one or more GPUs, and one or more DSPs.
In some aspects, the content generation system 100 can include an optional communication interface 126. The communication interface 126 may include a receiver 128 and a transmitter 130. The receiver 128 may be configured to perform any receiving function described herein with respect to the device 104. Additionally, the receiver 128 may be configured to receive information, e.g., eye or head position information, rendering commands, or location information, from another device. The transmitter 130 may be configured to perform any transmitting function described herein with respect to the device 104. For example, the transmitter 130 may be configured to transmit information to another device, which may include a request for content. The receiver 128 and the transmitter 130 may be combined into a transceiver 132. In such examples, the transceiver 132 may be configured to perform any receiving function and/or transmitting function described herein with respect to the device 104.
Referring again to FIG. 1, in certain aspects, the graphics processing pipeline 107 may include a graphics post-processing component 198 configured to facilitate providing a unified framework of post-processing for gaming. For example, the graphics post-processing component 198 may be configured to queue, from a first buffer queue, a first buffer with first image data associated with a first image frame to an image engine. The graphics post-processing component 198 may also be configured to perform, by the image engine, post-processing on the first image data to generate second image data on a second buffer. Additionally, the graphics post-processing component 198 may be configured to queue the second buffer at a second buffer queue. Also, the graphics post-processing component 198 may be configured to composite, from the second buffer, the second image data associated with the first image frame.
As used herein, the phrase “image engine, ” and variants thereof, refers to techniques for processing an image. In some examples, an image engine may be implemented by particular hardware, such as a CPU, a DSP, a GPU, etc. In some examples, aspects of the image engine may be implemented by different (or a combination of) processing units. In some examples, an image engine may utilize predetermined techniques for processing an image. In some examples, an image engine may dynamically determine which techniques to utilize for processing an image. An example image engine that may be utilized by techniques disclosed herein includes the Hollywood Quality Video (HQV) engine, which is provided by Qualcomm Technologies, Inc. The example HQV engine facilitates enhancing image quality utilizing adaptive image enhancement and noise reduction. The example HQV engine may operate on (or “run on” ) a DSP and provide relatively good performance and power. As used herein, the phrase “run on, ” and variants thereof, generally refers to being executed by. For example, the HQV engine running on the DSP indicates that the DSP executes the image processing techniques utilized by the HQV engine. It should be appreciated that while the disclosure generally refers to one engine running on one processing unit, in other examples, any suitable quantity of processing units may execute an image engine. For example, first aspects of an image engine may be executed by a first hardware component (e.g., a CPU) and second aspects of the image engine may be executed by a second hardware component (e.g., a GPU) . In certain such examples, the first and second aspects of the image engine may be executed in parallel, in series, or a combination thereof. In some examples, different hardware components may execute the same image engine. 
For example, a first hardware component (e.g., a CPU) may execute a first instance of an image engine and a second hardware component (e.g., a GPU) may execute a second instance of the image engine.
As described herein, a device, such as the device 104, may refer to any device, apparatus, or system configured to perform one or more techniques described herein. For example, a device may be a server, a base station, user equipment, a client device, a station, an access point, a computer, e.g., a personal computer, a desktop computer, a laptop computer, a tablet computer, a computer workstation, or a mainframe computer, an end product, an apparatus, a phone, a smart phone, a server, a video game platform or console, a handheld device, e.g., a portable video game device or a personal digital assistant (PDA) , a wearable computing device, e.g., a smart watch, an augmented reality device, or a virtual reality device, a non-wearable device, a display or display device, a television, a television set-top box, an intermediate network device, a digital media player, a video streaming device, a content streaming device, an in-car computer, any mobile device, any device configured to generate graphical content, or any device configured to perform one or more techniques described herein. Processes herein may be described as performed by a particular component (e.g., a GPU) , but, in further embodiments, can be performed using other components (e.g., a CPU) , consistent with disclosed embodiments.
GPUs can process multiple types of data or data packets in a GPU pipeline. For instance, in some aspects, a GPU can process two types of data or data packets, e.g., context register packets and draw call data. A context register packet can be a set of global state information, e.g., information regarding a global register, shading program, or constant data, which can regulate how a graphics context will be processed. For example, context register packets can include information regarding a color format. In some aspects of context register packets, there can be a bit that indicates which workload belongs to a context register. Also, there can be multiple functions or programming running at the same time and/or in parallel. For example, functions or programming can describe a certain operation, e.g., the color mode or color format. Accordingly, a context register can define multiple states of a GPU.
Context states can be utilized to determine how an individual processing unit functions, e.g., a vertex fetcher (VFD) , a vertex shader (VS) , a shader processor, or a geometry processor, and/or in what mode the processing unit functions. In order to do so, GPUs can use context registers and programming data. In some aspects, a GPU can generate a workload, e.g., a vertex or pixel workload, in the pipeline based on the context register definition of a mode or state. Certain processing units, e.g., a VFD, can use these states to determine certain functions, e.g., how a vertex is assembled. As these modes or states can change, GPUs can change the corresponding context. Additionally, the workload that corresponds to the mode or state may follow the changing mode or state (e.g., the workload may be received after the mode or state is changed) .
FIG. 2 illustrates an example GPU 200 in accordance with one or more techniques of this disclosure. As shown in FIG. 2, GPU 200 includes command processor (CP) 210, draw call packets 212, VFD 220, VS 222, vertex cache (VPC) 224, triangle setup engine (TSE) 226, rasterizer (RAS) 228, Z process engine (ZPE) 230, pixel interpolator (PI) 232, fragment shader (FS) 234, render backend (RB) 236, L2 cache (UCHE) 238, and system memory 240. Although FIG. 2 displays that GPU 200 includes processing units 210-238, GPU 200 can include a number of additional or fewer processing units. Additionally, processing units 210-238 are merely an example and any combination or order of processing units can be used by GPUs according to the present disclosure. GPU 200 also includes command buffer 250, context register packets 260, and context states 261.
As shown in FIG. 2, a GPU can utilize a CP, e.g., CP 210, or hardware accelerator to parse a command buffer into context register packets, e.g., context register packets 260, and/or draw call data packets, e.g., draw call packets 212. The CP 210 can then send the context register packets 260 or draw call packets 212 through separate paths to the processing units or blocks in the GPU. Further, the command buffer 250 can alternate different states of context registers and draw calls. For example, a command buffer can be structured as follows: context register of context N, draw call (s) of context N, context register of context N+1, and draw call (s) of context N+1.
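The alternating command buffer structure described above can be illustrated with a minimal sketch. This is not GPU code; it assumes a simplified list-of-dictionaries representation of packets, and the function name is hypothetical.

```python
# Hypothetical sketch: parsing a command buffer that alternates context
# register packets and draw call packets, as in the structure described
# above (context register of context N, draw call(s) of context N, ...).
def parse_command_buffer(command_buffer):
    """Split a command buffer into (context_register, draw_calls) pairs."""
    contexts = []
    current_register = None
    current_draws = []
    for packet in command_buffer:
        if packet["type"] == "context_register":
            if current_register is not None:
                contexts.append((current_register, current_draws))
            current_register = packet
            current_draws = []
        else:  # draw call packet belonging to the current context
            current_draws.append(packet)
    if current_register is not None:
        contexts.append((current_register, current_draws))
    return contexts
```

In this simplified model, each context register packet opens a new context, and subsequent draw call packets are routed to it, mirroring the separate paths described for the CP 210.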
As display resolutions continue to increase, so does the demand for high quality and high resolution digital images. Furthermore, as display resolutions on mobile devices increase, the demand for higher image quality in gaming on mobile devices also continues to increase.
However, systems (e.g., mobile device operating systems) may restrict which processors and/or engines an application may access during operation. For example, gaming applications may use an image post-processing engine, such as OpenGL to improve image quality of the game. However, such image post-processing engines may execute only or primarily using a GPU and, thus, increase the workload of the GPU, which may negatively impact the performance of the gaming application and/or reduce the processing power available to the gaming application.
In some examples, the system (e.g., mobile device system) may include and/or access one or more image post-processing engines that can offload the workload from the CPU and/or GPU. For example, the system may include and/or access image post-processing engines that are specialized for processing (e.g., are designed for the efficient processing of) multimedia, such as images, graphics, video, etc., and/or the image post-processing engines may be specialized for different hardware (e.g., are designed for operating on a CPU, a GPU, a DSP, etc. ) . However, the system may restrict access to these specialized post-processing engines to certain applications via distinct frameworks.
Some traditional gaming application graphics pipelines may include a buffer queue that enables passing data from a buffer producer to a buffer consumer. In certain such examples, the buffer queue may include a producer interface to facilitate communicating with the buffer producer and a consumer interface to facilitate communicating with the buffer consumer. A buffer producer may be one or more components that request a free buffer from the buffer queue, populate the buffer with data, and return the populated buffer to the buffer queue. For example, a buffer producer may request a free buffer from the buffer queue and specify one or more parameters associated with the free buffer (sometimes referred to as “dequeuing a buffer” ) . The buffer producer may then populate the free buffer with data including image data rendered from the gaming application, and return the populated buffer to the buffer queue (sometimes referred to as “queueing a buffer” ) .
A buffer consumer may be one or more components that retrieve populated buffers from the buffer queue, make use of the buffer data, and then return the buffer to the buffer queue. For example, a buffer consumer may retrieve a buffer from the buffer queue that includes image data rendered from the gaming application (sometimes referred to as “acquiring a buffer” ) and make use of the buffer data by compositing the image data. In some examples, the compositing of the image data may include facilitating the displaying of the image data via a GPU processing the image data. The buffer consumer may then return the buffer to the buffer queue (sometimes referred to as “releasing a buffer” ) .
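The dequeue/queue (producer side) and acquire/release (consumer side) operations described above can be sketched as a minimal queue model. The class and method names below are illustrative assumptions and do not correspond to any particular system API.

```python
from collections import deque

# Hypothetical sketch of the buffer queue protocol described above:
# a producer dequeues a free buffer, populates it, and queues it back;
# a consumer acquires a populated buffer, uses it, and releases it.
class BufferQueue:
    def __init__(self, num_buffers):
        self._free = deque({"data": None} for _ in range(num_buffers))
        self._queued = deque()

    # --- producer interface ---
    def dequeue_buffer(self):
        return self._free.popleft()       # hand a free buffer to the producer

    def queue_buffer(self, buf):
        self._queued.append(buf)          # populated buffer awaits the consumer

    # --- consumer interface ---
    def acquire_buffer(self):
        return self._queued.popleft()     # oldest populated buffer

    def release_buffer(self, buf):
        buf["data"] = None                # buffer returns to the free pool
        self._free.append(buf)
```

A single buffer cycles through this model as follows: the producer dequeues it, fills it with rendered image data, and queues it; the consumer later acquires it, composites its contents, and releases it back to the free pool.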
Example techniques disclosed herein provide a unified framework of post-processing for gaming. In some examples, techniques disclosed herein enable a gaming application to access one or more specialized post-processing engines to efficiently perform post-processing on image data associated with gaming applications. For example, disclosed techniques may modify the gaming application graphics pipeline for displaying gaming content by including a visual post-processing framework (VPF) that facilitates access to the one or more specialized post-processing engines by the gaming application. In certain such examples, image data associated with the gaming application may be processed using the one or more specialized post-processing engines that enable displaying image data having relatively improved image quality.
FIG. 3 illustrates an example implementation of a gaming application graphics pipeline 300. The example gaming application graphics pipeline 300 facilitates the displaying of image data corresponding to a gaming application 302. For example, the gaming application 302 may produce data 304 (e.g., via a CPU) for displaying via a display during gameplay. The example gaming application graphics pipeline 300 may receive the gaming application data 304, render image data based on the gaming application data 304 (e.g., via the CPU) , and then output the rendered image data via a display (e.g., via a GPU and/or a DSP) , such as the example display (s) 131 of FIG. 1. The example gaming application graphics pipeline 300 of FIG. 3 includes an example buffer producer 310, an example buffer consumer 320, and an example VPF 330. The example VPF 330 may include aspects of the graphics post-processing component 198 of FIG. 1. While the example gaming application graphics pipeline 300 illustrates different components, it should be appreciated that one or more of the components may be implemented by a same component and/or two or more components may be combined.
The example buffer producer 310 produces buffers that may be used by the example buffer consumer 320. For example, the buffer producer 310 may use the gaming application data 304 to render image data (e.g., via a GPU) and populate a buffer. In some examples, the buffer producer 310 may be implemented by a surface buffer producer and/or via a CPU.
The example buffer consumer 320 may, at some later time, retrieve the populated buffer and process the populated buffer for display (e.g., via a GPU and/or a DSP) . In the illustrated example of FIG. 3, the buffer consumer 320 includes a compositing buffer queue 322 to manage buffers for compositing. For example, the buffer consumer 320 may acquire a buffer from the compositing buffer queue 322, perform compositing on the image data of the acquired buffer (e.g., via a GPU) , and then release the buffer to the compositing buffer queue 322. The composited image data may then be displayed via a display, such as the example display (s) 131 of FIG. 1. In some examples, the buffer consumer 320 may be implemented by a SurfaceFlinger buffer consumer or any other composer for compositing buffers and sending buffers to the display.
The example VPF 330 facilitates applying one or more specialized post-processing engines to image data corresponding to a gaming application. In the illustrated example, the VPF 330 facilitates performing the specialized post-processing on the image data prior to the compositing of the image data by the buffer consumer 320. By performing the post-processing of the image data prior to the compositing of the image data, the VPF 330 enables specialized image post-processing engines to improve the image quality of the image data, which then results in improved image quality of the image data displayed via the display, such as the example display (s) 131. In the illustrated example of FIG. 3, the VPF 330 includes a VPF tunneling handler 332, a VPF buffers handler 334, and a VPF engines handler 340. It should be appreciated that aspects of the VPF 330 may be implemented by one or more hardware components, such as one or more CPUs, one or more GPUs, one or more DSPs, etc.
The example VPF 330 of FIG. 3 includes the example VPF tunneling handler 332 to facilitate access to the gaming application data 304. As described above, in some examples, the system may restrict access between different applications (or services) and processors or engines. For example, the system may restrict the data 304 generated by the gaming application 302 from being processed by the specialized image post-processing engines. The example VPF tunneling handler 332 of FIG. 3 interfaces with the buffer producer 310 to access the gaming application data 304 and to enable the VPF 330 to perform specialized post-processing on the gaming application data 304 to improve the image quality corresponding to the gaming application data 304.
For example, when the buffer producer 310 is ready to render image data, the VPF tunneling handler 332 may request a free buffer 342a from the VPF buffers handler 334. In some examples, the VPF tunneling handler 332 may cause the buffer producer 310 to request a buffer. In the illustrated example, the VPF tunneling handler 332 dequeues the buffer 342a from the VPF buffers handler 334. The buffer producer 310 may then render image data based on the gaming application data 304 and populate the received buffer with the image data. The VPF tunneling handler 332 may then return a populated buffer 342b to the VPF buffers handler 334.
In this manner, the VPF tunneling handler 332 may “hijack” the image data based on the gaming application data 304 from being sent directly to the buffer consumer 320 (and/or the compositing buffer queue 322) . Instead, the example VPF tunneling handler 332 directs the image data based on the gaming application data 304 to the VPF buffers handler 334 and the VPF engines handler 340 for post-processing of the image data to improve the image quality of the image data prior to the compositing of the image data.
The example VPF 330 of FIG. 3 includes the VPF buffers handler 334 to manage one or more buffers associated with the rendering of image data, the post-processing of the image data, and the compositing of the image data. For example, the VPF buffers handler 334 facilitates exchanging buffers with the VPF tunneling handler 332 for rendering of the image data. The example VPF buffers handler 334 facilitates exchanging buffers with the VPF engines handler 340 for performing specialized image post-processing on the image data. The example VPF buffers handler 334 facilitates exchanging buffers with the buffer consumer 320 for performing compositing of the image data. In the illustrated example of FIG. 3, the VPF buffers handler 334 includes a VPF buffer queue 336 and a VPF buffer producer 338.
The example VPF buffers handler 334 includes the VPF buffer queue 336 to manage the buffer exchange with the VPF tunneling handler 332 and the VPF engines handler 340. For example, the VPF buffer queue 336 may receive requests for available (or free) buffers from the VPF tunneling handler 332 and may maintain populated buffers that are returned by the VPF tunneling handler 332. For example, when requested, the VPF buffer queue 336 may provide the free buffer 342a and, at a later time, receive the populated buffer 342b from the VPF tunneling handler 332.
The example VPF buffer queue 336 also manages a buffer exchange with the VPF engines handler 340. For example, the VPF buffer queue 336 may provide populated buffers to the VPF engines handler 340 for performing the specialized image post-processing. In the illustrated example, when requested, the VPF buffer queue 336 may provide a buffer 342c to the VPF engines handler 340 and, at a later time, receive a buffer 342d from the VPF engines handler 340.
The example VPF 330 of FIG. 3 includes the VPF engines handler 340 to perform post-processing on image data to improve the image quality of the image data prior to the compositing of the image data. In some examples, the VPF engines handler 340 may provide an interface to integrate different image engines 380 that may run on different hardware components 390. For example, in the illustrated example of FIG. 3, a first image engine 380a runs on a CPU 390a, a second image engine 380b runs on a GPU 390b, and a third image engine (e.g., an HQV engine) 380c runs on a DSP 390c, etc. However, it should be appreciated that in other examples, additional or alternative image engines may run on one or more hardware components.
In the illustrated example, the VPF engines handler 340 integrates at least the HQV engine 380c that facilitates enhancing image quality via adaptive image enhancement and noise reduction. In examples in which the gaming application graphics pipeline 300 does not include the VPF 330, the image engines 380 may be accessed by the buffer consumer 320 during compositing. However, as disclosed herein, the VPF 330 enables the gaming application graphics pipeline 300 to access specialized image post-processing engines, such as the HQV engine 380c prior to the buffer consumer 320 performing the compositing.
In the illustrated example of FIG. 3, the VPF engines handler 340 accesses a first buffer populated with image data, performs specialized image post-processing on the image data to improve the image quality of the image data, and provides the improved image data to the VPF buffer producer 338. For example, the VPF engines handler 340 may acquire the populated buffer 342c from the VPF buffer queue 336, perform image post-processing on the image data of the populated buffer 342c via the HQV engine 380c, and then provide the improved image data to the VPF buffer producer 338.
In some examples, the VPF engines handler 340 may determine which image engine 380 to provide the image data for performing the specialized image post-processing. For example, the VPF engines handler 340 may determine workloads for the different hardware components 390 and select the image engine 380 based on the respective workloads. In some examples, the VPF engines handler 340 may select the image engine 380 (and the hardware component 390) based on characteristics associated with the image data and/or the gaming application. For example, the VPF engines handler 340 may select the HQV engine 380c (and the DSP 390c) for performing the specialized image post-processing of image data associated with the gaming application 302. In some examples, the VPF engines handler 340 may select the image engine 380 (and the hardware component 390) based on one or more characteristics associated with the image data. For example, the first image engine 380a and the CPU 390a may perform post-processing of image data with shadows relatively better than the second image engine 380b and the GPU 390b, which may perform post-processing of image data with two-dimensional aspects relatively better than the first image engine 380a and the CPU 390a. In certain such examples, the VPF engines handler 340 may select the particular image engine 380 and the corresponding hardware component 390 based on the characteristics of the image data.
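One way the workload-based engine selection described above might be modeled is a sketch that picks the engine whose hardware component reports the lowest current workload. The engine list, the workload metric (a utilization fraction), and the function name are all illustrative assumptions, not part of the disclosed framework.

```python
# Hypothetical sketch of selecting an image engine based on the workloads
# of the hardware components, in the spirit of the criteria described above.
def select_engine(engines, workloads):
    """Pick the engine whose hardware component currently has the
    lowest workload (e.g., a utilization fraction between 0 and 1)."""
    return min(engines, key=lambda e: workloads[e["hardware"]])

# Illustrative engine/hardware pairings modeled after FIG. 3.
engines = [
    {"name": "engine_cpu", "hardware": "cpu"},
    {"name": "engine_gpu", "hardware": "gpu"},
    {"name": "engine_dsp", "hardware": "dsp"},
]
workloads = {"cpu": 0.7, "gpu": 0.9, "dsp": 0.2}
```

Under these assumed workloads, the DSP-backed engine would be selected; a fuller model could also weigh characteristics of the image data, as the text notes.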
It should be appreciated that in some examples, the VPF engines handler 340 may determine that performing post-processing on particular image data may not be beneficial. In certain such examples, the “improved” image data provided by the VPF engines handler 340 to the VPF buffer producer 338 may be the same image data that was acquired from the populated buffer 342c from the VPF buffer queue 336.
The example VPF buffers handler 334 includes the VPF buffer producer 338 to manage the buffer exchange with the VPF engines handler 340 and the buffer consumer 320. For example, when the VPF engines handler 340 generates improved image quality image data, the VPF buffer producer 338 may request a free buffer 346a from the compositing buffer queue 322. In the illustrated example, the VPF buffer producer 338 dequeues the buffer 346a from the compositing buffer queue 322. The VPF buffer producer 338 may then populate the received buffer 346a with the improved image quality image data and return a populated buffer 346b to the compositing buffer queue 322. In some examples, the VPF buffer producer 338 may provide the free buffer 346c to the VPF engines handler 340 for populating and then receive, at a later time, the populated buffer 346d from the VPF engines handler 340.
As described above, the buffer consumer 320 makes use of populated buffers. In the illustrated example of FIG. 3, the buffer consumer 320 may acquire a buffer (e.g., the populated buffer 346b) from the compositing buffer queue 322, perform compositing on the image data of the acquired buffer, and then release the buffer to the compositing buffer queue 322 as a free buffer (e.g., the free buffer 346a) . The composited image data may then be displayed via a display, such as the example display (s) 131 of FIG. 1. In some examples, the buffer consumer 320 may be implemented by an OpenGL post-processing engine. However, in other examples, aspects of the buffer consumer 320 may be implemented by additional or alternative compositing engines.
It should be appreciated from the above that the disclosed example techniques enable image data to be post-processed to improve the quality of the image data prior to the compositing of the image data. While in some examples, the compositing of the image data may include improving the image quality of the image data, such improving of the image quality is typically performed by the GPU, thereby increasing the workload of the GPU. However, by utilizing the techniques disclosed herein, the example VPF 330 enables performing the post-processing of the image data to improve the image quality prior to the compositing, thereby conserving the resources used by the GPU for the performing of the post-processing of the image data. For example, the example VPF 330 redirects image data to the VPF engines handler 340, which interfaces with one or more image engines 380 that may provide specialized techniques for improving the image quality of the data. Thus, the disclosed techniques enable providing image data with improved image quality to the buffer consumer 320 for compositing.
It should be appreciated that in the illustrated example, the buffers 342 exchanged between the VPF tunneling handler 332, the VPF buffers handler 334, and the VPF engines handler 340 represent buffers from a first buffer pool at different points in time. For example, the free buffer 342a represents the buffer 342 being dequeued from the VPF buffers handler 334 by the VPF tunneling handler 332. The example populated buffer 342b represents the buffer 342 after the VPF tunneling handler 332 populates the buffer 342 with image data and being queued from the VPF tunneling handler 332 to the VPF buffers handler 334. The example populated buffer 342c represents the buffer 342 being acquired by the VPF engines handler 340 for post-processing. The example buffer 342d represents the buffer 342 being released by the VPF engines handler 340 to the VPF buffer queue 336 after performing the post-processing.
It should be appreciated that in the illustrated example, the buffers 346 exchanged between the VPF buffers handler 334, the VPF engines handler 340, and the compositing buffer queue 322 /the buffer consumer 320 represent buffers from a second buffer pool at different points in time. For example, the free buffer 346a represents the buffer 346 being dequeued from the compositing buffer queue 322 by the VPF buffer producer 338. The example populated buffer 346b represents the buffer 346 after the VPF buffer producer 338 populates the buffer 346 with the improved image quality image data provided by the VPF engines handler 340 and being queued from the VPF buffer producer 338 to the compositing buffer queue 322. The example populated buffer 346b may then be acquired by the buffer consumer 320 from the compositing buffer queue 322 for compositing, and may be subsequently released by the buffer consumer 320 to the compositing buffer queue 322 as a free buffer.
Furthermore, it should be appreciated that in a single frame example of the gaming application graphics pipeline 300 in which image data associated with a single frame is processed serially by the gaming application graphics pipeline 300, the image data associated with the first buffer pool and the image data associated with the second buffer pool may correspond to the same image content, but the image quality of the image data associated with the second buffer pool may be relatively better than that of the image data associated with the first buffer pool.
It should be appreciated that as the example gaming application graphics pipeline 300 includes two buffer pools, the corresponding image data may be processed in parallel. For example, at any particular point in time, the VPF tunneling handler 332 may be performing rendering of image data associated with a first frame, the VPF engines handler 340 may be performing image post-processing of image data associated with a second frame to generate improved quality image data, and the buffer consumer 320 may be performing compositing of image data associated with a third frame. For example, rendering of image data associated with a first frame may include the VPF tunneling handler 332 dequeuing a free buffer from the VPF buffer queue 336, populating the free buffer with rendered image data associated with the gaming application data 304, and queuing the buffer at the VPF buffer queue 336. In some examples, the performing of the image post-processing of the image data associated with the second frame may include the VPF engines handler 340 acquiring a populated buffer from the VPF buffer queue 336, and processing the image data via a specialized image post-processing engine, such as the HQV engine, to generate improved quality image data. In some examples, the VPF engines handler 340 and/or the VPF buffers handler 334 may populate a buffer with the improved quality image data. In some examples, the performing of the compositing of image data associated with the third frame may include the buffer consumer 320 acquiring a buffer including improved quality image data from the compositing buffer queue 322, compositing the improved quality image data, and releasing the buffer to the compositing buffer queue 322. In certain such examples, the order of the frames displayed may be the third frame displayed first, the second frame displayed second, and the first frame displayed third.
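The three-stage parallelism described above can be sketched as a simple pipeline simulation, assuming one frame advances one stage per cycle. The stage model and function name are illustrative simplifications, not an implementation of the disclosed pipeline.

```python
# Hypothetical sketch of the render -> post-process -> composite stages
# operating on different frames in the same cycle, as described above.
def run_pipeline(frames):
    """Record which frame occupies each stage per cycle; None means idle."""
    timeline = []
    n = len(frames)
    for cycle in range(n + 2):  # two extra cycles drain the pipeline
        rendering = frames[cycle] if cycle < n else None
        post_processing = frames[cycle - 1] if 0 <= cycle - 1 < n else None
        compositing = frames[cycle - 2] if 0 <= cycle - 2 < n else None
        timeline.append((rendering, post_processing, compositing))
    return timeline
```

In the steady state (cycle 2 for a three-frame input), all three stages are busy with different frames, which is the behavior the two buffer pools enable; frames leave the compositing stage, and thus reach the display, in their original order.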
FIG. 4 illustrates an example flowchart 400 of an example method in accordance with one or more techniques of this disclosure. The method may be performed by an apparatus for graphics processing. At 402, the apparatus may dequeue a second buffer from a second buffer queue, as described in connection with the examples in FIGs. 2 and/or 3. For example, the VPF buffer producer 338 may dequeue the free buffer 346a from the compositing buffer queue 322.
At 404, the apparatus may queue the second buffer to an image engine, as described in connection with the examples in FIGs. 2 and/or 3. For example, the VPF buffer producer 338 may queue the free buffer 346c to the VPF engines handler 340 and/or an image engine, such as the HQV engine.
At 406, the apparatus may dequeue a first buffer from a first buffer queue, as described in connection with the examples in FIGs. 2 and/or 3. For example, the VPF tunneling handler 332 may request the free buffer 342a from the VPF buffer queue 336.
At 408, the apparatus may render first image data on the first buffer, as described in connection with the examples in FIGs. 2 and/or 3. For example, the VPF tunneling handler 332 may render gaming application data 304 from the gaming application 302 to the buffer 342a. In some examples, the buffer producer 310 may render the first image data to the buffer 342a via the VPF tunneling handler 332.
At 410, the apparatus may queue the first buffer with the first image data to the first buffer queue, as described in connection with the examples in FIGs. 2 and/or 3. For example, the VPF tunneling handler 332 may queue the populated buffer 342b to the VPF buffer queue 336.
At 412, the apparatus may queue the first buffer with the first image data to the image engine as an input buffer, as described in connection with the examples in FIGs. 2 and/or 3. For example, the VPF engines handler 340 may acquire the populated buffer 342c from the VPF buffer queue 336.
At 414, the apparatus may perform post-processing on the first image data to generate second image data, as described in connection with the examples in FIGs. 2 and/or 3. For example, the VPF engines handler 340 may provide the image data of the populated buffer 342c to the image engine, such as the HQV engine, to perform specialized image post-processing to improve the image quality of the image data. It should be appreciated that the image content of the first image data and the second image data may be the same, but that the image quality of the second image data may be relatively improved compared to the image quality of the first image data. In the illustrated example, the VPF engines handler 340 may store the second image data using the second buffer.
At 416, the apparatus may release the first buffer to the first buffer queue, as described in connection with the examples in FIGs. 2 and/or 3. For example, the VPF engines handler 340 may return the buffer 342d to the VPF buffer queue 336.
At 418, the apparatus may dequeue the second buffer from the image engine, as described in connection with the examples in FIGs. 2 and/or 3. For example, the VPF engines handler 340 may return the buffer 346d populated with the second image data to the VPF buffer producer 338, and the VPF buffer producer 338 may dequeue the second buffer 346d populated with the second image data by the VPF engines handler 340.
At 420, the apparatus may queue the second buffer at the second buffer queue, as described in connection with the examples in FIGs. 2 and/or 3. For example, the VPF buffer producer 338 may provide the populated buffer 346b to the compositing buffer queue 322.
At 422, the apparatus may acquire the second image data from the second buffer queue, as described in connection with the examples in FIGs. 2 and/or 3. For example, the buffer consumer 320 may acquire the populated buffer 346b from the compositing buffer queue 322 and retrieve the second image data from the populated buffer 346b.
At 424, the apparatus may composite the second image data, as described in connection with the examples in FIGs. 2 and/or 3. For example, the buffer consumer 320 may perform the compositing of the second image data. In some examples, the buffer consumer 320 may facilitate the displaying of the second image data via a display, such as the example display (s) 131 of FIG. 1.
At 426, the apparatus may release the second buffer to the second buffer queue, as described in connection with the examples in FIGs. 2 and/or 3. For example, the buffer consumer 320 may release the second buffer 346b to the compositing buffer queue 322.
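Taken together, the steps at 402 through 426 amount to a single pass of one frame through the two buffer queues. The following sketch compresses those steps into one function; it is a simplified illustration under the assumptions that buffers are plain dictionaries and that the post-processing engine can be modeled as a callable, not a description of the actual VPF components.

```python
from collections import deque


def process_frame(vpf_queue, compositing_queue, game_data, post_process):
    """Sketch of flowchart 400: render, post-process, and composite one frame.

    vpf_queue / compositing_queue are deques of free buffers (dicts)."""
    # 402/404: dequeue a second buffer from the compositing (second) queue
    # and hand it to the image engine as the output buffer.
    second_buf = compositing_queue.popleft()
    # 406-412: dequeue a first buffer, render into it, and queue it as the
    # image engine's input buffer.
    first_buf = vpf_queue.popleft()
    first_buf["data"] = game_data            # 408: render first image data
    # 414: post-processing generates the second image data (same content,
    # improved quality) into the second buffer.
    second_buf["data"] = post_process(first_buf["data"])
    # 416: release the first buffer back to the first queue.
    first_buf["data"] = None
    vpf_queue.append(first_buf)
    # 418-424: dequeue the second buffer from the engine, acquire the second
    # image data, and composite it.
    composited = second_buf["data"]
    # 426: release the second buffer back to the second queue.
    second_buf["data"] = None
    compositing_queue.append(second_buf)
    return composited


# Example run: the "post-processing" step is a placeholder enhancement.
vpf = deque({"data": None} for _ in range(3))
comp = deque({"data": None} for _ in range(3))
result = process_frame(vpf, comp, "frame pixels", lambda d: d + " (enhanced)")
```

Note that both buffers end up back in their respective free pools after the frame is composited, so the next frame can reuse them immediately.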
It should be appreciated that in some examples, one or more aspects of the method 400 may be performed in parallel (e.g., at or nearly at the same time) . For example, in some examples, the dequeuing of the second buffer at 402 and the queueing of the second buffer at 404 may be performed in parallel to the dequeuing of the first buffer at 406, the rendering of the first image data at 408, the queueing of the first buffer at 410, and the queueing of the first buffer at 412. In some examples, the releasing of the first buffer at 416 may be performed in parallel to the dequeuing of the second buffer at 418, the queueing of the second buffer at 420, the acquiring of the second image data at 422, the compositing of the second image data at 424, and the releasing of the buffer at 426.
It should be appreciated that in some examples, the dequeuing of the second buffer at 402, the queueing of the second buffer at 404, the dequeuing of the first buffer at 406, the rendering of the first image data at 408, the queueing of the first buffer at 410, and the queueing of the first buffer at 412 may correspond to a first stage of the gaming application graphics pipeline 300 of FIG. 3, that the performing of the post-processing at 414 may correspond to a second stage of the gaming application graphics pipeline 300, and that the releasing of the first buffer at 416, the dequeuing of the second buffer at 418, the queueing of the second buffer at 420, the acquiring of the second image data at 422, the compositing of the second image data at 424, and the releasing of the buffer at 426 may correspond to a third stage of the gaming application graphics pipeline 300. In certain such examples, each of the three stages may be performed in parallel. For example, the first stage may be operating on a first frame, the second stage may be operating on a second frame, and the third stage may be operating on a third frame. In certain such examples, in terms of a time stamp, the third frame may be displayed first, the second frame may be displayed second, and the first frame may be displayed third.
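The three-stage overlap described above can be visualized with a small scheduling sketch: at each time step, the render stage works on the newest frame while the post-processing and compositing stages work on progressively older frames, so the oldest frame in flight reaches the display first. This is an illustrative model only; the stage and frame numbering below are assumptions, not part of the disclosed implementation.

```python
def pipeline_schedule(num_frames, num_stages=3):
    """Sketch of the three-stage pipeline: at tick t, stage 1 renders
    frame t, stage 2 post-processes frame t-1, and stage 3 composites
    frame t-2, so up to three frames are in flight simultaneously."""
    schedule = []
    for tick in range(num_frames + num_stages - 1):
        # Collect the (stage, frame) pairs active at this tick.
        active = [(stage + 1, tick - stage)
                  for stage in range(num_stages)
                  if 0 <= tick - stage < num_frames]
        schedule.append(active)
    return schedule


# With three frames, tick 2 has all three stages busy at once, and the
# oldest frame (frame 0) is the first to reach the compositing stage.
schedule = pipeline_schedule(3)
```

The schedule makes the display ordering concrete: the frame that entered the pipeline earliest exits the compositing stage first, matching the time-stamp ordering described above.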
In one configuration, a method or apparatus for graphics processing is provided. The apparatus may be a GPU or some other processor that can perform graphics processing. In one aspect, the apparatus may be the processing unit 120 within the device 104, or may be some other hardware within device 104 or another device. The apparatus may include means for queueing, from a first buffer queue, a first buffer with first image data associated with a first image frame to an image engine. The apparatus may include means for performing, by the image engine, post-processing on the first image data to generate second image data on a second buffer. The apparatus may include means for queueing the second buffer at a second buffer queue. The apparatus may include means for compositing, from the second buffer, the second image data associated with the first image frame. The apparatus may include means for dequeuing the second buffer from the second buffer queue. The apparatus may include means for queueing the second buffer to the image engine before the performing of the post-processing on the first image data. In some examples, the means for queueing the first buffer with first image data to the image engine is performed in parallel to the means for dequeuing the second buffer from the second buffer queue and the means for queueing the second buffer to the image engine. The apparatus may include means for dequeuing the first buffer from the first buffer queue. The apparatus may include means for rendering the first image data on the first buffer. The apparatus may include means for queueing the first buffer with the first image data to the first buffer queue before the queueing of the first buffer with the first image data to the image engine. The apparatus may include means for dequeuing the second buffer from the second buffer queue. 
The apparatus may include means for queueing the second buffer to the image engine before the performing of the post-processing on the first image data. In some examples, the means for dequeuing the first buffer from the first buffer queue, the means for rendering the first image data on the first buffer, the means for queueing the first buffer with the first image data to the first buffer queue, and the means for queueing the first buffer with first image data to the image engine are performed in parallel to the means for dequeuing the second buffer from the second buffer queue and the means for queueing the second buffer to the image engine. The apparatus may include means for releasing the first buffer to the first buffer queue after the performing of the post-processing on the first image data. The apparatus may include means for dequeuing the second buffer from the image engine. The apparatus may include means for acquiring the second image data from the second buffer after the queueing of the second buffer at the second buffer queue. The apparatus may include means for releasing the second buffer to the second buffer queue after the compositing of the second image data. The apparatus may include means for performing, by the image engine, post-processing on new image data associated with a second image frame from the first buffer queue to generate post-processed image data associated with the second image frame to the second buffer queue, the second image frame being subsequent to the first image frame. The apparatus may include means for rendering to the first buffer queue a third image frame, the third image frame being subsequent to the second image frame. In some examples, the means for compositing associated with the first image frame, the means for performing the post-processing associated with the second image frame, and the means for rendering associated with the third image frame occur in parallel.
The subject matter described herein can be implemented to realize one or more benefits or advantages. For instance, the described graphics processing techniques can be used to enable performing specialized image post-processing of gaming application image data. Moreover, the graphics processing techniques disclosed herein can perform the specialized image post-processing prior to the compositing of the image data. Furthermore, as the buffer consumer may perform the compositing at regular intervals (e.g., every 16 ms), in some examples, the graphics processing techniques disclosed herein may introduce limited or negligible latency and/or frames-per-second drops. Furthermore, the graphics processing techniques disclosed herein can improve image quality while also offloading workload from the GPU and/or the CPU, as aspects of the specialized post-processing may be performed by dedicated hardware, such as a DSP. In some examples, the improved image quality may include introducing a brilliant color effect and/or detail enhancement to the image content.
In accordance with this disclosure, the term “or” may be interpreted as “and/or” where context does not dictate otherwise. Additionally, while phrases such as “one or more” or “at least one” or the like may have been used for some features disclosed herein but not others, the features for which such language was not used may be interpreted to have such a meaning implied where context does not dictate otherwise.
In one or more examples, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. For example, although the term “processing unit” has been used throughout this disclosure, such processing units may be implemented in hardware, software, firmware, or any combination thereof. If any function, processing unit, technique described herein, or other module is implemented in software, the function, processing unit, technique described herein, or other module may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media may include computer data storage media or communication media, including any medium that facilitates transfer of a computer program from one place to another. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. A computer program product may include a computer-readable medium.
The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), arithmetic logic units (ALUs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs, e.g., a chip set. Various components, modules or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily need realization by different hardware units. Rather, as described above, various units may be combined in any hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following example claims.
Claims (30)
- A method of graphics processing in an apparatus, comprising:
  queueing, from a first buffer queue, a first buffer with first image data associated with a first image frame to an image engine;
  performing, by the image engine, post-processing on the first image data to generate second image data on a second buffer;
  queueing the second buffer at a second buffer queue; and
  compositing, from the second buffer, the second image data associated with the first image frame.
- The method of claim 1, further comprising:
  dequeuing the second buffer from the second buffer queue; and
  queueing the second buffer to the image engine before the performing of the post-processing on the first image data,
  wherein the queueing of the first buffer with first image data to the image engine is performed in parallel to the dequeuing of the second buffer from the second buffer queue and the queueing of the second buffer to the image engine.
- The method of claim 1, further comprising:
  dequeuing the first buffer from the first buffer queue;
  rendering the first image data on the first buffer; and
  queueing the first buffer with the first image data to the first buffer queue before the queueing of the first buffer with the first image data to the image engine.
- The method of claim 3, further comprising:
  dequeuing the second buffer from the second buffer queue; and
  queueing the second buffer to the image engine before the performing of the post-processing on the first image data,
  wherein the dequeuing of the first buffer from the first buffer queue, the rendering of the first image data on the first buffer, the queueing of the first buffer with the first image data to the first buffer queue, and the queueing of the first buffer with first image data to the image engine are performed in parallel to the dequeuing of the second buffer from the second buffer queue and the queueing of the second buffer to the image engine.
- The method of claim 1, further comprising:
  releasing the first buffer to the first buffer queue after the performing of the post-processing on the first image data; and
  dequeuing the second buffer from the image engine.
- The method of claim 1, further comprising:
  acquiring the second image data from the second buffer after the queueing of the second buffer at the second buffer queue.
- The method of claim 1, further comprising:
  releasing the second buffer to the second buffer queue after the compositing of the second image data.
- The method of claim 1, wherein the compositing of the second image data includes displaying the second image data.
- The method of claim 1, wherein the first image data and the second image data are associated with a same image content.
- The method of claim 1, further comprising:
  performing, by the image engine, post-processing on new image data associated with a second image frame from the first buffer queue to generate post-processed image data associated with the second image frame to the second buffer queue, the second image frame being subsequent to the first image frame; and
  rendering to the first buffer queue a third image frame, the third image frame being subsequent to the second image frame,
  wherein the compositing associated with the first image frame, the performing of the post-processing associated with the second image frame, and the rendering associated with the third image frame occur in parallel.
- An apparatus for graphics processing by a device, comprising:
  a memory; and
  at least one processor coupled to the memory and configured to:
  queue, from a first buffer queue, a first buffer with first image data associated with a first image frame to an image engine;
  perform, by the image engine, post-processing on the first image data to generate second image data on a second buffer;
  queue the second buffer at a second buffer queue; and
  composite, from the second buffer, the second image data associated with the first image frame.
- The apparatus of claim 11, wherein the at least one processor is further configured to:
  dequeue the second buffer from the second buffer queue; and
  queue the second buffer to the image engine before the performing of the post-processing on the first image data,
  wherein the queueing of the first buffer with first image data to the image engine is performed in parallel to the dequeuing of the second buffer from the second buffer queue and the queueing of the second buffer to the image engine.
- The apparatus of claim 11, wherein the at least one processor is further configured to:
  dequeue the first buffer from the first buffer queue;
  render the first image data on the first buffer; and
  queue the first buffer with the first image data to the first buffer queue before the queueing of the first buffer with the first image data to the image engine.
- The apparatus of claim 13, wherein the at least one processor is further configured to:
  dequeue the second buffer from the second buffer queue; and
  queue the second buffer to the image engine before the performing of the post-processing on the first image data,
  wherein the dequeuing of the first buffer from the first buffer queue, the rendering of the first image data on the first buffer, the queueing of the first buffer with the first image data to the first buffer queue, and the queueing of the first buffer with first image data to the image engine are performed in parallel to the dequeuing of the second buffer from the second buffer queue and the queueing of the second buffer to the image engine.
- The apparatus of claim 11, wherein the at least one processor is further configured to:
  release the first buffer to the first buffer queue after the performing of the post-processing on the first image data; and
  dequeue the second buffer from the image engine.
- The apparatus of claim 11, wherein the at least one processor is further configured to:
  acquire the second image data from the second buffer after the queueing of the second buffer at the second buffer queue.
- The apparatus of claim 11, wherein the at least one processor is further configured to:
  release the second buffer to the second buffer queue after the compositing of the second image data.
- The apparatus of claim 11, wherein the at least one processor is further configured to:
  composite the second image data by displaying the second image data.
- The apparatus of claim 11, wherein the first image data and the second image data are associated with a same image content.
- The apparatus of claim 11, wherein the at least one processor is further configured to:
  perform, by the image engine, post-processing on new image data associated with a second image frame from the first buffer queue to generate post-processed image data associated with the second image frame to the second buffer queue, the second image frame being subsequent to the first image frame; and
  render to the first buffer queue a third image frame, the third image frame being subsequent to the second image frame,
  wherein the compositing associated with the first image frame, the performing of the post-processing associated with the second image frame, and the rendering associated with the third image frame occur in parallel.
- A computer-readable medium storing computer executable code for graphics processing, comprising code to:
  queue, from a first buffer queue, a first buffer with first image data associated with a first image frame to an image engine;
  perform, by the image engine, post-processing on the first image data to generate second image data on a second buffer;
  queue the second buffer at a second buffer queue; and
  composite, from the second buffer, the second image data associated with the first image frame.
- The computer-readable medium of claim 21, wherein the code is further configured to:
  dequeue the second buffer from the second buffer queue; and
  queue the second buffer to the image engine before the performing of the post-processing on the first image data,
  wherein the queueing of the first buffer with first image data to the image engine is performed in parallel to the dequeuing of the second buffer from the second buffer queue and the queueing of the second buffer to the image engine.
- The computer-readable medium of claim 21, wherein the code is further configured to:
  dequeue the first buffer from the first buffer queue;
  render the first image data on the first buffer; and
  queue the first buffer with the first image data to the first buffer queue before the queueing of the first buffer with the first image data to the image engine.
- The computer-readable medium of claim 23, wherein the code is further configured to:
  dequeue the second buffer from the second buffer queue; and
  queue the second buffer to the image engine before the performing of the post-processing on the first image data,
  wherein the dequeuing of the first buffer from the first buffer queue, the rendering of the first image data on the first buffer, the queueing of the first buffer with the first image data to the first buffer queue, and the queueing of the first buffer with first image data to the image engine are performed in parallel to the dequeuing of the second buffer from the second buffer queue and the queueing of the second buffer to the image engine.
- The computer-readable medium of claim 21, wherein the code is further configured to:
  release the first buffer to the first buffer queue after the performing of the post-processing on the first image data; and
  dequeue the second buffer from the image engine.
- The computer-readable medium of claim 21, wherein the code is further configured to:
  acquire the second image data from the second buffer after the queueing of the second buffer at the second buffer queue.
- The computer-readable medium of claim 21, wherein the code is further configured to:
  release the second buffer to the second buffer queue after the compositing of the second image data.
- The computer-readable medium of claim 21, wherein the code is further configured to:
  composite the second image data by displaying the second image data.
- The computer-readable medium of claim 21, wherein the code is further configured to:
  perform, by the image engine, post-processing on new image data associated with a second image frame from the first buffer queue to generate post-processed image data associated with the second image frame to the second buffer queue, the second image frame being subsequent to the first image frame; and
  render to the first buffer queue a third image frame, the third image frame being subsequent to the second image frame,
  wherein the compositing associated with the first image frame, the performing of the post-processing associated with the second image frame, and the rendering associated with the third image frame occur in parallel.
- An apparatus for graphics processing, comprising:
  means for queueing, from a first buffer queue, a first buffer with first image data associated with a first image frame to an image engine;
  means for performing, by the image engine, post-processing on the first image data to generate second image data on a second buffer;
  means for queueing the second buffer at a second buffer queue; and
  means for compositing, from the second buffer, the second image data associated with the first image frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/097687 WO2021012257A1 (en) | 2019-07-25 | 2019-07-25 | Methods and apparatus to facilitate a unified framework of post-processing for gaming |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021012257A1 true WO2021012257A1 (en) | 2021-01-28 |
Family
ID=74193084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/097687 WO2021012257A1 (en) | 2019-07-25 | 2019-07-25 | Methods and apparatus to facilitate a unified framework of post-processing for gaming |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021012257A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030122836A1 (en) * | 2001-12-31 | 2003-07-03 | Doyle Peter L. | Automatic memory management for zone rendering |
US20040258160A1 (en) * | 2003-06-20 | 2004-12-23 | Sandeep Bhatia | System, method, and apparatus for decoupling video decoder and display engine |
US20060197849A1 (en) * | 2005-03-02 | 2006-09-07 | Mats Wernersson | Methods, electronic devices, and computer program products for processing images using multiple image buffers |
CN101673391A (en) * | 2008-09-09 | 2010-03-17 | 索尼株式会社 | Pipelined image processing engine |
CN103425534A (en) * | 2012-05-09 | 2013-12-04 | 辉达公司 | Graphics processing unit sharing between many applications |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19938995; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19938995; Country of ref document: EP; Kind code of ref document: A1 |