WO2021087826A1 - Methods and apparatus to improve image data transfer efficiency for portable devices - Google Patents


Info

Publication number: WO2021087826A1
Authority: WIPO (PCT)
Application number: PCT/CN2019/116070
Other languages: French (fr)
Prior art keywords: image data, display panel, screen mask, modified image
Inventors: Yongjun XU, Wenkai YAO, Nan Zhang, Mark Sternberg
Original Assignee: Qualcomm Incorporated
Application filed by Qualcomm Incorporated

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris

Definitions

  • the present disclosure relates generally to processing systems and, more particularly, to one or more techniques for graphics processing.
  • Computing devices often utilize a graphics processing unit (GPU) to accelerate the rendering of graphical data for display.
  • Such computing devices may include, for example, computer workstations, mobile phones, such as so-called smartphones, embedded systems, personal computers, tablet computers, and video game consoles.
  • GPUs execute a graphics processing pipeline that includes one or more processing stages that operate together to execute graphics processing commands and output a frame.
  • a central processing unit may control the operation of the GPU by issuing one or more graphics processing commands to the GPU.
  • Modern day CPUs are typically capable of concurrently executing multiple applications, each of which may need to utilize the GPU during execution.
  • Portable electronic devices, including smartphones and wearable devices, may present graphical content on a display.
  • With increasing screen-to-body ratios, there has developed an increased need for presenting graphical content on displays having irregular shapes.
  • the apparatus may be a display processor, a display processing unit (DPU) , a graphics processing unit (GPU) , or a video processor (sometimes generally referred to as a “host processor” ) .
  • the apparatus can obtain image data for a frame. Additionally, the apparatus can determine modified image data for the frame based on a screen mask.
  • the screen mask may be associated with a display panel and may be configured to define a visible area of the display panel.
  • the modified image data may include less pixel data than the obtained image data.
  • the apparatus can also transmit the modified image data to the display panel.
  • the apparatus may transmit the screen mask to the display panel prior to the transmitting of the modified image data to the display panel.
  • a shape associated with the obtained image data may correspond to a rectangular shaped image and a shape associated with the modified image data may correspond to a shape of the visible area of the display panel.
  • the screen mask may be pre-generated and stored in a system memory accessible for determining the modified image data.
  • the screen mask may be updated based on a change associated with at least one characteristic of a user interface for presentment via the display panel.
  • the apparatus may transmit the updated screen mask to the display panel prior to the transmitting of subsequently modified image data.
  • the apparatus may determine whether a pixel of the obtained image data corresponds to the visible area of the display panel based on the screen mask and a location of the pixel. Further, the apparatus may populate a payload portion of an image packet with a value based on the determination. In some examples, the apparatus may determine whether a pixel of the obtained image data corresponds to a non-visible area of the display panel based on the screen mask and a location of the pixel. Additionally, the apparatus may exclude the image data of locations corresponding to the non-visible area. In some examples, the apparatus may generate an image packet based on the modified image data, and where the image packet may include at least a data identifier portion and a payload portion. In some examples, the apparatus may transmit the modified image data by transmitting the image packet to the display panel.
  • the apparatus may include the display panel. In some examples, the apparatus may receive the screen mask. Further, the apparatus may determine which pixels of the display panel to activate based on the screen mask. Also, the apparatus may cause the displaying of the modified image data via the activated pixels of the display panel. In some examples, the apparatus may receive the modified image data. Further, the apparatus may map the modified image data to the activated pixels of the display panel. In some examples, the apparatus may receive the screen mask from a local memory of the display panel. In some examples, the apparatus may receive the screen mask from a host processor prior to receiving the modified image data. In some examples, the apparatus may include a wireless communication device.
  • FIG. 1 is a block diagram that illustrates an example content generation system, in accordance with one or more techniques of this disclosure.
  • FIG. 2 is a block diagram that illustrates an example display panel system, in accordance with one or more techniques of this disclosure.
  • FIG. 3 illustrates example screen masks, in accordance with one or more techniques of this disclosure.
  • FIGs. 4 to 7 illustrate example flowcharts of example methods that may be executed by the example host processor of FIG. 2, in accordance with one or more techniques of this disclosure.
  • FIGs. 8 and 9 illustrate example flowcharts of example methods that may be executed by the example display panel of FIGs. 1 and/or 2, in accordance with one or more techniques of this disclosure.
  • Example techniques disclosed herein provide for efficient transfer of image data from a host processor to a display panel.
  • the image data may correspond to a first shape
  • the display panel may include a display area (or visible area) that corresponds to a second shape that is different than the first shape.
  • techniques disclosed herein facilitate applying a screen mask to the image data to reduce the size of the image data.
  • the screen mask may define the visible area of the display panel.
  • disclosed techniques may discard or exclude pixel data for pixels corresponding to non-visible areas of the display panel, may replace the respective pixel data with a NULL value, or may replace the respective pixel data with a black pixel.
  • Example techniques disclosed herein may then generate an image packet based on the screen mask and the modified image data for transmitting to the display panel. It should be appreciated that modifying the pixel data for one or more pixels of the image data enables the generated image packet to have less pixel data than the original image data, thereby reducing the amount of data being transmitted from the host processor to the display panel.
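  • The mask-application step described above (discard, NULL, or black) can be sketched in Python. This is an illustrative sketch, not code from the patent; the function name, the `mode` values, and the tuple-per-pixel representation are assumptions:

    ```python
    def apply_screen_mask(frame, mask, mode="discard"):
        """Reduce per-frame pixel data using a screen mask.

        frame: rows of pixel values (e.g. RGB tuples) for a rectangular image.
        mask:  same-shaped rows of booleans; True marks the visible area.
        mode:  'discard' keeps only visible pixels (row-major order),
               'null' replaces non-visible pixels with None,
               'black' replaces them with a black pixel.
        """
        if mode == "discard":
            # Exclude pixel data for locations in the non-visible area.
            return [px for row, mrow in zip(frame, mask)
                    for px, vis in zip(row, mrow) if vis]
        filler = None if mode == "null" else (0, 0, 0)
        return [[px if vis else filler for px, vis in zip(row, mrow)]
                for row, mrow in zip(frame, mask)]
    ```

    With the 'discard' mode the modified image data contains strictly less pixel data than the obtained image data, which is the transfer-efficiency gain the disclosure describes.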
  • processors include microprocessors, microcontrollers, graphics processing units (GPUs), general purpose GPUs (GPGPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems-on-chip (SOC), baseband processors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure.
  • One or more processors in the processing system may execute software.
  • Software can be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • the term application may refer to software.
  • one or more techniques may refer to an application, i.e., software, being configured to perform one or more functions.
  • the application may be stored on a memory, e.g., on-chip memory of a processor, system memory, or any other memory.
  • Hardware described herein such as a processor, may be configured to execute the application.
  • the application may be described as including code that, when executed by the hardware, causes the hardware to perform one or more techniques described herein.
  • the hardware may access the code from a memory and execute the code accessed from the memory to perform one or more techniques described herein.
  • components are identified in this disclosure.
  • the components may be hardware, software, or a combination thereof.
  • the components may be separate components or sub-components of a single component.
  • the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer.
  • such computer-readable media can comprise a random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
  • this disclosure describes techniques for having a graphics processing pipeline in a single device or multiple devices, improving the rendering of graphical content, reducing the load on a communication interface (e.g., a bus) , and/or reducing the load of a processing unit, i.e., any processing unit configured to perform one or more techniques described herein, such as a GPU, a DPU, and the like.
  • this disclosure describes techniques for graphics and/or display processing in any device that utilizes a display. Other example benefits are described throughout this disclosure.
  • instances of the term “content” may refer to “graphical content,” “image,” and vice versa. This is true regardless of whether the terms are being used as an adjective, noun, or other parts of speech.
  • the term “graphical content” may refer to content produced by one or more processes of a graphics processing pipeline.
  • the term “graphical content” may refer to content produced by a processing unit configured to perform graphics processing.
  • the term “graphical content” may refer to content produced by a graphics processing unit.
  • the term “display content” may refer to content generated by a processing unit configured to perform display processing.
  • the term “display content” may refer to content generated by a display processing unit.
  • Graphical content may be processed to become display content.
  • a graphics processing unit may output graphical content, such as a frame, to a buffer (which may be referred to as a framebuffer) .
  • a display processing unit may read the graphical content, such as one or more frames from the buffer, and perform one or more display processing techniques thereon to generate display content.
  • a display processing unit may be configured to perform composition on one or more rendered layers to generate a frame.
  • a display processing unit may be configured to compose, blend, or otherwise combine two or more layers together into a single frame.
  • a display processing unit may be configured to perform scaling, e.g., upscaling or downscaling, on a frame.
  • a frame may refer to a layer.
  • a frame may refer to two or more layers that have already been blended together to form the frame, i.e., the frame includes two or more layers, and the frame that includes two or more layers may subsequently be blended.
  • FIG. 1 is a block diagram that illustrates an example content generation system 100 configured to implement one or more techniques of this disclosure.
  • the content generation system 100 includes a device 104.
  • the device 104 may include one or more components or circuits for performing various functions described herein.
  • one or more components of the device 104 may be components of an SOC.
  • the device 104 may include one or more components configured to perform one or more techniques of this disclosure.
  • the device 104 may include a processing unit 120 and a system memory 124.
  • the device 104 can include a number of additional or alternative components, e.g., a communication interface 126, a transceiver 132, a receiver 128, a transmitter 130, a display processor 127, and a display panel 131 (sometimes referred to as a “display client” ) .
  • Reference to the display panel 131 may refer to one or more displays.
  • the display panel 131 may include a single display or multiple displays.
  • the display panel 131 may include a first display and a second display.
  • the results of the graphics processing may not be displayed on the device, e.g., the first and second displays may not receive any frames for presentment thereon. Instead, the frames or graphics processing results may be transferred to another device. In some aspects, this can be referred to as split-rendering.
  • the processing unit 120 may include an internal memory 121.
  • the processing unit 120 may be configured to perform graphics processing, such as in a graphics processing pipeline 107.
  • the device 104 may include a display processor, such as the display processor 127, to perform one or more display processing techniques on one or more frames generated by the processing unit 120 before presentment by the display panel 131.
  • the display processor 127 may be configured to perform display processing.
  • the display processor 127 may be configured to perform one or more display processing techniques on one or more frames generated by the processing unit 120.
  • the display panel 131 may be configured to display or otherwise present frames processed by the display processor 127.
  • the display panel 131 may include one or more of: a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, a projection display device, an augmented reality display device, a virtual reality display device, a head-mounted display, or any other type of display device.
  • Memory external to the processing unit 120 may be accessible to the processing unit 120.
  • the processing unit 120 may be configured to read from and/or write to external memory, such as the system memory 124.
  • the processing unit 120 may be communicatively coupled to the system memory 124 over a bus.
  • the processing unit 120 and the system memory 124 may be communicatively coupled to each other over the bus or a different connection.
  • the device 104 may include a content encoder/decoder configured to receive graphical and/or display content from any source, such as the system memory 124 and/or the communication interface 126.
  • the system memory 124 may be configured to store received encoded or decoded content.
  • the content encoder/decoder may be configured to receive encoded or decoded content, e.g., from the system memory 124 and/or the communication interface 126, in the form of encoded pixel data.
  • the content encoder/decoder may be configured to encode or decode any content.
  • the internal memory 121 or the system memory 124 may include one or more volatile or non-volatile memories or storage devices.
  • internal memory 121 or the system memory 124 may include RAM, SRAM, DRAM, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a magnetic data medium or an optical storage medium, or any other type of memory.
  • the internal memory 121 or the system memory 124 may be a non-transitory storage medium according to some examples.
  • the term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that internal memory 121 or the system memory 124 is non-movable or that its contents are static. As one example, the system memory 124 may be removed from the device 104 and moved to another device. As another example, the system memory 124 may not be removable from the device 104.
  • the processing unit 120 may be a central processing unit (CPU) , a graphics processing unit (GPU) , a general purpose GPU (GPGPU) , or any other processing unit that may be configured to perform graphics processing.
  • the processing unit 120 may be integrated into a motherboard of the device 104.
  • the processing unit 120 may be present on a graphics card that is installed in a port in a motherboard of the device 104, or may be otherwise incorporated within a peripheral device configured to interoperate with the device 104.
  • the processing unit 120 may include one or more processors, such as one or more microprocessors, GPUs, application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , arithmetic logic units (ALUs) , digital signal processors (DSPs) , discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combinations thereof. If the techniques are implemented partially in software, the processing unit 120 may store instructions for the software in a suitable, non-transitory computer-readable storage medium, e.g., internal memory 121, and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., may be considered to be one or more processors.
  • the content generation system 100 can include a communication interface 126.
  • the communication interface 126 may include a receiver 128 and a transmitter 130.
  • the receiver 128 may be configured to perform any receiving function described herein with respect to the device 104. Additionally, the receiver 128 may be configured to receive information, e.g., eye or head position information, rendering commands, or location information, from another device.
  • the transmitter 130 may be configured to perform any transmitting function described herein with respect to the device 104. For example, the transmitter 130 may be configured to transmit information to another device, which may include a request for content.
  • the receiver 128 and the transmitter 130 may be combined into a transceiver 132. In such examples, the transceiver 132 may be configured to perform any receiving function and/or transmitting function described herein with respect to the device 104.
  • the graphical content from the processing unit 120 for display via the display panel 131 is not static and may be changing. Accordingly, the display processor 127 may periodically refresh the graphical content displayed via the display panel 131. For example, the display processor 127 may periodically retrieve graphical content from the system memory 124, wherein the graphical content may have been updated by the execution of an application (and/or the processing unit 120) that outputs the graphical content to the system memory 124.
  • processing unit 120 may be combined.
  • the display processor 127 and the display panel 131 may be combined, the processing unit 120 and the display processor 127 may be combined, the processing unit 120 and the system memory 124 may be combined, etc.
  • the processing unit 120 may include a screen mask facilitating component 198 configured to fetch image data for a frame.
  • the example screen mask facilitating component 198 may also be configured to generate an image packet including modified image data based on the fetched image data and a screen mask.
  • the screen mask may be associated with a display panel and may define a visible area of the display panel.
  • the modified image data may have less pixel data than the fetched image data.
  • the example screen mask facilitating component 198 may be configured to transmit the image packet with the modified image data to the display panel.
  • the example screen mask facilitating component 198 may also be configured to retrieve the screen mask from a system memory accessible to the display processor.
  • the screen mask may be hard-coded (or pre-generated) for the visible area of the display panel.
  • the screen mask facilitating component 198 may be configured to update the screen mask (e.g., in real-time, during runtime, etc. ) based on a change of at least one characteristic of a user interface being displayed via the display panel.
  • the screen mask facilitating component 198 may be configured to transmit the screen mask to the display panel prior to the transmitting of the image packet with the modified image data to the display panel.
  • a shape associated with the fetched image data may correspond to a rectangular shaped image and a shape associated with the modified image data may correspond to a shape of the visible area of the display panel.
  • the screen mask may be pre-generated and stored in a system memory accessible for generating the modified image data.
  • the screen mask may be updated based on a change associated with at least one characteristic of a user interface for presentment via the display panel.
  • the screen mask facilitating component 198 may be configured to transmit the updated screen mask to the display panel prior to the transmitting of a subsequent image packet.
  • the screen mask facilitating component 198 may be configured to determine whether a pixel of the fetched image data corresponds to the visible area of the display panel based on the screen mask and a location of the pixel. Further, the screen mask facilitating component 198 may be configured to populate a payload portion of the image packet with a value based on the determination. In some examples, the screen mask facilitating component 198 may be configured to determine whether a pixel of the fetched image data corresponds to a non-visible area of the display panel based on the screen mask and a location of the pixel. Additionally, the screen mask facilitating component 198 may be configured to exclude, from the image packet, the image data of locations corresponding to the non-visible area. In some examples, image packet may include at least a data identifier portion and a payload portion.
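  • A toy encoding of the image packet described above, with a data identifier portion and a payload portion, might look as follows. The identifier value and the header layout are invented for illustration and do not follow any real MIPI DSI packet format:

    ```python
    import struct

    DATA_ID_MODIFIED_IMAGE = 0x3E  # hypothetical data identifier value

    def build_image_packet(visible_pixel_bytes):
        """Pack modified image data into [data identifier | payload length | payload].

        visible_pixel_bytes: bytes for pixels in the visible area only;
        pixel data for the non-visible area was excluded upstream, so the
        payload is smaller than a full rectangular frame would be.
        """
        payload = bytes(visible_pixel_bytes)
        # 1-byte identifier + little-endian 16-bit payload length (toy 64 KiB limit).
        header = struct.pack("<BH", DATA_ID_MODIFIED_IMAGE, len(payload))
        return header + payload
    ```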
  • the screen mask facilitating component 198 may be included with the display panel 131. In some examples, the screen mask facilitating component 198 may be configured to receive the screen mask. Further, the screen mask facilitating component 198 may be configured to determine which pixels of the display panel to activate based on the screen mask. Also, the screen mask facilitating component 198 may be configured to cause the displaying of the modified image data via the activated pixels of the display panel. In some examples, the screen mask facilitating component 198 may be configured to receive the image packet with the modified image data. Further, the screen mask facilitating component 198 may be configured to map the modified image data of the received image packet to the activated pixels of the display panel.
  • the screen mask facilitating component 198 may be configured to receive the screen mask from a local memory of the display panel. In some examples, the screen mask facilitating component 198 may be configured to receive the screen mask from a host processor prior to receiving the image packet with the modified image data.
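  • On the panel side, mapping the received payload onto the activated pixels can be sketched as follows. This is a simplified model (not from the patent); `None` stands in for a pixel element left in the OFF state:

    ```python
    def map_payload_to_panel(payload, mask):
        """Map modified image data (visible pixels in row-major order) onto
        the pixel elements activated by the previously received screen mask.
        Pixel elements where the mask is False stay off (None here)."""
        it = iter(payload)
        return [[next(it) if visible else None for visible in row]
                for row in mask]
    ```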
  • a device such as the device 104, may refer to any device, apparatus, or system configured to perform one or more techniques described herein.
  • a device may be a server, a base station, user equipment, a client device, a station, an access point, a computer, e.g., a personal computer, a desktop computer, a laptop computer, a tablet computer, a computer workstation, or a mainframe computer, an end product, an apparatus, a phone, a smart phone, a server, a video game platform or console, a handheld device, e.g., a portable video game device or a personal digital assistant (PDA) , a wearable computing device, e.g., a smart watch, an augmented reality device, or a virtual reality device, a non-wearable device, a display or display device, a television, a television set-top box, an intermediate network device, a digital media player, a video streaming device, a content streaming device, an in-car
  • Portable electronic devices may present graphical content on a display.
  • a host processor of the portable electronic device may generate graphical content for presentment via a display panel.
  • the host processor may be configured to generate the graphical content based on a standard shape, such as a rectangular-shaped image.
  • images generated in accordance with the MIPI DSI (Mobile Industry Processor Interface, Display Serial Interface) protocol correspond to a rectangular-shaped image.
  • the host processor may then transmit the graphical content to the display panel for presentment on a display.
  • a wearable device may include a circular shaped display screen
  • a smartphone may include one or more cutout sections of the display screen (e.g., a notch for a camera), etc.
  • Some such examples may introduce inefficiencies in transferring the graphical content from the host processor to the display panel. For example, image data corresponding to the corners of a rectangular-shaped image may not be displayed via a circular-shaped display screen.
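  • The scale of that inefficiency is easy to estimate: for a square frame driving a display whose visible area is the inscribed circle, the corner pixels that are transmitted but never shown make up a fraction 1 - π/4, roughly 21.5% of the frame. This is a back-of-the-envelope figure, not a number taken from the patent:

    ```python
    import math

    def wasted_fraction(width, height):
        """Fraction of a width x height rectangular frame that falls outside
        the inscribed circle (radius = min(width, height) / 2) and so is
        never displayed on a circular-shaped display screen."""
        radius = min(width, height) / 2.0
        return 1.0 - (math.pi * radius ** 2) / (width * height)
    ```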
  • Example techniques disclosed herein provide for efficient transfer of image data from the host processor to the display panel for presentment.
  • the image data may correspond to a first shape (e.g., a rectangular-shaped image)
  • the display panel may include a display area (or visible area) that corresponds to a second shape that is different than the first shape (e.g., a circular-shaped display screen, a display screen including one or more cutout sections, a notch, etc. )
  • techniques disclosed herein facilitate applying a screen mask to the image data to reduce the size of the image data being transmitted from the host processor to the display panel.
  • the screen mask may define the visible area of the display panel.
  • techniques disclosed herein reduce the size of the image data by modifying pixel data for pixels corresponding to non-visible areas of the display. For example, disclosed techniques may discard the respective pixel data, may replace the respective pixel data with NULL values, or may replace the respective pixel data with a black pixel. Example techniques disclosed herein may then generate an image packet based on the screen mask and the modified image data.
  • Modifying the pixel data for one or more pixels of the image (e.g., by excluding the pixel data, by replacing the pixel data with NULL values, or by replacing the pixels with black pixels) reduces the amount of data transmitted from the host processor to the display panel.
  • the visible area of the display screen may change, for example, based on an application.
  • an application presenting content via the display panel may define the visible area of a user interface being presented via the display panel.
  • the application may define the visible area to be different than the shape of the display screen.
  • a portable device may include one or more always-on elements, such as a current time and/or date.
  • the visible area of the display screen may correspond to those pixel elements positioned and associated with the displaying of the always-on elements, while the remaining pixel elements of the display screen may be associated with the non-visible area of the display panel.
  • Example techniques disclosed herein enable assigning visibility states to the pixel elements of the display screen.
  • a pixel element may be assigned a visible state or a non-visible state.
  • a pixel element that is assigned a visible state may be a pixel element of the display screen that is positioned within the visible area of the display screen.
  • for pixel elements that are associated with the visible state, the display panel may transition the respective pixel elements to an ON state (if the display panel is capable of turning pixel elements ON or OFF) or may allow pixel data for the corresponding pixel elements to be displayed.
  • Pixel elements that are assigned the non-visible state are OFF.
  • such pixel elements may be transitioned to the OFF state (if the display panel is capable of turning pixel elements ON or OFF) or may be set to the color black.
  • pixel data for pixel elements associated with the non-visible state may be skipped or ignored from further processing.
  • the image data generated by the host processor may be associated with a first shape while the visible area of the display panel may be associated with a second shape.
  • the image data generated by the host processor may be a rectangle, while the visible area of the display panel may be a circle.
  • the first shape may be associated with a standard shape
  • the second shape may be associated with an “irregular” shape.
  • an irregular shape may include a circular display of a wearable device, may include a display screen with a cut-out section (e.g., a notch) , etc.
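The visible/non-visible partition described above can be sketched as a screen mask. The grid representation, sizes, and function names below are illustrative assumptions, not part of the disclosure: the mask is simply a 2-D boolean grid where True marks a pixel element inside the visible area.

```python
# Illustrative screen masks (assumption: True = visible pixel element,
# False = non-visible), for a circular display and for a rectangular
# display with a top-center cut-out (notch).

def circular_mask(width, height):
    """Mask for a circular display inscribed in a width x height grid."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    r = min(width, height) / 2.0
    return [[(x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 for x in range(width)]
            for y in range(height)]

def notched_mask(width, height, notch_w, notch_h):
    """Mask for a rectangular display with a centered top notch cut out."""
    left = (width - notch_w) // 2
    return [[not (y < notch_h and left <= x < left + notch_w)
             for x in range(width)]
            for y in range(height)]
```

A corner of the circular mask is non-visible while its center is visible; only the notch region of the notched mask is non-visible.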
  • FIG. 2 is a block diagram that illustrates an example display panel system 200, in accordance with one or more techniques of this disclosure.
  • the display panel system 200 includes a host (or application) processor 205 and a display panel 255, which communicate via communication bus 250.
  • the host processor 205 may send image data to the display panel 255 via the communication bus 250.
  • the example host processor 205 and the display panel 255 may also send control information via the communication bus 250.
  • Aspects of the host processor 205 may be implemented by the processing unit 120 and/or the display processor 127 of FIG. 1.
  • Aspects of the display panel 255 may be implemented by the display processor 127 and/or the display panel 131 of FIG. 1.
  • the example host processor 205 of FIG. 2 includes a timing controller 210, a frame buffer 215, and a bus interface 220.
  • the timing controller 210 is in communication with the frame buffer 215 and may use synchronization signals to control the transfer of data from the frame buffer 215 to the bus interface 220.
  • the frame buffer 215 may receive image data 225 (e.g., from the system memory 124 of FIG. 1) , may temporarily store the image data 225, and may provide the image data 225 to the bus interface 220.
  • the image data 225 may include pixel data for a series of frames to be transferred to the display panel 255.
  • the bus interface 220 is coupled to the communication bus 250, which is coupled to a bus interface 260 of the display panel 255.
  • the host processor 205 may be implemented as one or more electronic hardware processors, such as a display processor, a DPU, a GPU, and/or a video processor, such as the example processing unit 120 of FIG. 1.
  • the communication bus 250 may be implemented by a display communication interface, such as the example MIPI DSI link.
  • other communication interfaces may be used to facilitate communication between the host processor 205 and the display panel 255.
  • the display panel 255 includes the bus interface 260, which is coupled to the communication bus 250, and is configured to receive information from the host processor 205.
  • the display panel 255 of FIG. 2 also includes a display driver 265, a buffer 270, and a display screen 275.
  • the display screen 275 includes a plurality of pixel elements for displaying image data.
  • the example display screen 275 may include a visible area corresponding to an irregular shape.
  • the display driver 265 is coupled to the bus interface 260 and the display screen 275. Additionally, the display driver 265 is coupled to the buffer 270, which is coupled to the display screen 275.
  • the display panel system 200 may operate in command-mode or video-mode.
  • in video-mode, image data is transmitted from the host processor 205 to the display panel 255 as a real-time pixel stream.
  • the host processor 205 may refresh the image data continuously at the display panel 255.
  • the host processor 205 may provide image data (e.g., pixel data) and synchronization information to the display panel 255.
  • Video-mode operation may be useful for display panels that do not include a frame buffer to store frames.
  • the host processor 205 may transfer image data over the bus interface 220 and the communication bus 250 at a display refresh rate, such as sixty (60) frames per second.
  • the display driver 265 may read the image data from the bus interface 260 and write the frames to the display screen 275.
  • in command-mode, image data may be transmitted from the host processor 205 to the display panel 255 via commands and data.
  • the host processor 205 can transfer image data over the bus interface 220 and the communication bus 250.
  • the display driver 265 may read the image data from the bus interface 260 and temporarily store the image data in the buffer 270 prior to presentment of the image data via the display screen 275.
  • the display driver 265 may also write the image data to the display screen 275.
  • the host processor 205 also includes an image data handler 230.
  • the image data handler 230 generates image packet (s) 240 for transmitting to the display panel 255 via the communication bus 250.
  • the image packet 240 includes a packet header section 240a, a packet payload section 240b, and a packet footer section 240c.
  • the example packet header section 240a may include a data identifier portion, a word count portion, and an error correction code portion.
  • the example data identifier portion may contain a virtual channel identifier and data type information.
  • the data type information may denote the format and content of payload data (e.g., the data contained in the packet payload section 240b) .
  • the example word count portion may indicate how many words (or bytes) are in the packet payload section 240b.
  • the receiver of the image packet 240 (e.g., the display panel 255) may use the word count to determine the packet end (e.g., after the packet payload section 240b and the packet footer section 240c).
  • the word count portion may be 16-bits long.
  • the example error correction code portion may include an error correction code (ECC) for the packet header section 240a of the image packet 240 and may protect the data in the packet header section 240a.
  • the ECC may enable one-bit errors in data of the packet header section 240a to be corrected and may enable two-bit errors to be detected.
  • the ECC may be 8-bits long.
  • the example packet payload section 240b includes the payload of the image packet 240 (e.g., application-specific payload) and may not be restricted in size. In some examples, the length of the packet payload section 240b may be determined based on the word count identified by the word count portion of the packet header section 240a.
  • the example packet footer section 240c includes a checksum.
  • the checksum may be 16-bits long.
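The packet layout described above (a header carrying a data identifier, a 16-bit word count, and an 8-bit ECC; a variable-length payload; and a 16-bit checksum footer) can be sketched as follows. The `ecc` and `checksum` functions here are simple placeholders and assumptions for illustration only; an actual MIPI DSI link specifies its own Hamming-based ECC and CRC algorithms for these fields.

```python
import struct

# Illustrative long-packet layout: 4-byte header (data identifier,
# 16-bit word count, 8-bit ECC), payload, 2-byte checksum footer.

def ecc(header3):
    # Placeholder (assumption): XOR of the three protected header bytes.
    return header3[0] ^ header3[1] ^ header3[2]

def checksum(payload):
    # Placeholder 16-bit checksum (assumption): byte sum modulo 2**16.
    return sum(payload) & 0xFFFF

def build_packet(data_id, payload):
    header3 = bytes([data_id]) + struct.pack("<H", len(payload))
    header = header3 + bytes([ecc(header3)])
    footer = struct.pack("<H", checksum(payload))
    return header + payload + footer

def parse_packet(packet):
    data_id = packet[0]
    word_count, = struct.unpack("<H", packet[1:3])
    # The word count locates the packet end, as described above.
    payload = packet[4:4 + word_count]
    return data_id, payload
```

The receiver recovers the data identifier (screen mask vs. image data) and uses the word count to delimit the payload.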
  • the image data handler 230 facilitates improving the transfer efficiency of image data from the host processor 205 to the display panel 255.
  • the image data handler 230 may generate the image packet 240 based on the image data 225 and a screen mask 235.
  • the image data 225 may contain pixel data for an image associated with a first shape (e.g., a standard rectangle image shape) .
  • the visible area of the display screen 275 of the display panel 255 may correspond to an irregular shape (e.g., a rectangle shape with a cutout section, a circular screen, etc. ) .
  • the screen mask 235 may, thus, define one or more visible areas of the display screen 275 and one or more non-visible areas of the display screen.
  • the screen mask 235 may be a hard-coded screen mask (e.g., a pre-generated screen mask) that is accessible to the image data handler 230.
  • the image data handler 230 may obtain the screen mask 235 from the system memory 124 and/or from storage local to the image data handler 230.
  • the screen mask 235 may be defined and/or provided by an application (e.g., during runtime operation of the application) .
  • application presenting content via the display panel 255 may define a visible area of the display panel 255.
  • the application may generate and provide the screen mask to the image data handler 230.
  • the application may provide visible area information to the image data handler 230, which may then generate a screen mask based on the visible area information.
  • FIG. 3 illustrates example screen masks 300, in accordance with one or more techniques disclosed herein.
  • a first screen mask 300A and a second screen mask 300B correspond to a same display panel having a first display screen shape
  • the third screen mask 300C corresponds to a different display panel having a second display screen shape.
  • the first display screen shape may correspond to a circular shaped display screen
  • the second display screen shape may correspond to a generally rectangular shaped display screen including a notch.
  • the screen masks 300 define respective visible areas 305 and non-visible areas 310.
  • the first screen mask 300A includes a visible area 305A and four non-visible areas 310A that facilitate defining the generally circular shape of the corresponding display screen.
  • the second screen mask 300B includes a relatively small visible area 305B and a relatively large non-visible area 310B (e.g., when compared to the visible area 305A and the non-visible areas 310A of the first screen mask 300A) .
  • the visible area 305B defined by the second screen mask 300B may correspond to the position of the display screen for presentment of always-on elements.
  • the third screen mask 300C includes a relatively large visible area 305C and a relatively small non-visible area 310C.
  • the non-visible area 310C defined by the third screen mask 300C may correspond to a cutout in the display screen (e.g., a notch) that does not include pixel elements for presentment of image data.
  • the image data handler 230 may transmit the screen mask 235 to the display panel 255.
  • the image data handler 230 may encode the screen mask 235 and populate the packet payload section 240b of the image packet 240 with the encoded screen mask.
  • the example image data handler 230 may also generate the packet header section 240a of the image packet 240 by setting the data identifier of the packet header section 240a to indicate that the packet payload section 240b includes screen mask information corresponding to the screen mask 235.
  • the example image data handler 230 may then transmit the image packet 240 to the display panel 255 via the communication bus 250.
  • the screen mask may be associated with a display panel and may define a visible area of the display panel.
  • the screen masks 300 of FIG. 3 define a visible area 305 and one or more non-visible areas 310.
  • the visible areas 305 correspond to the shape of the display screen 275 of the display panel 255.
  • the first screen mask 300A defines the visible area 305A and the non-visible areas 310A corresponding to the circular shape of a display screen.
  • the third screen mask 300C defines the visible area 305C and the non-visible area 310C corresponding to the generally rectangular shape of a display screen including a notch.
  • the visible areas 305 may be based on one or more characteristics of a user interface (e.g., may be based on a portion of the display screen corresponding to one or more always-on elements) .
  • the first screen mask 300A and the second screen mask 300B correspond to the same display panel (e.g., a circular shaped display screen) , but define different visible areas.
  • the second screen mask 300B defines the visible area 305B corresponding to always-on elements (such as where the time and/or date may be displayed) .
  • the screen mask 235 may be pre-generated and stored in a system memory accessible to the image data handler 230.
  • the screen mask 235 may be stored by the image data handler 230, the internal memory 121 of FIG. 1, and/or the system memory 124 of FIG. 1.
  • the image data handler 230 may update the screen mask 235 based on one or more changes of a user interface being presented via the display panel 255.
  • the display panel 255 may display a first user interface having a first shape during a first operation state and may display a second user interface having a second shape during a second operation state.
  • the image data handler 230 may update the screen mask 235 from a first screen mask (e.g., the first screen mask 300A) defining the visible area 305A of the first user interface to a second screen mask (e.g., the second screen mask 300B) defining the visible area 305B of the second user interface when the operation state changes from the first operation state (e.g., while the display panel 255 is presenting an application interface where the pixel elements of the display screen 275 are used to display image data associated with the application interface) to the second operation state (e.g., while the display panel 255 is presenting a sleep state interface where a portion of the pixel elements of the display screen 275 are used to display always-on elements).
  • the image data handler 230 may populate the payload portion of the image packet 240 with the encoded screen mask for transmitting the screen mask 235 to the display panel 255. For example, the image data handler 230 may populate the packet payload section 240b based on the encoded screen mask.
  • the image data handler 230 may fetch image data for a frame. For example, the image data handler 230 may obtain the image data 225 from the frame buffer 215. In some examples, the image data 225 may correspond to a standard, rectangle-shaped image.
  • the image data handler 230 may determine whether a pixel of the image data corresponds to a visible area of the display panel 255 based on the screen mask 235 and a location of the pixel. For example, for each pixel of the image data 225, the image data handler 230 may determine whether the location of the respective pixel corresponds to the visible area 305 defined by the screen masks 235, 300 or to the non-visible area (s) 310 defined by the screen masks 235, 300.
  • the image data handler 230 may modify the image data 225 based on the determination. For example, the image data handler 230 may modify the pixel data for the pixels at locations corresponding to the non-visible area(s) 310. In some examples, the image data handler 230 may modify the image data by discarding (or excluding) the respective pixel data. In some examples, the image data handler 230 may modify the image data by replacing the respective pixel data with NULL values or a black pixel. The example image data handler 230 may then populate the payload portion with the modified image data.
  • the image data handler 230 may populate the payload portion corresponding to the pixel with pixel data associated with the pixel when the location of the pixel corresponds to the visible area defined by the screen mask (e.g., the visible areas 305 of the screen masks 300) .
  • the payload portion populated by the image data handler 230 may include less pixel data than the fetched image data 225.
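The payload-compaction step described above can be sketched as follows, assuming row-major pixel data and a screen mask flattened the same way (both representations are illustrative assumptions):

```python
# Illustrative sketch: populate the payload with only those pixels whose
# locations the screen mask marks as visible, so the payload carries
# less pixel data than the fetched frame.

def compact_payload(pixels, flat_mask):
    """Keep pixel data only for locations in the visible area."""
    return [p for p, visible in zip(pixels, flat_mask) if visible]
```

For a mask with few visible locations, the resulting payload is correspondingly smaller than the original frame, which is the transfer-efficiency gain the disclosure describes.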
  • the image data handler 230 may generate the image packet 240 based on the populated payload portion. For example, the image data handler 230 may generate the image packet 240 including the packet header section 240a, the packet payload section 240b, and the packet footer section 240c. The example image data handler 230 may include the populated payload portion in the packet payload section 240b. The image data handler 230 may populate the packet header section 240a with a data identifier corresponding to the transmitting of screen mask information when the payload portion is populated with the encoded screen mask. The image data handler 230 may populate the packet header section 240a with a data identifier corresponding to the transmitting of image data when the payload portion is populated with the pixel data.
  • the image data handler 230 may transmit the image packet 240 to the display panel 255.
  • the image data handler 230 may transmit the image packet 240 through the bus interface 220 to the bus interface 260 of the display panel 255 via the communication bus 250.
  • the image packet 240 may include screen mask information defining the visible area of the display panel 255.
  • the image packet 240 may include pixel data for presentment of a frame of image data by the display panel 255.
  • the display panel 255 also includes a packet handler 280.
  • the example packet handler 280 receives the image packets 240 from the host processor 205 via the communication bus 250 and the bus interface 260.
  • the packet handler 280 facilitates determining whether the image packet 240 corresponds to a screen mask or to image data.
  • the packet handler 280 may decode and parse the packet header section 240a of the image packet 240 to determine the data identifier of the image packet 240.
  • the packet handler 280 may then use the data identifier to determine whether the packet payload section 240b includes information corresponding to a screen mask or to image data.
  • the packet handler 280 may determine the visibility states of the pixel elements of the display panel based on the screen mask included in the packet payload section 240b. For example, the packet handler 280 may cause the display screen 275 to transition those pixel elements that are positioned within the visible area 305 to the visible state, may transition those pixel elements that are positioned within the non-visible area 310 to the non-visible state, or may maintain the current visibility state of the respective pixel elements.
  • the packet handler 280 may process the image data of the image packet 240 for displaying on the display screen 275. For example, the packet handler 280 may process the information included in the packet payload portion 240b of the image packet 240 to determine how to display the respective pixels. In some examples, the packet handler 280 may process the payload information based on whether the display panel system 200 is operating in the video-mode or the command-mode. For example, when the display panel system 200 is operating in the video-mode, the display driver 265 may read the payload information and write the image data to the display screen 275. In other examples when the display panel system 200 is operating in the command-mode, the display driver 265 may read the payload information and temporarily store the image data in the buffer 270 prior to presentment of the image data via the display screen 275.
  • the packet handler 280 may cause the display screen 275 to update.
  • the packet handler 280 may cause the display screen 275 to display the image data associated with the image packet 240 and/or may update the visibility state of the respective pixel elements of the display screen 275 (e.g., may transition certain of the pixel elements positioned within the visible area of the display screen 275 to the visible state, may transition certain of the pixel elements positioned within the non-visible area of the display screen 275 to the non-visible state, and/or may maintain the visibility state of certain of the pixel elements) .
  • the packet handler 280 may use a screen mask to determine how to present received image data.
  • the image packet 240 received by the packet handler 280 may include “missing” pixel data corresponding to pixel data that was discarded (e.g., by the image data handler 230) .
  • the packet handler 280 may use the screen mask to map the pixel data of the received image data to pixel elements of the display screen 275 that are in the visible area of the display screen 275.
  • the image data 225 fetched by the image data handler 230 may include pixel data corresponding to ten pixel elements.
  • the image data handler 230 may determine, based on the screen mask 235, that three pixel elements are in the visible area of the display screen 275 and the remaining seven pixel elements are in the non-visible area of the display screen 275. In some such examples, the image data handler 230 may discard the pixel data for the seven non-visible area pixel elements and generate the image packet 240 including the pixel data for the three visible area pixel elements.
  • the packet handler 280 may receive the image packet 240 including the pixel data for the three visible area pixel elements. The packet handler 280 may then use the screen mask to map the pixel data to the three visible area pixel elements of the display screen 275. The packet handler 280 may then facilitate the presentment of the image data received via the image packet 240.
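The receiver-side mapping in the ten-pixel example above can be sketched as follows (the flattened mask representation and the choice of black, here 0, as the fill value for non-visible elements are assumptions):

```python
# Illustrative sketch of the packet handler's mapping step: walk the
# screen mask and consume one received pixel for each visible location,
# filling non-visible locations with a default (e.g., black) value.

def expand_payload(compact_pixels, flat_mask, fill=0):
    """Map compacted pixel data back onto the display's pixel elements."""
    it = iter(compact_pixels)
    return [next(it) if visible else fill for visible in flat_mask]
```

With a ten-element mask in which three locations are visible, the three received pixels land on exactly those three pixel elements, matching the example in the text.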
  • the data identifier of the packet header 240a may indicate that the packet payload portion 240b includes screen mask information and pixel data.
  • the packet handler 280 may process the information included in the packet payload portion 240b at a more granular level, such as for each pixel element of the display screen. For example, for each pixel element of the display screen, the packet handler 280 may determine whether the payload information for the respective pixel element corresponds to screen mask information (e.g., whether the pixel element is positioned within the visible area or the non-visible area of the display screen) or to pixel data.
  • the packet handler 280 may process the pixel data for displaying via the display screen 275. If the packet handler 280 determines that the payload information corresponds to screen mask information, the packet handler 280 may determine whether to transition the pixel element to the visible state, to transition the pixel element to the non-visible state, or to maintain the current visibility state of the pixel element. In some such examples, if the packet handler 280 determines to transition the pixel element to the visible state (or to maintain the current visible state of the pixel element) , the packet handler 280 may also process pixel data corresponding to the pixel element for displaying via the display screen.
  • the display panel 255 may store a screen mask.
  • the screen mask may be hard-coded (or pre-generated) and stored in a memory local to the display panel 255 (e.g., in the buffer 270).
  • the display panel 255 may receive a screen mask prior to receipt of image data for each frame.
  • the display panel 255 may receive a screen mask and apply the screen mask to the presentment of a series of frames.
  • the display panel 255 may receive a signal from the host processor 205 indicating for how many frames a screen mask should be applied.
  • the display panel 255 may receive a screen mask periodically (e.g., after every five frames, after every ten frames, etc.).
  • the display panel 255 may apply a screen mask (e.g., a hard-coded (or pre-generated) screen mask or a screen mask previously received) until the display panel 255 receives a different screen mask (or an indication to use a different screen mask) .
  • FIG. 4 illustrates an example flowchart 400 of an example method in accordance with one or more techniques disclosed herein.
  • the method may be performed by an apparatus such as a host processor and/or a component of the host processor (e.g., the example processing unit 120 of FIG. 1, the example display processor 127 of FIG. 1, the example host processor 205 of FIG. 2, and/or the example image data handler 230 of FIG. 2) .
  • the apparatus may determine whether to transmit a screen mask to a display panel, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may transmit a screen mask to a display panel at start-up. For example, when the apparatus powers on (e.g., after being in an off mode, a sleep mode, and similar operation modes) , the apparatus may determine to transmit a screen mask to the display panel.
  • the apparatus may transmit a screen mask for each frame of image data. For example, the apparatus may transmit the screen mask prior to the transmittal of image data for a frame. In some examples, the apparatus may periodically transmit a screen mask to the display panel.
  • the apparatus may transmit a screen mask prior to the transmittal of image data for every fifth frame, for every tenth frame, etc.
  • the apparatus may transmit a screen mask when a change in the visible area of the display panel is determined (e.g., by an application during a runtime operation) .
  • the apparatus may update the screen mask for transmitting.
  • the apparatus may be transmitting a first screen mask defining a first visible area of a first user interface and then determine to transmit a second screen mask defining a second visible area of a second user interface.
  • the apparatus may determine to update the screen mask for transmitting based on a change detected from the first user interface to the second user interface.
  • the apparatus may determine not to transmit a screen mask. For example, when the screen mask is static (e.g., is the same as a previously transmitted screen mask and/or is hard-coded at the display panel) , the apparatus may determine not to transmit a screen mask. In some examples, the apparatus may periodically transmit a screen mask and, thus, may determine whether to transmit a screen mask based on the period (e.g. the apparatus may determine to transmit the screen mask before image data for every fifth frame and determine not to transmit the screen mask for the remaining frames) .
  • the apparatus may determine not to transmit a screen mask when the apparatus modifies the image data by replacing some pixel data with a NULL value. It should be appreciated that in some such examples in which the apparatus may not transmit a screen mask, the example method of flowchart 400 may begin at 406.
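The determination at 402 can be sketched as a small decision function. The specific triggers below (start-up, a detected user-interface change, and a five-frame period) are drawn from the examples above, but the function itself and its parameters are illustrative assumptions:

```python
# Illustrative sketch of the "transmit a screen mask?" determination:
# send at start-up, when the visible area changed, or periodically;
# otherwise skip (e.g., when the mask is static or hard-coded at the
# display panel, or when non-visible pixels were NULL-filled instead).

def should_send_mask(frame_index, mask_changed, period=5):
    if frame_index == 0:      # start-up (e.g., after an off/sleep mode)
        return True
    if mask_changed:          # e.g., a change in user interface detected
        return True
    return frame_index % period == 0   # periodic retransmission
```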
  • the apparatus may transmit an image packet corresponding to a screen mask to the display panel, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may populate a payload portion of the image packet with the screen mask and transmit the corresponding image packet to the display panel.
  • Example techniques for transmitting the image packet corresponding to the screen mask are described in connection with an example flowchart 500 of FIG. 5.
  • the apparatus may populate a payload portion of an image packet with modified image data based on image data for a frame and a screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may apply the screen mask to the image data for the frame to determine which pixels of the image data correspond to a visible area of the display panel and/or which pixels of the image data correspond to a non-visible area of the display panel.
  • the apparatus may modify the image data by replacing pixel data for the pixels corresponding to the non-visible area with NULL values.
  • the apparatus may modify the image data by removing pixel data for the pixels corresponding to the non-visible area.
  • Example techniques for populating the payload portion of the image packet with the modified image data are described in connection with example flowcharts 600 and 700 of FIGs. 6 and 7, respectively.
  • the apparatus may generate an image packet with modified image data based on the populated payload portion, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may generate the image packet 240 including the packet header section 240a, the packet payload section 240b, and the packet footer section 240c.
  • the example apparatus may include the populated payload portion in the packet payload section 240b.
  • the apparatus may populate the packet header section 240a with a data identifier corresponding to the transmitting of image data.
  • the apparatus may transmit the image packet to the display panel, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may transmit the image packet 240 to the display panel 255 via the communication bus 250. In some examples, the apparatus may transmit the image packet to the display panel after the generating of the image packet. In some examples, the apparatus may wait to receive a fetch message from the display panel before transmitting the image packet to the display panel.
  • FIG. 5 illustrates an example flowchart 500 of an example method in accordance with one or more techniques disclosed herein.
  • the method may be performed by an apparatus such as a host processor and/or a component of the host processor (e.g., the example processing unit 120 of FIG. 1, the example display processor 127 of FIG. 1, the example host processor 205 of FIG. 2, and/or the example image data handler 230 of FIG. 2) .
  • the flowchart 500 may be executed to facilitate the transmitting of the image packet corresponding to a screen mask to the display panel (404 of FIG. 4) .
  • the apparatus may encode a screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may retrieve a screen mask from a memory accessible to the apparatus.
  • the apparatus may retrieve the screen mask 235 from the system memory 124.
  • the apparatus may populate a payload portion of an image packet with the encoded screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may populate the packet payload section 240b based on the encoded screen mask.
  • the apparatus may generate an image packet corresponding to a screen mask based on the populated payload portion, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may generate the image packet 240 including the packet header section 240a, the packet payload section 240b, and the packet footer section 240c.
  • the example apparatus may include the populated payload portion in the packet payload section 240b.
  • the apparatus may populate the packet header section 240a with a data identifier corresponding to the transmitting of screen mask information.
  • the apparatus may transmit the image packet corresponding to the screen mask to the display panel, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may transmit the image packet 240 to the display panel 255 via the communication bus 250. In some examples, the apparatus may transmit the image packet to the display panel after the generating of the image packet. In some examples, the apparatus may wait to receive a fetch message from the display panel before transmitting the image packet to the display panel.
  • control may then return to 502 to wait to encode another screen mask.
  • FIG. 6 illustrates an example flowchart 600 of an example method in accordance with one or more techniques disclosed herein.
  • the method may be performed by an apparatus such as a host processor and/or a component of the host processor (e.g., the example processing unit 120 of FIG. 1, the example display processor 127 of FIG. 1, the example host processor 205 of FIG. 2, and/or the example image data handler 230 of FIG. 2) .
  • the flowchart 600 may be executed to facilitate the populating of the payload portion of the image packet with modified image data based on image data for a frame and a screen mask (406 of FIG. 4) .
  • the flowchart 600 may be executed when the host processor does not transmit a screen mask to the display panel and the display panel does not have access to a screen mask (e.g., does not have access to a hard-coded screen mask) .
  • the apparatus may fetch image data for a frame, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the image data handler 230 may obtain the image data 225 from the frame buffer 215.
  • the image data 225 may correspond to a standard, rectangle-shaped image.
  • the apparatus may fetch portions of the image data 225 based on a screen mask. For example, the apparatus may use the screen mask to determine which portions of the image data 225 correspond to pixels within the visible area of the display panel and fetch the respective portions of the image data 225.
  • the apparatus may identify pixels of the image data corresponding to non-visible areas of the display panel based on the screen mask and locations of the respective pixels, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, for each pixel of the image data 225, the apparatus may determine whether the location of the respective pixel corresponds to the visible area 305 defined by the screen masks 235, 300 or to the non-visible area (s) 310 defined by the screen masks 235, 300.
  • the apparatus may replace pixel data for locations of the image data corresponding to non-visible areas of the display panel with NULL values, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, for each pixel location that is within the non-visible area (s) 310 defined by the screen masks 235, 300, the apparatus may replace the respective pixel data with a NULL value. It should be appreciated that in some examples, the apparatus may replace the respective pixel data with a black pixel.
  • the apparatus may populate a payload portion of an image packet with the modified image data, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may populate the payload portion 240b with pixel data corresponding to visible area pixel elements (e.g., defined by the screen mask based on the visible area (s) 305 of the screen mask 300) .
  • the apparatus may also populate the payload portion 240b with NULL values for pixel locations corresponding to the non-visible area (s) 310.
  • the modified image data may be associated with a size relatively smaller than the original image data that was fetched (e.g., at 602) .
  • control may then return to 602 to wait to fetch image data for another frame.
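The flow of flowchart 600 — substituting a NULL value for every pixel location that the screen mask marks as non-visible, so the panel needs no mask of its own — can be sketched as below. The nested-list image/mask layout and the use of Python's `None` as the NULL value are illustrative assumptions.

```python
NULL = None  # stand-in for the NULL pixel value carried in the payload

def null_out(image, mask):
    """Replace pixel data at non-visible locations (mask value 0) with NULL."""
    return [
        [pixel if visible else NULL for pixel, visible in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

image = [[10, 20], [30, 40]]   # standard rectangular frame
mask  = [[1, 0], [1, 1]]       # 1 = visible area, 0 = non-visible (e.g., a notch)
payload = null_out(image, mask)
# payload → [[10, None], [30, 40]]
```

The NULL entries keep the payload's row/column structure intact, which is why a mask-less panel can still map each value to a pixel location.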
  • FIG. 7 illustrates an example flowchart 700 of an example method in accordance with one or more techniques disclosed herein.
  • the method may be performed by an apparatus such as a host processor and/or a component of the host processor (e.g., the example processing unit 120 of FIG. 1, the example display processor 127 of FIG. 1, the example host processor 205 of FIG. 2, and/or the example image data handler 230 of FIG. 2) .
  • the flowchart 700 may be executed to facilitate the populating of the payload portion of the image packet with modified image data based on image data for a frame and a screen mask (406 of FIG. 4) .
  • the flowchart 700 may be executed when the host processor transmits (e.g., periodically, a-periodically, or as a one-time event) a screen mask to the display panel and/or the display panel has access to a screen mask (e.g., the display panel is capable of storing a previously received screen mask and/or the display panel has access to a hard-coded screen mask) .

  • the apparatus may fetch image data for a frame, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the image data handler 230 may obtain the image data 225 from the frame buffer 215.
  • the image data 225 may correspond to a standard, rectangle-shaped image.
  • the apparatus may fetch portions of the image data 225 based on a screen mask. For example, the apparatus may use the screen mask to determine which portions of the image data 225 correspond to pixels within the visible area of the display panel and fetch the respective portions of the image data 225.
  • the apparatus may identify pixels of the image data corresponding to non-visible areas of the display panel based on the screen mask and locations of the respective pixels, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, for each pixel of the image data 225, the apparatus may determine whether the location of the respective pixel corresponds to the visible area 305 defined by the screen masks 235, 300 or to the non-visible area (s) 310 defined by the screen masks 235, 300.
  • the apparatus may remove pixel data for locations of the image data corresponding to non-visible areas of the display panel based on the screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, for each pixel location that is within the non-visible area (s) 310 defined by the screen masks 235, 300, the apparatus may remove (or discard) the respective pixel data of the image data 225.
  • the apparatus may populate a payload portion of an image packet with the modified image data, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may populate the payload portion 240b with pixel data corresponding to visible area pixel elements (e.g., defined by the screen mask based on the visible area (s) 305 of the screen mask 300) .
  • the modified image data may be associated with a size relatively smaller than the original image data that was fetched (e.g., at 702) .
  • control may then return to 702 to wait to fetch image data for another frame.
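Flowchart 700 takes the second approach: pixel data for non-visible locations is removed entirely rather than replaced with NULL, which is possible because the panel has (or has been sent) the screen mask and can reconstruct the layout itself. A compact sketch, again with an assumed nested-list layout:

```python
def compact(image, mask):
    """Keep only pixel data whose location falls in the visible area (mask 1)."""
    return [
        pixel
        for img_row, mask_row in zip(image, mask)
        for pixel, visible in zip(img_row, mask_row)
        if visible
    ]

image = [[10, 20], [30, 40]]
mask  = [[1, 0], [1, 1]]
payload = compact(image, mask)
# payload → [10, 30, 40]: three values instead of four, in raster order
```

Compared with the NULL-substitution variant, this yields a strictly smaller payload, at the cost of requiring the panel to hold a copy of the mask.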
  • FIG. 8 illustrates an example flowchart 800 of an example method, in accordance with one or more techniques disclosed herein.
  • the method may be performed by an apparatus such as the display panel 131 of FIGs. 1 and/or 3 and/or a component of the display panel (e.g., the display driver 265 of FIG. 2 and/or the packet handler 280 of FIG. 2).
  • the apparatus may be incapable of accessing a screen mask (e.g., a previously received screen mask and/or a hard-coded screen mask) .
  • the apparatus may receive an image packet, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may receive the image packet 240 via the bus interface 260 over the communication bus 250.
  • the apparatus may receive the image packet 240 in response to transmitting a fetch message.
  • the apparatus may parse pixel data of the image data of the received image packet, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may process the pixel data for each pixel location of the image data of the image packet 240 to determine whether the pixel data corresponds to a NULL value or to a valid value (e.g., a non-NULL value) .
  • the apparatus may disregard displaying of image data for locations with pixel data set to NULL, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, while parsing the pixel data of the image data, the apparatus may disregard the further processing of pixels with pixel data set to NULL.
  • the setting of the pixel data to the NULL value may indicate that the respective pixel location is not located within the visible area of the display panel and, thus, does not need to be processed for displaying at the display panel.
  • the apparatus may display remaining image data based on respective pixel data, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may process the pixel data for the pixel locations within the visible area (s) of the display panel for presentment via the display panel. The apparatus may cause the display screen 275 to display the image data associated with the image packet 240 based on the pixel data for the pixel locations within the visible area (s) of the display screen 275.
  • control may then return to 802 to wait to receive another image packet.
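The panel-side parsing of flowchart 800 — walk the payload, skip NULL entries, and drive only the remaining pixel locations — can be sketched as follows. The `(row, col)` addressing is an illustrative assumption about how the panel indexes its pixel elements.

```python
def display(payload):
    """Return (location, pixel) pairs to drive; NULL entries are disregarded."""
    driven = []
    for row, payload_row in enumerate(payload):
        for col, pixel in enumerate(payload_row):
            if pixel is None:        # NULL: outside the visible area, skip it
                continue
            driven.append(((row, col), pixel))
    return driven

shown = display([[10, None], [30, 40]])
# shown → [((0, 0), 10), ((1, 0), 30), ((1, 1), 40)]
```

Note that the panel never consults a screen mask here; the NULL values in the payload carry all the visibility information it needs.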
  • FIG. 9 illustrates an example flowchart 900 of an example method, in accordance with one or more techniques disclosed herein.
  • the method may be performed by an apparatus such as the display panel 131 of FIGs. 1 and/or 3 and/or a component of the display panel (e.g., the display driver 265 of FIG. 2 and/or the packet handler 280 of FIG. 2).
  • the apparatus may access a screen mask (e.g., a previously received screen mask and/or a hard-coded screen mask) to facilitate the presentment of image data.
  • the apparatus may receive an image packet, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may receive the image packet 240 via the bus interface 260 over the communication bus 250.
  • the apparatus may receive the image packet 240 in response to transmitting a fetch message.
  • the apparatus may determine whether the image packet corresponds to a screen mask or to image data, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may decode and parse the packet header section 240a of the image packet 240 to determine the data identifier and to determine whether the image packet corresponds to a screen mask or to image data.
  • the host processor 205 may transmit (e.g., periodically transmit, a-periodically transmit, or transmit as a one-time event) the screen mask to the display panel 255.
  • the display panel 255 may receive a screen mask, followed by image data for a sequence of frames (e.g., one or more frames) , and then receive another screen mask.
  • the example display panel 255 may verify whether the received image packet corresponds to a screen mask or to image data.
  • the display panel 255 may be capable of accessing a hard-coded screen mask and the host processor 205 may not transmit a screen mask to the display panel 255. That is, the host processor 205 may modify the image data prior to transmission with the assumption that the display panel 255 is capable of accessing a hard-coded screen mask and, thus, that the host processor 205 does not need to transmit a screen mask to the display panel 255.
  • the apparatus may execute the method of flowchart 900 by receiving the image packet at 902 and then proceeding to 914 to process pixel data for an image packet based on the screen mask.
  • the apparatus may modify the display based on the screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may determine to set pixel elements of the display screen 275 that are within the visible area 305 of the screen mask 235, 300 to the visible state and may set pixel elements of the display screen 275 that are within the non-visible area (s) 310 of the screen mask 235, 300 to the non-visible state (e.g., may determine to activate or deactivate respective pixel elements of the display screen 275) .
  • the apparatus may cause certain pixel elements to transition to the visible state, may cause certain pixel elements to transition to the non-visible state, and/or may maintain the current visibility state of certain pixel elements.
  • the apparatus may store the screen mask in a local memory (e.g., the buffer 270) for use when processing subsequent image data.
  • the apparatus may receive an image packet corresponding to image data, as described in connection with the examples in FIGs. 1, 2, and/or 3. It should be appreciated that in some examples, the apparatus may wait to receive the image packet corresponding to image data. Control then proceeds to 914 to process pixel data of the image packet based on the screen mask.
  • the apparatus may determine whether a screen mask associated with the image data was received, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may periodically receive a screen mask and the image data may be within the periodicity of the screen mask transmissions.
  • control proceeds to 914 to process pixel data of the image packet based on the screen mask.
  • the apparatus may retrieve a screen mask from local memory, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may obtain a screen mask from the buffer 270.
  • the screen mask obtained from the local memory may be a hard-coded screen mask.
  • the screen mask obtained from the local memory may be a screen mask that was previously provided to the display panel 255 and stored by the display panel 255 in the local memory (e.g., the buffer 270) .
  • the apparatus may process pixel data based on the screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may process the information included in the packet payload portion 240b of the image packet 240 to determine how to display the respective image data.
  • the modified image data of the image packet 240 may not include pixel data for pixel locations that are outside the visible area 305 of the display panel 255.
  • the apparatus may use the screen mask to map the pixel data to respective pixel elements of the display screen 275.
  • the apparatus may map the pixel data to activated pixels of the display panel (e.g., based on the screen mask) .
  • the apparatus may also process the respective pixel data to determine how to display the corresponding image data.
  • the apparatus may display the image data, as described in connection with the examples in FIGs. 1, 2, and/or 3.
  • the apparatus may cause the display screen 275 to display the image data associated with the image packet 240.
  • control may then return to 902 to wait to receive another image packet.
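The mask-based mapping of flowchart 900 — walking the stored screen mask and assigning each value of the compacted payload to the next visible pixel location — can be sketched as below. The raster-order assignment and `(row, col)` addressing are illustrative assumptions.

```python
def map_to_panel(payload, mask):
    """Map a compacted payload back onto visible pixel locations per the mask."""
    values = iter(payload)
    mapped = {}
    for row, mask_row in enumerate(mask):
        for col, visible in enumerate(mask_row):
            if visible:  # next payload value belongs to this visible location
                mapped[(row, col)] = next(values)
    return mapped

mask = [[1, 0], [1, 1]]            # retrieved from local memory or a prior packet
mapped = map_to_panel([10, 30, 40], mask)
# mapped → {(0, 0): 10, (1, 0): 30, (1, 1): 40}
```

Because the payload and mask are both traversed in raster order, the mapping is unambiguous as long as the payload length equals the number of visible locations in the mask.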
  • the present disclosure can improve the efficiency of transferring image data from a host processor to a display panel.
  • disclosed techniques may use a screen mask to reduce the pixel data transferred from a host processor to a display panel over a communication bus based on whether the corresponding pixel locations are within a visible area or a non-visible area of the display panel.
  • the described techniques may work with different shapes associated with the image data and the visible area of the display area. For example, as long as a screen mask is provided to apply to the image data, the disclosed techniques may facilitate reducing the amount of data transferred from the host processor to the display panel.
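To give a rough sense of the data reduction, consider a round (watch-style) panel driven from a square frame: only pixels inside the inscribed circle need to be transferred. The sketch below estimates that fraction by pixel counting; the 100-pixel frame size is an arbitrary illustrative choice.

```python
def visible_fraction(size):
    """Fraction of a size x size frame whose pixel centers lie in the inscribed circle."""
    r = size / 2.0
    inside = sum(
        1
        for y in range(size)
        for x in range(size)
        if (x + 0.5 - r) ** 2 + (y + 0.5 - r) ** 2 <= r * r
    )
    return inside / (size * size)

frac = visible_fraction(100)
# frac ≈ 0.785 (about pi/4), i.e. roughly a fifth of the pixel data need not be sent
```

Masks with notches or rounded corners remove far fewer pixels than a full circle, but the same counting gives the expected savings for any mask shape.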
  • a method or apparatus for display processing may be a display processor, a display processing unit, a GPU, an application processor, a host processor, a video processor, or some other processor that can perform display processing, and/or a component thereof.
  • the apparatus may be the processing unit 120 within the device 104, the display processor 127 within the device 104, or may be some other hardware within device 104 or another device.
  • the apparatus may include means for obtaining image data for a frame.
  • the apparatus may also include means for determining modified image data for the frame based on a screen mask.
  • the screen mask may be associated with a display panel and may be configured to define a visible area of the display panel.
  • the modified image data may include less pixel data than the fetched image data.
  • the apparatus may also include means for transmitting the modified image data to the display panel.
  • the apparatus may also include means for transmitting the screen mask to the display panel prior to the transmitting of the modified image data to the display panel.
  • the apparatus may include means for transmitting an updated screen mask to the display panel prior to the transmitting of subsequently modified image data.
  • the apparatus may also include means for determining whether a pixel of the obtained image data corresponds to the visible area of the display panel based on the screen mask and a location of the pixel. Further, the apparatus may include means for populating a payload portion of an image packet with a value based on the determination.
  • the apparatus may also include means for determining whether a pixel of the obtained image data corresponds to a non-visible area of the display panel based on the screen mask and a location of the pixel.
  • the apparatus may include means for excluding the image data of locations corresponding to the non-visible area.
  • the apparatus may also include means for generating an image packet based on the modified image data, where the image packet includes at least a data identifier portion and a payload portion.
  • the apparatus may also include means for receiving the screen mask. Further, the apparatus may include means for determining which pixels of the display panel to activate based on the screen mask. Also, the apparatus may include means for causing the displaying of the modified image data via the activated pixels of the display panel.
  • the apparatus may also include means for receiving the modified image data.
  • the apparatus may include means for mapping the modified image data to the activated pixels of the display panel.
  • the apparatus may also include means for receiving the screen mask from a local memory of the display panel.
  • the apparatus may also include means for receiving the screen mask from a host processor prior to receiving the modified image data.
  • the described display and/or graphics processing techniques can be used by a display processor, a display processing unit (DPU) , a GPU, a video processor, or some other processor that can perform display processing to implement the techniques described herein for improving image data transfer efficiency in portable devices.
  • the term “or” may be interpreted as “and/or” where context does not dictate otherwise. Additionally, while phrases such as “one or more” or “at least one” or the like may have been used for some features disclosed herein but not others, the features for which such language was not used may be interpreted to have such a meaning implied where context does not dictate otherwise.
  • the functions described herein may be implemented in hardware, software, firmware, or any combination thereof.
  • Where the term “processing unit” has been used throughout this disclosure, such processing units may be implemented in hardware, software, firmware, or any combination thereof. If any function, processing unit, technique described herein, or other module is implemented in software, the function, processing unit, technique described herein, or other module may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media may include computer data storage media or communication media including any medium that facilitates transfer of a computer program from one place to another. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure.
  • such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices.
  • Disk and disc, as used herein, include compact disc (CD) , laser disc, optical disc, digital versatile disc (DVD) , floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • a computer program product may include a computer-readable medium.
  • the code may be executed by one or more processors, such as one or more digital signal processors (DSPs) , general purpose microprocessors, application specific integrated circuits (ASICs) , arithmetic logic units (ALUs) , field programmable logic arrays (FPGAs) , or other equivalent integrated or discrete logic circuitry.
  • the techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs, e.g., a chip set.
  • Various components, modules or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily need realization by different hardware units. Rather, as described above, various units may be combined in any hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

Abstract

The present disclosure relates to methods and apparatus for display processing. For example, disclosed techniques facilitate improving image data transfer efficiency for portable devices. Example techniques disclosed herein can obtain image data for a frame. Example techniques disclosed herein can also determine modified image data for the frame based on a screen mask. In some examples, the screen mask may be associated with a display panel and be configured to define a visible area of the display panel. In some examples, the modified image data may have less pixel data than the obtained image data. Further, example techniques disclosed herein can transmit the modified image data to the display panel.

Description

METHODS AND APPARATUS TO IMPROVE IMAGE DATA TRANSFER EFFICIENCY FOR PORTABLE DEVICES

TECHNICAL FIELD
The present disclosure relates generally to processing systems and, more particularly, to one or more techniques for graphics processing.
INTRODUCTION
Computing devices often utilize a graphics processing unit (GPU) to accelerate the rendering of graphical data for display. Such computing devices may include, for example, computer workstations, mobile phones, such as so-called smartphones, embedded systems, personal computers, tablet computers, and video game consoles. GPUs execute a graphics processing pipeline that includes one or more processing stages that operate together to execute graphics processing commands and output a frame. A central processing unit (CPU) may control the operation of the GPU by issuing one or more graphics processing commands to the GPU. Modern day CPUs are typically capable of concurrently executing multiple applications, each of which may need to utilize the GPU during execution.
Portable electronic devices, including smartphones and wearable devices, may present graphical content on a display. However, with the goal of achieving increased screen-to-body ratios, there is an increasing need to present graphical content on displays having irregular shapes.
SUMMARY
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In an aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. The apparatus may be a display processor, a display processing unit (DPU) , a graphics processing unit (GPU) , or a video processor (sometimes generally referred to as a “host processor” ) . The apparatus can obtain  image data for a frame. Additionally, the apparatus can determine modified image data for the frame based on a screen mask. In some examples, the screen mask may be associated with a display panel and may be configured to define a visible area of the display panel. In some examples, the modified image data may include less pixel data than the obtained image data. The apparatus can also transmit the modified image data to the display panel.
In some examples, the apparatus may transmit the screen mask to the display panel prior to the transmitting of the modified image data to the display panel. In some examples, a shape associated with the obtained image data may correspond to a rectangular shaped image and a shape associated with the modified image data may correspond to a shape of the visible area of the display panel. In some examples, the screen mask may be pre-generated and stored in a system memory accessible for determining the modified image data. In some examples, the screen mask may be updated based on a change associated with at least one characteristic of a user interface for presentment via the display panel. In some examples, the apparatus may transmit the updated screen mask to the display panel prior to the transmitting of subsequently modified image data. In some examples, the apparatus may determine whether a pixel of the obtained image data corresponds to the visible area of the display panel based on the screen mask and a location of the pixel. Further, the apparatus may populate a payload portion of an image packet with a value based on the determination. In some examples, the apparatus may determine whether a pixel of the obtained image data corresponds to a non-visible area of the display panel based on the screen mask and a location of the pixel. Additionally, the apparatus may exclude the image data of locations corresponding to the non-visible area. In some examples, the apparatus may generate an image packet based on the modified image data, and where the image packet may include at least a data identifier portion and a payload portion. In some examples, the apparatus may transmit the modified image data by transmitting the image packet to the display panel.
In some examples, the apparatus may include the display panel. In some examples, the apparatus may receive the screen mask. Further, the apparatus may determine which pixels of the display panel to activate based on the screen mask. Also, the apparatus may cause the displaying of the modified image data via the activated pixels of the display panel. In some examples, the apparatus may receive the modified image data. Further, the apparatus may map the modified image data to the activated pixels  of the display panel. In some examples, the apparatus may receive the screen mask from a local memory of the display panel. In some examples, the apparatus may receive the screen mask from a host processor prior to receiving the modified image data. In some examples, the apparatus may include a wireless communication device.
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a block diagram that illustrates an example content generation system, in accordance with one or more techniques of this disclosure.
FIG. 2 is a block diagram that illustrates an example display panel system, in accordance with one or more techniques of this disclosure.
FIG. 3 illustrates example screen masks, in accordance with one or more techniques of this disclosure.
FIGs. 4 to 7 illustrate example flowcharts of example methods that may be executed by the example host processor of FIG. 2, in accordance with one or more techniques of this disclosure.
FIGs. 8 and 9 illustrate example flowcharts of example methods that may be executed by the example display panel of FIGs. 1 and/or 2, in accordance with one or more techniques of this disclosure.
DETAILED DESCRIPTION
Example techniques disclosed herein provide for efficient transfer of image data from a host processor to a display panel. For example, the image data may correspond to a first shape, while the display panel may include a display area (or visible area) that corresponds to a second shape that is different than the first shape. In some examples, techniques disclosed herein facilitate applying a screen mask to the image data to reduce the size of the image data. For example, the screen mask may define the visible area of the display panel. By applying the screen mask to the image data, techniques disclosed herein reduce the size of the image data by modifying pixel data for pixels corresponding to non-visible areas of the display panel. For example, disclosed techniques may discard or exclude pixel data for pixels corresponding to non-visible areas of the display panel, may replace the respective pixel data with a NULL value, or may replace the respective pixel data with a black pixel. Example techniques disclosed herein may then generate an image packet based on the screen mask and the modified image data for transmitting to the display panel. It should be appreciated that modifying the pixel data for one or more pixels of the image data enables the generated image packet to have less pixel data than the original image data, thereby reducing the amount of data being transmitted from the host processor to the display panel.
Various aspects of systems, apparatuses, computer program products, and methods are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of this disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of this disclosure is intended to cover any aspect of the systems, apparatuses, computer program products, and methods disclosed herein, whether implemented independently of, or combined with, other aspects of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. Any aspect disclosed herein may be embodied by one or more elements of a claim.
Although various aspects are described herein, many variations and permutations of these aspects fall within the scope of this disclosure. Although some potential benefits and advantages of aspects of this disclosure are mentioned, the scope of this disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of this disclosure are intended to be broadly applicable to different wireless technologies, system configurations, networks, and transmission protocols, some of which are illustrated by way of example in the figures and in the following description. The detailed description and drawings are merely illustrative of this disclosure rather than limiting, the scope of this disclosure being defined by the appended claims and equivalents thereof.
Several aspects are presented with reference to various apparatus and methods. These apparatus and methods are described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, and the like (collectively referred to as “elements” ) . These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors (which may also be referred to as processing units) . Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs) , general purpose GPUs (GPGPUs) , central processing units (CPUs) , application processors, digital signal processors (DSPs) , reduced instruction set computing (RISC) processors, systems-on-chip (SOC) , baseband processors, application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , programmable logic devices (PLDs) , state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software can be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The term application may refer to software. As described herein, one or more techniques may refer to an application, i.e., software, being configured to perform one or more functions. In such examples, the application may be stored on a memory, e.g., on-chip memory of a processor, system memory, or any other memory. Hardware described herein, such as a processor, may be configured to execute the application. For example, the application may be described as including code that, when executed by the hardware, causes the hardware to perform one or more techniques described herein. 
As an example, the hardware may access the code from a memory and execute the code accessed from the memory to perform one or more techniques described herein. In  some examples, components are identified in this disclosure. In such examples, the components may be hardware, software, or a combination thereof. The components may be separate components or sub-components of a single component.
Accordingly, in one or more examples described herein, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random access memory (RAM) , a read-only memory (ROM) , an electrically erasable programmable ROM (EEPROM) , optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
In general, this disclosure describes techniques for having a graphics processing pipeline in a single device or multiple devices, improving the rendering of graphical content, reducing the load on a communication interface (e.g., a bus) , and/or reducing the load of a processing unit, i.e., any processing unit configured to perform one or more techniques described herein, such as a GPU, a DPU, and the like. For example, this disclosure describes techniques for graphics and/or display processing in any device that utilizes a display. Other example benefits are described throughout this disclosure.
As used herein, instances of the term “content” may refer to “graphical content, ” “image, ” and vice versa. This is true regardless of whether the terms are being used as an adjective, noun, or other parts of speech. In some examples, as used herein, the term “graphical content” may refer to content produced by one or more processes of a graphics processing pipeline. In some examples, as used herein, the term “graphical content” may refer to content produced by a processing unit configured to perform graphics processing. In some examples, as used herein, the term “graphical content” may refer to content produced by a graphics processing unit.
In some examples, as used herein, the term “display content” may refer to content generated by a processing unit configured to perform display processing. In some examples, as used herein, the term “display content” may refer to content generated  by a display processing unit. Graphical content may be processed to become display content. For example, a graphics processing unit may output graphical content, such as a frame, to a buffer (which may be referred to as a framebuffer) . A display processing unit may read the graphical content, such as one or more frames from the buffer, and perform one or more display processing techniques thereon to generate display content. For example, a display processing unit may be configured to perform composition on one or more rendered layers to generate a frame. As another example, a display processing unit may be configured to compose, blend, or otherwise combine two or more layers together into a single frame. A display processing unit may be configured to perform scaling, e.g., upscaling or downscaling, on a frame. In some examples, a frame may refer to a layer. In other examples, a frame may refer to two or more layers that have already been blended together to form the frame, i.e., the frame includes two or more layers, and the frame that includes two or more layers may subsequently be blended.
FIG. 1 is a block diagram that illustrates an example content generation system 100 configured to implement one or more techniques of this disclosure. The content generation system 100 includes a device 104. The device 104 may include one or more components or circuits for performing various functions described herein. In some examples, one or more components of the device 104 may be components of an SOC. The device 104 may include one or more components configured to perform one or more techniques of this disclosure. In the example shown, the device 104 may include a processing unit 120 and a system memory 124. In some aspects, the device 104 can include a number of additional or alternative components, e.g., a communication interface 126, a transceiver 132, a receiver 128, a transmitter 130, a display processor 127, and a display panel 131 (sometimes referred to as a “display client” ) . Reference to the display panel 131 may refer to one or more displays. For example, the display panel 131 may include a single display or multiple displays. The display panel 131 may include a first display and a second display. In further examples, the results of the graphics processing may not be displayed on the device, e.g., the first and second displays may not receive any frames for presentment thereon. Instead, the frames or graphics processing results may be transferred to another device. In some aspects, this can be referred to as split-rendering.
The processing unit 120 may include an internal memory 121. The processing unit 120 may be configured to perform graphics processing, such as in a graphics  processing pipeline 107. In some examples, the device 104 may include a display processor, such as the display processor 127, to perform one or more display processing techniques on one or more frames generated by the processing unit 120 before presentment by the display panel 131. The display processor 127 may be configured to perform display processing. For example, the display processor 127 may be configured to perform one or more display processing techniques on one or more frames generated by the processing unit 120. The display panel 131 may be configured to display or otherwise present frames processed by the display processor 127. In some examples, the display panel 131 may include one or more of: a liquid crystal display (LCD) , a plasma display, an organic light emitting diode (OLED) display, a projection display device, an augmented reality display device, a virtual reality display device, a head-mounted display, or any other type of display device.
Memory external to the processing unit 120, such as system memory 124, may be accessible to the processing unit 120. For example, the processing unit 120 may be configured to read from and/or write to external memory, such as the system memory 124. The processing unit 120 may be communicatively coupled to the system memory 124 over a bus. In some examples, the processing unit 120 and the system memory 124 may be communicatively coupled to each other over the bus or a different connection.
It should be appreciated that in some examples, the device 104 may include a content encoder/decoder configured to receive graphical and/or display content from any source, such as the system memory 124 and/or the communication interface 126. The system memory 124 may be configured to store received encoded or decoded content. In some examples, the content encoder/decoder may be configured to receive encoded or decoded content, e.g., from the system memory 124 and/or the communication interface 126, in the form of encoded pixel data. In some examples, the content encoder/decoder may be configured to encode or decode any content.
The internal memory 121 or the system memory 124 may include one or more volatile or non-volatile memories or storage devices. In some examples, internal memory 121 or the system memory 124 may include RAM, SRAM, DRAM, erasable programmable ROM (EPROM) , electrically erasable programmable ROM (EEPROM) , flash memory, a magnetic data media or an optical storage media, or any other type of memory.
The internal memory 121 or the system memory 124 may be a non-transitory storage medium according to some examples. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that internal memory 121 or the system memory 124 is non-movable or that its contents are static. As one example, the system memory 124 may be removed from the device 104 and moved to another device. As another example, the system memory 124 may not be removable from the device 104.
The processing unit 120 may be a central processing unit (CPU) , a graphics processing unit (GPU) , a general purpose GPU (GPGPU) , or any other processing unit that may be configured to perform graphics processing. In some examples, the processing unit 120 may be integrated into a motherboard of the device 104. In some examples, the processing unit 120 may be present on a graphics card that is installed in a port in a motherboard of the device 104, or may be otherwise incorporated within a peripheral device configured to interoperate with the device 104. The processing unit 120 may include one or more processors, such as one or more microprocessors, GPUs, application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , arithmetic logic units (ALUs) , digital signal processors (DSPs) , discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combinations thereof. If the techniques are implemented partially in software, the processing unit 120 may store instructions for the software in a suitable, non-transitory computer-readable storage medium, e.g., internal memory 121, and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., may be considered to be one or more processors.
In some aspects, the content generation system 100 can include a communication interface 126. The communication interface 126 may include a receiver 128 and a transmitter 130. The receiver 128 may be configured to perform any receiving function described herein with respect to the device 104. Additionally, the receiver 128 may be configured to receive information, e.g., eye or head position information, rendering commands, or location information, from another device. The transmitter 130 may be configured to perform any transmitting function described herein with respect to the device 104. For example, the transmitter 130 may be configured to  transmit information to another device, which may include a request for content. The receiver 128 and the transmitter 130 may be combined into a transceiver 132. In such examples, the transceiver 132 may be configured to perform any receiving function and/or transmitting function described herein with respect to the device 104.
In some examples, the graphical content from the processing unit 120 for display via the display panel 131 is not static and may be changing. Accordingly, the display processor 127 may periodically refresh the graphical content displayed via the display panel 131. For example, the display processor 127 may periodically retrieve graphical content from the system memory 124, wherein the graphical content may have been updated by the execution of an application (and/or the processing unit 120) that outputs the graphical content to the system memory 124.
Although shown as separate components in the illustrated example of FIG. 1, it should be appreciated that one or more of the processing unit 120, the system memory 124, the communication interface 126, the display processor 127, and/or the display panel 131 may be combined. For example, the display processor 127 and the display panel 131 may be combined, the processing unit 120 and the display processor 127 may be combined, the processing unit 120 and the system memory 124 may be combined, etc.
Referring again to FIG. 1, in certain aspects, the processing unit 120 may include a screen mask facilitating component 198 configured to fetch image data for a frame. The example screen mask facilitating component 198 may also be configured to generate an image packet including modified image data based on the fetched image data and a screen mask. In some examples, the screen mask may be associated with a display panel and may define a visible area of the display panel. In some examples, the modified image data may have less pixel data than the fetched image data. Also, the example screen mask facilitating component 198 may be configured to transmit the image packet with the modified image data to the display panel.
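The flow attributed to the screen mask facilitating component 198 above may be sketched, purely for illustration, as follows; the function and variable names are assumptions and the screen mask is modeled as a two-dimensional boolean grid, not a format taken from this disclosure:

```python
# Hypothetical sketch: fetch image data for a frame, apply a screen mask so
# the modified image data has less pixel data than the fetched data, wrap the
# result in an image packet, and transmit it to the display panel.
def process_frame(fetch, mask, transmit):
    image = fetch()  # fetch image data for a frame (rectangular pixel grid)
    # Keep only pixel data that falls inside the visible area defined by the
    # screen mask; non-visible locations are excluded from the payload.
    modified = [px for row, mrow in zip(image, mask)
                for px, m in zip(row, mrow) if m]
    packet = {"payload": modified, "word_count": len(modified)}
    transmit(packet)  # send the image packet toward the display panel
    return packet
```

For a 2x2 frame in which one corner pixel lies outside the visible area, the resulting payload carries three pixel values instead of four, which is the transfer-efficiency effect described above.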
The example screen mask facilitating component 198 may also be configured to retrieve the screen mask from a system memory accessible to the display processor. In some examples, the screen mask may be hard-coded (or pre-generated) for the visible area of the display panel. Additionally, in some examples, the screen mask facilitating component 198 may be configured to update the screen mask (e.g., in real-time, during runtime, etc. ) based on a change of at least one characteristic of a user interface being displayed via the display panel.
In some examples, the screen mask facilitating component 198 may be configured to transmit the screen mask to the display panel prior to the transmitting of the image packet with the modified image data to the display panel. In some examples, a shape associated with the fetched image data may correspond to a rectangular shaped image and a shape associated with the modified image data may correspond to a shape of the visible area of the display panel. In some examples, the screen mask may be pre-generated and stored in a system memory accessible for generating the modified image data. In some examples, the screen mask may be updated based on a change associated with at least one characteristic of a user interface for presentment via the display panel. In some examples, the screen mask facilitating component 198 may be configured to transmit the updated screen mask to the display panel prior to the transmitting of a subsequent image packet. In some examples, the screen mask facilitating component 198 may be configured to determine whether a pixel of the fetched image data corresponds to the visible area of the display panel based on the screen mask and a location of the pixel. Further, the screen mask facilitating component 198 may be configured to populate a payload portion of the image packet with a value based on the determination. In some examples, the screen mask facilitating component 198 may be configured to determine whether a pixel of the fetched image data corresponds to a non-visible area of the display panel based on the screen mask and a location of the pixel. Additionally, the screen mask facilitating component 198 may be configured to exclude, from the image packet, the image data of locations corresponding to the non-visible area. In some examples, the image packet may include at least a data identifier portion and a payload portion.
In some examples, the screen mask facilitating component 198 may be included with the display panel 131. In some examples, the screen mask facilitating component 198 may be configured to receive the screen mask. Further, the screen mask facilitating component 198 may be configured to determine which pixels of the display panel to activate based on the screen mask. Also, the screen mask facilitating component 198 may be configured to cause the displaying of the modified image data via the activated pixels of the display panel. In some examples, the screen mask facilitating component 198 may be configured to receive the image packet with the modified image data. Further, the screen mask facilitating component 198 may be configured to map the modified image data of the received image packet to the activated pixels of the display panel. In some examples, the screen mask facilitating component 198 may be  configured to receive the screen mask from a local memory of the display panel. In some examples, the screen mask facilitating component 198 may be configured to receive the screen mask from a host processor prior to receiving the image packet with the modified image data.
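On the display panel side, the mapping of modified image data onto activated pixels described above may be sketched as follows; the name, the boolean-grid mask representation, and the use of 0 for a non-activated (black) pixel element are assumptions for illustration:

```python
BLACK = 0  # assumed placeholder value for a non-activated pixel element

def map_payload_to_panel(payload, mask):
    """Scatter a compacted payload of visible pixels back onto the panel.

    mask is a 2-D boolean grid; True marks an activated (visible) pixel
    element, which consumes the next payload value in scan order, while
    False marks a non-visible element left black."""
    it = iter(payload)
    return [[next(it) if m else BLACK for m in row] for row in mask]
```

This is the inverse of the host-side masking step: the panel only needs the screen mask, received in advance, to reconstruct where each transmitted pixel value belongs.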
As described herein, a device, such as the device 104, may refer to any device, apparatus, or system configured to perform one or more techniques described herein. For example, a device may be a server, a base station, user equipment, a client device, a station, an access point, a computer, e.g., a personal computer, a desktop computer, a laptop computer, a tablet computer, a computer workstation, or a mainframe computer, an end product, an apparatus, a phone, a smart phone, a server, a video game platform or console, a handheld device, e.g., a portable video game device or a personal digital assistant (PDA) , a wearable computing device, e.g., a smart watch, an augmented reality device, or a virtual reality device, a non-wearable device, a display or display device, a television, a television set-top box, an intermediate network device, a digital media player, a video streaming device, a content streaming device, an in-car computer, any mobile device, any device configured to generate graphical content, or any device configured to perform one or more techniques described herein. Processes herein may be described as performed by a particular component (e.g., a GPU) , but, in further embodiments, can be performed using other components (e.g., a CPU) , consistent with disclosed embodiments.
Portable electronic devices, including smartphones and wearable devices, may present graphical content on a display. For example, a host processor of the portable electronic device may generate graphical content for presentment via a display panel. In some examples, the host processor may be configured to generate the graphical content based on a standard shape, such as a rectangular-shaped image. For example, images generated in accordance with the MIPI DSI (Mobile Industry Processor Interface, Display Serial Interface) protocol correspond to a rectangular-shaped image. The host processor may then transmit the graphical content to the display panel for presentment on a display.
However, as technology has improved, it has become possible to utilize irregular shaped display screens for presentment of the graphical content. For example, a wearable device may include a circular shaped display screen, a smartphone may include one or more cutout sections of the display screen (e.g., a notch for a camera) , etc. Some such examples may introduce inefficiencies in transferring the graphical content from the host processor to the display panel. For example, image data corresponding to the corners of a rectangular-shaped image may not be displayed via a circular-shaped display screen.
Example techniques disclosed herein provide for efficient transfer of image data from the host processor to the display panel for presentment. For example, the image data may correspond to a first shape (e.g., a rectangular-shaped image) , while the display panel may include a display area (or visible area) that corresponds to a second shape that is different than the first shape (e.g., a circular-shaped display screen, a display screen including one or more cutout sections, a notch, etc. ) . In some examples, techniques disclosed herein facilitate applying a screen mask to the image data to reduce the size of the image data being transmitted from the host processor to the display panel. For example, the screen mask may define the visible area of the display panel. By applying the screen mask to the image data, techniques disclosed herein reduce the size of the image data by modifying pixel data for pixels corresponding to non-visible areas of the display. For example, disclosed techniques may discard the respective pixel data, may replace the respective pixel data with NULL values, or may replace the respective pixel data with a black pixel. Example techniques disclosed herein may then generate an image packet based on the screen mask and the modified image data. It should be appreciated that modifying the pixel data for one or more pixels of the image (e.g., by excluding the pixel data, by replacing the pixel data with NULL values, or by replacing the pixel with black pixels) enables the generated image packet to have less pixel data than the original image data, thereby reducing the amount of data being transmitted from the host processor to the display panel.
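The two modification strategies described above (discarding pixel data for non-visible locations, or replacing it with NULL values or black pixels) may be sketched as follows; the function name and mask representation are assumptions, with None standing in for a NULL value or black pixel:

```python
def apply_screen_mask(image, mask, mode="drop"):
    """Modify rectangular image data using a screen mask.

    In "drop" mode, pixel data for non-visible locations is excluded
    entirely, reducing the amount of data to transmit; in "null" mode,
    it is replaced with None (standing in for a NULL value or a black
    pixel), preserving the rectangular layout."""
    if mode == "drop":
        return [px for row, mrow in zip(image, mask)
                for px, m in zip(row, mrow) if m]
    return [[px if m else None for px, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)]
```

Either way, the modified image data carries less meaningful pixel data than the original rectangular image, which is what reduces the load on the host-to-panel link.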
In some examples, the visible area of the display screen may change, for example, based on an application. For example, an application presenting content via the display panel may define the visible area of a user interface being presented via the display panel. In some such examples, the application may define the visible area to be different than the shape of the display screen. For example, a portable device may include one or more always-on elements, such as a current time and/or date. In some such examples, the visible area of the display screen may correspond to those pixel elements positioned and associated with the displaying of the always-on elements, while the remaining pixel elements of the display screen may be associated with the non-visible area of the display panel.
Example techniques disclosed herein enable assigning visibility states to the pixel elements of the display screen. For example, a pixel element may be assigned a visible state or a non-visible state. In some examples, a pixel element that is assigned a visible state may be a pixel element of the display screen that is positioned within the visible area of the display screen. In some such examples, pixel elements that are associated with the visible state may correspond to transitioning the respective pixel elements to an ON state (if the display panel is capable of turning pixel elements ON or OFF) or may allow pixel data for the corresponding pixel elements to be displayed.
Pixel elements that are assigned the non-visible state may be treated as OFF. In some examples, such pixel elements may be transitioned to the OFF state (if the display panel is capable of turning pixel elements ON or OFF) or may be set to the color black. In some examples, pixel data for pixel elements associated with the non-visible state may be skipped or ignored during further processing.
It should be appreciated that in some examples, the image data generated by the host processor may be associated with a first shape while the visible area of the display panel may be associated with a second shape. For example, the image data generated by the host processor may be a rectangle, while the visible area of the display panel may be a circle. In some examples, the first shape may be associated with a standard shape, while the second shape may be associated with an “irregular” shape. As used herein, when referring to the shape of the visible area of the display panel, the term “irregular, ” or variants thereof, refers to a shape that is different than the standard shape associated with the image data. For example, an irregular shape may include a circular display of a wearable device, may include a display screen with a cut-out section (e.g., a notch) , etc.
Although the following description may provide examples based on the MIPI Display Serial Interface (DSI) link between the host processor and the display panel, it should be appreciated that the concepts described herein may be applicable to additional or alternative display communication interfaces that facilitate transferring of image data associated with a standard shape from a host processor to a display panel.
FIG. 2 is a block diagram that illustrates an example display panel system 200, in accordance with one or more techniques of this disclosure. In the illustrated example of FIG. 2, the display panel system 200 includes a host (or application) processor 205 and a display panel 255, which communicate via a communication bus 250. In the illustrated example of FIG. 2, the host processor 205 may send image data to the display panel 255 via the communication bus 250. The example host processor 205 and the display panel 255 may also send control information via the communication bus 250. Aspects of the host processor 205 may be implemented by the processing unit 120 and/or the display processor 127 of FIG. 1. Aspects of the display panel 255 may be implemented by the display processor 127 and/or the display panel 131 of FIG. 1.
The example host processor 205 of FIG. 2 includes a timing controller 210, a frame buffer 215, and a bus interface 220. The timing controller 210 is in communication with the frame buffer 215 and may use synchronization signals to control the transfer of data from the frame buffer 215 to the bus interface 220. In the illustrated example, the frame buffer 215 may receive image data 225 (e.g., from the system memory 124 of FIG. 1) , may temporarily store the image data 225, and may provide the image data 225 to the bus interface 220. The image data 225 may include pixel data for a series of frames to be transferred to the display panel 255.
In the illustrated example of FIG. 2, the bus interface 220 is coupled to the communication bus 250, which is coupled to a bus interface 260 of the display panel 255. It should be appreciated that the host processor 205 may be implemented as one or more electronic hardware processors, such as a display processor, a DPU, a GPU, and/or a video processor, such as the example processing unit 120 of FIG. 1.
In the illustrated example, the communication bus 250 may be implemented by a display communication interface, such as the example MIPI DSI link. However, it should be appreciated that in additional or alternative examples, other communication interfaces may be used to facilitate communication between the host processor 205 and the display panel 255.
In the illustrated example of FIG. 2, the display panel 255 includes the bus interface 260, which is coupled to the communication bus 250, and is configured to receive information from the host processor 205. The display panel 255 of FIG. 2 also includes a display driver 265, a buffer 270, and a display screen 275.
In the illustrated example, the display screen 275 includes a plurality of pixel elements for displaying image data. The example display screen 275 may include a visible area corresponding to an irregular shape. The display driver 265 is coupled to the bus interface 260 and the display screen 275. Additionally, the display driver 265 is coupled to the buffer 270, which is coupled to the display screen 275.
In some examples, the display panel system 200 may operate in command mode or video mode. When operating in video mode, image data is transmitted from the host processor 205 to the display panel 255 as a real-time pixel stream. For example, the host processor 205 may refresh the image data continuously at the display panel 255. In some such examples, the host processor 205 may provide image data (e.g., pixel data) and synchronization information to the display panel 255. Video-mode operation may be useful for display panels that do not include a frame buffer to store frames. In some such examples, the host processor 205 may transfer image data over the bus interface 220 and the communication bus 250 at a display refresh rate, such as sixty (60) frames per second. The display driver 265 may read the image data from the bus interface 260 and write the frames to the display screen 275.
When operating in command mode, image data may be transmitted from the host processor 205 to the display panel 255 via commands and data. For example, the host processor 205 can transfer image data over the bus interface 220 and the communication bus 250. The display driver 265 may read the image data from the bus interface 260 and temporarily store the image data in the buffer 270 prior to presentment of the image data via the display screen 275. The display driver 265 may also write the image data to the display screen 275.
In the illustrated example of FIG. 2, the host processor 205 also includes an image data handler 230. The image data handler 230 generates image packet (s) 240 for transmitting to the display panel 255 via the communication bus 250. In the illustrated example, the image packet 240 includes a packet header section 240a, a packet payload section 240b, and a packet footer section 240c.
The example packet header section 240a may include a data identifier portion, a word count portion, and an error correction code portion. The example data identifier portion may contain a virtual channel identifier and data type information. For example, the data type information may denote the format and content of payload data (e.g., the data contained in the packet payload section 240b) . The example word count portion may indicate how many words (or bytes) are in the packet payload section 240b. In some examples, the receiver of the image packet 240 (e.g., the display panel 255) may use the word count to determine the packet end (e.g., after the packet payload section 240b and the packet footer section 240c) . In some examples, the word count portion may be 16-bits long. The example error correction code portion may include an error correction code (ECC) for the packet header section 240a of the image  packet 240 and may protect the data in the packet header section 240a. For example, the ECC may enable one-bit errors in data of the packet header section 240a to be corrected and may enable two-bit errors to be detected. In some examples, the ECC may be 8-bits long.
The example packet payload section 240b includes the payload of the image packet 240 (e.g., application-specific payload) and may not be restricted in size. In some examples, the length of the packet payload section 240b may be determined based on the word count identified by the word count portion of the packet header section 240a.
The example packet footer section 240c includes a checksum. In some examples, the checksum may be 16 bits long.
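The header/payload/footer layout described above can be sketched in code. The following is a minimal illustration only: the field ordering, and especially the ECC and checksum computations, are simplified placeholders and not the actual codes used by any real display serial interface.

```python
import struct

def build_image_packet(data_id: int, payload: bytes) -> bytes:
    """Assemble a simplified header/payload/footer packet like image packet 240.

    Layout (illustrative): 1-byte data identifier, 16-bit word count,
    8-bit ECC over the header, the payload, and a 16-bit checksum footer.
    The ECC and checksum below are stand-ins, not real bus-protocol codes.
    """
    word_count = len(payload)                       # bytes in the payload section
    header = struct.pack("<BH", data_id, word_count)
    ecc = sum(header) & 0xFF                        # placeholder 8-bit header ECC
    checksum = sum(payload) & 0xFFFF                # placeholder 16-bit checksum
    return header + bytes([ecc]) + payload + struct.pack("<H", checksum)

# A hypothetical data identifier (0x2C) with a 3-byte payload:
packet = build_image_packet(0x2C, b"\x10\x20\x30")
assert len(packet) == 3 + 1 + 3 + 2   # header + ECC + payload + footer
```

The receiver can recover the payload length from the 16-bit word count field and thereby locate the packet end, as the description above notes.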
In the illustrated example of FIG. 2, the image data handler 230 facilitates improving the transfer efficiency of image data from the host processor 205 to the display panel 255. For example, the image data handler 230 may generate the image packet 240 based on the image data 225 and a screen mask 235. As described above, in some examples, the image data 225 may contain pixel data for an image associated with a first shape (e.g., a standard rectangular image shape). However, the visible area of the display screen 275 of the display panel 255 may correspond to an irregular shape (e.g., a rectangular shape with a cutout section, a circular screen, etc.). The screen mask 235 may thus define one or more visible areas of the display screen 275 and one or more non-visible areas of the display screen.
In some examples, the screen mask 235 may be a hard-coded screen mask (e.g., a pre-generated screen mask) that is accessible to the image data handler 230. For example, the image data handler 230 may obtain the screen mask 235 from the system memory 124 and/or from storage local to the image data handler 230. In some examples, the screen mask 235 may be defined and/or provided by an application (e.g., during runtime operation of the application). For example, an application presenting content via the display panel 255 may define a visible area of the display panel 255. In some such examples, the application may generate and provide the screen mask to the image data handler 230. In some examples, the application may provide visible area information to the image data handler 230, which may then generate a screen mask based on the visible area information.
FIG. 3 illustrates example screen masks 300, in accordance with one or more techniques disclosed herein. In the illustrated example of FIG. 3, a first screen mask 300A and a second screen mask 300B correspond to a same display panel having a first display screen shape, while the third screen mask 300C corresponds to a different display panel having a second display screen shape. For example, the first display screen shape may correspond to a circular shaped display screen and the second display screen shape may correspond to a generally rectangular shaped display screen including a notch.
In the illustrated example of FIG. 3, the screen masks 300 define respective visible areas 305 and non-visible areas 310. For example, the first screen mask 300A includes a visible area 305A and four non-visible areas 310A that facilitate defining the generally circular shape of the corresponding display screen. The second screen mask 300B includes a relatively small visible area 305B and a relatively large non-visible area 310B (e.g., when compared to the visible area 305A and the non-visible areas 310A of the first screen mask 300A) . In the illustrated example, the visible area 305B defined by the second screen mask 300B may correspond to the position of the display screen for presentment of always-on elements. The third screen mask 300C includes a relatively large visible area 305C and a relatively small non-visible area 310C. In the illustrated example of FIG. 3, the non-visible area 310C defined by the third screen mask 300C may correspond to a cutout in the display screen (e.g., a notch) that does not include pixel elements for presentment of image data.
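Screen masks of the kinds shown in FIG. 3 can be represented as per-pixel visibility grids. The sketch below is one hypothetical representation (a list of boolean rows, True for visible); the patent does not prescribe any particular data structure, and the geometry helpers here are illustrative only.

```python
def circular_mask(size: int) -> list[list[bool]]:
    """Mask for a circular display screen (like mask 300A): True = visible,
    False = non-visible (e.g., the four corner regions 310A)."""
    c = (size - 1) / 2.0          # grid center
    r = size / 2.0                # circle radius
    return [[(x - c) ** 2 + (y - c) ** 2 <= r ** 2 for x in range(size)]
            for y in range(size)]

def notched_mask(width: int, height: int, notch_w: int, notch_h: int):
    """Mask for a rectangular screen with a top-center cutout (like mask 300C)."""
    lo = (width - notch_w) // 2   # left edge of the notch
    return [[not (y < notch_h and lo <= x < lo + notch_w)
             for x in range(width)]
            for y in range(height)]

m = notched_mask(8, 4, notch_w=2, notch_h=1)
assert m[0][3] is False   # inside the notch: non-visible
assert m[1][3] is True    # below the notch: visible
```

A mask like 300B (a small always-on region on an otherwise dark screen) could be built the same way, marking only the always-on element locations as visible.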
Referring back to FIG. 2, in some examples, the image data handler 230 may transmit the screen mask 235 to the display panel 255. For example, the image data handler 230 may encode the screen mask 235 and populate the packet payload section 240b of the image packet 240 with the encoded screen mask. The example image data handler 230 may also generate the packet header section 240a of the image packet 240 by setting the data identifier of the packet header section 240a to indicate that the packet payload section 240b includes screen mask information corresponding to the screen mask 235. The example image data handler 230 may then transmit the image packet 240 to the display panel 255 via the communication bus 250.
In some examples, the screen mask may be associated with a display panel and may define a visible area of the display panel. For example, the screen masks 300 of FIG. 3 define a visible area 305 and one or more non-visible areas 310. In some examples, the visible areas 305 correspond to the shape of the display screen 275 of the display panel 255. For example, the first screen mask 300A defines the visible area 305A and the non-visible areas 310A corresponding to the circular shape of a display screen. Similarly, the third screen mask 300C defines the visible area 305C and the non-visible area 310C corresponding to the generally rectangular shape of a display screen including a notch.
In additional or alternative examples, the visible areas 305 may be based on one or more characteristics of a user interface (e.g., may be based on a portion of the display screen corresponding to one or more always-on elements) . For example, the first screen mask 300A and the second screen mask 300B correspond to the same display panel (e.g., a circular shaped display screen) , but define different visible areas. In the illustrated example, the second screen mask 300B defines the visible area 305B corresponding to always-on elements (such as where the time and/or date may be displayed) .
In some examples, the screen mask 235 may be pre-generated and stored in a system memory accessible to the image data handler 230. For example, the screen mask 235 may be stored in storage local to the image data handler 230, in the internal memory 121 of FIG. 1, and/or in the system memory 124 of FIG. 1. In some examples, the image data handler 230 may update the screen mask 235 based on one or more changes of a user interface being presented via the display panel 255. For example, the display panel 255 may display a first user interface having a first shape during a first operation state and may display a second user interface having a second shape during a second operation state. In certain such examples, the image data handler 230 may update the screen mask 235 from a first screen mask (e.g., the first screen mask 300A) defining the visible area 305A of the first user interface to a second screen mask (e.g., the second screen mask 300B) defining the visible area 305B of the second user interface when the operation state changes from the first operation state (e.g., while the display panel 255 is presenting an application interface where the pixel elements of the display screen 275 are used to display image data associated with the application interface) to the second operation state (e.g., while the display panel 255 is presenting a sleep state interface where a portion of the pixel elements of the display screen 275 are used to display always-on elements).
In some examples, the image data handler 230 may populate the payload portion of the image packet 240 with the encoded screen mask for transmitting the screen mask 235 to the display panel 255. For example, the image data handler 230 may populate the packet payload section 240b based on the encoded screen mask.
In some examples, if the image data handler 230 determines to generate an image packet for the transmitting of image data, then the image data handler 230 may fetch image data for a frame. For example, the image data handler 230 may obtain the image data 225 from the frame buffer 215. In some examples, the image data 225 may correspond to a standard, rectangle-shaped image.
In some examples, the image data handler 230 may determine whether a pixel of the image data corresponds to a visible area of the display panel 255 based on the screen mask 235 and a location of the pixel. For example, for each pixel of the image data 225, the image data handler 230 may determine whether the location of the respective pixel corresponds to the visible area 305 defined by the screen masks 235, 300 or to the non-visible area (s) 310 defined by the screen masks 235, 300.
In some examples, the image data handler 230 may modify the image data 225 based on the determination. For example, the image data handler 230 may modify the pixel data for the pixels at locations corresponding to the non-visible area(s) 310. In some examples, the image data handler 230 may modify the image data by discarding (or excluding) the respective pixel data. In some examples, the image data handler 230 may modify the image data by replacing the respective pixel data with NULL values or a black pixel. The example image data handler 230 may then populate the payload portion with the modified image data. The image data handler 230 may populate the payload portion corresponding to the pixel with pixel data associated with the pixel when the location of the pixel corresponds to the visible area defined by the screen mask (e.g., the visible areas 305 of the screen masks 300). Thus, it should be appreciated that in some examples, the payload portion populated by the image data handler 230 may include less pixel data than the fetched image data 225.
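The two modification strategies described above (discarding non-visible pixel data versus replacing it with NULL) can be sketched as follows. Pixel data is modeled here as a flat scan-order list paired with a same-length boolean visibility mask; this representation is an assumption for illustration, not part of the described apparatus.

```python
NULL = None  # illustrative stand-in for a NULL pixel value

def mask_by_discard(pixels, mask):
    """Keep only pixel data whose location falls in the visible area;
    non-visible pixel data is dropped entirely (smallest payload, but the
    panel then needs the screen mask to reposition the remaining pixels)."""
    return [p for p, visible in zip(pixels, mask) if visible]

def mask_by_null(pixels, mask):
    """Replace non-visible pixel data with NULL (or a black pixel) so the
    panel can skip those locations without its own copy of the mask."""
    return [p if visible else NULL for p, visible in zip(pixels, mask)]

pixels = [10, 20, 30, 40]
mask = [True, False, True, False]   # hypothetical per-pixel visibility
assert mask_by_discard(pixels, mask) == [10, 30]
assert mask_by_null(pixels, mask) == [10, NULL, 30, NULL]
```

Either way, the populated payload carries less meaningful pixel data than the fetched rectangular frame, which is the transfer-efficiency gain the section describes.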
In some examples, the image data handler 230 may generate the image packet 240 based on the populated payload portion. For example, the image data handler 230 may generate the image packet 240 including the packet header section 240a, the packet payload section 240b, and the packet footer section 240c. The example image data handler 230 may include the populated payload portion in the packet payload section 240b. The image data handler 230 may populate the packet header section 240a with a data identifier corresponding to the transmitting of screen mask information when the payload portion is populated with the encoded screen mask. The image data handler 230 may populate the packet header section 240a with a data identifier corresponding to the transmitting of image data when the payload portion is populated with the pixel data.
In some examples, the image data handler 230 may transmit the image packet 240 to the display panel 255. For example, the image data handler 230 may transmit the image packet 240 through the bus interface 220 to the bus interface 260 of the display panel 255 via the communication bus 250. In some examples, the image packet 240 may include screen mask information defining the visible area of the display panel 255. In some examples, the image packet 240 may include pixel data for presentment of a frame of image data by the display panel 255.
In the illustrated example of FIG. 2, the display panel 255 also includes a packet handler 280. The example packet handler 280 receives the image packets 240 from the host processor 205 via the communication bus 250 and the bus interface 260. The packet handler 280 facilitates determining whether the image packet 240 corresponds to a screen mask or to image data. For example, the packet handler 280 may decode and parse the packet header section 240a of the image packet 240 to determine the data identifier of the image packet 240. The packet handler 280 may then use the data identifier to determine whether the packet payload section 240b includes information corresponding to a screen mask or to image data.
In some examples, if the data identifier indicates that the packet payload section 240b includes information corresponding to a screen mask, then the packet handler 280 may determine the visibility states of the pixel elements of the display panel based on the screen mask included in the packet payload section 240b. For example, the packet handler 280 may cause the display screen 275 to transition those pixel elements that are positioned within the visible area 305 to the visible state, to transition those pixel elements that are positioned within the non-visible area 310 to the non-visible state, or to maintain the current visibility state of the respective pixel elements.
In some examples, if the data identifier indicates that the packet payload section 240b includes information corresponding to image data, then the packet handler 280 may process the image data of the image packet 240 for displaying on the display screen 275. For example, the packet handler 280 may process the information included in the packet payload portion 240b of the image packet 240 to determine how to display the respective pixels. In some examples, the packet handler 280 may process the payload information based on whether the display panel system 200 is operating in the video-mode or the command-mode. For example, when the display panel system 200 is operating in the video-mode, the display driver 265 may read the payload information and write the image data to the display screen 275. In other examples when the display  panel system 200 is operating in the command-mode, the display driver 265 may read the payload information and temporarily store the image data in the buffer 270 prior to presentment of the image data via the display screen 275.
In some examples, the packet handler 280 may cause the display screen 275 to update. For example, the packet handler 280 may cause the display screen 275 to display the image data associated with the image packet 240 and/or may update the visibility state of the respective pixel elements of the display screen 275 (e.g., may transition certain of the pixel elements positioned within the visible area of the display screen 275 to the visible state, may transition certain of the pixel elements positioned within the non-visible area of the display screen 275 to the non-visible state, and/or may maintain the visibility state of certain of the pixel elements) .
In some examples, the packet handler 280 may use a screen mask to determine how to present received image data. For example, in some examples, the image packet 240 received by the packet handler 280 may include “missing” pixel data corresponding to pixel data that was discarded (e.g., by the image data handler 230) . In some such examples, the packet handler 280 may use the screen mask to map the pixel data of the received image data to pixel elements of the display screen 275 that are in the visible area of the display screen 275.
For example, the image data 225 fetched by the image data handler 230 may include pixel data corresponding to ten pixel elements. The image data handler 230 may determine, based on the screen mask 235, that three pixel elements are in the visible area of the display screen 275 and the remaining seven pixel elements are in the non-visible area of the display screen 275. In some such examples, the image data handler 230 may discard the pixel data for the seven non-visible area pixel elements and generate the image packet 240 including the pixel data for the three visible area pixel elements. The packet handler 280 may receive the image packet 240 including the pixel data for the three visible area pixel elements. The packet handler 280 may then use the screen mask to map the pixel data to the three visible area pixel elements of the display screen 275. The packet handler 280 may then facilitate the presentment of the image data received via the image packet 240.
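The mapping step in the ten-pixel example above can be sketched as follows: the host sends only the visible pixels in scan order, and the panel-side handler walks the screen mask to place each received value back at its visible pixel element. The flat-list representation is an illustrative assumption.

```python
def map_compacted_pixels(compacted, mask, fill=None):
    """Panel-side reconstruction (packet handler 280): the received payload
    holds pixel data only for visible locations, in scan order; the screen
    mask tells us which pixel elements those values belong to."""
    it = iter(compacted)
    return [next(it) if visible else fill for visible in mask]

# Ten pixel elements, three of which are visible (as in the example above);
# the host discarded the other seven pixels before transmission.
mask = [False, True, True, False, True, False, False, False, False, False]
frame = map_compacted_pixels(["r", "g", "b"], mask)
assert frame == [None, "r", "g", None, "b", None, None, None, None, None]
```

The `fill` value for non-visible locations is a don't-care: those pixel elements are never driven with image data.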
It should be appreciated that in some examples, the data identifier of the packet header 240a may indicate that the packet payload portion 240b includes screen mask information and pixel data. In certain such examples, the packet handler 280 may process the information included in the packet payload portion 240b at a more  granular level, such as for each pixel element of the display screen. For example, for each pixel element of the display screen, the packet handler 280 may determine whether the payload information for the respective pixel element corresponds to screen mask information (e.g., whether the pixel element is positioned within the visible area or the non-visible area of the display screen) or to pixel data. If the packet handler 280 determines that the payload information corresponds to pixel data, the packet handler 280 may process the pixel data for displaying via the display screen 275. If the packet handler 280 determines that the payload information corresponds to screen mask information, the packet handler 280 may determine whether to transition the pixel element to the visible state, to transition the pixel element to the non-visible state, or to maintain the current visibility state of the pixel element. In some such examples, if the packet handler 280 determines to transition the pixel element to the visible state (or to maintain the current visible state of the pixel element) , the packet handler 280 may also process pixel data corresponding to the pixel element for displaying via the display screen.
It should be appreciated that in some examples, the display panel 255 may store a screen mask. For example, the screen mask may be hard-coded (or pre-generated) and stored in a memory local to the display panel 255 (e.g., in the buffer 270). In some examples, the display panel 255 may receive a screen mask prior to receipt of image data for each frame. In some examples, the display panel 255 may receive a screen mask and apply the screen mask to the presentment of a series of frames. In some examples, the display panel 255 may receive a signal from the host processor 205 indicating how many frames a screen mask should be applied. In some examples, the display panel 255 may receive a screen mask periodically (e.g., after every five frames, after every ten frames, etc.). In some examples, the display panel 255 may apply a screen mask (e.g., a hard-coded (or pre-generated) screen mask or a screen mask previously received) until the display panel 255 receives a different screen mask (or an indication to use a different screen mask).
FIG. 4 illustrates an example flowchart 400 of an example method in accordance with one or more techniques disclosed herein. The method may be performed by an apparatus such as a host processor and/or a component of the host processor (e.g., the example processing unit 120 of FIG. 1, the example display processor 127 of FIG. 1, the example host processor 205 of FIG. 2, and/or the example image data handler 230 of FIG. 2) .
At 402, the apparatus may determine whether to transmit a screen mask to a display panel, as described in connection with the examples in FIGs. 1, 2, and/or 3. In some examples, the apparatus may transmit a screen mask to a display panel at start-up. For example, when the apparatus powers on (e.g., after being in an off mode, a sleep mode, and similar operation modes) , the apparatus may determine to transmit a screen mask to the display panel. In some examples, the apparatus may transmit a screen mask for each frame of image data. For example, the apparatus may transmit the screen mask prior to the transmittal of image data for a frame. In some examples, the apparatus may periodically transmit a screen mask to the display panel. For example, the apparatus may transmit a screen mask prior to the transmittal of image data for every fifth frame, for every tenth frame, etc. In some examples, the apparatus may transmit a screen mask when a change in the visible area of the display panel is determined (e.g., by an application during a runtime operation) . In some such examples, the apparatus may update the screen mask for transmitting. For example, the apparatus may be transmitting a first screen mask defining a first visible area of a first user interface and then determine to transmit a second screen mask defining a second visible area of a second user interface. In some such examples, the apparatus may determine to update the screen mask for transmitting based on a change detected from the first user interface to the second user interface.
In some examples, the apparatus may determine not to transmit a screen mask. For example, when the screen mask is static (e.g., is the same as a previously transmitted screen mask and/or is hard-coded at the display panel), the apparatus may determine not to transmit a screen mask. In some examples, the apparatus may periodically transmit a screen mask and, thus, may determine whether to transmit a screen mask based on the period (e.g., the apparatus may determine to transmit the screen mask before image data for every fifth frame and determine not to transmit the screen mask for the remaining frames).
In some examples, the apparatus may determine not to transmit a screen mask when the apparatus modified image data by replacing some pixel data with a NULL value. It should be appreciated that in some such examples in which the apparatus may not transmit a screen mask, the example method of flowchart 400 may begin at 406.
If, at 402, the apparatus determines to transmit a screen mask, then, at 404, the apparatus may transmit an image packet corresponding to a screen mask to the display panel, as described in connection with the examples in FIGs. 1, 2, and/or 3. For  example, the apparatus may populate a payload portion of the image packet with the screen mask and transmit the corresponding image packet to the display panel. Example techniques for transmitting the image packet corresponding to the screen mask are described in connection with an example flowchart 500 of FIG. 5.
At 406, the apparatus may populate a payload portion of an image packet with modified image data based on image data for a frame and a screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3. In some examples, the apparatus may apply the screen mask to the image data for the frame to determine which pixels of the image data correspond to a visible area of the display panel and/or which pixels of the image data correspond to a non-visible area of the display panel. In some examples, the apparatus may modify the image data by replacing pixel data for the pixels corresponding to the non-visible area with NULL values. In some examples, the apparatus may modify the image data by removing pixel data for the pixels corresponding to the non-visible area. Example techniques for populating the payload portion of the image packet with the modified image data are described in connection with example flowcharts 600 and 700 of FIGs. 6 and 7, respectively.
At 408, the apparatus may generate an image packet with modified image data based on the populated payload portion, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may generate the image packet 240 including the packet header section 240a, the packet payload section 240b, and the packet footer section 240c. The example apparatus may include the populated payload portion in the packet payload section 240b. The apparatus may populate the packet header section 240a with a data identifier corresponding to the transmitting of image data.
At 410, the apparatus may transmit the image packet to the display panel, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may transmit the image packet 240 to the display panel 255 via the communication bus 250. In some examples, the apparatus may transmit the image packet to the display panel after the generating of the image packet. In some examples, the apparatus may wait to receive a fetch message from the display panel before transmitting the image packet to the display panel.
FIG. 5 illustrates an example flowchart 500 of an example method in accordance with one or more techniques disclosed herein. The method may be performed by an apparatus such as a host processor and/or a component of the host processor (e.g., the  example processing unit 120 of FIG. 1, the example display processor 127 of FIG. 1, the example host processor 205 of FIG. 2, and/or the example image data handler 230 of FIG. 2) . In some examples, the flowchart 500 may be executed to facilitate the transmitting of the image packet corresponding to a screen mask to the display panel (404 of FIG. 4) .
At 502, the apparatus may encode a screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3. In some examples, the apparatus may retrieve a screen mask from a memory accessible to the apparatus. For example, the apparatus may retrieve the screen mask 235 from the system memory 124.
At 504, the apparatus may populate a payload portion of an image packet with the encoded screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may populate the packet payload section 240b based on the encoded screen mask.
At 506, the apparatus may generate an image packet corresponding to a screen mask based on the populated payload portion, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may generate the image packet 240 including the packet header section 240a, the packet payload section 240b, and the packet footer section 240c. The example apparatus may include the populated payload portion in the packet payload section 240b. The apparatus may populate the packet header section 240a with a data identifier corresponding to the transmitting of screen mask information.
At 508, the apparatus may transmit the image packet corresponding to the screen mask to the display panel, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may transmit the image packet 240 to the display panel 255 via the communication bus 250. In some examples, the apparatus may transmit the image packet to the display panel after the generating of the image packet. In some examples, the apparatus may wait to receive a fetch message from the display panel before transmitting the image packet to the display panel.
It should be appreciated that in some examples, control may then return to 502 to wait to encode another screen mask.
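The "encode a screen mask" step at 502 is not tied to any particular encoding in the description above. As one hypothetical choice, a mask row of booleans could be run-length encoded, which is compact for masks whose visible and non-visible areas form long contiguous runs (as in FIG. 3):

```python
def rle_encode(mask_row):
    """Run-length encode one row of a boolean screen mask as
    (visible, run_length) pairs. This is one plausible encoding;
    the actual scheme is implementation-defined."""
    runs = []
    for v in mask_row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Inverse operation, e.g., at the display panel side."""
    return [v for v, n in runs for _ in range(n)]

row = [False, False, True, True, True, False]
encoded = rle_encode(row)
assert encoded == [(False, 2), (True, 3), (False, 1)]
assert rle_decode(encoded) == row
```

The encoded runs would then be serialized into the packet payload section at 504, with the header's data identifier marking the payload as screen mask information.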
FIG. 6 illustrates an example flowchart 600 of an example method in accordance with one or more techniques disclosed herein. The method may be performed by an apparatus such as a host processor and/or a component of the host processor (e.g., the example processing unit 120 of FIG. 1, the example display processor 127 of FIG. 1,  the example host processor 205 of FIG. 2, and/or the example image data handler 230 of FIG. 2) . In some examples, the flowchart 600 may be executed to facilitate the populating of the payload portion of the image packet with modified image data based on image data for a frame and a screen mask (406 of FIG. 4) . For example, the flowchart 600 may be executed when the host processor does not transmit a screen mask to the display panel and the display panel does not have access to a screen mask (e.g., does not have access to a hard-coded screen mask) .
At 602, the apparatus may fetch image data for a frame, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the image data handler 230 may obtain the image data 225 from the frame buffer 215. In some examples, the image data 225 may correspond to a standard, rectangle-shaped image. In some examples, the apparatus may fetch portions of the image data 225 based on a screen mask. For example, the apparatus may use the screen mask to determine which portions of the image data 225 correspond to pixels within the visible area of the display panel and fetch the respective portions of the image data 225.
At 604, the apparatus may identify pixels of the image data corresponding to non-visible areas of the display panel based on the screen mask and locations of the respective pixels, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, for each pixel of the image data 225, the apparatus may determine whether the location of the respective pixel corresponds to the visible area 305 defined by the screen masks 235, 300 or to the non-visible area (s) 310 defined by the screen masks 235, 300.
At 606, the apparatus may replace pixel data for locations of the image data corresponding to non-visible areas of the display panel with NULL values, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, for each pixel location that is within the non-visible area (s) 310 defined by the screen masks 235, 300, the apparatus may replace the respective pixel data with a NULL value. It should be appreciated that in some examples, the apparatus may replace the respective pixel data with a black pixel.
At 608, the apparatus may populate a payload portion of an image packet with the modified image data, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may populate the payload portion 240b with pixel data corresponding to visible area pixel elements (e.g., defined by the screen mask based on the visible area(s) 305 of the screen mask 300). The apparatus may also populate the payload portion 240b with NULL values for pixel locations corresponding to the non-visible area(s) 310. It should be appreciated that by replacing the respective pixel data with a NULL value (or a black pixel), the modified image data may be associated with a size relatively smaller than the original image data that was fetched (e.g., at 602).
It should be appreciated that in some examples, control may then return to 602 to wait to fetch image data for another frame.
FIG. 7 illustrates an example flowchart 700 of an example method in accordance with one or more techniques disclosed herein. The method may be performed by an apparatus such as a host processor and/or a component of the host processor (e.g., the example processing unit 120 of FIG. 1, the example display processor 127 of FIG. 1, the example host processor 205 of FIG. 2, and/or the example image data handler 230 of FIG. 2). In some examples, the flowchart 700 may be executed to facilitate the populating of the payload portion of the image packet with modified image data based on image data for a frame and a screen mask (406 of FIG. 4). For example, the flowchart 700 may be executed when the host processor transmits (e.g., periodically, aperiodically, or as a one-time event) a screen mask to the display panel and/or the display panel has access to a screen mask (e.g., the display panel is capable of storing a previously received screen mask and/or the display panel has access to a hard-coded screen mask).
At 702, the apparatus may fetch image data for a frame, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the image data handler 230 may obtain the image data 225 from the frame buffer 215. In some examples, the image data 225 may correspond to a standard, rectangle-shaped image. In some examples, the apparatus may fetch portions of the image data 225 based on a screen mask. For example, the apparatus may use the screen mask to determine which portions of the image data 225 correspond to pixels within the visible area of the display panel and fetch the respective portions of the image data 225.
At 704, the apparatus may identify pixels of the image data corresponding to non-visible areas of the display panel based on the screen mask and locations of the respective pixels, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, for each pixel of the image data 225, the apparatus may determine whether the location of the respective pixel corresponds to the visible area 305 defined  by the screen masks 235, 300 or to the non-visible area (s) 310 defined by the screen masks 235, 300.
At 706, the apparatus may remove pixel data for locations of the image data corresponding to non-visible areas of the display panel based on the screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, for each pixel location that is within the non-visible area (s) 310 defined by the screen masks 235, 300, the apparatus may remove (or discard) the respective pixel data of the image data 225.
At 708, the apparatus may populate a payload portion of an image packet with the modified image data, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may populate the payload portion 240b with pixel data corresponding to visible area pixel elements (e.g., defined by the screen mask based on the visible area(s) 305 of the screen mask 300). It should be appreciated that by removing the respective pixel data from the image data, the modified image data may be smaller in size than the original image data that was fetched (e.g., at 702).
It should be appreciated that in some examples, control may then return to 702 to wait to fetch image data for another frame.
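The host-side steps of flowchart 700 can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the screen mask is modeled as a per-location boolean map, pixel data as a location-keyed dictionary, and the names (`build_image_packet`, `data_id`) are assumptions introduced for illustration.

```python
# Hypothetical sketch of flowchart 700 (host side). The screen mask maps
# each pixel location to True (visible area) or False (non-visible area).

def build_image_packet(frame, mask):
    """702/704/706: fetch image data, identify non-visible locations via
    the mask, and drop their pixel data. 708: pack the rest as a payload.

    frame: dict mapping (x, y) -> pixel value (e.g., packed RGB)
    mask:  dict mapping (x, y) -> True if the location is visible
    """
    payload = {loc: px for loc, px in frame.items() if mask.get(loc, False)}
    # The packet carries a data identifier portion plus the reduced payload.
    return {"data_id": "IMAGE_DATA", "payload": payload}

# Toy 2x2 frame where only the left column of the panel is visible.
frame = {(0, 0): 0xFF0000, (1, 0): 0x00FF00,
         (0, 1): 0x0000FF, (1, 1): 0xFFFFFF}
mask = {(0, 0): True, (1, 0): False, (0, 1): True, (1, 1): False}

packet = build_image_packet(frame, mask)
assert len(packet["payload"]) == 2  # half the pixel data was removed
```

The modified payload carries only visible-area pixels, which is the source of the transfer savings described above.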
FIG. 8 illustrates an example flowchart 800 of an example method, in accordance with one or more techniques disclosed herein. The method may be performed by an apparatus such as the display panel 131 of FIGs. 1 and/or 3 and/or a component of the display panel 131 (e.g., the display driver 265 of FIG. 2 and/or the packet handler 280 of FIG. 2). In the illustrated example of FIG. 8, the apparatus may be incapable of accessing a screen mask (e.g., a previously received screen mask and/or a hard-coded screen mask).
At 802, the apparatus may receive an image packet, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may receive the image packet 240 via the bus interface 260 over the communication bus 250. In some examples, the apparatus may receive the image packet 240 in response to transmitting a fetch message.
At 804, the apparatus may parse pixel data of the image data of the received image packet, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may process the pixel data for each pixel location of the image  data of the image packet 240 to determine whether the pixel data corresponds to a NULL value or to a valid value (e.g., a non-NULL value) .
At 806, the apparatus may disregard displaying of image data for locations with pixel data set to NULL, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, while parsing the pixel data of the image data, the apparatus may disregard the further processing of pixels with pixel data set to NULL. As described above, the setting of the pixel data to the NULL value may indicate that the respective pixel location is not located within the visible area of the display panel and, thus, does not need to be processed for displaying at the display panel.
At 808, the apparatus may display remaining image data based on respective pixel data, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may process the pixel data for the pixel locations within the visible area (s) of the display panel for presentment via the display panel. The apparatus may cause the display screen 275 to display the image data associated with the image packet 240 based on the pixel data for the pixel locations within the visible area (s) of the display screen 275.
It should be appreciated that in some examples, control may then return to 802 to wait to receive another image packet.
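The panel-side steps of flowchart 800 can be sketched as follows. This is an illustrative sketch under the assumption that non-visible locations arrive with their pixel data set to a NULL value (modeled here as Python's `None`); the function name is hypothetical.

```python
# Hypothetical sketch of flowchart 800 (panel side, no screen mask
# available): NULL-valued pixel data marks non-visible locations, so
# those entries are disregarded rather than processed for display.

NULL = None

def pixels_to_display(payload):
    """804/806/808: parse pixel data, skip NULL entries, keep the rest."""
    return {loc: px for loc, px in payload.items() if px is not NULL}

payload = {(0, 0): 0xFF0000, (1, 0): NULL,
           (0, 1): 0x0000FF, (1, 1): NULL}
visible = pixels_to_display(payload)
assert set(visible) == {(0, 0), (0, 1)}  # NULL locations were disregarded
```

Because the NULL markers are self-describing, the panel needs no mask in this variant, at the cost of still transferring a placeholder per non-visible pixel.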
FIG. 9 illustrates an example flowchart 900 of an example method, in accordance with one or more techniques disclosed herein. The method may be performed by an apparatus such as the display panel 131 of FIGs. 1 and/or 3 and/or a component of the display panel 131 (e.g., the display driver 265 of FIG. 2 and/or the packet handler 280 of FIG. 2). In the illustrated example of FIG. 9, the apparatus may access a screen mask (e.g., a previously received screen mask and/or a hard-coded screen mask) to facilitate the presentment of image data.
At 902, the apparatus may receive an image packet, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may receive the image packet 240 via the bus interface 260 over the communication bus 250. In some examples, the apparatus may receive the image packet 240 in response to transmitting a fetch message.
At 904, the apparatus may determine whether the image packet corresponds to a screen mask or to image data, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may decode and parse the packet header  section 240a of the image packet 240 to determine the data identifier and to determine whether the image packet corresponds to a screen mask or to image data.
As described above, in some examples, the host processor 205 may transmit (e.g., periodically transmit, a-periodically transmit, or transmit as a one-time event) the screen mask to the display panel 255. In some such examples, the display panel 255 may receive a screen mask, followed by image data for a sequence of frames (e.g., one or more frames) , and then receive another screen mask. Thus, the example display panel 255 may verify whether the received image packet corresponds to a screen mask or to image data.
However, it should be appreciated that in some examples, the display panel 255 may be capable of accessing a hard-coded screen mask and the host processor 205 may not transmit a screen mask to the display panel 255. That is, the host processor 205 may modify the image data under the assumption that the display panel 255 is capable of accessing a hard-coded screen mask and, thus, that the host processor 205 does not need to transmit a screen mask to the display panel 255. In some such examples, the apparatus may execute the method of flowchart 900 by receiving the image packet at 902 and then proceeding to 914 to process pixel data of the image packet based on the screen mask.
If, at 904, the apparatus determines that the image packet corresponds to a screen mask, then, at 906, the apparatus may modify the display based on the screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may determine to set pixel elements of the display screen 275 that are within the visible area 305 of the screen mask 235, 300 to the visible state and may set pixel elements of the display screen 275 that are within the non-visible area(s) 310 of the screen mask 235, 300 to the non-visible state (e.g., may determine to activate or deactivate respective pixel elements of the display screen 275). In some examples, the apparatus may cause certain pixel elements to transition to the visible state, may cause certain pixel elements to transition to the non-visible state, and/or may maintain the current visibility state of certain pixel elements. In some examples, the apparatus may store the screen mask in a local memory (e.g., the buffer 270) for use when processing subsequent image data.
At 908, the apparatus may receive an image packet corresponding to image data, as described in connection with the examples in FIGs. 1, 2, and/or 3. It should be appreciated that in some examples, the apparatus may wait to receive the image packet  corresponding to image data. Control then proceeds to 914 to process pixel data of the image packet based on the screen mask.
If, at 904, the apparatus determines that the image packet corresponds to image data, then, at 910, the apparatus may determine whether a screen mask associated with the image data was received, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may periodically receive a screen mask, and the image data may have been received within the period covered by the most recent screen mask transmission.
If, at 910, the apparatus determines that the screen mask associated with the image data was received, then control proceeds to 914 to process pixel data of the image packet based on the screen mask.
If, at 910, the apparatus determines that the screen mask associated with the image data was not received, then, at 912, the apparatus may retrieve a screen mask from local memory, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may obtain a screen mask from the buffer 270. In some examples, the screen mask obtained from the local memory may be a hard-coded screen mask. In some examples, the screen mask obtained from the local memory may be a screen mask that was previously provided to the display panel 255 and stored by the display panel 255 in the local memory (e.g., the buffer 270).
At 914, the apparatus may process pixel data based on the screen mask, as described in connection with the examples in FIGs. 1, 2, and/or 3. In some examples, the apparatus may process the information included in the packet payload portion 240b of the image packet 240 to determine how to display the respective image data. For example, in some examples, the modified image data of the image packet 240 may not include pixel data for pixel locations that are outside the visible area 305 of the display panel 255. In some such examples, the apparatus may use the screen mask to map the pixel data to respective pixel elements of the display screen 275. For example, the apparatus may map the pixel data to activated pixels of the display panel (e.g., based on the screen mask) . The apparatus may also process the respective pixel data to determine how to display the corresponding image data.
At 916, the apparatus may display the image data, as described in connection with the examples in FIGs. 1, 2, and/or 3. For example, the apparatus may cause the display screen 275 to display the image data associated with the image packet 240.
It should be appreciated that in some examples, control may then return to 902 to wait to receive another image packet.
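The dispatch and mapping logic of flowchart 900 can be sketched as follows. This is a hedged illustration, not the patent's implementation: packet field names (`data_id`, `mask`, `payload`) are assumptions, and the payload is modeled as a flat run of visible-area pixel values that the mask maps back onto panel locations in a fixed scan order.

```python
# Hypothetical sketch of flowchart 900 (panel side with screen mask access).

def handle_packet(packet, stored_mask):
    """Returns (updated stored mask, {location: pixel} to display or None)."""
    if packet["data_id"] == "SCREEN_MASK":            # 904 -> 906
        return packet["mask"], None                   # store mask for later frames
    mask = packet.get("mask", stored_mask)            # 910/912: fall back to memory
    # 914: map the compact payload onto visible locations in a fixed scan order.
    visible = sorted(loc for loc, vis in mask.items() if vis)
    mapped = dict(zip(visible, packet["payload"]))
    return stored_mask, mapped                        # 916: display `mapped`

mask = {(0, 0): True, (1, 0): False, (0, 1): True, (1, 1): False}
stored, _ = handle_packet({"data_id": "SCREEN_MASK", "mask": mask}, None)
_, shown = handle_packet({"data_id": "IMAGE_DATA", "payload": [10, 20]}, stored)
assert shown == {(0, 0): 10, (0, 1): 20}
```

The key design point is that host and panel must agree on the scan order of visible locations; otherwise the compact payload cannot be unambiguously mapped back to pixel elements.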
As indicated above, the present disclosure can improve the efficiency of transferring image data from a host processor to a display panel. For example, disclosed techniques may use a screen mask to reduce the pixel data transferred from a host processor to a display panel over a communication bus based on whether the corresponding pixel locations are within a visible area or a non-visible area of the display panel. It should be appreciated that the described techniques may work with different shapes associated with the image data and the visible area of the display panel. For example, as long as a screen mask is provided to apply to the image data, the disclosed techniques may facilitate reducing the amount of data transferred from the host processor to the display panel.
In one configuration, a method or apparatus for display processing is provided. The apparatus may be a display processor, a display processing unit, a GPU, an application processor, a host processor, a video processor, or some other processor that can perform display processing, and/or a component thereof. In one example, the apparatus may be the processing unit 120 within the device 104, the display processor 127 within the device 104, or may be some other hardware within the device 104 or another device. The apparatus may include means for obtaining image data for a frame. The apparatus may also include means for determining modified image data for the frame based on a screen mask. In some examples, the screen mask may be associated with a display panel and may be configured to define a visible area of the display panel. In some examples, the modified image data may include less pixel data than the obtained image data. The apparatus may also include means for transmitting the modified image data to the display panel. The apparatus may also include means for transmitting the screen mask to the display panel prior to the transmitting of the modified image data to the display panel. Also, the apparatus may include means for transmitting an updated screen mask to the display panel prior to the transmitting of subsequently modified image data. The apparatus may also include means for determining whether a pixel of the obtained image data corresponds to the visible area of the display panel based on the screen mask and a location of the pixel. Further, the apparatus may include means for populating a payload portion of an image packet with a value based on the determination. The apparatus may also include means for determining whether a pixel of the obtained image data corresponds to a non-visible area of the display panel based on the screen mask and a location of the pixel.
The apparatus may include means for excluding the image data of locations corresponding to the non-visible area. The apparatus may also include means for generating an image packet based on the modified image data, where the image packet includes at least a data identifier portion and a payload portion. The apparatus may also include means for receiving the screen mask. Further, the apparatus may include means for determining which pixels of the display panel to activate based on the screen mask. Also, the apparatus may include means for causing the displaying of the modified image data via the activated pixels of the display panel. The apparatus may also include means for receiving the modified image data. Further, the apparatus may include means for mapping the modified image data to the activated pixels of the display panel. The apparatus may also include means for receiving the screen mask from a local memory of the display panel. The apparatus may also include means for receiving the screen mask from a host processor prior to receiving the modified image data.
The subject matter described herein can be implemented to realize one or more benefits or advantages. For instance, the described display and/or graphics processing techniques can be used by a display processor, a display processing unit (DPU) , a GPU, a video processor, or some other processor that can perform display processing to implement the techniques described herein for improving image data transfer efficiency in portable devices.
In accordance with this disclosure, the term “or” may be interpreted as “and/or” where context does not dictate otherwise. Additionally, while phrases such as “one or more” or “at least one” or the like may have been used for some features disclosed herein but not others, the features for which such language was not used may be interpreted to have such a meaning implied where context does not dictate otherwise.
In one or more examples, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. For example, although the term “processing unit” has been used throughout this disclosure, such processing units may be implemented in hardware, software, firmware, or any combination thereof. If any function, processing unit, technique described herein, or other module is implemented in software, the function, processing unit, technique, or other module may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media may include computer data storage media or communication media, including any medium that facilitates transfer of a computer program from one place to another. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. A computer program product may include a computer-readable medium.
The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), arithmetic logic units (ALUs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs, e.g., a chip set. Various components, modules or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily need realization by different hardware units. Rather, as described above, various units may be combined in any hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.

Claims (30)

  1. A method of operation of a display, comprising:
    obtaining image data for a frame;
    determining modified image data for the frame based on a screen mask, the screen mask associated with a display panel and configured to define a visible area of the display panel, and the modified image data having less pixel data than the obtained image data; and
    transmitting the modified image data to the display panel.
  2. The method of claim 1, further comprising:
    transmitting the screen mask to the display panel prior to the transmitting of the modified image data to the display panel.
  3. The method of claim 1, wherein a shape associated with the obtained image data corresponds to a rectangular shaped image and a shape associated with the modified image data corresponds to a shape of the visible area of the display panel.
  4. The method of claim 1, wherein the screen mask is pre-generated and stored in a system memory accessible for determining the modified image data.
  5. The method of claim 1, wherein the screen mask is updated based on a change associated with at least one characteristic of a user interface for presentment via the display panel.
  6. The method of claim 5, further comprising:
    transmitting the updated screen mask to the display panel prior to the transmitting of subsequently modified image data.
  7. The method of claim 1, wherein the determining of the modified image data comprises:
    determining whether a pixel of the obtained image data corresponds to the visible area of the display panel based on the screen mask and a location of the pixel; and
    populating a payload portion of an image packet with a value based on the determination.
  8. The method of claim 1, wherein the determining of the modified image data comprises:
    determining whether a pixel of the obtained image data corresponds to a non-visible area of the display panel based on the screen mask and a location of the pixel; and
    excluding the image data of locations corresponding to the non-visible area.
  9. The method of claim 1, further comprising generating an image packet based on the modified image data, wherein the image packet includes at least a data identifier portion and a payload portion, and wherein the transmitting of the modified image data includes transmitting the image packet to the display panel.
  10. A method of operation of an apparatus including a processor, the method comprising:
    obtaining image data for a frame;
    determining modified image data for the frame based on a screen mask, the screen mask associated with a display panel and configured to define a visible area of the display panel, and the modified image data having less pixel data than the obtained image data; and
    transmitting the modified image data to the display panel.
  11. The method of claim 10, wherein the apparatus further comprises the display panel, the method further comprising:
    receiving the screen mask at the display panel;
    determining which pixels of the display panel to activate based on the screen mask; and
    displaying the modified image data via the activated pixels of the display panel.
  12. The method of claim 11, further comprising:
    receiving the modified image data; and
    mapping the modified image data to the activated pixels of the display panel.
  13. The method of claim 11, wherein the display panel is configured to receive the screen mask from a local memory of the display panel.
  14. The method of claim 11, wherein the display panel is configured to receive the screen mask from the processor prior to receiving the modified image data.
  15. The method of claim 10, wherein the apparatus includes a wireless communication device.
  16. An apparatus for operation of a display, comprising:
    a memory; and
    at least one processor coupled to the memory and configured to:
    obtain image data for a frame;
    determine modified image data for the frame based on a screen mask, the screen mask associated with a display panel and configured to define a visible area of the display panel, and the modified image data having less pixel data than the obtained image data; and
    transmit the modified image data to the display panel.
  17. The apparatus of claim 16, wherein the at least one processor is further configured to:
    transmit the screen mask to the display panel prior to the transmitting of the modified image data to the display panel.
  18. The apparatus of claim 16, wherein a shape associated with the obtained image data corresponds to a rectangular shaped image and a shape associated with the modified image data corresponds to a shape of the visible area of the display panel.
  19. The apparatus of claim 16, wherein the screen mask is pre-generated and stored in a system memory accessible for determining the modified image data.
  20. The apparatus of claim 16, wherein the screen mask is updated based on a change associated with at least one characteristic of a user interface for presentment via the display panel.
  21. The apparatus of claim 20, wherein the at least one processor is further configured to:
    transmit the updated screen mask to the display panel prior to the transmitting of subsequently modified image data.
  22. The apparatus of claim 16, wherein the at least one processor is configured to determine the modified image data by:
    determining whether a pixel of the obtained image data corresponds to the visible area of the display panel based on the screen mask and a location of the pixel; and
    populating a payload portion of an image packet with a value based on the determination.
  23. The apparatus of claim 16, wherein the at least one processor is configured to determine the modified image data by:
    determining whether a pixel of the obtained image data corresponds to a non-visible area of the display panel based on the screen mask and a location of the pixel; and
    excluding the image data of locations corresponding to the non-visible area.
  24. The apparatus of claim 16, wherein the at least one processor is further configured to generate an image packet based on the modified image data, wherein the image packet includes at least a data identifier portion and a payload portion, and wherein the at least one processor is configured to transmit the modified image data by transmitting the image packet to the display panel.
  25. An apparatus for operating a display, comprising:
    a display panel; and
    a host processor coupled to the display panel and configured to:
    obtain image data for a frame;
    determine modified image data for the frame based on a screen mask, the screen mask associated with a display panel and configured to define a visible area of the display panel, and the modified image data having less pixel data than the obtained image data; and
    transmit the modified image data to the display panel.
  26. The apparatus of claim 25, wherein the display panel is configured to:
    receive the screen mask;
    determine which pixels of the display panel to activate based on the screen mask; and
    cause the displaying of the modified image data via the activated pixels of the display panel.
  27. The apparatus of claim 26, wherein the display panel is further configured to:
    receive the modified image data; and
    map the modified image data to the activated pixels of the display panel.
  28. The apparatus of claim 26, wherein the display panel is configured to receive the screen mask from a local memory of the display panel.
  29. The apparatus of claim 26, wherein the display panel is configured to receive the screen mask from the host processor prior to receiving the modified image data.
  30. The apparatus of claim 25, wherein the apparatus includes a wireless communication device.
PCT/CN2019/116070 2019-11-06 2019-11-06 Methods and apparatus to improve image data transfer efficiency for portable devices WO2021087826A1 (en)

Publication: WO2021087826A1, published 2021-05-14.

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101996391A (en) * 2009-08-21 2011-03-30 英特尔公司 Method for storing and retrieving graphics data
US8005316B1 (en) * 2007-05-16 2011-08-23 Adobe Systems Incorporated System and method for editing image data for media repurposing
CN102292978A (en) * 2009-07-10 2011-12-21 松下电器产业株式会社 Marker display control device, integrated circuit, and marker display control method
US8209632B2 (en) * 2010-01-26 2012-06-26 Apple Inc. Image mask interface
US8749690B2 (en) * 2011-12-13 2014-06-10 Facebook, Inc. In-context content capture
CN108897881A (en) * 2018-07-05 2018-11-27 腾讯科技(深圳)有限公司 Interactive image display methods, device, equipment and readable storage medium storing program for executing



Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19951306; Country: EP; Kind code: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19951306; Country: EP; Kind code: A1)