WO2021102772A1 - Methods and apparatus to smooth edge portions of an irregularly-shaped display - Google Patents


Info

Publication number
WO2021102772A1
Authority
WO
WIPO (PCT)
Prior art keywords
screen mask
locations
display
locations corresponding
visible area
Application number
PCT/CN2019/121449
Other languages
French (fr)
Inventor
Yongjun XU
Long HAN
Ya KONG
Original Assignee
Qualcomm Incorporated
Application filed by Qualcomm Incorporated
Priority to PCT/CN2019/121449
Publication of WO2021102772A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation

Definitions

  • the present disclosure relates generally to processing systems and, more particularly, to one or more techniques for display or graphics processing.
  • Computing devices often utilize a graphics processing unit (GPU) to accelerate the rendering of graphical data for display.
  • Such computing devices may include, for example, computer workstations, mobile phones such as so-called smartphones, embedded systems, personal computers, tablet computers, and video game consoles.
  • GPUs execute a graphics processing pipeline that includes one or more processing stages that operate together to execute graphics processing commands and output a frame.
  • a central processing unit may control the operation of the GPU by issuing one or more graphics processing commands to the GPU.
  • Modern day CPUs are typically capable of concurrently executing multiple applications, each of which may need to utilize the GPU during execution.
  • Portable electronic devices, including smartphones and wearable devices, may present graphical content on a display.
  • With the trend toward higher screen-to-body ratios, there is an increased need for presenting graphical content on displays having irregular shapes.
  • the apparatus may be a display processor, a display processing unit (DPU) , a graphics processing unit (GPU) , or a video processor.
  • the apparatus can obtain a first screen mask associated with a display, the first screen mask defining a first visible area of the display.
  • the apparatus can also obtain a second screen mask associated with the display, the second screen mask defining a second visible area of the display, the first visible area being different than the second visible area.
  • the apparatus can transmit image packets to the display for displaying of image data by the display, the image packets corresponding to image data for a sequence of frames, and each image packet based on image data for a respective frame and based on alternating of the first screen mask and the second screen mask.
  • a shape of the second visible area corresponds to a shape of the first visible area.
  • the first screen mask includes an inner portion, an edge portion, and an outer portion, and the second screen mask includes an inner portion, an edge portion, and an outer portion.
  • locations corresponding to the first screen mask inner portion are the same as locations corresponding to the second screen mask inner portion, and locations corresponding to the first screen mask outer portion are the same as locations corresponding to the second screen mask outer portion.
  • the apparatus can also divide locations corresponding to the first screen mask edge portion into a first set of locations and a second set of locations. Also, the apparatus can assign a first value to the locations corresponding to the first set of locations.
  • the apparatus can also assign a second value to the locations corresponding to the second set of locations. Further, the apparatus can assign the second value to locations corresponding to the second screen mask edge portion that correspond to the first set of locations of the first screen mask edge portion. The apparatus can also assign the first value to locations corresponding to the second screen mask edge portion that correspond to the second set of locations of the first screen mask edge portion. Additionally, the apparatus can assign the first value to the locations corresponding to the first screen mask inner portion and the locations corresponding to the second screen mask inner portion. The apparatus can also assign the second value to the locations corresponding to the first screen mask outer portion and the locations corresponding to the second screen mask outer portion. In some examples, the first value may indicate a visible area and the second value may indicate a non-visible area.
  • the locations corresponding to the first set of locations and the locations corresponding to the second set of locations may be randomly selected.
  • a first quantity may correspond to the locations corresponding to the first set of locations and a second quantity may correspond to the locations corresponding to the second set of locations.
  • the first quantity may be within a threshold quantity of the second quantity.
  • a quantity of locations corresponding to the first set of locations may be randomly selected.
  • the transmitted image packets may exclude image data for locations corresponding to respective non-visible areas of the first screen mask and the second screen mask.
  • FIG. 1 is a block diagram that illustrates an example content generation system, in accordance with one or more techniques of this disclosure.
  • FIG. 2A illustrates an example image frame with an overlapping display, in accordance with one or more techniques of this disclosure.
  • FIG. 2B illustrates an example screen mask with an overlapping display, in accordance with one or more techniques of this disclosure.
  • FIG. 2C illustrates an example image frame with an applied screen mask, in accordance with one or more techniques of this disclosure.
  • FIG. 3 is a block diagram illustrating the example processing unit of FIG. 1, the example system memory of FIG. 1, the example display processor of FIG. 1, and the example display client of FIG. 1, in accordance with one or more techniques of this disclosure.
  • FIG. 4 illustrates an example first screen mask and a second screen mask, in accordance with one or more techniques of this disclosure.
  • FIG. 5 illustrates an example timing diagram, in accordance with one or more techniques of this disclosure.
  • FIGs. 6 to 8 illustrate example flowcharts of example methods, in accordance with one or more techniques of this disclosure.
  • an apparatus may modify image data by applying a screen mask to the image data prior to the presentment of the image data by the irregularly-shaped display.
  • the screen mask may correspond to the irregularly-shaped display and define visible area (s) of the display and non-visible area (s) of the display.
  • the screen mask may include an inner portion corresponding to the visible area (s) of the display (e.g., where the value of the screen mask is set to a transparent value) , an outer portion corresponding to the non-visible area (s) of the display (e.g., where the value of the screen mask is set to a black value) , and an edge portion corresponding to the locations of the display where the visible area (s) and the non-visible area (s) of the display meet.
  • the locations of the image data corresponding to the edge portion of the screen mask may appear with a zigzag pattern or a generally non-smooth portion.
  • Example techniques disclosed herein perform the smoothing of the edge portion of the displayed image data by using two screen masks and alternating the applying of the two screen masks to the image data. For example, disclosed techniques may use a first screen mask prior to presentment of first image data, may use a second screen mask prior to presentment of second image data, may use the first screen mask prior to presentment of third image data, etc.
  • the first screen mask and the second screen mask may be configured to include the same inner portion and the same outer portion. In some such examples, the first screen mask and the second screen mask may be configured to include different edge portions. For example, disclosed techniques may identify locations that correspond to an edge portion of an irregularly-shaped display (e.g., ten locations that correspond to where the visible area(s) of the display and the non-visible area(s) of the display meet).
  • Example techniques may then set the value of the locations of the edge portion of the first screen mask corresponding to a first subset of the identified locations to a first value (e.g., to be transparent or corresponding to the visible area of the display) and set the value of the locations of the first screen mask corresponding to the remaining identified locations to a second value (e.g., to be black or corresponding to the non-visible area of the display) .
  • Example techniques then set the value of the locations of the edge portion of the second screen mask as the opposite value. For example, if an edge portion location of the first screen mask is set to the first value (e.g., to be transparent) , then the corresponding edge portion location of the second screen mask may be set to the second value (e.g., to be black) .
  • disclosed techniques facilitate the smoothing of the edge portions of the presentment of the image data.
  • Disclosed techniques take advantage of the persistence of vision of human eyes in which there is a delay in the perception of an object after light from the object changes.
  • disclosed techniques take advantage of the relatively fast display refresh rate of displays (e.g., 60 frames per second (FPS), 90 FPS, 120 FPS, etc.) so that the changing of the edge portion location between a transparent value and a black value appears relatively smooth. That is, example techniques disclosed herein provide the functionality of applying alpha blending to image data without the additional performance cost and power cost of performing the alpha blending.
  • processors include microprocessors, microcontrollers, graphics processing units (GPUs) , general purpose GPUs (GPGPUs) , central processing units (CPUs) , application processors, digital signal processors (DSPs) , reduced instruction set computing (RISC) processors, systems-on-chip (SOC) , baseband processors, application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , programmable logic devices (PLDs) , state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure.
  • One or more processors in the processing system may execute software.
  • Software can be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • the term application may refer to software.
  • one or more techniques may refer to an application, i.e., software, being configured to perform one or more functions.
  • the application may be stored on a memory, e.g., on-chip memory of a processor, system memory, or any other memory.
  • Hardware described herein such as a processor may be configured to execute the application.
  • the application may be described as including code that, when executed by the hardware, causes the hardware to perform one or more techniques described herein.
  • the hardware may access the code from a memory and execute the code accessed from the memory to perform one or more techniques described herein.
  • components are identified in this disclosure.
  • the components may be hardware, software, or a combination thereof.
  • the components may be separate components or sub-components of a single component.
  • the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer.
  • such computer-readable media can comprise a random access memory (RAM) , a read-only memory (ROM) , an electrically erasable programmable ROM (EEPROM) , optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
  • examples disclosed herein provide techniques for smoothing the edge portion of image data for presentment via an irregularly-shaped display.
  • Example techniques may improve the rendering of graphical content, reduce the load on a communication interface (e.g., a bus), and/or reduce the load of a processing unit (e.g., any processing unit configured to perform one or more techniques disclosed herein, such as a GPU, a DPU, and the like).
  • this disclosure describes techniques for graphics and/or display processing in any device that utilizes a display.
  • Other example benefits are described throughout this disclosure.
  • instances of the term “content” may refer to “graphical content, ” “image, ” and vice versa. This is true regardless of whether the terms are being used as an adjective, noun, or other parts of speech.
  • the term “graphical content” may refer to content produced by one or more processes of a graphics processing pipeline.
  • the term “graphical content” may refer to content produced by a processing unit configured to perform graphics processing.
  • the term “graphical content” may refer to content produced by a graphics processing unit.
  • the term “display content” may refer to content generated by a processing unit configured to perform display processing.
  • the term “display content” may refer to content generated by a display processing unit.
  • Graphical content may be processed to become display content.
  • a graphics processing unit may output graphical content, such as a frame, to a buffer (which may be referred to as a framebuffer) .
  • a display processing unit may read the graphical content, such as one or more frames from the buffer, and perform one or more display processing techniques thereon to generate display content.
  • a display processing unit may be configured to perform composition on one or more rendered layers to generate a frame.
  • a display processing unit may be configured to compose, blend, or otherwise combine two or more layers together into a single frame.
  • a display processing unit may be configured to perform scaling, e.g., upscaling or downscaling, on a frame.
  • a frame may refer to a layer.
  • a frame may refer to two or more layers that have already been blended together to form the frame, i.e., the frame includes two or more layers, and the frame that includes two or more layers may subsequently be blended.
  • FIG. 1 is a block diagram that illustrates an example content generation system 100 configured to implement one or more techniques of this disclosure.
  • the content generation system 100 includes a device 104.
  • the device 104 may include one or more components or circuits for performing various functions described herein.
  • one or more components of the device 104 may be components of an SOC.
  • the device 104 may include one or more components configured to perform one or more techniques of this disclosure.
  • the device 104 may include a processing unit 120 and a system memory 124.
  • the device 104 can include a number of additional or alternative components, e.g., a communication interface 126, a transceiver 132, a receiver 128, a transmitter 130, a display processor 127, and a display client 131.
  • Reference to the display client 131 may refer to one or more displays.
  • the display client 131 may include a single display or multiple displays.
  • the display client 131 may include a first display and a second display.
  • the results of the graphics processing may not be displayed on the device, e.g., the first and second displays may not receive any frames for presentment thereon. Instead, the frames or graphics processing results may be transferred to another device. In some aspects, this can be referred to as split-rendering.
  • the processing unit 120 may include an internal memory 121.
  • the processing unit 120 may be configured to perform graphics processing, such as in a graphics processing pipeline 107.
  • the device 104 may include a display processor, such as the display processor 127, to perform one or more display processing techniques on one or more frames generated by the processing unit 120 before presentment by the display client 131.
  • the display processor 127 may be configured to perform display processing.
  • the display processor 127 may be configured to perform one or more display processing techniques on one or more frames generated by the processing unit 120.
  • the display client 131 may be configured to display or otherwise present frames processed by the display processor 127.
  • the display client 131 may include one or more of: a liquid crystal display (LCD) , a plasma display, an organic light emitting diode (OLED) display, a projection display device, an augmented reality display device, a virtual reality display device, a head-mounted display, or any other type of display device.
  • Memory external to the processing unit 120 may be accessible to the processing unit 120.
  • the processing unit 120 may be configured to read from and/or write to external memory, such as the system memory 124.
  • the processing unit 120 may be communicatively coupled to the system memory 124 over a bus.
  • the processing unit 120 and the system memory 124 may be communicatively coupled to each other over the bus or a different connection.
  • the device 104 may include a content encoder/decoder configured to receive graphical and/or display content from any source, such as the system memory 124 and/or the communication interface 126.
  • the system memory 124 may be configured to store received encoded or decoded content.
  • the content encoder/decoder may be configured to receive encoded or decoded content, e.g., from the system memory 124 and/or the communication interface 126, in the form of encoded pixel data.
  • the content encoder/decoder may be configured to encode or decode any content.
  • the internal memory 121 or the system memory 124 may include one or more volatile or non-volatile memories or storage devices.
  • internal memory 121 or the system memory 124 may include RAM, SRAM, DRAM, erasable programmable ROM (EPROM) , electrically erasable programmable ROM (EEPROM) , flash memory, a magnetic data media or an optical storage media, or any other type of memory.
  • the internal memory 121 or the system memory 124 may be a non-transitory storage medium according to some examples.
  • the term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that internal memory 121 or the system memory 124 is non-movable or that its contents are static. As one example, the system memory 124 may be removed from the device 104 and moved to another device. As another example, the system memory 124 may not be removable from the device 104.
  • the processing unit 120 may be a central processing unit (CPU) , a graphics processing unit (GPU) , a general purpose GPU (GPGPU) , or any other processing unit that may be configured to perform graphics processing.
  • the processing unit 120 may be integrated into a motherboard of the device 104.
  • the processing unit 120 may be present on a graphics card that is installed in a port in a motherboard of the device 104, or may be otherwise incorporated within a peripheral device configured to interoperate with the device 104.
  • the processing unit 120 may include one or more processors, such as one or more microprocessors, GPUs, application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , arithmetic logic units (ALUs) , digital signal processors (DSPs) , discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combinations thereof. If the techniques are implemented partially in software, the processing unit 120 may store instructions for the software in a suitable, non-transitory computer-readable storage medium, e.g., internal memory 121, and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., may be considered to be one or more processors.
  • the content generation system 100 can include a communication interface 126.
  • the communication interface 126 may include a receiver 128 and a transmitter 130.
  • the receiver 128 may be configured to perform any receiving function described herein with respect to the device 104. Additionally, the receiver 128 may be configured to receive information, e.g., eye or head position information, rendering commands, or location information, from another device.
  • the transmitter 130 may be configured to perform any transmitting function described herein with respect to the device 104. For example, the transmitter 130 may be configured to transmit information to another device, which may include a request for content.
  • the receiver 128 and the transmitter 130 may be combined into a transceiver 132. In such examples, the transceiver 132 may be configured to perform any receiving function and/or transmitting function described herein with respect to the device 104.
  • the graphical content from the processing unit 120 for display via the display client 131 is not static and may be changing. Accordingly, the display processor 127 may periodically refresh the graphical content displayed via the display client 131. For example, the display processor 127 may periodically retrieve graphical content from the system memory 124, where the graphical content may have been updated by the execution of an application (and/or the processing unit 120) that outputs the graphical content to the system memory 124.
  • the display client 131 may include the display processor 127.
  • the processing unit 120 may include an edge portion smoothing component 198 to facilitate the smoothing of the presentment of the edge portion of image data.
  • the edge portion smoothing component 198 may be configured to obtain a first screen mask associated with a display, the first screen mask defining a first visible area of the display.
  • the edge portion smoothing component 198 may also be configured to obtain a second screen mask associated with the display, the second screen mask defining a second visible area of the display, the first visible area being different than the second visible area.
  • the edge portion smoothing component 198 may be configured to transmit image packets to the display for displaying of image data by the display, the image packets corresponding to image data for a sequence of frames, and each image packet based on image data for a respective frame and based on alternating of the first screen mask and the second screen mask.
  • a shape of the second visible area may correspond to a shape of the first visible area.
  • the first screen mask may include an inner portion, an edge portion, and an outer portion, and the second screen mask may include an inner portion, an edge portion, and an outer portion.
  • locations corresponding to the first screen mask inner portion may be the same as locations corresponding to the second screen mask inner portion, and locations corresponding to the first screen mask outer portion may be the same as locations corresponding to the second screen mask outer portion.
  • the edge portion smoothing component 198 may also be configured to divide locations corresponding to the first screen mask edge portion into a first set of locations and a second set of locations. Also, the edge portion smoothing component 198 may be configured to assign a first value to the locations corresponding to the first set of locations. The edge portion smoothing component 198 may also be configured to assign a second value to the locations corresponding to the second set of locations. Further, the edge portion smoothing component 198 may be configured to assign the second value to locations corresponding to the second screen mask edge portion that correspond to the first set of locations of the first screen mask edge portion. The edge portion smoothing component 198 may also be configured to assign the first value to locations corresponding to the second screen mask edge portion that correspond to the second set of locations of the first screen mask edge portion.
  • the edge portion smoothing component 198 may be configured to assign the first value to the locations corresponding to the first screen mask inner portion and the locations corresponding to the second screen mask inner portion.
  • the edge portion smoothing component 198 may also be configured to assign the second value to the locations corresponding to the first screen mask outer portion and the locations corresponding to the second screen mask outer portion.
  • the first value may indicate a visible area and the second value may indicate a non-visible area.
  • the locations corresponding to the first set of locations and the locations corresponding to the second set of locations may be randomly selected.
  • a first quantity may correspond to the locations corresponding to the first set of locations and a second quantity may correspond to the locations corresponding to the second set of locations.
  • the first quantity may be within a threshold quantity of the second quantity.
  • a quantity of locations corresponding to the first set of locations may be randomly selected.
  • the transmitted image packets may exclude image data for locations corresponding to respective non-visible areas of the first screen mask and the second screen mask.
  • a device such as the device 104, may refer to any device, apparatus, or system configured to perform one or more techniques described herein.
  • a device may be a server, a base station, user equipment, a client device, a station, an access point, a computer, e.g., a personal computer, a desktop computer, a laptop computer, a tablet computer, a computer workstation, or a mainframe computer, an end product, an apparatus, a phone, a smart phone, a server, a video game platform or console, a handheld device, e.g., a portable video game device or a personal digital assistant (PDA), a wearable computing device, e.g., a smart watch, an augmented reality device, or a virtual reality device, a non-wearable device, a display or display device, a television, a television set-top box, an intermediate network device, a digital media player, a video streaming device, a content streaming device, an in-car computer, or any other device configured to perform one or more techniques described herein.
  • FIG. 2A illustrates example image data 205 for a frame.
  • the shape of the image data 205 corresponds to a rectangular-shaped image.
  • FIG. 2A also illustrates a perimeter 210 of an irregularly-shaped display that overlaps with the image data 205.
  • the irregularly-shaped display is circular.
  • the irregularly-shaped display may be any reasonable non-rectangular shaped display, including, for example, a generally rectangular-shaped display that includes a cutout portion (or notch) (e.g., for a camera(s), for a microphone(s), for a speaker(s), etc.).
  • the shape and size of the irregularly-shaped display results in a visible area 215 and a non-visible area 220.
  • the visible area 215 corresponds to portions of the image data 205 that are at or within the perimeter 210. That is, the visible area 215 corresponds to the portions of the image data 205 that are visible when the image data 205 is transmitted for presentment via the irregularly-shaped display.
  • the non-visible area 220 corresponds to portions of the image data 205 that are outside the perimeter 210. That is, the non-visible area 220 corresponds to the portions of the image data 205 that are not visible when the image data 205 is transmitted for presentment via the irregularly-shaped display.
  • example techniques disclosed herein use a screen mask to define the one or more visible area (s) and the one or more non-visible area (s) of the display client.
  • FIG. 2B illustrates an example screen mask 250 that corresponds to the irregularly-shaped display and the perimeter 210 of the irregularly-shaped display.
  • the screen mask 250 includes locations 255 that correspond to pixel elements of the irregularly-shaped display.
  • a first set of locations 255 of the screen mask 250 may be completely within the perimeter 210
  • a second set of locations 255 of the screen mask 250 may be completely outside the perimeter 210
  • a third set of locations 255 of the screen mask 250 may correspond to those locations that overlap with the perimeter 210.
  • locations 255a of the screen mask 250 are those locations that are completely within the perimeter 210 and correspond to the visible area 215 (sometimes referred to herein as the “inner portion” of the screen mask 250) .
  • locations 255b of the screen mask 250 are those locations that are completely outside the perimeter 210 and correspond to the non-visible area 220 (sometimes referred to herein as the “outer portion” of the screen mask 250) .
  • the example screen mask 250 also includes locations 255c that overlap with the perimeter 210 (sometimes referred to herein as the “edge portion” of the screen mask 250) .
  • each of the locations 255 of the screen mask 250 is associated with a value based on whether the respective location corresponds to the visible area or the non-visible area.
  • the locations 255a of the inner portion of the screen mask 250 may be assigned a first value (e.g., “1” ) to indicate that those locations of image data correspond to the visible area (e.g., the visible area 215 of FIG. 2A) .
  • the locations 255b of the outer portion of the screen mask 250 may be assigned a second value (e.g., “0”) to indicate that those locations of image data correspond to the non-visible area (e.g., the non-visible area 220 of FIG. 2A).
  • the locations 255c of the edge portion of the screen mask 250 are assigned the first value (e.g., “1”) to indicate that those locations correspond to the visible area (e.g., the visible area 215 of FIG. 2A).
  • the screen mask 250 may then be used to modify the image data 205 to, for example, reduce the load on a communication interface (e.g., a bus) , and/or reduce the load of a processing unit (e.g., any processing unit configured to perform one or more techniques disclosed herein, such as a GPU, a DPU, and the like) .
  • the processing unit 120 may map the screen mask 250 to the image data 205 and change the pixel information of the image data 205 that corresponds to the locations 255b of the outer portion of the screen mask 250.
  • the processing unit 120 may change the pixel information for the respective image data 205 to a black color pixel, may change the pixel information for the respective image data 205 to indicate that the corresponding pixel element should be off, etc. In some examples, the processing unit 120 may discard the pixel information for the respective image data 205 that correspond to those locations 255b of the outer portion of the screen mask 250.
  • FIG. 2C illustrates an example image frame 280.
  • the image frame 280 may be generated by applying the screen mask 250 of FIG. 2B to the image data 205 of FIG. 2A.
  • While alpha blending could be used to smooth the edge portion of the image frame 280, alpha blending uses power resources and processing resources of the apparatus.
  • FIG. 3 is a block diagram 300 illustrating the example processing unit 120 of FIG. 1, the example system memory 124 of FIG. 1, the example display processor 127 of FIG. 1, and the example display client 131 of FIG. 1.
  • the processing unit 120 includes the example edge portion smoothing component 198 of FIG. 1 and a screen mask generating component 305.
  • the example display client 131 includes a display controller 310, a buffer 315, and a display 320.
  • the example display 320 may include an irregular shape, such as a circular-shaped display, a display including a cutout or a notch (e.g., for a camera (s) , for a microphone (s) , for a speaker (s) , etc. ) , etc.
  • the example display 320 includes a plurality of pixel elements for displaying image data.
  • the processing unit 120 includes the example edge portion smoothing component 198 and the screen mask generating component 305.
  • the example edge portion smoothing component 198 facilitates the smoothing of the presentment of the edge portion of image data.
  • the edge portion smoothing component 198 disclosed herein uses two screen masks for defining the visible area (s) and non-visible area (s) corresponding to the display 320.
  • each screen mask may include an inner portion, an outer portion, and an edge portion.
  • the inner portion of each screen mask may correspond to the same locations and may have the same first value (e.g., a “1” to indicate that the respective location corresponds to a visible area), and the outer portion of each screen mask may correspond to the same locations and may have the same second value (e.g., a “0” to indicate that the respective location corresponds to a non-visible area).
  • the edge portion of each screen mask may correspond to the same locations.
  • the values of the locations of each screen mask edge portion may be different.
  • each of the locations of the first screen mask edge portion may be randomly assigned the first value or the second value.
  • other examples may use additional or alternative techniques for assigning the values of each of the locations of the first screen mask.
  • the corresponding locations of the second screen mask edge portion may be assigned the opposite value as the value assigned for the first screen mask.
  • a first screen mask 330a and a second screen mask 330b are stored in the system memory 124.
  • each of the screen masks 330 may be hard-coded screen masks that are stored in the system memory 124.
  • each of the screen masks 330 may be designed based on the specific irregular shape of the display 320.
  • the screen masks 330 may be generated, for example, during run-time.
  • the example screen mask generating component 305 facilitates the generating of the screen masks 330.
  • the screen mask generating component 305 may be part of an application space (sometimes referred to as a “user space” ) of the processing unit 120.
  • the application space may include software application (s) and/or application framework (s) .
  • software application (s) may include operating systems, media applications, graphical applications, office suite applications, etc.
  • Application framework (s) may include frameworks that may be used with one or more software applications, such as libraries, services (e.g., display services, input services, etc. ) , application program interfaces (APIs) , etc.
  • the screen mask generating component 305 may generate the respective screen masks 330 based on the shape of the display 320. For example, for a circular-shaped display, the screen mask generating component 305 may use one or more radii to determine the inner portion, the outer portion, and the edge portion of the screen masks 330. For example, the screen mask generating component 305 may designate locations outside a first radius as the outer portion of the screen mask, may designate locations within a second radius as the inner portion of the screen mask, and may designate locations between the first radius and the second radius as the edge portion.
  • the screen mask generating component 305 may use one radius to determine the portions of the screen masks 330. For example, the screen mask generating component 305 may designate locations that are completely outside a radius (e.g., locations that are outside the radius and do not overlap with the radius) as the outer portion of the screen mask, may designate locations that are completely within the radius (e.g., locations that are within the radius and do not overlap with the radius) as the inner portion of the screen mask, and may designate locations that overlap with the radius as the edge portion.
  • the screen mask generating component 305 may assign the locations associated with the inner portion of the screen masks 330 with a first value (e.g., a “1” indicating that the location corresponds to a visible area) , and may assign the locations associated with the outer portion of the screen masks 330 with a second value (e.g., a “0” indicating that the location corresponds to a non-visible area) .
  • the example screen mask generating component 305 may then split the locations of the edge portion of the first screen mask 330a into two sets. In some examples, the screen mask generating component 305 may randomly split the locations into the two sets.
  • the screen mask generating component 305 may use a pattern to split the locations into the two sets.
  • the quantity of locations in each of the two sets may be the same and/or within a threshold quantity of each other.
  • the quantity of locations in each of the two sets may be randomly selected.
  • the quantity of locations in the first set may be randomly selected from a range between zero locations and a total quantity of locations of the edge portion, and the quantity of locations in the second set may be the remaining locations of the edge portion.
  • Other examples may use additional or alternative techniques for splitting the locations of the edge portion of the first screen mask 330a into the two sets.
  • the screen mask generating component 305 may assign the locations of the first one of the two sets to a first value and assign the locations of the second one of the two sets to a second value. The screen mask generating component 305 may then assign the opposite value to the corresponding locations of the second screen mask 330b.
  • For example, if the screen mask generating component 305 assigns a location of the edge portion of the first screen mask 330a the first value (e.g., a “1” indicating that the location corresponds to a visible area), then the screen mask generating component 305 assigns the corresponding location of the edge portion of the second screen mask 330b the second value (e.g., a “0” indicating that the location corresponds to a non-visible area).
  • It should be appreciated that the shape of the first screen mask 330a and the second screen mask 330b is the same (e.g., circular) and that the corresponding locations of the inner portion and outer portion of the first screen mask 330a and the second screen mask 330b are also the same. It should also be appreciated that the quantity of locations that differ between the first screen mask 330a and the second screen mask 330b (e.g., the locations corresponding to the edge portion) may be relatively small compared to the overall quantity of locations of the screen masks 330.
  • the screen mask generating component 305 may store the generated screen masks 330 in the system memory 124.
  • the screen mask generating component 305 may generate the respective screen masks 330 based on a shape of the display 320 during run-time.
  • For example, an application and/or an operating system (e.g., operating in the application space) may facilitate the generating of the screen masks 330 during run-time.
  • the screen mask generating component 305 may determine the visible area (s) and the non-visible area (s) of the display 320 and then determine the location and values to assign to each of the locations of the inner portions, the outer portions, and the edge portions of each screen mask 330.
  • the display processor 127 may be configured to operate functions of the display client 131. In some examples, the display processor 127 may perform post-processing of image data provided by the processing unit 120. The display processor 127 may be configured to cause the display client 131 to display image frames and to display the image frames based on one or more visible area (s) and/or one or more non-visible area (s) defined by screen masks. The display processor 127 may output image data to the display client 131 according to an interface protocol, such as, for example, the MIPI DSI (Mobile Industry Processor Interface, Display Serial Interface) .
  • the display client 131 includes the display controller 310, the buffer 315, and the display 320.
  • the display controller 310 may receive image data from the display processor 127 and store the received image data in the buffer 315.
  • the display controller 310 may output the image data stored in the buffer 315 to the display 320.
  • the buffer 315 may represent a local memory to the display client 131.
  • the display controller 310 may output the image data received from the display processor 127 to the display 320.
  • the display client 131 may be configured in accordance with MIPI DSI standards.
  • the MIPI DSI standard supports a video mode and a command mode.
  • In examples where the display client 131 is operating in video mode, the display processor 127 may continuously refresh the graphical content of the display client 131. For example, the entire graphical content may be refreshed per refresh cycle (e.g., line-by-line).
  • In examples where the display client 131 is operating in command mode, the display processor 127 may write the graphical content of a frame to the buffer 315. In some such examples, the display processor 127 may not continuously refresh the graphical content of the display client 131. Instead, the display processor 127 may use a vertical synchronization (Vsync) pulse to coordinate rendering and consuming of graphical content at the buffer 315. For example, when a Vsync pulse is generated, the display processor 127 may output new graphical content to the buffer 315. Thus, the generating of the Vsync pulse may indicate when current graphical content at the buffer 315 has been rendered.
  • the edge portion smoothing component 198 may obtain a screen mask from the system memory 124 and transmit the obtained screen mask to the display client 131 (e.g., via the display processor 127) .
  • the edge portion smoothing component 198 may obtain the first screen mask 330a from the system memory 124 and transmit the first screen mask 330a to the display client 131.
  • the edge portion smoothing component 198 may then obtain image data for an image frame and transmit the image data for presentment via the display client 131.
  • the example edge portion smoothing component 198 may then alternate the obtaining and transmitting of the first and second screen masks 330 prior to the transmitting of image data for a sequence of frames.
  • the edge portion smoothing component 198 may modify the obtained image data based on a respective screen mask prior to transmitting the image data for presentment. For example, prior to transmitting image data for a first image frame, the edge portion smoothing component 198 may modify the image data based on the first screen mask 330a obtained prior to the obtaining of the image data for the first image frame. In some examples, the edge portion smoothing component 198 may modify the image data based on the value of the screen mask at the respective locations. For example, the edge portion smoothing component 198 may change the image data corresponding to the locations associated with the second value (e.g., a “0” indicating that the respective location corresponds to a non-visible area) to a black value or a null value.
  • the edge portion smoothing component 198 may discard portions of the image data for the image frame that correspond to the second value. In some examples, by changing the image data (e.g., to a black value or a null value) or by discarding portions of the image data, the quantity of information being transmitted from the processing unit 120 for presentment may be reduced, thereby reducing the load on a communication interface and/or reducing the load of the processing unit 120.
  • the processing unit 120 may transmit the image data without modifying the image data prior to transmittal. It should be appreciated that in some such examples, the display client 131 may then perform the applying of the screen mask to the image data. For example, the display controller 310 may modify the image data based on a screen mask. For example the display controller 310 may turn on or turn off certain pixel elements of the display 320 based on the values of the screen mask.
  • the screen masks 330 may be stored locally at the display client 131.
  • the screen masks 330 may be hard-coded screen masks that are stored in a local memory of the display client 131 (e.g., the buffer 315) .
  • the display client 131 may alternate which screen mask 330 to apply to received image data. For example, for the presentment of first image data, the display client 131 may obtain the first screen mask 330a from a local memory (e.g., the example buffer 315) , for the presentment of second image data, the display client 131 may obtain the second screen mask 330b from the local memory, etc. In this manner, the example display client may be able to apply screen masks 330 without waiting for transmission of the respective screen masks 330 from the processing unit 120.
  • the display client 131 may include hard-coded screen masks, but may also receive (e.g., periodically, aperiodically, or as a one-time event) screen masks from the processing unit 120. For example, for a particular sequence of frames, the processing unit 120 may determine that one or two different screen masks are to be used than what is hard-coded at the display client 131. For example, while display client 131 may include two hard-coded screen masks that are based on the shape of the display 320, the processing unit 120 may determine, during run-time, to use different screen masks, for example, based on an operating state of the display client 131.
  • the processing unit 120 may determine to use screen masks that include relatively smaller visible area(s) for the presentment of limited information (e.g., a current time, a date, a battery status indicator, etc.).
  • the processing unit 120 may first signal to the display client 131 that the processing unit 120 is transmitting screen masks for use by the display client 131 for certain image data prior to the transmitting of the respective screen masks.
  • the display client 131 may process information as received and determine whether to apply a hard-coded screen mask or to apply a screen mask provided by the processing unit 120. For example, the display client 131 may receive information from the processing unit 120 and determine whether the information corresponds to a screen mask or to image data. In some such examples, if the received information corresponds to a screen mask, then the display client 131 may use the screen mask for the subsequently received image data. For example, the display client 131 may turn on or turn off certain pixel elements of the display 320 based on the received screen mask. In some examples, the display client 131 may temporarily store the received screen mask in a local memory (e.g., in the buffer 315) .
  • the display client 131 may determine whether a screen mask was received prior to the receipt of the image data. For example, if the display client 131 determines that a screen mask was received prior to the receipt of the image data, then the display client 131 may apply the received screen mask to the received image data. In some examples, if the display client 131 determines that a screen mask was not received prior to the receipt of the image data, then the display client 131 may obtain a locally stored screen mask (e.g., a hard-coded screen mask or a screen mask previously provided to the display client 131 by the processing unit 120 and stored in the buffer 315) for applying to the received image data.
  • the processing unit 120 may modify image data prior to transmitting the image data to the display client 131.
  • the display client 131 may use a screen mask to decode the received image data.
  • the processing unit 120 may discard portions of the image data based on a screen mask.
  • the display client 131 may use the same screen mask (either received from the processing unit 120 or obtained from local memory) to determine to which locations of the display 320 the received image data corresponds.
  • the processing unit 120 may apply a screen mask that reduces the image data transmitted to the display client 131 from information for 100 pixels to information for ten pixels.
  • the display client 131 may use the screen mask to determine to which ten pixels the received image data corresponds and then proceed with the presentment of the image data.
  • FIG. 4 illustrates an example first screen mask 400 and an example second screen mask 450.
  • Aspects of the first screen mask 400 may be implemented by the first screen mask 330a of FIG. 3.
  • Aspects of the second screen mask 450 may be implemented by the second screen mask 330b of FIG. 3.
  • the first screen mask 400 includes locations 405 that correspond to the outer portion of the first screen mask 400, includes locations 410 that correspond to the inner portion of the first screen mask 400, and includes locations 415 that correspond to the edge portion of the first screen mask 400.
  • the example second screen mask 450 includes locations 455 that correspond to the outer portion of the second screen mask 450, includes locations 460 that correspond to the inner portion of the second screen mask 450, and includes locations 465 that correspond to the edge portion of the second screen mask 450.
  • the locations 405, 455 corresponding to the outer portions of the screen masks 400, 450, respectively, are assigned the second value (e.g., a “0” to indicate that the respective locations correspond to a non-visible area), and the locations 410, 460 corresponding to the inner portions of the screen masks 400, 450, respectively, are assigned the first value (e.g., a “1” to indicate that the respective locations correspond to a visible area).
  • the locations 415 corresponding to the edge portion of the first screen mask 400 are divided into two sets 415a, 415b.
  • the locations 415 associated with the first set 415a are assigned the first value (e.g., a “1” ) and the locations 415 associated with the second set 415b are assigned the second value (e.g., a “0” ) .
  • the locations 465 corresponding to the edge portion of the second screen mask 450 are assigned the opposite value as assigned to the corresponding location of the first screen mask 400.
  • locations of the second screen mask 450 that correspond to locations of the first set of locations 415a of the first screen mask 400 are assigned the second value (e.g., a “0”), and the locations of the second screen mask 450 that correspond to locations of the second set of locations 415b of the first screen mask 400 are assigned the first value (e.g., a “1”).
  • the values of the locations 415, 465 of the edge portions of the screen masks 400, 450, respectively, are assigned in an alternating pattern.
  • additional or alternative techniques for assigning the values to the locations 415, 465 may be used.
  • the locations 415 selected for the first set of locations 415a may be randomly selected.
  • the quantity of locations included in the first set of locations 415a and the quantity of locations included in the second set of locations 415b may be the same or within a threshold quantity (e.g., within 1, 2, 3, etc. locations) .
  • the quantity of locations included in the first set of locations 415a and the quantity of locations included in the second set of locations 415b may be random and, thus, the respective quantities may not be within any threshold quantity.
  • FIG. 5 illustrates an example timing diagram 500, in accordance with one or more techniques of this disclosure. More specifically, FIG. 5 displays a timing diagram 500 of the vertical synchronization (Vsync) pulses 505 for the display client 131 of FIGs. 1 and/or 3. FIG. 5 also shows a transmission sequence 550 of screen masks 555 and image data 560 for presentment by the display client 131. For example, the transmission sequence 550 may correspond to transmissions by the processing unit 120 of FIGs. 1 and/or 3 to the display client 131.
  • the screen masks 555 are transmitted prior to the occurrence of the respective Vsync pulses 505 and the image data 560 are transmitted after the occurrence of the respective Vsync pulses 505.
  • For example, prior to a start of a first Vsync pulse 505a, the processing unit 120 may transmit a first screen mask 555a, and after the start of the first Vsync pulse 505a, the processing unit 120 may transmit first image data 560a for presentment via the display client 131.
  • the processing unit 120 may then transmit a second screen mask 555b prior to a start of a second Vsync pulse 505b and transmit second image data 560b for presentment via the display client 131 after the start of the second Vsync pulse 505b. Similarly, prior to a start of a third Vsync pulse 505c, the processing unit 120 may transmit a third screen mask 555c, and after the start of the third Vsync pulse 505c, the processing unit 120 may transmit third image data 560c for presentment via the display client 131.
  • the processing unit 120 may then transmit a fourth screen mask 555d prior to a start of a fourth Vsync pulse 505d and transmit fourth image data 560d for presentment via the display client 131 after the start of the fourth Vsync pulse 505d.
  • the first screen mask 555a and the third screen mask 555c are the same screen mask (e.g., a “mask A, ” which may be implemented by the first screen mask 330a of FIG. 3 and/or the first screen mask 400 of FIG. 4)
  • the second screen mask 555b and the fourth screen mask 555d are the same screen mask (e.g., a “mask B, ” which may be implemented by the second screen mask 330b of FIG. 3 and/or the second screen mask 450 of FIG. 4) .
  • the processing unit 120 is alternating the transmitting of the screen masks 555 to the display client 131.
  • the screen masks 555 are used to modify the following image data 560 and, thus, the respective edge portions of the presentment of the image data 560 may be different, as disclosed herein. More specifically, the alternating use of the screen masks 555 may result in different edges of the image data 560 when presented via the display client 131. However, as shown in FIG. 5, as the transmitting of the image data 560 is synchronized with the occurrences of the Vsync pulses 505 and because of the delay in visual perception of the human eye, the edges of the image data 560, when presented via the display client 131, appear to be generally smooth.
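  • the sequence of FIG. 5 thus reduces to a per-frame loop that alternates the two masks, sending each mask before its Vsync pulse and the matching image data after it. A simplified Python sketch follows; `send_mask`, `send_image_data`, and `wait_for_vsync` are hypothetical placeholders for whatever link-level primitives a real implementation would provide.

```python
def run_transmission_sequence(frames, mask_a, mask_b,
                              send_mask, send_image_data, wait_for_vsync):
    """Alternate two screen masks across a sequence of frames.

    Each screen mask is transmitted before the corresponding Vsync
    pulse, and the image data for that frame is transmitted after
    the pulse, mirroring the transmission sequence 550 of FIG. 5.
    """
    masks = (mask_a, mask_b)
    for index, image_data in enumerate(frames):
        send_mask(masks[index % 2])  # mask A on even frames, mask B on odd
        wait_for_vsync()             # start of the Vsync pulse
        send_image_data(image_data)  # image data follows the pulse
```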
  • the sequence of events of the transmission sequence 550 may correspond to a sequence of events performed by the display client 131.
  • the screen masks 555 may be hard-coded at the display client 131.
  • the display client 131 may alternate which screen mask 555 to apply to the received image data 560.
  • the display client 131 may obtain the first screen mask 555a from a local memory (e.g., the example buffer 315 of FIG. 3) , and the display client 131 may then obtain the second screen mask 555b from the local memory, etc. In this manner, the example display client 131 is able to apply screen masks 555 without waiting for transmission of the respective screen masks 555 from the processing unit 120.
  • the display client 131 may include hard-coded screen masks, but may also receive (e.g., periodically, aperiodically, or as a one-time event) screen masks from the processing unit 120. For example, for a particular sequence of frames, the processing unit 120 may determine that one or two screen masks different than what is hard-coded at the display client 131 are to be used. For example, an application may define a visible area that is different than the visible areas defined by the hard-coded screen masks. In some such examples, the processing unit 120 may first signal to the display client 131 that the processing unit 120 is transmitting screen masks for use by the display client 131 for certain image data prior to the transmitting of the respective screen masks.
  • the display client 131 may process information as received and determine whether to apply a hard-coded screen mask or to apply a screen mask provided by the processing unit 120. For example, the display client 131 may receive information from the processing unit 120 and determine whether the information corresponds to a screen mask or to image data. In some such examples, if the received information corresponds to a screen mask, then the display client 131 may use the screen mask for the subsequently received image data. For example, the display client 131 may turn on or off certain pixel elements of the display 320 based on the received screen mask. In some examples, the display client 131 may temporarily store the received screen mask in a local memory (e.g., in the buffer 315 of FIG. 3) .
  • the display client 131 may determine whether a screen mask was received prior to the receipt of the image data. For example, if the display client 131 determines that a screen mask was received prior to the receipt of the image data, then the display client 131 may apply the received screen mask to the received image data. In some examples, if the display client 131 determines that a screen mask was not received prior to the receipt of the image data, then the display client 131 may obtain a locally stored screen mask (e.g., a hard-coded screen mask or a screen mask previously provided to the display client 131 by the processing unit 120) for applying to the received image data.
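  • the decision just described reduces to a small lookup, as the following hedged Python sketch shows; the argument names and the parity-based alternation between locally stored masks are assumptions for illustration.

```python
def select_mask_for_frame(received_mask, local_masks, frame_index):
    """Choose the screen mask to apply to the next image data.

    A screen mask received from the processing unit for this frame
    takes priority; otherwise a locally stored (e.g., hard-coded)
    mask is used, alternating between the stored masks by frame
    parity so the edge portion still changes every frame.
    """
    if received_mask is not None:
        return received_mask
    return local_masks[frame_index % len(local_masks)]
```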
  • FIG. 6 illustrates an example flowchart 600 of an example method in accordance with one or more techniques of this disclosure.
  • the method may be performed by an apparatus, such as the device 104 of FIG. 1, the processing unit 120 of FIGs. 1 and/or 3, the display processor 127 of FIGs. 1 and/or 3, a DPU, a GPU, a video processor, and/or a component of the processing unit 120.
  • the apparatus may obtain a first screen mask associated with a display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may obtain the first screen mask 330a from the system memory 124 of FIGs. 1 and/or 3.
  • the first screen mask 330a may define one or more visible area (s) of the display 320 and may also define one or more non-visible area (s) of the display 320.
  • the apparatus may transmit the first mask to the display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may transmit the first screen mask 330a to the display client 131 (e.g., via the display processor 127) .
  • the apparatus may generate an image packet based on the first screen mask and first image data of a sequence of frames for presentment via the display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may generate an image packet for first image data by modifying the first image data based on the first screen mask 330a.
  • the processing unit 120 may modify pixel information of the image data, discard portions of the image data, etc. to generate the image packet.
  • the apparatus may transmit the generated image packet to the display for presentment, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may transmit the modified image data to the display client 131 (e.g., via the display processor 127) .
  • the apparatus may obtain a second screen mask associated with the display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may obtain the second screen mask 330b from the system memory 124.
  • the second screen mask 330b may define one or more visible area (s) of the display 320 and/or one or more non-visible area (s) of the display 320 that is different than the visible area (s) and/or non-visible area (s) of the display 320 defined by the first screen mask 330a.
  • the first screen mask 330a and the second screen mask 330b define one or more different visible area (s) and/or non-visible area (s) of the display 320
  • the general shape of the first and second screen masks 330 is the same.
  • the shape of the visible area of the first screen mask 400 is circular and the shape of the visible area of the second screen mask 450 is also circular.
  • the first screen mask and the second screen mask may be hard-coded screen masks.
  • the first screen mask and the second screen mask may be generated during run-time (e.g., by the screen mask generating component 305 of FIG. 3) .
  • the apparatus may transmit the second mask to the display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may transmit the second screen mask 330b to the display client 131 (e.g., via the display processor 127) .
  • the apparatus may generate an image packet based on the second screen mask and second image data of the sequence of frames for presentment via the display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may generate an image packet for second image data by modifying the second image data based on the second screen mask 330b.
  • the processing unit 120 may modify pixel information of the image data, discard portions of the image data, etc. to generate the image packet.
  • the apparatus may transmit the generated image packet to the display for presentment, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may transmit the modified image data to the display client 131 (e.g., via the display processor 127) .
  • control may then return to 602 to obtain the first screen mask to transmit to the display and for generating another image packet for transmitting to the display.
  • the apparatus alternates the transmitting of the first and second screen masks 330 to the display client 131 and also alternates the edge portions of the image data transmitted for presentment by the display client 131.
  • the alternating of the edge portions of the presented image data along with the delay in visual perception by the human eye and the relatively fast display refresh rate associated with the display 320 (e.g., 60fps, 90fps, 120fps, etc. ) , may result in edge portions that appear generally smooth when viewed by the human eye.
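  • as one possible reading of the packet-generation steps above, applying a mask can be sketched as blanking the non-visible pixels of a frame before transmission. The minimal Python example below assumes the mask and frame are equally sized 2-D arrays; a real packetizer might instead drop the non-visible pixels entirely to reduce the load on the communication interface, as the disclosure also contemplates.

```python
def generate_image_packet(image_data, screen_mask):
    """Apply a screen mask to one frame of image data.

    Pixels at visible locations (mask value 1) keep their values;
    pixels at non-visible locations (mask value 0) are blanked.
    """
    return [
        [pixel if visible else 0 for pixel, visible in zip(row, mask_row)]
        for row, mask_row in zip(image_data, screen_mask)
    ]
```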
  • FIG. 7 illustrates an example flowchart 700 of an example method in accordance with one or more techniques of this disclosure.
  • the method may be performed by an apparatus, such as the device 104 of FIG. 1, the processing unit 120 of FIGs. 1 and/or 3, the display processor 127 of FIGs. 1 and/or 3, a DPU, a GPU, a video processor, and/or a component of the processing unit 120.
  • the apparatus may identify locations of a first screen mask corresponding to an inner portion, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may identify the locations 410 of the first screen mask 400.
  • the locations corresponding to the inner portion may be locations of the screen mask that are completely within a radius (e.g., within the perimeter 210 of FIG. 2) , that are within a first radius, etc.
  • the apparatus may assign the locations corresponding to the inner portion of the first screen mask and a second screen mask with a first value, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may assign the locations 410 of the first screen mask 400 and the locations 460 of the second screen mask 450 with the first value (e.g., a “1” ) .
  • the apparatus may identify locations of the first screen mask corresponding to an outer portion, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may identify the locations 405 of the first screen mask 400.
  • the locations corresponding to the outer portion may be locations of the screen mask that are completely outside of a radius (e.g., outside the perimeter 210 of FIG. 2) , that are outside a second radius, etc.
  • the apparatus may assign the locations corresponding to the outer portion of the first screen mask and the second screen mask with a second value, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may assign the locations 405 of the first screen mask 400 and the locations 455 of the second screen mask 450 with the second value (e.g., a “0” ) .
  • the apparatus may identify locations of the first screen mask corresponding to an edge portion, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may identify the locations 415 of the first screen mask 400.
  • the locations corresponding to the edge portion may be locations of the screen mask that overlap with a radius (e.g., that overlap with the perimeter 210 of FIG. 2) , that are between a first radius and a second radius, etc.
  • the apparatus may divide the locations corresponding to the edge portion into a first set and a second set, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may divide the locations 415 into a first set 415a and a second set 415b.
  • the processing unit 120 may randomly select the locations for the respective sets 415a, 415b.
  • the processing unit 120 may use an algorithm to divide the locations into the first set 415a and the second set 415b.
  • the quantity of locations within the first set 415a and the second set 415b may be the same or within a threshold quantity.
  • the quantity of locations within the first set 415a and the second set 415b may be randomly selected.
  • the apparatus may assign the locations of the first set of the first screen mask with the first value, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may assign the locations of the first set 415a of the first screen mask 400 with the first value (e.g., a “1” ) .
  • the apparatus may assign the locations of the second set of the first screen mask with the second value, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may assign the locations of the second set 415b of the first screen mask 400 with the second value (e.g., a “0” ) .
  • the apparatus may assign the locations of the second screen mask corresponding to the first set and the second set with the opposite values as assigned in the first screen mask, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the processing unit 120 may assign the locations 465a of the second screen mask 450 that correspond to the first set 415a of the first screen mask 400 with the second value (e.g., a “0” ) .
  • the processing unit 120 may also assign the locations 465b of the second screen mask 450 that correspond to the second set 415b of the first screen mask 400 with the first value (e.g., a “1” ) .
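  • the identification steps of FIG. 7 classify each mask location by its geometric relation to the display perimeter: completely inside, completely outside, or overlapping. One way to sketch this for a circular perimeter is shown below; the unit-square cell model and the clamping-based nearest-point test are assumptions for illustration.

```python
import math

def classify_locations(width, height, cx, cy, radius):
    """Classify each unit-square mask location against a circular perimeter.

    A cell is 'inner' if even its farthest corner lies within the
    radius, 'outer' if its nearest point lies beyond the radius, and
    'edge' if it overlaps the perimeter.
    """
    inner, outer, edge = [], [], []
    for row in range(height):
        for col in range(width):
            # Farthest corner of the cell from the circle center.
            farthest = max(
                math.hypot(x - cx, y - cy)
                for x in (col, col + 1)
                for y in (row, row + 1)
            )
            # Nearest point of the cell to the center, found by clamping
            # the center coordinates into the cell's bounds.
            nearest = math.hypot(
                min(max(cx, col), col + 1) - cx,
                min(max(cy, row), row + 1) - cy,
            )
            if farthest <= radius:
                inner.append((row, col))
            elif nearest >= radius:
                outer.append((row, col))
            else:
                edge.append((row, col))
    return inner, outer, edge
```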
  • FIG. 8 illustrates an example flowchart 800 of an example method in accordance with one or more techniques of this disclosure.
  • the method may be performed by an apparatus, such as the device 104 of FIG. 1, the display client 131 of FIGs. 1 and/or 3, a DPU, a GPU, a video processor, and/or a component of the display client 131.
  • the apparatus may receive information corresponding to a screen mask or to image data, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the display client 131 may receive information (e.g., an information packet) from the processing unit 120 (e.g., via the display processor 127) .
  • the apparatus may determine whether the received information corresponds to a screen mask or to image data, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the apparatus may modify the display based on the screen mask, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the display client 131 may turn on or turn off pixels of the display 320 based on the screen mask.
  • the apparatus may receive image data, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the display client 131 may receive image data after the Vsync pulse 505 of FIG. 5.
  • the apparatus may display the image data based on the screen mask, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the display client 131 may cause the display 320 to present the image data.
  • the apparatus may determine whether a screen mask associated with the presentment of the image data was received, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the display client 131 may have received the screen mask 555 prior to the start of the Vsync pulse 505 of FIG. 5.
  • control proceeds to 816 to modify the display based on the screen mask.
  • the apparatus may obtain a screen mask from local memory, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the display client 131 may obtain a screen mask from the buffer 315 of FIG. 3.
  • the screen mask obtained from the local memory may be a hard-coded screen mask.
  • the screen mask obtained from the local memory may be a screen mask that was previously provided to the display client 131 and stored by the display client in the local memory.
  • the apparatus may modify the display based on the screen mask, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the display client 131 may turn on or turn off pixels of the display 320 based on the screen mask.
  • the apparatus may display the image data based on the screen mask, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
  • the display client 131 may cause the display 320 to present the image data.
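  • putting the steps of FIG. 8 together, the display client's behavior can be sketched as a receive loop that dispatches on the packet type and falls back to a locally stored screen mask when none was received before the image data. The packet interface and helper names below are hypothetical, not taken from the disclosure.

```python
def client_receive_loop(receive_packet, display, local_masks):
    """Sketch of the display-client method of FIG. 8.

    receive_packet is assumed to yield (kind, payload) tuples, where
    kind is "mask" or "image". A mask received before image data is
    applied to that data; otherwise a locally stored mask is used.
    """
    pending_mask = None
    frame_index = 0
    while True:
        kind, payload = receive_packet()
        if kind == "mask":
            pending_mask = payload  # remember for the next image data
            continue
        # Image data: use the received mask, or fall back to local memory.
        if pending_mask is not None:
            mask = pending_mask
            pending_mask = None
        else:
            mask = local_masks[frame_index % len(local_masks)]
        display.apply_mask(mask)   # e.g., turn pixel elements on or off
        display.present(payload)   # present the masked image data
        frame_index += 1
```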
  • a method or apparatus for display processing may be a processing unit, a display processor, a display processing unit (DPU) , a graphics processing unit (GPU) , a video processor, or some other processor that can perform display processing.
  • the apparatus may be the processing unit 120 within the device 104, or may be some other hardware within the device 104, or another device.
  • the apparatus may include means for obtaining a first screen mask associated with a display, the first screen mask defining a first visible area of the display.
  • the apparatus may also include means for obtaining a second screen mask associated with the display, the second screen mask defining a second visible area of the display, the first visible area being different than the second visible area.
  • the apparatus may also include means for transmitting image packets to the display for displaying of image data by the display, the image packets corresponding to image data for a sequence of frames, and each image packet based on image data for a respective frame and based on alternating of the first screen mask and the second screen mask.
  • the apparatus may include means for dividing locations corresponding to a first screen mask edge portion into a first set of locations and a second set of locations.
  • the apparatus may also include means for assigning a first value to the locations corresponding to the first set of locations.
  • the apparatus may also include means for assigning a second value to the locations corresponding to the second set of locations.
  • the apparatus may further include means for assigning the second value to locations corresponding to the second screen mask edge portion that correspond to the first set of locations of the first screen mask edge portion.
  • the apparatus may also include means for assigning the first value to locations corresponding to the second screen mask edge portion that correspond to the second set of locations of the first screen mask edge portion.
  • the apparatus may include means for assigning the first value to the locations corresponding to the first screen mask inner portion and the locations corresponding to the second screen mask inner portion.
  • the apparatus may include means for assigning the second value to the locations corresponding to the first screen mask outer portion and the locations corresponding to the second screen mask outer portion.
  • the apparatus may also include means for randomly selecting the locations corresponding to the first set of locations and the locations corresponding to the second set of locations.
  • the apparatus may also include means for randomly selecting a quantity of locations corresponding to the first set of locations.
  • the apparatus may also include means for excluding image data for locations corresponding to respective non-visible areas of the first screen mask and the second screen mask when transmitting the image packets.
  • the described display and/or graphics processing techniques can be used by a display processor, a display processing unit (DPU) , a GPU, or a video processor or some other processor that can perform display processing to implement the edge-portion smoothing techniques disclosed herein. This can also be accomplished at a low cost compared to other display or graphics processing techniques.
  • the display or graphics processing techniques herein can improve or speed up data processing or execution. Further, the display or graphics processing techniques herein can improve resource or data utilization and/or resource efficiency. For example, aspects of the present disclosure can reduce the load of communication interfaces and/or reduce the load of a processing unit.
  • the term “or” may be interpreted as “and/or” where context does not dictate otherwise. Additionally, while phrases such as “one or more” or “at least one” or the like may have been used for some features disclosed herein but not others, the features for which such language was not used may be interpreted to have such a meaning implied where context does not dictate otherwise.
  • the functions described herein may be implemented in hardware, software, firmware, or any combination thereof.
  • the term “processing unit” has been used throughout this disclosure; such processing units may be implemented in hardware, software, firmware, or any combination thereof. If any function, processing unit, technique described herein, or other module is implemented in software, the function, processing unit, technique described herein, or other module may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media may include computer data storage media or communication media including any medium that facilitates transfer of a computer program from one place to another. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory or (2) a communication medium such as a signal or carrier wave.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure.
  • such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices.
  • Disk and disc, as used herein, includes compact disc (CD) , laser disc, optical disc, digital versatile disc (DVD) , floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • a computer program product may include a computer-readable medium.
  • the code may be executed by one or more processors, such as one or more digital signal processors (DSPs) , general purpose microprocessors, application specific integrated circuits (ASICs) , arithmetic logic units (ALUs) , field programmable logic arrays (FPGAs) , or other equivalent integrated or discrete logic circuitry.
  • the techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs, e.g., a chip set.
  • Various components, modules or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in any hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present disclosure relates to methods and apparatus for display processing. For example, disclosed techniques facilitate smoothing edge portions of an irregularly-shaped display. Aspects of the present disclosure can obtain a first screen mask associated with a display, the first screen mask defining a first visible area of the display. Aspects of the present disclosure can also obtain a second screen mask associated with the display, the second screen mask defining a second visible area of the display, the first visible area being different than the second visible area. Further, aspects of the present disclosure can transmit image packets to the display for displaying of image data by the display, the image packets corresponding to image data for a sequence of frames, and each image packet based on image data for a respective frame and based on alternating of the first screen mask and the second screen mask.

Description

METHODS AND APPARATUS TO SMOOTH EDGE PORTIONS OF AN IRREGULARLY-SHAPED DISPLAY

TECHNICAL FIELD
The present disclosure relates generally to processing systems and, more particularly, to one or more techniques for display or graphics processing.
INTRODUCTION
Computing devices often utilize a graphics processing unit (GPU) to accelerate the rendering of graphical data for display. Such computing devices may include, for example, computer workstations, mobile phones such as so-called smartphones, embedded systems, personal computers, tablet computers, and video game consoles. GPUs execute a graphics processing pipeline that includes one or more processing stages that operate together to execute graphics processing commands and output a frame. A central processing unit (CPU) may control the operation of the GPU by issuing one or more graphics processing commands to the GPU. Modern day CPUs are typically capable of concurrently executing multiple applications, each of which may need to utilize the GPU during execution.
Portable electronic devices, including smartphones and wearable devices, may present graphical content on a display. However, with the goal of achieving increased screen-to-body ratios, there has developed an increased need for presenting graphical content on displays having irregular shapes.
SUMMARY
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In an aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. The apparatus may be a display processor, a display processing unit (DPU) , a graphics processing unit (GPU) , or a video processor. The apparatus can obtain a first screen mask associated with a display, the first screen mask defining a first visible area of the display.  The apparatus can also obtain a second screen mask associated with the display, the second screen mask defining a second visible area of the display, the first visible area being different than the second visible area. Additionally, the apparatus can transmit image packets to the display for displaying of image data by the display, the image packets corresponding to image data for a sequence of frames, and each image packet based on image data for a respective frame and based on alternating of the first screen mask and the second screen mask.
In some examples, a shape of the second visible area corresponds to a shape of the first visible area. In some examples, the first screen mask includes an inner portion, an edge portion, and an outer portion, the second screen mask includes an inner portion, an edge portion, and an outer portion. In some such examples, locations corresponding to the first screen mask inner portion are the same as locations corresponding to the second screen mask inner portion, and locations corresponding to the first screen mask outer portion are the same as locations corresponding to the second screen mask outer portion. The apparatus can also divide locations corresponding to the first screen mask edge portion into a first set of locations and a second set of locations. Also, the apparatus can assign a first value to the locations corresponding to the first set of locations. The apparatus can also assign a second value to the locations corresponding to the second set of locations. Further, the apparatus can assign the second value to locations corresponding to the second screen mask edge portion that correspond to the first set of locations of the first screen mask edge portion. The apparatus can also assign the first value to locations corresponding to the second screen mask edge portion that correspond to the second set of locations of the first screen mask edge portion. Additionally, the apparatus can assign the first value to the locations corresponding to the first screen mask inner portion and the locations corresponding to the second screen mask inner portion. The apparatus can also assign the second value to the locations corresponding to the first screen mask outer portion and the locations corresponding to the second screen mask outer portion. In some examples, the first value may indicate a visible area and the second value may indicate a non-visible area. In some examples, the locations corresponding to the first set of locations and the locations corresponding to the second set of locations may be randomly selected. In some examples, a first quantity may correspond to the locations corresponding to the first set of locations and a second quantity may correspond to the locations corresponding to the second set of locations. In some such examples, the first quantity may be within a threshold quantity of the second  quantity. In some examples, a quantity of locations corresponding to the first set of locations may be randomly selected. In some examples, the transmitted image packets may exclude image data for locations corresponding to respective non-visible areas of the first screen mask and the second screen mask.
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a block diagram that illustrates an example content generation system, in accordance with one or more techniques of this disclosure.
FIG. 2A illustrates an example image frame with an overlapping display, in accordance with one or more techniques of this disclosure.
FIG. 2B illustrates an example screen mask with an overlapping display, in accordance with one or more techniques of this disclosure.
FIG. 2C illustrates an example image frame with an applied screen mask, in accordance with one or more techniques of this disclosure.
FIG. 3 is a block diagram illustrating the example processing unit of FIG. 1, the example system memory of FIG. 1, the example display processor of FIG. 1, and the example display client of FIG. 1, in accordance with one or more techniques of this disclosure.
FIG. 4 illustrates an example first screen mask and a second screen mask, in accordance with one or more techniques of this disclosure.
FIG. 5 illustrates an example timing diagram, in accordance with one or more techniques of this disclosure.
FIGs. 6 to 8 illustrate example flowcharts of example methods, in accordance with one or more techniques of this disclosure.
DETAILED DESCRIPTION
In general, examples disclosed herein provide techniques for smoothing edge portions of an irregularly-shaped display. In some examples, an apparatus may modify image data by applying a screen mask to the image data prior to the presentment of the image data by the  irregularly-shaped display. The screen mask may correspond to the irregularly-shaped display and define visible area (s) of the display and non-visible area (s) of the display. For example, the screen mask may include an inner portion corresponding to the visible area (s) of the display (e.g., where the value of the screen mask is set to a transparent value) , an outer portion corresponding to the non-visible area (s) of the display (e.g., where the value of the screen mask is set to a black value) , and an edge portion corresponding to the locations of the display where the visible area (s) and the non-visible area (s) of the display meet. In some such examples, when the image data for presentment is modified by the screen mask, the locations of the image data corresponding to the edge portion of the screen mask may appear with a zigzag pattern or a generally non-smooth portion.
Example techniques disclosed herein perform the smoothing of the edge portion of the displayed image data by using two screen masks and alternating the applying of the two screen masks to the image data. For example, disclosed techniques may use a first screen mask prior to presentment of first image data, may use a second screen mask prior to presentment of second image data, may use the first screen mask prior to presentment of third image data, etc.
In some examples, the first screen mask and the second screen mask may be configured to include the same inner portion and the same outer portion. In some such examples, the first screen mask and the second screen mask may be configured to include different edge portions. For example, disclosed techniques may identify locations that correspond to an edge position of an irregularly-shaped display (e.g., ten locations that correspond to where the visible area (s) of the display and the non-visible area (s) of the display meet) . Example techniques may then set the value of the locations of the edge portion of the first screen mask corresponding to a first subset of the identified locations to a first value (e.g., to be transparent or corresponding to the visible area of the display) and set the value of the locations of the first screen mask corresponding to the remaining identified locations to a second value (e.g., to be black or corresponding to the non-visible area of the display) . Example techniques then set the value of the locations of the edge portion of the second screen mask as the opposite value. For example, if an edge portion location of the first screen mask is set to the first value (e.g., to be transparent) , then the corresponding edge portion location of the second screen mask may be set to the second value (e.g., to be black) .
It should be appreciated that by using two screen masks and alternating the applying of the two screen masks to the presentment of image data, disclosed techniques facilitate the smoothing of the edge portions of the presentment of the image data. Disclosed techniques take advantage of the persistence of vision of human eyes in which there is a delay in the perception of an object after light from the object changes. Furthermore, disclosed techniques take advantage of the relatively fast display refresh rate of displays (e.g., 60 frames per second (FPS) , 90 FPS, 120 FPS, etc. ) so that the changing of the edge portion location between a transparent value and a black value appears relatively smooth. That is, example techniques disclosed herein provide the functionality of alpha blending image data without the additional performance cost and power cost of performing the alpha blending.
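To see why the alternation approximates alpha blending, consider an idealized temporal-averaging model of perception (an illustrative assumption, not a formula from the disclosure): an edge location that toggles between a pixel value v and black on successive frames is perceived at roughly the frame-average value

\bar{v} \approx \frac{v + 0}{2} = 0.5\,v ,

which matches the result of alpha blending that edge pixel at \alpha = 0.5, but is obtained here without a per-frame blending pass.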
Various aspects of systems, apparatuses, computer program products, and methods are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of this disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of this disclosure is intended to cover any aspect of the systems, apparatuses, computer program products, and methods disclosed herein, whether implemented independently of, or combined with, other aspects of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. Any aspect disclosed herein may be embodied by one or more elements of a claim.
Although various aspects are described herein, many variations and permutations of these aspects fall within the scope of this disclosure. Although some potential benefits and advantages of aspects of this disclosure are mentioned, the scope of this disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of this disclosure are intended to be broadly applicable to different wireless technologies, system configurations, networks, and transmission protocols, some of which are illustrated by way of example in the figures and in the following description. The detailed description and drawings  are merely illustrative of this disclosure rather than limiting, the scope of this disclosure being defined by the appended claims and equivalents thereof.
Several aspects are presented with reference to various apparatus and methods. These apparatus and methods are described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, and the like (collectively referred to as “elements” ) . These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors (which may also be referred to as processing units) . Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs) , general purpose GPUs (GPGPUs) , central processing units (CPUs) , application processors, digital signal processors (DSPs) , reduced instruction set computing (RISC) processors, systems-on-chip (SOC) , baseband processors, application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , programmable logic devices (PLDs) , state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software can be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The term application may refer to software. As described herein, one or more techniques may refer to an application, i.e., software, being configured to perform one or more functions. In such examples, the application may be stored on a memory, e.g., on-chip memory of a processor, system memory, or any other memory. Hardware described herein, such as a processor may be configured to execute the application. For example, the application may be described as including code that, when executed by the hardware, causes the hardware to perform one or more techniques described herein. As an example, the hardware may access the code from a memory and execute the code  accessed from the memory to perform one or more techniques described herein. In some examples, components are identified in this disclosure. In such examples, the components may be hardware, software, or a combination thereof. The components may be separate components or sub-components of a single component.
Accordingly, in one or more examples described herein, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random access memory (RAM) , a read-only memory (ROM) , an electrically erasable programmable ROM (EEPROM) , optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
In general, examples disclosed herein provide techniques for smoothing the edge portion of image data for presentment via an irregularly-shaped display. Example techniques may improve the rendering of graphical content, reducing the load on a communication interface (e.g., a bus) , and/or reducing the load of a processing unit (e.g., any processing unit configured to perform one or more techniques disclosed herein, such as a GPU, a DPU, and the like) . For example, this disclosure describes techniques for graphics and/or display processing in any device that utilizes a display. Other example benefits are described throughout this disclosure.
As used herein, instances of the term “content” may refer to “graphical content, ” “image, ” and vice versa. This is true regardless of whether the terms are being used as an adjective, noun, or other parts of speech. In some examples, as used herein, the term “graphical content” may refer to content produced by one or more processes of a graphics processing pipeline. In some examples, as used herein, the term “graphical content” may refer to content produced by a processing unit configured to perform graphics processing. In some examples, as used herein, the term “graphical content” may refer to content produced by a graphics processing unit.
In some examples, as used herein, the term “display content” may refer to content generated by a processing unit configured to perform display processing. In some examples, as used  herein, the term “display content” may refer to content generated by a display processing unit. Graphical content may be processed to become display content. For example, a graphics processing unit may output graphical content, such as a frame, to a buffer (which may be referred to as a framebuffer) . A display processing unit may read the graphical content, such as one or more frames from the buffer, and perform one or more display processing techniques thereon to generate display content. For example, a display processing unit may be configured to perform composition on one or more rendered layers to generate a frame. As another example, a display processing unit may be configured to compose, blend, or otherwise combine two or more layers together into a single frame. A display processing unit may be configured to perform scaling, e.g., upscaling or downscaling, on a frame. In some examples, a frame may refer to a layer. In other examples, a frame may refer to two or more layers that have already been blended together to form the frame, i.e., the frame includes two or more layers, and the frame that includes two or more layers may subsequently be blended.
FIG. 1 is a block diagram that illustrates an example content generation system 100 configured to implement one or more techniques of this disclosure. The content generation system 100 includes a device 104. The device 104 may include one or more components or circuits for performing various functions described herein. In some examples, one or more components of the device 104 may be components of an SOC. The device 104 may include one or more components configured to perform one or more techniques of this disclosure. In the example shown, the device 104 may include a processing unit 120 and a system memory 124. In some examples, the device 104 can include a number of additional or alternative components, e.g., a communication interface 126, a transceiver 132, a receiver 128, a transmitter 130, a display processor 127, and a display client 131. Reference to the display client 131 may refer to one or more displays. For example, the display client 131 may include a single display or multiple displays. The display client 131 may include a first display and a second display. In further examples, the results of the graphics processing may not be displayed on the device, e.g., the first and second displays may not receive any frames for presentment thereon. Instead, the frames or graphics processing results may be transferred to another device. In some aspects, this can be referred to as split-rendering.
The processing unit 120 may include an internal memory 121. The processing unit 120 may be configured to perform graphics processing, such as in a graphics processing pipeline 107.  In some examples, the device 104 may include a display processor, such as the display processor 127, to perform one or more display processing techniques on one or more frames generated by the processing unit 120 before presentment by the display client 131. The display processor 127 may be configured to perform display processing. For example, the display processor 127 may be configured to perform one or more display processing techniques on one or more frames generated by the processing unit 120. The display client 131 may be configured to display or otherwise present frames processed by the display processor 127. In some examples, the display client 131 may include one or more of: a liquid crystal display (LCD) , a plasma display, an organic light emitting diode (OLED) display, a projection display device, an augmented reality display device, a virtual reality display device, a head-mounted display, or any other type of display device.
Memory external to the processing unit 120, such as system memory 124, may be accessible to the processing unit 120. For example, the processing unit 120 may be configured to read from and/or write to external memory, such as the system memory 124. The processing unit 120 may be communicatively coupled to the system memory 124 over a bus. In some examples, the processing unit 120 and the system memory 124 may be communicatively coupled to each other over the bus or a different connection.
It should be appreciated that in some examples, the device 104 may include a content encoder/decoder configured to receive graphical and/or display content from any source, such as the system memory 124 and/or the communication interface 126. The system memory 124 may be configured to store received encoded or decoded content. In some examples, the content encoder/decoder may be configured to receive encoded or decoded content, e.g., from the system memory 124 and/or the communication interface 126, in the form of encoded pixel data. In some examples, the content encoder/decoder may be configured to encode or decode any content.
The internal memory 121 or the system memory 124 may include one or more volatile or non-volatile memories or storage devices. In some examples, internal memory 121 or the system memory 124 may include RAM, SRAM, DRAM, erasable programmable ROM (EPROM) , electrically erasable programmable ROM (EEPROM) , flash memory, a magnetic data media or an optical storage media, or any other type of memory.
The internal memory 121 or the system memory 124 may be a non-transitory storage medium according to some examples. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that internal memory 121 or the system memory 124 is non-movable or that its contents are static. As one example, the system memory 124 may be removed from the device 104 and moved to another device. As another example, the system memory 124 may not be removable from the device 104.
The processing unit 120 may be a central processing unit (CPU) , a graphics processing unit (GPU) , a general purpose GPU (GPGPU) , or any other processing unit that may be configured to perform graphics processing. In some examples, the processing unit 120 may be integrated into a motherboard of the device 104. In some examples, the processing unit 120 may be present on a graphics card that is installed in a port in a motherboard of the device 104, or may be otherwise incorporated within a peripheral device configured to interoperate with the device 104. The processing unit 120 may include one or more processors, such as one or more microprocessors, GPUs, application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , arithmetic logic units (ALUs) , digital signal processors (DSPs) , discrete logic, software, hardware, firmware, other equivalent integrated or discrete logic circuitry, or any combinations thereof. If the techniques are implemented partially in software, the processing unit 120 may store instructions for the software in a suitable, non-transitory computer-readable storage medium, e.g., internal memory 121, and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., may be considered to be one or more processors.
In some aspects, the content generation system 100 can include a communication interface 126. The communication interface 126 may include a receiver 128 and a transmitter 130. The receiver 128 may be configured to perform any receiving function described herein with respect to the device 104. Additionally, the receiver 128 may be configured to receive information, e.g., eye or head position information, rendering commands, or location information, from another device. The transmitter 130 may be configured to perform any transmitting function described herein with respect to the device 104. For example, the transmitter 130 may be configured to transmit information to another device, which may  include a request for content. The receiver 128 and the transmitter 130 may be combined into a transceiver 132. In such examples, the transceiver 132 may be configured to perform any receiving function and/or transmitting function described herein with respect to the device 104.
In some examples, the graphical content from the processing unit 120 for display via the display client 131 is not static and may be changing. Accordingly, the display processor 127 may periodically refresh the graphical content displayed via the display client 131. For example, the display processor 127 may periodically retrieve graphical content from the system memory 124, where the graphical content may have been updated by the execution of an application (and/or the processing unit 120) that outputs the graphical content to the system memory 124.
It should be appreciated that while shown as separate components in FIG. 1, in some examples, the display client 131 (sometimes referred to as a “display panel” ) may include the display processor 127.
Referring again to FIG. 1, in certain aspects, the processing unit 120 may include an edge portion smoothing component 198 to facilitate the smoothing of the presentment of the edge portion of image data. For example, the edge portion smoothing component 198 may be configured to obtain a first screen mask associated with a display, the first screen mask defining a first visible area of the display. The edge portion smoothing component 198 may also be configured to obtain a second screen mask associated with the display, the second screen mask defining a second visible area of the display, the first visible area being different than the second visible area. Additionally, the edge portion smoothing component 198 may be configured to transmit image packets to the display for displaying of image data by the display, the image packets corresponding to image data for a sequence of frames, and each image packet based on image data for a respective frame and based on alternating of the first screen mask and the second screen mask.
In some examples, a shape of the second visible area may correspond to a shape of the first visible area. In some examples, the first screen mask may include an inner portion, an edge portion, and an outer portion, and the second screen mask may include an inner portion, an edge portion, and an outer portion. In some such examples, locations corresponding to the first screen mask inner portion may be the same as locations corresponding to the second screen  mask inner portion, and locations corresponding to the first screen mask outer portion may be the same as locations corresponding to the second screen mask outer portion.
The edge portion smoothing component 198 may also be configured to divide locations corresponding to the first screen mask edge portion into a first set of locations and a second set of locations. Also, the edge portion smoothing component 198 may be configured to assign a first value to the locations corresponding to the first set of locations. The edge portion smoothing component 198 may also be configured to assign a second value to the locations corresponding to the second set of locations. Further, the edge portion smoothing component 198 may be configured to assign the second value to locations corresponding to the second screen mask edge portion that correspond to the first set of locations of the first screen mask edge portion. The edge portion smoothing component 198 may also be configured to assign the first value to locations corresponding to the second screen mask edge portion that correspond to the second set of locations of the first screen mask edge portion. Additionally, the edge portion smoothing component 198 may be configured to assign the first value to the locations corresponding to the first screen mask inner portion and the locations corresponding to the second screen mask inner portion. The edge portion smoothing component 198 may also be configured to assign the second value to the locations corresponding to the first screen mask outer portion and the locations corresponding to the second screen mask outer portion. In some examples, the first value may indicate a visible area and the second value may indicate a non-visible area. In some examples, the locations corresponding to the first set of locations and the locations corresponding to the second set of locations may be randomly selected. In some examples, a first quantity may correspond to the locations corresponding to the first set of locations and a second quantity may correspond to the locations corresponding to the second set of locations. In some such examples, the first quantity may be within a threshold quantity of the second quantity. In some examples, a quantity of locations corresponding to the first set of locations may be randomly selected. In some examples, the transmitted image packets may exclude image data for locations corresponding to respective non-visible areas of the first screen mask and the second screen mask.
As described herein, a device, such as the device 104, may refer to any device, apparatus, or system configured to perform one or more techniques described herein. For example, a device may be a server, a base station, user equipment, a client device, a station, an access point, a  computer, e.g., a personal computer, a desktop computer, a laptop computer, a tablet computer, a computer workstation, or a mainframe computer, an end product, an apparatus, a phone, a smart phone, a server, a video game platform or console, a handheld device, e.g., a portable video game device or a personal digital assistant (PDA) , a wearable computing device, e.g., a smart watch, an augmented reality device, or a virtual reality device, a non-wearable device, a display or display device, a television, a television set-top box, an intermediate network device, a digital media player, a video streaming device, a content streaming device, an in-car computer, any mobile device, any device configured to generate graphical content, or any device configured to perform one or more techniques described herein. Processes herein may be described as performed by a particular component (e.g., a GPU) , but, in further embodiments, can be performed using other components (e.g., a CPU) , consistent with disclosed embodiments.
FIG. 2A illustrates example image data 205 for a frame. In accordance with the MIPI DSI (Mobile Industry Processor Interface, Display Serial Interface) protocol, the shape of the image data 205 corresponds to a rectangular-shaped image. FIG. 2A also illustrates a perimeter 210 of an irregularly-shaped display that overlaps with the image data 205. In the illustrated example of FIG. 2A, the irregularly-shaped display is circular. However, it should be appreciated that the irregularly-shaped display may be any non-rectangular-shaped display, including, for example, a generally rectangular-shaped display that includes a cutout portion (or notch) (e.g., for a camera(s), for a microphone(s), for a speaker(s), etc.).
As shown in FIG. 2A, the shape and size of the irregularly-shaped display results in a visible area 215 and a non-visible area 220. In the illustrated example, the visible area 215 corresponds to portions of the image data 205 that are at or within the perimeter 210. That is, the visible area 215 corresponds to the portions of the image data 205 that are visible when the image data 205 is transmitted for presentment via the irregularly-shaped display. In the illustrated example, the non-visible area 220 corresponds to portions of the image data 205 that are outside the perimeter 210. That is, the non-visible area 220 corresponds to the portions of the image data 205 that are not visible when the image data 205 is transmitted for presentment via the irregularly-shaped display.
As disclosed above, to alleviate the cost of transmitting information to the display client that may not be useful, example techniques disclosed herein use a screen mask to define the one or  more visible area (s) and the one or more non-visible area (s) of the display client. FIG. 2B illustrates an example screen mask 250 that corresponds to the irregularly-shaped display and the perimeter 210 of the irregularly-shaped display. For example, the screen mask 250 includes locations 255 that correspond to pixel elements of the irregularly-shaped display.
It should be appreciated that a first set of locations 255 of the screen mask 250 may be completely within the perimeter 210, a second set of locations 255 of the screen mask 250 may be completely outside the perimeter 210, and a third set of locations 255 of the screen mask 250 may correspond to those locations that overlap with the perimeter 210. For example, locations 255a of the screen mask 250 are those locations that are completely within the perimeter 210 and correspond to the visible area 215 (sometimes referred to herein as the "inner portion" of the screen mask 250). In the illustrated example of FIG. 2B, locations 255b of the screen mask 250 are those locations that are completely outside the perimeter 210 and correspond to the non-visible area 220 (sometimes referred to herein as the "outer portion" of the screen mask 250). The example screen mask 250 also includes locations 255c that overlap with the perimeter 210 (sometimes referred to herein as the "edge portion" of the screen mask 250).
In the illustrated example of FIG. 2B, each of the locations 255 of the screen mask 250 is associated with a value based on whether the respective location corresponds to the visible area or the non-visible area. For example, the locations 255a of the inner portion of the screen mask 250 may be assigned a first value (e.g., "1") to indicate that those locations of image data correspond to the visible area (e.g., the visible area 215 of FIG. 2A). Additionally, the locations 255b of the outer portion of the screen mask 250 may be assigned a second value (e.g., "0") to indicate that those locations of image data correspond to the non-visible area (e.g., the non-visible area 220 of FIG. 2A).
In the illustrated example of FIG. 2B, the locations 255c of the edge portion of the screen mask 250 are assigned the first value (e.g., “1” ) to indicate that those values correspond to the visible area (e.g., the visible area 215 of FIG. 2A) .
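For illustration only, the classification and value assignment of FIG. 2B may be sketched in Python as follows; this sketch is not part of the disclosure, and the unit-square location model, the grid dimensions, and names such as `classify_location` and `build_single_mask` are assumptions. Edge-portion locations are assigned the first value, as described above:

```python
import math

def classify_location(x, y, cx, cy, r):
    """Classify a unit-square location against a circular perimeter.

    Returns "inner" if all four corners fall inside the circle,
    "outer" if all four fall outside, and "edge" if the square
    overlaps (straddles) the perimeter.
    """
    corners = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
    inside = [math.hypot(px - cx, py - cy) <= r for px, py in corners]
    if all(inside):
        return "inner"
    if not any(inside):
        return "outer"
    return "edge"

def build_single_mask(width, height, cx, cy, r):
    """Build a FIG. 2B-style single mask: inner and edge locations are
    assigned 1 (visible), outer locations are assigned 0 (non-visible)."""
    mask = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            kind = classify_location(x, y, cx, cy, r)
            mask[y][x] = 0 if kind == "outer" else 1
    return mask

if __name__ == "__main__":
    # Print a small circular mask to visualize the zig-zag edge.
    for row in build_single_mask(12, 12, cx=6.0, cy=6.0, r=5.5):
        print("".join(str(v) for v in row))
```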
As described above, the screen mask 250 may then be used to modify the image data 205 to, for example, reduce the load on a communication interface (e.g., a bus) and/or reduce the load of a processing unit (e.g., any processing unit configured to perform one or more techniques disclosed herein, such as a GPU, a DPU, and the like). In some examples, the processing unit 120 may map the screen mask 250 to the image data 205 and change the pixel information of the image data 205 that corresponds to the locations 255b of the outer portion of the screen mask 250. For example, the processing unit 120 may change the pixel information for the respective image data 205 to a black color pixel, may change the pixel information for the respective image data 205 to indicate that the corresponding pixel element should be off, etc. In some examples, the processing unit 120 may discard the pixel information for the respective image data 205 that corresponds to the locations 255b of the outer portion of the screen mask 250.
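A minimal sketch of such a mapping, assuming the image data and screen mask are represented as equally sized row-major grids; the representation and the name `apply_mask` are assumptions, not part of the disclosure:

```python
def apply_mask(image, mask, black=(0, 0, 0)):
    """Replace pixels at non-visible (value 0) mask locations with black,
    so the corresponding pixel elements can simply be driven off.

    `image` is a height x width grid of RGB tuples; `mask` is a
    height x width grid of 0/1 values with the same dimensions.
    """
    return [
        [pixel if keep else black for pixel, keep in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

# Example: a 2x2 image masked so only the top-left pixel remains visible.
image = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
mask = [[1, 0], [0, 0]]
assert apply_mask(image, mask)[0][0] == (255, 0, 0)
assert apply_mask(image, mask)[0][1] == (0, 0, 0)
```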
However, it should be appreciated that applying the screen mask 250 to the image data 205 may result in certain portions of the displayed image appearing unsmooth due to, for example, a zig-zag pattern. For example, FIG. 2C illustrates an example image frame 280. As shown in FIG. 2C, the image frame 280 may be generated by applying the screen mask 250 of FIG. 2B to the image data 205 of FIG. 2A. While some examples may use alpha blending on the image data to reduce the appearance of the unsmooth edge portions, it should be appreciated that performing alpha blending increases the resources used for displaying the image. For example, performing alpha blending uses power resources and processing resources of the apparatus.
FIG. 3 is a block diagram 300 illustrating the example processing unit 120 of FIG. 1, the example system memory 124 of FIG. 1, the example display processor 127 of FIG. 1, and the example display client 131 of FIG. 1. In the illustrated example of FIG. 3, the processing unit 120 includes the example edge portion smoothing component 198 of FIG. 1 and a screen mask generating component 305. The example display client 131 includes a display controller 310, a buffer 315, and a display 320. The example display 320 may include an irregular shape, such as a circular-shaped display, a display including a cutout or a notch (e.g., for a camera (s) , for a microphone (s) , for a speaker (s) , etc. ) , etc. The example display 320 includes a plurality of pixel elements for displaying image data.
In the illustrated example of FIG. 3, the processing unit 120 includes the example edge portion smoothing component 198 and the screen mask generating component 305. As described above, the example edge portion smoothing component 198 facilitates the smoothing of the presentment of the edge portion of image data. For example, instead of using one screen mask to define the visible area (s) and non-visible area (s) corresponding to the display 320, the edge  portion smoothing component 198 disclosed herein uses two screen masks for defining the visible area (s) and non-visible area (s) corresponding to the display 320. In some examples, each screen mask may include an inner portion, an outer portion, and an edge portion. In some such examples, the inner portion of each screen mask may correspond to the same locations and may have the same first value (e.g., a “1” to indicate that the respective location corresponds to a visible area) and the outer portion of each screen mask may correspond to the same locations and may have the same second value (e.g., a “0” to indicate that the respective location corresponds to a non-visible area) .
In some examples, the edge portion of each screen mask may correspond to the same locations. However, the values of the locations of each screen mask edge portion may be different. For example, each of the locations of the first screen mask edge portion may be randomly assigned the first value or the second value. However, it should be appreciated that other examples may use additional or alternative techniques for assigning the values of each of the locations of the first screen mask. In any case, the corresponding locations of the second screen mask edge portion may be assigned the opposite value as the value assigned for the first screen mask.
In the illustrated example of FIG. 3, a first screen mask 330a and a second screen mask 330b are stored in the system memory 124. In some examples, each of the screen masks 330 may be hard-coded screen masks that are stored in the system memory 124. For example, each of the screen masks 330 may be designed based on the specific irregular shape of the display 320.
In some examples, the screen masks 330 may be generated, for example, during run-time. The example screen mask generating component 305 facilitates the generating of the screen masks 330. In some examples, the screen mask generating component 305 may be part of an application space (sometimes referred to as a “user space” ) of the processing unit 120. The application space may include software application (s) and/or application framework (s) . For example, software application (s) may include operating systems, media applications, graphical applications, office suite applications, etc. Application framework (s) may include frameworks that may be used with one or more software applications, such as libraries, services (e.g., display services, input services, etc. ) , application program interfaces (APIs) , etc.
The screen mask generating component 305 may generate the respective screen masks 330 based on the shape of the display 320. For example, for a circular-shaped display, the screen mask generating component 305 may use one or more radii to determine the inner portion, the  outer portion, and the edge portion of the screen masks 330. For example, the screen mask generating component 305 may designate locations outside a first radius as the outer portion of the screen mask, may designate locations within a second radius as the inner portion of the screen mask, and may designate locations between the first radius and the second radius as the edge portion.
In some examples, the screen mask generating component 305 may use one radius to determine the portions of the screen masks 330. For example, the screen mask generating component 305 may designate locations that are completely outside a radius (e.g., locations that are outside the radius and do not overlap with the radius) as the outer portion of the screen mask, may designate locations that are completely within the radius (e.g., locations that are within the radius and do not overlap with the radius) as the inner portion of the screen mask, and may designate locations that overlap with the radius as the edge portion.
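For illustration, a sketch of the two-radius variant described above (the one-radius variant corresponds to the corner-overlap test shown earlier); the cell-center distance test and the name `classify_by_radii` are assumptions, not part of the disclosure:

```python
import math

def classify_by_radii(x, y, cx, cy, r_first, r_second):
    """Classify a location into a screen-mask portion using two radii:
    outside the first radius is the outer portion, within the second
    (smaller) radius is the inner portion, and the band between the
    two radii is the edge portion."""
    d = math.hypot(x + 0.5 - cx, y + 0.5 - cy)  # distance of the cell center
    if d > r_first:
        return "outer"
    if d < r_second:
        return "inner"
    return "edge"
```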
In some examples, after determining the locations of the screen mask that correspond to the respective portions of the screen mask, the screen mask generating component 305 may assign the locations associated with the inner portion of the screen masks 330 with a first value (e.g., a “1” indicating that the location corresponds to a visible area) , and may assign the locations associated with the outer portion of the screen masks 330 with a second value (e.g., a “0” indicating that the location corresponds to a non-visible area) . The example screen mask generating component 305 may then split the locations of the edge portion of the first screen mask 330a into two sets. In some examples, the screen mask generating component 305 may randomly split the locations into the two sets. In some examples, the screen mask generating component 305 may use a pattern to split the locations into the two sets. In some examples, the quantity of locations in each of the two sets may be the same and/or within a threshold quantity of each other. In some examples, the quantity of locations in each of the two sets may be randomly selected. For example, the quantity of locations in the first set may be randomly selected from a range between zero locations and a total quantity of locations of the edge portion, and the quantity of locations in the second set may be the remaining locations of the edge portion. However, it should be appreciated that other examples may use additional or alternative techniques for splitting the locations of the edge portion of the first screen mask 330a into the two sets.
In some examples, after determining the two sets of locations of the first screen mask 330a, the screen mask generating component 305 may assign a first value to the locations of the first one of the two sets and assign a second value to the locations of the second one of the two sets. The screen mask generating component 305 may then assign the opposite value to the corresponding locations of the second screen mask 330b. For example, if the screen mask generating component 305 assigns a location of the edge portion of the first screen mask 330a the first value (e.g., a "1" indicating that the location corresponds to a visible area), then the screen mask generating component 305 assigns the corresponding location of the edge portion of the second screen mask 330b the second value (e.g., a "0" indicating that the location corresponds to a non-visible area).
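Putting the above together, one possible sketch of generating the complementary pair; the list-based representation of the portions, the roughly equal random split, and the name `build_mask_pair` are assumptions, not the disclosed implementation:

```python
import random

def build_mask_pair(inner, edge, outer, width, height):
    """Build two complementary screen masks from per-portion location
    lists. Inner locations are visible (1) and outer locations are
    non-visible (0) in both masks; edge locations are split randomly
    into two roughly equal sets that receive opposite values in the
    two masks."""
    mask_a = [[0] * width for _ in range(height)]
    mask_b = [[0] * width for _ in range(height)]

    for x, y in inner:                    # same value in both masks
        mask_a[y][x] = mask_b[y][x] = 1
    for x, y in outer:                    # same value in both masks
        mask_a[y][x] = mask_b[y][x] = 0

    shuffled = list(edge)
    random.shuffle(shuffled)              # random selection of locations
    half = len(shuffled) // 2             # set sizes within one of each other
    for x, y in shuffled[:half]:          # first set: visible in A only
        mask_a[y][x] = 1
        mask_b[y][x] = 0
    for x, y in shuffled[half:]:          # second set: visible in B only
        mask_a[y][x] = 0
        mask_b[y][x] = 1
    return mask_a, mask_b
```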
Thus, it should be appreciated that the general shape of the first screen mask 330a and the second screen mask 330b is the same (e.g., circular) and that the corresponding locations of the inner portion and outer portion of the first screen mask 330a and the second screen mask 330b are also the same. It should also be appreciated that the quantity of locations that differ between the first screen mask 330a and the second screen mask 330b (e.g., the locations corresponding to the edge portion) may be relatively small compared to the overall quantity of locations of the screen masks 330.
In some examples, after each of the locations of the first screen mask 330a and the second screen mask 330b have been assigned a respective value, the screen mask generating component 305 may store the generated screen masks 330 in the system memory 124.
It should be appreciated that in some examples, the screen mask generating component 305 may generate the respective screen masks 330 based on a shape of the display 320 during run-time. For example, an application (e.g., operating in an application space) may set custom visible area(s) and non-visible area(s) of the display 320 that are different than the shape of the display 320. For example, an operating system (e.g., operating in the application space) may determine that, while the display client 131 is operating in an idle state or a low-power state, certain elements of an interface may be displayed, such as a current time, a date, a battery status indicator, etc. In some such examples, the screen mask generating component 305 may determine the visible area(s) and the non-visible area(s) of the display 320 and then determine the location and values to assign to each of the locations of the inner portions, the outer portions, and the edge portions of each screen mask 330.
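As a hypothetical run-time example (the rectangular status region and the name `build_idle_mask` are assumptions, not part of the disclosure), a mask whose only visible area is a small status region might be generated as follows:

```python
def build_idle_mask(width, height, region):
    """Build a run-time screen mask whose only visible area is a small
    rectangle `region` = (x0, y0, x1, y1), e.g., where a current time
    or battery status indicator is drawn during an idle state."""
    x0, y0, x1, y1 = region
    return [
        [1 if x0 <= x < x1 and y0 <= y < y1 else 0 for x in range(width)]
        for y in range(height)
    ]
```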
In the illustrated example of FIG. 3, the display processor 127 may be configured to operate functions of the display client 131. In some examples, the display processor 127 may perform post-processing of image data provided by the processing unit 120. The display processor 127 may be configured to cause the display client 131 to display image frames and to display the image frames based on one or more visible area (s) and/or one or more non-visible area (s) defined by screen masks. The display processor 127 may output image data to the display client 131 according to an interface protocol, such as, for example, the MIPI DSI (Mobile Industry Processor Interface, Display Serial Interface) .
In the illustrated example of FIG. 3, the display client 131 includes the display controller 310, the buffer 315, and the display 320. The display controller 310 may receive image data from the display processor 127 and store the received image data in the buffer 315. In some examples, the display controller 310 may output the image data stored in the buffer 315 to the display 320. Thus, the buffer 315 may represent a local memory to the display client 131. In some examples, the display controller 310 may output the image data received from the display processor 127 to the display 320.
Furthermore, as disclosed above, the display client 131 may be configured in accordance with MIPI DSI standards. The MIPI DSI standard supports a video mode and a command mode. In examples where the display client 131 is operating in video mode, the display processor 127 may continuously refresh the graphical content of the display client 131. For example, the entire graphical content may be refreshed per refresh cycle (e.g., line-by-line) .
In examples where the display client 131 is operating in command mode, the display processor 127 may write the graphical content of a frame to the buffer 315. In some such examples, the display processor 127 may not continuously refresh the graphical content of the display client 131. Instead, the display processor 127 may use a vertical synchronization (Vsync) pulse to coordinate rendering and consuming of graphical content at the buffer 315. For example, when a Vsync pulse is generated, the display processor 127 may output new graphical content to the buffer 315. Thus, the generating of the Vsync pulse may indicate when current graphical content at the buffer 315 has been rendered.
In operation, prior to transmitting image data for presentment via the display client 131, the edge portion smoothing component 198 may obtain a screen mask from the system memory 124 and transmit the obtained screen mask to the display client 131 (e.g., via the display processor 127). For example, the edge portion smoothing component 198 may obtain the first screen mask 330a from the system memory 124 and transmit the first screen mask 330a to the display client 131. The edge portion smoothing component 198 may then obtain image data for an image frame and transmit the image data for presentment via the display client 131. The example edge portion smoothing component 198 may then alternate the obtaining and transmitting of the first and second screen masks 330 prior to the transmitting of image data for a sequence of frames.
In some examples, the edge portion smoothing component 198 may modify the obtained image data based on a respective screen mask prior to transmitting the image data for presentment. For example, prior to transmitting image data for a first image frame, the edge portion smoothing component 198 may modify the image data based on the first screen mask 330a obtained prior to the obtaining of the image data for the first image frame. In some examples, the edge portion smoothing component 198 may modify the image data based on the value of the screen mask at the respective locations. For example, the edge portion smoothing component 198 may change the image data corresponding to the locations associated with the second value (e.g., a “0” indicating that the respective location corresponds to a non-visible area) to a black value or a null value. In some examples, the edge portion smoothing component 198 may discard portions of the image data for the image frame that correspond to the second value. In some examples, by changing the image data (e.g., to a black value or a null value) or by discarding portions of the image data, the quantity of information being transmitted from the processing unit 120 for presentment may be reduced, thereby reducing the load on a communication interface and/or reducing the load of the processing unit 120.
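The alternating sequence described above might be sketched as follows; the `send` callback is a stand-in for the display interface, and the in-place blackening mirrors the modification described above. Both are assumptions rather than the disclosed implementation:

```python
def present_sequence(frames, mask_a, mask_b, send):
    """Alternate two screen masks across a sequence of frames: for
    each frame, transmit the mask, then modify the frame's image data
    with that mask and transmit the result."""
    masks = (mask_a, mask_b)
    for index, frame in enumerate(frames):
        mask = masks[index % 2]       # mask A, mask B, mask A, ...
        send("mask", mask)            # e.g., before the Vsync pulse (FIG. 5)
        modified = [
            [pixel if keep else (0, 0, 0)
             for pixel, keep in zip(img_row, mask_row)]
            for img_row, mask_row in zip(frame, mask)
        ]
        send("image", modified)       # e.g., after the Vsync pulse (FIG. 5)
```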
In some examples, the processing unit 120 may transmit the image data without modifying the image data prior to transmittal. It should be appreciated that in some such examples, the display client 131 may then perform the applying of the screen mask to the image data. For example, the display controller 310 may modify the image data based on a screen mask. For example, the display controller 310 may turn on or turn off certain pixel elements of the display 320 based on the values of the screen mask.
It should be appreciated that in some examples, the screen masks 330 may be stored locally at the display client 131. For example, the screen masks 330 may be hard-coded screen masks that are stored in a local memory of the display client 131 (e.g., the buffer 315) .
In some such examples, the display client 131 may alternate which screen mask 330 to apply to received image data. For example, for the presentment of first image data, the display client 131 may obtain the first screen mask 330a from a local memory (e.g., the example buffer 315) , for the presentment of second image data, the display client 131 may obtain the second screen mask 330b from the local memory, etc. In this manner, the example display client may be able to apply screen masks 330 without waiting for transmission of the respective screen masks 330 from the processing unit 120.
It should be appreciated that in some examples, the display client 131 may include hard-coded screen masks, but may also receive (e.g., periodically, aperiodically, or as a one-time event) screen masks from the processing unit 120. For example, for a particular sequence of frames, the processing unit 120 may determine that one or two screen masks different from those hard-coded at the display client 131 are to be used. For example, while the display client 131 may include two hard-coded screen masks that are based on the shape of the display 320, the processing unit 120 may determine, during run-time, to use different screen masks, for example, based on an operating state of the display client 131. For example, while the display client 131 is operating in an idle state or a low-power state, the processing unit 120 may determine to use screen masks that include relatively smaller visible area(s) for the presentment of limited information (e.g., a current time, a date, a battery status indicator, etc.). In some such examples, the processing unit 120 may first signal to the display client 131 that the processing unit 120 is transmitting screen masks for use by the display client 131 for certain image data prior to the transmitting of the respective screen masks.
In some examples, the display client 131 may process information as received and determine whether to apply a hard-coded screen mask or to apply a screen mask provided by the processing unit 120. For example, the display client 131 may receive information from the processing unit 120 and determine whether the information corresponds to a screen mask or to image data. In some such examples, if the received information corresponds to a screen mask, then the display client 131 may use the screen mask for the subsequently received image data. For example, the display client 131 may turn on or turn off certain pixel elements of the display 320 based on the received screen mask. In some examples, the display client 131 may temporarily store the received screen mask in a local memory (e.g., in the buffer 315) .
In some examples, if the received information corresponds to image data, then the display client 131 may determine whether a screen mask was received prior to the receipt of the image data. For example, if the display client 131 determines that a screen mask was received prior to the receipt of the image data, then the display client 131 may apply the received screen mask to the received image data. In some examples, if the display client 131 determines that a screen mask was not received prior to the receipt of the image data, then the display client 131 may obtain a locally stored screen mask (e.g., a hard-coded screen mask or a screen mask previously provided to the display client 131 by the processing unit 120 and stored in the buffer 315) for applying to the received image data.
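One way to sketch this decision logic is shown below; the packet tagging scheme and class structure are assumptions, as the disclosure states only that the client determines whether received information is a screen mask or image data:

```python
class DisplayClient:
    """Sketch of a display client that applies a mask received for the
    current frame when one arrived, and otherwise falls back to a
    locally stored (e.g., hard-coded) screen mask."""

    def __init__(self, hard_coded_mask):
        self.stored_mask = hard_coded_mask  # local memory, e.g., a buffer
        self.pending_mask = None            # mask received for the next frame

    def receive(self, kind, payload):
        if kind == "mask":
            self.pending_mask = payload     # use for subsequent image data
            self.stored_mask = payload      # optionally retain a local copy
            return None
        # Otherwise the payload is image data: apply the mask received for
        # this frame if any, else fall back to the locally stored mask.
        mask = self.pending_mask if self.pending_mask is not None else self.stored_mask
        self.pending_mask = None
        return self.present(payload, mask)

    def present(self, image, mask):
        """Return the pixel elements to drive: lit only where the mask
        value is 1; elements at non-visible locations stay off."""
        return [
            (x, y, pixel)
            for y, (img_row, mask_row) in enumerate(zip(image, mask))
            for x, (pixel, keep) in enumerate(zip(img_row, mask_row))
            if keep
        ]
```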
As described above, in some examples, the processing unit 120 may modify image data prior to transmitting the image data to the display client 131. In some such examples, the display client 131 may use a screen mask to decode the received image data. For example, the processing unit 120 may discard portions of the image data based on a screen mask. In some such examples, the display client 131 may use the same screen mask (either received from the processing unit or obtained from local memory) to determine to which locations of the display 320 the received image data corresponds. For example, the processing unit 120 may apply a screen mask that reduces the image data transmitted to the display client 131 from information for 100 pixels to information for ten pixels. In some such examples, the display client 131 may use the screen mask to determine to which ten pixels the received image data corresponds and then proceed with the presentment of the image data.
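A sketch of the corresponding encode/decode, mirroring the 100-pixels-to-ten-pixels example above; the row-major ordering and the names `compact` and `expand` are assumptions:

```python
def compact(image, mask):
    """Host side: keep only the pixels at visible (value 1) mask
    locations, in row-major order, reducing the transmitted data."""
    return [
        pixel
        for img_row, mask_row in zip(image, mask)
        for pixel, keep in zip(img_row, mask_row)
        if keep
    ]

def expand(pixels, mask, fill=(0, 0, 0)):
    """Client side: use the same mask to map the compacted pixel
    stream back onto display locations; non-visible locations are
    filled (e.g., driven off)."""
    stream = iter(pixels)
    return [
        [next(stream) if keep else fill for keep in mask_row]
        for mask_row in mask
    ]
```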
FIG. 4 illustrates an example first screen mask 400 and an example second screen mask 450. Aspects of the first screen mask 400 may be implemented by the first screen mask 330a of FIG. 3. Aspects of the second screen mask 450 may be implemented by the second screen mask 330b of FIG. 3.
In the illustrated example of FIG. 4, the first screen mask 400 includes locations 405 that correspond to the outer portion of the first screen mask 400, includes locations 410 that correspond to the inner portion of the first screen mask 400, and includes locations 415 that correspond to the edge portion of the first screen mask 400. Similarly, the example second screen mask 450 includes locations 455 that correspond to the outer portion of the second screen mask 450, includes locations 460 that correspond to the inner portion of the second  screen mask 450, and includes locations 465 that correspond to the edge portion of the second screen mask 450.
As shown in FIG. 4, the  locations  405, 455 corresponding to the outer portions of the screen masks 400, 450, respectively, are assigned the second value (e.g., a “0” to indicate that the respective locations correspond to a non-visible area) , and the  locations  410, 460 corresponding to the inner portions of the screen masks 400, 450, respectively, are assigned the first value (e.g., a “1” to indicate that the respective locations correspond to a visible area) .
In the illustrated example of FIG. 4, the locations 415 corresponding to the edge portion of the first screen mask 400 are divided into two  sets  415a, 415b. For example, the locations 415 associated with the first set 415a are assigned the first value (e.g., a “1” ) and the locations 415 associated with the second set 415b are assigned the second value (e.g., a “0” ) .
In the illustrated example of FIG. 4, the locations 465 corresponding to the edge portion of the second screen mask 450 are assigned the opposite value to that assigned to the corresponding location of the first screen mask 400. For example, locations of the second screen mask 450 that correspond to locations of the first set of locations 415a of the first screen mask 400 are assigned the second value (e.g., a "0") and the locations of the second screen mask 450 that correspond to locations of the second set of locations 415b of the first screen mask 400 are assigned the first value (e.g., a "1").
As shown in FIG. 4, the values of the locations 415, 465 of the edge portions of the screen masks 400, 450, respectively, are assigned in an alternating pattern. However, it should be appreciated that in other examples, additional or alternative techniques for assigning the values to the locations 415, 465 may be used. For example, the locations 415 selected for the first set of locations 415a may be randomly selected. In some examples, the quantity of locations included in the first set of locations 415a and the quantity of locations included in the second set of locations 415b may be the same or within a threshold quantity (e.g., within 1, 2, 3, etc. locations). In some examples, the quantity of locations included in the first set of locations 415a and the quantity of locations included in the second set of locations 415b may be random and, thus, the respective quantities may not be within any threshold quantity.
FIG. 5 illustrates an example timing diagram 500, in accordance with one or more techniques of this disclosure. More specifically, FIG. 5 displays a timing diagram 500 of the vertical synchronization (Vsync) pulses 505 for the display client 131 of FIGs. 1 and/or 3. FIG. 5 also  shows a transmission sequence 550 of screen masks 555 and image data 560 for presentment by the display client 131. For example, the transmission sequence 550 may correspond to transmissions by the processing unit 120 of FIGs. 1 and/or 3 to the display client 131.
As shown in FIG. 5, the screen masks 555 are transmitted prior to the occurrence of the respective Vsync pulses 505 and the image data 560 are transmitted after the occurrence of the respective Vsync pulses 505. For example, prior to a start of a first Vsync pulse 505a, the processing unit 120 may transmit a first screen mask 555a, and after the start of the first Vsync pulse 505a, the processing unit 120 may transmit first image data 560a for presentment via the display client 131. The processing unit 120 may then transmit a second screen mask 555b prior to a start of a second Vsync pulse 505b and transmit second image data 560b for presentment via the display client 131 after the start of the second Vsync pulse 505b. Similarly, prior to a start of a third Vsync pulse 505c, the processing unit 120 may transmit a third screen mask 555c, and after the start of the third Vsync pulse 505c, the processing unit 120 may transmit third image data 560c for presentment via the display client 131. The processing unit 120 may then transmit a fourth screen mask 555d prior to a start of a fourth Vsync pulse 505d and transmit fourth image data 560d for presentment via the display client 131 after the start of the fourth Vsync pulse 505d.
In the illustrated example of FIG. 5, the first screen mask 555a and the third screen mask 555c are the same screen mask (e.g., a “mask A, ” which may be implemented by the first screen mask 330a of FIG. 3 and/or the first screen mask 400 of FIG. 4) , and the second screen mask 555b and the fourth screen mask 555d are the same screen mask (e.g., a “mask B, ” which may be implemented by the second screen mask 330b of FIG. 3 and/or the second screen mask 450 of FIG. 4) . Thus, it should be appreciated that the processing unit 120 is alternating the transmitting of the screen masks 555 to the display client 131. Furthermore, it should be appreciated that the screen masks 555 are used to modify the following image data 560 and, thus, the respective edge portions of the presentment of the image data 560 may be different, as disclosed herein. More specifically, the alternating use of the screen masks 555 may result in different edges of the image data 560 when presented via the display client 131. However, as shown in FIG. 5, as the transmitting of the image data 560 is synchronized with the occurrences of the Vsync pulses 505 and because of the delay in visual perception of the human  eye, the edges of the image data 560, when presented via the display client 131, appear to be generally smooth.
Although the illustrated example of FIG. 5 illustrates the transmission sequence 550 corresponding to the transmissions from, for example, the processing unit 120 to the display client 131, it should be appreciated that in some examples, the sequence of events of the transmission sequence 550 may correspond to a sequence of events performed by the display client 131. For example, as disclosed above, in some examples, the screen masks 555 may be hard-coded at the display client 131. In some such examples, the display client 131 may alternate which screen mask 555 to apply to the received image data 560. For example, for the presentment of the first image data 560a, the display client 131 may obtain the first screen mask 555a from a local memory (e.g., the example buffer 315 of FIG. 3) , for the presentment of the second image data 560b, the display client 131 may obtain the second screen mask 555b from the local memory, etc. In this manner, the example display client 131 is able to apply screen masks 555 without waiting for transmission of the respective screen masks 555 from the processing unit 120.
It should be appreciated that in some examples, the display client 131 may include hard-coded screen masks, but may also receive (e.g., periodically, aperiodically, or as a one-time event) screen masks from the processing unit 120. For example, for a particular sequence of frames, the processing unit 120 may determine that one or two different screen masks are to be used than what is hard-coded at the display client 131. For example, an application may define a visible area that is different than the visible areas defined by the hard-coded screen masks. In some such examples, the processing unit 120 may first signal to the display client 131 that the processing unit 120 is transmitting screen masks for use by the display client 131 for certain image data prior to the transmitting of the respective screen masks.
In some examples, the display client 131 may process information as received and determine whether to apply a hard-coded screen mask or to apply a screen mask provided by the processing unit 120. For example, the display client 131 may receive information from the processing unit 120 and determine whether the information corresponds to a screen mask or to image data. In some such examples, if the received information corresponds to a screen mask, then the display client 131 may use the screen mask for the subsequently received image data. For example, the display client 131 may turn on or off certain pixel elements of the display  320 based on the received screen mask. In some examples, the display client 131 may temporarily store the received screen mask in a local memory (e.g., in the buffer 315 of FIG. 3) .
In some examples, if the received information corresponds to image data, then the display client 131 may determine whether a screen mask was received prior to the receipt of the image data. For example, if the display client 131 determines that a screen mask was received prior to the receipt of the image data, then the display client 131 may apply the received screen mask to the received image data. In some examples, if the display client 131 determines that a screen mask was not received prior to the receipt of the image data, then the display client 131 may obtain a locally stored screen mask (e.g., a hard-coded screen mask or a screen mask previously provided to the display client 131 by the processing unit 120) for applying to the received image data.
FIG. 6 illustrates an example flowchart 600 of an example method in accordance with one or more techniques of this disclosure. The method may be performed by an apparatus, such as the device 104 of FIG. 1, the processing unit 120 of FIGs. 1 and/or 3, the display processor 127 of FIGs. 1 and/or 3, a DPU, a GPU, a video processor, and/or a component of the processing unit 120.
At 602, the apparatus may obtain a first screen mask associated with a display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may obtain the first screen mask 330a from the system memory 124 of FIGs. 1 and/or 3. In some examples, the first screen mask 330a may define one or more visible area (s) of the display 320 and may also define one or more non-visible area (s) of the display 320.
At 604, the apparatus may transmit the first mask to the display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may transmit the first screen mask 330a to the display client 131 (e.g., via the display processor 127) .
At 606, the apparatus may generate an image packet based on the first screen mask and first image data of a sequence of frames for presentment via the display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may generate an image packet for first image data by modifying the first image data based on the  first screen mask 330a. For example, the processing unit 120 may modify pixel information of the image data, discard portions of the image data, etc. to generate the image packet.
At 608, the apparatus may transmit the generated image packet to the display for presentment, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may transmit the modified image data to the display client 131 (e.g., via the display processor 127) .
At 610, the apparatus may obtain a second screen mask associated with the display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may obtain the second screen mask 330b from the system memory 124. In some examples, the second screen mask 330b may define one or more visible area(s) of the display 320 and/or one or more non-visible area(s) of the display 320 that are different than the visible area(s) and/or non-visible area(s) of the display 320 defined by the first screen mask 330a. In some examples, while the first screen mask 330a and the second screen mask 330b define one or more different visible area(s) and/or non-visible area(s) of the display 320, it should be appreciated that the general shape of the first and second screen masks 330 is the same. For example, as shown in FIG. 4, the shape of the visible area of the first screen mask 400 is circular and the shape of the visible area of the second screen mask 450 is also circular. In some examples, the first screen mask and the second screen mask may be hard-coded screen masks. In some examples, the first screen mask and the second screen mask may be generated during run-time (e.g., by the screen mask generating component 305 of FIG. 3).
At 612, the apparatus may transmit the second mask to the display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may transmit the second screen mask 330b to the display client 131 (e.g., via the display processor 127) .
At 614, the apparatus may generate an image packet based on the second screen mask and second image data of the sequence of frames for presentment via the display, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may generate an image packet for second image data by modifying the second image data based on the second screen mask 330b. For example, the processing unit 120 may modify pixel information of the image data, discard portions of the image data, etc. to generate the image packet.
At 616, the apparatus may transmit the generated image packet to the display for presentment, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may transmit the modified image data to the display client 131 (e.g., via the display processor 127).
In some examples, control may then return to 602 to obtain the first screen mask to transmit to the display and for generating another image packet for transmitting to the display. In this manner, the apparatus alternates the transmitting of the first and second screen masks 330 to the display client 131 and also alternates the edge portions of the image data transmitted for presentment by the display client 131. In some such examples, the alternating of the edge portions of the presented image data, along with the delay in visual perception by the human eye and the relatively fast display refresh rate associated with the display 320 (e.g., 60fps, 90fps, 120fps, etc. ) , may result in edge portions that appear generally smooth when viewed by the human eye.
FIG. 7 illustrates an example flowchart 700 of an example method in accordance with one or more techniques of this disclosure. The method may be performed by an apparatus, such as the device 104 of FIG. 1, the processing unit 120 of FIGs. 1 and/or 3, the display processor 127 of FIGs. 1 and/or 3, a DPU, a GPU, a video processor, and/or a component of the processing unit 120.
At 702, the apparatus may identify locations of a first screen mask corresponding to an inner portion, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may identify the locations 410 of the first screen mask 400. In some examples, the locations corresponding to the inner portion may be locations of the screen mask that are completely within a radius (e.g., within the perimeter 210 of FIG. 2A), that are within a second radius, etc.
At 704, the apparatus may assign the locations corresponding to the inner portion of the first screen mask and a second screen mask with a first value, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may assign the locations 410 of the first screen mask 400 and the locations 460 of the second screen mask 450 with the first value (e.g., a “1” ) .
At 706, the apparatus may identify locations of the first screen mask corresponding to an outer portion, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may identify the locations 405 of the first screen mask 400. In some examples, the locations corresponding to the outer portion may be locations of the screen mask that are completely outside of a radius (e.g., outside the perimeter 210 of FIG. 2A), that are outside a first radius, etc.
At 708, the apparatus may assign the locations corresponding to the outer portion of the first screen mask and the second screen mask with a second value, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may assign the locations 405 of the first screen mask 400 and the locations 455 of the second screen mask 450 with the second value (e.g., a “0” ) .
At 710, the apparatus may identify locations of the first screen mask corresponding to an edge portion, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may identify the locations 415 of the first screen mask 400. In some examples, the locations corresponding to the edge portion may be locations of the screen mask that overlap with a radius (e.g., that overlap with the perimeter 210 of FIG. 2A), that are between a first radius and a second radius, etc.
At 712, the apparatus may divide the locations corresponding to the edge portion into a first set and a second set, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may divide the locations 415 into a first set 415a and a second set 415b. In some examples, the processing unit 120 may randomly select the locations for the  respective sets  415a, 415b. In some examples, the processing unit 120 may use an algorithm to divide the locations into the first set 415a and the second set 415b. In some examples, the quantity of locations within the first set 415a and the second set 415b may be the same or within a threshold quantity. In some examples, the quantity of locations within the first set 415a and the second set 415b may be randomly selected.
At 714, the apparatus may assign the locations of the first set of the first screen mask with the first value, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may assign the locations of the first set 415a of the first screen mask 400 with the first value (e.g., a “1” ) .
At 716, the apparatus may assign the locations of the second set of the first screen mask with the second value, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or  5. For example, the processing unit 120 may assign the locations of the second set 415b of the first screen mask 400 with the second value (e.g., a “0” ) .
At 718, the apparatus may assign the locations of the second screen mask corresponding to the first set and the second set with the opposite values as assigned in the first screen mask, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the processing unit 120 may assign the locations 465a of the second screen mask 450 that correspond to the first set 415a of the first screen mask 400 with the second value (e.g., a "0"). The processing unit 120 may also assign the locations 465b of the second screen mask 450 that correspond to the second set 415b of the first screen mask 400 with the first value (e.g., a "1").
FIG. 8 illustrates an example flowchart 800 of an example method in accordance with one or more techniques of this disclosure. The method may be performed by an apparatus, such as the device 104 of FIG. 1, the display client 131 of FIGs. 1 and/or 3, a DPU, a GPU, a video processor, and/or a component of the display client 131.
At 802, the apparatus may receive information corresponding to a screen mask or to image data, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the display client 131 may receive information (e.g., an information packet) from the processing unit 120 (e.g., via the display processor 127) .
At 804, the apparatus may determine whether the received information corresponds to a screen mask or to image data, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5.
If, at 804, the apparatus determines that the information corresponds to a screen mask, then, at 806, the apparatus may modify the display based on the screen mask, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the display client 131 may turn on or turn off pixels of the display 320 based on the screen mask.
At 808, the apparatus may receive image data, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the display client 131 may receive image data after the Vsync pulse 505 of FIG. 5.
At 810, the apparatus may display the image data based on the screen mask, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the display client 131 may cause the display 320 to present the image data.
If, at 804, the apparatus determines that the information corresponds to image data, then, at 812, the apparatus may determine whether a screen mask associated with the presentment of the image data was received, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the display client 131 may have received the screen mask 555 prior to the start of the Vsync pulse 505 of FIG. 5.
If, at 812, the apparatus determines that the screen mask associated with the presentment of the image data was received, then control proceeds to 816 to modify the display based on the screen mask.
If, at 812, the apparatus determines that the screen mask associated with the presentment of the image data was not received, then, at 814, the apparatus may obtain a screen mask from local memory, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the display client 131 may obtain a screen mask from the buffer 315 of FIG. 3. In some examples, the screen mask obtained from the local memory may be a hard-coded screen mask. In some examples, the screen mask obtained from the local memory may be a screen mask that was previously provided to the display client 131 and stored by the display client 131 in the local memory.
At 816, the apparatus may modify the display based on the screen mask, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the display client 131 may turn on or turn off pixels of the display 320 based on the screen mask.
At 818, the apparatus may display the image data based on the screen mask, as described in connection with the examples of FIGs. 1, 2A, 2B, 3, 4, and/or 5. For example, the display client 131 may cause the display 320 to present the image data.
In one configuration, a method or apparatus for display processing is provided. The apparatus may be a processing unit, a display processor, a display processing unit (DPU), a graphics processing unit (GPU), a video processor, or some other processor that can perform display processing. In some examples, the apparatus may be the processing unit 120 within the device 104, or may be some other hardware within the device 104, or another device. The apparatus may include means for obtaining a first screen mask associated with a display, the first screen mask defining a first visible area of the display. The apparatus may also include means for obtaining a second screen mask associated with the display, the second screen mask defining a second visible area of the display, the first visible area being different than the second visible area. The apparatus may also include means for transmitting image packets to the display for displaying of image data by the display, the image packets corresponding to image data for a sequence of frames, and each image packet based on image data for a respective frame and based on alternating of the first screen mask and the second screen mask. The apparatus may include means for dividing locations corresponding to a first screen mask edge portion into a first set of locations and a second set of locations. The apparatus may also include means for assigning a first value to the locations corresponding to the first set of locations. The apparatus may also include means for assigning a second value to the locations corresponding to the second set of locations. The apparatus may further include means for assigning the second value to locations corresponding to the second screen mask edge portion that correspond to the first set of locations of the first screen mask edge portion. The apparatus may also include means for assigning the first value to locations corresponding to the second screen mask edge portion that correspond to the second set of locations of the first screen mask edge portion. Further, the apparatus may include means for assigning the first value to the locations corresponding to the first screen mask inner portion and the locations corresponding to the second screen mask inner portion. Also, the apparatus may include means for assigning the second value to the locations corresponding to the first screen mask outer portion and the locations corresponding to the second screen mask outer portion. The apparatus may also include means for randomly selecting the locations corresponding to the first set of locations and the locations corresponding to the second set of locations. The apparatus may also include means for randomly selecting a quantity of locations corresponding to the first set of locations. The apparatus may also include means for excluding image data for locations corresponding to respective non-visible areas of the first screen mask and the second screen mask when transmitting the image packets.
The subject matter described herein can be implemented to realize one or more benefits or advantages. For instance, the described display and/or graphics processing techniques can be used by a display processor, a display processing unit (DPU), a GPU, a video processor, or some other processor that can perform display processing to implement the edge portion smoothing techniques disclosed herein. This can also be accomplished at a low cost compared to other display or graphics processing techniques. Moreover, the display or graphics processing techniques herein can improve or speed up data processing or execution. Further, the display or graphics processing techniques herein can improve resource or data utilization and/or resource efficiency. For example, aspects of the present disclosure can reduce the load of communication interfaces and/or reduce the load of a processing unit.
In accordance with this disclosure, the term "or" may be interpreted as "and/or" where context does not dictate otherwise. Additionally, while phrases such as "one or more" or "at least one" or the like may have been used for some features disclosed herein but not others, the features for which such language was not used may be interpreted to have such a meaning implied where context does not dictate otherwise.
In one or more examples, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. For example, although the term "processing unit" has been used throughout this disclosure, such processing units may be implemented in hardware, software, firmware, or any combination thereof. If any function, processing unit, technique described herein, or other module is implemented in software, the function, processing unit, technique described herein, or other module may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media may include computer data storage media or communication media including any medium that facilitates transfer of a computer program from one place to another. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. A computer program product may include a computer-readable medium.
The code may be executed by one or more processors, such as one or more digital signal processors (DSPs) , general purpose microprocessors, application specific integrated circuits (ASICs) , arithmetic logic units (ALUs) , field programmable logic arrays (FPGAs) , or other  equivalent integrated or discrete logic circuitry. Accordingly, the term “processor, ” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs, e.g., a chip set. Various components, modules or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily need realization by different hardware units. Rather, as described above, various units may be combined in any hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.
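Continuing the illustrative sketch above (same assumed Python helpers and mask representation), the structural properties recited in the claims that follow — identical inner and outer portions across the two masks, complementary edge values, and first and second sets of locations whose quantities are within a threshold of one another — can be checked directly:

    def check_masks(mask_a, mask_b, inner, edge, outer, threshold=1):
        edge = list(edge)
        # Inner portion: the first value (visible) in both masks.
        assert all(mask_a[l] == 1 and mask_b[l] == 1 for l in inner)
        # Outer portion: the second value (non-visible) in both masks.
        assert all(mask_a[l] == 0 and mask_b[l] == 0 for l in outer)
        # Edge portion: each location takes opposite values in the two masks.
        assert all(mask_a[l] != mask_b[l] for l in edge)
        # The two edge sets are within a threshold quantity of each other.
        first_quantity = sum(mask_a[l] for l in edge)
        second_quantity = len(edge) - first_quantity
        assert abs(first_quantity - second_quantity) <= threshold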

Claims (30)

  1. A method of operation of a display processor, comprising:
    obtaining a first screen mask associated with a display, the first screen mask defining a first visible area of the display;
    obtaining a second screen mask associated with the display, the second screen mask defining a second visible area of the display, the first visible area being different than the second visible area; and
    transmitting image packets to the display for displaying of image data by the display, the image packets corresponding to image data for a sequence of frames, and each image packet based on image data for a respective frame and based on alternating of the first screen mask and the second screen mask.
  2. The method of claim 1, wherein a shape of the second visible area corresponds to a shape of the first visible area.
  3. The method of claim 1, wherein the first screen mask includes an inner portion, an edge portion, and an outer portion, the second screen mask includes an inner portion, an edge portion, and an outer portion, wherein locations corresponding to the first screen mask inner portion are the same as locations corresponding to the second screen mask inner portion, and wherein locations corresponding to the first screen mask outer portion are the same as locations corresponding to the second screen mask outer portion.
  4. The method of claim 3, further comprising:
    dividing locations corresponding to the first screen mask edge portion into a first set of locations and a second set of locations;
    assigning a first value to the locations corresponding to the first set of locations; and
    assigning a second value to the locations corresponding to the second set of locations.
  5. The method of claim 4, further comprising:
    assigning the second value to locations corresponding to the second screen mask edge portion that correspond to the first set of locations of the first screen mask edge portion; and
    assigning the first value to locations corresponding to the second screen mask edge portion that correspond to the second set of locations of the first screen mask edge portion.
  6. The method of claim 5, further comprising:
    assigning the first value to the locations corresponding to the first screen mask inner portion and the locations corresponding to the second screen mask inner portion; and
    assigning the second value to the locations corresponding to the first screen mask outer portion and the locations corresponding to the second screen mask outer portion.
  7. The method of claim 6, wherein the first value indicates a visible area and the second value indicates a non-visible area.
  8. The method of claim 4, wherein the locations corresponding to the first set of locations and the locations corresponding to the second set of locations are randomly selected.
  9. The method of claim 4, wherein a first quantity corresponds to the locations corresponding to the first set of locations and a second quantity corresponds to the locations corresponding to the second set of locations, and wherein the first quantity is within a threshold quantity of the second quantity.
  10. The method of claim 4, wherein a quantity of locations corresponding to the first set of locations is randomly selected.
  11. The method of claim 1, wherein the transmitted image packets exclude image data for locations corresponding to respective non-visible areas of the first screen mask and the second screen mask.
  12. An apparatus for display processing, comprising:
    a memory; and
    at least one processor coupled to the memory and configured to:
    obtain a first screen mask associated with a display, the first screen mask defining a first visible area of the display;
    obtain a second screen mask associated with the display, the second screen mask defining a second visible area of the display, the first visible area being different than the second visible area; and
    transmit image packets to the display for displaying of image data by the display, the image packets corresponding to image data for a sequence of frames, and each image packet based on image data for a respective frame and based on alternating of the first screen mask and the second screen mask.
  13. The apparatus of claim 12, wherein a shape of the second visible area corresponds to a shape of the first visible area.
  14. The apparatus of claim 12, wherein the first screen mask includes an inner portion, an edge portion, and an outer portion, the second screen mask includes an inner portion, an edge portion, and an outer portion, wherein locations corresponding to the first screen mask inner portion are the same as locations corresponding to the second screen mask inner portion, and wherein locations corresponding to the first screen mask outer portion are the same as locations corresponding to the second screen mask outer portion.
  15. The apparatus of claim 14, wherein the at least one processor is further configured to:
    divide locations corresponding to the first screen mask edge portion into a first set of locations and a second set of locations;
    assign a first value to the locations corresponding to the first set of locations; and
    assign a second value to the locations corresponding to the second set of locations.
  16. The apparatus of claim 15, wherein the at least one processor is further configured to:
    assign the second value to locations corresponding to the second screen mask edge portion that correspond to the first set of locations of the first screen mask edge portion; and
    assign the first value to locations corresponding to the second screen mask edge portion that correspond to the second set of locations of the first screen mask edge portion.
  17. The apparatus of claim 16, wherein the at least one processor is further configured to:
    assign the first value to the locations corresponding to the first screen mask inner portion and the locations corresponding to the second screen mask inner portion; and
    assign the second value to the locations corresponding to the first screen mask outer portion and the locations corresponding to the second screen mask outer portion.
  18. The apparatus of claim 17, wherein the first value indicates a visible area and the second value indicates a non-visible area.
  19. The apparatus of claim 15, wherein the at least one processor is further configured to:
    randomly select the locations corresponding to the first set of locations and the locations corresponding to the second set of locations.
  20. The apparatus of claim 15, wherein a first quantity corresponds to the locations corresponding to the first set of locations and a second quantity corresponds to the locations corresponding to the second set of locations, and wherein the first quantity is within a threshold quantity of the second quantity.
  21. The apparatus of claim 15, wherein the at least one processor is further configured to randomly select a quantity of locations corresponding to the first set of locations.
  22. The apparatus of claim 12, wherein the at least one processor is further configured to exclude image data for locations corresponding to respective non-visible areas of the first screen mask and the second screen mask when transmitting the image packets.
  23. The apparatus of claim 12, wherein the apparatus includes a wireless communication device.
  24. A computer-readable medium storing computer executable code for display processing, comprising code to:
    obtain a first screen mask associated with a display, the first screen mask defining a first visible area of the display;
    obtain a second screen mask associated with the display, the second screen mask defining a second visible area of the display, the first visible area being different than the second visible area; and
    transmit image packets to the display for displaying of image data by the display, the image packets corresponding to image data for a sequence of frames, and each image packet based on image data for a respective frame and based on alternating of the first screen mask and the second screen mask.
  25. The computer-readable medium of claim 24, wherein the first screen mask includes an inner portion, an edge portion, and an outer portion, the second screen mask includes an inner portion, an edge portion, and an outer portion, wherein locations corresponding to the first screen mask inner portion are the same as locations corresponding to the second screen mask inner portion, and wherein locations corresponding to the first screen mask outer portion are the same as locations corresponding to the second screen mask outer portion.
  26. The computer-readable medium of claim 25, wherein the code is further configured to:
    divide locations corresponding to the first screen mask edge portion into a first set of locations and a second set of locations;
    assign a first value to the locations corresponding to the first set of locations; and
    assign a second value to the locations corresponding to the second set of locations.
  27. The computer-readable medium of claim 26, wherein the code is further configured to:
    assign the second value to locations corresponding to the second screen mask edge portion that correspond to the first set of locations of the first screen mask edge portion; and
    assign the first value to locations corresponding to the second screen mask edge portion that correspond to the second set of locations of the first screen mask edge portion.
  28. The computer-readable medium of claim 27, wherein the first value indicates a visible area and the second value indicates a non-visible area.
  29. The computer-readable medium of claim 26, wherein the code is further configured to:
    randomly select the locations corresponding to the first set of locations and the locations corresponding to the second set of locations.
  30. The computer-readable medium of claim 26, wherein a first quantity corresponds to the locations corresponding to the first set of locations and a second quantity corresponds to the locations corresponding to the second set of locations, and wherein the first quantity is within a threshold quantity of the second quantity.
PCT/CN2019/121449 2019-11-28 2019-11-28 Methods and apparatus to smooth edge portions of an irregularly-shaped display WO2021102772A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/121449 WO2021102772A1 (en) 2019-11-28 2019-11-28 Methods and apparatus to smooth edge portions of an irregularly-shaped display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/121449 WO2021102772A1 (en) 2019-11-28 2019-11-28 Methods and apparatus to smooth edge portions of an irregularly-shaped display

Publications (1)

Publication Number Publication Date
WO2021102772A1 (en) 2021-06-03

Family

ID=76128710

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121449 WO2021102772A1 (en) 2019-11-28 2019-11-28 Methods and apparatus to smooth edge portions of an irregularly-shaped display

Country Status (1)

Country Link
WO (1) WO2021102772A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4703363A (en) * 1983-11-10 1987-10-27 Dainippon Screen Mfg. Co., Ltd. Apparatus for smoothing jagged border lines
CN101496065A (en) * 2006-08-03 2009-07-29 高通股份有限公司 Graphics system employing pixel mask
CN103997687A (en) * 2013-02-20 2014-08-20 英特尔公司 Techniques for adding interactive features to videos
CN105791798A (en) * 2016-03-03 2016-07-20 北京邮电大学 Method and device for converting 4K multi-viewpoint 3D video in real time based on GPU (Graphics Processing Unit)
CN106130984A (en) * 2016-06-29 2016-11-16 努比亚技术有限公司 Encrypted video sharing apparatus and method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674303A (en) * 2021-08-31 2021-11-19 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11164357B2 (en) In-flight adaptive foveated rendering
US20230073736A1 (en) Reduced display processing unit transfer time to compensate for delayed graphics processing unit render time
US11037271B2 (en) Dynamic rendering for foveated rendering
US10565689B1 (en) Dynamic rendering for foveated rendering
US20230335049A1 (en) Display panel fps switching
CN112740278B (en) Method and apparatus for graphics processing
WO2021102772A1 (en) Methods and apparatus to smooth edge portions of an irregularly-shaped display
US11847995B2 (en) Video data processing based on sampling rate
US20230074876A1 (en) Delaying dsi clock change based on frame update to provide smoother user interface experience
WO2023151067A1 (en) Display mask layer generation and runtime adjustment
US11705091B2 (en) Parallelization of GPU composition with DPU topology selection
WO2024087152A1 (en) Image processing for partial frame updates
WO2023230744A1 (en) Display driver thread run-time scheduling
WO2023141917A1 (en) Sequential flexible display shape resolution
US20240169953A1 (en) Display processing unit (dpu) pixel rate based on display region of interest (roi) geometry
WO2023225771A1 (en) Concurrent frame buffer composition scheme
WO2024044936A1 (en) Composition for layer roi processing
WO2024044934A1 (en) Visual quality optimization for gpu composition
WO2023065100A1 (en) Power optimizations for sequential frame animation
US10755666B2 (en) Content refresh on a display with hybrid refresh mode
WO2024020825A1 (en) Block searching procedure for motion estimation
WO2024055234A1 (en) Oled anti-aging regional compensation
WO2021042331A1 (en) Methods and apparatus for graphics and display pipeline management
WO2021072626A1 (en) Methods and apparatus to facilitate regional processing of images for under-display device displays

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19954033

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19954033

Country of ref document: EP

Kind code of ref document: A1