US20120154428A1 - Spatio-temporal color luminance dithering techniques - Google Patents

Spatio-temporal color luminance dithering techniques

Info

Publication number
US20120154428A1
Authority
US
United States
Prior art keywords
matrix
group
luminance
source image
msb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/970,543
Inventor
Ulrich T. Barnhoefer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apple Inc
Priority to US12/970,543
Assigned to APPLE INC. (Assignment of assignors interest; see document for details). Assignors: BARNHOEFER, ULRICH T., DR.
Priority to EP11192268A (published as EP2466575A3)
Priority to PCT/US2011/064478 (published as WO2012082649A2)
Priority to KR1020110135711A (published as KR101356334B1)
Priority to TW100146941A (published as TW201234868A)
Priority to CN201110421629.XA (published as CN102568436B)
Publication of US20120154428A1
Priority to KR1020130117251A (published as KR20130114632A)
Priority to US14/178,178 (published as US9552654B2)
Legal status: Abandoned

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/001 - Texturing; Colouring; Generation of texture or colour
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/20 - Control arrangements or circuits for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G 3/2003 - Display of colours
    • G09G 3/2007 - Display of intermediate tones
    • G09G 3/2044 - Display of intermediate tones using dithering
    • G09G 3/2051 - Display of intermediate tones using dithering with use of a spatial dither pattern
    • G09G 3/2055 - Display of intermediate tones using dithering with use of a spatial dither pattern, the pattern being varied in time
    • G09G 3/2074 - Display of intermediate tones using sub-pixels
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/46 - Colour picture communication systems
    • H04N 1/52 - Circuits or arrangements for halftone screening
    • H04N 9/00 - Details of colour television systems
    • H04N 9/64 - Circuits for processing colour signals
    • H04N 9/67 - Circuits for processing colour signals for matrixing
    • G09G 2320/00 - Control of display operating conditions
    • G09G 2320/02 - Improving the quality of display appearance
    • G09G 2320/0242 - Compensation of deficiencies in the appearance of colours
    • G09G 2340/00 - Aspects of display data processing
    • G09G 2340/04 - Changes in size, position or resolution of an image
    • G09G 2340/0407 - Resolution change, inclusive of the use of different resolutions for different screen areas
    • G09G 2340/0414 - Vertical resolution change
    • G09G 2340/0421 - Horizontal resolution change
    • G09G 2340/0428 - Gradation resolution change
    • G09G 2360/00 - Aspects of the architecture of display systems
    • G09G 2360/16 - Calculation or use of calculated indices related to luminance levels in display data
    • G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/02 - Control arrangements or circuits for visual indicators characterised by the way in which colour is displayed

Definitions

  • The present disclosure relates generally to techniques for dithering images using a luminance approach.
  • Electronic displays are typically configured to output a set number of colors within a color range.
  • A graphical image to be displayed may have a number of colors greater than the number of colors that can be shown by the electronic display.
  • For example, a graphical image may be encoded with a 24-bit color depth (e.g., 8 bits for each of the red, green, and blue components of the image), while an electronic display may be configured to provide output images at an 18-bit color depth (e.g., 6 bits for each component).
  • Dithering techniques may be used to output a graphical image that appears to be a closer approximation of the original color image. However, conventional dithering techniques may not approximate the original image as closely as desired.
  • The present disclosure generally relates to dithering techniques that may be used to display color images on an electronic display.
  • The electronic display may include one or more electronic components, including a power source, pixelation hardware (e.g., light emitting diodes or a liquid crystal display), and circuitry for receiving signals representative of the image data to be displayed.
  • In some embodiments, a processor may be internal to the display, while in other embodiments the processor may be external to the display and included as part of an electronic device, such as a computer workstation or a cell phone.
  • The processor may use dithering techniques, including the spatial and temporal dithering techniques disclosed herein, to output color images on the electronic display.
  • Adjacent pixels are color-shifted with respect to each other, and the color values of certain pixels are temporally alternated with the color values of other pixels in the group.
  • The luminance of a group of adjacent pixels is determined, and the luminance of the group is made more homogeneous spatially and temporally by distributing color variations over a larger number of pixels so as to reduce the luminance difference between the pixel with the least luminance and the pixel with the greatest luminance.
  • FIG. 1 is a simplified block diagram depicting components of an example of an electronic device that includes image processing circuitry configured to implement one or more of the image processing techniques set forth in the present disclosure.
  • FIG. 2 is a front view of the electronic device of FIG. 1 in the form of a desktop computing device, in accordance with aspects of the present disclosure.
  • FIG. 3 is a front view of the electronic device of FIG. 1 in the form of a handheld portable electronic device, in accordance with aspects of the present disclosure.
  • FIG. 4 shows a graphical representation of an M×N pixel array that may be included in the device of FIG. 1, in accordance with aspects of the present disclosure.
  • FIG. 5 is a block diagram illustrating image signal processing (ISP) logic, in accordance with aspects of the present disclosure.
  • FIG. 6 is a logic diagram illustrating an operation of the device of FIG. 1, in accordance with aspects of the present disclosure.
  • FIG. 7 is a block diagram generally representative of certain aspects of the logic of FIG. 6, in accordance with aspects of the present disclosure.
  • FIG. 8 is a block diagram illustrating the use of temporal dithering, in accordance with aspects of the present disclosure.
  • FIG. 9 is a second logic diagram illustrating an operation of the device of FIG. 1, in accordance with aspects of the present disclosure.
  • FIG. 10 is a block diagram generally representative of certain aspects of FIG. 9, in accordance with aspects of the present disclosure.
  • FIGS. 11-14 are generally illustrative of an example of temporal dithering, in accordance with one embodiment of the present disclosure.
  • FIG. 15 is a second block diagram generally representative of certain aspects of FIG. 9, in accordance with aspects of the present disclosure.
  • The present disclosure relates generally to techniques for processing and displaying image data on an electronic display device.
  • Certain aspects of the present disclosure may relate to techniques for processing images using temporal and spatial dithering techniques.
  • The presently disclosed techniques may be applied to both still images and moving images (e.g., video), and may be utilized in any suitable type of electronic display device, such as a cell phone, a desktop computer monitor, a tablet computing device, an e-book reader, a television, and so forth.
  • FIG. 1 is a block diagram illustrating an example of an electronic device 10 that may provide for the processing of image data using one or more of the image processing techniques mentioned above.
  • The electronic device 10 may be any type of electronic device, such as a laptop or desktop computer, a mobile phone, a digital media player, a television, or the like, that is configured to process and display image data.
  • For example, the electronic device 10 may be a portable electronic device, such as a model of an iPad®, iPod® or iPhone®, available from Apple Inc. of Cupertino, Calif.
  • Alternatively, the electronic device 10 may be a desktop or laptop computer, such as a model of a MacBook®, MacBook® Pro, MacBook Air®, iMac®, Mac® Mini, or Mac Pro®, available from Apple Inc.
  • The electronic device 10 may provide for the processing of image data using one or more of the image processing techniques briefly discussed above, which may include spatial and/or temporal dithering techniques, among others. In some embodiments, the electronic device 10 may apply such image processing techniques to image data stored in a memory of the electronic device 10. Portable and non-portable embodiments of the electronic device 10 are discussed further below with respect to FIGS. 2 and 3.
  • The electronic device 10 may include various internal and/or external components which contribute to the function of the device 10.
  • The various functional blocks shown in FIG. 1 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements.
  • The electronic device 10 may include input/output (I/O) ports 12, input structures 14, one or more processors 16, a memory device 18, non-volatile storage 20, expansion card(s) 22, a networking device 24, a power source 26, and a display 28.
  • In addition, the electronic device 10 may include one or more imaging devices 30, such as a digital camera, and image processing circuitry 32.
  • The image processing circuitry 32 may be configured to implement one or more of the above-discussed image processing techniques.
  • Image data processed by the image processing circuitry 32 may be retrieved from the memory 18 and/or the non-volatile storage device(s) 20, or may be acquired using the imaging device 30.
  • The system block diagram of the device 10 shown in FIG. 1 is intended to be a high-level control diagram depicting various components that may be included in such a device 10.
  • The depicted processor(s) 16 may, in some embodiments, include multiple processors, such as a main processor (e.g., CPU) and dedicated image and/or video processors. In such embodiments, the processing of image data may be primarily handled by these dedicated processors, thus effectively offloading such tasks from the main processor (CPU).
  • The input structures 14 may provide user input or feedback to the processor(s) 16.
  • Input structures 14 may be configured to control one or more functions of the electronic device 10, such as applications running on the electronic device 10.
  • The processor(s) 16 may control the general operation of the device 10.
  • For instance, the processor(s) 16 may provide the processing capability to execute an operating system, programs, user and application interfaces, and any other functions of the electronic device 10.
  • The processor(s) 16 may include one or more microprocessors, such as one or more "general-purpose" microprocessors, one or more special-purpose microprocessors and/or application-specific integrated circuits (ASICs), or a combination of such processing components.
  • The processor(s) 16 may include one or more reduced instruction set computer (RISC) processors, as well as graphics processing units (GPUs), video processors, audio processors, and/or related chip sets.
  • The processor(s) 16 may be coupled to one or more data buses for transferring data and instructions between various components of the device 10.
  • The processor(s) 16 may provide the processing capability to execute source code embodiments capable of employing the dithering techniques described herein.
  • The instructions or data to be processed by the processor(s) 16 may be stored in a computer-readable medium, such as a memory device 18.
  • The memory device 18 may be provided as a volatile memory, such as random access memory (RAM), as a non-volatile memory, such as read-only memory (ROM), or as a combination of one or more RAM and ROM devices.
  • The memory 18 may be used for buffering or caching during operation of the electronic device 10.
  • The memory 18 may include one or more frame buffers for buffering video data as it is being output to the display 28.
  • The electronic device 10 may further include non-volatile storage 20 for persistent storage of data and/or instructions.
  • The non-volatile storage 20 may include flash memory, a hard drive, or any other optical, magnetic, and/or solid-state storage media, or some combination thereof.
  • Image processing data stored in the non-volatile storage 20 and/or the memory device 18 may be processed by the image processing circuitry 32 prior to being output on a display.
  • The embodiment illustrated in FIG. 1 may also include one or more card or expansion slots.
  • The card slots may be configured to receive an expansion card 22 that may be used to add functionality, such as additional memory, I/O functionality, networking capability, or graphics processing capability, to the electronic device 10.
  • The electronic device 10 also includes the network device 24, which may be a network controller or a network interface card (NIC) that may provide network connectivity over a wireless 802.11 standard or any other suitable networking standard, such as a local area network (LAN) or wide area network (WAN) standard.
  • The power source 26 of the device 10 may include the capability to power the device 10 in both non-portable and portable settings.
  • The display 28 may be used to display various images generated by the device 10, such as a GUI for an operating system, or image data (including still images and video data) processed by the image processing circuitry 32, as will be discussed further below.
  • The image data may include image data acquired using the imaging device 30 or image data retrieved from the memory 18 and/or the non-volatile storage 20.
  • The display 28 may be any suitable type of display, such as a liquid crystal display (LCD), a plasma display, or an organic light emitting diode (OLED) display, for example.
  • The display 28 may be provided in conjunction with the above-discussed touch-sensitive mechanism (e.g., a touch screen) that may function as part of a control interface for the electronic device 10.
  • The illustrated imaging device(s) 30 may be provided as a digital camera configured to acquire both still images and moving images (e.g., video).
  • The image processing circuitry 32 may provide for various image processing steps, such as spatial dithering, temporal dithering, pixel color-space conversion, luminance determination, luminance optimization, image scaling operations, and so forth.
  • The image processing circuitry 32 may include various subcomponents and/or discrete units of logic that collectively form an image processing "pipeline" for performing each of the various image processing steps. These subcomponents may be implemented using hardware (e.g., digital signal processors or ASICs) or software, or via a combination of hardware and software components.
  • The various image processing operations that may be provided by the image processing circuitry 32, particularly those relating to spatial dithering, temporal dithering, pixel color-space conversion, luminance determination, and luminance optimization, will be discussed in greater detail below.
  • FIGS. 2 and 3 illustrate various forms that the electronic device 10 may take.
  • The electronic device 10 may take the form of a computer, including computers that are generally portable (such as laptop, notebook, and tablet computers) as well as computers that are generally non-portable (such as desktop computers, workstations, and/or servers), or another type of electronic device, such as a handheld portable electronic device (e.g., a digital media player or mobile phone).
  • FIGS. 2 and 3 depict the electronic device 10 in the form of a desktop computer 34 and a handheld portable electronic device 36, respectively.
  • FIG. 2 further illustrates an embodiment in which the electronic device 10 is provided as the desktop computer 34.
  • The desktop computer 34 may be housed in an enclosure 38 that includes the display 28, as well as various other components discussed above with regard to the block diagram shown in FIG. 1.
  • The desktop computer 34 may include an external keyboard and mouse (input structures 14) that may be coupled to the computer 34 via one or more I/O ports 12 (e.g., USB) or may communicate with the computer 34 wirelessly (e.g., RF, Bluetooth, etc.).
  • The desktop computer 34 also includes an imaging device 40, which may be an integrated or external camera, as discussed above.
  • The depicted desktop computer 34 may be a model of an iMac®, Mac® mini, or Mac Pro®, available from Apple Inc.
  • The display 28 may be configured to generate various images that may be viewed by a user, such as a dithered image 42.
  • The dithered image 42 may have been generated using, for example, the spatial and temporal dithering techniques described in more detail below.
  • The display 28 may display a graphical user interface ("GUI") 44 that allows the user to interact with an operating system and/or applications running on the computer 34.
  • Each input structure 14 may be configured to control one or more respective device functions when pressed or actuated.
  • For example, one or more of the input structures 14 may be configured to invoke a "home" screen or menu to be displayed, to toggle between a sleep, wake, or powered on/off mode, to silence a ringer for a cellular phone application, to increase or decrease a volume output, and so forth.
  • The handheld device 36 may include any number of suitable user input structures existing in various forms, including buttons, switches, keys, knobs, scroll wheels, and so forth.
  • The handheld device 36 includes the display device 28.
  • The display device 28, which may be an LCD, OLED, or any suitable type of display, may display various images generated by the techniques disclosed herein.
  • For example, the display 28 may display the dithered image 42.
  • The display device 28 may be any suitable type of display, such as a liquid crystal display (LCD), a plasma display, a digital light processing (DLP) projector, an organic light emitting diode (OLED) display, and so forth.
  • The display 28 may include a matrix of pixel elements, such as the example M×N matrix 48 depicted in FIG. 4. Accordingly, the display 28 is capable of presenting an image at a natural display resolution of M×N. For example, in embodiments where the display 28 is included in a 30-inch Apple Cinema HD Display®, the natural display resolution may be approximately 2560×1600 pixels.
  • A pixel matrix 50 is depicted in greater detail and includes four adjacent pixels 52, 54, 56, and 58.
  • Each pixel of the display device 28 may include three sub-pixels capable of displaying a red (R), a green (G), and a blue (B) color.
  • The human eye is capable of perceiving the particular RGB color combination displayed by the pixel and translating the combination into a specific color.
  • A number of colors may be displayed by each individual pixel by varying the individual RGB intensity levels of the pixel. For example, a pixel having a level of 50% R, 50% G, and 50% B may be perceived as the color gray, while a pixel having a level of 100% R, 100% G, and 0% B may be perceived as the color yellow.
  • The number of colors that a pixel is capable of displaying is dependent on the hardware capabilities of the display 28.
  • For example, a display 28 with a 6-bit color depth for each sub-pixel is capable of producing 64 (2⁶) intensity levels for each of the R, G, and B color components.
  • The number of bits per sub-pixel, e.g., 6 bits, is referred to as the pixel depth.
  • At a pixel depth of 6 bits, 262,144 (2⁶ × 2⁶ × 2⁶) color combinations are possible, while at a pixel depth of 8 bits, 16,777,216 (2⁸ × 2⁸ × 2⁸) color combinations are possible.
  • Although the visual quality of images produced by an 8-bit pixel depth display 28 may be superior to that of images produced by a 6-bit pixel depth display 28, the cost of the 8-bit display 28 is also higher. Accordingly, it would be beneficial to apply image processing techniques, such as the techniques described herein, that are capable of displaying a source image with improved visual reproduction even when utilizing lower pixel depth displays 28. Further, a source image may contain more colors than those supported by the display 28, even for displays 28 having higher pixel depths. Accordingly, it would also be beneficial to apply image processing techniques that are capable of improved visual representation of any number of colors. Indeed, the image processing techniques described herein, such as those described in more detail with respect to FIG. 5 below, are capable of displaying improved visual reproductions at any number of pixel depths from source images having a greater number of colors than can be output by the display hardware.
  • Turning to FIG. 5, the figure depicts an embodiment of image signal processing (ISP) pipeline logic 60 that may be utilized to process and display a source image 62.
  • The ISP logic 60 may be implemented using hardware and/or software components, such as the image processing circuitry 32 of FIG. 1.
  • A source image 62 may be provided, for example, by placing an electronic representation of the source image 62 into the memory 18. In such an example, the source image 62 may be placed into a frame buffer of the memory 18.
  • The source image 62 may include colors that are not directly supported by the hardware of the electronic device 10.
  • For example, the source image 62 may be stored at a pixel depth of 8 bits while the hardware includes a 6-bit pixel depth display 28. Accordingly, the source image 62 may be manipulated by the techniques disclosed herein so that it may be displayed on a lower pixel depth display 28.
  • The source image 62 may first undergo color decomposition (block 64).
  • The color decomposition (block 64) is capable of decomposing the color of each pixel of the source image 62 into the three RGB color levels. That is, the RGB intensity levels for each pixel may be determined by the color decomposition (block 64).
  • Such a decomposition may be referred to as a three-channel decomposition, because the colors may be decomposed into a red channel, a green channel, and a blue channel, for example.
  • The source image 62 may also undergo a luminance analysis (block 66).
  • Luminance is related to the perceived brightness of an image or an image component (such as a pixel) to the human eye. Further, humans typically perceive colors as having different luminance even if each color has equal radiance. For example, at equal radiances, humans typically perceive the color green as having a higher luminance than the color red. Additionally, the color red is perceived as having a higher luminance than the color blue.
  • A luminance formula Y may be arrived at by incorporating observations based on the perception of luminance by humans, as defined below:

    Y = 0.3R + 0.6G + 0.1B

  • The luminance equation Y above is an additive formula based on 30% red, 60% green, and 10% blue chromaticities (e.g., color values).
  • The luminance formula Y can thus use the RGB color levels of a pixel to determine an approximate human perception of the luminance of the pixel. It is to be understood that, because of the variability of human perception, among other factors, the values used in the luminance equation are approximate. Indeed, in other embodiments, the percentage values for R, G, and B may be different. For example, in another embodiment, the values may be approximately 29.9% red, 58.7% green, and 11.4% blue.
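  • As a rough illustration of the formula (a minimal sketch, not the patent's implementation; the helper name is hypothetical):

```python
def luminance(r, g, b):
    """Approximate perceived luminance Y from a pixel's R, G, B levels,
    using the 30%/60%/10% weights described above."""
    return 0.3 * r + 0.6 * g + 0.1 * b

# Example at 8-bit levels: green contributes far more perceived
# brightness than blue at the same intensity.
print(luminance(0, 255, 0))  # 153.0
print(luminance(0, 0, 255))  # 25.5
```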
  • The luminance value of each pixel may then be utilized for spatial dithering (block 68).
  • In spatial dithering, the image may be manipulated so as to increase the "noise" of the image, decrease color banding, and make sharp edges of the image less detectable. Spatial dithering may therefore improve image perception and quality.
  • The pixels of the source image 62 may first be converted to a lower pixel depth, for example, through a most significant bit (MSB) and least significant bit (LSB) process, as described in more detail below with respect to FIG. 6.
  • Multiple dither patterns 70 may also be used during spatial dithering so as to enable a displayed image 74 to more closely approximate the source image 62.
  • In one embodiment, two sets of dither patterns 70 and 70′ may be stored in memory.
  • The set of dither patterns 70 may be used with a color channel such as green, and the set 70′ may be used with the red and blue color channels.
  • In other embodiments, the dither patterns 70 (and 70′) may be dynamically calculated based on the luminance analysis (block 66) and not stored in memory.
  • Alternatively, only the dither patterns 70 corresponding to a single color channel, such as green, may be stored in memory.
  • In that case, the dither patterns 70′ may be derived based on the stored dither patterns 70.
  • The dither patterns 70 and 70′ are described in more detail below with respect to FIG. 7.
  • Spatial dithering may be capable of spatially distributing the colors and luminance of the source image 62 so as to enable the display of the source image 62 at a lower pixel depth while significantly preserving the perceived image quality.
  • The ISP logic 60 may also be capable of utilizing temporal dithering (block 72).
  • In temporal dithering, the colors and/or luminosity of pixels may be alternated frame by frame so as to improve the perceived image quality of the displayed image 74. That is, a first frame of the processed image may be presented at time T0, followed by a second frame of the processed image presented at time T1. The second frame may have color and/or luminance variations from the first frame.
  • Similarly, a third frame of the processed image may be presented at time T2 having color and/or luminance values that differ from the second frame.
  • Additional frames may then be presented, also having color and/or luminance values that differ from the third frame.
  • The temporal dithering may iteratively loop through the frame presentations. That is, after presenting a certain n-th frame at time Tn, the first frame may be presented again, followed by the second frame, and so on, up to the n-th frame and then returning to the first frame.
  • Humans may perceive multiple frames presented sequentially one after the other as a single image. Indeed, in some embodiments, 60, 120, 240, or more frames per second (FPS) may be presented sequentially. By alternating the color and/or the luminance of each frame and by presenting the frames sequentially, it is possible to produce a single perceived image that is more natural and pleasing to the human eye. Accordingly, the dithering techniques described herein, such as the MSB-LSB based technique described in more detail with respect to FIG. 6, may allow for the visually pleasing presentation of the displayed image 74 having a lower pixel depth than the source image 62.
  • FIG. 6 is illustrative of an embodiment of a logic 76 capable of utilizing MSB-LSB techniques to spatially and temporally dither the source image 62. That is, the logic 76 is capable of transforming the source image 62 having a higher pixel depth into the displayed image 74 having a lower pixel depth. Accordingly, the logic 76 may include non-transitory machine-readable code or computer instructions that may be used by a processor, for example, to transform image data.
  • The source image 62 may first be decomposed (block 78) into three R, G, and B channels 80. In one embodiment, three M×N matrices may be created based on the M×N source image resolution, each matrix corresponding to one of the three color channels 80.
  • The values in each cell of the red channel matrix (R) correspond to the red intensity values of the pixels, the values in each cell of the green channel matrix (G) correspond to the green intensity values, and the values in each cell of the blue channel matrix (B) correspond to the blue intensity values.
  • Each R, G, B color channel matrix 80 may then be subdivided (block 82) into multiple source image groups (e.g., matrices) 84 corresponding to different areas of the image.
  • In the depicted example, a group 84 is sized as a 4×4 pixel group having a total of 16 pixels. Accordingly, the subdivision (block 82) of the source image 62 may be accomplished by selecting multiple adjacent 4×4 pixel groups 84 so as to partition the entire image into 4×4 pixel groups 84, as sketched below.
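  • A minimal sketch of this decomposition and 4×4 subdivision, assuming an M×N×3 NumPy image with M and N divisible by four (the function name is illustrative, not from the patent):

```python
import numpy as np

def decompose_and_tile(source: np.ndarray, tile: int = 4):
    """Split an M x N x 3 RGB image into its R, G, B channel matrices,
    then partition each channel into adjacent tile x tile pixel groups."""
    channels = [source[:, :, c] for c in range(3)]  # R, G, B matrices
    m, n = channels[0].shape
    groups = [ch[i:i + tile, j:j + tile]
              for ch in channels
              for i in range(0, m, tile)
              for j in range(0, n, tile)]
    return channels, groups
```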
  • Each 4×4 pixel group 84 may then be used to create (block 86) a corresponding LSB group 88 and MSB group 90, as shown in more detail with respect to FIG. 7.
  • The LSB group 88 and the MSB group 90 may be created by dividing the pixel depth information of each pixel into two values, an LSB value and an MSB value. The LSB values of all the pixels in the group may then be used to create the LSB group 88. Likewise, the MSB values of all the pixels in the group may be used to create the MSB group 90.
  • More specifically, the pixel's color value may be provided in, or converted to, a binary value.
  • The binary value may then be divided into two binary values, the LSB value and the MSB value.
  • The most significant bits, equal in number to the pixel depth (e.g., 6 bits) of the display device 28, are selected as the MSB value, and the remaining bits are selected as the LSB value.
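  • A sketch of this split, assuming the 9-bit source values and 6-bit display depth used in the FIG. 7 example (the bit widths are parameters, not fixed by the patent):

```python
def split_msb_lsb(value: int, total_bits: int = 9, msb_bits: int = 6):
    """Divide a pixel's binary color value into an MSB value (the top
    msb_bits, matching the display's pixel depth) and an LSB value
    (the remaining low-order bits)."""
    lsb_bits = total_bits - msb_bits
    msb = value >> lsb_bits               # top 6 bits
    lsb = value & ((1 << lsb_bits) - 1)   # bottom 3 bits
    return msb, lsb

# Example: the 9-bit value 365 (0b101101101) splits into
# MSB 45 (0b101101) and LSB 5 (0b101).
print(split_msb_lsb(365))  # (45, 5)
```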
  • A dither pattern 70 may then be selected and used to create a modification matrix 94 (block 92).
  • More specifically, one of the dither patterns 70 may be selected based on the LSB value or magnitude and used to create (block 92) the modification matrix 94.
  • The values of the LSB group 88 may be used to define the values of the modification matrix 94, resulting in the modification matrix 94 having ones and zeros. Examples of the use of the LSB group 88 to create the modification matrix 94 based on the dither patterns 70 are described in more detail with respect to FIG. 7 below.
  • The modification matrix 94 may then be mathematically added (i.e., through matrix addition) to the MSB group 90 (block 96) to create a new lower pixel depth (e.g., 6-bit) MSB matrix 98.
  • The resulting lower pixel depth MSB matrix 98 is thus capable of being displayed by the display 28.
  • In this manner, multiple new MSB matrices 98 may be derived, corresponding to all the pixel groups of the source image 62.
  • The multiple new MSB matrices 98 may then be displayed as the displayed image 74.
  • FIG. 7 depicts an example set of a source image pixel group 84, an LSB group 88, an MSB group 90, dither patterns 70, and a new MSB matrix 98 having values illustrative of the transformation of an individual color channel (e.g., R, G, or B) of a source image 62 into a displayed image 74 color channel by using the logic 76 described above.
  • A source image group 84 may contain four values (e.g., 9-bit values), A, B, C, and D, in a first row corresponding to an individual color channel (e.g., R, G, or B).
  • It is to be understood that any numeric value may be assigned to A, B, C, or D, and that other rows of the source image group 84 may include additional values.
  • One of the dither patterns 70 (e.g., individual dither patterns 102, 104, 106, 108, 110, 112, 114, and 116) may then be selected and used to create the modification matrix 94 based on the LSB group 88.
  • A dither pattern, such as the dither pattern 110, is selected based on the LSB group 88, as described in more detail below. Once selected, the dither pattern 110 and the LSB group 88 may be used to create the modification matrix 94.
  • The value (i.e., magnitude) of each cell of the LSB group 88 is used to select one of the dither patterns 70.
  • Because the values of the 3-bit LSB cells may vary from the decimal value "0" to the decimal value "7", there are eight possible values. Accordingly, eight dither patterns 70 are provided when using the 3-bit LSB group 88. It is to be understood that when the LSB group 88 stores more (or fewer) binary bits, more (or fewer) dither patterns 70 may be provided. For example, when using a 2-bit LSB group 88, there may be four (i.e., 2²) dither patterns 70 provided. Likewise, when using a 4-bit LSB group 88, sixteen (i.e., 2⁴) dither patterns 70 may be provided.
  • The magnitude or value of the 3-bit binary number stored in each cell of the LSB group 88 may then be used to select one of the eight illustrated dither patterns 70.
  • For example, cell L4 of the LSB group 88 may have the value "0", which corresponds to the first of the eight possible values "0" to "7". Accordingly, the first dither pattern 102 of the eight dither patterns 70 may be selected.
  • The cell L3 contains the value "4", which corresponds to the fifth of the eight possible values "0" to "7". Accordingly, the fifth dither pattern 110 may be selected.
  • The cell L2 contains the value "3", which corresponds to the fourth dither pattern 108.
  • L1 contains the value "5", which in turn corresponds to the sixth dither pattern 112.
  • In this manner, the first row of the LSB group 88, containing the cells L1, L2, L3, and L4, maps to the dither patterns 70. All other cells of the LSB group 88 may be mapped to one of the dither patterns 70 in a similar manner.
  • As noted above, two sets of dither patterns 70 and 70′ may be used.
  • The dither patterns 70 illustrated in FIG. 7 may be used with the green color channel.
  • The set of dither patterns 70′ may then be used with the red and blue color channels.
  • This second set of dither patterns 70′ may be derived by shifting the ones and zeros of each of the illustrated dither patterns 104, 106, 108, 110, 112, 114, and 116 so as to more homogeneously distribute luminance.
  • For example, a dither pattern 104′ may be used with the colors red and/or blue, where the first value "1" found in the dither pattern 104 at position (1,1) may have been shifted to position (2,2). Likewise, the second value "1" found in the dither pattern 104 at position (3,3) may be shifted to position (4,4) in the dither pattern 104′.
  • Such a phase shifting of the values from 104 to 104′ may enable a more homogeneous distribution of the overall luminance, because the green values (e.g., when using dither pattern 104) are counterbalanced by the red and blue values (e.g., when using dither pattern 104′).
  • Similarly, all dither patterns 104, 106, 108, 110, 112, 114, and 116 may be phase-shifted into dither patterns 104′, 106′, 108′, 110′, 112′, 114′, and 116′ so as to more homogeneously distribute the luminance.
  • The phase-shifting may be accomplished by shifting the "1" values to counterbalance the effect on luminance of the previous positions of the "1" values.
  • For example, dither pattern 108′ may be arrived at having a first row "0 1 0 0", a second row "1 0 0 1", a third row "0 0 0 1", and a fourth row "0 1 1 0" by counterbalancing the effect of the "1" values of dither pattern 108.
  • Likewise, a dither pattern 116′ may be arrived at having a first row "1 0 1 1", a second row "1 1 1 1", a third row "1 1 1 0", and a fourth row "1 1 1 1" by counterbalancing the effect of the "1" values of the dither pattern 116.
  • The LSB group 88 may again be used to select one of the cells in each of the selected dither patterns 70 (or 70′). To make such a cell selection, the position of each cell in the LSB group 88 is used to "point" to the same position in the selected dither pattern 70 (or 70′).
  • For example, L3 may first be used to select the dither pattern 110, and then L3's cell position may be used to select one of the cells of the dither pattern 110. L3 is positioned in the first row, third column cell. Accordingly, the cell in the first row, third column of the dither pattern 110 may then be selected.
  • The value in this first row, third column cell (i.e., "1") of the dither pattern 110 may then be used to fill the cell at the same position (i.e., first row, third column) in the modification matrix 94.
  • Similarly, the cells L1, L2, and L4 may be used.
  • L1 is in the first row, first column of the LSB group 88, so the first row, first column value of the dither pattern 112 (i.e., "1") is copied to the first row, first column cell of the modification matrix 94.
  • L2 is in the first row, second column of the LSB group 88, so the first row, second column value of the dither pattern 108 (i.e., "0") is copied to the first row, second column cell of the modification matrix 94.
  • For L4, the value (i.e., "0") of the cell in the first row, fourth column of the dither pattern 102 is copied into the first row, fourth column cell of the modification matrix 94.
  • In this manner, all of the cells of the modification matrix 94 may be derived as having a zero or a one, as sketched below.
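  • A minimal sketch of this two-step lookup (each LSB cell's value picks a pattern; the cell's own position picks the bit), assuming dither_patterns is a list of 4×4 0/1 NumPy arrays indexed by LSB value:

```python
import numpy as np

def build_modification_matrix(lsb_group: np.ndarray, dither_patterns) -> np.ndarray:
    """For each LSB cell, select the dither pattern indexed by the cell's
    value, then copy the bit at the cell's own (row, column) position
    into the modification matrix."""
    mod = np.zeros_like(lsb_group)
    for row in range(lsb_group.shape[0]):
        for col in range(lsb_group.shape[1]):
            pattern = dither_patterns[int(lsb_group[row, col])]
            mod[row, col] = pattern[row, col]
    return mod
```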
  • The MSB block 90 may then be added to the modification matrix 94 by using, for example, matrix addition. That is, every cell in the MSB block 90 may be added to the corresponding cell in the modification matrix 94.
  • The result of the addition operation is a new MSB block 98.
  • The remaining rows of the new MSB block 98 may then be similarly computed based on the values of the corresponding rows of the source image block 84.
  • The new MSB block 98 may include color values at a lower pixel depth than the source image block 84, suitable for display by the display 28.
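  • The addition itself can be sketched in one line; clamping at the 6-bit maximum is an added safeguard for cells already at full intensity, not a step stated in the text:

```python
import numpy as np

def apply_modification(msb_group: np.ndarray, mod: np.ndarray, depth: int = 6) -> np.ndarray:
    """Matrix-add the 0/1 modification matrix to the MSB group to form
    the new lower pixel depth MSB block."""
    return np.minimum(msb_group + mod, (1 << depth) - 1)
```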
  • In this way, the dithering techniques disclosed herein allow for the creation of multiple new MSB blocks 98 suitable for displaying the higher pixel depth (e.g., 9-bit) source image 62 at a lower pixel depth (e.g., 6-bit).
  • Turning to FIG. 8, the figure depicts an example of dither patterns 102, 110, 106, and 114 as they may be temporally dithered. Indeed, any of the dither patterns 70 may be temporally dithered in some embodiments, and such temporal dithering of the dither patterns 70 may be used in addition to the LSB-MSB techniques described above to further transform the source image 62.
  • FIG. 8 depicts three rows, each row representing a temporal frame at times T0, T1, and T2.
  • A first row shows an example of an initial condition (i.e., the positions of the zeros and ones) at time T0 for each of the dither patterns 102, 110, 106, and 114.
  • Time T0 may correspond to the display of the first frame of the image, as mentioned above.
  • The depicted example dither patterns 102, 110, 106, and 114 may be used to create the modification matrix 94 at time T0 using the methodology described above with respect to FIG. 7.
  • The modification matrix may then be used to transform the source image 62 into a displayed image 74 at time T0.
  • The second row of the depicted example corresponds to time T1.
  • The bits of the dither patterns at time T1 have been temporally shifted from their positions at time T0.
  • In this example, the shift of the bits is accomplished by a clockwise rotation of the bits.
  • More specifically, each of the dither patterns may be divided into a top-left quadrant 118, a top-right quadrant 120, a bottom-right quadrant 122, and a bottom-left quadrant 124, each quadrant having four bits.
  • Each of the quadrants may have its bits rotated in a clockwise direction, as depicted in FIG. 8.
  • For example, the top row (e.g., the top two bits) of the depicted quadrant 118 of the dither pattern 110 has shifted from storing the bits "1" and "0" at time T0 to storing the bits "0" and "1" at time T1.
  • Likewise, the bottom row (e.g., the bottom two bits) of the aforementioned quadrant 118 has shifted from storing the bits "0" and "1" at time T0 to storing the bits "1" and "0" at time T1.
  • The depicted example dither patterns 102, 110, 106, and 114 at time T1 may be used to create the modification matrix 94 as described above.
  • The modification matrix 94 may then be used to transform the source image 62 into a displayed image 74 at time T1.
  • A third row in FIG. 8, corresponding to time T2, may then be similarly created (e.g., by shifting the bits in each quadrant) and used to display a frame of the image at time T2.
  • Here, the top row of quadrant 118 of the dither pattern 110 has shifted from storing the bits "0" and "1" at time T1 to storing the bits "1" and "0" at time T2.
  • Similarly, the bottom row of the quadrant 118 has shifted from storing the bits "1" and "0" at time T1 to storing the bits "0" and "1" at time T2.
  • The other quadrants 120, 122, and 124 may be similarly shifted as the dither patterns 70 undergo temporal dithering. Such temporal dithering of the dither patterns may allow the resulting displayed image 74 to be perceived as having a higher visual quality, because the human eye may perceive the multiple frames displayed sequentially in time as a single frame having an improved image quality.
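  • A sketch of this per-quadrant clockwise rotation (the function name is illustrative). For a 2×2 quadrant [[a, b], [c, d]], one clockwise step yields [[c, a], [d, b]], which reproduces the quadrant 118 example above ([[1, 0], [0, 1]] at T0 becomes [[0, 1], [1, 0]] at T1):

```python
import numpy as np

def rotate_quadrants_clockwise(pattern: np.ndarray) -> np.ndarray:
    """Advance a 4x4 dither pattern one frame by rotating the four bits
    of each 2x2 quadrant one step clockwise."""
    out = pattern.copy()
    for r0 in (0, 2):
        for c0 in (0, 2):
            q = pattern[r0:r0 + 2, c0:c0 + 2]
            out[r0:r0 + 2, c0:c0 + 2] = np.array([[q[1, 0], q[0, 0]],
                                                  [q[1, 1], q[0, 1]]])
    return out
```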
  • FIG. 9 is illustrative of a logic 126 capable of employing spatial, temporal, and/or luminance-based dithering techniques so as to enhance the visual quality of a lower pixel-depth image.
  • The logic 126 may include non-transitory machine-readable code or computer instructions that may be used by a processor, for example, to transform image data.
  • The source image 62 may first be decomposed (block 78) into three R, G, and B channels 80. That is, three M×N matrices may be created based on the M×N source image 62 resolution, each matrix corresponding to one of the three color channels 80 (e.g., red, green, blue).
  • The values in each cell of the red channel matrix (R) correspond to the red color values of the pixels, the values in each cell of the green channel matrix (G) correspond to the green color values, and the values in each cell of the blue channel matrix (B) correspond to the blue color values.
  • Each R, G, B color channel matrix 80 may then be used (block 82) to create multiple source image groups (e.g., matrices) 84 corresponding to different areas of the image, or to different pixels of the image, with each cell in the group having a red, green, and blue color component.
  • In the depicted embodiment, the group is sized as a 2×2 pixel group having a total of four pixels.
  • The values for each of the pixels in the source group 84 may be derived from a single pixel of the source image. That is, the RGB values of a source image pixel may be copied into the 2×2 pixel group 84.
  • Alternatively, multiple adjacent 2×2 pixels of the source image may be copied into the 2×2 pixel group 84. Accordingly, the entire image may be divided either pixel by pixel or by selecting adjacent pixels. It is to be understood that, in other embodiments, other sizes of source image groups 84 may be used, for example, 4×4, 6×6, 8×8, and so forth.
  • A luminance value for each cell in the source image group 84 may then be determined (block 128), for example, through the use of the luminance formula Y described above.
  • A matrix of source image RGB values may then be derived based on the color values of each cell in the source image group 84.
  • The source image RGB matrix may include four cells, where each cell includes three sub-cells, each sub-cell storing a luminance value for one RGB channel. An example four-cell source image RGB matrix is shown in FIG. 10 below.
  • The source image RGB matrix may then be used to derive a displayed image RGB matrix having a reduced luminance amplitude.
  • For example, a higher pixel depth (e.g., 8-bit) RGB matrix may be converted into a lower pixel depth (e.g., 6-bit) RGB matrix suitable for display by the display device 28.
  • The luminance values for the cells of the lower pixel depth RGB matrix may be used to determine a luminance difference (block 132) of the lower pixel depth RGB matrix.
  • The luminance difference may be calculated by using the highest and the lowest luminance values in the lower pixel depth RGB matrix to find the largest luminance difference in the matrix, as sketched below.
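  • A sketch of this calculation, reusing the luminance formula Y from above (cells is assumed to be a list of (R, G, B) triples):

```python
def luminance_amplitude(cells):
    """Largest minus smallest perceived luminance over the group's cells."""
    ys = [0.3 * r + 0.6 * g + 0.1 * b for (r, g, b) in cells]
    return max(ys) - min(ys)
```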
  • The luminance difference or amplitude is then minimized by color-shifting the RGB values of each sub-cell of the lower pixel depth RGB matrix (block 134).
  • In one embodiment, a set of rules may be used to more evenly distribute the luminance values of the RGB matrix, as described in more detail below.
  • In other embodiments, other techniques, such as creating a reduced-amplitude luminance matrix 136 and then using the luminance values of the matrix to re-assign RGB values, may be used that result in the displayed image 74 having smaller differences between luminance values (i.e., reduced amplitude between values).
  • The color shifting reduces the overall luminance amplitude by dividing the luminance of each source image RGB channel into four lower pixel depth values. That is, a higher pixel depth value, such as an 8-bit value, may be divided into four lower pixel depth values, such as four 6-bit values.
  • The overall luminance differences are then reduced by reapportioning the red, green, and blue values of the four lower pixel depth RGB values so as to result in a reduced-amplitude luminance matrix 136 that has more homogeneous luminance values.
  • That is, the RGB color components of the cells in the reduced-amplitude luminance matrix are distributed spatially (e.g., moved from one cell to another cell) so as to reduce the luminance amplitude (e.g., the difference between the highest luminance and the lowest luminance) of the reduced-amplitude luminance matrix 136.
  • An example of such a spatial distribution of values is described in more detail with respect to FIG. 10 below.
  • The color shifting (block 134) thus results in the reduced-amplitude luminance matrix 136, which is capable of improving the perceived quality of a displayed image 74.
  • For example, the reduced-amplitude luminance matrix 136 may be able to minimize gradations between adjacent luminance and/or color levels so as to present a displayed image 74 that is more pleasing and natural to the human eye. Additionally, the reduced-amplitude luminance matrix 136 may undergo temporal dithering (block 138) so as to further enhance the visual quality of the resulting displayed image 74.
  • The temporal dithering (block 138) is described in more detail below with respect to FIGS. 11-14.
  • Turning to FIG. 10, the figure depicts an example of a reapportioning (i.e., color-shifting) of the RGB values so as to visually improve luminance homogeneity, as previously described in relation to the logic 126 above. It may be useful to explain the logic 126 by using example numeric values. Accordingly, FIG. 10 illustrates example RGB values and describes how such example values may result in a reduced-amplitude luminance matrix 136.
  • The source image RGB matrix 130 includes four cells, each divided into three sub-cells, with each sub-cell storing an R, G, or B value. As mentioned earlier, the RGB values may be arrived at by decomposing a pixel color into its RGB color components and storing such components in a source image group 84.
  • The source image group 84 may then be used to create the source image matrix 130 having a higher pixel depth (e.g., 8-bit pixel depth), suitable for transformation into the reduced-amplitude luminance matrix 136 having a lower pixel depth (e.g., 6-bit pixel depth).
  • In the depicted example, each cell of the source RGB matrix 130 stores the same source image color values (i.e., Rs, Gs, and Bs) as every other cell.
  • A table 142 depicts example decimal values for Rs, Gs, and Bs (e.g., "229", "131", and "190"). Because the values in the source image RGB matrix 130 are stored at a higher pixel depth (e.g., 8 bits), the values may need to be transformed to lower bit values (e.g., 6-bit pixel depth values) in order to allow display by the display 28. In one embodiment, each of the Rs, Gs, and Bs values (e.g., 8-bit values) may first be converted into lower pixel depth integer values (e.g., 6-bit values).
  • One such conversion from an 8-bit value into a 6-bit value may include dividing the original source value by four (i.e., dividing by 2²).
  • Alternatively, the first six bits of the 8-bit value may be used to arrive at the 6-bit value.
  • The resulting decimal values for the conversion are depicted as R1, R2, R3, and R4.
  • The conversion from a higher pixel depth value into a lower pixel depth may result in numbers having fractional components. For example, for the Rs value of "229", a division by four results in the number "57.25", having the fractional component "0.25". Because the hardware may not be suitable for displaying fractional color levels, the fractional component is usually not used.
  • Accordingly, the original source value "229" is approximated by using four lower pixel depth values R1, R2, R3, and R4 set to "57", "57", "57", and "58", respectively.
  • Similarly, the Gs value of "131" may result in G1, G2, G3, and G4 set to "32", "33", "33", and "33", respectively.
  • Likewise, the Bs value of "190" may result in B1, B2, B3, and B4 set to "47", "47", "48", and "48", respectively.
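  • A sketch that reproduces the table 142 values: each 8-bit channel value is divided by four, and the fractional remainder is preserved by giving (value mod 4) of the cells one extra level (which cells receive the extra level is an illustrative choice):

```python
def split_to_lower_depth(value: int, cells: int = 4):
    """Approximate one 8-bit value with four 6-bit values whose sum
    equals the original: base = value // 4, with the remainder spread
    as +1 over the last (value % 4) cells."""
    base, rem = divmod(value, cells)
    return [base] * (cells - rem) + [base + 1] * rem

print(split_to_lower_depth(229))  # [57, 57, 57, 58]
print(split_to_lower_depth(131))  # [32, 33, 33, 33]
print(split_to_lower_depth(190))  # [47, 47, 48, 48]
```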
  • These four sets of values representing the lower pixel depth (e.g., 6-bit) values may then be color-shifted, that is, distributed spatially so as to reduce the luminance amplitude of the matrix 136.
  • A luminance difference may first be calculated by finding a highest luminance value and a lowest luminance value based on all the RGB values of the luminance matrix 136, by using, for example, the luminance equation Y.
  • The luminance difference may then be adjusted by increasing or decreasing the values for red, green, and blue to reduce the luminance variation within the matrix 136.
  • Increasing or decreasing the green value has the greatest perceived effect on luminance, based on the perceived luminance equation Y described above.
  • Increasing or decreasing the color red (while keeping the other colors the same) has the second greatest effect on luminance, and increasing or decreasing the color blue (while keeping the other colors the same) has the least perceived effect on luminance.
  • An algorithm such as a value optimization algorithm (e.g., a greedy algorithm) may be used to assign the sets of values to specific cells (e.g., spatially distribute the values) so as to minimize the luminance difference of the reduced-amplitude luminance matrix 136, using the luminance equation Y to more evenly distribute the integer values. For example, the algorithm may first assign the four R1, R2, R3, and R4 values in increasing order, random order, or any other ordering. The four green values may then be assigned to minimize the red-green luminance difference between the four cells. That is, if a cell has a high red value compared to one or more other cells, then the cell may be used to store a low green value (compared to one or more other cells). In the depicted example, the highest red value is stored in R4; therefore, G4 may get the lowest green value. The blue color values may then be similarly assigned so that the resulting luminance difference of the reduced-amplitude luminance matrix 136 is lowered or minimized. Indeed, any algorithm suitable for spatially redistributing the sets of values (e.g., R1, R2, R3, R4, G1, G2, G3, G4, B1, B2, B3, and B4), including brute force search algorithms, may be used to derive the reduced-amplitude luminance matrix 136, as in the sketch below.
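  • One possible Python sketch of such a greedy assignment follows. The pairing heuristic (reds in increasing order, greens in decreasing order against them, blues set against the running red-green luminance) is an assumption consistent with the description above, not the patent's definitive algorithm, and the function names are illustrative:

      def luminance(r, g, b):
          """Perceived luminance Y = 0.30 R + 0.60 G + 0.10 B."""
          return 0.30 * r + 0.60 * g + 0.10 * b

      def greedy_assign(reds, greens, blues):
          """Distribute four R, G, and B values over four cells so that the
          per-cell luminance values are as close together as possible."""
          reds = sorted(reds)                    # assign reds in increasing order
          greens = sorted(greens, reverse=True)  # highest red gets lowest green
          cells = list(zip(reds, greens))
          # Give the lowest blue to the cell with the highest red-green
          # partial luminance, the next lowest to the next highest, and so on.
          partial = [0.30 * r + 0.60 * g for r, g in cells]
          order = sorted(range(4), key=lambda i: partial[i], reverse=True)
          blues = sorted(blues)
          result = [None] * 4
          for rank, i in enumerate(order):
              r, g = cells[i]
              result[i] = (r, g, blues[rank])
          return result

      cells = greedy_assign([57, 57, 57, 58], [32, 33, 33, 33], [47, 47, 48, 48])
      ys = [luminance(*c) for c in cells]
      print(max(ys) - min(ys))  # small residual spread, roughly 0.3 for this example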
  • The values of the reduced-amplitude luminance matrix 136 may then be used to display an improved and more visually pleasing displayed image 74. Additionally, the reduced-amplitude luminance matrix 136 may be temporally dithered in order to further improve the visual perception of the displayed image 74.
  • FIGS. 11-14 depict an embodiment of the use of temporal dithering to improve the visual perception of the reduced-amplitude luminance matrix 136. Turning first to FIG. 11, the figure depicts the matrix 136 at time T0. As described above, the values for R1, R2, R3, R4, G1, G2, G3, G4, B1, B2, B3, and B4 may have been distributed so as to result in a more homogenous luminance for the matrix 136. Temporal dithering of the matrix 136 may result in a further improvement in the perceived visual aspects of the image.
  • FIG. 12 illustrates a temporal dithering of the cells at time T1. A resulting temporally dithered matrix 146 depicts a clockwise temporal dithering of the cells of the matrix 136. That is, the R1, G1, and B1 values have been temporally shifted in a clockwise direction to the cell that was previously storing the R2, G2, and B2 values. Likewise, the R2, G2, and B2 values have been temporally shifted to the cell that used to store the R4, G4, and B4 values; the R4, G4, and B4 values have been temporally shifted to the cell that used to store the R3, G3, and B3 values; and the R3, G3, and B3 values have been temporally shifted to the cell that used to store the R1, G1, and B1 values.
  • FIG. 13 depicts a similar clockwise temporal dithering of the matrix 146 at time T2, resulting in a temporally dithered matrix 148. Likewise, FIG. 14 is illustrative of the clockwise temporal dithering of the matrix 148, resulting in a temporally dithered matrix 150. The temporal dithering embodiment depicted in FIGS. 11-14, and sketched in code below, is but one of any number of temporal dithering embodiments that may be utilized to improve the visual perception of the displayed image 74.
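  • A minimal Python sketch of this clockwise cell rotation, assuming the matrix 136 is held as a 2×2 grid of (R, G, B) triples; the function name and the example cell values are illustrative:

      def rotate_cells_clockwise(matrix):
          """Shift the four cells of a 2x2 matrix one step clockwise:
          top-left -> top-right -> bottom-right -> bottom-left -> top-left."""
          (c1, c2), (c3, c4) = matrix
          return [[c3, c1],
                  [c4, c2]]

      # Frames at times T0 through T3 (FIGS. 11-14); a fifth step returns to T0.
      frame = [[(57, 33, 47), (57, 33, 47)],
               [(57, 33, 48), (58, 32, 48)]]
      for t in range(4):
          print("T%d:" % t, frame)
          frame = rotate_cells_clockwise(frame)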
  • Turning to FIG. 15, the figure depicts another example of the transformation of an example source image RGB matrix 130 into a reduced-amplitude luminance matrix 136. In this example, 10-bit source image values may be used to derive 8-bit hardware values suitable for display by the display 28. Indeed, any number of other conversions of higher pixel depths to lower pixel depths may be possible. For example, the techniques described herein may be used to convert 9-bit to 6-bit, 10-bit to 6-bit, 12-bit to 6-bit, 9-bit to 8-bit, 12-bit to 8-bit, and so forth. The higher pixel depth values (e.g., 10-bit values) of the original image may be converted into lower pixel depth values (e.g., 8-bit values) by various techniques, including using the first eight bits of the 10-bit value.
  • Example 10-bit values for Rs, Gs, and Bs are shown in table 142 (e.g., “935”, “606”, and “366”). In the depicted example, the 10-bit value “935” may be approximated by the 8-bit values “233”, “234”, “234”, and “234”; the 10-bit value “606” may be approximated by the 8-bit values “151”, “151”, “152”, and “152”; and the 10-bit value “366” may be approximated by the 8-bit values “92”, “92”, “91”, and “91”. The reduced-amplitude luminance matrix 136 may then be arrived at by color-shifting or spatially distributing the 8-bit values so as to reduce the overall perceived luminance difference of the reduced-amplitude luminance matrix 136.
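  • Because a 10-bit to 8-bit conversion is again a division by four (i.e., 2²), the split_to_lower_depth sketch shown earlier in this section applies unchanged to the FIG. 15 values:

      print(split_to_lower_depth(935))  # [233, 234, 234, 234]
      print(split_to_lower_depth(606))  # [151, 151, 152, 152]
      print(split_to_lower_depth(366))  # [91, 91, 92, 92]; table 142 lists the
                                        # same values in a different order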
  • The lower pixel depth values may then be re-assigned as depicted in table 144 so as to reduce the luminance difference between the cell having the highest luminance and the cell having the lowest luminance. As before, the four green values may be assigned to minimize the red-green luminance difference between the four cells. That is, the luminance difference of the cells may be minimized by balancing the assignment of a high red value in one cell with the assignment of a high green value in another cell so as to more evenly spread the high value assignments. In the depicted example, the highest red values are stored in R1, R2, and R3; therefore, G1 and G2 may get the two lowest green values (e.g., “151”, “151”). The blue color values may then be similarly assigned so that the resulting luminance difference of the reduced-amplitude luminance matrix 136 is lowered or minimized. For example, the blue value “91” may be assigned to the two cells of the matrix 136 containing the highest green values (e.g., the third and fourth cells) to counterbalance the assignment of the blue value “92” to the first two cells of the matrix 136. The resulting displayed image 74 may be perceived as having an improved visual quality.

Abstract

Systems and methods are disclosed to enable the creation and the display of spatio-temporal dithered images. Embodiments include techniques that use color-shifting and luminance. In one embodiment, adjacent pixels are color-shifted with respect to each other and the color values of the adjacent pixels are temporally alternated with color values of other pixels in the group. In another embodiment, the luminance of a group of adjacent pixels is determined and the luminance of the group is made more homogenous spatially and temporally by distributing color variations over a larger number of pixels so as to reduce the luminance difference between the pixel with the least luminance and the pixel with the greatest luminance. Individual color components (e.g., red, green, blue) may also be separated and used so that the color-shifts associated with each color component may be simultaneously present in different pixels.

Description

    BACKGROUND
  • The present disclosure relates generally to techniques for dithering images using a luminance approach.
  • This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
  • In recent years, electronic display devices have become increasingly popular due, at least in part, to such devices becoming more and more affordable for the average consumer. Further, in addition to a number of electronic display devices currently available for desktop monitors and notebook computers, it is not uncommon for digital display devices to be integrated as part of another electronic device, such as a cellular phone, a tablet computing device, or a portable media player.
  • Electronic displays are typically configured to output a set number of colors within a color range. In certain cases, a graphical image to be displayed may have a number of colors greater than the number of colors that are capable of being shown by the electronic display. For example, a graphical image may be encoded with a 24-bit color depth (e.g., 8 bits for each of red, green, and blue components of the image), while an electronic display may be configured to provide output images at an 18-bit color depth (e.g., 6 bits for each of red, green, and blue components of the image). Rather than simply discarding least-significant bits, dithering techniques may be used to output a graphical image that appears to be a closer approximation of the original color image. However, the dithering techniques may not approximate the original image as closely as desired.
  • SUMMARY
  • A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
  • The present disclosure generally relates to dithering techniques that may be used to display color images on an electronic display. The electronic display may include one or more electronic components, including a power source, pixelation hardware (e.g., light emitting diodes, liquid crystal display), and circuitry for receiving signals representative of image data to be displayed. In certain embodiments, a processor may be internal to the display while in other embodiments the processor may be external to the display and included as part of an electronic device, such as a computer workstation or a cell phone.
  • The processor may use dithering techniques, including spatial and temporal dithering techniques disclosed herein, to output color images on the electronic display. In one embodiment, adjacent pixels are color-shifted with respect to each other and the color values of certain pixels are temporally alternated with color values of other pixels in the group. In another embodiment, the luminance of a group of adjacent pixels is determined and the luminance of the group is made more homogenous spatially and temporally by distributing color variations over a larger number of pixels so as to reduce the luminance difference between the pixel with the least luminance and the pixel with the greatest luminance. Individual color components (e.g., red, green, blue) may also be separated and used so that the color-shifts associated with each color component may be simultaneously present in different pixels.
  • Various refinements of the features noted above may exist in relation to various aspects of the present disclosure. Further features may also be incorporated in these various aspects as well. These refinements and additional features may exist individually or in any combination. For instance, various features discussed below in relation to one or more of the illustrated embodiments may be incorporated into any of the above-described aspects of the present disclosure alone or in any combination. Again, the brief summary presented above is intended only to familiarize the reader with certain aspects and contexts of embodiments of the present disclosure without limitation to the claimed subject matter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
  • FIG. 1 is a simplified block diagram depicting components of an example of an electronic device that includes image processing circuitry configured to implement one or more of the image processing techniques set forth in the present disclosure;
  • FIG. 2 is a front view of the electronic device of FIG. 1 in the form of a desktop computing device, in accordance with aspects of the present disclosure;
  • FIG. 3 is a front view of the electronic device of FIG. 1 in the form of a handheld portable electronic device, in accordance with aspects of the present disclosure;
  • FIG. 4 shows a graphical representation of an M×N pixel array that may be included in the device of FIG. 1, in accordance with aspects of the present disclosure;
  • FIG. 5 is a block diagram illustrating an image signal processing (ISP) logic, in accordance with aspects of the present disclosure;
  • FIG. 6 is a logic diagram illustrating an operation of the device of FIG. 1, in accordance with aspects of the present disclosure;
  • FIG. 7 is a block diagram generally representative of certain aspects of the logic of FIG. 6, in accordance with aspects of the present disclosure;
  • FIG. 8 is a block diagram illustrating the use of temporal dithering, in accordance with aspects of the present disclosure;
  • FIG. 9 is a second logic diagram illustrating an operation of the device of FIG. 1, in accordance with aspects of the present disclosure;
  • FIG. 10 is a block diagram generally representative of certain aspects of FIG. 9, in accordance with aspects of the present disclosure;
  • FIGS. 11-14 are generally illustrative of an example of temporal dithering, in accordance with one embodiment of the present disclosure; and
  • FIG. 15 is a second block diagram generally representative of certain aspects of FIG. 9, in accordance with aspects of the present disclosure.
  • DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
  • One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
  • As will be discussed below, the present disclosure relates generally to techniques for processing and displaying image data on an electronic display device. In particular, certain aspects of the present disclosure may relate to techniques for processing images using temporal and spatial dithering techniques. Further, it should be understood that the presently disclosed techniques may be applied to both still images and moving images (e.g., video), and may be utilized in any suitable type of electronic display, such as a cell phone, a desktop computer monitor, a tablet computing device, an e-book reader, a television, and so forth.
  • With the foregoing in mind, it may be beneficial to first discuss embodiments of certain display systems that may incorporate the dithering techniques as described herein. With this in mind, and turning now to the figures, FIG. 1 is a block diagram illustrating an example of an electronic device 10 that may provide for the processing of image data using one or more of the image processing techniques mentioned above. The electronic device 10 may be any type of electronic device, such as a laptop or desktop computer, a mobile phone, a digital media player, a television, or the like, that is configured to process and display image data. By way of example only, the electronic device 10 may be a portable electronic device, such as a model of an iPad®, iPod® or iPhone®, available from Apple Inc. of Cupertino, Calif. Additionally, the electronic device 10 may be a desktop or laptop computer, such as a model of a MacBook®, MacBook® Pro, MacBook Air®, iMac®, Mac® Mini, or Mac Pro®, available from Apple Inc.
  • Regardless of its form (e.g., portable or non-portable), it should be understood that the electronic device 10 may provide for the processing of image data using one or more of the image processing techniques briefly discussed above, which may include spatial and/or temporal dithering techniques, among others. In some embodiments, the electronic device 10 may apply such image processing techniques to image data stored in a memory of the electronic device 10. Embodiments showing both portable and non-portable embodiments of the electronic device 10 will be further discussed below with respect to FIGS. 2 and 3.
  • As shown in FIG. 1, the electronic device 10 may include various internal and/or external components which contribute to the function of the device 10. The various functional blocks shown in FIG. 1 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium) or a combination of both hardware and software elements. For example, in the presently illustrated embodiment, the electronic device 10 may include input/output (I/O) ports 12, input structures 14, one or more processors 16, memory device 18, non-volatile storage 20, expansion card(s) 22, networking device 24, power source 26, and display 28. Additionally, the electronic device 10 may include one or more imaging devices 30, such as a digital camera, and image processing circuitry 32. As will be discussed further below, the image processing circuitry 32 may be configured to implement one or more of the above-discussed image processing techniques. As can be appreciated, image data processed by image processing circuitry 32 may be retrieved from the memory 18 and/or the non-volatile storage device(s) 20, or may be acquired using the imaging device 30.
  • It should be understood that the system block diagram of the device 10 shown in FIG. 1 is intended to be a high-level control diagram depicting various components that may be included in such a device 10. Indeed, as discussed below, the depicted processor(s) 16 may, in some embodiments, include multiple processors, such as a main processor (e.g., CPU), and dedicated image and/or video processors. In such embodiments, the processing of image data may be primarily handled by these dedicated processors, thus effectively offloading such tasks from a main processor (CPU).
  • The input structures 14 may provide user input or feedback to the processor(s) 16. For instance, input structures 14 may be configured to control one or more functions of electronic device 10, such as applications running on electronic device 10. In addition to processing various input signals received via the input structure(s) 14, the processor(s) 16 may control the general operation of the device 10. For instance, the processor(s) 16 may provide the processing capability to execute an operating system, programs, user and application interfaces, and any other functions of the electronic device 10.
  • The processor(s) 16 may include one or more microprocessors, such as one or more “general-purpose” microprocessors, one or more special-purpose microprocessors and/or application-specific integrated circuits (ASICs), or a combination of such processing components. For example, the processor(s) 16 may include one or more reduced instruction set (e.g., RISC) processors, as well as graphics processors (GPUs), video processors, audio processors, and/or related chip sets. As will be appreciated, the processor(s) 16 may be coupled to one or more data buses for transferring data and instructions between various components of the device 10. In certain embodiments, the processor(s) 16 may provide the processing capability to execute source code embodiments capable of employing the dithering techniques described herein.
  • The instructions or data to be processed by the processor(s) 16 may be stored in a computer-readable medium, such as a memory device 18. The memory device 18 may be provided as a volatile memory, such as random access memory (RAM) or as a non-volatile memory, such as read-only memory (ROM), or as a combination of one or more RAM and ROM devices. In addition, the memory 18 may be used for buffering or caching during operation of the electronic device 10. For instance, in one embodiment, the memory 18 includes one or more frame buffers for buffering video data as it is being output to the display 28.
  • In addition to the memory device 18, the electronic device 10 may further include a non-volatile storage 20 for persistent storage of data and/or instructions. The non-volatile storage 20 may include flash memory, a hard drive, or any other optical, magnetic, and/or solid-state storage media, or some combination thereof. In accordance with aspects of the present disclosure, image processing data stored in the non-volatile storage 20 and/or the memory device 18 may be processed by the image processing circuitry 32 prior to being output on a display.
  • The embodiment illustrated in FIG. 1 may also include one or more card or expansion slots. The card slots may be configured to receive an expansion card 22 that may be used to add functionality, such as additional memory, I/O functionality, networking capability, or graphics processing capability to the electronic device 10. The electronic device 10 also includes the network device 24, which may be a network controller or a network interface card (NIC) that may provide for network connectivity over a wireless 802.11 standard or any other suitable networking standard, such as a local area network (LAN) or a wide area network (WAN).
  • The power source 26 of the device 10 may include the capability to power the device 10 in both non-portable and portable settings. The display 28 may be used to display various images generated by device 10, such as a GUI for an operating system, or image data (including still images and video data) processed by the image processing circuitry 32, as will be discussed further below. As mentioned above, the image data may include image data acquired using the imaging device 30 or image data retrieved from the memory 18 and/or non-volatile storage 20. The display 28 may be any suitable type of display, such as a liquid crystal display (LCD), plasma display, or an organic light emitting diode (OLED) display, for example. Additionally, as discussed above, the display 28 may be provided in conjunction with the above-discussed touch-sensitive mechanism (e.g., a touch screen) that may function as part of a control interface for the electronic device 10. The illustrated imaging device(s) 30 may be provided as a digital camera configured to acquire both still images and moving images (e.g., video).
  • The image processing circuitry 32 may provide for various image processing steps, such as spatial dithering, temporal dithering, pixel color-space conversion, luminance determination, luminance optimization, image scaling operations, and so forth. In some embodiments, the image processing circuitry 32 may include various subcomponents and/or discrete units of logic that collectively form an image processing “pipeline” for performing each of the various image processing steps. These subcomponents may be implemented using hardware (e.g., digital signal processors or ASICs) or software, or via a combination of hardware and software components. The various image processing operations that may be provided by the image processing circuitry 32 and, particularly those processing operations relating to spatial dithering, temporal dithering, pixel color-space conversion, luminance determination, and luminance optimization, will be discussed in greater detail below.
  • Referring again to the electronic device 10, FIGS. 2 and 3 illustrate various forms that the electronic device 10 may take. As mentioned above, the electronic device 10 may take the form of a computer, including computers that are generally portable (such as laptop, notebook, and tablet computers) as well as computers that are generally non-portable (such as desktop computers, workstations and/or servers), or other type of electronic device, such as handheld portable electronic devices (e.g., a digital media player or mobile phone). In particular, FIGS. 2 and 3 depict the electronic device 10 in the form of a desktop computer 34 and a handheld portable electronic device 36, respectively.
  • FIG. 2 further illustrates an embodiment in which the electronic device 10 is provided as the desktop computer 34. As shown, the desktop computer 34 may be housed in an enclosure 38 that includes a display 28, as well as various other components discussed above with regard to the block diagram shown in FIG. 1. Further, the desktop computer 34 may include an external keyboard and mouse (input structures 14) that may be coupled to the computer 34 via one or more I/O ports 12 (e.g., USB) or may communicate with the computer 34 wirelessly (e.g., RF, Bluetooth, etc.). The desktop computer 34 also includes an imaging device 40, which may be an integrated or external camera, as discussed above. In certain embodiments, the depicted desktop computer 34 may be a model of an iMac®, Mac® mini, or Mac Pro®, available from Apple Inc.
  • As further shown, the display 28 may be configured to generate various images that may be viewed by a user, such as a dithered image 42. The dithered image 42 may have been generated by using, for example, the spatial and temporal dithering techniques described in more detail below. During operation of the computer 34, the display 28 may display a graphical user interface (“GUI”) 44 that allows the user to interact with an operating system and/or application running on the computer 34.
  • Turning to FIG. 3, the electronic device 10 is further illustrated in the form of portable handheld electronic device 36, which may be a model of an iPod® or iPhone® available from Apple Inc. The handheld device 36 includes various user input structures 14 through which a user may interface with the handheld device 36. For instance, each input structure 14 may be configured to control one or more respective device functions when pressed or actuated. By way of example, one or more of the input structures 14 may be configured to invoke a “home” screen or menu to be displayed, to toggle between a sleep, wake, or powered on/off mode, to silence a ringer for a cellular phone application, to increase or decrease a volume output, and so forth. It should be understood that the illustrated input structures 14 are merely exemplary, and that the handheld device 36 may include any number of suitable user input structures existing in various forms including buttons, switches, keys, knobs, scroll wheels, and so forth. In the depicted embodiment, the handheld device 36 includes the display device 28. The display device 28, which may be an LCD, OLED, or any suitable type of display, may display various images generated by the techniques disclosed herein. For example, the display 28 may display the dithered image 42.
  • Having provided some context with regard to various forms that the electronic device 10 may take and now turning to FIG. 4, the present discussion will focus on details of the display device 28 and on the image processing circuitry 32. As mentioned above, the display device 28 may be any suitable type of display, such as a liquid crystal display (LCD), plasma display, a digital light processing (DLP) projector, an organic light emitting diode (OLED) display, and so forth. The display 28 may include a matrix of pixel elements such as an example M×N matrix 48 depicted in FIG. 4. Accordingly, the display 28 is capable of presenting an image at a natural display resolution of M×N. For example, in embodiments where the display 28 is included in a 30-inch Apple Cinema HD Display®, the natural display resolution may be approximately 2560×1600 pixels.
  • A pixel matrix 50 is depicted in greater detail and includes four adjacent pixels 52, 54, 56, and 58. In the depicted embodiment, each pixel of the display device 28 may include three sub-pixels capable of displaying a red (R), a green (G), and a blue (B) color. The human eye is capable of perceiving the particular RGB color combination displayed by the pixel and translating the combination into a specific color. A number of colors may be displayed by each individual pixel by varying the individual RGB intensity levels of the pixel. For example, a pixel having a level of 50% R, 50% G, and 50% B may be perceived as the color gray, while a pixel having a level of 100% R, 100% G, and 0% B may be perceived as the color yellow.
  • The number of colors that a pixel is capable of displaying is dependent on the hardware capabilities of the display 28. For example, a display 28 with a 6-bit color depth for each sub-pixel is capable of producing 64 (2⁶) intensity levels for each of the R, G, and B color components. The number of bits per sub-pixel, e.g., 6 bits, is referred to as the pixel depth. At a pixel depth of 6 bits, 262,144 (2⁶×2⁶×2⁶) color combinations are possible, while at a pixel depth of 8 bits, 16,777,216 (2⁸×2⁸×2⁸) color combinations are possible. Although the visual quality of images produced by an 8-bit pixel depth display 28 may be superior to the visual quality of images produced by a display 28 using a 6-bit pixel depth, the cost of the 8-bit display 28 is also higher. Accordingly, it would be beneficial to apply image processing techniques, such as the techniques described herein, that are capable of displaying a source image with improved visual reproduction even when utilizing lower pixel depth displays 28. Further, a source image may contain more colors than those supported by the display 28, even displays 28 having higher pixel depths. Accordingly, it would also be beneficial to apply image processing techniques that are capable of improved visual representation of any number of colors. Indeed, the image processing techniques described herein, such as those described in more detail with respect to FIG. 5 below, are capable of displaying improved visual reproductions at any number of pixel depths from any number of source images having a greater number of colors than that which can be output by display hardware.
  • Turning to FIG. 5, the figure depicts an embodiment of an image signal processing (ISP) pipeline logic 60 that may be utilized for processing and displaying a source image 62. The ISP logic 60 may be implemented using hardware and/or software components, such as the image processing circuitry 32 of FIG. 1. A source image 62 may be provided, for example, by placing an electronic representation of the source image 62 onto embodiments of the memory 18. In such an example, the source image 62 may be placed onto frame buffer embodiments of the memory 18. The source image 62 may include colors that are not directly supported by the hardware of the electronic device 10. For example, the source image 62 may be stored at a pixel depth of 8 bits while the hardware includes a 6-bit pixel depth display 28. Accordingly, the source image 62 may be manipulated by the techniques disclosed herein so that it may be displayed in a lower pixel depth display 28.
  • The source image 62 may first undergo color decomposition (block 64). The color decomposition (block 64) is capable of decomposing the color of each pixel of the source image 62 into the three RGB color levels. That is, the RGB intensity levels for each pixel may be determined by the color decomposition (block 64). Such a decomposition may be referred to as a three-channel decomposition, because the colors may be decomposed into a red channel, a green channel, and a blue channel, for example.
  • In the depicted embodiment, the source image 62 may also undergo a luminance analysis (block 66). Luminance is related to the perceived brightness of an image or an image component (such as a pixel) to the human eye. Further, humans typically perceive colors as having different luminance even if each color has equal radiance. For example, at equal radiances, humans typically perceive the color green as having a higher luminance than the color red. Additionally, the color red is perceived as having a higher luminance than the color blue. In one example, a luminance formula Y may be arrived at by incorporating observations based on the perception of luminance by humans, as defined below.

  • Y = 0.30 R + 0.60 G + 0.10 B
  • Indeed, the luminance equation Y above is an additive formula based on 30% red, 60% green, and 10% blue chromaticities (e.g., color values). The luminance formula Y can thus use the RGB color levels of a pixel to determine an approximate human perception of the luminance of the pixel. It is to be understood that because of the variability of human perception, among other factors, the values used in the luminance equation are approximate. Indeed, in other embodiments, the percentage values for R, G, and B may be different. For example, in another embodiment, the values may be approximately 29.9% red, 58.7% green, and 11.4% blue.
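  • As a quick worked example (a sketch, not part of the patent, using the 8-bit values that appear later in table 142 of FIG. 10): for an RGB pixel of (229, 131, 190), Y = 0.30·229 + 0.60·131 + 0.10·190 = 68.7 + 78.6 + 19.0 = 166.3. In Python:

      def luminance(r, g, b):
          """Approximate perceived luminance: Y = 0.30 R + 0.60 G + 0.10 B."""
          return 0.30 * r + 0.60 * g + 0.10 * b

      print(luminance(229, 131, 190))  # approximately 166.3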
  • The luminance value of each pixel may then be utilized for spatial dithering (block 68). In spatial dithering, the image may be manipulated so as to increase the “noise” of the image, decrease color banding, and make sharp edges of the image less detectable. Spatial dithering may therefore improve the image perception and quality. In certain spatial dithering embodiments, the pixels from the source image 62 may first be converted to a lower pixel depth, for example, through a most significant bit (MSB) and a least significant bit (LSB) process, as described in more detail below with respect to FIG. 6.
  • Multiple dither patterns 70 may also be used during spatial dithering so as to enable a displayed image 74 to more closely approximate the source image 62. In one embodiment, two sets of dither patterns 70 and 70′ may be stored in memory. In this embodiment, the set of dither patterns 70 may be used with a color channel such as green, and the set 70′ may be used with color channels red and blue. In another embodiment, the dither patterns 70 (and 70′) may be dynamically calculated based on the luminance analysis (block 66) and not stored in memory. In yet another embodiment, the dither patterns 70 corresponding to a single color channel, such as green, may be stored in memory. In this embodiment the dither patterns 70′ may be derived based on the stored dither patterns 70. The dither patterns 70 and 70′ are described in more detail below with respect to FIG. 7. In certain embodiments, a group (e.g., matrix) of pixels is made more homogenous by distributing certain RGB values of the pixels throughout the pixel group so as to more evenly distribute the luminance of the pixel matrix. Indeed, spatial dithering (block 68) may be capable of spatially distributing the colors and luminance of the source image 62 so as to enable the display of the source image 62 at a lower pixel depth while significantly preserving the perceived image quality.
  • Additionally, the ISP logic 60 may be capable of utilizing temporal dithering (block 72). In temporal dithering (block 72), the colors and/or luminosity of pixels may be alternated frame-by-frame so as to improve the perceived image quality of the displayed image 74. That is, a first frame of the processed image may be presented at time T0, followed by a second frame of the processed image which may be presented at time T1. The second frame may have color and/or luminance variations from the first frame. Likewise, a third frame of the processed image may be presented at time T2 having color and/or luminance values that differ from the second frame. In certain embodiments, additional frames may then be presented also having color and/or luminance values that differ from the third frame. Additionally, the temporal dithering (block 72) may iteratively loop through the frame presentations. That is, after presenting a certain n-th frame at time Tn, the first frame may then be presented again, followed by the second frame, and so on, up to the n-th frame and then returning to the first frame.
  • Humans may perceive multiple frames presented sequentially one after the other as a single image. Indeed, in some embodiments, 60, 120, 240, or more frames per second (FPS) may be presented sequentially. By alternating the color and/or the luminance of each frame and by presenting the frames sequentially, it is possible to enable a single perceived image that is more natural and pleasing to the human eye. Accordingly, the dithering techniques described herein, such as the MSB-LSB based technique described in more detail with respect to FIG. 6, may allow for the visually pleasing presentation of the displayed image 74 having a lower pixel depth than the source image 62.
  • FIG. 6 is illustrative of an embodiment of a logic 76 capable of utilizing MSB-LSB techniques to spatially and temporally dither the source image 62. That is, the logic 76 is capable of transforming the image 62 having a higher pixel depth into the displayed image 74 having a lower pixel depth. Accordingly, the logic 76 may include non-transitory machine readable code or computer instructions that may be used by a processor, for example, to transform image data. The source image 62 may first be decomposed (block 78) into three R, G, B, channels 80. In one embodiment, three M×N matrices may be created based on the M×N source image resolution, each matrix corresponding to one of the three color channels 80. Accordingly, the values included in each cell of a red channel matrix (R) correspond to the red intensity values of each pixel, the values of each cell in a green channel matrix (G) correspond to the green intensity values of each pixel, and the values of each cell in a blue channel matrix (B) correspond to the blue intensity values of each pixel.
  • Each R, G, B color channel matrix 80 may then be subdivided (block 82) into multiple source image groups (e.g., matrices) 84 corresponding to different areas of the image. In one example, a group 84 is sized as a 4×4 pixel group having a total of 16 pixels. Accordingly, the subdivision (block 82) of the source image 62 may be accomplished by selecting multiple 4×4 adjacent pixel groups 84 so as to partition the entire image into the 4×4 pixel groups 84. Each 4×4 pixel group 84 may then be used to create (block 86) a corresponding LSB group 88 and MSB group 90, as shown in more detail with respect to FIG. 7. The LSB group 88 and the MSB group 90 may be created by dividing the pixel depth information of each pixel into two values, an LSB value and an MSB value. The LSB values of all the pixels in the group may then be used to create the LSB group 88. Likewise, the MSB values of all the pixels in the group may then be used to create the MSB group 90.
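  • A small Python sketch of this subdivision step (block 82), assuming each channel is a plain list-of-lists matrix whose dimensions are multiples of four; the helper name is hypothetical:

      def subdivide(channel, size=4):
          """Partition an M x N color channel matrix into adjacent
          size x size source image groups."""
          m, n = len(channel), len(channel[0])
          return [[row[c:c + size] for row in channel[r:r + size]]
                  for r in range(0, m, size)
                  for c in range(0, n, size)]

      # A 4 x 8 channel yields two adjacent 4 x 4 groups.
      channel = [[x + 8 * y for x in range(8)] for y in range(4)]
      print(len(subdivide(channel)))  # 2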
  • To arrive at the MSB and LSB values, the pixel's color value may be provided in, or converted to, a binary value. The binary value may then be divided into two binary values, the LSB value and the MSB value. The most significant bits equal to the pixel depth (e.g., 6 bits) of the display device 28 are selected as the MSB value and the remainder bits are selected as the LSB value. As an example, suppose that the original image is stored at a 9-bit pixel depth and the display 28 is a 6-bit pixel depth display. If the original pixel color channel has a decimal color value of forty-four, the resulting binary number is “000101100”. The six most significant bits are “000101”, which corresponds to the decimal number five. Accordingly the MSB value becomes equal to the number five. The remainder three binary bits of “100” correspond to the decimal number four. Accordingly, the LSB value becomes equal to four. A dither pattern 70 may then be selected and used to create a modification matrix 94 (block 92).
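  • A minimal Python sketch of this MSB/LSB split, assuming a 9-bit source and a 6-bit display as in the example; the function name is illustrative:

      def split_msb_lsb(value, src_bits=9, dst_bits=6):
          """Split a source color value into an MSB value sized to the
          display's pixel depth and an LSB value of the remaining bits."""
          lsb_bits = src_bits - dst_bits
          msb = value >> lsb_bits              # top dst_bits bits
          lsb = value & ((1 << lsb_bits) - 1)  # bottom lsb_bits bits
          return msb, lsb

      print(split_msb_lsb(44))  # (5, 4): "000101" and "100" from "000101100"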
  • In one embodiment, one of the dither patterns 70 may be selected based on the LSB value or magnitude and used to create (block 92) the modification matrix 94. Indeed, the values of the LSB group 88 may be used to define the modification matrix's 94 values, resulting in the modification matrix 94 having ones and zeros. Examples of the use of the LSB group 88 to create the modification matrix 94 based on the dither patterns 70 are described in more detail with respect to FIG. 7 below.
  • The modification matrix 94 may then be mathematically added (i.e., through matrix addition) to the MSB group 90 (block 96) to create a new lower pixel depth (e.g., 6-bit) MSB matrix 98. The resulting lower pixel depth MSB matrix 98 is thus capable of being displayed by the display 28. Indeed, multiple new MSB matrices 98 may be derived corresponding to all the pixel groups of the source image 62. The multiple new MSB matrices 98 may then be displayed as the displayed image 74.
  • FIG. 7 depicts an example set of source image pixel group 84, LSB group 88, MSB group 90, dither patterns 70, and new MSB matrix 98 having values illustrative of the transformation of an individual color channel (e.g., R, G, or B) of a source image 62 into a displayed image 74 color channel by using logic 76 as described above. A source image group 84 may contain four values (e.g., 9-bit values), A, B, C, D, in a first row corresponding to an individual color channel (e.g., R, G, or B). For illustration purposes, we can assign some example decimal values as follows: A=“141”, B=“411”, C=“44”, and D=“480”. It is to be understood that any numeric value may be assigned to A, B, C, or D, and that other rows of the source image group 84 may include additional values. The 9-bit binary representation of values then becomes as follows: A=“010001101”, B=“110011011”, C=“000101100”, and D=“111100000”.
  • The most significant bits (e.g., six bits) of the source image values A, B, C, and D, may then be used to derive the values M1=“010001”, M2=“110011”, M3=“000101”, M4=“111100”, of a first row of the MSB group 90. The number of most significant bits may be based on the pixel depth of the display 28. For instance, six bits may be selected as the most significant bits if the display 28 is capable of a 6-bit pixel depth. Should the display be capable of, for example, only a 4-bit pixel depth, then the first four bits of the source image values could be used. For the 6-bit pixel depth example, the decimal values for the 6-bit binary values are M1=“17”, M2=“51”, M3=“5”, M4=“60”.
  • The remainder three bits of the source image values A, B, C, D may then be used to derive the binary values L1=“101”, L2=“011”, L3=“100”, L4=“000” of a first row of the LSB group 88. The decimal values equivalent to the 3-bit binary values are L1=“5”, L2=“3”, L3=“4”, L4=“0”. One of the dither patterns 70 (e.g., individual dither patterns 102, 104, 106, 108, 110, 112, 114, and 116) may then be selected and used to create the modification matrix 94 based on the LSB group 88. In the depicted example, the dither pattern 110 is selected. In certain embodiments, a dither pattern, such as dither pattern 110, is selected based on the LSB group 88 as described in more detail below. Once selected, the dither pattern 110 and the LSB group 88 may be used to create the modification matrix 94.
  • In one embodiment, the value (i.e., magnitude) of each cell of the LSB group 88, such as cells L1, L2, L3, and L4, is used to select one of the dither patterns 70. Because the values of the 3-bit LSB cells may vary from the decimal value “0” to the decimal value “7”, there are eight possible values. Accordingly, eight dither patterns 70 are provided when using the 3-bit LSB group 88. It is to be understood that when the LSB group 88 stores more (or fewer) binary bits, then more (or fewer) dither patterns 70 may be provided. For example, when using a 2-bit LSB group 88, there may be four (i.e., 2²) dither patterns 70 provided. Likewise, when using a 4-bit LSB group 88, sixteen (i.e., 2⁴) dither patterns 70 may be provided.
  • The magnitude or value of the 3-bit binary number stored in each cell of the LSB group 88 may then be used to select one of the eight illustrated dither patterns 70. For example, cell L4 of the LSB group 88 may have the value “0”, which corresponds to the first of eight possible values “0” to “7”. Accordingly, the first dither pattern 102 of the eight dither patterns 70 may be selected. Similarly, the cell L3 contains the value “4”, which corresponds to the fifth of eight possible values “0” to “7”. Accordingly, the fifth dither pattern 110 may be selected. Likewise, the cell L2 contains the value “3”, which corresponds to the fourth dither pattern 108, and L1 contains the value “5”, which in turn corresponds to the sixth dither pattern 112. In this way, each cell in the first row of the LSB group 88 (i.e., L1, L2, L3, and L4) may map to one of the dither patterns 70. All other cells of the LSB group 88 may be mapped to one of the dither patterns 70 in a similar manner.
  • As mentioned above with respect to FIG. 5, in certain embodiments, two sets of dither patterns 70 and 70′ may be used. For example, the dither patterns 70 illustrated in FIG. 7 may be used with the green color channel. The set of dither patterns 70′ may then be used with the colors red and blue. This second set of dither patterns 70′ may be derived by shifting the ones and zeros of each of the illustrated dither patterns 104, 106, 108, 110, 112, 114, and 116 so as to more homogenously distribute luminance. For example, a dither pattern 104′ may be used with the colors red and/or blue, where the first value “1” found in the dither pattern 104 at position (1,1) may have been shifted to position (2,2). Likewise, the second value “1” found in the dither pattern 104 at position (3,3) may be shifted to position (4,4) in the dither pattern 104′. Such a phase shifting of the values from 104 to 104′ may enable a more homogenous distribution of the overall luminance because the green values (e.g., when using dither pattern 104) are counterbalanced with the red and blue values (e.g., when using dither pattern 104′).
  • Indeed, all dither patterns 104, 106, 108, 110, 112, 114, and 116 may be phase-shifted into dither patterns 104′, 106′, 108′, 110′, 112′, 114′, and 116′ so as to more homogeneously distribute the luminance. As mentioned above, the phase-shifting may be accomplished by shifting the “1” values to counterbalance the effect on luminance of the previous position of the “1” values. For example, dither pattern 108′ may be arrived at having a first row “0 1 0 0”, a second row “1 0 0 1”, a third row “0 0 0 1”, and a fourth row “0 1 1 0” by counterbalancing the effect of the “1” values of dither pattern 108. In yet another example, a dither pattern 116′ may be arrived at having a first row “1 0 1 1”, a second row “1 1 1 1”, a third row “1 1 1 0”, and a fourth row “1 1 1 1” by counterbalancing the effect of the “1” values of the dither pattern 116.
  • Once one of the dither patterns 70 (or 70′) is selected, then the LSB group 88 may again be used to select one of the cells in each of the selected dither patterns 70 (or 70′). To make such a cell selection, the position of each cell in the LSB group 88 is used to “point” to the same position in the selected dither pattern 70 (or 70′). In the depicted example, L3 may first be used to select the dither pattern 110 and then L3's cell position may be used to select one of the cells of the dither pattern 110. L3 is positioned in the first row, third column cell. Accordingly, the cell in the first row, third column of the dither pattern 110 may then be selected. The value in this first row, third column cell (i.e., “1”) of the dither pattern 110 may then be used to fill the cell at the same position (i.e., first row, third column) in the modification matrix 94. Likewise, the cells L1, L2, and L4 may be used. For example, L1 is in the first row, first column of the LSB group 88, so the first row, first column value of the dither pattern 112 (i.e., “1”) is copied to the first row, first column cell of the modification matrix 94. Similarly, L2 is in the first row, second column of the LSB group 88, so the first row, second column value of the dither pattern 108 (i.e., “0”) is copied to the first row, second column cell of the modification matrix 94. In a similar way, the value (i.e., “0”) of the cell in the first row, fourth column of the dither pattern 102 is copied into the first row, fourth column cell of the modification matrix 94. By using this methodology, all of the cells of the modification matrix 94 may be derived as having a zero or a one.
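  • The following Python sketch summarizes the two-step selection just described: each LSB cell's value picks one of the dither patterns 70, and the cell's own position picks the bit copied into the modification matrix 94. It assumes 4×4 groups and patterns indexed 0 through 7 by LSB magnitude; the names are illustrative:

      def build_modification_matrix(lsb_group, dither_patterns):
          """For each cell, select a dither pattern by the cell's LSB value,
          then copy the bit at the cell's own (row, col) position."""
          n = len(lsb_group)
          mod = [[0] * n for _ in range(n)]
          for row in range(n):
              for col in range(n):
                  # e.g., L3 = 4 at (0, 2) selects the fifth pattern (110)
                  # and copies that pattern's (0, 2) bit.
                  pattern = dither_patterns[lsb_group[row][col]]
                  mod[row][col] = pattern[row][col]
          return mod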
  • The MSB block 90 may then be added to the modification matrix 94 by using, for example, matrix addition. That is, every cell in the MSB block 90 may be added to the corresponding cell in the modification matrix 94. The result of the addition operation is a new MSB block 98. Using the numbers from the depicted example, the decimal values for the first row of the new MSB block 98 are A1=“17”+“1”=“18”, B1=“51”+“0”=“51”, C1=“5”+“1”=“6”, and D1=“60”+“0”=“60”. The remaining rows of the new MSB block 98 may then be similarly computed based on the values for the corresponding rows of the source image block 84. As mentioned above, the new MSB block 98 may include color values at a lower pixel depth than the source image block 84 suitable for display by the display 28. Indeed, the dithering techniques disclosed herein allow for the creation of multiple new MSB blocks 98 suitable for displaying the higher pixel depth (e.g., 9-bit) source image 62 at a lower pixel depth (e.g., 6-bit).
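  • A short Python sketch of the matrix addition (block 96), reusing the illustrative names above:

      def add_matrices(msb_group, mod_matrix):
          """Element-wise addition of the MSB group and the modification matrix."""
          return [[m + d for m, d in zip(mrow, drow)]
                  for mrow, drow in zip(msb_group, mod_matrix)]

      # First row of the example: [17, 51, 5, 60] + [1, 0, 1, 0] -> [18, 51, 6, 60]
      print(add_matrices([[17, 51, 5, 60]], [[1, 0, 1, 0]]))  # [[18, 51, 6, 60]]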
  • Turning to FIG. 8, the figure depicts an example of dither patterns 102, 110, 106, and 114, as they may be temporally dithered. Indeed, any of the dither patterns 70 may be temporally dithered in some embodiments, and such temporal dithering of the dither patterns 70 may be used in addition to the LSB-MSB techniques described above to further transform the source image 62. FIG. 8 depicts three rows, each row representing a temporal frame at times T0, T1, and T2. In the depicted temporal dithering embodiment, a first row shows an example of an initial condition (i.e., the positions of the zeros and ones) at time T0 for each of the dither patterns 102, 110, 106, and 114. Time T0 may correspond to the display of the first frame of the image, as mentioned above. Accordingly, the example depicted dither patterns 102, 110, 106, and 114 may be used to create the modification matrix 94 at time T0 using the methodology described above with respect to FIG. 7. The modification matrix may then be used to transform the source image 62 into a displayed image 74 at time T0.
  • The second row of the depicted example corresponds to time T1. As illustrated, the bits of the dither patterns at time T1 have been temporally shifted from their positions at time T0. In certain embodiments, the shift of the bits is accomplished by a clockwise rotation of the bits. In one example, each of the dither patterns may be divided into a top left quadrant 118, a top right quadrant 120, a bottom right quadrant 122, and a bottom left quadrant 124, each quadrant having four bits. In this example, each of the quadrants may have the bits rotated in a clockwise direction as depicted in FIG. 8. For example, the top row (e.g., top two bits) of the depicted quadrant 118 of the dither pattern 110 has shifted from storing the bits “1” and “0” at time T0 to storing the bits “0” and “1” at time T1. Additionally, the bottom row (e.g., bottom two bits) of the aforementioned quadrant 118 has shifted from storing the bits “0” and “1” at time T0 to storing the bits “1” and “0” at time T1. Accordingly, the example depicted dither patterns 102, 110, 106, and 114 at time T1 may be used to create the modification matrix 94 as described above. The modification matrix 94 may then be used to transform the source image 62 into a displayed image 74 at time T1.
  • A third row in FIG. 8 corresponding to time T2 may then be similarly created (e.g., by shifting of the bits in each quadrant) and used to display a frame of the image at time T2. In the depicted example, the top row of quadrant 118 of the dither pattern 110 has shifted from storing the bits “0” and “1” at time T1 to storing the bits “1” and “0” at time T2. Likewise, the bottom row of the quadrant 118 has shifted from storing the bits “1” and “0” at time T1 to storing the bits “0” and “1” at time T2. The other quadrants 120, 122, and 124 may be similarly shifted as the dither patterns 70 undergo temporal dithering. Such temporal dithering of the dither patterns may allow the resulting displayed image 74 to be perceived as having a higher visual quality because the human eye may perceive the multiple frames displayed sequentially in time as a single frame having an improved image quality.
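  • A Python sketch of the per-quadrant clockwise bit rotation described for FIG. 8, assuming 4×4 patterns split into four 2×2 quadrants; the helper name is hypothetical:

      def rotate_quadrants_clockwise(pattern):
          """Rotate the four bits of each 2x2 quadrant of a 4x4 dither
          pattern one step clockwise (T0 -> T1 -> T2 -> ...)."""
          out = [row[:] for row in pattern]
          for qr in (0, 2):      # quadrant row offsets
              for qc in (0, 2):  # quadrant column offsets
                  a, b = pattern[qr][qc], pattern[qr][qc + 1]
                  c, d = pattern[qr + 1][qc], pattern[qr + 1][qc + 1]
                  out[qr][qc], out[qr][qc + 1] = c, a
                  out[qr + 1][qc], out[qr + 1][qc + 1] = d, b
          return out

      # A quadrant holding bits [1, 0] over [0, 1] becomes [0, 1] over [1, 0],
      # matching the T0 -> T1 shift described for quadrant 118 of pattern 110.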
  • FIG. 9 is illustrative of a logic 126 capable of employing spatial, temporal, and/or luminance-based dithering techniques so as to enhance the visual quality of a lower pixel-depth image. The logic 126 may include non-transitory machine readable code or computer instructions that may be used by a processor, for example, to transform image data. As mentioned above with respect to the logic 76, the source image 62 may first be decomposed (block 78) into three R, G, B channels 80. That is, three M×N matrices may be created based on the M×N source image 62 resolution, each matrix corresponding to one of the three color channels 80 (e.g., red, green, blue). Accordingly, the values included in each cell of the red channel matrix (R) correspond to the red color values of each pixel, the values of each cell in the green channel matrix (G) correspond to the green color values of each pixel, and the values of each cell in the blue channel matrix (B) correspond to the blue color values of each pixel.
  • Each R, G, B color channel matrix 80 may then be used (block 82) to create multiple source image groups (e.g., matrices) 84 corresponding to different areas of the image, or to different pixels of the image, with each cell in the group having a red, green, and blue color component. In certain embodiments of the source group 84, the group is sized as a 2×2 pixel group having a total of 4 pixels. In one embodiment, the values for each of the pixels in the source group 84 may be derived from a single pixel of the source image. That is, the RGB values of a source image pixel may be copied into the 2×2 pixel group 84. In other embodiments, multiple 2×2 adjacent pixels of the source image may be copied into the 2×2 pixel group 84. Accordingly, the entire image may be divided either pixel by pixel or by selecting adjacent pixels. It is to be understood that, in other embodiments, other sizes of source image groups 84 may be used, for example, 4×4, 6×6, 8×8, and so forth.
  • A luminance value of each cell in the source image group 84 may then be determined (block 128), for example, through the use of the luminance formula Y described above. A matrix of source image RGB values may then be derived based on the color values of each cell in the source image group 84. The source image RGB matrix may include four cells where each cell includes three sub-cells, each sub-cell storing a value for each RGB channel. An example source image RGB matrix is shown in FIG. 10 below. The source image RGB matrix may then be used to derive a displayed image RGB matrix having a reduced luminance amplitude. That is, a higher pixel depth (e.g., 8-bit depth) RGB matrix may be converted into a lower pixel depth (e.g., 6-bit) RGB matrix suitable for display by the display device 28. During or after the conversion from the higher pixel depth RGB matrix to the lower pixel depth RGB matrix, the luminance values for the cells of the lower pixel depth RGB matrix may be used to determine a luminance difference (block 132) of that matrix. In one embodiment, the luminance difference may be calculated by using the highest and the lowest luminance values in the lower pixel depth RGB matrix to find the largest luminance difference in that matrix. In certain embodiments, the luminance difference or amplitude is minimized by color shifting the RGB values of each sub-cell of the lower pixel depth RGB matrix (block 134). In one embodiment, a set of rules may be used to more evenly distribute the luminance values of the RGB matrix, as described in more detail below. In other embodiments, other techniques, such as creating a reduced-amplitude luminance matrix 136 and then using the luminance values of the matrix to re-assign RGB values, may be used that result in the displayed image 74 having smaller differences between luminance values (i.e., reduced amplitude between values).
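  • A minimal Python sketch of the luminance-difference computation (block 132), using the perceived luminance formula Y given above; the function name and cell values are illustrative:

      def luminance_difference(cells):
          """Return the spread between the highest- and lowest-luminance
          cells, where each cell is an (R, G, B) triple."""
          ys = [0.30 * r + 0.60 * g + 0.10 * b for (r, g, b) in cells]
          return max(ys) - min(ys)

      print(luminance_difference([(57, 32, 47), (57, 33, 47),
                                  (57, 33, 48), (58, 33, 48)]))  # 1.0, up to
                                                                 # rounding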
  • In one example, the color shifting (block 134) reduces the overall luminance amplitude by dividing the luminance of each source image RGB channel into four lower pixel depth values. That is, a higher pixel depth value, such as an 8-bit value, may be divided into four lower pixel depth values, such as four 6-bit values. The overall luminance difference of the lower pixel depth matrix is reduced by reapportioning the red, green, and blue values of the four lower pixel depth RGB values so as to result in a reduced-amplitude luminance matrix 136 that has more homogenous luminance values. That is, the RGB color components of the cells in the reduced-amplitude luminance matrix are distributed spatially (e.g., moved from one cell to another cell) so as to reduce the luminance amplitude (e.g., the difference between the highest luminance and the lowest luminance) of the reduced-amplitude luminance matrix 136. An example of such a spatial distribution of values is described in more detail with respect to FIG. 10 below. The color shifting (block 134) thus results in the reduced-amplitude luminance matrix 136 that is capable of improving the perceived quality of a displayed image 74. Indeed, the reduced-amplitude luminance matrix 136 may minimize gradations between adjacent luminance and/or color levels so as to present a displayed image 74 that is more pleasing and natural to the human eye. Additionally, the reduced-amplitude luminance matrix 136 may undergo a temporal dithering (block 138) so as to further enhance the visual quality of the resulting displayed image 74. An example of temporal dithering is described in more detail below with respect to FIGS. 11-14.
  • Turning to FIG. 10, the figure depicts an example of a reapportioning (i.e., color-shifting) of the RGB values so as to visually improve luminance homogeneity, as previously described in relation to the logic 126 above. It may be useful to explain the logic 126 by using example numeric values. Accordingly, FIG. 10 illustrates example RGB values and describes how such example values may result in a reduced-amplitude luminance matrix 136. In the depicted example, the source image RGB matrix 130 includes four cells divided into three sub-cells, each sub-cell storing an R, G, or B value. As mentioned earlier, the RGB values may be arrived at by decomposing a pixel color into its RGB color components and storing such components in a source image group 84. The source image group 84 may then be used to create the source image matrix 130 having a higher pixel depth (e.g., 8-bit pixel depth) suitable for transformation into the reduced-amplitude luminance matrix 136 having a lower pixel depth (e.g., 6-bit pixel depth). In the depicted embodiment, each sub-cell of the source RGB matrix 130 stores the same image source color values (i.e., Rs, Gs, and Bs) as each other sub-cell.
  • A table 142 depicts example decimal values for Rs, Gs, and Bs (e.g., “229”, “131”, and “190”). Because the values in the source image RGB matrix 130 are stored at a higher pixel depth (e.g., 8 bits), the values may need to be transformed to lower bit values (e.g., 6-bit pixel depth values) in order to allow display by the display 28. In one embodiment, each of the Rs, Gs, and Bs values (e.g., 8-bit values) may first be converted into lower pixel depth integer values (e.g., 6-bit values). One such conversion from an 8-bit value into a 6-bit value may include dividing the original source value by four (i.e., dividing by 2²). In another conversion, the first six bits of the 8-bit value may be used to arrive at the 6-bit value. In the depicted embodiment, the resulting decimal values for the conversion are depicted as R1, R2, R3, and R4.
  • It is to be noted that the conversion from a higher pixel depth value into a lower pixel depth value may result in numbers having fractional components. For example, for the Rs value of “229”, a division by four results in the number “57.25” having the fractional component “0.25”. Because the hardware may not be suitable for displaying fractional color levels, the fractional component is usually not used. In one embodiment, the original source value “229” is approximated by using four lower pixel depth values R1, R2, R3, and R4 set to “57”, “57”, “57”, and “58”, respectively. Likewise, the Gs value of “131” may result in G1, G2, G3, and G4 set to “32”, “33”, “33”, and “33”, respectively. Similarly, the Bs value of “190” may result in B1, B2, B3, and B4 set to “47”, “47”, “48”, and “48”, respectively. These four sets of lower pixel depth (e.g., 6-bit) values may then be color-shifted, that is, distributed spatially so as to reduce the luminance amplitude of the matrix 136.
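One way to reproduce the approximation shown in table 142 is to divide by four and spread the remainder over the four cells, so that the four values sum back to the original. This is a sketch; the choice of which cells receive the extra level is illustrative:

    def split_to_lower_depth(value, cells=4, shift=2):
        # Represent one higher pixel depth value by `cells` lower pixel depth
        # values whose sum equals the original, i.e. whose average is the
        # exact (fractional) result of the divide-by-2^2 conversion.
        base = value >> shift                    # value // 4 for shift == 2
        remainder = value & ((1 << shift) - 1)   # value % 4 for shift == 2
        # The last `remainder` cells receive one extra level, as in table 142.
        return [base + (1 if i >= cells - remainder else 0)
                for i in range(cells)]

    assert split_to_lower_depth(229) == [57, 57, 57, 58]
    assert split_to_lower_depth(131) == [32, 33, 33, 33]
    assert split_to_lower_depth(190) == [47, 47, 48, 48]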
  • In order to reduce the luminance amplitude of the matrix 136, a luminance difference may first be calculated by finding a highest luminance value and a lowest luminance value based on all the RGB values of the luminance matrix 136, by using, for example, the luminance equation Y. In our example, the highest luminance value could be obtained with a cell having the values R=“58”, G=“33”, and B=“48”. The lowest luminance value could be obtained with a cell having the values R=“57”, G=“32”, and B=“47”. In some embodiments, the luminance difference may then be reduced by increasing or decreasing the values for red, green, and blue so as to reduce luminance variation within the matrix 136. Increasing or decreasing the green value (while keeping the other colors the same) has the greatest perceived effect on luminance, based on the perceived luminance equation Y described above. Increasing or decreasing the red value (while keeping the other colors the same) has the second greatest effect on luminance, and increasing or decreasing the blue value (while keeping the other colors the same) has the least perceived effect on luminance.
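Under the Rec. 601 weighting assumed in the earlier sketch (the patent's own Y equation is defined outside this excerpt), the per-channel effect of a one-level change can be read directly off the channel weights:

    # Effect on perceived luminance of a one-level change in a single
    # channel, holding the other two fixed, under the assumed weighting:
    # green dominates, then red, then blue.
    for channel, weight in (("G", 0.587), ("R", 0.299), ("B", 0.114)):
        print(f"|delta Y| for a +/-1 change in {channel}: {weight}")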
  • In certain embodiments, an algorithm, such as a value optimization algorithm (e.g., a greedy algorithm), may be used to assign the sets of values to specific cells (e.g., spatially distribute the values) so as to minimize the luminance difference of the reduced-amplitude luminance matrix 136 by using the luminance equation Y to more evenly distribute the integer values. For example, the algorithm may first assign the four R1, R2, R3, and R4 values in increasing order, random order, or any other ordering. A table 144 of display luminance values depicts the four R1, R2, R3, and R4 values assigned in increasing order (e.g., R1=“57”, R2=“57”, R3=“57”, and R4=“58”). The four green values may then be assigned to minimize the red-green luminance difference between the four cells. For example, if a cell has a high red value compared to one or more other cells, then the cell may be used to store a low green value (compared to one or more other cells). In the depicted example, the highest red value is stored in R4; therefore, G4 may be assigned the lowest green value.
  • The blue color values may then be similarly assigned so that the resulting luminance difference of the reduced-amplitude luminance matrix 136 is lowered or minimized. For example, the lowest blue value of “47” may be assigned to the cells having red=“57” and green=“33” (e.g., the second and third cells) of the matrix 136 to counterbalance the assignment of a blue value of “48” to the first and fourth cells of the matrix 136. A high perceived luminance value YH=“41.7” for the reassigned matrix 136 may then be found in the first cell having the values R1=“57”, G1=“33”, and B1=“48”. A low luminance value YL=“41.4” for the matrix 136 may be found in the fourth cell having the values R4=“58”, G4=“32”, and B4=“48”, with the luminance values of the second and third cells falling between YL and YH. It is to be understood that any algorithm, including brute force search algorithms, suitable for spatially redistributing the sets of values (e.g., R1, R2, R3, R4, G1, G2, G3, G4, B1, B2, B3, and B4) may be used to derive the reduced-amplitude luminance matrix 136.
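Because the choice of algorithm is left open, the following is a minimal brute-force sketch rather than the method of any particular embodiment; it assumes the Rec. 601 luminance weighting used earlier and keeps the red values in a fixed order:

    from itertools import permutations

    def luminance(r, g, b):
        # Assumed Rec. 601 weighting, as in the earlier sketch.
        return 0.299 * r + 0.587 * g + 0.114 * b

    def min_amplitude_assignment(reds, greens, blues):
        # Brute-force search: keep the red values in their given order and
        # try every spatial ordering of the green and blue values (at most
        # 4! * 4! = 576 candidates), keeping the assignment whose luminance
        # amplitude is smallest.
        best_cells, best_amp = None, float("inf")
        for gs in set(permutations(greens)):
            for bs in set(permutations(blues)):
                cells = list(zip(reds, gs, bs))
                ys = [luminance(*c) for c in cells]
                amp = max(ys) - min(ys)
                if amp < best_amp:
                    best_cells, best_amp = cells, amp
        return best_cells, best_amp

    cells, amp = min_amplitude_assignment(
        [57, 57, 57, 58], [32, 33, 33, 33], [47, 47, 48, 48])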
  • In one embodiment, the values of the reduced-amplitude luminance matrix 136 may then be used to display an improved and more visually pleasing displayed image 74. In another embodiment, such as the embodiment described in more detail below with respect to FIGS. 11-14, the reduced-amplitude luminance matrix 136 may be temporally dithered in order to further improve the visual perception of the displayed image 74.
  • FIGS. 11-14 depict an embodiment of the use of temporal dithering to improve the visual perception of the reduced-amplitude luminance matrix 136. Turning to FIG. 11, the figure depicts the matrix 136 at time T0. As mentioned above, the values for R1, R2, R3, R4, G1, G2, G3, G4, B1, B2, B3, and B4 may have been distributed so as to result in a more homogenous luminance for the matrix 136. Temporal dithering of the matrix 136 may result in a further improvement in the perceived visual aspects of the image. Accordingly, FIG. 12 illustrates a temporal dithering of the cells at time T1. A resulting temporally dithered matrix 146 depicts a clockwise temporal dithering of the cells of the matrix 136. In the depicted embodiment, the R1, G1, and B1 values have been temporally shifted in a clockwise direction to the cell that was previously storing the R2, G2, and B2 values. Similarly, the R2, G2, and B2 values have been temporally shifted to the cell that used to store the R4, G4, and B4 values. The R4, G4, and B4 values have been temporally shifted to the cell that used to store the R3, G3, and B3 values. Finally, the R3, G3, and B3 values have been temporally shifted to the cell that used to store the R1, G1, and B1 values.
  • FIG. 13 depicts a similar clockwise temporal dithering of the matrix 146 at time T2, resulting in a temporally dithered matrix 148. Likewise, FIG. 14 is illustrative of the clockwise temporal dithering of the matrix 148, resulting in a temporally dithered matrix 150. It is to be understood that the temporal dithering embodiment depicted in FIGS. 11-14 is but one of any number of temporal dithering embodiments that may be utilized to improve the visual perception of the displayed image 74. Indeed, in another embodiment, the cells having the lowest luminance value (e.g., R=“58”, G=“32”, and B=“48”) and the highest luminance value (e.g., R=“57”, G=“33”, and B=“47”) of the initial reduced-amplitude luminance matrix 136 may be alternated with each other, and the remaining two cells then also alternated with each other.
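The clockwise rotation of FIGS. 11-14 might be sketched as follows; the row-major cell layout (cells 1 and 2 in the top row, 3 and 4 in the bottom row) and the example cell values taken from the FIG. 10 walkthrough are assumptions of this sketch:

    def rotate_clockwise(group):
        # One temporal dithering step: shift the four RGB cells of a 2 x 2
        # group one position clockwise (cell 1 -> 2, 2 -> 4, 4 -> 3, 3 -> 1).
        (tl, tr), (bl, br) = group
        return [[bl, tl], [br, tr]]

    # Matrix 136 at time T0; each rotation yields the matrix at T1, T2, T3
    # (matrices 146, 148, and 150 in FIGS. 12-14).
    m = [[(57, 33, 48), (57, 33, 47)],
         [(57, 33, 47), (58, 32, 48)]]
    for _ in range(3):
        m = rotate_clockwise(m)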
  • Turning to FIG. 15, the figure depicts another example of the transformation of an example source image RGB matrix 130 into a reduced-amplitude luminance matrix 136. In this example, 10-bit source image values may be used to derive 8-bit hardware values suitable for display by the display 28. It is to be understood that in addition to a 10-bit to 8-bit conversion, any number of other conversions of higher pixel depths to lower pixel depths may be possible. Indeed, the techniques described herein may be used to convert 9-bit to 6-bit, 10-bit to 6-bit, 12-bit to 6-bit, 9-bit to 8-bit, 12-bit to 8-bit, and so forth. As mentioned above, higher pixel depth values (e.g., 10-bit values) of the original image may be converted into lower pixel depth values (e.g., 8-bit values) by various techniques, including using the first eight bits of the 10-bit value. Example 10-bit values for Rs, Gs, and Bs are shown in table 142 (e.g., “935”, “606”, and “366”). In one embodiment, the 10-bit value “935” may be approximated by the 8-bit values “233”, “234”, “234”, and “234”. Similarly, the 10-bit value “606” may be approximated by the 8-bit values “151”, “151”, “152”, and “152”. Likewise, the 10-bit value “366” may be approximated by the 8-bit values “92”, “92”, “91”, and “91”.
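The divide-and-spread sketch given earlier reproduces these 8-bit values as well, since a 10-bit to 8-bit conversion is again a division by four. The cell ordering this sketch produces for “366” differs from the order listed in table 142, which is immaterial before the color shifting re-arranges the values:

    def split_to_lower_depth(value, cells=4, shift=2):
        # Same divide-and-spread conversion as in the earlier sketch; a
        # 10-bit to 8-bit conversion is again a divide by 2^2.
        base, remainder = value >> shift, value & ((1 << shift) - 1)
        return [base + (1 if i >= cells - remainder else 0)
                for i in range(cells)]

    assert split_to_lower_depth(935) == [233, 234, 234, 234]
    assert split_to_lower_depth(606) == [151, 151, 152, 152]
    assert sorted(split_to_lower_depth(366)) == [91, 91, 92, 92]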
  • The reduced-amplitude luminance matrix 136 may then be arrived at by color-shifting or spatially distributing the 8-bit values so as to reduce the overall perceived luminance difference of the reduced-amplitude luminance matrix 136. In our example, the highest luminance value could be obtained with a cell having the values R=“234”, G=“152”, and B=“92”. The lowest luminance value could be obtained with a cell having the values R=“233”, G=“151”, and B=“91”. The lower pixel depth values may then be re-assigned as depicted in table 144 so as to reduce the luminance difference between the cell having the highest luminance and the cell having the lowest luminance. In this example, the four R1, R2, R3, and R4 values are first assigned in decreasing order (e.g., R1=“234”, R2=“234”, R3=“234”, and R4=“233”). The four green values may then be assigned to minimize the red-green luminance difference between the four cells. For example, the luminance difference of the cells may be minimized by balancing the assignment of a high red value in one cell with the assignment of a high green value in another cell so as to more evenly spread the high value assignments. In the depicted example, the highest red values are stored in R1, R2, and R3; therefore, G1 and G2 may get the two lowest green values (e.g., “151”, “151”). The blue color values may then be similarly assigned so that the resulting luminance difference of the reduced-amplitude luminance matrix 136 is lowered or minimized. In this example, the blue value “91” may be assigned to the two cells of the matrix 136 containing the highest green values (e.g., the third and fourth cells) to counterbalance the assignment of the blue value “92” to the first two cells of the matrix 136. By using the techniques described herein, the resulting displayed image 74 may be perceived as having an improved visual quality.
  • The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.

Claims (24)

1. A method comprising:
decomposing a source image comprising a plurality of pixels into Red (R), Green (G), and Blue (B) color components corresponding to each pixel;
creating a red source image group, a green source image group, and a blue source image group by assigning the respective RGB color components of each pixel to the red, green and blue source image groups;
deriving a matrix based on the red, green, and blue source image groups;
determining a luminance difference between cells in the matrix, each cell including a red component, a green component, and a blue component, wherein determining the luminance difference includes determining a luminance amplitude of the matrix, the luminance amplitude of the matrix equal to the difference in luminance between a cell of the matrix having the highest luminance and a cell of the matrix having the lowest luminance based on their respective red, green, and blue components; and
reducing the luminance amplitude of the matrix.
2. The method of claim 1, comprising temporally dithering at least one of the red, green, or blue components of two or more cells in the matrix.
3. The method of claim 2, wherein the temporally dithering the at least one of the red, green, or blue components of the two or more cells in the matrix comprises a clockwise or counterclockwise temporal dithering of the cells in the matrix.
4. The method of claim 1, wherein the deriving the matrix comprises deriving a lower pixel depth value from a higher pixel depth value of the red, green, or blue source image groups.
5. The method of claim 4, wherein the deriving the matrix includes spatially dithering at least one of the RGB color components.
6. The method of claim 1, wherein the reducing the luminance amplitude of the matrix comprises a color-shifting of at least one of the red, green, or blue components by distributing the color components spatially across one or more cells in the matrix such that the luminance amplitude between the most and least luminescent cells of the matrix following the color-shifting is less than the luminance amplitude between the most and least luminescent cells of the matrix before the color-shifting.
7. A non-transitory computer-readable medium comprising code adapted to:
decompose a source image comprising pixels into Red (R), Green (G), and Blue (B) color components corresponding to each pixel;
create an individual color component source image group by assigning the R, G, or B color component of each pixel to one or more adjacent cells of the source image group;
create a first most significant bit (MSB) matrix by using the most significant bits of each cell of the source image group;
create a least significant bit (LSB) matrix by using the least significant bits of each cell of the source image group;
select a dither pattern from a plurality of dither patterns by using the LSB matrix;
create a modification matrix by using the selected dither pattern and the LSB matrix;
add the modification matrix to the first MSB matrix to create a second MSB matrix; and
provide visual output to a display based on the second MSB matrix.
8. The non-transitory computer-readable medium of claim 7, wherein the code adapted to select the dither pattern by using the LSB matrix comprises code adapted to use a magnitude of a value of a cell in the LSB matrix to select the dither pattern.
9. The non-transitory computer-readable medium of claim 7, wherein the code adapted to select the dither pattern by using the LSB matrix comprises code adapted to temporally dither the selected dither pattern.
10. The non-transitory computer-readable medium of claim 7, wherein the code adapted to create the modification matrix by using the selected dither pattern and the LSB matrix comprises code adapted to use a row and column position of a first cell of the LSB matrix to select a value in a second cell of the selected dither pattern, the second cell of the selected dither pattern having the row and column position of the first cell.
11. The non-transitory computer-readable medium of claim 7, comprising code adapted to temporally dither the second MSB matrix.
12. The non-transitory computer-readable medium of claim 11, wherein the code adapted to temporally dither the second MSB matrix comprises code adapted to perform a clockwise or counterclockwise temporal dithering of the second MSB matrix.
13. An electronic device comprising:
a display comprising a plurality of pixels; and
a processor configured to transmit signals representative of image data to the plurality of pixels of the display, wherein the processor is adapted to decompose an area of a source image into Red (R), Green (G), and Blue (B) color components; create a source image group by assigning the RGB color components of the area to one or more adjacent cells of the source image group; create a most significant bit (MSB) matrix by using the source image group; derive a matrix based on the RGB colors of each cell of the MSB matrix; determine a luminance difference of the cells in the matrix; and reduce the luminance amplitude of the matrix.
14. The electronic device of claim 13, wherein the area of the source image comprises a single pixel.
15. The electronic device of claim 13, wherein the processor is adapted to create the MSB matrix using only the most significant bits of the source image group.
16. A method comprising:
decomposing a source image comprising pixels into Red (R), Green (G), and Blue (B) color components corresponding to each pixel;
creating an individual color source image group by assigning the R, G, or B color component of each pixel to one or more adjacent cells of the source image group;
creating a first most significant bit (MSB) group by using the source image group;
creating a least significant bit (LSB) group by using the source image group;
selecting a spatial dither pattern from a plurality of spatial dither patterns by using the LSB group;
creating a modification matrix by using the dither pattern and the LSB group;
creating a second MSB group by using the modification matrix and the first MSB group;
creating a reduced-amplitude luminance matrix based on the second MSB group; and
temporally dithering the reduced-amplitude luminance matrix.
17. The method of claim 16, comprising temporally dithering the selected spatial dither pattern.
18. The method of claim 17, wherein the temporally dithering of the selected spatial dither pattern comprises a clockwise temporal dithering or a counterclockwise temporal dithering.
19. The method of claim 17, wherein the temporally dithering of the selected spatial dither pattern comprises dividing the spatial dither pattern into a plurality of quadrants and then performing either a clockwise shifting of bit values in each quadrant or a counterclockwise shifting of the bit values in each quadrant.
20. The method of claim 16, comprising temporally dithering the second MSB group.
21. A non-transitory computer-readable medium comprising code adapted to:
create a first most significant bit (MSB) group by selecting most significant bits from an area of a source image;
create a least significant bit (LSB) group by selecting least significant bits from the area of the source image;
select a dither pattern;
create a modification matrix by using the LSB group; and
create a second MSB group by adding the modification matrix to the first MSB group.
22. The non-transitory computer-readable medium of claim 21, wherein the code adapted to select the dither pattern comprises code adapted to use the LSB group to select the dither pattern.
23. The non-transitory computer-readable medium of claim 22, wherein the code adapted to use the LSB group to select the dither pattern comprises code adapted to select the dither pattern based on a magnitude of a value of a cell in the LSB group.
24. The non-transitory computer-readable medium of claim 21, wherein the code adapted to create the second MSB group by adding the modification matrix to the first MSB group comprises code adapted to add a first value corresponding to a first cell in the first MSB group to a second value corresponding to a second cell in the modification matrix.