US20110285713A1 - Processing Color Sub-Pixels - Google Patents
- Publication number
- US20110285713A1 (application US12/907,178)
- Authority
- US
- United States
- Prior art keywords
- pixel
- color
- sub
- display
- pixels
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
- G09G3/34—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/68—Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/77—Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0457—Improvement of perceived resolution by subpixel rendering
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/06—Colour space transformation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/133—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/135—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements
Definitions
- the field of the present invention relates generally to digital image processing for display devices.
- a digital image is comprised of a multitude of small picture elements or pixels.
- a single pixel may be formed from red, green, and blue (RGB) sub-pixels.
- the sub-pixels in some RGB display devices may include either a red, green, or blue filter.
- the sub-pixels in a display device are spatially close and, for this reason, human vision perceives the red, green, and blue sub-pixels as a single-colored pixel. By modulating the colors of the individual sub-pixels, a range of colors can be generated for each pixel.
- a color filter array describes the arrangement of sub-pixels in color image sensors and in color display devices.
- Various CFAs are known.
- the Bayer CFA is one well-known example. Red, green, and blue sub-pixels are arranged in a square grid in the Bayer CFA. There are as many green sub-pixels as blue and red sub-pixels combined, with a green sub-pixel at every other position in both the horizontal and vertical directions, and the remaining positions being populated with blue and red sub-pixels.
- a single pixel includes two green and one each of blue and red sub-pixels.
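The Bayer layout described above can be sketched in a few lines. This is an illustrative helper, not part of the patent; it assumes the common "RGGB" 2x2 tile, which satisfies the description: green at every other position in both directions, red and blue filling the rest.

```python
def bayer_cfa(rows, cols):
    """Return a rows x cols grid of 'R', 'G', 'B' laid out as a Bayer CFA."""
    tile = [['R', 'G'],
            ['G', 'B']]  # one 2x2 Bayer tile: two greens, one red, one blue
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

for row in bayer_cfa(4, 4):
    print(' '.join(row))
```

In the resulting 4x4 grid, half the positions are green and the other half split evenly between red and blue, matching the two-green-per-pixel ratio noted above.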
- the data for a color pixel define how much color each sub-pixel adds to the perceived color of the pixel.
- the data for each sub-pixel can vary within a range depending on the number of data bits allocated in the display system for sub-pixel values. For example, for 24-bit RGB color, 8 bits are allocated per sub-pixel, providing a range of 256 possible values for each color channel. If the data values for all components of an RGB pixel are zero, the pixel theoretically appears black. On the other hand, if all three sub-pixel values are at their maximum value, the pixel theoretically appears white.
- RGB pixel data expressed using 24-bits (8:8:8) provides for a color palette of 16,777,216 colors. Color pixel data, however, need not be expressed using 24-bits.
- RGB pixel data may be represented using as few as one bit per channel (1:1:1), providing a color palette of eight colors.
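The palette sizes quoted above follow directly from the bits allocated per channel. A small arithmetic sketch (illustrative, not from the patent):

```python
def palette_size(bits_per_channel, channels=3):
    """Number of distinct colors for a given per-channel bit depth."""
    values_per_channel = 2 ** bits_per_channel  # e.g. 8 bits -> 256 values
    return values_per_channel ** channels

print(palette_size(8))  # 24-bit (8:8:8) RGB: 256^3 = 16,777,216 colors
print(palette_size(1))  # 1:1:1 RGB: 2^3 = 8 colors
```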
- An electro-optic material has at least two “display states,” the states differing in at least one optical property.
- An electro-optic material may be changed from one state to another by applying an electric field across the material.
- the optical property may or may not be perceptible to the human eye, and may include optical transmission, reflectance, or luminescence.
- the optical property may be a perceptible color or shade of gray.
- Electro-optic displays include the rotating bichromal member, electrochromic medium, electro-wetting, and particle-based electrophoretic types.
- Electrophoretic display devices (“EPD”), sometimes referred to as “electronic paper” devices, may employ one of several different types of electro-optic technologies.
- Particle-based electrophoretic media include a fluid, which may be either a liquid or a gas.
- Various types of particle-based EPD devices include those using encapsulated electrophoretic, polymer-dispersed electrophoretic, and microcellular media.
- Another electro-optic display type similar to EPDs is the dielectrophoretic display.
- An electro-optic display device may have display pixels or sub-pixels that have multiple stable display states. Display devices in this category (a) are capable of displaying two or more display states, and (b) those display states are considered stable.
- the display pixels or sub-pixels of a bistable display may have first and second stable display states.
- the first and second display states differ in at least one optical property, such as a perceptible color or shade of gray. For example, in the first display state, the display pixel may appear black and in the second display state, the display pixel may appear white.
- the display pixels or sub-pixels of a display device having multiple stable display states may have three or more stable display states, each of the display states differing in at least one optical property, e.g., light, medium, and dark shades of a particular color.
- the display pixels or sub-pixels may have display states corresponding with 4, 8, 16, 32, or 64 different shades of gray.
- the display states may be considered to be stable, according to one definition, if the persistence of the display state with respect to display pixel drive time is sufficiently large.
- An exemplary electro-optic display pixel or sub-pixel may include a layer of electro-optic material situated between a common electrode and a pixel electrode.
- the display state of the display pixel or sub-pixel may be changed by driving a drive pulse (typically a voltage pulse) on one of the electrodes until the desired appearance is obtained.
- the display state of a display pixel or sub-pixel may be changed by driving a series of pulses on the electrode. In either case, the display pixel or sub-pixel exhibits a new display state at the conclusion of the drive time.
- the new display state may be considered stable.
- the display states of display pixels of liquid crystal displays (“LCD”) and CRTs are not considered to be stable, whereas electrophoretic displays, for example, are considered stable.
- Color data pixels include a color component for each color channel. Accordingly, a capability for enhancing individual color components of the data pixels of a color image may be useful.
- An embodiment is directed to a method for processing color sub-pixels.
- the method may include receiving a color image and mapping the color image to a display device.
- the color image may be defined by two or more data pixels, each data pixel having at least a first and second color component.
- the display device may have two or more display pixels, each display pixel having two or more sub-pixels.
- the mapping may include mapping a first color component of a first data pixel to a first sub-pixel of a first display pixel, mapping a second color component of a second data pixel to a second sub-pixel of the first display pixel, and storing the first and second color components in a memory.
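The mapping described above — each sub-pixel of a display pixel taking one color component from the data pixel at that sub-pixel's position — can be sketched as follows. The names (`map_to_subpixels`, `cfa_map`) and the data layout are illustrative assumptions, not taken from the patent.

```python
def map_to_subpixels(data_pixels, cfa_map):
    """Map a color image onto a sub-pixelated display.

    data_pixels: 2D grid of (r, g, b) tuples, one per sub-pixel site.
    cfa_map: 2D grid of channel indices (0=R, 1=G, 2=B) of equal shape,
             describing the display's color filter array.
    Returns a 2D grid of single sub-pixel values: each display sub-pixel
    receives the matching color component of the data pixel at its site.
    """
    return [[data_pixels[r][c][cfa_map[r][c]]
             for c in range(len(cfa_map[0]))]
            for r in range(len(cfa_map))]

data = [[(10, 20, 30), (11, 21, 31)],
        [(12, 22, 32), (13, 23, 33)]]
cfa = [[0, 1],
       [1, 2]]  # an R G / G B tile, expressed as channel indices
print(map_to_subpixels(data, cfa))  # [[10, 21], [22, 33]]
```

Note how the first sub-pixel takes the red component of the first data pixel while its neighbor takes the green component of the second data pixel, as in the embodiment above.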
- the display device is an electro-optic display device having two or more stable display states.
- the method may include causing the display states of the first and second sub-pixels to change to display states corresponding with the first and second color components.
- the first and second color components each have an associated color property.
- the method may include selecting one or more sub-pixel locations in a color filter array map to diffuse quantization error, determining a first quantized color component for the first color component, determining a first quantization error associated with the first quantized color component, and diffusing the first quantization error to the selected one or more sub-pixel locations.
- the method may include determining whether the first color component has a value within a particular range of color component values, and excluding the first color component from the diffusing of the first quantization error to the selected one or more sub-pixel locations if the value of the first color component is outside of the particular range.
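A minimal sketch of this quantize-and-diffuse step, with the described range-based exclusion, is shown below. It assumes 8-bit input, a small number of output levels, and a simple "diffuse to the next sub-pixel" neighbor selection; the exclusion range and neighbor choice are illustrative, as the patent leaves both configurable.

```python
def diffuse_row(row, levels=4, keep_range=(16, 240)):
    """Quantize one row of 8-bit values, diffusing each quantization error
    to the next sub-pixel. Inputs outside keep_range are quantized but
    excluded from error diffusion (their error is not propagated)."""
    step = 255 / (levels - 1)
    out = []
    carry = 0.0  # quantization error diffused from the previous sub-pixel
    for v in row:
        x = v + carry
        q = round(max(0.0, min(255.0, x)) / step) * step
        out.append(int(q))
        # exclude out-of-range inputs from the diffusing of their error
        carry = (x - q) if keep_range[0] <= v <= keep_range[1] else 0.0
    return out
```

For example, `diffuse_row([128, 128, 128, 128], levels=2)` alternates between 255 and 0, preserving the mid-gray on average, while `diffuse_row([0, 255], levels=2)` leaves the already-saturated endpoints untouched because they fall outside the range and contribute no error.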
- the color filter array map may include white sub-pixels.
- An embodiment is directed to a method for reducing the resolution of color sub-pixels.
- the method may include selecting one or more sub-pixel locations in a color filter array map to diffuse quantization error, receiving a color image defined by two or more data pixels, each data pixel having two or more color components, each color component having a color property, and determining a quantized color component for each color component of a first data pixel.
- the method may further include determining a quantization error associated with each quantized color component, and diffusing the quantization errors to the selected one or more sub-pixel locations.
- the method for reducing the resolution of color sub-pixels may include determining whether the first data pixel has a value within a particular range of data pixel values, and excluding the first data pixel from the diffusing of the quantization errors to the selected one or more sub-pixel locations if the value of the first data pixel is outside of the particular range.
- An embodiment is directed to a processor.
- the processor may include an interface to receive a color image and a mapping unit.
- the color image may be defined by two or more data pixels, each data pixel having at least a first and second color component.
- the mapping unit may be operable to map the color image to a display device having two or more display pixels, each display pixel having two or more sub-pixels.
- the mapping may include mapping a first color component of a first data pixel to a first sub-pixel of a first display pixel, and mapping a second color component of a second data pixel to a second sub-pixel of the first display pixel.
- the display device may be an electro-optic display device having two or more stable display states.
- the processor may include a display engine to provide waveforms to cause the display states of the first and second sub-pixels to change to display states corresponding with the first and second color components.
- the display device may be an electrophoretic display device.
- the processor may be a display controller.
- the first and second color components may each have an associated color property.
- the processor may include a color processing unit.
- the color processing unit may receive a selection of one or more sub-pixel locations in a color filter array map to diffuse quantization error.
- the color processing unit may determine a quantized color component for each color component of the color image, determine a quantization error associated with each quantized color component, and diffuse respective quantization errors to the selected one or more sub-pixel locations.
- the color processing unit may determine whether the first color component has a value within a first range of color component values, and exclude the first color component from the diffusing of the respective quantization errors to the selected one or more sub-pixel locations if the value of the first color component is outside of the first range.
- the color processing unit may determine whether the second color component has a value within a second range of color component values, and exclude the second color component from the diffusing of the respective quantization errors to the selected one or more sub-pixel locations if the value of the second color component is outside of the second range, wherein the first and second ranges are different.
- the display device may be an electrophoretic display device.
- the processor may be a display controller and the display device may be an electrophoretic display device.
- the color filter array map may include white sub-pixels.
- FIG. 1 is a simplified illustration of an exemplary system in which embodiments may be implemented.
- FIG. 2 is a simplified illustration of a memory and a color processor of the system of FIG. 1 according to one embodiment.
- FIG. 3 illustrates a flexible data path for color synthesis of primaries according to one embodiment.
- FIG. 4 is a block diagram of an exemplary circuit for implementing the flexible data path of FIG. 3 .
- FIG. 5 is a simplified block diagram of an exemplary saturation adjustment unit according to one embodiment.
- FIG. 6 is a diagram illustrating an exemplary diffusion of quantization error of an input pixel to pixels neighboring the input pixel.
- FIG. 7 is a diagram illustrating quantization errors of neighbor pixels that may be used in an exemplary calculation of a dithered pixel.
- FIG. 8 is a simplified diagram of an exemplary white sub-pixel generation unit according to one embodiment.
- FIG. 9 illustrates an exemplary CFA mapping and post-processing unit according to one embodiment.
- FIG. 10 illustrates an example of mapping samples of input image pixels to sub-pixels of a display device.
- FIG. 11 illustrates pixels in a portion of an exemplary image and sub-pixels in a portion of a display device.
- FIG. 12 illustrates exemplary color filter arrays.
- FIG. 13 illustrates a map for use in specifying neighbor pixels or sub-pixels to receive a quantization error of a sub-pixel.
- FIG. 14 illustrates an exemplary use of the map of FIG. 13 for specifying neighbor pixels or sub-pixels to receive a quantization error of a sub-pixel.
- FIG. 15 is a simplified diagram of a cross-section of a portion of an exemplary electrophoretic display, depicting ambient light entering through a first color filter and exiting through an adjacent color filter.
- FIG. 16 is a simplified diagram of a cross-section of a portion of an exemplary electrophoretic display, depicting ambient light entering through a first color filter and exiting through a gap between adjacent color filters.
- FIG. 17 is a simplified diagram of a cross-section of a portion of an exemplary electrophoretic display, and a front view of a color filter array according to one embodiment.
- FIG. 18 illustrates front views of two exemplary color filter arrays.
- FIG. 19 illustrates a block diagram of a circuit for implementing the flexible data path for color synthesis of primaries according to one alternative embodiment.
- FIG. 20 is a simplified block diagram of a color processor, a white sub-pixel generation unit, and a post-processing unit according to one embodiment.
- FIG. 21 is a simplified diagram of an exemplary white sub-pixel generation unit according to one embodiment.
- FIG. 22 illustrates exemplary, alternative configurations for use of a look up table memory of FIG. 21 .
- FIG. 23 illustrates front views of two color filter arrays.
- FIG. 1 illustrates a block diagram of an exemplary display system 120 illustrating one context in which embodiments may be implemented.
- the system 120 includes a host 122 , a display device 124 having a display matrix 126 , a display controller 128 , and a system memory 130 .
- the system 120 may include an image sensor 118 .
- the system 120 may also include a waveform memory 134 , a temperature sensor 136 , and a display power module 137 .
- the system 120 may include buses 138 , 140 , 142 , 144 , 146 , 148 , and 149 .
- the display controller 128 includes a display controller memory 150 , a color processor 152 , a display engine 154 , and other components (not shown). In one embodiment, the display controller 128 may include circuitry or logic that executes instructions of any computer-readable type to perform operations.
- the system 120 may be any digital system or appliance.
- the system 120 may be a battery powered (not shown) portable appliance, such as an electronic reader, cellular telephone, digital photo frame, or display sign.
- FIG. 1 shows only those aspects of the system 120 believed to be helpful for understanding the disclosed embodiments, numerous other aspects having been omitted.
- the host 122 may be a general purpose microprocessor, digital signal processor, controller, computer, or any other type of device, circuit, or logic that executes instructions of any computer-readable type to perform operations. Any type of device that can function as a host or master is contemplated as being within the scope of the embodiments.
- the host 122 may be a “system-on-a-chip,” having functional units for performing functions other than traditional host or processor functions.
- the host 122 may include a transceiver or a display controller.
- the term “processor” may be used in this specification and in the claims to refer to either the host 122 or the display controller 128 .
- the system memory 130 may be an SRAM, VRAM, SGRAM, DDRDRAM, SDRAM, DRAM, flash, hard disk, or any other suitable volatile or non-volatile memory.
- the system memory may store instructions that the host 122 may read and execute to perform operations.
- the system memory may also store data.
- the display device 124 may have display pixels that may be arranged in rows and columns forming a matrix (“display matrix”) 126 .
- a display pixel may be a single element or may include two or more sub-pixels.
- the display device 124 may be an electro-optic display device with display pixels having multiple stable display states in which individual display pixels may be driven from a current display state to a new display state by a series of two or more drive pulses.
- the display device 124 may be an electro-optic display device with display pixels having multiple stable display states in which individual display pixels may be driven from a current display state to a new display state by a single drive pulse.
- the display device 124 may be an active-matrix display device.
- the display device 124 may be an active-matrix, particle-based electrophoretic display device having display pixels that include one or more types of electrically-charged particles suspended in a fluid, the optical appearance of the display pixels being changeable by applying an electric field across the display pixel causing particle movement through the fluid.
- the display device 124 may be coupled with the display controller 128 via one or more buses 142 , 149 that the display controller uses to provide pixel data and control signals to the display.
- the display device 124 may be a gray-scale display or a color display.
- the display controller 128 may receive as input and provide as output either gray-scale or color images.
- the display state of a display pixel is defined by one or more bits of data, which may be referred to as a “data pixel.”
- An image is defined by data pixels and may be referred to as a “frame.”
- the display controller 128 may be disposed on an integrated circuit (“IC”) separate from other elements of the system 120 . In an alternative embodiment, the display controller 128 need not be embodied on a separate IC. In one embodiment, the display controller 128 may be integrated into one or more other elements of the system 120 . For example, the display controller 128 may be integrated with the host 122 on a single IC.
- the display memory 150 may be internal or external to the display controller 128 , or may be divided with one or more components internal to the display controller, and one or more components external to the display controller.
- the display memory 150 may be an SRAM, VRAM, SGRAM, DDRDRAM, SDRAM, DRAM, flash, hard disk, or any other suitable volatile or non-volatile memory.
- the display memory 150 may store data or instructions.
- the waveform memory 134 may be a flash memory, EPROM, EEPROM, or any other suitable non-volatile memory.
- the waveform memory 134 may store one or more different drive schemes, each drive scheme including one or more waveforms used for driving a display pixel to a new display state.
- the waveform memory 134 may include a different set of waveforms for one or more update modes.
- the waveform memory 134 may include waveforms suitable for use at one or more temperatures.
- the waveform memory 134 may be coupled with the display controller 128 via a serial or parallel bus. In one embodiment, the waveform memory 134 may store data or instructions.
- the temperature sensor 136 may be provided to determine ambient temperature.
- the drive pulse (or more typically, the series of drive pulses) required to change the display state of a display pixel to a new display state may depend, in part, on temperature.
- the temperature sensor 136 may be mounted in any location suitable for obtaining temperature measurements that approximate the actual temperatures of the display pixels of the display device 124 .
- the temperature sensor 136 may be coupled with the display controller 128 in order to provide temperature data that may be used in selecting a drive scheme.
- the power module 137 may be coupled with the display controller 128 and the display device 124 .
- the power module 137 may receive signals from the display controller 128 and generate appropriate voltages (or currents) to drive selected display pixels of the display device 124 .
- the power module 137 may generate voltages of +15V, −15V, or 0V.
- the image sensor 118 may include a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) type image sensor that converts light into electronic signals that represent the level of light at each pixel. Other image sensing devices that are known or may become known that are capable of converting an image formed by light impinging onto a surface into electronic signals representative of the image may also be used.
- the image sensor 118 may also include circuits for converting the electronic signals into image data and interfacing with other components of the system.
- the display engine 154 may perform a display update operation.
- the display engine 154 may include a pixel processor (not shown) and an update pipe sequencer (not shown).
- a display update operation may include updating display pixels of a display matrix of an electro-optic display device.
- a display update operation may include: (a) a pixel synthesis operation; and (b) a display output operation.
- a display update operation may be performed with respect to all of the display pixels of the display matrix 126 (an “entire” display update).
- a display update operation may be performed with respect to less than all of the display pixels of the display matrix 126 (a “regional” display update).
- two or more regional display updates may be performed in parallel.
- a regional display update of a first region of the display matrix 126 may operate in parallel with a regional display update of a second region, provided the first and second regions do not include any of the same display pixels or sub-pixels.
- the image to be rendered on a display device may include two or more sub-images, and each sub-image or region may be processed using a different color processing algorithm. Because the pixel synthesis and display output operations are performed after color processing, and because the pixel synthesis and display output operations may be performed independently on distinct regions of the display matrix 126 , it will be appreciated that simultaneous display updates may update display pixels that were processed using different color processing algorithms.
- FIG. 2 illustrates the display controller 128 of FIG. 1 according to one embodiment.
- the display controller memory 150 may include a first portion allocated as a color image buffer 220 and a second portion allocated as a processed color image buffer 222 .
- the color processor 152 fetches data from the color image buffer 220 and stores data in the processed color image buffer 222 using the bus 138 . So that the color processor 152 may access the memory 150 , it includes a Read Master unit 224 and a Write Master unit 226 .
- the color processor 152 includes a Color Synthesis of Primaries (CSP) unit 228 , a White Sub-Pixel Generation (WSG) unit 230 , and a CFA Mapping and Post-Processing Unit (PPU) 232 .
- CSP: Color Synthesis of Primaries
- WSG: White Sub-Pixel Generation
- PPU: Post-Processing Unit
- a selecting unit 234 permits the outputs of the CSP unit 228 and the WSG unit 230 to be selected for input to the PPU 232 .
- the WSG unit 230 may receive pixel data from the CSP unit 228 and may provide saturation factor data to the CSP unit 228 .
- the color processor 152 provides for flexible processing of image data read from the color image buffer 220 .
- a user may configure the color processor 152 to implement a custom color processing algorithm for a particular display device by writing parameters to configuration and status registers 236 that may be included in the color processor 152 . These parameters may be written by the host 122 to a bus interface 238 via the bus 140 .
- the color processor 152 may include an input latency buffer 240 for delaying input data as required by a particular color processing algorithm.
- a color processing algorithm for a particular type of display device may include: (a) color correction; (b) color linearization (sometimes referred to as gamma correction); (c) luma scaling; (d) filtering; (e) color saturation adjustment; (f) dithering; and (g) other functions.
- An apparatus for implementing a color processing algorithm that has the capability to include a variety of different functions would be desirable. In general, the effect of applying two or more functions in succession depends on the order in which they are applied. In other words, the final appearance of an image after performing two different functions is affected by the order in which the functions are applied. An apparatus for implementing a color processing algorithm that has the capability to perform desired functions in any order would be desirable.
- FIG. 3 illustrates a block diagram of a flexible data path 320 for color synthesis of primaries according to one embodiment.
- At the center of the flexible data path 320 is a data switch 322 .
- the flexible data path 320 may also include: (a) color correction module 324 ; (b) filtering module 326 ; (c) color linearization module 328 ; (d) color saturation adjustment module 330 ; (e) luma scaling module 332 ; and (f) dithering module 334 .
- the data switch 322 includes an input 336 for receiving image data and an output 338 for outputting image data.
- Image data may be received in any desired format, e.g., RGB, YCrCb, HSL, CMY, etc.
- the pixel depth of input image data may be any desired number of bits, e.g., 24-bit. In one embodiment, the input image pixels may be defined in 12 bit-per-pixel resolution.
- the data switch 322 may be programmable or configurable.
- the flexible data path 320 may be configured to include one or more of the processing modules 324 to 334 .
- the flexible data path 320 may be configured to include one or more additional modules (not shown). Any particular processing module may be included in the data path 320 more than once.
- the flexible data path 320 may be configured to exclude one or more of the modules 324 to 334 .
- One advantage of the capability to exclude any particular processing module is that it permits separate analysis of each processing module, apart from the effects of the other processing modules.
- a particular module may be included or excluded from the flexible data path 320 by programming or configuring the data switch 322 .
- the data switch 322 may be programmed or configured by storing one or more control words in the configuration and status register 236 .
- control words may be used to specify the order in which processing modules are used and to select parameters associated with particular processing modules.
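The control-word configuration described above can be sketched as a small programmable pipeline. Everything below (the stage names, the placeholder stage functions, and `run_pipeline`) is an illustrative assumption, not the patent's circuit:

```python
# Sketch (not the patent's implementation) of a data switch modeled as a
# configurable pipeline: each control word names a processing stage, stages
# not listed are bypassed, and a stage may appear more than once.

def color_correct_stage(pixel):
    # placeholder stage: apply a simple 1.1x gain, clamped to 8 bits
    return tuple(min(255, int(c * 1.1)) for c in pixel)

def linearize_stage(pixel):
    # placeholder stage: identity 256-entry look-up table
    lut = list(range(256))
    return tuple(lut[c] for c in pixel)

STAGES = {"correct": color_correct_stage, "linearize": linearize_stage}

def run_pipeline(pixel, control_words):
    """Apply the stages named by the control words, in the order given."""
    for word in control_words:
        pixel = STAGES[word](pixel)
    return pixel
```

Reordering the control words reorders the stages, mirroring the role of the multiplexer select inputs in the hardware data switch.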
- FIG. 4 illustrates a block diagram of a circuit 420 for implementing the flexible data path 320 for color synthesis of primaries according to one embodiment.
- the circuit 420 may be included in the CSP unit 228 in one embodiment.
- the circuit 420 may include, in one embodiment, a data switch 422 and a variety of processing modules.
- the circuit 420 may include the color correction module 324 , filtering module 326 , color linearization module 328 , dithering module 334 , color saturation adjustment module 330 , and luma scaling module 332 .
- the data switch 422 may include multiplexers M 0 to M 6 , or any other suitable selecting device. Each of the multiplexers M 0 to M 6 includes a select input (not shown).
- the select inputs are used to select the processing modules that are to be included in a color processing algorithm as well as to program the order in which the processing modules are used.
- the data switch 422 includes an input 434 for receiving image data and an output 436 for outputting image data. Input image data may be any desired number of bits.
- the inputs of each of the multiplexers M 0 to M 6 are numbered 0 to 6 from top to bottom.
- all of the modules may be bypassed by selecting the 0 input of multiplexer M 0 .
- the inputs of the multiplexers should be selected as follows: (a) multiplexer M 0 —select input 4, (b) multiplexer M 1 —select input 2, (c) multiplexer M 2 —select input 3, (d) multiplexer M 3 —select input 0, (e) multiplexer M 4 —select input 5, and (f) multiplexer M 5 —select input 1.
- the color correction module 324 may be used as part of a color processing algorithm for a particular type of display device to generate color-corrected pixels.
- the color correction module 324 may make independent adjustments to each color component of a pixel.
- the level of reflectance of an EPD pixel or sub-pixel may be less than one hundred percent. Consequently, when a color image is rendered on an EPD, colors may tend to lack brightness, saturation, or both brightness and saturation.
- when a color image is rendered on a display device, it may have a “color cast.” An image rendered on a display device that lacks brightness or saturation appears too dark.
- An image rendered on a display device that has a color cast may appear tinted.
- a color cast may be the result of properties of the display device or properties inherent in the image data.
- the color correction module 324 may be used to modify the brightness or saturation of pixels.
- the color correction module 324 may be used to shift color values.
- the color correction module 324 may include logic to multiply an RGB vector by a 3×3 kernel matrix, and to add the product to an RGB offset vector, RGB-outoff. Stated symbolically, the color correction module 324 may be used to evaluate the following expression:
- [R′; G′; B′] = K3×3 × [R0 + Rinoff; G0 + Ginoff; B0 + Binoff] + [Routoff; Goutoff; Boutoff], where K3×3 is the kernel matrix.
- R 0 , G 0 , and B 0 are input RGB values.
- the R′, G′, and B′ are color corrected values.
- the respective RGB “inoff” and “outoff” are offset values.
- the “K” values of the 3×3 kernel matrix may be programmable coefficients.
- the color correction module 324 may be used to perform a color space conversion in addition to its use for correcting color. For example, RGB may be converted to YCrCb, YCrCb may be converted to RGB, or YCrCb may be converted to CMY using the above expression. In a color space conversion configuration, different input, output, and offset variables may be substituted.
- the RGB input values R 0 , G 0 , and B 0 in the above expression may be replaced with Y 0 , Cr 0 , and Cb 0
- the corrected values R′, G′, and B′ may be replaced with either Y′Cr′Cb′ or C′M′Y′.
- the color correction module 324 may be used to implement a scaling function with or without an offset.
- the color correction module 324 may be used to adjust color saturation of an image defined in YCrCb space. This may be accomplished by programming the K values of the kernel matrix as shown in the expression below:
- [Y′; Cr′; Cb′] = [1 0 0; 0 S 0; 0 0 S] × [Y0 + 0; Cr0 + 0; Cb0 + 0] + [0; 0; 0]
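The kernel-matrix correction can be sketched in a few lines. The function name and the identity kernel below are illustrative assumptions; in the described hardware, the kernel and offsets are programmable coefficients:

```python
# Sketch of the 3x3 kernel color correction described above: multiply the
# offset input vector by the kernel, then add the output offsets. Kernel
# and offset values shown here are examples only.

def color_correct(rgb, kernel, inoff=(0, 0, 0), outoff=(0, 0, 0)):
    r, g, b = (c + o for c, o in zip(rgb, inoff))        # apply input offsets
    return tuple(k0 * r + k1 * g + k2 * b + off          # kernel row + output offset
                 for (k0, k1, k2), off in zip(kernel, outoff))

IDENTITY = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
```

Programming the kernel rows to (1, 0, 0), (0, S, 0), (0, 0, S) with zero offsets gives the YCrCb saturation-adjustment configuration discussed above; other kernels give color space conversions.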
- the filtering module 326 may be used as part of a color processing algorithm for a particular type of display device to sharpen, blur, or value-scale an image. In addition, the filtering module 326 may be used for other purposes, such as bump mapping and line detection.
- the filtering module 326 may include a separate filter for each color channel. In one embodiment, the filters may be 3×3 filters. For example:
- the R 0 , G 0 , and B 0 are original color values
- the R′, G′, and B′ are filtered color values
- the programmable kernel values “K” define the filter. It is not critical that the filtering module 326 process RGB pixel data.
- the filtering module 326 may process pixel data in any desired format, e.g., YCrCb.
- the type of filtering that is performed may be different on each color channel.
- the filter on a Y channel may be a sharpening filter while the filters on Cr and Cb channels may perform blurring or saturation adjustment.
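A minimal per-channel 3×3 filter consistent with the description above can be sketched as follows; clamped-edge handling is an assumption, since the text does not specify boundary behavior:

```python
# Sketch of per-channel 3x3 filtering as described above: each channel has
# its own programmable kernel, so one channel can be sharpened while others
# are blurred. Edge pixels are handled by clamping coordinates (an
# assumption; the patent does not specify edge behavior).

def filter_channel(img, kernel):
    """img: 2D list of one channel's values; kernel: 3x3 weights."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for ky in (-1, 0, 1):
                for kx in (-1, 0, 1):
                    yy = min(max(y + ky, 0), h - 1)   # clamp to image edges
                    xx = min(max(x + kx, 0), w - 1)
                    acc += kernel[ky + 1][kx + 1] * img[yy][xx]
            out[y][x] = acc
    return out
```

An identity kernel (center weight 1, all others 0) passes the channel through unchanged; a sharpening or blurring kernel on one channel leaves the other channels' filters unaffected.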
- the color linearization module 328 may be used as part of a color processing algorithm for a particular type of display device to generate pixels that are compensated for the non-linearity of the response of the display device to input pixel values.
- the brightness of a pixel generated in response to a signal may not be a linear function of the signal.
- the color linearization module 328 may include three 256 entry look-up tables (LUT), one for each color channel, each LUT defining a function to compensate for non-linearity of display device response. More specifically, the color linearization module 328 may implement a compensation function on each of three color channels. For example, the color linearization module 328 may implement the following:
- R′ = f(R0); G′ = f(G0); B′ = f(B0)
- the R′, G′, and B′ are linearized color values.
- the color linearization LUTs may store entries of any suitable precision.
- the color linearization LUTs may be 8 or 6 bits wide. In one alternative, the color linearization LUTs may be 4 bits wide.
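LUT-based linearization can be sketched as below, assuming 8-bit input codes and a power-law (gamma) compensation curve; a real table would be derived from the measured response of the display:

```python
# Sketch of LUT-based color linearization: one 256-entry table per channel
# maps an 8-bit input code to a compensated code. A power-law (gamma) curve
# stands in for a measured display response; width_bits reflects the
# 8/6/4-bit table widths mentioned above.

def build_gamma_lut(gamma, width_bits=8):
    max_out = (1 << width_bits) - 1
    return [round(((i / 255.0) ** gamma) * max_out) for i in range(256)]

def linearize(rgb, luts):
    """Apply a per-channel compensation LUT: R' = f(R0), etc."""
    return tuple(lut[c] for c, lut in zip(rgb, luts))
```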
- the color saturation adjustment module 330 may be used as part of a color processing algorithm for a particular type of display device to adjust levels of saturation in color pixels.
- the color saturation adjustment module 330 may make independent adjustments to each color component of a pixel.
- the color saturation adjustment module 330 may accept input data in any desired color format.
- the color saturation adjustment module 330 may accept input data in RGB, YCrCb, HSL, CMY, etc.
- input image data is typically provided in RGB format.
- the color saturation adjustment module 330 adjusts the color saturation of an RGB image by first determining the Y component for each pixel of the RGB image.
- the Y component is determined according to the following equation:
- R 0 , G 0 , and B 0 are color components of an original or input RGB image pixel.
- the Y component is individually subtracted from each of the RGB components. The difference is then multiplied by an adjustment factor S. Finally, the products produced in the second operation are added to the Y component. The respective sums are the saturation adjusted RGB components. Equations for the saturation adjusted components R′, G′, and B′ are presented below:
- R′ = S × (R0 − Y) + Y
- G′ = S × (G0 − Y) + Y
- B′ = S × (B0 − Y) + Y
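The saturation adjustment steps above can be sketched directly. The BT.601 luma weights used for Y are an assumption, since the excerpt does not reproduce the patent's Y equation:

```python
# Sketch of the saturation adjustment described above: subtract the luma Y
# from each channel, scale the difference by S, and add Y back. The BT.601
# luma coefficients are an assumption, not taken from the patent.

def luma(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def adjust_saturation(rgb, s):
    """R' = S*(R0 - Y) + Y, and likewise for G and B."""
    y = luma(rgb)
    return tuple(s * (c - y) + y for c in rgb)
```

S greater than 1 pushes components away from the luma (more saturated); S = 0 collapses the pixel to gray.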
- FIG. 5 illustrates a saturation adjustment module 518 according to one embodiment.
- the saturation adjustment module 518 includes input 520 for receiving pixel data, an input 522 for receiving a saturation factor value S, and an output 524 for outputting a saturation adjusted pixel.
- the pixel data received on the input 520 may be in any desired color format.
- the pixel data received on the input 520 may be in the RGB color format.
- the pixel data received on the input 520 is used as an index to a look-up table memory (LUT) 526 , which responds to an index by furnishing a saturation factor value S to the saturation adjustment unit 518 .
- the pixel data received on the input 520 may be in any desired bit-per-pixel resolution. For example, if the input image pixels are defined in 12 bit-per-pixel resolution, the lookup table 526 stores 4096 adjustment factors S.
- the saturation adjustment unit 518 includes a calculating module 528 that evaluates the expression:
- RGB′ = (S × R0G0B0) + ((1 − S) × Y),
- RGB′ is a saturation-adjusted R 0 G 0 B 0 pixel
- S is the saturation factor value
- Y is the luma value of the input pixel R 0 G 0 B 0 .
- the luma value Y may be calculated using second calculating module 530 , which may evaluate the equation:
- the saturation adjustment module 518 and the saturation adjustment module 330 may be the same.
- the luma scaling module 332 may be used as part of a color processing algorithm for a particular type of display device to adjust the lightness or brightness of a digital image.
- the luma scaling module 332 may be used to adjust the contrast in a digital image.
- the luma scaling module 332 may be used to adjust color saturation or pixels defined in the YCrCb color space.
- the luma scaling module 332 may implement the following:
- R′ = R0 × P + C; G′ = G0 × P + C; B′ = B0 × P + C
- the R 0 , G 0 , and B 0 are original color values and the R′, G′, and B′ are luma scaled color values.
- a scale factor is P and a scale offset is C.
- the luma scaling module 332 may be used as part of a color processing algorithm for a particular type of display device to adjust the brightness or saturation of pixels in the luma, chroma-blue, chroma-red (YCrCb) color space. That is, original colors values Y 0 , Cr 0 , and Cb 0 may be substituted for R 0 , G 0 , and B 0 in the above equations.
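The scale-and-offset operation above can be sketched in one line per channel; clamping to the 8-bit range is an assumption, as the text does not state how overflow is handled:

```python
# Sketch of luma scaling: multiply each channel by scale factor P and add
# offset C. Clamping to 0..255 is an assumption; the patent does not state
# how out-of-range results are handled.

def luma_scale(rgb, p, c):
    return tuple(min(255, max(0, int(ch * p + c))) for ch in rgb)
```

The same helper applies unchanged to Y0, Cr0, Cb0 values, per the YCrCb substitution noted above.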
- the dithering module 334 may be used as part of a color processing algorithm for a particular type of display device.
- the number of brightness or intensity levels for sub-pixels that is available in some display devices may be less than 256.
- an EPD pixel may include sub-pixels having 16 intensity levels.
- a 12-bit RGB data value (4:4:4) may be used to define all possible pixel colors.
- the gamut of colors that corresponds with 12-bit RGB data is a relatively small 4,096.
- the dithering module 334 may be included in the color processing algorithm to increase the apparent color gamut of a display device
- the dithering module 334 may employ an error-diffusion scheme, an ordered-diffusion scheme, or any other suitable diffusion scheme.
- the dithering module 334 may employ an error-diffusion scheme.
- in an exemplary error-diffusion scheme, pixels of an input image are processed in raster order.
- the bit-depth of the input pixels may be greater than the bit-depth of the output pixels.
- the input pixels may be 24-bit RGB data (8:8:8), whereas the output pixels may be 12-bit RGB data (4:4:4).
- a quantization error may be calculated for each input data pixel according to the following equation: E(i, j) = P(i, j) − P′(i, j), where P′(i, j) is the quantized value of input pixel P(i, j).
- a quantization error may be calculated for each input data sub-pixel. As shown in FIG. 6 , the quantization error may be diffused to four neighboring pixels. The amount of the error that is distributed to a particular neighbor is determined by a weight coefficient. Where the quantization error is distributed to four neighbors, there may be four weight coefficients, α, β, γ, δ, which are subject to the following condition: α + β + γ + δ = 1.
- FIG. 6 shows one example of how weight coefficients may be used to diffuse a quantization error associated with input pixel P(i, j) to neighbor pixels P(i+1, j), P(i−1, j+1), P(i, j+1), and P(i+1, j+1), where i and j are, respectively, column and row indices.
- FIG. 7 shows neighbor pixels and associated weight coefficients that may be included in a calculation of a dithered pixel P′′(i, j), according to one embodiment.
- a dithered pixel value may be calculated by adding error terms to the quantized pixel value P′(i, j).
- the value of dithered pixel P″(i, j) may be determined according to the following equation: P″(i, j) = P′(i, j) + α·E(i−1, j) + β·E(i+1, j−1) + γ·E(i, j−1) + δ·E(i−1, j−1), where E(x, y) denotes the quantization error diffused from neighbor pixel P(x, y).
- the ⁇ , ⁇ , ⁇ , ⁇ coefficients used by the dithering module 334 may be programmed or configured to suit a color processing algorithm for a particular type of display device.
- the particular neighbor pixels that are used in the error term calculation may be programmed to suit a particular color processing algorithm.
- dithering module 334 may be configured to include only the two neighbor pixels, such as only the horizontally and vertically adjacent pixels.
- the dithering module 334 may include a buffer to store error terms for one line of pixel data (e.g., line j−1) plus the pixel value on the same line (e.g., line j) and to the left of the currently processed pixel.
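The error-diffusion mechanics above can be sketched as follows, quantizing 8-bit values to the 16 levels (4 bits) mentioned earlier. The default weights are the familiar Floyd-Steinberg values, used here only as an example of programmable coefficients that sum to 1:

```python
# Sketch of the error-diffusion scheme above: 8-bit values are quantized to
# 16 levels (4 bits) in raster order, and each pixel's quantization error is
# diffused to four neighbors with programmable weights (alpha to the right;
# beta, gamma, delta to the next line). The Floyd-Steinberg defaults are
# only an example of weights summing to 1, not values from the patent.

def dither(img, weights=(7 / 16, 3 / 16, 5 / 16, 1 / 16)):
    a, b, g, d = weights
    h, w = len(img), len(img[0])
    buf = [[float(v) for v in row] for row in img]
    out = [[0] * w for _ in range(h)]
    for j in range(h):
        for i in range(w):
            old = buf[j][i]
            q = max(0, min(15, round(old / 17)))   # quantize 0..255 -> 0..15
            out[j][i] = q
            err = old - q * 17                     # quantization error
            if i + 1 < w:
                buf[j][i + 1] += a * err           # P(i+1, j)
            if j + 1 < h:
                if i - 1 >= 0:
                    buf[j + 1][i - 1] += b * err   # P(i-1, j+1)
                buf[j + 1][i] += g * err           # P(i, j+1)
                if i + 1 < w:
                    buf[j + 1][i + 1] += d * err   # P(i+1, j+1)
    return out
```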
- the range of pixel color values for which dithering is enabled may be programmed or configured for a particular color processing algorithm. For example, consider an input image defined by 6-bit RGB data (6:6:6) that includes both a color photograph, and black and white text. In this example, a pixel having the maximum value of 32d:32d:32d may appear white, while a pixel having the minimum value of 0d:0d:0d may appear black.
- the range of pixel color values may be set to exclude dithering of the textual portion of the image, while at the same time to include dithering of the color photograph portion of the image by setting, for example, a range having a maximum of 30d:30d:30d and a minimum of 2d:2d:2d.
- the 6.25% whitest and the 6.25% blackest pixels are excluded from dithering. Any desired or suitable range of values to exclude from dithering may be selected.
- the capability to configure a color processing algorithm may be desirable because dithering textual image data can reduce the quality of the rendered image of the text.
- the dithering module 334 may be programmed or configured to operate at sub-pixel resolution.
- a data pixel includes one or more color components, and a range of color component values for which dithering is enabled may be specified. For example, a range having a maximum of 28d and a minimum of 4d may be specified for red color component values for which dithering is enabled. Different color channels may have different ranges.
- the color processor 152 may include a WSG unit 230 .
- FIG. 8 illustrates a white sub-pixel generation (WSG) unit 818 according to one embodiment.
- the WSG unit 818 includes an input 820 for pixel data and may include two outputs 822 , 824 , one for outputting a saturation factor S and another for outputting “fourth sub-pixel” (“WSP”) data.
- the input pixel data may be defined in any color space.
- the input pixel data may be RGB, YCrCb, HSL, or CMY.
- the WSG unit 818 may include a first lookup table (LUT) memory 826 for storing saturation factors, and a second lookup table (LUT) memory 828 for storing fourth pixel values.
- the WSG unit 818 may also include a first input/output path selector 830 and a second input path selector 832 .
- the WSG unit 818 may include a third output path selector 834 and a color space converter (“CSC”) 836 .
- the color space converter 836 may be employed, for example, to convert input pixel data in RGB format to YCrCb or CMY format.
- the color space converter 836 may convert pixel data in a first color format into a single component of pixel data in a second color format.
- the color space converter 836 may convert RGB pixel data into the Y component of YCrCb pixel data according to the following expression:
- the LUT 826 may be employed to store saturation factor values S that may be used by a color saturation module, e.g., module 330 .
- the saturation factor values S may be stored in the LUT 826 by a user.
- the saturation factor values S stored in the LUT 826 may be user determined values based on the image rendering properties of a particular display device.
- a color processing algorithm may include a non-linear saturation factor in a color saturation adjustment function.
- a non-linear saturation function may provide an advantage over a linear saturation function in that it provides increased control of the color gamut that may be rendered on an EPD.
- Saturation factor values S may be retrieved from the LUT 826 using different arguments or indices.
- the retrieval index may be determined by appropriately configuring path selectors 830 and 834 , and color space converter 836 .
- a pixel value received at input 820 may be used as an index to the LUT 826 .
- a down-sampled RGB or YCrCb pixel value may be used as an index for retrieving a stored saturation factor value S.
- an RGB pixel may be received on input 820 and converted to a YCrCb pixel, which may then be used as an index.
- a single component of a color pixel may be used as an index to the LUT 826 .
- the R value of a received RGB pixel, or the Y value of YCrCb pixel may be used as an index for retrieving a stored saturation factor value S.
- the Y value of the YCrCb pixel may be determined from a YCrCb pixel received on input 820 , or the Y value may be received from the color space converter 836 following conversion of a received RGB pixel.
- a constant saturation factor value S may be stored in the LUT 826 , providing a constant value for S.
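Retrieving a saturation factor with a single-component index, one of the options described above, can be sketched as below. The 16-entry table, its values, and the 4-bit down-sampling of an 8-bit luma index are all illustrative assumptions:

```python
# Sketch of retrieving saturation factor S using a single pixel component
# (here an 8-bit luma) as the LUT index. The table size and its contents
# are illustrative user-tuned numbers, not values from the patent.

SAT_LUT = [1.5 - i * (0.5 / 15) for i in range(16)]   # boost dark pixels more

def lookup_s(y8):
    """Down-sample an 8-bit luma index to 4 bits and fetch S."""
    return SAT_LUT[y8 >> 4]
```

Storing a single repeated value in the table reproduces the constant-S configuration noted above; a non-linear table gives the non-linear saturation function discussed below.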
- the color processing algorithm for a particular type of display device may include adding a fourth sub-pixel “WSP” to three-component pixel data.
- a white sub-pixel may be added to each RGB triplet to create RGBW pixels, or a white sub-pixel may be added to each CMY triplet to create CMYW pixels.
- the fourth sub-pixel may be added to pixels of any color model and the fourth sub-pixel need not be white.
- the fourth sub-pixel may be any suitable color or may be no color.
- a fourth sub-pixel may be yellow or black, e.g., RGBY, CMYB, or CMYK pixels may be generated.
- a fourth sub-pixel for inclusion with an RGB pixel may be a duplicate of the green sub-pixel of the RGB triplet.
- the resultant pixel is RGBG, where the G values are identical.
- the G value of an RGB pixel may be passed from input 820 to output 824 using data path 846 .
- the WSG unit 818 may provide several options for determining fourth sub-pixel values. The choices may include calculating options and lookup table options.
- the first input/output path selector 830 may be configured to choose an option for determining a fourth sub-pixel. Depending on the option, different parameters are required. The parameters may be taken directly from, or may be derived from, the input pixel value received on input 820 .
- the color space converter 836 may color space convert an input pixel, and the third output path selector 834 may be configured to include or exclude the color space converter 836 .
- the LUT 828 may be employed to store fourth sub-pixel data.
- the WSG unit 818 may allow retrieval of a fourth sub-pixel from the LUT 828 using a pixel value as an index to the LUT. For example, a down-sampled RGB or YCrCb pixel value may be used as an index for retrieving a fourth sub-pixel.
- the fourth sub-pixel values may be stored in the LUT 828 by a user.
- the fourth sub-pixel values stored in the LUT 828 may be user-determined values based on the image rendering properties of a particular display device.
- the fourth sub-pixel may be calculated.
- the fourth sub-pixel may be calculated using a calculating unit 838 , which evaluates the expression:
- W 1 = min(R 0 , G 0 , B 0 ); that is, the calculated fourth sub-pixel W 1 is set to the minimum of the R, G, and B sub-pixel values.
- the path selectors 830 and 834 are configured to provide RGB pixel values to the input of calculating unit 838 .
- the fourth sub-pixel may be calculated using calculating unit 840 , which evaluates the expression:
- the fourth sub-pixel “W 2 ” is a weighted average of the RGB sub-pixel values: W 2 = α·R 0 + β·G 0 + γ·B 0 .
- the path selectors 830 and 834 are configured to provide RGB pixel values to the input of calculating unit 840 .
- the coefficients ⁇ , ⁇ , and ⁇ may be selected by a user by writing appropriate values to configuration and status registers 70 .
- the path selectors 830 and 834 are configured to provide YCrCb pixel values to the input of calculating unit 840 , but a fourth path selector 842 is configured so that the calculating unit 840 is bypassed.
- the fourth sub-pixel may be calculated using calculating unit 844 , which evaluates the expression:
- the weighting factor A may be selected to weight one of W 1 or W 2 more heavily, or both may be weighted equally, in the determination of the fourth sub-pixel “W 3 ,” e.g., W 3 = A·W 1 + (1 − A)·W 2 .
- a user may select a desired value for A by writing an appropriate value to configuration and status registers 236 .
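The three fourth-sub-pixel options can be sketched as follows. The default W2 coefficients and the blend form A·W1 + (1 − A)·W2 for W3 are assumed readings of the text, not values from the patent:

```python
# Sketch of the three fourth-sub-pixel options described above: W1 is the
# minimum of the RGB sub-pixels, W2 is a weighted average with programmable
# coefficients, and W3 blends W1 and W2 by a weighting factor A. All
# coefficient values here are illustrative assumptions.

def w1(rgb):
    return min(rgb)

def w2(rgb, coeffs=(0.299, 0.587, 0.114)):
    return sum(k * c for k, c in zip(coeffs, rgb))

def w3(rgb, a=0.5, coeffs=(0.299, 0.587, 0.114)):
    return a * w1(rgb) + (1 - a) * w2(rgb, coeffs)
```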
- the weighting factor A may be varied as function of input pixel value.
- a user may store a set of weighting factors A in the LUT 828 .
- the WSG units 230 and 818 may include a saturation factor latency buffer 846 that may be used to buffer the S output 822 , and a fourth sub-pixel latency buffer 848 that may be used to buffer the WSP output 824 .
- the latency buffers 846 and 848 , and the input latency buffer 240 may be used individually or in combination to synchronize aspects of the respective operations of the CSP unit 228 and the WSG unit 818 (or WSG unit 230 ), which operate in parallel. In particular, it may be necessary to synchronize the outputting of a saturation factor S by the WSG unit 818 to the saturation adjustment module 330 (or unit 518 ) of a CSP unit.
- the latency buffers 846 , 848 , and 240 may be variable depth FIFOs.
- a method for determining how latency buffers may be set, according to one embodiment, is next described.
- in a first step, the processing modules to be used for a color processing algorithm, and the order in which the modules are used, are determined.
- a second step includes calculating the latency through a CSP unit up to completion of a saturation adjustment operation, and calculating the total latency through the CSP unit.
- in a third step, latencies of the WSG unit for determining the saturation factor S and for determining a fourth sub-pixel, if applicable, are calculated.
- the latencies calculated for the CSP and WSG data paths are compared.
- the input latency buffer 240 may be set to the difference between the two latency values.
- the fourth sub-pixel latency buffer may be set to the difference between the two latency values.
- the saturation factor latency buffer is set to the difference between the two latency values.
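The comparison steps above can be sketched as a small helper that turns the computed latencies into FIFO depths. The helper name, the example latency numbers, and the max(0, …) clamping are illustrative assumptions:

```python
# Sketch of the latency-balancing steps above: compare the computed CSP and
# WSG path latencies (in clock cycles) and set each variable-depth FIFO to
# the difference between the paths it must align.

def buffer_depths(csp_to_sat, csp_total, wsg_s, wsg_wsp):
    """Return (input, saturation-factor, fourth-sub-pixel) buffer depths."""
    input_depth = max(0, wsg_s - csp_to_sat)   # delay CSP input if the S path is slower
    s_depth = max(0, csp_to_sat - wsg_s)       # delay S if the CSP path is slower
    wsp_depth = max(0, csp_total - wsg_wsp)    # align the fourth sub-pixel with CSP output
    return input_depth, s_depth, wsp_depth
```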
- a table of supported configurations may additionally contain latency values corresponding with each configuration.
- the second and third steps may be automatically performed by looking up latency values in the table once configurations are set.
- a comparing circuit may then compare latency values from the table to determine appropriate latency buffer settings.
- the comparing circuit may automatically establish the latency buffer settings.
- the table may be stored in a memory.
- FIG. 9 illustrates a CFA Mapping and Post-Processing Unit (PPU) 232 according to one embodiment.
- the PPU 232 may include an input 920 , a convolution unit 922 , a line buffer 924 , a CFA mapping unit 926 , and an output 928 .
- the PPU may include other components, such as selecting units 930 and 932 .
- the PPU 232 may be programmed or configured to operate in one of two modes: sub-pixel or pixel mode.
- the PPU 232 may output sub-pixel data in a user-defined CFA format.
- the PPU 232 may accept as input pixel data having four color components, e.g., RGBW, CMYW. In alternative embodiments, the PPU 232 may accept pixel data defined by any number of components.
- the selecting unit 234 may be configured to obtain three color components from a CSP unit and a fourth color component from a WSG unit, or to obtain four color components from a WSG unit.
- the sub-pixel data may be stored in the processed color image buffer 222 .
- the PPU 232 writes sub-pixel data to the processed color image buffer 222 so that it is arranged in the buffer 222 for fetching by the display engine 154 in raster order.
- each pixel of an input image is mapped to one sub-pixel of a display device. Consequently, sub-pixel mode requires that the resolution of the input image be higher than the resolution of the display device. For example, each pixel of a 1,200 ⁇ 1,600 pixel color input image may be mapped to one sub-pixel of a 600 ⁇ 800 sub-pixel display device that has four sub-pixels per display pixel.
- just one color component of each pixel of the input image may be sampled in the mapping process. The sampled color component may be assigned to a mapped display sub-pixel.
- the value assigned to a mapped display sub-pixel may be determined based, at least in part, on a corresponding pixel's color components.
- a mapped display sub-pixel may be assigned the value of a fourth sub-pixel, where the fourth sub-pixel is determined based on the RGB or CMY values of the corresponding input pixel.
- FIG. 10 illustrates an example of mapping samples of input image pixels to sub-pixels of a display device.
- a portion of an exemplary color input image 1020 and a portion of an exemplary display device 1022 are shown in FIG. 10 .
- the color input image 1020 includes pixels 1024 .
- Each input pixel 1024 includes two or more color components 1026 , which in this example are R, B, G, and W color components.
- the display device 1022 includes display pixels 1028 .
- each display pixel 1028 includes R, B, G, and W sub-pixels 1030 .
- FIG. 10 illustrates that each pixel of an input image may be mapped to one sub-pixel of a display device in sub-pixel mode.
- input pixel P 0 may be mapped to display sub-pixel R 0
- input pixel P 1 may be mapped to display sub-pixel B 1
- input pixel P 6 may be mapped to display sub-pixel G 6
- input pixel P 7 may be mapped to display sub-pixel W 7 .
- FIG. 10 also illustrates that one color component of each pixel of the input image may be sampled and the sampled component assigned to the mapped sub-pixel.
- the R 0 color component of input pixel P 0 is sampled and assigned to the mapped sub-pixel R 0
- the B 1 color component of input pixel P 1 is sampled and assigned to the mapped sub-pixel B 1 .
- the components of an image pixel not sampled may not be assigned to a display sub-pixel. For instance, color components G 0 , B 0 , and W 0 of input pixel P 0 are not sampled and not assigned to a display sub-pixel.
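The sampling scheme above can be sketched in code. This is an illustrative model only, assuming a 2×2 RGBW CFA with R and B on the first row and G and W on the second, as in the example of FIG. 10; the function and names are not from the patent:

```python
# Sketch of "sub-pixel mode" mapping: each input pixel contributes exactly
# one color component to one display sub-pixel. Assumes a 2x2 RGBW CFA with
# R, B on the first row and G, W on the second (an illustrative layout).

CFA = [["R", "B"],
       ["G", "W"]]
CHANNEL_INDEX = {"R": 0, "G": 1, "B": 2, "W": 3}

def map_subpixel_mode(image):
    """image: 2-D list of (R, G, B, W) tuples whose dimensions equal the
    display's sub-pixel dimensions. Returns a 2-D list of sub-pixel values."""
    out = []
    for y, row in enumerate(image):
        out_row = []
        for x, pixel in enumerate(row):
            color = CFA[y % 2][x % 2]                    # CFA color here
            out_row.append(pixel[CHANNEL_INDEX[color]])  # sample one component
        out.append(out_row)
    return out
```

The unsampled components of each input pixel are simply discarded, matching the behavior described for components G 0 , B 0 , and W 0 above.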
- An advantage of the sub-pixel mode of mapping of the PPU 232 is that it may produce better color appearance than the pixel mode of operation.
- Use of the sub-pixel mode of mapping may result in image artifacts, however. For example, in an image with a high gradient, gray-scaled edges may become colored.
- Empirical testing indicates that image artifacts resulting from processing an input image in sub-pixel mode may be reduced by processing the input pixels with a convolution operation, which implements a blurring function.
- the convolution operation is preferably performed before a sub-pixel mapping operation.
- the convolution operation may be performed by convolution module 922 .
- a user may configure the selecting unit 930 to include or bypass the convolution module 922 , as desired for a particular color processing algorithm.
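As an illustration of the kind of blurring operation the convolution module 922 might perform before sub-pixel mapping (the actual kernel used in the hardware is not specified here), a simple 3×3 box filter applied per color channel:

```python
# Illustrative blurring convolution (3x3 box filter) of the sort that can
# reduce colored-edge artifacts when applied before sub-pixel mapping.
# The hardware module's actual kernel is an assumption, not stated above.

def box_blur(channel):
    """channel: 2-D list of ints. Returns the 3x3 box-blurred image;
    edge pixels average only the neighbors that exist."""
    h, w = len(channel), len(channel[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = count = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += channel[ny][nx]
                        count += 1
            out[y][x] = total // count   # integer mean of the neighborhood
    return out
```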
- each pixel of an input image is mapped to one pixel of a display device. For example, each pixel of a 600×800 pixel input image is mapped to one pixel of a 600×800 pixel display. If each display pixel includes four sub-pixels, each input pixel is mapped to four sub-pixels in the display.
- the line buffer 924 may be used to store one line of the input image.
- the input image may be received by the PPU unit 232 in raster order.
- the color components of each pixel may appear adjacent one another in the input data stream. For example, if pixels of the input image are in an RGBW format, the four color components of each input pixel may arrive in parallel at the input 920 .
- the sub-pixels of an RGBW pixel may not, however, appear adjacent one another in an output data stream, i.e., the order in which sub-pixel data are written to the processed color image buffer 222 .
- the sub-pixels of a particular input pixel may appear on different lines in the output data stream, as illustrated in a portion of an image 1120 and a portion of a display device 1122 shown in FIG. 11 .
- the image portion 1120 includes part of a line of pixels P 0 , P 1 , etc.
- the display device portion 1122 also includes part of a line of display pixels P 0 , P 1 , P 2 .
- Each display pixel includes R, G, B, and W sub-pixels. It may be seen from the example of FIG. 11 that the R 0 and B 0 sub-pixels in the display device 1122 are side-by-side on a first line, and the G 0 and W 0 sub-pixels are side-by-side on a second line.
- the sub-pixel pairs may be non-adjacent in the output data stream.
- the sub-pixels of a particular pixel need not all be written at the same time, i.e., the sub-pixels may be placed in non-adjacent locations in the output data stream.
- a user may configure the selecting unit 932 to include or bypass the line buffer 924 , as desired for a particular color processing algorithm.
- the PPU 232 may provide for flexible CFA mapping, i.e., the PPU 232 may be configured to output sub-pixel data in a user-defined CFA format.
- Different display devices may employ different CFAs. Consequently, it may be desirable to have a capability to map sub-pixels to a variety of different CFAs.
- CFAs may be viewed as arranging sub-pixels in columns and rows. Different CFAs may have different numbers of columns and rows. While sub-pixels may be square, this is not critical. Sub-pixels may be any desired shape, rectangular, polygonal, circular, etc.
- FIG. 12 illustrates several exemplary CFA configurations.
- CFA 1220 is a 2×2 sub-pixel matrix.
- CFA 1224 is a 4×4 sub-pixel matrix.
- CFA 1226 is a 2×4 sub-pixel matrix.
- a user may write parameters to configuration and status registers 236 that specify the dimensions of the CFA in terms of number of rows and columns.
- a user may write parameters to the configuration registers 236 that specify the color component to be assigned to a matrix location. For instance, for a 2×2 sub-pixel matrix, the locations may be defined in terms of rows and columns (row, column): (1, 1), (1, 2), (2, 1), and (2, 2).
- a user may specify that R is assigned location (1, 1), B is assigned location (1, 2), G is assigned location (2, 1), and W is assigned location (2, 2), for example.
- the PPU 232 uses the specified CFA dimensions and mapping scheme to map pixel data to sub-pixels of a display device.
- the PPU 232 may include horizontal and vertical sub-pixel counters that may be configured to place the sub-pixels in matrix locations corresponding to the designated mapping and CFA size.
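The counter scheme can be modeled as follows. The function and parameter names are illustrative; in the hardware, the CFA dimensions and per-location color assignments would come from the configuration and status registers 236:

```python
# Sketch of register-driven CFA mapping: horizontal and vertical counters
# wrap at the configured CFA size, and the wrapped (row, col) position
# selects a user-assigned color. Names are illustrative, not the patent's.

def cfa_color_at(x, y, cfa_rows, cfa_cols, assignment):
    """assignment maps (row, col), 1-indexed as in the text above,
    to a color name, e.g. {(1,1): "R", (1,2): "B", (2,1): "G", (2,2): "W"}."""
    row = (y % cfa_rows) + 1   # vertical counter, wraps at CFA height
    col = (x % cfa_cols) + 1   # horizontal counter, wraps at CFA width
    return assignment[(row, col)]
```

For the 2×2 example above, location (1, 1) is R, (1, 2) is B, (2, 1) is G, and (2, 2) is W.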
- the dithering module 334 may be programmed or configured to operate at sub-pixel resolution.
- the PPU 232 may be programmed or configured to operate in sub-pixel or pixel modes.
- Sub-pixel dithering may be employed in conjunction with CFA mapping in either pixel or sub-pixel mode.
- each pixel of an input image may be mapped to 3 or 4 sub-pixels of a display device.
- each pixel of an input image may be mapped to one sub-pixel of a display device.
- when the dithering module 334 is configured to operate at sub-pixel resolution, the quantization error of a particular color channel is diffused to same-colored sub-pixels of neighbor pixels. For example, the quantization error of a red sub-pixel of the input image is diffused to red sub-pixels of neighboring dithered pixels.
- FIG. 13 illustrates an exemplary map or template 1320 for specifying which neighbor pixels or sub-pixels should receive a quantization error of a current sub-pixel “P.”
- a current pixel or sub-pixel P is at location 1322 .
- Possible neighbors on the same line and to the right of P in a display are designated “A.”
- Possible neighbors on the next two lower lines directly below P in a display are designated “B.”
- Possible neighbors on the next two lower lines, but in columns preceding P's column are designated “C.”
- Possible neighbors on the next two lower lines, but in columns following P's column are designated “D.”
- locations A 0 , A 1 , A 2 , and A 3 of FIG. 13 correspond with the location of neighbor pixel P(i+1, j) of FIG.
- locations B 0 , and B 1 correspond with the location of neighbor pixel P(i, j+1)
- locations C 00 , C 10 , C 20 , C 01 , C 11 , and C 21 correspond with the location of neighbor pixel P(i−1, j+1)
- locations D 00 , D 10 , D 20 , D 01 , D 11 , and D 21 correspond with the location of neighbor pixel P(i+1, j+1).
- Locations with subscripts of 0 or 00 are used to designate pixel locations. Locations with subscripts other than 0 or 00 are used to designate sub-pixel locations.
- the map 1320 is conceptually superimposed on a CFA so that the current pixel or sub-pixel P is aligned with location 1322 .
- the map 1320 is conceptually moved so that location 1322 is aligned with a next current pixel or sub-pixel.
- quantization error may be diffused to adjacent pixels and a user may specify locations A 0 , B 0 , C 00 , and D 00 of the map of FIG. 13 .
- quantization error may be diffused to adjacent sub-pixels having the same color as the current sub-pixel.
- the particular mapping will depend on the particular CFA of the display device 124 .
- a user will select different neighbor sub-pixel locations depending on the particular CFA, e.g., a user may select A 1 for a first CFA, but A 2 for a second CFA.
- FIG. 14 illustrates an example of specifying locations for diffusing quantization error to sub-pixels in sub-pixel mode CFA mapping mode.
- FIG. 14 assumes an exemplary CFA 1418 in which sub-pixels appear in the order R, B, G, W on a first line, and these sub-pixels are vertically adjacent to sub-pixels that appear in the order G, W, R, B on a second line.
- the CFA includes two types of pixels: First pixels form a 2×2 matrix of sub-pixels, wherein the first row includes an R sub-pixel to the left of a B sub-pixel, and the second row includes a G sub-pixel to the left of a W sub-pixel.
- Second pixels form a 2×2 matrix of sub-pixels, wherein the first row includes a G sub-pixel to the left of a W sub-pixel, and the second row includes an R sub-pixel to the left of a B sub-pixel.
- the map 1420 is shown twice in FIG. 14 . First, it is shown, without alphabetical notations specifying sub-pixel locations, superimposed on the exemplary CFA 1418 . Second, the map 1420 is shown with sub-pixel color values associated with the sub-pixel locations on the map when superimposed on the CFA. The associated sub-pixel color value according to the CFA 1418 is shown in FIG. 14 above the diagonal line in each sub-pixel location. The current sub-pixel location 1322 is aligned with an R (red) sub-pixel.
- the sub-pixel locations A 3 , B 1 , C 10 , and D 10 may be selected by a user, as each of these locations corresponds with a neighbor red sub-pixel.
- a user may select different locations for a CFA different from the exemplary CFA 1418 .
- the maps 1320 , 1420 may be used with other CFAs to designate which neighbor pixels or sub-pixels should receive a quantization error of a current pixel or sub-pixel.
- a user may select one or more sub-pixel locations for error diffusion for a particular CFA by writing appropriate values to configuration and status registers 236 .
- a user may specify the weight, amount, or percent of error to be diffused to specified sub-pixels by writing appropriate values to configuration and status registers 236 .
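A software model of this configurable error diffusion might look like the following sketch. The neighbor offsets and weights stand in for register contents; the quantizer and the exact arithmetic of the hardware dithering module are assumptions:

```python
# Sketch of configurable error diffusion over a selected-neighbor template,
# in the spirit of the map of FIG. 13. The user supplies (dx, dy) offsets
# and weights (register contents in the hardware; plain arguments here).
# For sub-pixel mode, offsets are chosen so they land on same-color
# neighbors of the particular CFA, e.g. (4, 0) rather than (1, 0).

def diffuse(channel, levels, neighbors):
    """channel: 2-D list of 8-bit values; levels: number of output levels;
    neighbors: list of ((dx, dy), weight) with weights summing to 1.0."""
    h, w = len(channel), len(channel[0])
    work = [row[:] for row in channel]
    step = 255.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = work[y][x]
            q = min(max(round(old / step), 0), levels - 1)
            new = q * step                      # quantize to nearest level
            work[y][x] = new
            err = old - new                     # quantization error
            for (dx, dy), wgt in neighbors:
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    work[ny][nx] += err * wgt   # diffuse weighted error
    return work
```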
- FIG. 15 depicts a simplified cross-sectional representation of a portion of the exemplary electrophoretic display 1518 .
- the display 1518 may include electrophoretic media sandwiched between a transparent common electrode 1520 and a plurality of sub-pixel electrodes 1522 .
- the sub-pixel electrodes 1522 may reside on a substrate 1524 .
- the electrophoretic media may include one or more (and typically, many) microcapsules 1526 .
- Each microcapsule 1526 may include positively charged white particles 1528 and negatively charged black particles 1530 suspended in a fluid 1532 .
- white particles may be negatively charged and black particles positively charged.
- each sub-pixel may correspond with one sub-pixel electrode 1522 , however, this is not required or critical.
- Each sub-pixel may correspond with one or more microcapsules 1526 .
- each sub-pixel includes a filter disposed between the transparent common electrode 1520 and the microcapsules 1526 associated with the particular sub-pixel.
- the filter 1534 may be a blue color filter
- the filter 1536 may be a green color filter
- the filter 1538 may be a white filter
- the filter 1540 may be a red color filter.
- the white filter 1538 may be a transparent structure; alternatively, a white filter may be omitted or absent from the location between the microcapsules 1526 associated with a particular sub-pixel and the common electrode 1520 .
- the transparent common electrode 1520 may be disposed between the sub-pixel filters and the microcapsules 1526 associated with the particular sub-pixel.
- the color filters of display 1518 correspond with the RGBW color model. Any desired set of color filters may be used, e.g., RGB, CMY, RGBY, CMYB, or CMYK.
- the common electrode 1520 may be placed at ground or some other suitable voltage, and a suitable voltage is placed on a sub-pixel electrode 1522 .
- an electric field is established across the microcapsule(s) 1526 associated with the sub-pixel.
- the white particles 1528 may move toward the common electrode 1520 , which results in the display pixel becoming whiter or more reflective in appearance.
- the black particles 1530 may move toward the common electrode 1520 , which results in the display pixel becoming blacker or less reflective in appearance.
- an incident ray 1542 of ambient light is reflected off one of the microcapsules 1526 associated with the blue display sub-pixel 1534 . While the ray 1542 enters through blue color filter 1534 , it exits through the green color filter 1536 associated with an adjacent sub-pixel. As a result, a reflected ray 1544 is influenced by both the blue and green color filters 1534 and 1536 . As a consequence, the reflected ray 1544 may appear as cyan. Generally, this is not desirable. Scattered light reflections that exit through the color filters of adjacent sub-pixels may alter the color appearance of images on a display device in undesirable and unnatural appearing ways. Further, this side scattering problem may reduce the gamut of displayable colors. Moreover, this side scattering problem may become more pronounced when the display 1518 is viewed at an angle from one side or the other. Consequently, the side scattering problem may also reduce usable viewing angle.
- FIG. 16 illustrates one possible solution for the side scattering problem.
- FIG. 16 depicts a simplified cross-sectional representation of a portion of the exemplary electrophoretic display 1618 . Parts of the display 1618 numbered the same as parts of display 1518 may be the same.
- the display 1618 includes a blue color filter 1620 , a green color filter 1624 , a white filter 1626 , and a red color filter 1628 .
- the color filters for the display 1618 differ from the color filters of display 1518 in that they do not fully cover the microcapsules 1526 associated with a sub-pixel. Instead, there are gaps 1630 between adjacent color filters.
- the openings 1630 may be present on all four sides of a sub-pixel as viewed from the front, i.e., there may be a separation 1630 between a particular filter and the filters to either side in a row, and the filters in the rows above and below the particular filter.
- an incident ray 1632 of ambient light is reflected off one of the microcapsules 1526 associated with the blue display sub-pixel. While the ray 1632 enters through blue color filter 1620 , reflected ray 1634 exits through the gap 1630 between the blue and green color filters 1620 , 1624 . Unlike the reflected ray 1544 , the reflected ray 1634 is only influenced by the blue color filter 1620 . The color of reflected ray 1634 will be influenced by the filter it passes through on the way to the microcapsule 1526 and the transparency of the gap 1630 . However, the use of gaps 1630 separating color filters may reduce the saturation of colors rendered on the display.
- FIG. 17 illustrates an alternative solution to the side scattering problem, which may minimize or eliminate the reduction in color saturation that can occur when color filters are sized so that gaps or openings separate adjacent color filters.
- FIG. 17 depicts a simplified cross-sectional representation of a portion of the exemplary electrophoretic display 1718 , according to one embodiment. Parts of the display 1718 numbered the same as parts of display 1518 may be the same.
- the display 1718 includes a green color filter 1720 , a white color filter 1722 , and blue color filters 1724 and 1726 .
- an incident ray 1742 of ambient light is reflected off one of the microcapsules 1526 associated with the green display sub-pixel.
- a reflected ray 1730 exits through the white color filter 1722 associated with an adjacent sub-pixel.
- the color of reflected ray 1730 will be influenced by the filter it passes through on the way to the microcapsule 1526 , i.e., green filter 1720 , and the transparency of the white color filter 1722 .
- the reflected ray 1730 is not undesirably influenced by an adjacent red or blue color filter.
- FIG. 17 also illustrates a front view of a CFA 1732 , which corresponds with the display portion 1718 .
- the CFA 1732 may include four sub-pixel color filters of the same color surrounded by white sub-pixels.
- the white sub-pixels of the CFA 1732 may be modulated to appear in varying states of reflectance.
- An advantage of the CFA 1732 is that white sub-pixels may be controlled or modulated to reflect more or less light to compensate for any reduction in saturation due to the inclusion of white pixels in the CFA.
- sub-pixels having color filters may be arranged in rows and columns in a repeating pattern, e.g., a Bayer pattern.
- each sub-pixel having a color filter may be horizontally adjacent or vertically adjacent to one or more white sub-pixels (or both horizontally adjacent and vertically adjacent).
- a color filter for a colored sub-pixel, e.g., green, and a color filter for a white sub-pixel, e.g., transparent may be horizontally or vertically adjacent one another.
- the color filter for the colored sub-pixel may horizontally or vertically contact or adjoin a white sub-pixel.
- vertical and horizontal refer to the front view of a CFA.
- the green sub-pixel 1720 shown in the CFA 1732 of FIG. 17 is horizontally adjacent the white sub-pixel 1722 .
- the green sub-pixel 1720 vertically contacts or adjoins the white sub-pixel 1722 .
- the white sub-pixel 1722 shown in the CFA 1732 of FIG. 17 is not horizontally or vertically adjacent to a colored sub-pixel. Instead, the white sub-pixel 1722 is diagonally adjacent to colored sub-pixels. In one embodiment, a diagonally adjacent sub-pixel need not be a white sub-pixel. In particular, even though the sub-pixel 1722 is labeled in FIG. 17 as a white sub-pixel, it may be a red, green, or blue sub-pixel in this example. In one embodiment, the sub-pixel 1722 may be a green sub-pixel.
- the white color filters 1722 need not be white; they may be any desired color, e.g., yellow.
- the white filter 1722 may be a transparent structure; alternatively, a white filter may be omitted or absent from the location between the microcapsules 1526 associated with a particular sub-pixel and the common electrode 1520 .
- the CFA 1732 may be used with an RGBW color model. Any desired set of color filters may be substituted for the primary colors RGB, e.g., CMY.
- FIGS. 18 and 23 illustrate alternative embodiments of the CFA 1732 .
- FIG. 18 shows a CFA 1820 in which the white sub-pixels are smaller than the colored sub-pixels. In this example, the white sub-pixels are half as tall and half as wide as the colored sub-pixels.
- FIG. 18 also shows a CFA 1822 in which the white sub-pixels are one-fourth as tall and one-fourth as wide as the colored sub-pixels.
- FIG. 23 illustrates a CFA 2320 and a CFA 2322 .
- the CFAs 2320 and 2322 show that the white sub-pixels in a CFA may be provided in two or more sizes, and that the white sub-pixels in a CFA may differ in horizontal and vertical dimensions. In addition, the white sub-pixels in a CFA may differ dimensionally from the non-white sub-pixels.
- because the color processor 152 may be configured in many different ways, it may be used to evaluate many different color processing algorithms for EPDs. Empirical testing of the color processor 152 with a variety of color processing algorithms indicates that color processing algorithms suitable for color EPDs can still be implemented even though certain functions available in the color processor 152 are eliminated, or even though some of the options associated with a particular function are eliminated. Empirical testing also indicates that color processing algorithms suitable for color EPDs can still be implemented even though the order of performing color processing functions is restricted.
- FIG. 19 illustrates a block diagram of a circuit 1920 for implementing the flexible data path 322 for color synthesis of primaries according to an alternative embodiment.
- the circuit 1920 employs a smaller number of logic gates than the circuit 420 .
- the circuit 1920 may be included in the CSP unit 228 in one embodiment.
- the circuit 1920 may include, in one embodiment, a data switch 1922 , a color correction module 1924 , a filtering module 1926 , a color linearization module 1928 , a dithering module 1930 , and a color saturation adjustment module 1932 .
- the data switch 1922 includes an input 1934 for receiving image data and an output 1936 for outputting image data.
- the data switch 1922 includes multiplexers M 7 to M 11 , each multiplexer including a select input (not shown).
- the data switch 1922 may be programmed or configured to include or exclude any particular processing module in a color processing algorithm using the select inputs.
- One advantage of the capability to exclude any particular processing module is that it permits separate analysis of each processing module apart from the effects of other processing modules. The order in which processing modules are used, however, is limited, as shown in FIG. 20 .
- the input color depth of the circuit 1920 is set at (5:6:5) rather than (8:8:8). In one embodiment, the input color depth of the circuit 1920 is RGB (5:6:5). Further, the circuit 1920 is limited to providing as output 12-bit pixel data in a 4:4:4 format.
- a color processing algorithm that only operates on image data in its native resolution may be wasteful of power and processing time.
- use of a color processing algorithm to pre-process a digital image at the bit-per-pixel resolution of the electro-optic display device may result in a rendered image having a sub-optimal appearance.
- the inventor has recognized that one reason the appearance of the rendered image may be sub-optimal is that performing the color processing algorithm at a higher degree of precision than the electro-optic display is capable of rendering results in an improved selection of available display states or colors. For example, experiments by the inventor showed better color appearance of rendered images when a color processing algorithm performed its operations on 5:6:5 pixel data than when the same operations were performed on 4:4:4 pixel data.
- a color processing algorithm may include two or more operations and it may be desirable to perform certain of those operations at different pixel resolutions.
- FIG. 20 is a simplified block diagram of a color processor including an alternative representation of the circuit of FIG. 19 according to one embodiment.
- the data switch 1922 (not shown in FIG. 20 ) may be programmed or configured so that any of the processing modules 1924 , 1926 , 1928 , 1930 , and 1932 may be included or excluded from a color processing algorithm.
- FIG. 20 illustrates that the order in which the shown processing modules are used is generally fixed, except that the color linearization module 1928 may be invoked either preceding the dithering module 1930 or following color saturation adjustment module 1932 . As shown in FIG. 20 , if all modules are used, the color correction module 1924 may only be used first, and the filtering module 1926 may only be used second.
- the color linearization module 1928 may be used after the filtering module 1926 . If the color linearization module 1928 is used after filtering, the dithering module 1930 may only be used fourth. Otherwise, the dithering module 1930 may only be used third.
- the saturation adjustment module 1932 may only be used after the dithering module 1930 . The saturation adjustment module 1932 may only be used last if the color linearization module 1928 is used following the filtering module 1926 . If the color linearization module 1928 is not used following the filtering module 1926 , the color linearization module 1928 is used last.
- the CSP circuit 1920 reflects empirical testing with a variety of color processing algorithms for color EPDs. Testing indicated that if color correction is necessary, it is advantageous to perform this process first. Further, it was determined that it is not critical to include RGB to YCrCb conversion in the color correction module 1924 . Accordingly, color correction module 1924 does not include this color space conversion capability. In one embodiment, the color correction module 1924 implements the following expression:
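The expression itself is not reproduced in this extract. A common form for such a correction, consistent with the kernel coefficients and RGB offset values mentioned below, applies a 3×3 kernel to the RGB vector and adds per-channel offsets; the following sketch is an assumption of that general form, not the patented expression:

```python
# Assumed general form of a color correction without color space
# conversion: a 3x3 kernel applied to (R, G, B) plus per-channel offsets.
# Coefficient and offset names are illustrative.

def color_correct(r, g, b, kernel, offsets):
    """kernel: 3x3 list of coefficients; offsets: (r_off, g_off, b_off).
    Returns the corrected (R', G', B') tuple."""
    channels = (r, g, b)
    return tuple(
        sum(kernel[i][j] * channels[j] for j in range(3)) + offsets[i]
        for i in range(3)
    )
```

An identity kernel with zero offsets passes pixels through unchanged; the predetermined settings listed below would correspond to stored kernel/offset sets.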
- the color correction module 1924 includes one or more predetermined sets of kernel coefficients and RGB offset values.
- a user may choose a predetermined setting. Examples of predetermined settings include (a) mild color enhance; (b) color enhance; (c) strong color enhance; (d) gray scale; (e) mild white warm; (f) mild daylight; and (g) mild illuminant.
- the user may choose to select individual values for the color correction variables.
- a user may select a predetermined setting or a custom setting by writing appropriate values to configuration and status registers 236 .
- the filtering module 1926 is sized to process 5:6:5 pixel data.
- the filtering module 1926 includes one or more predetermined sets of filter coefficients. Instead of selecting individual values for the filter coefficients, a user may choose a predetermined setting. Examples of predetermined settings include: five levels of sharpening, plus (a) blur; (b) edge detect; (c) sketch; (d) sepia; (e) edge enhance; (f) emboss; (g) gray scale; and (h) bump mapping. Alternatively, the user may choose to select individual values for the filter coefficients. A user may select a predetermined setting or a custom setting by writing appropriate values to configuration and status registers 236 .
- the color linearization module 1928 may be the same as the color linearization module 328 .
- the dithering module 1930 may be placed so that it is performed after the color correction and image sharpening functions. In one embodiment, the dithering module 1930 may be the same as the dithering module 334 .
- CFAs that include white sub-pixels have decreased color saturation in comparison with CFAs that omit white sub-pixels.
- Testing identified color saturation adjustment as an important function for inclusion in many color processing algorithms, especially those color processing algorithms for displays having CFAs that include white sub-pixels. Testing indicated that performing color saturation adjustment after performing a dithering operation produced visually pleasing results.
- the color saturation adjustment module 1932 implements the following equations:
- R′ = S×(R 0 −Y)+Y
- G′ = S×(G 0 −Y)+Y
- B′ = S×(B 0 −Y)+Y
- the portion of the color saturation adjustment module 1932 that determines R′G′B′ uses only 3 multipliers and 6 adders.
- the portion of the color saturation adjustment module 1932 that determines Y uses only 2 adders. Consequently, the color saturation adjustment module 1932 is smaller and more efficient than color saturation adjustment module 330 .
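The saturation equations above can be modeled directly. The luma estimate Y = (R + 2G + B)/4 used below is an assumption: it is one common approximation realizable with two adders and shifts, consistent with the stated 2-adder cost, but the patent does not spell out the Y expression here:

```python
# Model of R' = S*(R - Y) + Y (and likewise for G and B).
# The Y approximation (R + 2G + B) / 4 is an assumption chosen because it
# needs only two adders and shifts, matching the stated hardware cost.

def saturation_adjust(r, g, b, s):
    """s > 1 boosts saturation, s = 1 is identity, s = 0 yields gray."""
    y = (r + 2 * g + b) / 4        # assumed 2-adder luma estimate
    return (s * (r - y) + y,
            s * (g - y) + y,
            s * (b - y) + y)
```

With S = 0, every channel collapses to Y, which is why boosting S can compensate for the desaturation introduced by white sub-pixels in a CFA.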
- the circuit 1920 accepts 16-bit pixel data (5:6:5). Bit depth of an input image may be reduced to 16-bits by truncating the least significant bits of each sub-pixel. Alternatively, input pixels may have their bit depth reduced by rounding or using the floor function. For example:
- X is the 8-bit data value of an input image sub-pixel and Y is the 5-bit value of the corresponding bit-depth-reduced sub-pixel.
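Both reduction methods can be written in a line each. The shift-by-3 form is the natural mapping of floor(X/8) for an 8-bit to 5-bit reduction; the rounded form adds half an LSB first and clamps to the 5-bit maximum:

```python
# 8-bit to 5-bit sub-pixel bit-depth reduction, as described in the text:
# truncation drops the three least-significant bits; rounding adds half an
# LSB before shifting and clamps to the 5-bit maximum of 31.

def truncate_8_to_5(x):
    return x >> 3                    # floor(X / 8)

def round_8_to_5(x):
    return min((x + 4) >> 3, 31)     # round(X / 8), clamped to 31
```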
- Empirical testing with a variety of color processing algorithms for color EPDs sought to identify an appropriate level of calculation accuracy for each of the processing modules.
- the color linearization LUTs are of a size that accommodates 6-bits per pixel.
- FIG. 21 illustrates a block diagram of a WSG unit 2120 according to an alternative embodiment.
- the WSG unit 2120 employs a smaller number of logic gates than WSG unit 818 .
- the WSG unit 2120 does not require latency FIFOs because the latency for S is constant (zero with respect to the saturation adjustment module 1932 ).
- the latency for WSP is either 1 or 2.
- flip-flop delays may be employed.
- the WSG unit 2120 reflects empirical testing with a variety of color processing algorithms for color EPDs.
- the WSG unit 2120 includes LUT memory 2122 , which may be 16-bits wide.
- the LUT 2122 may be used to implement two or more configurations.
- FIG. 22 illustrates three possible configurations in which the LUT 2122 may be used.
- bits 0 - 7 of LUT 2122 may be used to store values of saturation factor S
- bits 8 - 11 may be used to store values of fourth pixel “WSP.”
- bits 0 - 3 of LUT 2122 may be used to store R values
- bits 4 - 7 may be used to store G values
- bits 8 - 11 may be used to store B values
- bits 12 - 15 may be used to store values of fourth pixel WSP.
- bits 0 - 3 of LUT 2122 may be used to store C values
- bits 4 - 7 may be used to store M values
- bits 8 - 11 may be used to store Y values
- bits 12 - 15 may be used to store values of fourth pixel WSP.
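The second configuration's bit layout can be expressed as simple pack/unpack helpers (illustrative code, not the hardware; the CMYW configuration uses the same field positions with C, M, Y in place of R, G, B):

```python
# Packing and unpacking of a 16-bit LUT entry in the second configuration
# described above: bits 0-3 = R, bits 4-7 = G, bits 8-11 = B,
# bits 12-15 = fourth pixel WSP.

def pack_rgbw_entry(r, g, b, wsp):
    assert all(0 <= v <= 15 for v in (r, g, b, wsp)), "4-bit fields"
    return r | (g << 4) | (b << 8) | (wsp << 12)

def unpack_rgbw_entry(word):
    return (word & 0xF,
            (word >> 4) & 0xF,
            (word >> 8) & 0xF,
            (word >> 12) & 0xF)
```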
- the output 2128 may output 8-bit S values, the R and G values of a 4:4:4:4 RGBW pixel, or the C and M values of a 4:4:4:4 CMYW pixel.
- the output 2130 may output 4-bit fourth pixel values that may be combined with RGB values.
- the output 2130 may output the Y and W values of a CMYW pixel.
- the second configuration 2222 and third configuration 2224 show that the WSG unit 2120 enables one-to-one mapping of input and output pixel values. A user may store desired values in the LUT 2122 .
- the concepts disclosed in this specification can be used to develop and modify color processing algorithms for existing and future-developed color EPDs in a flexible manner.
- the most desirable color processing algorithm for a particular EPD will depend on ambient lighting conditions and the type of image being rendered.
- the determination of a color processing algorithm for a particular EPD is a complex process involving many variables. If an assumption is made that the EPD will be viewed in bright light, less upward adjustment of luma and saturation will likely be called for than in cases where it is assumed that the EPD will be viewed in relatively dim light. Similarly, different luma and saturation adjustments may be deemed optimum for viewing black and white text as compared with those desired for color photographs of human faces or natural landscapes.
- parameters for programming or configuring first, second, third, and fourth color processing algorithms may be stored in either system memory 130 or display controller memory 150 .
- the first color processing algorithm may be determined to be optimum for viewing a particular EPD rendering a text image in bright, natural ambient lighting conditions, e.g., sunlight.
- the second color processing algorithm may be determined to be optimum for viewing the particular EPD rendering a photographic image of a human face in bright, natural ambient lighting conditions.
- the third color processing algorithm may be determined to be optimum for viewing the particular EPD rendering the text image in low, artificial ambient lighting conditions, e.g., a tungsten light source in a darkened room.
- the third color processing algorithm may boost luma and saturation as compared with the first color processing algorithm.
- the fourth color processing algorithm may be determined to be optimum for viewing the particular EPD rendering the photographic image of a human face in low, artificial ambient lighting conditions.
- the fourth color processing algorithm may boost luma and saturation in a manner similar to the third algorithm and may additionally adjust color to correct for color distortion caused by the tungsten light source.
- the storing of two or more color processing algorithms in a memory allows selection and use of a color processing algorithm best suited for viewing conditions, image type, and display type.
- the determination of current viewing conditions may be made explicitly by an end user of the display system, or automatically through the use of the image sensor 118 .
- the end user may select a current viewing condition by choosing one of two or more predetermined options from a menu, e.g., sunlight, overcast outdoor light, bright indoor light, tungsten light, fluorescent light, etc.
- the image sensor 118 may determine both the ambient light level and the spectral components of the ambient light source.
- the determination of image type may be made explicitly by an end user of the display system, or automatically.
- the end user may select a current image type by choosing one of two or more predetermined options from a menu, e.g., black and white text, black and white text including fewer than five highly saturated colors, color photograph of human face, color photograph of landscape, cartoon, etc.
- the determination of image type may be performed automatically by pre-coding the image file with image type, or by use of one or more known automatic image analysis techniques.
- automatic image analysis software or hardware may be used to prepare a color histogram of an image. Using the histogram, images may be categorized by color content. For example, a text image may be recognized as having characteristic color content.
- a facial image may be recognized as having one or more characteristic color contents.
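As a sketch of the histogram-based categorization described above, the following toy classifier labels an image "text" when a small number of colors dominates its histogram. The threshold value and the category names are illustrative assumptions, not part of the disclosure.

```python
from collections import Counter

def classify_image(pixels, text_threshold=0.9):
    """Toy classifier: label an image 'text' or 'photo' from its color
    histogram. The threshold and categories are illustrative only."""
    hist = Counter(pixels)              # pixels: iterable of (r, g, b) tuples
    total = sum(hist.values())
    # Text pages are dominated by very few colors (e.g., black on white).
    top_two = sum(count for _, count in hist.most_common(2))
    return "text" if top_two / total >= text_threshold else "photo"

# A mostly black-on-white page is recognized as text.
page = [(0, 0, 0)] * 20 + [(255, 255, 255)] * 78 + [(200, 30, 30)] * 2
print(classify_image(page))  # -> text
```

A production classifier would of course use a richer feature set, but the dominant-color test captures the "characteristic color content" idea in a few lines.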
- the most suitable color processing algorithm for the determined viewing conditions and image type may be retrieved from memory and used to program or configure the display system.
- the display system may be reconfigured, either automatically or explicitly by the user, to use a more suitable algorithm.
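The retrieval of a stored algorithm keyed by viewing condition and image type might be sketched as a simple table lookup. The condition names, image-type names, and algorithm labels below are invented placeholders for the four algorithms described above; in a real system the values would be sets of configuration parameters.

```python
# Hypothetical registry keyed by (viewing condition, image type).
ALGORITHMS = {
    ("bright_natural", "text"): "first",
    ("bright_natural", "face_photo"): "second",
    ("low_tungsten", "text"): "third",         # boosts luma and saturation
    ("low_tungsten", "face_photo"): "fourth",  # also corrects tungsten cast
}

def select_algorithm(condition, image_type, default="first"):
    """Retrieve the algorithm best suited to the current conditions."""
    return ALGORITHMS.get((condition, image_type), default)

print(select_algorithm("low_tungsten", "text"))  # -> third
```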
- parameters for configuring multiple color processing algorithms may be stored in a memory, and the image to be rendered on a display device may include two or more images.
- the image to be rendered includes a text image and a color photograph.
- the storing of two or more color processing algorithms in a memory allows selection and use of a color processing algorithm suited for the type of sub-image. Where there are two image types to be rendered simultaneously, a different color processing algorithm may be selected for each sub-image. Selection of a suitable color processing algorithm for each sub-image may be automatic using a known automatic image analysis technique, or may be explicitly made by an end user.
- the selecting of the set of operations to include in a color processing algorithm may be based on a determined optical property of an ambient light source, the determined image type, and the type of display device. For example, the image rendering characteristics of a particular type of electro-optic display device may be taken into consideration along with lighting conditions and image type when specifying a color processing algorithm.
- some or all of the operations and methods described in this description may be performed by executing instructions that are stored in or on a non-transitory computer-readable medium.
- computer-readable medium may include, but is not limited to, non-volatile memories, such as EPROMs, EEPROMs, ROMs, floppy disks, hard disks, flash memory, and optical media such as CD-ROMs and DVDs.
- the instructions may be executed by any suitable apparatus, e.g., the host 122 or the display controller 128 . When the instructions are executed, the apparatus performs physical machine operations.
- references may be made to “one embodiment” or “an embodiment.” These references mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the claimed inventions. Thus, the phrases “in one embodiment” or “an embodiment” in various places are not necessarily all referring to the same embodiment. Furthermore, particular features, structures, or characteristics may be combined in one or more embodiments.
Abstract
Description
- The present application claims the benefit under 35 USC Section 119(e) of U.S. Provisional Patent Application Ser. No. 61/347,263, filed May 21, 2010. The present application is based on and claims priority from the provisional application, the disclosure of which is hereby expressly incorporated herein by reference in its entirety.
- The field of the present invention relates generally to digital image processing for display devices.
- A digital image is composed of a multitude of small picture elements, or pixels. When a color digital image is rendered on a display device, a single pixel may be formed from red, green, and blue (RGB) sub-pixels. The sub-pixels in some RGB display devices may include either a red, green, or blue filter. The sub-pixels in a display device are spatially close and, for this reason, human vision perceives the red, green, and blue sub-pixels as a single colored pixel. By modulating the colors of the individual sub-pixels, a range of colors can be generated for each pixel.
- A color filter array (CFA) describes the arrangement of sub-pixels in color image sensors and in color display devices. A variety of CFAs are known. The Bayer CFA is one well-known example. Red, green, and blue sub-pixels are arranged in a square grid in the Bayer CFA. There are as many green sub-pixels as blue and red sub-pixels combined, with a green sub-pixel at every other position in both the horizontal and vertical directions, and the remaining positions populated with blue and red sub-pixels. In the Bayer CFA, a single pixel includes two green sub-pixels and one each of blue and red.
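The Bayer layout just described can be generated programmatically. This sketch assumes one common phase of the pattern (GR rows alternating with BG rows); other phases such as RG/GB are equally valid.

```python
def bayer_color(row, col):
    """Color of the sub-pixel at (row, col) in a Bayer grid: green at
    every other position in both directions; red and blue fill the rest."""
    if (row + col) % 2 == 0:
        return "G"
    return "R" if row % 2 == 0 else "B"

# A 4x4 patch of the grid: exactly half the sites are green.
patch = [[bayer_color(r, c) for c in range(4)] for r in range(4)]
for line in patch:
    print(" ".join(line))
```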
- Conventionally, the data for a color pixel define how much color each sub-pixel adds to the perceived color of the pixel. The data for each sub-pixel can vary within a range depending on the number of data bits allocated in the display system for sub-pixel values. For example, for 24-bit RGB color, 8 bits are allocated per sub-pixel, providing a range of 256 possible values for each color channel. If the data values for all components of an RGB pixel are zero, the pixel theoretically appears black. On the other hand, if all three sub-pixel values are at their maximum value, the pixel theoretically appears white. RGB pixel data expressed using 24-bits (8:8:8) provides for a color palette of 16,777,216 colors. Color pixel data, however, need not be expressed using 24-bits. RGB pixel data may be represented using as few as one bit per channel (1:1:1), providing a color palette of eight colors.
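The palette arithmetic above follows directly from the number of bits allocated per channel; a minimal sketch:

```python
def palette_size(bits_per_channel):
    """Distinct colors available with equal bits per RGB channel."""
    values_per_channel = 2 ** bits_per_channel
    return values_per_channel ** 3

print(palette_size(8))  # -> 16777216  (24-bit, 8:8:8)
print(palette_size(1))  # -> 8         (3-bit, 1:1:1)
```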
- An electro-optic material has at least two “display states,” the states differing in at least one optical property. An electro-optic material may be changed from one state to another by applying an electric field across the material. The optical property may or may not be perceptible to the human eye, and may include optical transmission, reflectance, or luminescence. For example, the optical property may be a perceptible color or shade of gray.
- Electro-optic displays include the rotating bichromal member, electrochromic medium, electro-wetting, and particle-based electrophoretic types. Electrophoretic display devices ("EPD"), sometimes referred to as "electronic paper" devices, may employ one of several different electro-optic technologies. Particle-based electrophoretic media include a fluid, which may be either liquid or gaseous. Various types of particle-based EPD devices include those using encapsulated electrophoretic, polymer-dispersed electrophoretic, and microcellular media. Another electro-optic display type similar to the EPD is the dielectrophoretic display.
- An electro-optic display device may have display pixels or sub-pixels that have multiple stable display states. Display devices in this category (a) are capable of displaying two or more display states, and (b) those display states are considered stable. The display pixels or sub-pixels of a bistable display may have first and second stable display states. The first and second display states differ in at least one optical property, such as a perceptible color or shade of gray. For example, in the first display state, the display pixel may appear black, and in the second display state, the display pixel may appear white. The display pixels or sub-pixels of a display device having multiple stable display states may have three or more stable display states, each of the display states differing in at least one optical property, e.g., light, medium, and dark shades of a particular color. For example, the display pixels or sub-pixels may exhibit display states corresponding with 4, 8, 16, 32, or 64 different shades of gray.
- With respect to capability (b), the display states may be considered to be stable, according to one definition, if the persistence of the display state with respect to display pixel drive time is sufficiently large. An exemplary electro-optic display pixel or sub-pixel may include a layer of electro-optic material situated between a common electrode and a pixel electrode. The display state of the display pixel or sub-pixel may be changed by driving a drive pulse (typically a voltage pulse) on one of the electrodes until the desired appearance is obtained. Alternatively, the display state of a display pixel or sub-pixel may be changed by driving a series of pulses on the electrode. In either case, the display pixel or sub-pixel exhibits a new display state at the conclusion of the drive time. If the new display state persists for at least several times the duration of the drive time, the new display state may be considered stable. Generally, in the art, the display states of display pixels of liquid crystal displays ("LCD") and CRTs are not considered to be stable, whereas those of electrophoretic displays, for example, are considered stable.
- The appearance of a color image on a display device may be improved by enhancing the color image before it is rendered. Color data pixels include a color component for each color channel. Accordingly, a capability for enhancing individual color components of the data pixels of a color image may be useful.
- An embodiment is directed to a method for processing color sub-pixels. The method may include receiving a color image and mapping the color image to a display device. The color image may be defined by two or more data pixels, each data pixel having at least a first and second color component. The display device may have two or more display pixels, each display pixel having two or more sub-pixels. The mapping may include mapping a first color component of a first data pixel to a first sub-pixel of a first display pixel, mapping a second color component of a second data pixel to a second sub-pixel of the first display pixel, and storing the first and second color components in a memory. In one embodiment, the display device is an electro-optic display device having two or more stable display states. In one embodiment, the method may include causing the display states of the first and second sub-pixels to change to display states corresponding with the first and second color components.
- In one embodiment, the first and second color components each have an associated color property, and the method may include selecting one or more sub-pixel locations in a color filter array map to diffuse quantization error, determining a first quantized color component for the first color component, determining a first quantization error associated with the first quantized color component, and diffusing the first quantization error to the selected one or more sub-pixel locations. In addition, the method may include determining whether the first color component has a value within a particular range of color component values, and excluding the first color component from the diffusing of the first quantization error to the selected one or more sub-pixel locations if the value of the first color component is outside of the particular range. In one embodiment, the color filter array map may include white sub-pixels.
- An embodiment is directed to a method for reducing the resolution of color sub-pixels. The method may include selecting one or more sub-pixel locations in a color filter array map to diffuse quantization error, receiving a color image defined by two or more data pixels, each data pixel having two or more color components, each color component having a color property, and determining a quantized color component for each color component of a first data pixel. In addition, the method may further include determining a quantization error associated with each quantized color component, and diffusing the quantization errors to the selected one or more sub-pixel locations.
- In one embodiment, the method for reducing the resolution of color sub-pixels may include determining whether the first data pixel has a value within a particular range of data pixel values, and excluding the first data pixel from the diffusing of the quantization errors to the selected one or more sub-pixel locations if the value of the first data pixel is outside of the particular range.
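A drastically simplified, one-dimensional sketch of the quantization, error-diffusion, and range-exclusion steps described above. The disclosure diffuses error to selected sub-pixel locations in a two-dimensional CFA map; here a list of (offset, weight) pairs stands in for those locations, and the default weight, levels, and range are illustrative assumptions.

```python
def diffuse_channel(values, levels=2, vmin=0, vmax=255,
                    neighbors=((1, 7 / 16),), keep_range=(0, 255)):
    """Minimal 1-D error-diffusion sketch for one color channel.
    `neighbors` stands in for the selected sub-pixel locations;
    `keep_range` models excluding out-of-range components from diffusion."""
    out = list(values)
    step = (vmax - vmin) / (levels - 1)
    result = []
    for i, v in enumerate(out):
        q = round((v - vmin) / step) * step + vmin   # quantized component
        result.append(q)
        err = v - q                                  # quantization error
        lo, hi = keep_range
        if not (lo <= v <= hi):
            continue                                 # excluded from diffusion
        for off, w in neighbors:
            j = i + off
            if j < len(out):
                out[j] += err * w                    # diffuse to neighbor
    return result
```

With the default settings, a run of mid-gray values dithers to alternating black and white, while narrowing `keep_range` suppresses diffusion entirely for out-of-range inputs.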
- An embodiment is directed to a processor. The processor may include an interface to receive a color image and a mapping unit. The color image may be defined by two or more data pixels, each data pixel having at least a first and second color component. The mapping unit may be operable to map the color image to a display device having two or more display pixels, each display pixel having two or more sub-pixels. The mapping may include mapping a first color component of a first data pixel to a first sub-pixel of a first display pixel, and mapping a second color component of a second data pixel to a second sub-pixel of the first display pixel.
- In one embodiment, the display device may be an electro-optic display device having two or more stable display states. In one embodiment, the processor may include a display engine to provide waveforms to cause the display states of the first and second sub-pixels to change to display states corresponding with the first and second color components. In one embodiment, the display device may be an electrophoretic display device. In one embodiment, the processor may be a display controller.
- In one embodiment, the first and second color components may each have an associated color property, and the processor may include a color processing unit. The color processing unit may receive a selection of one or more sub-pixel locations in a color filter array map to diffuse quantization error. In addition, the color processing unit may determine a quantized color component for each color component of the color image, determine a quantization error associated with each quantized color component, and diffuse respective quantization errors to the selected one or more sub-pixel locations.
- In one embodiment, the color processing unit may determine whether the first color component has a value within a first range of color component values, and exclude the first color component from the diffusing of the respective quantization errors to the selected one or more sub-pixel locations if the value of the first color component is outside of the first range.
- In one embodiment, the color processing unit may determine whether the second color component has a value within a second range of color component values, and exclude the second color component from the diffusing of the respective quantization errors to the selected one or more sub-pixel locations if the value of the second color component is outside of the second range, wherein the first and second ranges are different. In addition, the display device may be an electrophoretic display device. Further, the processor may be a display controller and the display device may be an electrophoretic display device. In one embodiment, the color filter array map may include white sub-pixels.
- FIG. 1 is a simplified illustration of an exemplary system in which embodiments may be implemented.
- FIG. 2 is a simplified illustration of a memory and a color processor of the system of FIG. 1 according to one embodiment.
- FIG. 3 illustrates a flexible data path for color synthesis of primaries according to one embodiment.
- FIG. 4 is a block diagram of an exemplary circuit for implementing the flexible data path of FIG. 3.
- FIG. 5 is a simplified block diagram of an exemplary saturation adjustment unit according to one embodiment.
- FIG. 6 is a diagram illustrating an exemplary diffusion of quantization error of an input pixel to pixels neighboring the input pixel.
- FIG. 7 is a diagram illustrating quantization errors of neighbor pixels that may be used in an exemplary calculation of a dithered pixel.
- FIG. 8 is a simplified diagram of an exemplary white sub-pixel generation unit according to one embodiment.
- FIG. 9 illustrates an exemplary CFA mapping and post-processing unit according to one embodiment.
- FIG. 10 illustrates an example of mapping samples of input image pixels to sub-pixels of a display device.
- FIG. 11 illustrates pixels in a portion of an exemplary image and sub-pixels in a portion of a display device.
- FIG. 12 illustrates exemplary color filter arrays.
- FIG. 13 illustrates a map for use in specifying neighbor pixels or sub-pixels to receive a quantization error of a sub-pixel.
- FIG. 14 illustrates an exemplary use of the map of FIG. 13 for specifying neighbor pixels or sub-pixels to receive a quantization error of a sub-pixel.
- FIG. 15 is a simplified diagram of a cross-section of a portion of an exemplary electrophoretic display, depicting ambient light entering through a first color filter and exiting through an adjacent color filter.
- FIG. 16 is a simplified diagram of a cross-section of a portion of an exemplary electrophoretic display, depicting ambient light entering through a first color filter and exiting through a gap between adjacent color filters.
- FIG. 17 is a simplified diagram of a cross-section of a portion of an exemplary electrophoretic display, and a front view of a color filter array according to one embodiment.
- FIG. 18 illustrates front views of two exemplary color filter arrays.
- FIG. 19 illustrates a block diagram of a circuit for implementing the flexible data path for color synthesis of primaries according to one alternative embodiment.
- FIG. 20 is a simplified block diagram of a color processor, a white sub-pixel generation unit, and a post-processing unit according to one embodiment.
- FIG. 21 is a simplified diagram of an exemplary white sub-pixel generation unit according to one embodiment.
- FIG. 22 illustrates exemplary, alternative configurations for use of a look-up table memory of FIG. 21.
- FIG. 23 illustrates front views of two color filter arrays.
- This detailed description and the drawings illustrate exemplary embodiments. In the drawings, like reference numerals may identify like units, components, operations, or elements. In addition to the embodiments specifically described, other embodiments may be implemented and changes may be made to the described embodiments without departing from the spirit or scope of the subject matter presented herein. This detailed description and drawings are not to be taken in a limiting sense; the scopes of the inventions described herein are defined by the claims.
-
FIG. 1 illustrates a block diagram of an exemplary display system 120, illustrating one context in which embodiments may be implemented. The system 120 includes a host 122, a display device 124 having a display matrix 126, a display controller 128, and a system memory 130. In one embodiment, the system 120 may include an image sensor 118. The system 120 may also include a waveform memory 134, a temperature sensor 136, and a display power module 137. In addition, the system 120 may include buses 138 and 140. The display controller 128 includes a display controller memory 150, a color processor 152, a display engine 154, and other components (not shown). In one embodiment, the display controller 128 may include circuitry or logic that executes instructions of any computer-readable type to perform operations. The system 120 may be any digital system or appliance. For example, the system 120 may be a battery-powered (not shown) portable appliance, such as an electronic reader, cellular telephone, digital photo frame, or display sign. FIG. 1 shows only those aspects of the system 120 believed to be helpful for understanding the disclosed embodiments, numerous other aspects having been omitted. - The
host 122 may be a general purpose microprocessor, digital signal processor, controller, computer, or any other type of device, circuit, or logic that executes instructions of any computer-readable type to perform operations. Any type of device that can function as a host or master is contemplated as being within the scope of the embodiments. The host 122 may be a "system-on-a-chip," having functional units for performing functions other than traditional host or processor functions. For example, the host 122 may include a transceiver or a display controller. The term "processor" may be used in this specification and in the claims to refer to either the host 122 or the display controller 128. - The
system memory 130 may be an SRAM, VRAM, SGRAM, DDR DRAM, SDRAM, DRAM, flash, hard disk, or any other suitable volatile or non-volatile memory. The system memory may store instructions that the host 122 may read and execute to perform operations. The system memory may also store data. - The
display device 124 may have display pixels that may be arranged in rows and columns forming a matrix ("display matrix") 126. A display pixel may be a single element or may include two or more sub-pixels. The display device 124 may be an electro-optic display device with display pixels having multiple stable display states in which individual display pixels may be driven from a current display state to a new display state by a series of two or more drive pulses. In one alternative, the display device 124 may be an electro-optic display device with display pixels having multiple stable display states in which individual display pixels may be driven from a current display state to a new display state by a single drive pulse. The display device 124 may be an active-matrix display device. In one embodiment, the display device 124 may be an active-matrix, particle-based electrophoretic display device having display pixels that include one or more types of electrically-charged particles suspended in a fluid, the optical appearance of the display pixels being changeable by applying an electric field across the display pixel, causing particle movement through the fluid. The display device 124 may be coupled with the display controller 128 via one or more buses. The display device 124 may be a gray-scale display or a color display. In one embodiment, the display controller 128 may receive as input and provide as output either gray-scale or color images. - The display state of a display pixel is defined by one or more bits of data, which may be referred to as a "data pixel." An image is defined by data pixels and may be referred to as a "frame."
- In one embodiment, the
display controller 128 may be disposed on an integrated circuit ("IC") separate from other elements of the system 120. In an alternative embodiment, the display controller 128 need not be embodied on a separate IC. In one embodiment, the display controller 128 may be integrated into one or more other elements of the system 120. For example, the display controller 128 may be integrated with the host 122 on a single IC. - The
display memory 150 may be internal or external to the display controller 128, or may be divided, with one or more components internal to the display controller and one or more components external to the display controller. The display memory 150 may be an SRAM, VRAM, SGRAM, DDR DRAM, SDRAM, DRAM, flash, hard disk, or any other suitable volatile or non-volatile memory. The display memory 150 may store data or instructions. - The
waveform memory 134 may be a flash memory, EPROM, EEPROM, or any other suitable non-volatile memory. The waveform memory 134 may store one or more different drive schemes, each drive scheme including one or more waveforms used for driving a display pixel to a new display state. The waveform memory 134 may include a different set of waveforms for one or more update modes. The waveform memory 134 may include waveforms suitable for use at one or more temperatures. The waveform memory 134 may be coupled with the display controller 128 via a serial or parallel bus. In one embodiment, the waveform memory 134 may store data or instructions. - The
temperature sensor 136 may be provided to determine ambient temperature. The drive pulse (or, more typically, the series of drive pulses) required to change the display state of a display pixel to a new display state may depend, in part, on temperature. The temperature sensor 136 may be mounted in any location suitable for obtaining temperature measurements that approximate the actual temperatures of the display pixels of the display device 124. The temperature sensor 136 may be coupled with the display controller 128 in order to provide temperature data that may be used in selecting a drive scheme. - The
power module 137 may be coupled with the display controller 128 and the display device 124. The power module 137 may receive signals from the display controller 128 and generate appropriate voltages (or currents) to drive selected display pixels of the display device 124. In one embodiment, the power module 137 may generate voltages of +15V, −15V, or 0V. - The
image sensor 118 may include a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) type image sensor that converts light into electronic signals that represent the level of light at each pixel. Other image sensing devices, known or yet to be developed, that are capable of converting an image formed by light impinging onto a surface into electronic signals representative of the image may also be used. The image sensor 118 may also include circuits for converting the electronic signals into image data and for interfacing with other components of the system. - The
display engine 154 may perform a display update operation. The display engine 154 may include a pixel processor (not shown) and an update pipe sequencer (not shown). A display update operation may include updating display pixels of a display matrix of an electro-optic display device. In particular, a display update operation may include: (a) a pixel synthesis operation; and (b) a display output operation. A display update operation may be performed with respect to all of the display pixels of the display matrix 126 (an "entire" display update). Alternatively, a display update operation may be performed with respect to fewer than all of the display pixels of the display matrix 126 (a "regional" display update). In addition, two or more regional display updates may be performed in parallel. For example, a regional display update of a first region of the display matrix 126 may operate in parallel with a regional display update of a second region, provided the first and second regions do not include any of the same display pixels or sub-pixels. As described below, the image to be rendered on a display device may include two or more images, and each sub-image or region may be processed using a different color processing algorithm. Because the pixel synthesis and display output operations are performed after color processing, and because they may be performed independently on distinct regions of the display matrix 126, it will be appreciated that simultaneous display updates may update display pixels that were processed using different color processing algorithms. -
FIG. 2 illustrates the display controller 128 of FIG. 1 according to one embodiment. The display controller memory 150 may include a first portion allocated as a color image buffer 220 and a second portion allocated as a processed color image buffer 222. The color processor 152 fetches data from the color image buffer 220 and stores data in the processed color image buffer 222 using the bus 138. So that the color processor 152 may access the memory 150, it includes a Read Master unit 224 and a Write Master unit 226. In one embodiment, the color processor 152 includes a Color Synthesis of Primaries (CSP) unit 228, a White Sub-Pixel Generation (WSG) unit 230, and a CFA Mapping and Post-Processing Unit (PPU) 232. A selecting unit 234 permits the outputs of the CSP unit 228 and the WSG unit 230 to be selected for input to the PPU 232. The WSG unit 230 may receive pixel data from the CSP unit 228 and may provide saturation factor data to the CSP unit 228. The color processor 152 provides for flexible processing of image data read from the color image buffer 220. A user may configure the color processor 152 to implement a custom color processing algorithm for a particular display device by writing parameters to configuration and status registers 236 that may be included in the color processor 152. These parameters may be written by the host 122 to a bus interface 238 via the bus 140. The color processor 152 may include an input latency buffer 240 for delaying input data as required by a particular color processing algorithm. - A color processing algorithm for a particular type of display device may include: (a) color correction; (b) color linearization (sometimes referred to as gamma correction); (c) luma scaling; (d) filtering; (e) color saturation adjustment; (f) dithering; and (g) other functions. An apparatus for implementing a color processing algorithm that has the capability to include a variety of different functions would be desirable.
In general, the effect of applying two or more functions in succession depends on the order of application. In other words, the final appearance of an image after performing two different functions is affected by the order in which the functions are applied. An apparatus for implementing a color processing algorithm that has the capability to perform the desired functions in any order would be desirable.
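The order dependence described above can be illustrated by composing placeholder processing modules in different orders. The module implementations below are invented stand-ins (an arbitrary gain with a clamp, and an arbitrary scale factor), not the operations of the disclosed modules.

```python
# Placeholder modules; each maps an [R, G, B] list to a new one.
def color_correct(p):
    return [min(c * 1.1, 255) for c in p]   # invented gain + clamp

def luma_scale(p):
    return [c * 0.95 for c in p]            # invented scale factor

def build_data_path(modules):
    """Return a function that applies `modules` in order. Any module may
    be included, excluded, or repeated, mirroring a configurable switch."""
    def run(pixel):
        for m in modules:
            pixel = m(pixel)
        return pixel
    return run

# Order matters once the clamp engages: correct-then-scale differs from
# scale-then-correct for a bright input.
path_a = build_data_path([color_correct, luma_scale])
path_b = build_data_path([luma_scale, color_correct])
print(path_a([250, 100, 50]) == path_b([250, 100, 50]))  # -> False
```

Because the clamp is non-linear, swapping the two stages changes the red channel of a bright pixel, which is exactly why a configurable ordering mechanism is useful.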
-
FIG. 3 illustrates a block diagram of a flexible data path 320 for color synthesis of primaries according to one embodiment. At the center of the flexible data path 320 is a data switch 322. In one embodiment, the flexible data path 320 may also include: (a) a color correction module 324; (b) a filtering module 326; (c) a color linearization module 328; (d) a color saturation adjustment module 330; (e) a luma scaling module 332; and (f) a dithering module 334. The data switch 322 includes an input 336 for receiving image data and an output 338 for outputting image data. Image data may be received in any desired format, e.g., RGB, YCrCb, HSL, CMY, etc. In addition, the pixel depth of input image data may be any desired number of bits, e.g., 24-bit. In one embodiment, the input image pixels may be defined in 12 bit-per-pixel resolution. The data switch 322 may be programmable or configurable. In other words, the flexible data path 320 may be configured to include one or more of the processing modules 324 to 334. In addition, the flexible data path 320 may be configured to include one or more additional modules (not shown). Any particular processing module may be included in the data path 320 more than once. In addition, the flexible data path 320 may be configured to exclude one or more of the modules 324 to 334. One advantage of the capability to exclude any particular processing module is that it permits separate analysis of each processing module apart from the effects of the other processing modules. A particular module may be included in or excluded from the flexible data path 320 by programming or configuring the data switch 322. The data switch 322 may be programmed or configured by storing one or more control words in the configuration and status registers 236. In addition, control words may be used to specify the order in which processing modules are used and to select parameters associated with particular processing modules. -
FIG. 4 illustrates a block diagram of a circuit 420 for implementing the flexible data path 320 for color synthesis of primaries according to one embodiment. The circuit 420 may be included in the CSP unit 228 in one embodiment. The circuit 420 may include, in one embodiment, a data switch 422 and a variety of processing modules. In one embodiment, the circuit 420 may include the color correction module 324, filtering module 326, color linearization module 328, dithering module 334, color saturation adjustment module 330, and luma scaling module 332. The data switch 422 may include multiplexers M0 to M6, or any other suitable selecting device. Each of the multiplexers M0 to M6 includes a select input (not shown). The select inputs are used to select the processing modules that are to be included in a color processing algorithm as well as to program the order in which the processing modules are used. The data switch 422 includes an input 434 for receiving image data and an output 436 for outputting image data. Input image data may be any desired number of bits. - For purposes of illustration, assume that the inputs of each of the multiplexers M0 to M6 are numbered 0 to 6 from top to bottom. As one example, all of the modules may be bypassed by selecting the 0 input of multiplexer M0. As a second example, to select modules in the order (1)
linearize color 328, (2) filter 326, (3) color correct 324, (4) adjust saturation 330, and (5) dither 334, excluding the luma scaling module 332, the inputs of the multiplexers should be selected as follows: (a) multiplexer M0—select input 4, (b) multiplexer M1—select input 2, (c) multiplexer M2—select input 3, (d) multiplexer M3—select input 0, (e) multiplexer M4—select input 5, and (f) multiplexer M5—select input 1. - Turning now to exemplary modules that may be included in the
flexible data path 320 of FIG. 3, the color correction module 324 may be used as part of a color processing algorithm for a particular type of display device to generate color-corrected pixels. The color correction module 324 may make independent adjustments to each color component of a pixel. The level of reflectance of an EPD pixel or sub-pixel may be less than one hundred percent. Consequently, when a color image is rendered on an EPD, colors may tend to lack brightness, saturation, or both. In addition, when a color image is rendered on a display device, it may have a "color cast." An image rendered on a display device that lacks brightness or saturation appears too dark. An image rendered on a display device that has a color cast may appear tinted. A color cast may be the result of properties of the display device or properties inherent in the image data. To compensate for a lack of brightness, undesirable or unnatural appearances, or other issues, the color correction module 324 may be used to modify the brightness or saturation of pixels. In addition, the color correction module 324 may be used to shift color values. In one embodiment, the color correction module 324 may include logic to multiply an RGB vector by a 3×3 kernel matrix, and to add the product to an RGB offset vector, RGB-outoff. Stated symbolically, the color correction module 324 may be used to evaluate the following expression: -
- R′=K11×(R0−Rinoff)+K12×(G0−Ginoff)+K13×(B0−Binoff)+Routoff -
- G′=K21×(R0−Rinoff)+K22×(G0−Ginoff)+K23×(B0−Binoff)+Goutoff -
- B′=K31×(R0−Rinoff)+K32×(G0−Ginoff)+K33×(B0−Binoff)+Boutoff -
- where R0, G0, and B0 are input RGB values. The R′, G′, and B′ are color corrected values. The respective RGB "inoff" and "outoff" terms are input and output offset values. The "K" values of the 3×3 kernel matrix may be programmable coefficients. In addition to changing pixel intensity or brightness, the
color correction module 324 may be used to perform a color space conversion in addition to its use for correcting color. For example, RGB may be converted to YCrCb, YCrCb may be converted to RGB, or YCrCb may be converted to CMY using the above expression. In a color space conversion configuration, different input, output, and offset variables may be substituted. For example, the RGB input values R0, G0, and B0 in the above expression may be replaced with Y0, Cr0, and Cb0, and the corrected values R′, G′, and B′ may be replaced with either Y′Cr′Cb′ or C′M′Y′. Moreover, the color correction module 324 may be used to implement a scaling function with or without an offset. For example, the color correction module 324 may be used to adjust color saturation of an image defined in YCrCb space. This may be accomplished by programming the K values of the kernel matrix as shown in the expression below: -
- Y′=1×Y0; Cr′=S×Cr0; Cb′=S×Cb0 (i.e., K11=1, K22=K33=S, with all other kernel coefficients and offsets set to zero) -
- where S is a saturation adjustment factor.
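A minimal sketch of the kernel-and-offset expression described above, assuming the offset placement given in the text. The helper names and the clamp to an 8-bit range are illustrative, not part of the patent's specification.

```python
# Sketch of the color correction expression: the input vector (less input
# offsets) is multiplied by a programmable 3x3 kernel K, and the output
# offsets are added. Results are clamped to an assumed 8-bit range.

def color_correct(rgb, K, inoff=(0, 0, 0), outoff=(0, 0, 0)):
    """Apply out = K @ (in - inoff) + outoff, clamped to 0..255."""
    r, g, b = (c - o for c, o in zip(rgb, inoff))
    out = []
    for row, off in zip(K, outoff):
        v = row[0] * r + row[1] * g + row[2] * b + off
        out.append(max(0, min(255, round(v))))
    return tuple(out)

# Identity kernel leaves the pixel unchanged.
IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def saturation_kernel(S):
    """YCrCb saturation adjustment as a kernel: keep Y, scale Cr and Cb by S."""
    return [[1, 0, 0], [0, S, 0], [0, 0, S]]
```

Applying `saturation_kernel(S)` to a (Y0, Cr0, Cb0) vector reproduces the expression above, showing how one matrix engine can serve both correction and saturation roles.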
- The
filtering module 326 may be used as part of a color processing algorithm for a particular type of display device to sharpen, blur, or value-scale an image. In addition, the filtering module 326 may be used for other purposes, such as bump mapping and line detection. The filtering module 326 may include a separate filter for each color channel. In one embodiment, the filters may be 3×3 filters. For example: -
- R′(i,j)=Σm Σn KR(m,n)×R0(i+m,j+n), for m, n ∈ {−1, 0, 1}, and similarly for G′ and B′ -
- The R0, G0, and B0 are original color values, the R′, G′, and B′ are filtered color values, and the programmable kernel values "K" define the filter. It is not critical that the
filtering module 326 process RGB pixel data. The filtering module 326 may process pixel data in any desired format, e.g., YCrCb. In one embodiment, the type of filtering that is performed may be different on each color channel. For example, the filter on a Y channel may be a sharpening filter while the filters on Cr and Cb channels may perform blurring or saturation adjustment. - The
color linearization module 328 may be used as part of a color processing algorithm for a particular type of display device to generate pixels that are compensated for the non-linearity of the response of the display device to input pixel values. In an EPD or other display device, the brightness of a pixel generated in response to a signal may not be a linear function of the signal. In one embodiment, the color linearization module 328 may include three 256-entry look-up tables (LUT), one for each color channel, each LUT defining a function to compensate for non-linearity of display device response. More specifically, the color linearization module 328 may implement a compensation function on each of three color channels. For example, the color linearization module 328 may implement the following: -
R′=f(R0) -
G′=f(G0) -
B′=f(B0) - The R′, G′, and B′ are linearized color values. The color linearization LUTs may store entries of any suitable precision. For example, the color linearization LUTs may be 8 or 6 bits wide. In one alternative, the color linearization LUTs may be 4 bits wide.
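A sketch of the per-channel LUT linearization follows. The gamma-style curve used to populate the table is an assumption for illustration; in practice the entries would be user-determined from the measured response of the display device.

```python
# Sketch of the color linearization module: one 256-entry LUT per channel.
# Here the tables are filled from an assumed gamma-style compensation curve.

def build_linearization_lut(gamma=2.2):
    """Build a 256-entry table mapping 8-bit input to compensated 8-bit output."""
    return [round(255 * ((v / 255) ** (1 / gamma))) for v in range(256)]

def linearize(rgb, luts):
    """Apply R'=f(R0), G'=f(G0), B'=f(B0) via one LUT per channel."""
    r_lut, g_lut, b_lut = luts
    r0, g0, b0 = rgb
    return (r_lut[r0], g_lut[g0], b_lut[b0])

lut = build_linearization_lut()
```

Narrower tables (6-bit or 4-bit entries, as mentioned above) would be built the same way with the output range reduced accordingly.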
- The color
saturation adjustment module 330 may be used as part of a color processing algorithm for a particular type of display device to adjust levels of saturation in color pixels. The color saturation adjustment module 330 may make independent adjustments to each color component of a pixel. The color saturation adjustment module 330 may accept input data in any desired color format. For example, the color saturation adjustment module 330 may accept input data in RGB, YCrCb, HSL, CMY, etc. However, input image data is typically provided in RGB format. - One known way to adjust the color saturation of an RGB image is to convert the image to the YCbCr color space, multiply the Cb and Cr values of each YCbCr pixel by adjustment factor S, and then convert the YCbCr image back to the RGB color space. The two color space conversion operations, however, make this method inefficient. In one embodiment, the color
saturation adjustment module 330 adjusts the color saturation of an RGB image by first determining the Y component for each pixel of the RGB image. The Y component is determined according to the following equation: -
Y=(0.299×R0)+(0.587×G0)+(0.114×B0) - where R0, G0, and B0 are color components of an original or input RGB image pixel. Second, the Y component is individually subtracted from each of the RGB components. Each difference is then multiplied by an adjustment factor S. Finally, the products produced in the second operation are added to the Y component. The respective sums are the saturation adjusted RGB components. Equations for the saturation adjusted components R′, G′, and B′ are presented below:
-
R′=S×(R0−Y)+Y -
G′=S×(G0−Y)+Y -
B′=S×(B0−Y)+Y - One adjustment factor S may be used for all three RGB components. Alternatively, three unique adjustment factors S may be used, one for each of the respective RGB components. In addition, the adjustment factor S may be uniquely defined for each combination of RGB input image component values. In other words, in one embodiment, S=f(R,G,B). Alternatively, the adjustment factor S may be uniquely defined for each combination of YCrCb input image component values. In one embodiment, the saturation factor S may be a constant.
-
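The single-pass saturation adjustment described above, which avoids the round trip through YCbCr, can be sketched for one pixel as follows (a single factor S is assumed for all three components):

```python
# Saturation adjustment without color space conversion: compute luma Y from
# the RGB pixel, then move each component toward or away from Y by factor S.
# S=1 leaves the pixel unchanged; S=0 collapses it to gray (R'=G'=B'=Y).

def adjust_saturation(rgb, S):
    r0, g0, b0 = rgb
    y = 0.299 * r0 + 0.587 * g0 + 0.114 * b0   # luma of the input pixel
    return tuple(S * (c - y) + y for c in (r0, g0, b0))
```

Per-component or per-pixel factors, as contemplated above, would simply replace the scalar S with a value looked up for each component or pixel.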
FIG. 5 illustrates a saturation adjustment module 518 according to one embodiment. The saturation adjustment module 518 includes an input 520 for receiving pixel data, an input 522 for receiving a saturation factor value S, and an output 524 for outputting a saturation adjusted pixel. The pixel data received on the input 520 may be in any desired color format. In one embodiment, the pixel data received on the input 520 may be in the RGB color format. In one embodiment, the pixel data received on the input 520 is used as an index to a look-up table memory (LUT) 526, which responds to an index by furnishing a saturation factor value S to the saturation adjustment unit 518. The pixel data received on the input 520 may be in any desired bit-per-pixel resolution. For example, if the input image pixels are defined in 12 bit-per-pixel resolution, the lookup table 526 stores 4096 adjustment factors S. The saturation adjustment unit 518 includes a calculating module 528 that evaluates the expression: -
RGB′=(S·R0G0B0)+((1−S)·Y), - where RGB′ is a saturation-adjusted R0G0B0 pixel, S is the saturation factor value, and Y is the luma value of the input pixel R0G0B0. The luma value Y may be calculated using
second calculating module 530, which may evaluate the equation: -
Y=(0.299×R0)+(0.587×G0)+(0.114×B0)
saturation adjustment module 518 and the saturation adjustment module 330 may be the same. - Referring again to
FIG. 3, the luma scaling module 332 may be used as part of a color processing algorithm for a particular type of display device to adjust the lightness or brightness of a digital image. In addition, the luma scaling module 332 may be used to adjust the contrast in a digital image. Further, the luma scaling module 332 may be used to adjust color saturation of pixels defined in the YCrCb color space. As one example, the luma scaling module 332 may implement the following: -
R′=R0×P+C -
G′=G0×P+C -
B′=B0×P+C - The R0, G0, and B0 are original color values and the R′, G′, and B′ are luma scaled color values. P is a scale factor and C is a scale offset. In one alternative, the
luma scaling module 332 may be used as part of a color processing algorithm for a particular type of display device to adjust the brightness or saturation of pixels in the luma, chroma-red, chroma-blue (YCrCb) color space. That is, original color values Y0, Cr0, and Cb0 may be substituted for R0, G0, and B0 in the above equations. - The
dithering module 334 may be used as part of a color processing algorithm for a particular type of display device. The number of brightness or intensity levels for sub-pixels that is available in some display devices may be less than 256. For example, an EPD pixel may include sub-pixels having 16 intensity levels. In this case, for example, a 12-bit RGB data value (4:4:4) may be used to define all possible pixel colors. The gamut of colors that corresponds with 12-bit RGB data is a relatively small 4,096. The dithering module 334 may be included in the color processing algorithm to increase the apparent color gamut of a display device. The dithering module 334 may employ an error-diffusion scheme, an ordered-diffusion scheme, or any other suitable diffusion scheme. - In one embodiment, the
dithering module 334 may employ an error-diffusion scheme. In an exemplary error-diffusion scheme, pixels of an input image are processed in raster order. The bit-depth of the input pixels may be greater than the bit-depth of the output pixels. For example, the input pixels may be 24-bit RGB data (8:8:8), whereas the output pixels may be 12-bit RGB data (4:4:4). A quantization error may be calculated for each input data pixel according to the following equation: -
err(i,j)=P(i,j)−P′(i,j) - where P(i, j) is a pixel of an input image in the native bit-depth, e.g., 24-bit per pixel, P′(i, j) is the pixel of an input image in the bit-depth that will be provided as an output of the dithering process (the “quantized” pixel value), e.g., 12-bit per pixel, and i and j are column and row indices. In one embodiment, a quantization error may be calculated for each input data sub-pixel. As shown in
FIG. 6, the quantization error may be diffused to four neighboring pixels. The amount of the error that is distributed to a particular neighbor is determined by a weight coefficient. Where the quantization error is distributed to four neighbors, there may be four weight coefficients, α, β, γ, δ, which are subject to the following condition: -
α+β+γ+δ=1 -
FIG. 6 shows one example of how weight coefficients may be used to diffuse a quantization error associated with input pixel P(i, j) to neighbor pixels P(i+1, j), P(i−1, j+1), P(i, j+1), and P(i+1, j+1), where i and j are, respectively, column and row indices. -
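A one-channel sketch of this error-diffusion scheme follows. The weight values (Floyd–Steinberg-style) and the 8-bit to 4-bit quantization are assumptions for illustration; the patent leaves the coefficients and bit depths programmable.

```python
# Sketch of error diffusion for one color channel: each 8-bit value is
# quantized to 4 bits (0..15), and the quantization error is pushed to the
# right, lower-left, lower, and lower-right neighbors per FIG. 6.

def dither_channel(img, weights=(7/16, 3/16, 5/16, 1/16)):
    """img: 2D list of 8-bit values; returns 2D list of 4-bit values."""
    a, b, g, d = weights            # right, lower-left, below, lower-right
    h, w = len(img), len(img[0])
    work = [row[:] for row in img]  # working copy accumulates diffused error
    out = [[0] * w for _ in range(h)]
    for j in range(h):              # raster order
        for i in range(w):
            p = work[j][i]
            q = max(0, min(15, round(p / 17)))   # quantize 0..255 -> 0..15
            out[j][i] = q
            err = p - q * 17                      # quantization error
            if i + 1 < w:
                work[j][i + 1] += a * err
            if j + 1 < h:
                if i - 1 >= 0:
                    work[j + 1][i - 1] += b * err
                work[j + 1][i] += g * err
                if i + 1 < w:
                    work[j + 1][i + 1] += d * err
    return out
```

The weights sum to 1, satisfying the condition above, and errors falling outside the image are simply dropped at the borders.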
FIG. 7 shows neighbor pixels and associated weight coefficients that may be included in a calculation of a dithered pixel P″(i, j), according to one embodiment. A dithered pixel value may be calculated by adding error terms to the quantized pixel value P′(i, j). For example, the value of dithered pixel P″(i, j) may be determined according to the following equation: -
P″(i,j)=P′(i,j)+α×err(i−1,j)+β×err(i,j−1)+γ×err(i+1,j−1)+δ×err(i−1,j−1) - The α, β, γ, δ coefficients used by the
dithering module 334 may be programmed or configured to suit a color processing algorithm for a particular type of display device. In addition, the particular neighbor pixels that are used in the error term calculation may be programmed to suit a particular color processing algorithm. For example, the dithering module 334 may be configured to include only two neighbor pixels, such as only the horizontally and vertically adjacent pixels. To facilitate calculation of a current pixel, the dithering module 334 may include a buffer to store error terms for one line of pixel data (e.g., line j−1) plus the pixel value on the same line (e.g., line j) and to the left of the currently processed pixel. - The range of pixel color values for which dithering is enabled may be programmed or configured for a particular color processing algorithm. For example, consider an input image defined by 6-bit RGB data (6:6:6) that includes both a color photograph and black-and-white text. In this example, a pixel having the maximum value of 32d:32d:32d may appear white, while a pixel having the minimum value of 0d:0d:0d may appear black. The range of pixel color values may be set to exclude dithering of the textual portion of the image, while at the same time to include dithering of the color photograph portion of the image by setting, for example, a range having a maximum of 30d:30d:30d and a minimum of 2d:2d:2d. In this example, the 6.25% whitest and the 6.25% blackest pixels are excluded from dithering. Any desired or suitable range of values to exclude from dithering may be selected. The capability to configure a color processing algorithm in this way may be desirable because dithering textual image data can reduce the quality of the rendered image of the text. In an alternative embodiment, as described below, the
dithering module 334 may be programmed or configured to operate at sub-pixel resolution. In one embodiment, a data pixel includes one or more color components, and a range of color component values for which dithering is enabled may be specified. For example, a range having a maximum of 28d and a minimum of 4d may be specified for red color component values for which dithering is enabled. Different color channels may have different ranges. - Referring again to
FIG. 2, the color processor 152 may include a WSG unit 230. FIG. 8 illustrates a white sub-pixel saturation (WSG) unit 818 according to one embodiment. The WSG unit 818 includes an input 820 for pixel data and may include two outputs 822 and 824. The WSG unit 818 may include a first lookup table (LUT) memory 826 for storing saturation factors, and a second lookup table (LUT) memory 828 for storing fourth pixel values. The WSG unit 818 may also include a first input/output path selector 830 and a second input path selector 832. In addition, the WSG unit 818 may include a third output path selector 834 and a color space converter ("CSC") 836. The color space converter 836 may be employed, for example, to convert input pixel data in RGB format to YCrCb or CMY format. In one embodiment, the color space converter 836 may convert pixel data in a first color format into a single component of pixel data in a second color format. For example, the color space converter 836 may convert RGB pixel data into the Y component of YCrCb pixel data according to the following expression: -
Y=(0.299×R0)+(0.587×G0)+(0.114×B0) - In one embodiment, the
LUT 826 may be employed to store saturation factor values S that may be used by a color saturation module, e.g., module 330. The saturation factor values S may be stored in the LUT 826 by a user. The saturation factor values S stored in the LUT 826 may be user-determined values based on the image rendering properties of a particular display device. By storing saturation factor values S in the LUT 826, a color processing algorithm may include a non-linear saturation factor in a color saturation adjustment function. A non-linear saturation function may provide an advantage over a linear saturation function in that it provides increased control of the color gamut that may be rendered on an EPD. Saturation factor values S may be retrieved from the LUT 826 using different arguments or indices. The retrieval index may be determined by appropriately configuring the path selectors and the color space converter 836. In one configuration, a pixel value received at input 820 may be used as an index to the LUT 826. For example, a down-sampled RGB or YCrCb pixel value may be used as an index for retrieving a stored saturation factor value S. As another example, an RGB pixel may be received on input 820 and converted to a YCrCb pixel, which may then be used as an index. In another configuration, a single component of a color pixel may be used as an index to the LUT 826. For example, the R value of a received RGB pixel, or the Y value of a YCrCb pixel, may be used as an index for retrieving a stored saturation factor value S. In the latter example, the Y value of the YCrCb pixel may be determined from a YCrCb pixel received on input 820, or the Y value may be received from the color space converter 836 following conversion of a received RGB pixel. In yet another configuration, a constant saturation factor value S may be stored in the LUT 826, providing a constant value for S.
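The LUT-based retrieval of a non-linear saturation factor can be sketched as follows. The index derivation (down-sampling 8-bit RGB to a 12-bit 4:4:4 index) matches the 4096-entry example given earlier; the table contents shown are a hypothetical ramp, since in practice they are user-determined per display device.

```python
# Saturation factor lookup: a 12-bit down-sampled pixel value indexes a
# 4096-entry table of S values. The ramp contents below are illustrative.

SAT_LUT = [1.0 + (i / 4095) * 0.5 for i in range(4096)]   # example contents

def rgb_to_index(rgb):
    """Down-sample an 8-bit RGB pixel to a 12-bit (4:4:4) LUT index."""
    r, g, b = rgb
    return ((r >> 4) << 8) | ((g >> 4) << 4) | (b >> 4)

def lookup_saturation(rgb, lut=SAT_LUT):
    return lut[rgb_to_index(rgb)]
```

Indexing by a single component (e.g., only Y or only R), or storing one constant in every entry, are the other configurations described above and need only a different index function or table.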
- The color processing algorithm for a particular type of display device may include adding a fourth sub-pixel “WSP” to three-component pixel data. For example, a white sub-pixel may be added to each RGB triplet to create RGBW pixels, or a white sub-pixel may be added to each CMY triplet to create CMYW pixels. The fourth sub-pixel may be added to pixels of any color model and the fourth sub-pixel need not be white. The fourth sub-pixel may be any suitable color or may be no color. For instance, a fourth sub-pixel may be yellow or black, e.g., RGBY, CMYB, or CMYK pixels may be generated. In addition, in one embodiment, a fourth sub-pixel for inclusion with an RGB pixel may be a duplicate of the green sub-pixel of the RGB triplet. In other words, the resultant pixel is RGBG, where the G values are identical. The G value of an RGB pixel may be passed from
input 820 to output 824 using data path 846. - The
WSG unit 818 may provide several options for determining fourth sub-pixel values. The choices may include calculating options and lookup table options. The first input/output path selector 830 may be configured to choose an option for determining a fourth sub-pixel. Depending on the option, different parameters are required. The parameters may be taken directly from, or may be derived from, the input pixel value received on input 820. The color space converter 836 may color space convert an input pixel, and the third output path selector 834 may be configured to include or exclude the color space converter 836. - In a first option, the
LUT 828 may be employed to store fourth sub-pixel data. The WSG unit 818 may allow retrieval of a fourth sub-pixel from the LUT 828 using a pixel value as an index to the LUT. For example, a down-sampled RGB or YCrCb pixel value may be used as an index for retrieving a fourth sub-pixel. The fourth sub-pixel values may be stored in the LUT 828 by a user. The fourth sub-pixel values stored in the LUT 828 may be user-determined values based on the image rendering properties of a particular display device. - In various alternative options, the fourth sub-pixel may be calculated. In one embodiment, the fourth sub-pixel may be calculated using a calculating
unit 838, which evaluates the expression: -
W1=min(RGB), - where “W1” is the calculated fourth sub-pixel and is set to the minimum of the R, G, and B sub-pixel values. When the calculating
unit 838 is used for determining fourth sub-pixel values, the path selectors are configured to select the output of the calculating unit 838. - In another option, the fourth sub-pixel may be calculated using calculating
unit 840, which evaluates the expression: -
W2=(α·R)+(β·G)+(λ·B), - where the fourth sub-pixel "W2" is a weighted average of the RGB sub-pixel values. When this option is desired, the
path selectors are configured to select the output of the calculating unit 840. The coefficients α, β, and λ may be selected by a user by writing appropriate values to the configuration and status registers 236. - In yet another option, the
path selectors are configured to select the calculating unit 840, but a fourth path selector 842 is configured so that the calculating unit 840 is bypassed. In this option, W2 is set equal to luma, i.e., W2=Y. In still another option, the fourth sub-pixel may be calculated using calculating unit 844, which evaluates the expression: -
W3=((1−A)·W2)+(A·W1), - where W1 and W2 are determined using one of the methods described above. The weighting factor A may be selected to weight one of W1 or W2 more heavily, or both may be weighted equally, in the determination of the fourth sub-pixel “W3.” A user may select a desired value for A by writing an appropriate value to configuration and status registers 236. Alternatively, the weighting factor A may be varied as function of input pixel value. In this alternative, a user may store a set of weighting factors A in the
LUT 828. - The
WSG units 230 and 818 may include a saturation factor latency buffer 846 that may be used to buffer the S output 822, and a fourth sub-pixel latency buffer 848 that may be used to buffer the WSP output 824. The latency buffers 846 and 848, and the input latency buffer 240, may be used individually or in combination to synchronize aspects of the respective operations of the CSP unit 228 and the WSG unit 818 (or WSG unit 230), which operate in parallel. In particular, it may be necessary to synchronize the outputting of a saturation factor S by the WSG unit 818 to the saturation adjustment module 330 (or unit 518) of a CSP unit. In addition, it may be necessary to synchronize the outputting of pixel data by CSP and WSG units to the CFA mapping and post-processing unit 232. The latency buffers 846, 848, and 240 may be variable depth FIFOs. - A method for determining how the latency buffers may be set, according to one embodiment, is next described. In a first step, the processing modules to be used and the order in which the modules are used for a color processing algorithm are determined. Once the modules to be used and the order of operations are determined, a second step includes calculating the latency through a CSP unit up to completion of a saturation adjustment operation, and calculating the total latency through the CSP unit. In a third step, latencies of the WSG unit for determining saturation factor S and determining a fourth sub-pixel, if applicable, are calculated. In a fourth step, the latencies calculated for the CSP and WSG data paths are compared. If the total latency through the CSP unit is less than the latency for determining a fourth sub-pixel by the WSG unit, the
input latency buffer 240 may be set to the difference between the two latency values. On the other hand, if the total latency through the CSP unit is greater than the latency for determining a fourth sub-pixel by the WSG unit, the fourth sub-pixel latency buffer may be set to the difference between the two latency values. Finally, if the latency through the CSP unit up to completion of the saturation adjustment operation is greater than the latency for determining a saturation factor by the WSG unit, the saturation factor latency buffer is set to the difference between the two latency values. In one embodiment, a table containing all possible configurations for the CSP and WSG units may be provided. The table may additionally contain latency values corresponding with each configuration. The second and third steps may be automatically performed by looking up latency values in the table once configurations are set. A comparing circuit may then compare latency values from the table to determine appropriate latency buffer settings. The comparing circuit may automatically establish the latency buffer settings. The table may be stored in a memory. -
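The latency comparison above reduces to three clamped differences. A sketch follows; the latency values would come from the configuration table described in the text, and all names here are illustrative.

```python
# Sketch of the latency buffer settings: each FIFO depth absorbs the
# difference between the slower and faster parallel path.

def latency_buffer_depths(csp_total, csp_to_sat, wsg_wsp, wsg_sat):
    """All arguments are latencies in clock cycles.
    csp_total:  total latency through the CSP unit
    csp_to_sat: CSP latency up to completion of saturation adjustment
    wsg_wsp:    WSG latency for determining the fourth sub-pixel
    wsg_sat:    WSG latency for determining saturation factor S
    """
    input_buf = max(0, wsg_wsp - csp_total)    # delay CSP input if WSG is slower
    wsp_buf = max(0, csp_total - wsg_wsp)      # delay fourth sub-pixel output
    sat_buf = max(0, csp_to_sat - wsg_sat)     # delay saturation factor output
    return input_buf, wsp_buf, sat_buf
```

A comparing circuit performing these subtractions against table-supplied latencies could establish the FIFO depths automatically, as the embodiment above describes.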
FIG. 9 illustrates a CFA Mapping and Post-Processing Unit (PPU) 232 according to one embodiment. The PPU 232 may include an input 920, a convolution unit 922, a line buffer 924, a CFA mapping unit 926, and an output 928. In addition, the PPU may include other components, such as selecting units 930 and 932. The PPU 232 may be programmed or configured to operate in one of two modes: sub-pixel or pixel mode. In addition, the PPU 232 may output sub-pixel data in a user-defined CFA format. - In one embodiment, the
PPU 232 may accept as input pixel data having four color components, e.g., RGBW, CMYW. In alternative embodiments, the PPU 232 may accept pixel data defined by any number of components. The selecting unit 234 may be configured to obtain three color components from a CSP unit and a fourth color component from a WSG unit, or to obtain four color components from a WSG unit. After processing by the PPU 232, the sub-pixel data may be stored in the processed color image buffer 222. The PPU 232 writes sub-pixel data to the processed color image buffer 222 so that it is arranged in the buffer 222 for fetching by the display engine 154 in raster order. - In the sub-pixel mode of operation of
PPU 232, each pixel of an input image is mapped to one sub-pixel of a display device. Consequently, sub-pixel mode requires that the resolution of the input image be higher than the resolution of the display device. For example, each pixel of a 1,200×1,600 pixel color input image may be mapped to one sub-pixel of a 600×800 sub-pixel display device that has four sub-pixels per display pixel. In addition, just one color component of each pixel of the input image may be sampled in the mapping process. The sampled color component may be assigned to a mapped display sub-pixel. Alternatively, the value assigned to a mapped display sub-pixel may be determined based, at least in part, on a corresponding pixel's color components. For example, a mapped display sub-pixel may be assigned the value of a fourth sub-pixel, where the fourth sub-pixel is determined based on the RGB or CMY values of the corresponding input pixel. -
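Sub-pixel mode mapping can be sketched as follows, assuming a 2×2 RGBW-style CFA for illustration: each input pixel contributes only the one color component that the CFA places at its position, and the remaining components are discarded.

```python
# Sub-pixel mode: each pixel of a higher-resolution input image maps to one
# display sub-pixel. The CFA tile determines which component is sampled.

CFA_2X2 = [["R", "B"], ["G", "W"]]   # (row, col) -> component name

def map_subpixel_mode(img, cfa=CFA_2X2):
    """img: 2D list of dicts like {"R": .., "G": .., "B": .., "W": ..}.
    Returns a 2D list of sampled sub-pixel values, same dimensions as img."""
    rows, cols = len(cfa), len(cfa[0])
    out = []
    for j, line in enumerate(img):
        out.append([pix[cfa[j % rows][i % cols]] for i, pix in enumerate(line)])
    return out
```

The modulo arithmetic tiles the CFA across the image, which is why a 1,200×1,600 input covers a 600×800 display with four sub-pixels per display pixel.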
FIG. 10 illustrates an example of mapping samples of input image pixels to sub-pixels of a display device. A portion of an exemplary color input image 1020 and a portion of an exemplary display device 1022 are shown in FIG. 10. The color input image 1020 includes pixels 1024. Each input pixel 1024 includes two or more color components 1026, which in this example are R, B, G, and W color components. The display device 1022 includes display pixels 1028. In this example, each display pixel 1028 includes R, B, G, and W sub-pixels 1030. FIG. 10 illustrates that each pixel of an input image may be mapped to one sub-pixel of a display device in sub-pixel mode. For example, input pixel P0 may be mapped to display sub-pixel R0, input pixel P1 may be mapped to display sub-pixel B1, input pixel P6 may be mapped to display sub-pixel G6, and input pixel P7 may be mapped to display sub-pixel W7. FIG. 10 also illustrates that one color component of each pixel of the input image may be sampled and the sampled component assigned to the mapped sub-pixel. For example, the R0 color component of input pixel P0 is sampled and assigned to the mapped sub-pixel R0. Similarly, the B1 color component of input pixel P1 is sampled and assigned to the mapped sub-pixel B1. The components of an image pixel that are not sampled may not be assigned to a display sub-pixel. For instance, color components G0, B0, and W0 of input pixel P0 are not sampled and not assigned to a display sub-pixel. - An advantage of the sub-pixel mode of mapping of the
PPU 232 is that it may produce better color appearance than the pixel mode of operation. Use of the sub-pixel mode of mapping may result in image artifacts, however. For example, in an image with a high gradient, gray-scaled edges may become colored. Empirical testing indicates that image artifacts resulting from processing an input image in sub-pixel mode may be reduced by processing the input pixels with a convolution operation, which implements a blurring function. The convolution operation is preferably performed before a sub-pixel mapping operation. The convolution operation may be performed by convolution module 922. A user may configure the selecting unit 930 to include or bypass the convolution module 922, as desired for a particular color processing algorithm. - In the pixel mode of operation of
PPU 232, each pixel of an input image is mapped to one pixel of a display device. For example, each pixel of a 600×800 pixel input image is mapped to one pixel of a 600×800 pixel display. If each display pixel includes four sub-pixels, each input pixel is mapped to four sub-pixels in the display. - When mapping is performed in pixel mode, the
line buffer 924 may be used to store one line of the input image. The input image may be received by the PPU 232 in raster order. In addition, the color components of each pixel may appear adjacent one another in the input data stream. For example, if pixels of the input image are in an RGBW format, the four color components of each input pixel may arrive in parallel at the input 920. The sub-pixels of an RGBW pixel may not, however, appear adjacent one another in an output data stream, i.e., the order in which sub-pixel data are written to the processed color image buffer 222. Instead, the sub-pixels of a particular input pixel may appear on different lines in the output data stream, as illustrated in a portion of an image 1120 and a portion of a display device 1122 shown in FIG. 11. The image portion 1120 includes part of a line of pixels P0, P1, etc. The display device portion 1122 also includes part of a line of display pixels P0, P1, P2. Each display pixel includes R, G, B, and W sub-pixels. It may be seen from the example of FIG. 11 that the R0 and B0 sub-pixels in the display device 1122 are side-by-side on a first line, and the G0 and W0 sub-pixels are side-by-side on a second line. If sub-pixels are written to the processed color image buffer 222 in raster order in pixel mode, there will be a time delay after writing sub-pixels R0 and B0 and before writing sub-pixels G0 and W0, i.e., the sub-pixel pairs may be non-adjacent in the output data stream. By storing one line of an input image in the line buffer 924, the sub-pixels of a particular pixel need not all be written at the same time, i.e., the sub-pixels may be placed in non-adjacent locations in the output data stream. A user may configure the selecting unit 932 to include or bypass the line buffer 924, as desired for a particular color processing algorithm. - According to one embodiment, the
PPU 232 may provide for flexible CFA mapping, i.e., the PPU 232 may be configured to output sub-pixel data in a user-defined CFA format. Different display devices may employ different CFAs. Consequently, it may be desirable to have a capability to map sub-pixels to a variety of different CFAs. CFAs may be viewed as arranging sub-pixels in columns and rows. Different CFAs may have different numbers of columns and rows. While sub-pixels may be square, this is not critical. Sub-pixels may be any desired shape: rectangular, polygonal, circular, etc. FIG. 12 illustrates several exemplary CFA configurations. CFA 1220 is a 2×2 sub-pixel matrix. CFA 1224 is a 4×4 sub-pixel matrix. CFA 1226 is a 2×4 sub-pixel matrix. In one embodiment, a user may write parameters to configuration and status registers 236 that specify the dimensions of the CFA in terms of number of rows and columns. In addition, a user may write parameters to the configuration registers 236 that specify the color component to be assigned to a matrix location. For instance, for a 2×2 sub-pixel matrix, the locations may be defined in terms of rows and columns (row, column): (1, 1), (1, 2), (2, 1), and (2, 2). A user may specify that R is assigned location (1, 1), B is assigned location (1, 2), G is assigned location (2, 1), and W is assigned location (2, 2), for example. The PPU 232 then uses the specified CFA dimensions and mapping scheme to map pixel data to sub-pixels of a display device. Specifically, the PPU 232 may include horizontal and vertical sub-pixel counters that may be configured to place the sub-pixels in matrix locations corresponding to the designated mapping and CFA size. - In an alternative embodiment, referring again to
FIG. 3, the dithering module 334 may be programmed or configured to operate at sub-pixel resolution. As described above, the PPU 232 may be programmed or configured to operate in sub-pixel or pixel modes. Sub-pixel dithering may be employed in conjunction with CFA mapping in either pixel or sub-pixel mode. In the pixel mode, each pixel of an input image may be mapped to 3 or 4 sub-pixels of a display device. In the sub-pixel mode, each pixel of an input image may be mapped to one sub-pixel of a display device. When the dithering module 334 is configured to operate at sub-pixel resolution, the quantization error of a particular color channel is diffused to same-colored sub-pixels of neighbor pixels. For example, the quantization error of a red sub-pixel of the input image is diffused to red sub-pixels of neighboring dithered pixels.
-
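To make the same-color diffusion idea concrete, the sketch below quantizes one color channel of a scanline and pushes each quantization error onto the same-colored sub-pixel of the next pixel. This is a one-tap simplification of the configurable neighbor maps the module supports; the function name and parameters are illustrative, not the actual dithering module 334.

```python
def dither_channel(row, levels=16):
    # Quantize one color channel (values 0-255) to `levels` output
    # levels, diffusing each sub-pixel's quantization error to the
    # same-colored sub-pixel of the next pixel on the line.
    step = 255.0 / (levels - 1)
    out, err = [], 0.0
    for v in row:
        target = v + err              # incoming value plus carried error
        q = round(target / step) * step
        q = min(max(q, 0.0), 255.0)   # clamp to the displayable range
        out.append(int(q))
        err = target - q              # error carried to the next neighbor
    return out

# A flat mid-gray line dithers to a mix of the two nearest output levels.
result = dither_channel([128, 128, 128, 128])
```

Because each channel is processed independently, red error never leaks into green or blue sub-pixels, matching the same-color constraint described above.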
FIG. 13 illustrates an exemplary map or template 1320 for specifying which neighbor pixels or sub-pixels should receive a quantization error of a current sub-pixel "P." A current pixel or sub-pixel P is at location 1322. Possible neighbors on the same line and to the right of P in a display are designated "A." Possible neighbors on the next two lower lines directly below P in a display are designated "B." Possible neighbors on the next two lower lines, but in columns preceding P's column, are designated "C." Possible neighbors on the next two lower lines, but in columns following P's column, are designated "D." Referring to FIG. 6, locations A0, A1, A2, and A3 of FIG. 13 correspond with the location of neighbor pixel P(i+1, j) of FIG. 6. Similarly, locations B0 and B1 correspond with the location of neighbor pixel P(i, j+1), locations C00, C10, C20, C01, C11, and C21 correspond with the location of neighbor pixel P(i−1, j+1), and locations D00, D10, D20, D01, D11, and D21 correspond with the location of neighbor pixel P(i+1, j+1). Locations with subscripts of 0 or 00 are used to designate pixel locations. Locations with subscripts other than 0 or 00 are used to designate sub-pixel locations. In use, the map 1320 is conceptually superimposed on a CFA so that the current pixel or sub-pixel P is aligned with location 1322. After processing the current pixel or sub-pixel, the map 1320 is conceptually moved so that location 1322 is aligned with a next current pixel or sub-pixel.
- In a pixel mode of CFA mapping, quantization error may be diffused to adjacent pixels and a user may specify locations A0, B0, C00, and D00 of the map of
FIG. 13.
- In sub-pixel mode of CFA mapping, quantization error may be diffused to adjacent sub-pixels having the same color as the current sub-pixel. The particular mapping will depend on the particular CFA of the
display device 124. A user will select different neighbor sub-pixel locations depending on the particular CFA, e.g., a user may select A1 for a first CFA, but A2 for a second CFA. FIG. 14 illustrates an example of specifying locations for diffusing quantization error to sub-pixels in sub-pixel mode CFA mapping. FIG. 14 assumes an exemplary CFA 1418 in which sub-pixels appear in the order R, B, G, W on a first line, and these sub-pixels are vertically adjacent to sub-pixels that appear in the order G, W, R, B on a second line. Stated differently, the CFA includes two types of pixels: first pixels form a 2×2 matrix of sub-pixels, wherein the first row includes an R sub-pixel to the left of a B sub-pixel, and the second row includes a G sub-pixel to the left of a W sub-pixel; second pixels form a 2×2 matrix of sub-pixels, wherein the first row includes a G sub-pixel to the left of a W sub-pixel, and the second row includes an R sub-pixel to the left of a B sub-pixel. In addition, the map 1420 is shown twice in FIG. 14. First, it is shown, without alphabetical notations specifying sub-pixel locations, superimposed on the exemplary CFA 1418. Second, the map 1420 is shown with sub-pixel color values associated with the sub-pixel locations on the map when superimposed on the CFA. The associated sub-pixel color value according to the CFA 1418 is shown in FIG. 14 above the diagonal line in each sub-pixel location. The current sub-pixel location 1322 is aligned with an R (red) sub-pixel. To diffuse the quantization error associated with the current sub-pixel R, the sub-pixel locations A3, B1, C10, and D10 may be selected by a user, as each of these locations corresponds with a neighbor red sub-pixel. As mentioned, a user may select different locations for a CFA different from the exemplary CFA 1418. It will be appreciated that the maps 1320 and 1420 are exemplary.
-
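The counter-based CFA mapping described earlier can be sketched as tiling a user-specified matrix across the display; the tile below reproduces the two-line pattern of the exemplary CFA 1418 (the helper name is hypothetical, not part of the PPU 232).

```python
def map_to_cfa(width, height, tile):
    # Tile a user-specified CFA matrix across a width x height
    # sub-pixel grid, the way wrapping horizontal and vertical
    # sub-pixel counters would place color assignments.
    rows = len(tile)
    cols = len(tile[0])
    return [[tile[y % rows][x % cols] for x in range(width)]
            for y in range(height)]

# The two-line pattern of CFA 1418: R, B, G, W over G, W, R, B.
cfa_1418 = [["R", "B", "G", "W"],
            ["G", "W", "R", "B"]]
grid = map_to_cfa(8, 4, cfa_1418)
```

Scanning any row of `grid` shows why same-color neighbors of a given sub-pixel sit at fixed offsets: the tile repeats with a fixed period, so the user-selected map locations (e.g., A3, B1, C10, D10 for red in this CFA) stay valid across the whole display.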
FIG. 15 depicts a simplified cross-sectional representation of a portion of the exemplary electrophoretic display 1518. The display 1518 may include electrophoretic media sandwiched between a transparent common electrode 1520 and a plurality of sub-pixel electrodes 1522. The sub-pixel electrodes 1522 may reside on a substrate 1524. The electrophoretic media may include one or more (and typically, many) microcapsules 1526. Each microcapsule 1526 may include positively charged white particles 1528 and negatively charged black particles 1530 suspended in a fluid 1532. Alternatively, white particles may be negatively charged and black particles positively charged. In addition, it is not critical that the particles be only white and black; other colors may be used. In one embodiment, each sub-pixel may correspond with one sub-pixel electrode 1522; however, this is not required or critical. Each sub-pixel may correspond with one or more microcapsules 1526. In the exemplary display 1518, each sub-pixel includes a filter disposed between the transparent common electrode 1520 and the microcapsules 1526 associated with the particular sub-pixel. In one embodiment, the filter 1534 may be a blue color filter, the filter 1536 may be a green color filter, the filter 1538 may be a white filter, and the filter 1540 may be a red color filter. The white filter 1538 may be a transparent structure; alternatively, a white filter may be omitted or absent from the location between the microcapsules 1526 associated with a particular sub-pixel and the common electrode 1520. In one alternative, the transparent common electrode 1520 may be disposed between the sub-pixel filters and the microcapsules 1526 associated with the particular sub-pixel. In addition, it is not required or critical that the color filters of display 1518 correspond with the RGBW color model. Any desired set of color filters may be used, e.g., RGB, CMY, RGBY, CMYB, or CMYK.
- To change the display state of a sub-pixel, the
common electrode 1520 may be placed at ground or some other suitable voltage, and a suitable voltage is placed on a sub-pixel electrode 1522. As a result, an electric field is established across the microcapsule(s) 1526 associated with the sub-pixel. When the electric field is positive, the white particles 1528 may move toward the common electrode 1520, which results in the display pixel becoming whiter or more reflective in appearance. On the other hand, when the electric field is negative, the black particles 1530 may move toward the common electrode 1520, which results in the display pixel becoming blacker or less reflective in appearance.
- In
FIG. 15, an incident ray 1542 of ambient light is reflected off one of the microcapsules 1526 associated with the blue display sub-pixel. While the ray 1542 enters through the blue color filter 1534, it exits through the green color filter 1536 associated with an adjacent sub-pixel. As a result, a reflected ray 1544 is influenced by both the blue and green color filters 1534 and 1536; for example, the ray 1544 may appear as cyan. Generally, this is not desirable. Scattered light reflections that exit through the color filters of adjacent sub-pixels may alter the color appearance of images on a display device in undesirable and unnatural-appearing ways. Further, this side scattering problem may reduce the gamut of displayable colors. Moreover, this side scattering problem may become more pronounced when the display 1518 is viewed at an angle from one side or the other. Consequently, the side scattering problem may also reduce the usable viewing angle.
-
FIG. 16 illustrates one possible solution for the side scattering problem. FIG. 16 depicts a simplified cross-sectional representation of a portion of the exemplary electrophoretic display 1618. Parts of the display 1618 numbered the same as parts of display 1518 may be the same. The display 1618 includes a blue color filter 1620, a green color filter 1624, a white filter 1626, and a red color filter 1628. The color filters of the display 1618 differ from the color filters of the display 1518 in that they do not fully cover the microcapsules 1526 associated with a sub-pixel. Instead, there are gaps 1630 between adjacent color filters. The openings 1630 may be present on all four sides of a sub-pixel as viewed from the front, i.e., there may be a separation 1630 between a particular filter and the filters to either side in a row, and the filters in the rows above and below the particular filter. In FIG. 16, an incident ray 1632 of ambient light is reflected off one of the microcapsules 1526 associated with the blue display sub-pixel. While the ray 1632 enters through the blue color filter 1620, the reflected ray 1634 exits through the gap 1630 between the blue and green color filters. Unlike the ray 1544, the reflected ray 1634 is only influenced by the blue color filter 1620. The color of the reflected ray 1634 will be influenced by the filter it passes through on the way to the microcapsule 1526 and the transparency of the gap 1630. However, the use of gaps 1630 separating color filters may reduce the saturation of colors rendered on the display.
-
FIG. 17 illustrates an alternative solution to the side scattering problem, which may minimize or eliminate the reduction in color saturation that can occur when color filters are sized so that gaps or openings separate adjacent color filters. FIG. 17 depicts a simplified cross-sectional representation of a portion of the exemplary electrophoretic display 1718, according to one embodiment. Parts of the display 1718 numbered the same as parts of display 1518 may be the same. In one embodiment, the display 1718 includes a green color filter 1720, a white color filter 1722, and blue color filters. In FIG. 17, an incident ray 1742 of ambient light is reflected off one of the microcapsules 1526 associated with the green display sub-pixel. While the ray 1742 enters through the green color filter 1720, a reflected ray 1734 exits through the white color filter 1722 associated with an adjacent sub-pixel. The color of the reflected ray 1734 will be influenced by the filter it passes through on the way to the microcapsule 1526, i.e., the green filter 1720, and the transparency of the white color filter 1722. As a result, the reflected ray 1734 is not undesirably influenced by an adjacent red or blue color filter.
-
FIG. 17 also illustrates a front view of a CFA 1732, which corresponds with the display portion 1718. As shown in FIG. 17, the CFA 1732 may include four sub-pixel color filters of the same color surrounded by white sub-pixels. In one embodiment, the white sub-pixels of the CFA 1732 may be modulated to appear in varying states of reflectance. An advantage of the CFA 1732 is that white sub-pixels may be controlled or modulated to reflect more or less light to compensate for any reduction in saturation due to the inclusion of white pixels in the CFA.
- In one embodiment, sub-pixels having color filters may be arranged in rows and columns in a repeating pattern, e.g., a Bayer pattern. In addition, each sub-pixel having a color filter may be horizontally adjacent or vertically adjacent to one or more white sub-pixels (or both horizontally adjacent and vertically adjacent). In this regard, a color filter for a colored sub-pixel, e.g., green, and a color filter for a white sub-pixel, e.g., transparent, may be horizontally or vertically adjacent one another. In one alternative, the color filter for the colored sub-pixel may horizontally or vertically contact or adjoin a white sub-pixel. In this context, vertical and horizontal refer to the front view of a CFA. For example, the
green sub-pixel 1720 shown in the CFA 1732 of FIG. 17 is horizontally adjacent to the white sub-pixel 1722. In addition, the green sub-pixel 1720 vertically contacts or adjoins the white sub-pixel 1722.
- The
white sub-pixel 1722 shown in the CFA 1732 of FIG. 17 is not horizontally or vertically adjacent to a colored sub-pixel. Instead, the white sub-pixel 1722 is diagonally adjacent to colored sub-pixels. In one embodiment, a diagonally adjacent sub-pixel need not be a white sub-pixel. In particular, even though the white sub-pixel 1722 is labeled in FIG. 17 as a white sub-pixel, it may be a red, green, or blue sub-pixel in this example. In one embodiment, the white sub-pixel 1722 may be a green sub-pixel.
- With regard to the
display 1718 and CFA 1732, it is not critical that the white color filters 1722 be white; they may be any desired color, e.g., yellow. The white filter 1722 may be a transparent structure; alternatively, a white filter may be omitted or absent from the location between the microcapsules 1526 associated with a particular sub-pixel and the common electrode 1520. In addition, while the CFA 1732 may be used with an RGBW color model, any desired set of color filters may be substituted for the primary colors RGB, e.g., CMY.
- While the colored and white sub-pixels of the
CFA 1732 are shown as being of the same size and shape, this is not critical. FIGS. 18 and 23 illustrate alternative embodiments of the CFA 1732. FIG. 18 shows a CFA 1820 in which the white sub-pixels are smaller than the colored sub-pixels. In this example, the white sub-pixels are half as tall and half as wide as the colored sub-pixels. In addition, FIG. 18 shows a CFA 1822 in which the white sub-pixels are one-fourth as tall and one-fourth as wide as the colored sub-pixels. FIG. 23 illustrates a CFA 2320 and a CFA 2322. The CFAs 2320 and 2322 show that the white sub-pixels in a CFA may be provided in two or more sizes, and that the white sub-pixels in a CFA may differ in horizontal and vertical dimensions. In addition, the white sub-pixels in a CFA may differ dimensionally from the non-white sub-pixels.
- It may be desirable to reduce the size of the
color processor 152. This may be desirable, for example, where a color processor is implemented in an integrated circuit or other hardware. Reducing the size of a hardware-implemented color processor may correspond with a reduced number of logic gates as compared with the color processor 152.
- Because the
color processor 152 may be configured in many different ways, the color processor 152 may be used to evaluate many different color processing algorithms for EPDs. Empirical testing of the color processor 152 with a variety of color processing algorithms indicates that color processing algorithms suitable for color EPDs can still be implemented even though certain functions available in the color processor 152 are eliminated, or even though some of the options associated with a particular function are eliminated. In addition, empirical testing of the color processor 152 with a variety of color processing algorithms indicates that color processing algorithms suitable for color EPDs can still be implemented even though the order of performing color processing functions is restricted.
-
FIG. 19 illustrates a block diagram of a circuit 1920 for implementing the flexible data path 322 for color synthesis of primaries according to an alternative embodiment. The circuit 1920 employs a smaller number of logic gates than the circuit 420. The circuit 1920 may be included in the CSP unit 228 in one embodiment. The circuit 1920 may include, in one embodiment, a data switch 1922, a color correction module 1924, a filtering module 1926, a color linearization module 1928, a dithering module 1930, and a color saturation adjustment module 1932. The data switch 1922 includes an input 1934 for receiving image data and an output 1936 for outputting image data. The data switch 1922 includes multiplexers M7 to M11, each multiplexer including a select input (not shown). The data switch 1922 may be programmed or configured to include or exclude any particular processing module in a color processing algorithm using the select inputs. One advantage of the capability to exclude any particular processing module is that it permits separate analysis of each processing module apart from the effects of other processing modules. The order in which processing modules are used, however, is limited, as shown in FIG. 20. To reduce the size of the CSP unit, the input color depth of the circuit 1920 is set at (5:6:5) rather than (8:8:8). In one embodiment, the input color depth of the circuit 1920 is RGB (5:6:5). Further, the circuit 1920 is limited to providing as output 12-bit pixel data in a 4:4:4 format.
- A color processing algorithm that only operates on image data in its native resolution may be wasteful of power and processing time. On the other hand, use of a color processing algorithm to pre-process a digital image at the bit-per-pixel resolution of the electro-optic display device may result in a rendered image having a sub-optimal appearance.
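The include-or-bypass behavior of the data switch can be sketched in software as a chain of optional stages; the stage names and functions below are stand-ins for illustration, not the actual multiplexer wiring of the circuit 1920.

```python
def run_pipeline(pixel, stages, enabled):
    # Route the pixel through each stage whose multiplexer is set to
    # "include"; a bypassed stage passes the data through untouched.
    for name, fn in stages:
        if enabled.get(name):
            pixel = fn(pixel)
    return pixel

stages = [
    # Illustrative gain as a color-correction stand-in.
    ("color_correction", lambda p: tuple(min(255, int(c * 1.1)) for c in p)),
    ("filtering",        lambda p: p),  # identity stand-in
    ("dithering",        lambda p: p),  # identity stand-in
]

# With every stage bypassed the pixel is unchanged, which is what
# allows each module to be analyzed in isolation.
assert run_pipeline((10, 20, 30), stages, {}) == (10, 20, 30)
assert run_pipeline((100, 100, 100), stages, {"color_correction": True}) == (110, 110, 110)
```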
The inventor has recognized that performing the color processing algorithm at a higher degree of precision than the electro-optic display is capable of rendering results in an improved selection of available display states or colors; processing only at the display's native resolution forgoes this improvement, which may be one reason the rendered image appears sub-optimal. For example, experiments by the inventor showed better color appearance of rendered images when a color processing algorithm performed its operations on 5:6:5 pixel data than when the same operations were performed on 4:4:4 pixel data. On the other hand, the color appearance of rendered images did not exhibit further improvement when the color processing algorithm performed its operations on 8:8:8 pixel data as compared with performing the same operations on 5:6:5 pixel data. In addition, as further described below, a color processing algorithm may include two or more operations and it may be desirable to perform certain of those operations at different pixel resolutions.
-
FIG. 20 is a simplified block diagram of a color processor including an alternative representation of the circuit of FIG. 19 according to one embodiment. The data switch 1922 (not shown in FIG. 20) may be programmed or configured so that any of the processing modules may be included in or excluded from a color processing algorithm. FIG. 20 illustrates that the order in which the shown processing modules are used is generally fixed, except that the color linearization module 1928 may be invoked either preceding the dithering module 1930 or following the color saturation adjustment module 1932. As shown in FIG. 20, if all modules are used, the color correction module 1924 may only be used first, and the filtering module 1926 may only be used second. The color linearization module 1928 may be used after the filtering module 1926. If the color linearization module 1928 is used after filtering, the dithering module 1930 may only be used fourth. Otherwise, the dithering module 1930 may only be used third. The saturation adjustment module 1932 may only be used after the dithering module 1930. The saturation adjustment module 1932 may only be used last if the color linearization module 1928 is used following the filtering module 1926. If the color linearization module 1928 is not used following the filtering module 1926, the color linearization module 1928 is used last.
- The
CSP circuit 1920 reflects empirical testing with a variety of color processing algorithms for color EPDs. Testing indicated that if color correction is necessary, it is advantageous to perform this process first. Further, it was determined that it is not critical to include RGB to YCrCb conversion in the color correction module 1924. Accordingly, the color correction module 1924 does not include this color space conversion capability. In one embodiment, the color correction module 1924 implements the following expression:
-
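Since the module is described in terms of kernel coefficients and RGB offset values, the expression presumably takes a 3×3 matrix-plus-offset form; the sketch below assumes that form, with illustrative coefficients rather than the module's actual predetermined settings.

```python
def color_correct(rgb, kernel, offset):
    # Apply a 3x3 kernel and per-channel offsets to an RGB pixel,
    # clamping each result to the 8-bit range.
    out = []
    for row, off in zip(kernel, offset):
        v = sum(k * c for k, c in zip(row, rgb)) + off
        out.append(max(0, min(255, int(round(v)))))
    return tuple(out)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# An identity kernel with zero offsets leaves the pixel unchanged.
assert color_correct((12, 34, 56), identity, (0, 0, 0)) == (12, 34, 56)
```

A predetermined setting such as "mild color enhance" would then correspond to one stored (kernel, offset) pair written to the configuration registers.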
- In addition, the
color correction module 1924 includes one or more predetermined sets of kernel coefficients and RGB offset values. Instead of selecting individual values for the color correction variables, a user may choose a predetermined setting. Examples of predetermined settings include (a) mild color enhance; (b) color enhance; (c) strong color enhance; (d) gray scale; (e) mild white warm; (f) mild daylight; and (g) mild illuminant. Alternatively, the user may choose to select individual values for the color correction variables. A user may select a predetermined setting or a custom setting by writing appropriate values to configuration and status registers 236.
- Testing indicated that it is often desirable to include some type of filtering operation in a color processing algorithm. In addition, testing indicated that performing filtering after color correction and before color linearization produces good results. The
filtering module 1926 is sized to process 5:6:5 pixel data. The filtering module 1926 includes one or more predetermined sets of filter coefficients. Instead of selecting individual values for the filter coefficients, a user may choose a predetermined setting. Examples of predetermined settings include five levels of sharpening, plus (a) blur; (b) edge detect; (c) sketch; (d) sepia; (e) edge enhance; (f) emboss; (g) gray scale; and (h) bump mapping. Alternatively, the user may choose to select individual values for the filter coefficients. A user may select a predetermined setting or a custom setting by writing appropriate values to configuration and status registers 236.
- Testing related to color linearization indicated that color linearization is commonly required. In one embodiment, the
color linearization module 1928 may be the same as the color linearization module 328.
- Testing revealed that an important pre-processing function is dithering. To reduce the effects of CSP functions on the accuracy of the dithering algorithm, the
dithering module 1930 may be placed so that it is performed after the color correction and image sharpening functions. In one embodiment, the dithering module 1930 may be the same as the dithering module 334.
- CFAs that include white sub-pixels have decreased color saturation in comparison with CFAs that omit white sub-pixels. Testing identified color saturation adjustment as an important function for inclusion in many color processing algorithms, especially those color processing algorithms for displays having CFAs that include white sub-pixels. Testing indicated that performing color saturation adjustment after performing a dithering operation produced visually pleasing results. The color
saturation adjustment module 1932 implements the following equations: -
R′=S×(R0−Y)+Y
-
G′=S×(G0−Y)+Y
-
B′=S×(B0−Y)+Y
-
where -
Y=0.299R0+0.587G0+0.114B0
- The portion of the color
saturation adjustment module 1932 that determines R′G′B′ uses only 3 multipliers and 6 adders. The portion of the color saturation adjustment module 1932 that determines Y uses only 2 adders. Consequently, the color saturation adjustment module 1932 is smaller and more efficient than the color saturation adjustment module 330.
- Testing indicated that the
luma scaling module 332 may be omitted without significantly reducing the flexibility of the CSP circuit 1920.
- The
circuit 1920 accepts 16-bit pixel data (5:6:5). The bit depth of an input image may be reduced to 16 bits by truncating the least significant bits of each sub-pixel. Alternatively, input pixels may have their bit depth reduced by rounding or by using the floor function. For example:
-
Y=floor(31×X÷255)
- where X is the 8-bit value of an input image sub-pixel and Y is the 5-bit value of the corresponding bit-depth-reduced sub-pixel.
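The floor-based reduction above can be sketched directly, alongside plain truncation of the three least significant bits for comparison (helper names are illustrative):

```python
import math

def reduce_floor(x, in_bits=8, out_bits=5):
    # Y = floor((2^out - 1) * X / (2^in - 1)), the expression above.
    return math.floor((2 ** out_bits - 1) * x / (2 ** in_bits - 1))

def reduce_truncate(x):
    # Drop the 3 least significant bits instead.
    return x >> 3

# Both map the endpoints 0 and 255 onto 0 and 31...
assert reduce_floor(0) == 0 and reduce_floor(255) == 31
# ...but they differ for mid-range values.
assert reduce_floor(128) == 15 and reduce_truncate(128) == 16
```

The rescaling form spreads the 8-bit range evenly over the 5-bit codes, whereas truncation simply discards precision; both are options named in the text.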
- Empirical testing with a variety of color processing algorithms for color EPDs sought to identify an appropriate level of calculation accuracy for each of the processing modules. Testing indicated that the
color correction module 1924 and filtering module 1926 of the circuit 1920 may perform their respective operations at 16-bit pixel depth (5:6:5). Further, testing indicated that the color linearization module 1928 may accept as input 16-bit (5:6:5) pixel data and output 18-bit (6:6:6) pixel data, or alternatively, the color linearization module 1928 may accept as input 12-bit (4:4:4) pixel data and output 12-bit (4:4:4) pixel data. To handle both cases, the color linearization LUTs are of a size that accommodates 6 bits per color component. In addition, testing indicated that the dithering module 1930 may accept as input 18-bit (6:6:6) pixel data (or 16-bit (5:6:5) pixel data) and output 12-bit (4:4:4) pixel data. Additionally, testing indicated that the color saturation adjustment module 1932 may accept as input and output 12-bit (4:4:4) pixel data. The color saturation adjustment module 1932 may perform its calculations at 4 bits per sub-pixel.
-
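The depth changes above can be illustrated with simple bit-level helpers; the R-in-the-high-bits, B-in-the-low-bits packing order is an assumption for illustration, not a statement about the circuit 1920.

```python
def unpack_565(word):
    # Split a 16-bit 5:6:5 pixel word into R, G, B components
    # (assumed packing: R in bits 15-11, G in 10-5, B in 4-0).
    r = (word >> 11) & 0x1F
    g = (word >> 5) & 0x3F
    b = word & 0x1F
    return r, g, b

def to_444(r5, g6, b5):
    # Reduce 5:6:5 components to the 4:4:4 output depth by dropping
    # least significant bits, as the pipeline must before output.
    return r5 >> 1, g6 >> 2, b5 >> 1

assert unpack_565(0xFFFF) == (31, 63, 31)   # full white
assert to_444(31, 63, 31) == (15, 15, 15)   # full white at 4:4:4
```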
FIG. 21 illustrates a block diagram of a WSG unit 2120 according to an alternative embodiment. The WSG unit 2120 employs a smaller number of logic gates than the WSG unit 818. In addition, the WSG unit 2120 does not require latency FIFOs, as the latency for S is constant (zero with respect to the saturation adjustment module 1932). Further, the latency for WSP is either 1 or 2. Instead of requiring latency FIFOs, flip-flop delays (not shown) may be employed. The WSG unit 2120 reflects empirical testing with a variety of color processing algorithms for color EPDs. The WSG unit 2120 includes LUT memory 2122, which may be 16 bits wide.
- The
LUT 2122 may be used to implement two or more configurations. FIG. 22 illustrates three possible configurations in which the LUT 2122 may be used. In a first configuration 2220, bits 0-7 of the LUT 2122 may be used to store values of saturation factor S, and bits 8-11 may be used to store values of the fourth pixel "WSP." In a second configuration 2222, bits 0-3 of the LUT 2122 may be used to store R values, bits 4-7 may be used to store G values, bits 8-11 may be used to store B values, and bits 12-15 may be used to store values of the fourth pixel WSP. In a third configuration 2224, bits 0-3 of the LUT 2122 may be used to store C values, bits 4-7 may be used to store M values, bits 8-11 may be used to store Y values, and bits 12-15 may be used to store values of the fourth pixel WSP. Accordingly, the output 2128 may output 8-bit S values, the R and G values of a 4:4:4:4 RGBW pixel, or the C and M values of a 4:4:4:4 CMYW pixel. In addition, the output 2130 may output 4-bit fourth pixel values that may be combined with RGB values. Alternatively, the output 2130 may output the Y and W values of a CMYW pixel. The second configuration 2222 and third configuration 2224 show that the WSG unit 2120 enables one-to-one mapping of input and output pixel values. A user may store desired values in the LUT 2122.
- Accordingly, it should be appreciated that the concepts disclosed in this specification can be used to develop and modify color processing algorithms for existing and future-developed color EPDs in a flexible manner. In many cases, the most desirable color processing algorithm for a particular EPD will depend on ambient lighting conditions and the type of image being rendered. The determination of a color processing algorithm for a particular EPD is a complex process involving many variables.
If an assumption is made that the EPD will be viewed in bright light, less upward adjustment of luma and saturation will likely be called for than in cases where it is assumed that the EPD will be viewed in relatively dim light. Similarly, different luma and saturation adjustments may be deemed optimum for viewing black and white text as compared with those desired for color photographs of human faces or natural landscapes.
- In one embodiment, parameters for programming or configuring first, second, third, and fourth color processing algorithms may be stored in either
system memory 130 or display controller memory 150. For example, the first color processing algorithm may be determined to be optimum for viewing a particular EPD rendering a text image in bright, natural ambient lighting conditions, e.g., sunlight. The second color processing algorithm may be determined to be optimum for viewing the particular EPD rendering a photographic image of a human face in bright, natural ambient lighting conditions. The third color processing algorithm may be determined to be optimum for viewing the particular EPD rendering the text image in low, artificial ambient lighting conditions, e.g., a tungsten light source in a darkened room. The third color processing algorithm may boost luma and saturation as compared with the first color processing algorithm. The fourth color processing algorithm may be determined to be optimum for viewing the particular EPD rendering the photographic image of a human face in low, artificial ambient lighting conditions. The fourth color processing algorithm may boost luma and saturation in a manner similar to the third algorithm and may additionally adjust color to correct for color distortion caused by the tungsten light source.
- The storing of two or more color processing algorithms in a memory allows selection and use of a color processing algorithm best suited for viewing conditions, image type, and display type. The determination of current viewing conditions may be made explicitly by an end user of the display system, or automatically through the use of the
image sensor 118. The end user may select a current viewing condition by choosing one of two or more predetermined options from a menu, e.g., sunlight, overcast outdoor light, bright indoor light, tungsten light, fluorescent light, etc. The image sensor 118 may determine both the ambient light level and the spectral components of the ambient light source.
- Similarly, the determination of image type may be made explicitly by an end user of the display system, or automatically. The end user may select a current image type by choosing one of two or more predetermined options from a menu, e.g., black and white text, black and white text including fewer than five highly saturated colors, color photograph of human face, color photograph of landscape, cartoon, etc. The determination of image type may be performed automatically by pre-coding the image file with image type, or by use of one or more known automatic image analysis techniques. As one example of an automatic image analysis technique, software or hardware may be used to prepare a color histogram of an image. Using the histogram, images may be categorized by color content. For example, a text image may be recognized as having characteristic color content. As another example, a facial image may be recognized as having one or more characteristic color contents. Once the foregoing determinations have been made, the most suitable color processing algorithm for the determined viewing conditions and image type may be retrieved from memory and used to program or configure the display system. When viewing conditions and image type change, the display system may be reconfigured, either automatically or explicitly by the user, to use a more suitable algorithm.
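The selection step described above can be sketched as a lookup keyed by the determined viewing condition and image type; the condition names and algorithm labels below are hypothetical stand-ins for stored register settings.

```python
# Hypothetical table for the four stored algorithms described earlier.
ALGORITHMS = {
    ("bright", "text"):  "algorithm_1",
    ("bright", "photo"): "algorithm_2",
    ("dim",    "text"):  "algorithm_3",  # boosts luma and saturation
    ("dim",    "photo"): "algorithm_4",  # also corrects a tungsten cast
}

def select_algorithm(lighting, image_type):
    # Return the stored algorithm best suited to the current
    # conditions; a real system would load configuration-register
    # values rather than a name.
    return ALGORITHMS[(lighting, image_type)]

assert select_algorithm("dim", "photo") == "algorithm_4"
```

When the sensor or the user reports a change in either key, re-running the lookup and reprogramming the display implements the reconfiguration described above.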
- In one embodiment, parameters for configuring multiple color processing algorithms may be stored in a memory, and the image to be rendered on a display device includes two or more images. For example, the image to be rendered includes a text image and a color photograph. The storing of two or more color processing algorithms in a memory allows selection and use of a color processing algorithm suited for the type of sub-image. Where there are two image types to be rendered simultaneously, a different color processing algorithm may be selected for each sub-image. Selection of a suitable color processing algorithm for each sub-image may be automatic using a known automatic image analysis technique, or may be explicitly made by an end user.
- In one embodiment, the selecting of the set of operations to include in a color processing algorithm (or the order in which selected operations are to be performed or the parameters used for particular operations) may be based on a determined optical property of an ambient light source, the determined image type, and the type of display device. For example, the image rendering characteristics of a particular type of electro-optic display device may be taken into consideration along with lighting conditions and image type when specifying a color processing algorithm.
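One way to key that lookup on all three factors, falling back from the most specific match to a default, might look like the following; the key tuples, table entries, and algorithm names are assumptions made for illustration:

```python
def select_algorithm(lighting, image_type, display_type, table, default="generic"):
    """Return the configured algorithm name for the most specific matching
    combination of ambient lighting, image type, and display device type."""
    for key in (
        (lighting, image_type, display_type),  # fully specific match
        (lighting, image_type, None),          # any display of this class
        (None, image_type, None),              # image type alone
    ):
        if key in table:
            return table[key]
    return default

# Example table for an electrophoretic display (EPD); entries are illustrative.
TABLE = {
    ("sunlight", "text", "EPD"): "high-contrast-threshold",
    (None, "photograph", None): "error-diffusion-dither",
}
```

The fallback order reflects the idea in the text that display characteristics refine, rather than replace, the choice driven by lighting and image type.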
- While the concepts disclosed in this specification have been described in terms of a display system having a display controller and a display device, it should be appreciated that the disclosed embodiments are exemplary. The disclosed concepts may be used with other types of display devices, including reflective and self-illuminating types. Moreover, the disclosed concepts may be used in any application, e.g., printing or projecting an image, where it is desired to modify the color characteristics of a digital image.
- In one embodiment, some or all of the operations and methods described in this description may be performed by hardware, software, or by a combination of hardware and software.
- In one embodiment, some or all of the operations and methods described in this description may be performed by executing instructions that are stored in or on a non-transitory computer-readable medium. The term “computer-readable medium” may include, but is not limited to, non-volatile memories, such as EPROMs, EEPROMs, ROMs, floppy disks, hard disks, flash memory, and optical media such as CD-ROMs and DVDs. The instructions may be executed by any suitable apparatus, e.g., the
host 122 or the display controller 128. When the instructions are executed, the apparatus performs physical machine operations. - In this description, references may be made to “one embodiment” or “an embodiment.” These references mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the claimed inventions. Thus, the phrases “in one embodiment” or “in an embodiment” in various places are not necessarily all referring to the same embodiment. Furthermore, particular features, structures, or characteristics may be combined in one or more embodiments.
- Although embodiments have been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. Accordingly, the described embodiments are to be considered as illustrative and not restrictive, and the claimed inventions are not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims. Further, the terms and expressions which have been employed in the foregoing specification are used as terms of description and not of limitation, and there is no intention in the use of such terms and expressions to exclude equivalents of the features shown and described or portions thereof, it being recognized that the scope of the inventions is defined and limited only by the claims which follow.
Claims (19)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/907,178 US20110285713A1 (en) | 2010-05-21 | 2010-10-19 | Processing Color Sub-Pixels |
JP2011109077A JP2011248358A (en) | 2010-05-21 | 2011-05-16 | Image processing method and image processing apparatus |
CN201110132591.4A CN102254540B (en) | 2010-05-21 | 2011-05-17 | Processing color sub-pixels |
KR1020110048166A KR101249083B1 (en) | 2010-05-21 | 2011-05-20 | Processing color sub-pixels |
EP11167003A EP2388773A3 (en) | 2010-05-21 | 2011-05-20 | Processing color sub-pixels |
TW100117855A TW201211978A (en) | 2010-05-21 | 2011-05-20 | Processing color sub-pixels |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US34726310P | 2010-05-21 | 2010-05-21 | |
US12/907,178 US20110285713A1 (en) | 2010-05-21 | 2010-10-19 | Processing Color Sub-Pixels |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110285713A1 true US20110285713A1 (en) | 2011-11-24 |
Family
ID=44972151
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/907,178 Abandoned US20110285713A1 (en) | 2010-05-21 | 2010-10-19 | Processing Color Sub-Pixels |
US12/907,208 Expired - Fee Related US8565522B2 (en) | 2010-05-21 | 2010-10-19 | Enhancing color images |
US12/907,189 Expired - Fee Related US8547394B2 (en) | 2010-05-21 | 2010-10-19 | Arranging and processing color sub-pixels |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/907,208 Expired - Fee Related US8565522B2 (en) | 2010-05-21 | 2010-10-19 | Enhancing color images |
US12/907,189 Expired - Fee Related US8547394B2 (en) | 2010-05-21 | 2010-10-19 | Arranging and processing color sub-pixels |
Country Status (5)
Country | Link |
---|---|
US (3) | US20110285713A1 (en) |
JP (1) | JP2011248358A (en) |
KR (1) | KR101249083B1 (en) |
CN (1) | CN102254540B (en) |
TW (1) | TW201211978A (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120062586A1 (en) * | 2010-09-15 | 2012-03-15 | Hon Hai Precision Industry Co., Ltd. | Projector and color improvement method of the projector |
US20130141480A1 (en) * | 2011-12-02 | 2013-06-06 | Industrial Technology Research Institute | System and method for improving visual effect of a display device |
WO2013175214A1 (en) * | 2012-05-23 | 2013-11-28 | Plastic Logic Limited | Electronic display |
US9170468B2 (en) | 2013-05-17 | 2015-10-27 | E Ink California, Llc | Color display device |
US9360733B2 (en) | 2012-10-02 | 2016-06-07 | E Ink California, Llc | Color display device |
US9383623B2 (en) | 2013-05-17 | 2016-07-05 | E Ink California, Llc | Color display device |
US9423666B2 (en) | 2011-09-23 | 2016-08-23 | E Ink California, Llc | Additive for improving optical performance of an electrophoretic display |
EP2997420A4 (en) * | 2013-05-17 | 2017-01-18 | E Ink California, LLC | Color display device with color filters |
US20170237957A1 (en) * | 2016-02-15 | 2017-08-17 | Samsung Electronics Co., Ltd. | Image sensor and method of generating restoration image |
US9778537B2 (en) | 2011-09-23 | 2017-10-03 | E Ink California, Llc | Additive particles for improving optical performance of an electrophoretic display |
WO2018011123A1 (en) * | 2016-07-13 | 2018-01-18 | Robert Bosch Gmbh | Light-sensor module, method for operating a light-sensor module and method for producing a light-sensor module |
US9997115B2 (en) | 2014-10-31 | 2018-06-12 | E Ink Holdings Inc. | Electrophoretic display apparatus and image processing method thereof |
US10147366B2 (en) | 2014-11-17 | 2018-12-04 | E Ink California, Llc | Methods for driving four particle electrophoretic display |
CN109946901A (en) * | 2014-09-26 | 2019-06-28 | 伊英克公司 | Color set for the low resolution shake in reflective color display |
US10460653B2 (en) | 2017-05-26 | 2019-10-29 | Microsoft Technology Licensing, Llc | Subpixel wear compensation for graphical displays |
US20220180824A1 (en) * | 2020-12-08 | 2022-06-09 | E Ink Corporation | Methods for driving electro-optic displays |
WO2023004234A1 (en) * | 2021-07-20 | 2023-01-26 | OLEDWorks LLC | Display with three regions of color space |
US20230139706A1 (en) * | 2020-05-31 | 2023-05-04 | E Ink Corporation | Electro-optic displays, and methods for driving same |
US20240177647A1 (en) * | 2021-08-11 | 2024-05-30 | Zte Corporation | Compensation method for a display area with an under-display camera, device, and storage medium |
Families Citing this family (90)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8477247B2 (en) | 2008-09-30 | 2013-07-02 | Intel Corporation | Joint enhancement of lightness, color and contrast of images and video |
US8363067B1 (en) * | 2009-02-05 | 2013-01-29 | Matrox Graphics, Inc. | Processing multiple regions of an image in a graphics display system |
US8576243B2 (en) * | 2010-10-08 | 2013-11-05 | Hewlett-Packard Development Company, L.P. | Display-color function image conversion |
WO2012049845A1 (en) * | 2010-10-12 | 2012-04-19 | パナソニック株式会社 | Color signal processing device |
US20120154423A1 (en) * | 2010-12-16 | 2012-06-21 | Apple Inc. | Luminance-based dithering technique |
CN103339944B (en) | 2011-01-31 | 2016-09-07 | 马维尔国际贸易有限公司 | Color monitor performs the system and method that the color of pixel adjusts |
TWI428902B (en) * | 2011-12-07 | 2014-03-01 | Orise Technology Co Ltd | Pixel data conversion apparatus and method for display with delta panel arrangement |
TWI447693B (en) * | 2011-12-07 | 2014-08-01 | Orise Technology Co Ltd | Pixel data conversion apparatus and method for display with delta panel arrangement |
US9472163B2 (en) * | 2012-02-17 | 2016-10-18 | Monotype Imaging Inc. | Adjusting content rendering for environmental conditions |
US9165526B2 (en) | 2012-02-28 | 2015-10-20 | Shenzhen Yunyinggu Technology Co., Ltd. | Subpixel arrangements of displays and method for rendering the same |
WO2013158592A2 (en) * | 2012-04-16 | 2013-10-24 | Magna Electronics, Inc. | Vehicle vision system with reduced image color data processing by use of dithering |
KR101948316B1 (en) * | 2012-07-25 | 2019-04-25 | 리쿠아비스타 비.브이. | Electrowetting display device and fabrication method of the same |
US8897580B2 (en) | 2012-07-30 | 2014-11-25 | Apple Inc. | Error diffusion with color conversion and encoding |
US9245485B1 (en) * | 2012-09-13 | 2016-01-26 | Amazon Technologies, Inc. | Dithering techniques for electronic paper displays |
US11017705B2 (en) | 2012-10-02 | 2021-05-25 | E Ink California, Llc | Color display device including multiple pixels for driving three-particle electrophoretic media |
US9665973B2 (en) * | 2012-11-20 | 2017-05-30 | Intel Corporation | Depth buffering |
JP2014106289A (en) * | 2012-11-26 | 2014-06-09 | Sony Corp | Display device, electronic equipment and method for driving display device |
CN105556944B (en) * | 2012-11-28 | 2019-03-08 | 核心光电有限公司 | Multiple aperture imaging system and method |
US9183812B2 (en) * | 2013-01-29 | 2015-11-10 | Pixtronix, Inc. | Ambient light aware display apparatus |
US20140225912A1 (en) * | 2013-02-11 | 2014-08-14 | Qualcomm Mems Technologies, Inc. | Reduced metamerism spectral color processing for multi-primary display devices |
US9759980B2 (en) | 2013-04-18 | 2017-09-12 | Eink California, Llc | Color display device |
US9285649B2 (en) | 2013-04-18 | 2016-03-15 | E Ink California, Llc | Color display device |
JP6139713B2 (en) | 2013-06-13 | 2017-05-31 | コアフォトニクス リミテッド | Dual aperture zoom digital camera |
CN108519655A (en) | 2013-07-04 | 2018-09-11 | 核心光电有限公司 | Small-sized focal length lens external member |
CN108989649B (en) | 2013-08-01 | 2021-03-19 | 核心光电有限公司 | Thin multi-aperture imaging system with auto-focus and method of use thereof |
TWI534520B (en) | 2013-10-11 | 2016-05-21 | 電子墨水加利福尼亞有限責任公司 | Color display device |
GB2519777B (en) * | 2013-10-30 | 2020-06-17 | Flexenable Ltd | Display systems and methods |
KR102117775B1 (en) | 2014-01-14 | 2020-06-01 | 이 잉크 캘리포니아 엘엘씨 | Full color display device |
JP2015152645A (en) * | 2014-02-10 | 2015-08-24 | シナプティクス・ディスプレイ・デバイス合同会社 | Image processing apparatus, image processing method, display panel driver, and display apparatus |
TWI515710B (en) * | 2014-02-17 | 2016-01-01 | 友達光電股份有限公司 | Method for driving display |
WO2015127045A1 (en) | 2014-02-19 | 2015-08-27 | E Ink California, Llc | Color display device |
CN103886809B (en) * | 2014-02-21 | 2016-03-23 | 北京京东方光电科技有限公司 | Display packing and display device |
US10380955B2 (en) | 2014-07-09 | 2019-08-13 | E Ink California, Llc | Color display device and driving methods therefor |
US9922603B2 (en) | 2014-07-09 | 2018-03-20 | E Ink California, Llc | Color display device and driving methods therefor |
US10891906B2 (en) | 2014-07-09 | 2021-01-12 | E Ink California, Llc | Color display device and driving methods therefor |
JP6441449B2 (en) | 2014-07-09 | 2018-12-19 | イー インク カリフォルニア, エルエルシー | Color display device |
JP2016024276A (en) * | 2014-07-17 | 2016-02-08 | 株式会社ジャパンディスプレイ | Display device |
JP6462259B2 (en) * | 2014-07-22 | 2019-01-30 | 株式会社ジャパンディスプレイ | Image display device and image display method |
US9392188B2 (en) | 2014-08-10 | 2016-07-12 | Corephotonics Ltd. | Zoom dual-aperture camera with folded lens |
KR102275712B1 (en) * | 2014-10-31 | 2021-07-09 | 삼성전자주식회사 | Rendering method and apparatus, and electronic apparatus |
US10205940B1 (en) | 2014-12-15 | 2019-02-12 | Amazon Technologies, Inc. | Determining calibration settings for displaying content on a monitor |
US9927600B2 (en) | 2015-04-16 | 2018-03-27 | Corephotonics Ltd | Method and system for providing auto focus and optical image stabilization in a compact folded camera |
KR102306652B1 (en) * | 2015-04-28 | 2021-09-29 | 삼성디스플레이 주식회사 | Display device and driving method thereof |
KR102348760B1 (en) * | 2015-07-24 | 2022-01-07 | 삼성전자주식회사 | Image sensor and signal processing method thereof |
EP3787281B1 (en) | 2015-08-13 | 2024-08-21 | Corephotonics Ltd. | Dual aperture zoom camera with video support and switching / non-switching dynamic control |
JP2017040733A (en) * | 2015-08-19 | 2017-02-23 | 株式会社ジャパンディスプレイ | Display device |
CN105430361B (en) * | 2015-12-18 | 2018-03-20 | 广东欧珀移动通信有限公司 | Imaging method, imaging sensor, imaging device and electronic installation |
US10074321B2 (en) * | 2016-01-05 | 2018-09-11 | Amazon Technologies, Inc. | Controller and methods for quantization and error diffusion in an electrowetting display device |
US10600213B2 (en) * | 2016-02-27 | 2020-03-24 | Focal Sharp, Inc. | Method and apparatus for color-preserving spectrum reshape |
US10488631B2 (en) | 2016-05-30 | 2019-11-26 | Corephotonics Ltd. | Rotational ball-guided voice coil motor |
KR102390572B1 (en) | 2016-07-07 | 2022-04-25 | 코어포토닉스 리미티드 | Linear ball guided voice coil motor for folded optic |
US10545242B2 (en) * | 2016-09-14 | 2020-01-28 | Apple Inc. | Systems and methods for in-frame sensing and adaptive sensing control |
US10403192B2 (en) * | 2016-09-22 | 2019-09-03 | Apple Inc. | Dithering techniques for electronic displays |
US20180137602A1 (en) * | 2016-11-14 | 2018-05-17 | Google Inc. | Low resolution rgb rendering for efficient transmission |
KR102269547B1 (en) | 2016-12-28 | 2021-06-25 | 코어포토닉스 리미티드 | Folded camera structure with extended light-folding-element scanning range |
CN113791484A (en) | 2017-01-12 | 2021-12-14 | 核心光电有限公司 | Compact folding camera and method of assembling the same |
US10444592B2 (en) * | 2017-03-09 | 2019-10-15 | E Ink Corporation | Methods and systems for transforming RGB image data to a reduced color set for electro-optic displays |
US10684482B2 (en) * | 2017-04-05 | 2020-06-16 | Facebook Technologies, Llc | Corrective optics for reducing fixed pattern noise in head mounted displays |
CN106898291B (en) * | 2017-04-28 | 2019-08-02 | 武汉华星光电技术有限公司 | The driving method and driving device of display panel |
CN111295182A (en) | 2017-11-14 | 2020-06-16 | 伊英克加利福尼亚有限责任公司 | Electrophoretic active substance delivery system comprising a porous conductive electrode layer |
KR102424791B1 (en) | 2017-11-23 | 2022-07-22 | 코어포토닉스 리미티드 | Compact folded camera structure |
CN114609746A (en) | 2018-02-05 | 2022-06-10 | 核心光电有限公司 | Folding camera device |
TWI657690B (en) * | 2018-03-07 | 2019-04-21 | 奇景光電股份有限公司 | Image processing device |
CN111936908B (en) | 2018-04-23 | 2021-12-21 | 核心光电有限公司 | Optical path folding element with extended two-degree-of-freedom rotation range |
CN108806615B (en) * | 2018-05-25 | 2020-09-01 | 福州大学 | Novel pixel data encoding method and device for electrowetting display |
CN109033815A (en) * | 2018-06-15 | 2018-12-18 | 国网浙江省电力有限公司 | Webshell detection method based on matrix decomposition |
US11302234B2 (en) * | 2018-08-07 | 2022-04-12 | Facebook Technologies, Llc | Error correction for display device |
WO2020039302A1 (en) | 2018-08-22 | 2020-02-27 | Corephotonics Ltd. | Two-state zoom folded camera |
US10672363B2 (en) * | 2018-09-28 | 2020-06-02 | Apple Inc. | Color rendering for images in extended dynamic range mode |
US11302288B2 (en) | 2018-09-28 | 2022-04-12 | Apple Inc. | Ambient saturation adaptation |
US11024260B2 (en) | 2018-09-28 | 2021-06-01 | Apple Inc. | Adaptive transfer functions |
KR102613309B1 (en) * | 2018-10-25 | 2023-12-14 | 삼성전자주식회사 | Display apparatus consisting multi display system and control method thereof |
US11138765B2 (en) * | 2018-12-10 | 2021-10-05 | Gopro, Inc. | Non-linear color correction |
TWI711005B (en) * | 2019-03-14 | 2020-11-21 | 宏碁股份有限公司 | Method for adjusting luminance of images and computer program product |
CN110097505A (en) * | 2019-05-16 | 2019-08-06 | 中国人民解放军海军工程大学 | A kind of Law of DEM Data processing method and processing device |
CN114728155B (en) | 2019-11-27 | 2024-04-26 | 伊英克公司 | Benefit agent delivery system including microcells with electrolytic seal layers |
US11949976B2 (en) | 2019-12-09 | 2024-04-02 | Corephotonics Ltd. | Systems and methods for obtaining a smart panoramic image |
DE102021109047A1 (en) * | 2020-04-16 | 2021-10-21 | Panasonic Intellectual Property Management Co., Ltd. | Image processing apparatus, image processing method, and image processing system |
CN113522789B (en) * | 2020-04-17 | 2023-04-18 | 盐城阿特斯阳光能源科技有限公司 | Method for determining key parameters for sorting battery pieces |
US11216631B2 (en) * | 2020-04-24 | 2022-01-04 | Avocado Scan, Inc. | Contrast edge barcodes |
KR102495627B1 (en) | 2020-05-17 | 2023-02-06 | 코어포토닉스 리미티드 | Image stitching in the presence of a full-field reference image |
EP3966631B1 (en) | 2020-05-30 | 2023-01-25 | Corephotonics Ltd. | Systems and methods for obtaining a super macro image |
KR20230004887A (en) | 2020-07-15 | 2023-01-06 | 코어포토닉스 리미티드 | Point of view aberrations correction in a scanning folded camera |
US11637977B2 (en) | 2020-07-15 | 2023-04-25 | Corephotonics Ltd. | Image sensors and sensing methods to obtain time-of-flight and phase detection information |
CN114067758B (en) * | 2020-08-05 | 2022-09-13 | 青岛海信移动通信技术股份有限公司 | Mobile terminal and image display method thereof |
CN112884753A (en) * | 2021-03-10 | 2021-06-01 | 杭州申昊科技股份有限公司 | Track fastener detection and classification method based on convolutional neural network |
CN115868168A (en) | 2021-03-11 | 2023-03-28 | 核心光电有限公司 | System for pop-up camera |
CN113299247B (en) * | 2021-06-08 | 2022-01-07 | 广州文石信息科技有限公司 | Method and related device for optimizing display effect of color electronic ink screen |
WO2022259154A2 (en) | 2021-06-08 | 2022-12-15 | Corephotonics Ltd. | Systems and cameras for tilting a focal plane of a super-macro image |
CN117392184A (en) * | 2023-11-22 | 2024-01-12 | 江苏信息职业技术学院 | Novel registration and dithering algorithm for multiple target colors of color image |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6100872A (en) * | 1993-05-25 | 2000-08-08 | Canon Kabushiki Kaisha | Display control method and apparatus |
US7420571B2 (en) * | 2003-11-26 | 2008-09-02 | Lg Electronics Inc. | Method for processing a gray level in a plasma display panel and apparatus using the same |
US20090058873A1 (en) * | 2005-05-20 | 2009-03-05 | Clairvoyante, Inc | Multiprimary Color Subpixel Rendering With Metameric Filtering |
US7515119B2 (en) * | 2003-03-21 | 2009-04-07 | Lg Electronics Inc. | Method and apparatus for calculating an average picture level and plasma display using the same |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5701135A (en) * | 1993-05-25 | 1997-12-23 | Canon Kabushiki Kaisha | Display control method and apparatus |
US6714180B1 (en) * | 1999-01-13 | 2004-03-30 | Intel Corporation | Automatic control of gray scaling algorithms |
EP1324305A4 (en) * | 2000-10-03 | 2006-10-11 | Seiko Epson Corp | Image processing method, image processing apparatus, electronic device, image processing program, and recorded medium on which the program is recorded |
US7583279B2 (en) * | 2004-04-09 | 2009-09-01 | Samsung Electronics Co., Ltd. | Subpixel layouts and arrangements for high brightness displays |
JP4506092B2 (en) | 2003-03-27 | 2010-07-21 | セイコーエプソン株式会社 | Image processing method, image processing apparatus, and display device |
KR20040094084A (en) * | 2003-05-01 | 2004-11-09 | 엘지전자 주식회사 | Plasma Display Panel and Driving Method thereof |
US7221374B2 (en) * | 2003-10-21 | 2007-05-22 | Hewlett-Packard Development Company, L.P. | Adjustment of color in displayed images based on identification of ambient light sources |
KR100698284B1 (en) * | 2004-12-16 | 2007-03-22 | 삼성전자주식회사 | Apparatus and method for color error reduction in display of subpixel structure |
US20060158466A1 (en) * | 2005-01-18 | 2006-07-20 | Sitronix Technology Corp. | Shared pixels rendering display |
KR20070009015A (en) * | 2005-07-14 | 2007-01-18 | 삼성전자주식회사 | Electro phoretic indication display and driving method of eletro phoretic indication display |
JP2009116187A (en) | 2007-11-08 | 2009-05-28 | Toshiba Matsushita Display Technology Co Ltd | Display device |
-
2010
- 2010-10-19 US US12/907,178 patent/US20110285713A1/en not_active Abandoned
- 2010-10-19 US US12/907,208 patent/US8565522B2/en not_active Expired - Fee Related
- 2010-10-19 US US12/907,189 patent/US8547394B2/en not_active Expired - Fee Related
-
2011
- 2011-05-16 JP JP2011109077A patent/JP2011248358A/en not_active Withdrawn
- 2011-05-17 CN CN201110132591.4A patent/CN102254540B/en not_active Expired - Fee Related
- 2011-05-20 TW TW100117855A patent/TW201211978A/en unknown
- 2011-05-20 KR KR1020110048166A patent/KR101249083B1/en not_active IP Right Cessation
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6100872A (en) * | 1993-05-25 | 2000-08-08 | Canon Kabushiki Kaisha | Display control method and apparatus |
US7515119B2 (en) * | 2003-03-21 | 2009-04-07 | Lg Electronics Inc. | Method and apparatus for calculating an average picture level and plasma display using the same |
US7420571B2 (en) * | 2003-11-26 | 2008-09-02 | Lg Electronics Inc. | Method for processing a gray level in a plasma display panel and apparatus using the same |
US20090058873A1 (en) * | 2005-05-20 | 2009-03-05 | Clairvoyante, Inc | Multiprimary Color Subpixel Rendering With Metameric Filtering |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8605106B2 (en) * | 2010-09-15 | 2013-12-10 | Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd. | Projector and color improvement method of the projector |
US20120062586A1 (en) * | 2010-09-15 | 2012-03-15 | Hon Hai Precision Industry Co., Ltd. | Projector and color improvement method of the projector |
US9423666B2 (en) | 2011-09-23 | 2016-08-23 | E Ink California, Llc | Additive for improving optical performance of an electrophoretic display |
US10578943B2 (en) | 2011-09-23 | 2020-03-03 | E Ink California, Llc | Additive particles for improving optical performance of an electrophoretic display |
US9778537B2 (en) | 2011-09-23 | 2017-10-03 | E Ink California, Llc | Additive particles for improving optical performance of an electrophoretic display |
US20130141480A1 (en) * | 2011-12-02 | 2013-06-06 | Industrial Technology Research Institute | System and method for improving visual effect of a display device |
US9053557B2 (en) * | 2011-12-02 | 2015-06-09 | Industrial Technology Research Institute | System and method for improving visual effect of a display device |
US9514691B2 (en) * | 2012-05-23 | 2016-12-06 | Flexenable Limited | Electronic display |
US20150097879A1 (en) * | 2012-05-23 | 2015-04-09 | Plastic Logic Limited | Electronic display |
WO2013175214A1 (en) * | 2012-05-23 | 2013-11-28 | Plastic Logic Limited | Electronic display |
US9360733B2 (en) | 2012-10-02 | 2016-06-07 | E Ink California, Llc | Color display device |
US9170468B2 (en) | 2013-05-17 | 2015-10-27 | E Ink California, Llc | Color display device |
US9646547B2 (en) | 2013-05-17 | 2017-05-09 | E Ink California, Llc | Color display device |
US9383623B2 (en) | 2013-05-17 | 2016-07-05 | E Ink California, Llc | Color display device |
EP3264170A1 (en) * | 2013-05-17 | 2018-01-03 | E Ink California, LLC | Color display device with color filters |
EP2997420A4 (en) * | 2013-05-17 | 2017-01-18 | E Ink California, LLC | Color display device with color filters |
US11846861B2 (en) | 2014-09-26 | 2023-12-19 | E Ink Corporation | Color sets for low resolution dithering in reflective color displays |
US11402718B2 (en) | 2014-09-26 | 2022-08-02 | E Ink Corporation | Color sets for low resolution dithering in reflective color displays |
US10353266B2 (en) | 2014-09-26 | 2019-07-16 | E Ink Corporation | Color sets for low resolution dithering in reflective color displays |
CN109946901A (en) * | 2014-09-26 | 2019-06-28 | 伊英克公司 | Color set for the low resolution shake in reflective color display |
US9997115B2 (en) | 2014-10-31 | 2018-06-12 | E Ink Holdings Inc. | Electrophoretic display apparatus and image processing method thereof |
US10147366B2 (en) | 2014-11-17 | 2018-12-04 | E Ink California, Llc | Methods for driving four particle electrophoretic display |
US10431168B2 (en) | 2014-11-17 | 2019-10-01 | E Ink California, Llc | Methods for driving four particle electrophoretic display |
US10891907B2 (en) | 2014-11-17 | 2021-01-12 | E Ink California, Llc | Electrophoretic display including four particles with different charges and optical characteristics |
US10586499B2 (en) | 2014-11-17 | 2020-03-10 | E Ink California, Llc | Electrophoretic display including four particles with different charges and optical characteristics |
US10171782B2 (en) * | 2016-02-15 | 2019-01-01 | Samsung Electronics Co., Ltd. | Image sensor and method of generating restoration image |
US20170237957A1 (en) * | 2016-02-15 | 2017-08-17 | Samsung Electronics Co., Ltd. | Image sensor and method of generating restoration image |
US10868987B2 (en) | 2016-07-13 | 2020-12-15 | Robert Bosch Gmbh | Light-sensor module, method for operating a light-sensor module and method for producing a light-sensor module |
US20190320130A1 (en) * | 2016-07-13 | 2019-10-17 | Robert Bosch Gmbh | Light-sensor module, method for operating a light-sensor module and method for producing a light-sensor module |
WO2018011123A1 (en) * | 2016-07-13 | 2018-01-18 | Robert Bosch Gmbh | Light-sensor module, method for operating a light-sensor module and method for producing a light-sensor module |
US10460653B2 (en) | 2017-05-26 | 2019-10-29 | Microsoft Technology Licensing, Llc | Subpixel wear compensation for graphical displays |
US20230139706A1 (en) * | 2020-05-31 | 2023-05-04 | E Ink Corporation | Electro-optic displays, and methods for driving same |
US20220180824A1 (en) * | 2020-12-08 | 2022-06-09 | E Ink Corporation | Methods for driving electro-optic displays |
US11657772B2 (en) * | 2020-12-08 | 2023-05-23 | E Ink Corporation | Methods for driving electro-optic displays |
WO2023004234A1 (en) * | 2021-07-20 | 2023-01-26 | OLEDWorks LLC | Display with three regions of color space |
US20240177647A1 (en) * | 2021-08-11 | 2024-05-30 | Zte Corporation | Compensation method for a display area with an under-display camera, device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US8547394B2 (en) | 2013-10-01 |
KR101249083B1 (en) | 2013-04-01 |
TW201211978A (en) | 2012-03-16 |
CN102254540B (en) | 2014-09-24 |
JP2011248358A (en) | 2011-12-08 |
US20110285746A1 (en) | 2011-11-24 |
KR20110128253A (en) | 2011-11-29 |
US20110285714A1 (en) | 2011-11-24 |
CN102254540A (en) | 2011-11-23 |
US8565522B2 (en) | 2013-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8547394B2 (en) | Arranging and processing color sub-pixels | |
EP2388773A2 (en) | Processing color sub-pixels | |
CN109937444B (en) | Display device | |
US9997135B2 (en) | Method for producing a color image and imaging device employing same | |
KR101007714B1 (en) | INput Gamma Dithering Systems and Methods | |
KR101048374B1 (en) | Histogram Based Dynamic Backlight Control Systems and Methods | |
US9430986B2 (en) | Color signal processing device | |
KR101048375B1 (en) | Post-Color Space Conversion Processing System and Method | |
US9578296B2 (en) | Signal conversion apparatus and method, and program and recording medium | |
KR100970260B1 (en) | Adaptive backlight control dampening to reduce flicker | |
WO2019119794A1 (en) | Driving method and driving apparatus for display apparatus | |
KR20080031947A (en) | A method and apparatus for converting colour signals for driving an rgbw display and a display using the same | |
JP2013513835A (en) | Method and system for backlight control using statistical attributes of image data blocks | |
KR20150110507A (en) | Method for producing a color image and imaging device employing same | |
US20080122861A1 (en) | System and method to generate multiprimary signals | |
US20100026705A1 (en) | Systems and methods for reducing desaturation of images rendered on high brightness displays | |
EP2523187A1 (en) | Electronic device, method for adjusting color saturation, program therefor, and recording medium | |
US9311886B2 (en) | Display device including signal processing unit that converts an input signal for an input HSV color space, electronic apparatus including the display device, and drive method for the display device | |
CN109377966B (en) | Display method, system and display device | |
CN109461419B (en) | Display data processing method and system and display device | |
CN115762380A (en) | Display method and display device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: EPSON RESEARCH & DEVELOPMENT, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SWIC, JERZY WIESLAW;SONG, JILIANG;LIN, CHUN-LIANG;REEL/FRAME:025159/0523 Effective date: 20101013 |
|
AS | Assignment |
Owner name: SEIKO EPSON CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EPSON RESEARCH & DEVELOPMENT, INC.;REEL/FRAME:025205/0475 Effective date: 20101025 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |