US20140375670A1 - Field sequential color encoding for displays - Google Patents
- Publication number
- US20140375670A1 (application US14/326,190)
- Authority
- US
- United States
- Prior art keywords
- video
- color
- image
- primary
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/68—Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
- G09G3/2003—Display of colours
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
- G09G3/2007—Display of intermediate tones
- G09G3/2018—Display of intermediate tones by time modulation using two or more time intervals
- G09G3/2022—Display of intermediate tones by time modulation using two or more time intervals using sub-frames
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3102—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM] using two-dimensional electronic spatial light modulators
- H04N9/3111—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM] using two-dimensional electronic spatial light modulators for displaying the colours sequentially, e.g. by using sequentially activated light sources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3102—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM] using two-dimensional electronic spatial light modulators
- H04N9/312—Driving therefor
- H04N9/3123—Driving therefor using pulse width modulation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3197—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM] using light modulating optical valves
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2310/00—Command of the display device
- G09G2310/02—Addressing, scanning or driving the display screen or processing steps related thereto
- G09G2310/0235—Field-sequential colour display
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/02—Improving the quality of display appearance
- G09G2320/0261—Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/02—Improving the quality of display appearance
- G09G2320/0266—Reduction of sub-frame artefacts
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/16—Calculation or use of calculated indices related to luminance levels in display data
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
- G09G3/34—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
- G09G3/3433—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source using light modulating elements actuated by an electric field and being other than liquid crystal devices and electrochromic devices
Definitions
- the present application relates to the field of field sequential color display systems, and more particularly to methods of encoding data to control the data input to individual pixels of an array of pixels and/or to control the data input to illumination light sources for enhancing the visual performance of field sequential color displays, whether in a direct-view display system or a projection-based display system.
- Data encoding methods or algorithms are utilized in electronic video displays, particularly with respect to flat panel display systems to selectively control the bursts of locally transmitted primary light emitted from individual pixels disposed across the display surface.
- One example for the application of such encoding algorithms is a direct-view flat panel display system that uses sequentially-pulsed bursts of red, green, and blue colored light (i.e., primary colored light) emanating from the display surface to create a sequence of primary color images, also referred to as primary color subframes, that integrated together form a full color image or frame by the temporal mixing of emitted primary light that is being directed from the display surface to a viewer.
- A term commonly used to define this technique is field sequential color (FSC).
- the human eye (i.e., human visual system) of the viewer effectively integrates the pulsed light from a light source to form the perception of a level of light intensity of each primary color (i.e., primary subframe).
- the gray scale level generated at each pixel location on the display surface is proportional to the percentage of time the pixel is ON during the primary color subframe time, t_color.
- the frame rates at which this occurs are high enough to create the illusion of a continuous stable image, rather than a flickering one (i.e., a noticeable series of primary color subframes).
- The shade of that primary color emitted by an individual pixel is controlled by encoding data that selectively controls the appropriate fraction of t_color (i.e., the amount of time) that the individual pixel is open during the time period t_color. This technique of gray scale generation is referred to as pulse width modulation (PWM).
- the embodiments of the present disclosure provide various data encoding methods that reduce motional color breakup native to field sequential color displays. Some of the embodiments also provide a means to reduce demands on pixel response speeds.
- a first embodiment of the present disclosure is a method including modulating an intensity of the illumination means of the display system in tightly-coordinated conjunction with the temporal modulation of the pixel actuation sequence.
- a second embodiment of the present disclosure is a method including hard-wired front-loaded bit weighting to enhance perceived mitigation of motional artifacts using virtual aggregate pulse width truncation.
- a third embodiment is a method including bit-splitting to divide higher order bits (such as the most significant bit (MSB)) which have the longest temporal duration, into smaller subunits that may be distributed and interleaved (i.e., intermixed) for all three stimulus colors. Bit-splitting across all three tristimulus colored lights may be combined with the principle of intensity modulation of the illumination source disclosed in the first embodiment.
- a fourth embodiment is a method including distributing pulse-width-modulated temporal pulses so as to average out the relative bit weights over the entire tristimulus sequence comprising a full color frame, thereby interleaving the various tristimulus components to provide the best image quality as empirically determined by actual visual performance of such systems.
- a fifth embodiment is a method including distributing pulse-width-modulated temporal pulses so as to front-load all the most significant bits relative to the specific single video frame being generated. This embodiment hardwires the bit sequence from the most significant bits of all three primaries being encoded first, followed by the next most significant bits of all three primaries, and so on down to the least significant bits.
- a sixth embodiment is a method including real-time evaluation of each video frame being generated to determine the particular rearrangement of the bits to be displayed. In contrast to the hardwired bit sequence of the fifth embodiment, the sixth embodiment dynamically adjusts the bit sequence in light of the exigencies of each frame's program content, thereby insuring that the correct bits are truly weighted to the front of the aggregate pulse being encoded across all primaries.
- a seventh embodiment is a method including determining in a received plurality of video frames that an object to be displayed in a foreground of a video image is in motion relative to a background of the video image, and modifying a gray scale of the video frames associated with the object to be displayed in the foreground of the video image.
- FIG. 1 illustrates a temporal breakdown of binary-weighted consecutive order bits for a single primary color according to conventional pulse width modulation methods to secure gray scales at a bit depth of 8, representing 256 different brightness levels from black to white, generated within 1/180 of a second, with blackout (blanking) periods between each bit level omitted;
- FIG. 2 illustrates a concatenation of three consecutive 8-bit gray-scale units of 1/180 th of a second in duration such as shown in FIG. 1 , with the first block representing one tristimulus color (e.g., red), the second block representing the second tristimulus color (e.g., green), and the third block representing the third tristimulus color (e.g., blue), such that the entire temporal period shown of 1/60 th of a second duration represents the switching template required to generate full color images on a display at a total color bit depth of 24 (approximately 16.7 million colors);
- FIGS. 3A-3G compare the original temporal subdivision of a single encoded primary color depicted in FIG. 3A (previously shown in FIG. 1 ) with: FIG. 3B, which illustrates an intensity-modulated equivalent that elects to truncate the total duration of the primary color pulse by postpositing the accrued time savings; FIG. 3C, which illustrates an alternate intensity-modulated equivalent that elects to redistribute the time saved over all relevant bits to reduce addressing overhead times intrinsic to the high speed operation of field sequential color-based display systems; FIG. 3D, which illustrates the aggregate pulse encoding for all three tristimulus primaries (as previously shown in FIG. 2 ); FIG. 3E, which illustrates an intensity-modulated variation that extends the principle of FIG. 3B to all three primaries in the full color encoded packet; FIG. 3F, which illustrates actual noncontiguous interleaving of the respective primary bits weighted such that the higher order bits appear earlier in the aggregate full color sequence of pulses; and FIG. 3G, which illustrates the combining of intensity modulation with actual noncontiguous interleaving of the respective primary bits weighted such that the higher order bits appear earlier in the aggregate full color sequence of pulses;
- FIG. 4A illustrates a subset of the pulse width modulated durations for a single primary color shown in FIG. 1 , namely, the four highest order bits, omitting the four lowest order bits for illustrative purposes;
- FIG. 4B illustrates the splitting of the durations for a single primary color shown in FIG. 4A into fractional subdivisions bearing the same duration as the smallest illustrated bit width, without any rearrangement of the bits in terms of sequence;
- FIG. 4C illustrates the rearrangement of the split bits for a single primary color previously shown without rearrangement in FIG. 4B such that the higher order bits are distributed as evenly as possible over the entire duration of the primary color generation time;
- FIG. 5A illustrates the full color cycle for all three primary colors as formerly shown in FIG. 3E with intensity modulation applied to the sequence, with only the four highest order bits being annotated in the figure;
- FIG. 5B illustrates the mixing of several levels of intensity modulation and pulse width modulation as applied only to the four highest order bits for the entire color cycle comprised of all three primary colors;
- FIG. 5C illustrates the interleaving of the various intensities and pulse widths shown in FIG. 5B so as to best average out the intensity for any given primary color across the entire aggregate pulse duration for the entire three-primary video frame being displayed;
- FIG. 5D illustrates the hard-wired interleaving of the various intensities and pulse widths shown in FIG. 5B so as to force all the highest order bits, regardless of program content, to be displayed at the beginning of the aggregate pulse comprising the video frame for all three primary colors;
- FIG. 5E illustrates one possible result due to real-time analysis of each video frame to determine which bits are most important based on image content, therefore making the frame-to-frame sequence of the intensities and pulse widths to be dynamically variable based on the image content of the frame being displayed;
- FIG. 6 sets forth a method for dynamically altering the gray scale depth for regions of the video display that are contextually shown, according to real-time analysis of consecutive frames within the display system's video cache, to represent foreground objects in relative motion against the perceived background, such that if a detectable threshold for motional artifact generation is crossed by such motion, the moving foreground object is posterized and/or quantized to permit maximum front-loading of image data during the full three-primary color pulse displaying the frames in question;
- FIG. 7 illustrates a perspective view of a direct view flat panel display suitable for implementation of the present invention
- FIG. 8A is a side view schematic of a single pixel in an OFF state
- FIG. 8B illustrates the pixel shown in FIG. 8A in an ON state
- FIG. 9A is a side view schematic of a single pixel in an OFF state, wherein the pixel has collector-coupler features
- FIG. 9B illustrates the pixel shown in FIG. 9A in an ON state
- FIG. 10 illustrates what causes the phenomenon of color image breakup when an observer views an image generated using field sequential color generation techniques during rotational motion of the observer's eye
- FIG. 11A illustrates the perceived image that is desired irrespective of eye rotation and/or other motion in accordance with an embodiment of the present invention.
- FIG. 11B illustrates the phenomenon of color image breakup by depicting a perceived image due to eye rotation and/or other motion.
- One possible display technology to be enhanced may be the current iteration of the display technology originally disclosed in U.S. Pat. No. 5,319,491, wherein pixels emit light using the principle of frustrated total internal reflection within a display architecture that leverages principles of field sequential color generation and pulse width modulated gray scale creation.
- light is edge-injected into a planar slab waveguide and undergoes total internal reflection (TIR) within the guide, trapping the light inside it.
- Edge-injected light may comprise any number of consecutive primary colored lights, for example three primary colored lights (also referred to as tristimulus primaries), synchronously clocked to a desirable global frame rate (e.g., 60 Hz, requiring a 180 Hz rate to accommodate each of the three primaries utilized therein, namely, red, green, and blue).
- Pixels are electrostatically controlled MEMS structures that propel a thin film layer, hereinafter termed the “active layer”, which is controllably deformable across a microscopic gap (measuring between 300 and 1000 nanometers) into contact or near-contact with the waveguide, at which point light transits across from the waveguide to the active layer either by direct contact propagation and/or by way of evanescent coupling.
- actuation is deemed completed (i.e., ON state) when the active layer can move no closer to the slab waveguide (either in itself, or due to physical contact with the slab waveguide).
- The active layer in contact (or near contact) with a surface of the waveguide optically couples light out of the waveguide, thereby extracting light from the waveguide via frustrated total internal reflection (FTIR).
- FIG. 7 illustrates a simplified depiction of a flat panel display 700 comprised of a waveguide (i.e., light guidance substrate) 701 which may further include a flat panel matrix of pixels 702 . Behind the waveguide 701 and in a parallel relationship with waveguide 701 may be a transparent (e.g., glass, plastic, etc.) substrate 703 . It is noted that flat panel display 700 may include other elements than illustrated such as a light source, an opaque throat, an opaque backing layer, a reflector, and tubular lamps (or other light sources, such as LEDs, etc.).
- a principle of operation for any of the plurality of pixels distributed across the slab waveguide involves locally, selectively, and controllably frustrating the total internal reflection of light bound within the slab at each pixel location by positioning the active layer, into contact or near contact with a surface of the slab waveguide during the individual pixel's ON state (i.e., light emitting state).
- In the OFF state, the active layer is sufficiently displaced from the waveguide by a microscopic gap (e.g., an air gap) between the active layer and the surface of the waveguide such that direct light coupling and evanescent coupling across the gap are negligible.
- the deformable active layer may be a thin film sheet of polymeric material with a refractive index selected to optimize the coupling of light during the contact/near-contact events, which can occur at very high speeds in order to permit the generation of adequate gray scale levels for multiple primary colors at video frame rates in order to avoid excessive motional and color breakup artifacts while preserving smooth video generation.
- FIGS. 8A and 8B illustrate more detailed side views of one pixel 702 in OFF and ON states, respectively.
- FIG. 8A shows an isolated view of a pixel 800 , in an OFF state geometry, having an active layer 804 disposed in a spaced-apart relationship to a waveguide 803 by a microscopic gap 806 .
- Each pixel 800 may include a first conductor layer (not shown) in or on a waveguide 803 , and a second conductor layer (not shown) in or on the active layer 804 .
- The pixel 800 is switched to an ON state, as depicted in FIG. 8B, when the active layer 804 is electrostatically drawn across the gap 806 into contact or near contact with the waveguide 803.
- a plurality of collector-coupler features 903 may be disposed on a lower surface of the active layer 902 , as depicted by pixel 900 in OFF and ON states illustrated in FIGS. 9A and 9B , respectively.
- the collector-coupler features 903 interact with light waves 912 that approach the vicinity of the waveguide-active layer interface, increasing the probability of light waves to exit the waveguide and enter the active layer 902 and directing emitted light 910 towards a viewer.
- an opaque material 904 can be disposed between the collector-coupler features 903 .
- the opaque material 904 prevents light from entering the active layer 902 at undesired locations, improving overall contrast ratio of the display and mitigating pixel cross-talk.
- the opaque material 904 can substantially fill the interstitial area between the collector coupler features 903 of each pixel, or it can comprise a conformal coating of these features and the interstitial spaces between them.
- each collector-coupler 903 remains uncoated so that light can be coupled into the collector-coupler 903 .
- the opaque material 904 may be either a specific color (e.g., black) or reflective.
- Certain field sequential color displays, such as the one illustrated in FIG. 7 , exhibit undesirable visual artifacts under certain viewing conditions and video content.
- the cause of such harmful artifacts proceeds from relative motion of the observer's retina and the individual primary components of a given video frame during the successive transmission in time of each respective subframe primary component.
- the display of FIG. 7 serves as a pertinent example that will be used, with some modifications for the purpose of generalization, throughout this disclosure to illustrate the operative principles in question.
- FIG. 10 illustrates in accordance with an embodiment of the present disclosure the general phenomenon of color image breakup in FSC displays.
- the information being displayed on the display surface during a given video frame proceeds to the observer's retina 1009 as a series of collinear pulses 1001 and 1005 comprised of the respective consecutively-generated primary information constituting each video frame.
- Video frame information for a frame 1001 is composed of temporally separated primaries 1002 , 1003 , and 1004 , while another video frame 1005 (one frame prior in time to frame 1001 ) is likewise composed of temporally separated primaries 1006 , 1007 , and 1008 .
- the information contained as an array of pulse width modulated colored light for each primary color arrives at the retina 1009 to form an image.
- If the retina 1009 is stationary relative to the arriving primaries, the eye will merge the primaries and perceive a composite image without any color breakup.
- If, however, the retina 1009 is in rotational motion, then the phenomenon at the retina follows the pattern of video frame 1010 , where the individual primary components 1011 , 1012 , and 1013 fall on different parts of the retina, causing the color breakup artifact to be perceptible.
- FIGS. 11A and 11B illustrate the intended image depicted in FIG. 11A as compared to the actual perceived image depicted in FIG. 11B .
- the eye would merge the primary subframes to accurately form the composite image 1000 , which in this example is an image of a gray airplane.
- In FIG. 11B , the retina 1009 moves with respect to the consecutive primary colored images (i.e., primary subframes) comprising video frame 1010 , such that 1011 , 1012 , and 1013 (the primary components comprising the entire frame 1010 ) fall at different locations on retina 1009 , resulting in the perceived image 1102 , where the separate primary components 1103 , 1104 , and 1105 are perceived no longer as fully overlapping, but rather distributed across the field of view in a dissociated form, as shown.
- Recovery of the intended image 1101 is the goal of artifact suppression, whereby the splayed, dissociated image 1102 is reduced or suppressed by virtue of extirpation of the cause of such dissociation.
- Color display systems that utilize a temporally-based color generation method may require the means to mitigate, suppress, and control motional artifacts arising from the temporal decoupling of the primary color constituents comprising an image due to these constituents arriving at different points on the observer's retina due either to rotary motion of the eye or to the eye's tracking of a foreground object in the video image that is in motion relative to the perceived background of that image.
- the various embodiments of the present disclosure provide encoding methods to mitigate color breakup motional artifacts and/or reduce demands on pixel response speeds.
- The present invention may be implemented on a host of display systems (direct view or projection-based) that could be expected to use field sequential color encoding techniques, and would thus lead to improved image generation by system architectures based on the field sequential color paradigm.
- FIG. 1 illustrates the conventional breakdown of a single primary color into pulse width modulated constituents that stand in binary proportions one to another.
- the three tristimulus primary colored lights used in field sequential color displays are red, green, and blue, and displays using field sequential color use at least these three primaries for image generation.
- a conventional frame rate for video frames in such displays is 60 frames per second (60 fps) for all three colors, which means that one-third of this time period of 1/60th of a second is allocated to each of the tristimulus primaries: 1/180 th of a second for red, for green, and for blue, totaling 1/60 th of a second.
- FIG. 1 illustrates one of these primary colored lights (e.g., red) and its duration, referred to as a primary pulse duration 100 , and how it is further subdivided into smaller fractions.
- For 8-bit color, which provides 2^8 different intensities or gray scale levels for the primary color (256 gray scales), it is appropriate to subdivide the entire pulse of 1/180 th of a second into eight fractions referred to as bits 101 , 102 , 103 , 104 , 105 , 106 , 107 , 108 .
- The second bit 102 is a second subdivision lasting 1/2 as long as bit 101 (i.e., the first subdivision); the third bit 103 is a third subdivision lasting 1/4 as long as bit 101 ; the fourth bit 104 is a fourth subdivision lasting 1/8 as long as bit 101 ; the fifth bit 105 is a fifth subdivision lasting 1/16 as long as bit 101 ; the sixth bit 106 is a sixth subdivision lasting 1/32 as long as bit 101 ; the seventh bit 107 is a seventh subdivision lasting 1/64 as long as bit 101 ; and the eighth bit 108 is an eighth subdivision lasting 1/128 as long as bit 101 . Each bit 102 through 108 is thus a consecutive halving of the previous bit, wherein bit 108 represents 1/256 of the original primary pulse duration, also referred to as the aggregate primary pulse 100 .
- Any one of 256 different values of intensity based on pulse width modulation can be generated by appropriate actuation of the pixel to either an ON or OFF state during the eight time periods of bits 101 through 108 .
- the gray scale level being displayed for that pixel is zero (no intensity of the given primary), and if the pixel is ON (i.e., open) for all eight subdivisions, the intensity is maximized for the pixel being so actuated.
- the horizontal axis represents time passing from left to right, while the vertical axis represents the intensity of the illumination source (i.e., light source) feeding at least one (if not all) pixels being actuated according to the principles of display operation to enable field sequential color image generation.
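To make the binary weighting concrete, the following sketch computes the eight bit durations for one primary subframe and the fraction of the subframe a pixel is ON for a given gray level. It is an illustration only, assuming a 60 Hz full color frame rate (1/180 of a second per primary), 255 time units per subframe, and no blanking periods; the function names are not drawn from the disclosure.

```python
# Illustrative sketch (not from the patent text): binary-weighted PWM bit
# durations for one 8-bit primary subframe, assuming a 60 Hz full-color frame
# rate (1/180 s per primary), 255 time units per subframe, and no blanking.

BIT_DEPTH = 8
PRIMARY_PERIOD_S = 1.0 / 180.0           # one primary subframe
TOTAL_UNITS = 2 ** BIT_DEPTH - 1         # 255 time units

def bit_durations_s():
    """Durations of bits 101..108, most significant first."""
    unit = PRIMARY_PERIOD_S / TOTAL_UNITS
    return [unit * (1 << (BIT_DEPTH - 1 - i)) for i in range(BIT_DEPTH)]

def emitted_fraction(gray_level: int) -> float:
    """Fraction of the subframe a pixel is ON for a gray level of 0-255."""
    assert 0 <= gray_level < 2 ** BIT_DEPTH
    return sum(d for i, d in enumerate(bit_durations_s())
               if gray_level & (1 << (BIT_DEPTH - 1 - i))) / PRIMARY_PERIOD_S

if __name__ == "__main__":
    print([round(d * 1e6, 1) for d in bit_durations_s()])      # microseconds per bit
    print(emitted_fraction(255), emitted_fraction(128), emitted_fraction(0))
```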
- FIG. 2 illustrates how the three consecutive primaries (of 1/180 th second duration each) are arrayed sequentially, one after another, to form a full color video frame of 1/60 th of a second duration (i.e., 1/180 th of a second per color primary multiplied by three total color primaries being modulated) in a conventional FSC display.
- a first primary color that lasts for a primary pulse duration 100 a has an 8-bit gray scale decomposition as previously shown by the single primary color light that lasts for a primary pulse duration 100 in FIG. 1 .
- the first primary color represented by duration 100 a may be the color red, although the present invention need not be tied to any one of the six possible combinations of the three tristimulus primaries that provide illumination to the pixels (which are turned on or off (i.e., switched between ON and OFF states) according to the requirements of the color frame as encoded according to the sequence of bits for each primary shown in FIG. 2 ).
- the first primary pulse duration 100 a is followed by a second primary pulse duration 100 b that illuminates a different color (e.g., green) for an identical duration of 1/180 th of a second, and a third primary pulse duration 100 c that illuminates a different color (e.g., blue) for an identical duration of 1/180 th of a second.
- The consecutive sequential series of tristimulus primaries (i.e., three primary colored lights) and their possible binary subdivisions is referred to as an aggregate tristimulus pulse 200 .
- Since each primary color is subdivided into 8 bits (256 possible gray scales), the aggregate color capability for this example is 24-bit color (over 16.7 million possible colors based on the gray scales available for each of the three tristimulus primaries of red, green, and blue).
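For reference, the color depth arithmetic stated above works out as follows (a restatement of the figures already given, using standard notation):

$$2^{8} = 256 \ \text{gray scales per primary}, \qquad \left(2^{8}\right)^{3} = 2^{24} = 16{,}777{,}216 \approx 16.7 \ \text{million colors}.$$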
- Blanking periods may be inserted between each of the subdivisions to accommodate driver timing issues (e.g., to load the pixel data) and illumination transient response issues, so as to insure linearity of response. These possible blanking states (where the illumination means are briefly shut off at the boundaries between individual gray scale bits, or between consecutive primary colors being displayed) are omitted in the figures of the present disclosure; their inclusion is assumed as likely, without thereby being necessarily an intrinsic part of the present disclosure and its diverse methods for meeting the existing needs in the art to mitigate motional artifacts and/or relax pixel response time requirements.
- the methods of the present disclosure are applicable whether or not such blanking periods are present.
- Modulating an intensity of the illumination means of the system in tightly-coordinated conjunction with the temporal modulation of the pixel actuation sequence permits either (a) truncation of the aggregate pulse defining any given primary color full-length frame (all bits accounted for), such that by asynchronous distribution the entire series of tristimulus pulses would thereby be truncated to mitigate motional artifacts, or (b) distribution of the time gained (by using such synchronized intensity modulation within the illumination system as a surrogate for temporal duration for the most significant bits) among all bit durations comprising the total gray scale makeup of the full primary frame, so as to relax the transient response requirements upon the pixels at the heart of the display architecture.
- a combination of these two desirable effects is also possible under this embodiment.
- the first embodiment provides a method for combining intensity modulation with PWM to achieve aggregate pulse truncation to mitigate color motional breakup artifacts, as illustrated in FIG. 3B , or to relax pixel transient response requirements as illustrated in FIG. 3C .
- FIG. 3A again displays the fundamental pulse 100 as previously depicted in FIG. 1 , showing that the entire pulse of the primary color has an overall duration of 1/180 th of a second. It is possible, however, to shorten the amount of time that the primary color is displayed. Shortening of the display time for a given gray scale value, which serves to truncate the pulse, is a valuable tool for mitigating the effects of motional color breakup.
- the temporal decoupling of the constituent primaries comprising an image generated on a field sequential color display can be reduced when the various temporal components of the frame arrive closer together in time (due to aggregate pulse truncation).
- the means of shortening the display time without affecting the desired flux being displayed by any given pixel involves amplitude modulation of the intensity being displayed through a given pixel.
- If the intensity of the illumination is doubled during the most significant bit, the pulse width of that most significant bit can be halved without affecting its luminous flux.
- the most significant bit 101 in the non-intensity-modulated primary pulse shown in FIG. 3A becomes truncated in FIG. 3B as bit 301 .
- Bits 102 , 103 , 104 through 108 of FIG. 3A therefore become bits 302 , 303 , 304 through 308 of FIG. 3B .
- These latter seven bits (i.e., pulses) are unchanged in duration, while the first pulse of the full set of eight subdivisions (i.e., the most significant bit 101 of FIG. 3A ) becomes bit 301 of FIG. 3B , which exhibits half the temporal duration but double the intensity of the global illumination system feeding the pixels comprising the display.
- the area of bit 101 in FIG. 3A and the area of bit 301 in FIG. 3B are identical, and this area (representing the product of intensity and time) represents the perceived intensity of the pulse based on total light flux passing through the pixel(s) in question.
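Expressed as a worked equation (the symbols $I_0$ and $t_{101}$ are introduced here purely for illustration and do not appear in the original text), the flux of the intensity-modulated bit 301 equals that of the original bit 101:

$$\Phi = I \cdot t, \qquad \Phi_{301} = (2 I_{0}) \cdot \frac{t_{101}}{2} = I_{0}\, t_{101} = \Phi_{101}.$$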
- time savings 310 is equal to the duration of bit 302 (which is equivalent to the time duration of bit 102 in FIG. 3A ). This saved time 310 may be used to truncate the aggregate pulse.
- the overall pulse is shortened by the duration of time savings 310 .
- Truncation of the aggregate pulse for any one or more of the primary color full-length frames may be employed such that each of the tristimulus pulses would be thereby truncated to mitigate motional artifacts.
- FIG. 3C illustrates that the additional time provided by time savings 310 is not used to shorten the aggregate pulse and create dead space after 308 as displayed in FIG. 3B , but rather, the duration of the time savings 310 may be equally distributed among all eight binary subdivisions 311 through 318 . This may be appropriate where the speed of pixel response becomes a factor in displaying the least significant bits (shortest pulse durations) required to generate certain gray scale values.
- In this case, the primary color being displayed (e.g., red) is displayed for just as long as in FIG. 3A , but each of the eight subdivisions 311 through 318 is lengthened by an equal share of the time savings 310 , thereby relaxing the demands on pixel response speed.
- intensity modulation as shown in FIGS. 3B and 3C is not limited to simply doubling the intensity for the most significant bit 301 , 311 .
- This embodiment allows for any level of coordinated segmentation between controlled intensity modulation and pulse width modulation to secure the correct binary proportions of net flux reaching the observer's retinas while viewing the display being so enhanced.
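The two options can be sketched in code. This is an illustration only, not the patent's implementation: the intensity bookkeeping, the helper names, and the assumption that exactly the most significant bit is halved and doubled are choices made here for clarity. Option (a) keeps the saved time as a trailing blanking period (FIG. 3B), while option (b) spreads it equally over all eight bits with each bit's intensity rescaled so its flux is unchanged (FIG. 3C).

```python
# Illustrative sketch (assumptions: only the MSB is intensity-modulated, by a
# factor of two; helper names are invented). Option (a) truncates the primary
# pulse by the saved time (FIG. 3B); option (b) spreads the saved time equally
# over all eight bits, rescaling each intensity so its flux is unchanged (FIG. 3C).

def modulate_msb(bits):
    """bits: list of (relative_intensity, seconds), most significant first."""
    (msb_i, msb_t), rest = bits[0], bits[1:]
    saved = msb_t / 2.0
    return [(msb_i * 2.0, msb_t / 2.0)] + rest, saved

def redistribute(bits, saved):
    """Lengthen every bit by an equal share of `saved`, preserving each bit's flux."""
    share = saved / len(bits)
    return [(i * t / (t + share), t + share) for (i, t) in bits]

if __name__ == "__main__":
    unit = (1.0 / 180.0) / 255.0
    original = [(1.0, unit * (1 << k)) for k in range(7, -1, -1)]    # bits 101..108
    modulated, saved = modulate_msb(original)     # option (a): truncate by `saved`
    relaxed = redistribute(modulated, saved)      # option (b): relax pixel speed
    flux = lambda seq: sum(i * t for i, t in seq)
    print(flux(original), flux(modulated), flux(relaxed))   # equal (up to float rounding)
```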
- FIG. 3D again illustrates the entire three-primary frame of 1/60 th second duration, as previously depicted in FIG. 2 , capable of exhibiting an aggregate total color bit depth of 24 bits (i.e., 8 bits per primary multiplied by three tristimulus primary colors (denoted by lower case “a”, “b”, and “c”) being modulated sequentially in time).
- First primary pulse duration 100 a, second primary pulse duration 100 b, and third primary pulse duration 100 c would represent, for example, the respective red, green, and blue temporal subframes of 1/180 th second each duration comprising the entire full color frame 200 in FIG. 3D .
- The durations (i.e., pulse widths) of most significant bits 301 a, 301 b and 301 c are halved in duration, as compared to 101 a, 101 b, and 101 c depicted in FIG. 3D , so that the respective total primary pulse durations 300 a, 300 b and 300 c are individually truncated, and thus truncated in the aggregate by a duration referred to as a time savings 320 , as depicted in FIG. 3E .
- the time savings 320 is equal to three times the duration of pulse 302 a.
- This time savings 320 may be utilized as a blanking period during which black is displayed to reduce decoupling of the primary components as they fall on the observer's retina.
- a second embodiment of the present disclosure provides a method to create hard-wired front-loaded bit weighting to enhance perceived mitigation of motional artifacts using virtual aggregate pulse width truncation.
- The temporal rearrangement of the various bit weights comprising the total aggregate color gray scale to be produced by a given pixel during a full frame can be conducted along lines geared toward favoring the front-loading of the most significant bits 101 a, 101 b, and 101 c corresponding to each of the primary colored lights a, b, c, as compared to the contiguous per-primary arrangement previously described. As illustrated in FIG. 3F , bit-rearranging with non-contiguous primary processing may position the same bit weights for each of the three primary colors (a, b, c) contiguous with one another, thereby forming a first triplet 322 of all three most significant bits 101 a, 101 b, 101 c.
- the second most significant bits 102 a, 102 b, 102 c are similarly aggregated as a second triplet 324
- the third most significant bits 103 a, 103 b, 103 c are similarly aggregated as a third triplet 326
- the fourth most significant bits are similarly aggregated as a fourth triplet 328
- the fifth most significant bits are similarly aggregated as a fifth triplet 330
- the sixth most significant bits are similarly aggregated as a sixth triplet 332
- the seventh most significant bits are similarly aggregated as a seventh triplet 334
- and the eighth most significant bits (i.e., the least significant bits) are similarly aggregated as an eighth triplet 336 , the triplets representing each descending bit weight being encoded according to the binary weighting paradigm defined at the outset.
- a display system must be capable of displaying bits in such a sequence, including a noncontiguous primary color light a, b, c processing sequence (where noncontiguous would be defined as a sequence violating the standard red-green-blue or similar sequential illumination of the display system to provide light to the matrix of pixels on the display surface).
- All gray scales are generated solely by temporal means such as pulse width modulation of light flux by way of pixel actuation and deactuation.
- PWM is adequate to create and control digital gray scale values independently at each pixel for a suitably configured display system.
- the temporal rearrangement of the various bit weights comprising the total aggregate color gray scale to be produced by a given pixel during a full frame can be conducted along lines geared toward favoring the front-loading of the most significant bits.
- Whereas in FIG. 3E the most significant bits 301 a, 301 b, and 301 c are distributed in separate first, second, and third primary pulse durations 300 a, 300 b, and 300 c, respectively, in the embodiment illustrated in FIG. 3G the same bit weights are contiguous with one another, forming a first triplet 322 of all three most significant bits 301 a, 301 b, 301 c.
- the second most significant bits 302 a, 302 b, 302 c are similarly aggregated as a second triplet 324 , the third most significant bits representing a third triplet 326 , the fourth most significant bits representing a fourth triplet 328 , and so on such that the fifth, sixth, seventh and eighth most significant bits are rearranged into fifth, sixth, seventh, and eighth triplets 330 , 332 , 334 , 336 representing a triplet for each descending bit weight being encoded according to the binary weighting paradigm defined at the outset.
- To implement this method, a display system must be capable of displaying bits in such a sequence, including a noncontiguous primary color light processing sequence; doing so mitigates motional artifacts native to field sequential color display systems. In subsequent embodiments it will be noted that manipulation of such color sequences becomes an important adjunct to mitigating motional color artifacts, particularly where the transient response of the pixels is already near their performance limits.
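As a schematic illustration of this rearrangement (not the patent's implementation; the function and variable names are invented for the sketch), the following code builds the hard-wired ordering in which equal bit weights from the three primaries are grouped into contiguous triplets, most significant first:

```python
# Illustrative sketch (not from the patent): rearrange the per-primary bit
# sequences of one full color frame into contiguous triplets of equal bit
# weight, most significant first, as in the front-loaded ordering of FIG. 3F.

def front_loaded_triplets(red_bits, green_bits, blue_bits):
    """Each argument is a list of bit segments, most significant first.
    Returns a single interleaved sequence: the MSB triplet, then the next
    most significant triplet, and so on down to the LSB triplet."""
    sequence = []
    for r, g, b in zip(red_bits, green_bits, blue_bits):
        sequence.extend([r, g, b])       # e.g. 101a, 101b, 101c, then 102a, ...
    return sequence

if __name__ == "__main__":
    r = [f"10{k}a" for k in range(1, 9)]
    g = [f"10{k}b" for k in range(1, 9)]
    b = [f"10{k}c" for k in range(1, 9)]
    print(front_loaded_triplets(r, g, b)[:6])   # ['101a', '101b', '101c', '102a', ...]
```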
- bit-splitting is deployed to divide higher order bits (e.g., the most significant bit (MSB) 101 ), which have the longest temporal duration, into smaller subunits that total the duration of the original pulse of MSB 101 .
- the split bits may be distributed and interleaved for all three stimulus colors a, b, c, which are thus intermixed (unlike prior art bit-splitting which is limited to bit-splitting within a single primary due to the exigencies of color wheel operation native to the displays incorporating such techniques).
- FIG. 4A illustrates the four most significant bits 101 , 102 , 103 , and 104 of the original primary color binary weighting paradigm set forth in FIG. 1 .
- FIG. 4B illustrates how the larger, more significant bits (e.g., bits 101 , 102 , 103 illustrated in FIG. 4A ) can be subdivided.
- bits 101 , 102 , and 103 are all subdivided into multiples of the duration set for the fourth most significant bit 104 , as shown in FIG. 4B .
- the value of bit splitting as a method is not properly realized until the bits forming the single primary color gray scale are rearranged, such as illustrated in FIG. 4C , which attempts to provide an averaged weighting of each of the respective bit weights across the entire 1/180 th of a second for the single primary color pulse duration 100 .
- FIG. 4C illustrates the rearranged sequence, containing eight fractional durations 101 ′ comprising the most significant bit 101 , four fractional durations 102 ′ comprising the second most significant bit 102 , two fractional durations 103 ′ comprising the third most significant bit 103 , and only one duration 104 comprising the fourth most significant bit 104 .
- the lower order bits (bits 105 , 106 , 107 , and 108 ) are omitted for the sake of illustrative clarity, although they would arguably be handled in a similar fashion according to this method.
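A minimal sketch of the bit-splitting idea follows. It is illustrative only: the four highest order bits are expressed in units of the fourth bit's duration (FIG. 4B), and a simple round-robin placement, which is merely one possible "as even as possible" distribution and not necessarily the exact arrangement of FIG. 4C, spreads the resulting subunits across the subframe. The labels and helper names are assumptions.

```python
# Illustrative sketch (not from the patent): split the four highest order bits
# into subunits equal in duration to the fourth bit (FIG. 4B), then spread the
# subunits of each bit across the subframe (one possible even distribution,
# chosen here for illustration only, in the spirit of FIG. 4C).

def split_bits(weights=(8, 4, 2, 1)):
    """Subunit counts for bits 101..104, in units of the fourth bit 104 (FIG. 4B)."""
    return {f"bit10{i + 1}": w for i, w in enumerate(weights)}

def distribute(weights=(8, 4, 2, 1)):
    """Spread each bit's subunits across the subframe via round-robin placement."""
    slots = [[] for _ in range(max(weights))]
    for bit, count in enumerate(weights, start=1):
        step = len(slots) / count
        for n in range(count):
            slots[int(n * step)].append(f"10{bit}'")
    return [segment for slot in slots for segment in slot]

if __name__ == "__main__":
    print(split_bits())    # {'bit101': 8, 'bit102': 4, 'bit103': 2, 'bit104': 1}
    print(distribute())    # e.g. ["101'", "102'", "103'", "104'", "101'", ...]
```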
- Bit-splitting across all three tristimulus colors may be combined with the principle of intensity modulation of the illumination source disclosed in the first embodiment to secure superior optical performance while gaining either (a) additional aggregate pulse width truncation to further mitigate motional artifacts, (b) distribution of any time saved due to the addition of intensity modulation among all bits being temporally generated to reduce the demand on pixel speeds in the display architecture, or (c) a combination of both motional artifact mitigation and reduction of pixel response requirements.
- FIG. 5A reproduces the primary time savings already disclosed in FIG. 3E
- FIG. 5B (for the sake of clarity) omits the four lowest order bits to set forth a variation on an intensity-modulated method as applied to the four highest order bits, namely 301 a, 302 a, 303 a, and 304 a.
- the area of each of these respective bit weights represents the potential flux that can be generated by any given pixel in question (if the pixel were turned on for all such segments the flux would be actual).
- the respective primaries are still treated contiguously, such that 300 a represents one primary color light pulse being modulated by intensity and pulse width modulation, and 300 b and 300 c represent the other two primary colored lights.
- A fourth embodiment is a method that may include combining intensity modulation with PWM to create averaged primary weights across the aggregate full color three primary frame duration via noncontiguous primary colored light sequencing.
- the fourth embodiment of the present disclosure distributes conventional pulse-width-modulated temporal pulses so as to average out the relative bit weights over the entire tristimulus primary color sequence comprising a full color frame, thereby interleaving the various tristimulus primary color components to provide the best image quality as empirically determined by actual visual performance of such systems.
- the respective primary colored light durations 300 a, 300 b, and 300 c are contiguous with one another.
- In a display system capable of noncontiguous primary sequencing (for example, the display disclosed in U.S. Pat. No. 5,319,491, discussed above), the primary colored lights a, b, c may be interleaved as illustrated in FIG. 5C so as to provide an averaged potential intensity across the entire aggregate three-primary pulse duration of 1/60 th of a second. This differs from the effects of the bit-splitting-based averaging disclosed in FIG. 4C , which is limited to gray scale generation within a single primary color and does not involve the interleaving of bit weights across color boundaries such as is shown in FIG. 5C . FIG. 5C secures the benefits of the approach alluded to in FIG. 4C , except that FIG. 5C employs coordinated manipulation of illumination intensity to the display, the interleaving of the primary colors, and the implementation of noncontiguous sequencing to insure maximum averaging of light energy for any given primary over the entire three-primary duration of 1/60 th of a second, which was arbitrarily selected at the outset as a reasonably nominal frame rate for a field sequential color display system.
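The effect of such interleaving on how one primary's light is spread across the frame can be illustrated numerically. The sketch below is an assumption-laden toy model: it uses only the four highest order bits with relative durations 8:4:2:1, one particular interleaving, and a flux-weighted time centroid as a convenient (non-patent) measure of how well a primary's energy is averaged over the aggregate pulse.

```python
# Illustrative toy model (not from the patent): compare how one primary's
# potential flux is spread across the frame for a contiguous per-primary
# ordering versus an interleaved ordering in the spirit of FIG. 5C.

def centroid(segments, primary):
    """Flux-weighted mean time of one primary's segments, as a fraction of the frame."""
    t = num = den = 0.0
    for name, dur in segments:
        mid = t + dur / 2.0
        if name.endswith(primary):
            num += dur * mid
            den += dur
        t += dur
    return num / (den * t)

def per_primary(p, weights=(8, 4, 2, 1)):
    return [(f"30{k + 1}{p}", w) for k, w in enumerate(weights)]

if __name__ == "__main__":
    contiguous = per_primary("a") + per_primary("b") + per_primary("c")
    interleaved = [seg for triple in zip(per_primary("a"), per_primary("b"), per_primary("c"))
                   for seg in triple]
    # Interleaving moves primary "a"'s centroid from ~0.17 toward the frame middle (~0.37).
    print(round(centroid(contiguous, "a"), 2), round(centroid(interleaved, "a"), 2))
```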
- a fifth embodiment is a method that may include combining intensity modulation with PWM to create hard-wired front-loaded bit weighting to further enhance perceived mitigation of motional artifacts using virtual aggregate pulse width truncation.
- the fifth embodiment of the present disclosure distributes conventional pulse-width-modulated temporal pulses so as to front-load all the most significant bits relative to the specific single video frame being generated, with the lower order bits contributing less information to the image being generated being relegated to the end of the aggregate tristimulus colored light pulse comprising the entire color frame.
- By front-loading (or back-loading, which would be optically equivalent to the human eye: the benefit arises from grouping the most important pulses of all colors as closely together as possible) the most important bits, the image undergoes virtual truncation due to such temporal rearrangement of the pulses comprising it.
- This embodiment hardwires the bit sequence from the most significant bits of all three primary colored lights being encoded first, followed by the next most significant bits of all three primary colored lights, and so on down to the least significant bits.
- Such virtual truncation can serve to mitigate motional breakup artifacts without, in fact, truly truncating the pulse in time, by means of such synchronous hardwired weighting of the respective bits to be displayed.
- The bulk of the information comprising the important features of the frame is displayed first, and the less important features are displayed later in time (and, being likely to represent lower image value portions of the display, will be more difficult to resolve at video speed, causing undesirable artifacts to be less noticeable as a result).
- In FIG. 5B , the unmodified method for implementing multiple intensity levels in conjunction with pulse width modulation is shown with the lower order bits omitted for the sake of visual and explanatory clarity.
- this fifth embodiment already enjoys some actual pulse width truncation as described for earlier embodiments where such intensity modulation is taught herein.
- there are additional performance enhancements available that may be gained by introducing a new method to add virtual pulse width truncation.
- Virtual pulse width truncation involves the principles of visual perception, and exploits the fact that by rearranging the bits in an appropriate way, the bulk of the visual data comprising the important elements of a video frame will be displayed close together in time, forming a tighter-packed ensemble of bit weights, while the lower order bit weights are postposited behind the more significant bits and, being lower order bits, are (in principle) less visible, tending to cause whatever artifacts may arise to be reduced in visual magnitude.
- FIG. 5D illustrates the hard-wired interleaving of primaries to achieve the desired virtual pulse width modulation. Note that the highest order bits referred to as the most significant bits 301 a, 301 b, and 301 c for the three primary colored lights a, b, c are displayed first as a first ensemble 501 comprising bits 301 a, 301 b, and 301 c.
- the second most significant bits 302 a, 302 b, 302 c are displayed second as a second ensemble 502 , followed by a descending series of bit orders (i.e., third most significant bits 303 a, 303 b, 303 c are displayed third as a third ensemble 503 , followed by the fourth most significant bits 304 a, 304 b, 304 c displayed fourth as a fourth ensemble 504 ).
- This sequence is termed hard-wired since it invariably displays the bit planes comprising the respective gray scale decomposition of any given video frame according to this precise sequence.
- This sequence need not be synchronous (tied irrevocably to a clock) if some bit weights are not included in the program content, a circumstance which could lead to further temporal truncation.
- the bit weight order taught in FIG. 5D will tend to insure that the phenomenon of virtual pulse width truncation will occur for a majority of video frames being processed by the display system.
- One advantage of this approach is that it requires no real time analysis of video content, by virtue of being hard-wired as a method for encoding the data. Therefore, this fifth embodiment enjoys not only the benefits of actual aggregate pulse width truncation, it adds the benefits of virtual pulse width truncation to further reduce the appearance of possible visual artifacts related to field sequential color display system operation.
- This embodiment of the present disclosure may also include any ordering of the bits within each of the ensembles 501, 502, 503, etc., shown in FIG. 5D, whether the flux is generated by a combination of intensity and pulse width modulation, solely by pulse width modulation, or solely by intensity modulation; for example, in ensemble 501 the bits may be in any order, such as 301 b, 301 c, 301 a. It should also be noted that the precise position within the 1/60th overall frame being displayed for the most significant bits is not important, for example, while FIG.
- 5D suggests that the heaviest weighting (i.e., most significant bits) occurs at the left, early in the aggregate pulse for all the bit weights to be displayed, the most significant bits could just as easily have been situated at the right, and have thus been back-loaded, with identical generation of the virtual pulse width truncation effect. Distribution of the heaviest weighting around the center of the 1/60th of a second frame period would be of equal potential value.
- The embodiment covers all such variations of the loading sequence; the point to be grasped for this method is an explicit concentration of the most significant bits as close in time as possible, regardless of where within the frame pulse said concentration is to occur.
- That FIG. 5D illustrates this concentration at the front of the pulse, at the left, is not material to the embodiment but only represents one possible (and convenient) approach. Wherever the concentration of the highest bits is positioned, it may be hard-wired to always fall in that position insofar as this embodiment is concerned.
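- By way of illustration only, a minimal sketch of such a hard-wired front-loaded ordering is given below (in Python; the names and the 8-bit, three-primary assumptions are illustrative and are not part of the original disclosure):

```python
# Illustrative sketch of the hard-wired front-loaded ordering of the fifth
# embodiment: ensembles of equal-weight bits across all primaries, with the
# most significant ensemble (501) first and the least significant last.

PRIMARIES = ("a", "b", "c")      # e.g., red, green, blue
BIT_DEPTH = 8                    # 8 bits per primary (24-bit color)

def hardwired_frontloaded_sequence():
    """Return the fixed display order as (primary, bit_weight) pairs,
    where bit_weight 7 is the most significant bit and 0 the least."""
    sequence = []
    for weight in range(BIT_DEPTH - 1, -1, -1):    # MSB ensemble first
        for primary in PRIMARIES:                  # order within an ensemble is arbitrary
            sequence.append((primary, weight))
    return sequence

if __name__ == "__main__":
    for slot, (primary, weight) in enumerate(hardwired_frontloaded_sequence()):
        print(f"slot {slot:2d}: primary {primary}, bit weight 2^{weight}")
```

- Because the order is fixed, no per-frame analysis is required; the same sequence is applied to every frame, which is the defining property of this embodiment.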
- a sixth embodiment is a method that may include combining intensity modulation with PWM to create a real-time dynamically-determined front-loaded bit weighting to further enhance perceived mitigation of motional artifacts using virtual aggregate pulse width truncation.
- the sixth embodiment of the present disclosure may apply the same general virtual truncation as the fifth embodiment does, and thereby also serves to mitigate motional breakup artifacts without, in fact, truly truncating the pulse in time.
- This sixth embodiment is dependent upon real-time evaluation of each frame being generated, because the distribution and weighting of the bits is no longer hardwired and synchronously fixed, but is determined in real time video frame by video frame.
- the rearrangement of the bits to be displayed in this sixth embodiment must be calculated prior to generation of the frame for actual display to the viewer, which will likely require look-ahead capability in the video cache to preprocess each frame in regard to the required real-time analysis process.
- the sixth embodiment dynamically adjusts the bit sequence in light of the exigencies of each frame's program content, thereby insuring that the correct bits are truly weighted to the front of the aggregate pulse being encoded across all primary colored lights, which cannot always be guaranteed in the case of the hardwired fifth embodiment.
- the fifth embodiment represents a very useful approximation and provides virtual aggregate pulse width truncation for about 75% or more of the information being displayed on a video system so equipped. But, for the remaining 25% of program content, the fifth embodiment may provide no apparent advantage or could even theoretically serve to extend rather than truncate the virtual perception of the pulse, therefore aggravating rather than mitigating motional artifacts during the display of the affected video frames.
- This sixth embodiment discloses an alternative method for determining the best order in which to display the bit weights comprising a video frame.
- Real-time video analysis software that is capable of calculating the visually significant gray scale levels in a full color frame at full operational speed can supply a display system with the means to re-order the bit weights on a frame-by-frame basis to insure that the bit weights that truly represent the most visually significant bits (not merely the bits that represent the largest arithmetic values) are displayed first. In most cases, these values are likely to match that for the hard-wired front-loaded bit weight encoding method disclosed in the fifth embodiment above, but any given frame may well deviate from this standard, as suggested in FIG.
- 5E, which gives an arbitrary example. Implementation of this sixth embodiment would likely entail the customization of the bit weight sequence for every consecutive frame of video being encoded by the display driver circuitry, to provide both maximum actual aggregate pulse width truncation across all colors and maximum virtual pulse width truncation due to front-loading of the most visually significant bits, determined on a frame-by-frame basis by the system performing the video analysis and providing the driver circuitry the correct information to re-arrange the interleaving of the various primaries, their intensities, and their pulse widths to insure the desired result as elaborated above.
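- As one hypothetical illustration of such frame-by-frame re-ordering, the sketch below ranks bit planes by the total flux they contribute to the current frame; this metric is an assumption made here for illustration, not the analysis method fixed by the disclosure:

```python
import numpy as np

def dynamic_bit_order(frame_rgb8):
    """Rank the 24 bit planes of an 8-bit-per-primary frame by the flux they
    contribute, so the most visually significant planes can be displayed first.

    frame_rgb8: uint8 array of shape (height, width, 3).
    Returns a list of (primary_index, bit_index) pairs, bit_index 7 = MSB."""
    scores = {}
    for primary in range(3):
        channel = frame_rgb8[:, :, primary]
        for bit in range(8):
            population = int(((channel >> bit) & 1).sum())    # pixels with this bit set
            scores[(primary, bit)] = population * (1 << bit)  # population x bit weight
    return sorted(scores, key=scores.get, reverse=True)
```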
- a seventh embodiment is a method that may include the real-time quantizing/posterizing of foreground objects moving relative to their background in a video frame sequence to gain actual and virtual aggregate pulse width truncation and concomitant mitigation of motional artifacts associated with FSC displays.
- the seventh embodiment of the present disclosure may extend a method of the fifth and sixth embodiments to a more elaborate level to achieve further gains in the realm of motional artifact mitigation in field sequential color displays.
- This embodiment is a variant of the previous method and includes determination (by intelligent real-time frame analysis software) of which portions of the image represent foreground objects in sufficiently rapid motion against the background being displayed to warrant the imposition of artifact mitigation methods.
- Motional artifacts are known to aggregate around such foreground objects at their borders in the direction of motion across the display.
- the method calls for the identified part of the frame that represents the moving foreground object to be converted, via an intelligent modulo method, into a reduced bit-depth image with the bits comprising the object being targeted for front-loading for the frame in question.
- an airplane against a blue sky would be identified as a foreground moving object by the video analysis software.
- Both airplane and blue sky are originally encoded as 24-bit color (8 bits per tristimulus primary).
- The revised frame in this method would re-encode the airplane portion of the frame at a reduced bit depth (e.g., 12-bit color, or 4 bits per tristimulus primary), and those moving-object-related bits for all relevant tristimulus primary colors would be front-loaded and transduced first, before processing the remaining 12 bits which define the sky's greater bit depth. Because the foreground object is in sufficiently rapid motion, the loss of bit depth is not apparent to the eye during its transit across the display. This real-time dynamic bit-depth adjusting method is invoked only as needed, based on program content, to mitigate and/or prevent the onset of motional color artifacts.
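- A minimal sketch of this per-region bit-depth reduction follows, assuming the motion analysis step has already produced a boolean mask of the moving object; the uniform quantization used here is only a stand-in for whatever intelligent modulo method an implementation would choose:

```python
import numpy as np

def posterize_region(frame_rgb8, moving_mask, bits_per_primary=4):
    """Re-encode only the masked (moving) region at a reduced bit depth,
    e.g. 4 bits per primary (12-bit color), leaving the background at full depth.

    frame_rgb8: uint8 array (H, W, 3); moving_mask: bool array (H, W)."""
    levels = 1 << bits_per_primary                        # 16 levels per primary for 4 bits
    step = 256 // levels
    quantized = (frame_rgb8 // step) * step + step // 2   # midpoint of each quantization band
    out = frame_rgb8.copy()
    out[moving_mask] = quantized[moving_mask]
    return out
```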
- This seventh embodiment is schematically illustrated in a flow chart depicted in FIG. 6 , which provides a method for preprocessing video data to be encoded using field sequential color pulse width modulation techniques.
- the mitigation of motional artifacts may be applied to specific regions of a video frame, those regions most likely to be affected by artifacts by virtue of relative motion between that region (which thus represents a moving foreground object) and the background. Motional artifacts tend to occur at the borders of the foreground object and appear as color smearing and decoupling in the direction of apparent motion of the object.
- the principles that inhere in this method may also be adapted to situations where motional breakup is caused by relative motion of the display screen with the viewer's head (such as might occur with an avionics display when the pilot's cockpit is significantly vibrating). In that instance, a desired preprocessing effect may be imposed on the entire display if vibration is detected by an appropriate sensor. In general use, however, the preprocessing effect under this method is applied to objects in the video frame sequence that are intelligently determined, via real-time video analysis, to be moving at sufficient speeds relative to their background as to cause a significant risk for motional artifact generation and the resulting temporal decoupling of the edges of the object being tracked by the observer's eyes.
- this embodiment is based on selective posterization (or quantization) of the moving region(s), if any, in the video stream being analyzed.
- Posterization involves a deliberate reduction in the number of gray scales involved in depicting an image or a portion of an image.
- One way to induce such truncation is to reconfigure the moving object in regard to the bit weights comprising it, so that it may be represented by four or five bits rather than all eight bits of each primary color.
- incoming video signal 601 is received in the video cache 602 that is used to feed data in appropriate increments frame-by-frame to the real-time motion analysis subsystem 603 , a system such as was alluded to above in regard to the prior art in the field of real-time image analysis applied to video streams.
- Several frames are processed at once by the analyzer 603 to determine the status, in temporal context, of the first frame being processed.
- the system determines 604 whether or not there is any relative motion between one or more foreground objects and the apparent background, and whether or not the motion is sufficiently rapid as to be likely to cause motional color breakup artifacts.
- the defined region(s) of the video frame that are detected to be in rapid relative motion are resampled/posterized/quantized 605 so as to intelligently reduce the number of gray scales defining the region(s) detected to be in motion (and thus representing moving objects against the displayed background in the video frame).
- the bit sequence for encoding may also be adjusted in real-time if the principles of the sixth embodiment are being followed in the device, otherwise not. If there was no detected relative motion, then the video frame is not posterized or resampled but is passed on as-is to the primary encoding engine 606 .
- the parameters for the video adjustment are handled at the analyzer 605 and any encoding-specific adjustments (if the sixth embodiment is also implemented) are passed as determinative parameters to the encoding engine 606 which then determines the order of the bits to be displayed across all three primary colors.
- the video frame is displayed at 607 ; the video cache is queried 608 to determine if it is empty. If it is, information display would naturally cease 610 ; if there are more frames to process, then the system increments 609 such that sufficient video frames are present in the cache 602 to insure that the real time motion analysis system 603 can continue to correctly determine whether the frames in context exhibit sufficient relative motion within the program content to trigger the posterization step 605 .
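- In outline, the control flow of FIG. 6 can be summarized as follows (a sketch only; the callables passed in are hypothetical stand-ins for the cache 602, the analyzer 603/604, the posterizer 605, the encoding engine 606, and the display 607, none of which are defined by the disclosure):

```python
def process_video(video_cache, analyze_motion, posterize_region, plan_bit_order, display):
    """Illustrative control flow mirroring FIG. 6 (elements 602-610)."""
    while not video_cache.empty():                         # 608: frames remaining?
        window = video_cache.look_ahead()                  # 602: contextual look-ahead frames
        moving_mask = analyze_motion(window)               # 603/604: rapid relative motion?
        frame = video_cache.next_frame()
        if moving_mask is not None:
            frame = posterize_region(frame, moving_mask)   # 605: reduce gray scales in region
        display(frame, plan_bit_order(frame))              # 606/607: encode and drive display
    # 610: information display ceases once the cache is exhausted
```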
- the posterization step may involve any number of rounding methods to secure the desired result.
- One example of such a method is disclosed in "Adaptive color quantization using the baker's transformation," by Montagne et al., Journal of Electronic Imaging, April-June 2006, Vol. 15(2), 023015-1ff; however, there are a host of methods for securing useful results.
- the purpose of the posterization step is to provide further gains in artifact mitigation by reducing the system overhead required for generating the full gray scale bit depth for the moving object. Instead of, for example, the object (which may be an aircraft being displayed against a blue sky) being displayed in 24-bit color, it may be displayed in 12-bit color if it is moving rapidly against the sky in the video sequence.
- the combination of the sixth and seventh embodiments herein will provide maximum actual and virtual truncation of the aggregate color pulses that generate these images.
- the methods outlined here are not limited to the three tristimulus primaries but may be applied to extended gamut scenarios where additional visible primary colored light(s) are used to encode video data.
- The methods outlined here may also be applied to the interleaving of nonvisible primaries (e.g., infrared light) with visible primaries as well, and are not restricted in any way to the three well-known tristimulus primaries (e.g., red-green-blue).
- the same methods for intensity modulation coordinated with pulse width modulation and the other methods disclosed apply to all such display systems.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Control Of Indicators Other Than Cathode Ray Tubes (AREA)
- Liquid Crystal Display Device Control (AREA)
Abstract
The optical performance of display systems that use field sequential color and pulse width modulation to generate color and color gray scale values is enhanced. Such enhancement may be achieved by various data encoding methods disclosed herein that may include temporal redistribution of bit values to mitigate color motional artifacts associated with field sequential color-based display systems, selective combination of intensity modulation, pulse width modulation, and/or the noncontiguous sequencing of primary colors. There is further an intelligent real-time dynamic manipulation of gray scale values in portions of an image that are computationally determined to be images of objects moving against a global background, so as to temporally front-load or concentrate the bits comprising such moving objects and thereby further mitigate said motional artifacts using both actual and virtual aggregate pulse truncation across all primary colors being modulated.
Description
- This application is a continuation of U.S. application Ser. No. 13/772,457 filed on Feb. 21, 2013, which is a continuation of U.S. application Ser. No. 12/564,894 filed on Sep. 22, 2009, which claims priority to U.S. Provisional Patent Application No. 61/098,931 filed on Sep. 22, 2008, the entire disclosures of which are herein incorporated by reference.
- The present application relates to the field of field sequential color display systems, and more particularly to methods of encoding data to control the data input to individual pixels of an array of pixels and/or to control the data input to illumination light sources for enhancing the visual performance of field sequential color displays, whether in a direct-view display system or a projection-based display system.
- This section is intended to introduce the reader to various aspects of art that may be related to aspects of the present technique, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present technique. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
- Data encoding methods or algorithms are utilized in electronic video displays, particularly with respect to flat panel display systems to selectively control the bursts of locally transmitted primary light emitted from individual pixels disposed across the display surface. One example for the application of such encoding algorithms is a direct-view flat panel display system that uses sequentially-pulsed bursts of red, green, and blue colored light (i.e., primary colored light) emanating from the display surface to create a sequence of primary color images, also referred to as primary color subframes, that integrated together form a full color image or frame by the temporal mixing of emitted primary light that is being directed from the display surface to a viewer. A term commonly used to define this technique is called field sequential color (FSC). The human eye (i.e., human visual system) of the viewer effectively integrates the pulsed light from a light source to form the perception of a level of light intensity of each primary color (i.e., primary subframe).
- In another aspect, the gray scale level generated at each pixel location on the display surface is proportional to the percentage of time the pixel is ON during the primary color subframe time, tcolor. The frame rates at which this occurs are high enough to create the illusion of a continuous stable image, rather than a flickering one (i.e., a noticeable series of primary color subframes). During each primary color's determinant time period, tcolor, the shade of that primary color emitted by an individual pixel is controlled by encoding data that selectively controls the appropriate fraction of tcolor (i.e., amount of time) that the individual pixel is open during the time period tcolor. A term commonly used to define this technique is called pulse width modulation (PWM). For example, producing 24-bit encoded color requires 256 (0-255) shades defined for each primary color. If one pixel requires a 50% shade of red, then that pixel will be assigned with shade 128 (128/256=0.5) and stay on for 50% of tcolor. This form of data encoding assumes a constant magnitude light intensity from the light source that is modulated (i.e., pulse width modulated) across the display screen by the selective opening and closing of individual pixels. Moreover, it achieves gray scales by subdividing tcolor into fractional temporal components. An individual pixel that is open refers to the pixel in an ON state (i.e., light emitting), whereas an individual pixel that is closed refers to the pixel in an OFF state (i.e., not light emitting). By making an array of pixels on a display emit, or transmit, light in a properly pulsed manner (i.e., controllably switched between ON and OFF states), one can create a full-color FSC display.
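- A small worked example of this relationship between shade value, ON time, and bit planes is sketched below (illustrative names only; an 8-bit primary and a 1/180th of a second subframe are assumed, matching the example above):

```python
T_COLOR = 1.0 / 180.0                     # one primary color subframe, in seconds

def pwm_plan(shade, bit_depth=8):
    """Return the total ON time within t_color and the bit planes that are ON
    for an integer gray shade (0..255 for 8-bit color)."""
    slots = 1 << bit_depth                # 256 temporal slots per subframe
    on_time = T_COLOR * shade / slots     # shade 128 -> 50% of t_color
    on_bits = [b for b in range(bit_depth) if (shade >> b) & 1]   # index 7 = MSB
    return on_time, on_bits

time_on, bits = pwm_plan(128)
# time_on == T_COLOR / 2 and bits == [7]: a 50% shade keeps the pixel ON
# only during the most significant bit of that primary's subframe.
```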
- Various strategies for adjusting the color generation method for field sequential color-based display systems are geared either to the avoidance of solarization or posterization (linearity errors in creating a true uniformly sloped monotonic gray scale relationship between input and optical output of a display at any given point) and motional color breakup artifacts related to the temporal decoupling of the various primary frames comprising an image when they arrive at the retina, such that an object noticeably decomposes into its constituent primary components since those components no longer properly overlap as a consequence of relative retina-object motion during the viewing of the display. However, these various strategies induce engineering compromises since the response time of various pixel architectures may be either marginal or inadequate to generate both adequate gray scale and incorporate the artifact mitigation strategies proposed to correct for the kind of imaging errors just described. The larger a display system, or the higher its pixel density, the greater this gap between response requirements and minimal performance to deploy motional color breakup mitigation strategies becomes. However, the display industry continues to evolve toward larger, higher-density display systems because of growing industrial, military, and commercial needs in regard to information display. Because these issues have yet to be satisfactorily addressed in regard to motional color breakup in particular, the prior art has fallen short in respect to presenting a workable solution to this ongoing problem in display technology.
- The problems outlined above may at least in part be solved in some embodiments of the methods described herein. The following presents a simplified summary of the disclosed subject matter in order to provide a basic understanding of some aspects of the disclosed subject matter. This summary is not an exhaustive overview of the disclosed subject matter. It is not intended to identify key or critical elements of the disclosed subject matter or to delineate the scope of the disclosed subject matter. Indeed, the disclosed subject matter may encompass a variety of aspects that may not be set forth below. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is discussed later.
- The embodiments of the present disclosure provide various data encoding methods that reduce motional color breakup native to field sequential color displays. Some of the embodiments also provide a means to reduce demands on pixel response speeds. A first embodiment of the present disclosure is a method including modulating an intensity of the illumination means of the display system in tightly-coordinated conjunction with the temporal modulation of the pixel actuation sequence. A second embodiment of the present disclosure is a method including hard-wired front-loaded bit weighting to enhance perceived mitigation of motional artifacts using virtual aggregate pulse width truncation. A third embodiment is a method including bit-splitting to divide higher order bits (such as the most significant bit (MSB)) which have the longest temporal duration, into smaller subunits that may be distributed and interleaved (i.e., intermixed) for all three stimulus colors. Bit-splitting across all three tristimulus colored lights may be combined with the principle of intensity modulation of the illumination source disclosed in the first embodiment. A fourth embodiment is a method including distributing pulse-width-modulated temporal pulses so as to average out the relative bit weights over the entire tristimulus sequence comprising a full color frame, thereby interleaving the various tristimulus components to provide the best image quality as empirically determined by actual visual performance of such systems. A fifth embodiment is a method including distributing pulse-width-modulated temporal pulses so as to front-load all the most significant bits relative to the specific single video frame being generated. This embodiment hardwires the bit sequence from the most significant bits of all three primaries being encoded first, followed by the next most significant bits of all three primaries, and so on down to the least significant bits. A sixth embodiment is a method including real-time evaluation of each video frame being generated to determine the particular rearrangement of the bits to be displayed. In contrast to the hardwired bit sequence of the fifth embodiment, the sixth embodiment dynamically adjusts the bit sequence in light of the exigencies of each frame's program content, thereby insuring that the correct bits are truly weighted to the front of the aggregate pulse being encoded across all primaries. A seventh embodiment is a method including determining in a received plurality of video frames that an object to be displayed in a foreground of a video image is in motion relative to a background of the video image, and modifying a gray scale of the video frames associated with the object to be displayed in the foreground of the video image.
- Various features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying figures in which like characters represent like parts throughout the figures, wherein:
-
FIG. 1 illustrates a temporal breakdown of binary-weighted consecutive order bits for a single primary color according to conventional pulse width modulation methods to secure gray scales at a bit depth of 8, representing 256 different brightness levels from black to white, generated within 1/180 of a second, with blackout (blanking) periods between each bit level omitted; -
FIG. 2 illustrates a concatenation of three consecutive 8-bit gray-scale units of 1/180th of a second in duration such as shown in FIG. 1, with the first block representing one tristimulus color (e.g., red), the second block representing the second tristimulus color (e.g., green), and the third block representing the third tristimulus color (e.g., blue), such that the entire temporal period shown of 1/60th of a second duration represents the switching template required to generate full color images on a display at a total color bit depth of 24 (approximately 16.7 million colors); -
FIGS. 3A-3G compare the original temporal subdivision of a single encoded primary color depicted in FIG. 3A (previously shown in FIG. 1) with: FIG. 3B, which illustrates an intensity-modulated equivalent that elects to truncate the total duration of the primary color pulse by postpositing the accrued time savings; FIG. 3C, which illustrates an alternate intensity-modulated equivalent that elects to redistribute the time saved over all relevant bits to reduce addressing overhead times intrinsic to the high speed operation of field sequential color-based display systems; FIG. 3D, which illustrates the aggregate pulse encoding for all three tristimulus primaries (as previously shown in FIG. 2); FIG. 3E, which illustrates an intensity-modulated variation that extends the principle of FIG. 3B to all three primaries in the full color encoded packet; FIG. 3F, which illustrates actual noncontiguous interleaving of the respective primary bits weighted such that the higher order bits appear earlier in the aggregate full color sequence of pulses; and FIG. 3G, which illustrates the conflating of intensity modulation and actual noncontiguous interleaving of the respective primary bits weighted such that the higher order bits appear earlier in the aggregate full color sequence of pulses; -
FIG. 4A illustrates a subset of the pulse width modulated durations for a single primary color shown in FIG. 1, namely, the four highest order bits, omitting the four lowest order bits for illustrative purposes; -
FIG. 4B illustrates the splitting of the durations for a single primary color shown in FIG. 4A into fractional subdivisions bearing the same duration as the smallest illustrated bit width, without any rearrangement of the bits in terms of sequence; -
FIG. 4C illustrates the rearrangement of the split bits for a single primary color previously shown without rearrangement in FIG. 4B such that the higher order bits are distributed as evenly as possible over the entire duration of the primary color generation time; -
FIG. 5A illustrates the full color cycle for all three primary colors as formerly shown in FIG. 3E with intensity modulation applied to the sequence, with only the four highest order bits being annotated in the figure; -
FIG. 5B illustrates the mixing of several levels of intensity modulation and pulse width modulation as applied only to the four highest order bits for the entire color cycle comprised of all three primary colors; -
FIG. 5C illustrates the interleaving of the various intensities and pulse widths shown in FIG. 5B so as to best average out the intensity for any given primary color across the entire aggregate pulse duration for the entire three-primary video frame being displayed; -
FIG. 5D illustrates the hard-wired interleaving of the various intensities and pulse widths shown in FIG. 5B so as to force all the highest order bits, regardless of program content, to be displayed at the beginning of the aggregate pulse comprising the video frame for all three primary colors; -
FIG. 5E illustrates one possible result due to real-time analysis of each video frame to determine which bits are most important based on image content, therefore making the frame-to-frame sequence of the intensities and pulse widths to be dynamically variable based on the image content of the frame being displayed; -
FIG. 6 sets forth a method for dynamically altering the gray scale depth for regions of the video display that are contextually shown, according to real-time analysis of consecutive frames within the display system's video cache, to represent foreground objects in relative motion against the perceived background, such that if a detectable threshold for motional artifact generation is crossed by such motion, the moving foreground object is posterized and/or quantized to permit maximum front-loading of image data during the full three-primary color pulse displaying the frames in question; -
FIG. 7 illustrates a perspective view of a direct view flat panel display suitable for implementation of the present invention; -
FIG. 8A is a side view schematic of a single pixel in an OFF state; -
FIG. 8B illustrates the pixel shown in FIG. 8A in an ON state; -
FIG. 9A is a side view schematic of a single pixel in an OFF state, wherein the pixel has collector-coupler features; -
FIG. 9B illustrates the pixel shown in FIG. 9A in an ON state; -
FIG. 10 illustrates what causes the phenomenon of color image breakup when an observer views an image generated using field sequential color generation techniques during rotational motion of the observer's eye; -
FIG. 11A illustrates the perceived image that is desired irrespective of eye rotation and/or other motion in accordance with an embodiment of the present invention; and -
FIG. 11B illustrates the phenomenon of color image breakup by depicting a perceived image due to eye rotation and/or other motion. - Among the technologies (flat panel display or other candidate technologies that exploit the principle of field sequential color generation) that lend themselves to implementation of the present disclosure is the flat panel display disclosed in U.S. Pat. No. 5,319,491, which is hereby incorporated herein by reference in its entirety. The use of a representative flat panel display example throughout this detailed description shall not be construed to limit the applicability of the present invention to that field of use, but is intended for illustrative purposes as touching the matter of deployment of the present invention. Furthermore, the use of the three tristimulus primary colors (red, green, and blue) throughout the remainder of this detailed description is likewise intended for illustrative purposes, and shall not be construed to limit the applicability of the present invention to these primary colors solely, whether as to their number or color or other attribute.
- One possible display technology to be enhanced (without thereby restricting the range of applicability of the present invention) may be the current iteration of the display technology originally disclosed in U.S. Pat. No. 5,319,491, wherein pixels emit light using the principle of frustrated total internal reflection within a display architecture that leverages principles of field sequential color generation and pulse width modulated gray scale creation. In that display system, light is edge-injected into a planar slab waveguide and undergoes total internal reflection (TIR) within the guide, trapping the light inside it. Edge-injected light may comprise any number of consecutive primary colored lights, for example three primary colored lights (also referred to as tristimulus primaries), synchronously clocked to a desirable global frame rate (e.g., 60 Hz, requiring a 180 Hz rate to accommodate each of the three primaries utilized therein, namely, red, green, and blue). Pixels are electrostatically controlled MEMS structures that propel a thin film layer, hereinafter termed the “active layer”, which is controllably deformable across a microscopic gap (measuring between 300 to 1000 nanometers) into contact or near-contact with the waveguide, at which point light transits across from the waveguide to the active layer either by direct contact propagation and/or by way of evanescent coupling. In other words, application of an appropriate electrical potential across the gap, with conductors associated with the slab waveguide and the active layer to be propelled/deformed, causes the high-speed motion of the active layer toward the slab waveguide; actuation is deemed completed (i.e., ON state) when the active layer can move no closer to the slab waveguide (either in itself, or due to physical contact with the slab waveguide). The active layer in contact (or near contact) with a surface of the waveguide optically couples light out of the waveguide thereby extracting light from the waveguide via frustration of total internal reflected light (FTIR). The FTIR light extracted by the active layer may be directed towards a viewer as emitted light at that pixel location.
- The flat panel display is thus comprised of a plurality of pixels, each pixel representing a discrete subsection of the display that can be individually and selectively controlled in respect to locally propelling the active layer bearing a suitable refractive index across a microscopic gap between ON and OFF positions thereby switching the individual pixel between ON and OFF states.
FIG. 7 illustrates a simplified depiction of a flat panel display 700 comprised of a waveguide (i.e., light guidance substrate) 701 which may further include a flat panel matrix of pixels 702. Behind the waveguide 701 and in a parallel relationship with waveguide 701 may be a transparent (e.g., glass, plastic, etc.) substrate 703. It is noted that flat panel display 700 may include other elements than illustrated such as a light source, an opaque throat, an opaque backing layer, a reflector, and tubular lamps (or other light sources, such as LEDs, etc.). - A principle of operation for any of the plurality of pixels distributed across the slab waveguide involves locally, selectively, and controllably frustrating the total internal reflection of light bound within the slab at each pixel location by positioning the active layer into contact or near contact with a surface of the slab waveguide during the individual pixel's ON state (i.e., light emitting state). To switch the individual pixel to its OFF state (i.e., light is not emitted at that pixel location), the active layer is sufficiently displaced from the waveguide by a microscopic gap (e.g., an air gap) between the active layer and the surface of the waveguide such that light coupling and evanescent coupling across the gap is negligible. The deformable active layer may be a thin film sheet of polymeric material with a refractive index selected to optimize the coupling of light during the contact/near-contact events, which can occur at very high speeds in order to permit the generation of adequate gray scale levels for multiple primary colors at video frame rates in order to avoid excessive motional and color breakup artifacts while preserving smooth video generation.
- For example,
FIGS. 8A and 8B illustrate a more detailed side view of one pixel 702 in OFF and ON states, respectively. FIG. 8A shows an isolated view of a pixel 800, in an OFF state geometry, having an active layer 804 disposed in a spaced-apart relationship to a waveguide 803 by a microscopic gap 806. Each pixel 800 may include a first conductor layer (not shown) in or on a waveguide 803, and a second conductor layer (not shown) in or on the active layer 804. The pixel 800 is switched to an ON state, as depicted in FIG. 8B, by applying a sufficient electrical potential difference between the first and second conductor layers that causes the active layer 804 to deform and move towards a surface of the waveguide such that the active layer couples light out of the waveguide as illustrated by emitted light ray 808. However, there may be a certain amount of light loss due to the presence of reflected light rays. Such light loss may be reduced by providing collector-coupler features on the active layer 902, as depicted by pixel 900 in OFF and ON states illustrated in FIGS. 9A and 9B, respectively. When the pixel 900 is in the ON state (FIG. 9B), the collector-coupler features 903 interact with light waves 912 that approach the vicinity of the waveguide-active layer interface, increasing the probability of light waves to exit the waveguide and enter the active layer 902 and directing emitted light 910 towards a viewer. Since light is coupled out of the waveguide by the collector-coupler features 903, an opaque material 904 can be disposed between the collector-coupler features 903. The opaque material 904 prevents light from entering the active layer 902 at undesired locations, improving overall contrast ratio of the display and mitigating pixel cross-talk. The opaque material 904 can substantially fill the interstitial area between the collector coupler features 903 of each pixel, or it can comprise a conformal coating of these features and the interstitial spaces between them. The aperture 908 of each collector-coupler 903 remains uncoated so that light can be coupled into the collector-coupler 903. Depending on the desired use of the display, the opaque material 904 may be either a specific color (e.g., black) or reflective.
- As stated previously, certain field sequential color displays, such as the one illustrated in
FIG. 7 , exhibit undesirable visual artifacts under certain viewing conditions and video content. The cause of such harmful artifacts proceeds from relative motion of the observer's retina and the individual primary components of a given video frame during the successive transmission in time of each respective subframe primary component. The display ofFIG. 7 serves as a pertinent example that will be used, with some modifications for the purpose of generalization, throughout this disclosure to illustrate the operative principles in question. It should be understood that this example is provided for illustrative purposes as a member of a class of valid candidate applications and implementations, and that any device, comprised of any system exploiting the principles that inhere in field sequential color generation, can be enhanced with respect to artifact reduction or suppression where said artifacts stem from the primary components comprising a video frame falling on different geometric regions of the observer's retina due to relative motion of retina and display. -
FIG. 10 illustrates, in accordance with an embodiment of the present disclosure, the general phenomenon of color image breakup in FSC displays. The information being displayed on the display surface during a given video frame proceeds to the observer's retina 1009 as a series of collinear pulses. The video frame 1001 is composed of temporally separated primaries, and these primaries arrive in succession at the retina 1009 to form an image. If the primary subcomponents all fall on the same region of the retina, the eye integrates them into a single full color image; if, however, the retina 1009 is in rotational motion, then the phenomenon at the retina follows the pattern of video frame 1010, where the individual primary components fall at different locations on the retina. -
FIGS. 11A and 11B illustrate the intended image depicted in FIG. 11A as compared to the actual perceived image depicted in FIG. 11B. For example, if the primary components comprising video frame 1010 all arrived at the same location on the retina, the eye would merge the primary subframes to accurately form the composite image 1000, which in this example is an image of a gray airplane. However, if the eye is in rotational motion, retina 1009 moves with respect to the consecutive primary colored images (i.e., primary subframes) comprising video frame 1010, such that 1011, 1012, and 1013 (the primary components comprising the entire frame 1010) fall at different locations on retina 1009, resulting in the perceived image 1102, where the separate primary components are visibly splayed apart. The accurately merged image 1101 is the goal of artifact suppression, whereby the splayed, dissociated image 1102 is reduced or suppressed by virtue of extirpation of the cause of such dissociation.
- To provide a better understanding of the encoding methods of the present disclosure,
FIG. 1 illustrates the conventional breakdown of a single primary color into pulse width modulated constituents that stand in binary proportions one to another. The three tristimulus primary colored lights used in field sequential color displays are red, green, and blue, and displays using field sequential color use at least these three primaries for image generation. A conventional frame rate for video frames in such displays is 60 frames per second (60 fps) for all three colors, which means that one-third of this time period of 1/60th of a second is allocated to each of the tristimulus primaries: 1/180th of a second for red, for green, and for blue, totaling 1/60th of a second. FIG. 1 illustrates one of these primary colored lights (e.g., red) and its duration, referred to as a primary pulse duration 100, and how it is further subdivided into smaller fractions. For 8-bit color, which provides 2⁸ different intensities or gray scale levels for the primary color (256 gray scales), it is appropriate to subdivide the entire pulse of 1/180th second into eight fractions referred to as bits 101 through 108. The first bit 101 is the longest subdivision (the most significant bit); the second bit 102 is a second subdivision lasting ½ the primary pulse of bit 101 (i.e., the first subdivision), the third bit 103 is a third subdivision lasting ¼ the primary pulse of bit 101, the fourth bit 104 is a fourth subdivision lasting ⅛ the primary pulse of bit 101, the fifth bit 105 is a fifth subdivision lasting 1/16 the primary pulse of bit 101, the sixth bit 106 is a sixth subdivision lasting 1/32 the primary pulse of bit 101, the seventh bit 107 is a seventh subdivision lasting 1/64 the primary pulse of bit 101, and the eighth bit 108 is an eighth subdivision lasting 1/128 the primary pulse of bit 101. In other words, the duration of each bit 102 through 108 is a consecutive halving of the previous bit, wherein bit 108 represents 1/256 of the original primary pulse duration, also referred to as an aggregate primary pulse 100. Any one of 256 different values of intensity based on pulse width modulation (the amount of time light is allowed to pass through any given pixel being independently modulated according to this schema) can be generated by appropriate actuation of the pixel to either an ON or OFF state during the eight time periods of bits 101 through 108. If a given pixel is OFF for all eight time periods, the gray scale level being displayed for that pixel is zero (no intensity of the given primary), and if the pixel is ON (i.e., open) for all eight subdivisions, the intensity is maximized for the pixel being so actuated. By setting the temporal subdivisions in consecutive 2-to-1 binary relationships, the encoding of digital information in temporal form to generate gray scale values via PWM is made particularly efficient from the standpoint of drive electronics exigencies, especially as compared to subdividing the 1/180th aggregate full pulse 100 into 256 evenly-divided subdivisions, which requires 32 times as many addressing cycles as is taught in FIG. 1. The horizontal axis represents time passing from left to right, while the vertical axis represents the intensity of the illumination source (i.e., light source) feeding at least one (if not all) pixels being actuated according to the principles of display operation to enable field sequential color image generation.
FIG. 2 illustrates how the three consecutive primaries (of 1/180th second duration each) are arrayed sequentially, one after another, to form a full color video frame of 1/60th of a second duration (i.e., 1/180th of a second per color primary multiplied by three total color primaries being modulated) in a conventional FSC display. A first primary color that lasts for a primary pulse duration 100 a has an 8-bit gray scale decomposition as previously shown by the single primary color light that lasts for a primary pulse duration 100 in FIG. 1. The first primary color represented by duration 100 a may be the color red, although the present invention need not be tied to any one of the six possible combinations of the three tristimulus primaries that provide illumination to the pixels (which are turned on or off (i.e., switched between ON and OFF states) according to the requirements of the color frame as encoded according to the sequence of bits for each primary shown in FIG. 2). The first primary pulse duration 100 a is followed by a second primary pulse duration 100 b that illuminates a different color (e.g., green) for an identical duration of 1/180th of a second, and a third primary pulse duration 100 c that illuminates a different color (e.g., blue) for an identical duration of 1/180th of a second. The consecutive sequential series of tristimulus primaries (i.e., three primary colored lights) and their possible binary subdivisions is referred to as an aggregate tristimulus pulse 200. Because in the example shown each primary color is subdivided into 8 bits (256 possible gray scales), the aggregate color capability for this example is 24-bit color (over 16.7 million possible colors based on the gray scales available for each of the three tristimulus primaries of red, green, and blue).
FIGS. 1 and 2 , there is no attempt to modulate the intensity of the illumination sources feeding the display system. All gray scales are generated solely by temporal considerations: the pulse width modulation of light flux by way of pixel actuation and deactuation is adequate to create and control digital gray scale values independently at each pixel for a suitably configured display system. In another aspect, it should be noted that blanking periods may be inserted between each of the subdivisions to accommodate driver timing issues (e.g., to load the pixel data) and illumination transient response issues to insure linearity of response, but these possible blanking states (where the illumination means are briefly shut off at the boundaries between individual gray scale bits, or between consecutive primary colors being displayed) are omitted in the figures of the present disclosure since their possible inclusion is assumed as likely, without thereby being necessarily an intrinsic part of the present disclosure and its diverse methods for meeting the existing needs in the art to mitigate motional artifacts and/or relax pixel response time requirements. The methods of the present disclosure are applicable whether or not such blanking periods are present. - In a first embodiment, modulating an intensity of the illumination means of the system in tightly-coordinated conjunction with the temporal modulation of the pixel actuation sequence permits either (a) truncation of the aggregate pulse defining any given primary color full-length frame (all bits accounted for), such that by asynchronous distribution the entire series of tristimulus pulses would be thereby truncated to mitigate motional artifacts, or (b) said sequence permits distribution of the time gained (by using such synchronized intensity modulation within the illumination system as a surrogate for temporal duration for the most significant bits) amongst all bit durations comprising the total gray scale makeup of the full primary frame so as to relax the transient response requirements upon the pixels at the heart of the display architecture. Further, a combination of these two desirable effects (temporal truncation and reduced demands upon pixel speed) is also possible under this embodiment.
- In particular, the first embodiment provides a method for combining intensity modulation with PWM to achieve aggregate pulse truncation to mitigate color motional breakup artifacts, as illustrated in
FIG. 3B , or to relax pixel transient response requirements as illustrated inFIG. 3C . For comparison purposes,FIG. 3A again displays thefundamental pulse 100 as previously depicted inFIG. 1 , showing that the entire pulse of the primary color has an overall duration of 1/180th of a second. It is possible, however, to shorten the amount of time that the primary color is displayed. Shortening of the display time for a given gray scale value, which serves to truncate the pulse, is a valuable tool for mitigating the effects of motional color breakup. The temporal decoupling of the constituent primaries comprising an image generated on a field sequential color display can be reduced when the various temporal components of the frame arrive closer together in time (due to aggregate pulse truncation). The means of shortening the display time without affecting the desired flux being displayed by any given pixel involves amplitude modulation of the intensity being displayed through a given pixel. By setting up the FSC system so that the global illumination means operates at, for example, double its normal intensity during the most significant bit being displayed, the pulse width of that most significant bit can be halved without affecting its luminous flux. The mostsignificant bit 101 in the non-intensity-modulated primary pulse shown inFIG. 3A becomes truncated inFIG. 3B asbit 301.Bits FIG. 3A therefore becomebits FIG. 3B . These latter seven bits (i.e., pulses) are unchanged; however, the first pulse of the full set of eight subdivisions, i.e., the mostsignificant bit 101 ofFIG. 3A , now becomesbit 301 ofFIG. 3B that exhibits half the temporal duration but double the intensity of the global illumination system feeding the pixels comprising the display. The area ofbit 101 inFIG. 3A and the area ofbit 301 inFIG. 3B are identical, and this area (representing the product of intensity and time) represents the perceived intensity of the pulse based on total light flux passing through the pixel(s) in question. Note, however, that with the halving of the duration of bit 101 (such that the temporal of duration ofbit 301 is now equal to that ofbit 302, rather than twice the duration ofbit 302 as would have been required had intensity modulation not been integrated with pulse width modulation), there is extra time available for the overall primary pulse to be encoded, this extra time is referred to as atime savings 310. In this example, thetime savings 310 is equal to the duration of bit 302 (which is equivalent to the time duration ofbit 102 inFIG. 3A ). This savedtime 310 may be used to truncate the aggregate pulse. InFIG. 3B , the overall pulse is shortened by the duration oftime savings 310. Truncation of the aggregate pulse for any one or more of the primary color full-length frames (e.g., each of the primary pulses of light; red-green-blue) may be employed such that each of the tristimulus pulses would be thereby truncated to mitigate motional artifacts. - Alternatively,
FIG. 3C illustrates that the additional time provided bytime savings 310 is not used to shorten the aggregate pulse and create dead space after 308 as displayed inFIG. 3B , but rather, the duration of thetime savings 310 may be equally distributed among all eightbinary subdivisions 311 through 318. This may be appropriate where the speed of pixel response becomes a factor in displaying the least significant bits (shortest pulse durations) required to generate certain gray scale values. In the case ofFIG. 3C , the primary color being displayed (e.g., red) is just as long as was displayed inFIG. 3A , but the timing demands on the pixels are relaxed because intensity modulation of the mostsignificant bit 311 has bought the system some extra safe operating area in regard to temporal demands upon the pixel shuttering mechanisms native to the display architecture in question. Whereas displays that do not have the temporal demands, the dynamic range of the light output will increase due to the resulting increase of overall pixel intensity. - It should be noted that intensity modulation as shown in
FIGS. 3B and 3C is not limited to simply doubling the intensity for the mostsignificant bit significant bit significant bit FIG. 3B ) or further reduce high speed actuation requirements on the pixels (as implemented inFIG. 3C ). This embodiment allows for any level of coordinated segmentation between controlled intensity modulation and pulse width modulation to secure the correct binary proportions of net flux reaching the observer's retinas while viewing the display being so enhanced. - For comparison purposes,
FIG. 3D again illustrates the entire three-primary frame of 1/60th second duration, as previously depicted inFIG. 2 , capable of exhibiting an aggregate total color bit depth of 24 bits (i.e., 8 bits per primary multiplied by three tristimulus primary colors (denoted by lower case “a”, “b”, and “c”) being modulated sequentially in time. First primary pulse duration 100 a, second primary pulse duration 100 b, and third primary pulse duration 100 c would represent, for example, the respective red, green, and blue temporal subframes of 1/180th second each duration comprising the entirefull color frame 200 inFIG. 3D . When intensity modulation is applied to all three primary colored lights a, b, c comprising the full color frame, as inFIG. 3E , the durations (i.e., pulse widths) of most significant bits 301 a, 301 b and 301 c are halved in duration, as compared to 101 a, 101 b, and 101 c depicted inFIG. 3D , so that the respective total primary pulse durations 300 a, 300 b and 300 c are individually truncated and thus truncated in the aggregate, with the time savings by a duration referred to as atime savings 320 depicted inFIG. 3E . As illustrated in this example, thetime savings 320 is equal to three times the duration of pulse 302 a. Thistime savings 320 may be utilized as a blanking period during which black is displayed to reduce decoupling of the primary components as they fall on the observer's retina. - A second embodiment of the present disclosure provides a method to create hard-wired front-loaded bit weighting to enhance perceived mitigation of motional artifacts using virtual aggregate pulse width truncation. As illustrated in
FIG. 3F , the temporal rearrangement of the various bit weights comprising the total aggregate color gray scale to be produced by a given pixel during a full frame can be conducted along lines geared toward favoring the front-loading of the most significant bits 101 a, 101 b, and 101 c corresponding to each of the primary colored lights a, b, c, as compared to the embodiment previously described with respect toFIG. 3D wherein the most significant bits 101 a, 101 b, and 101 c are distributed in separate first, second, and third primary pulse durations 100 a, 100 b, and 100 c respectively. Combining bit-rearranging with non-contiguous primary processing, as illustrated inFIG. 3F , may position the same bit weights for each of the three primary colors (a, b, c), contiguous with one another thereby forming afirst triplet 322 of all three most significant bits 101 a, 101 b, 101 c. Likewise, the second most significant bits 102 a, 102 b, 102 c are similarly aggregated as asecond triplet 324, the third most significant bits 103 a, 103 b, 103 c are similarly aggregated as athird triplet 326, the fourth most significant bits are similarly aggregated as afourth triplet 328, the fifth most significant bits are similarly aggregated as afifth triplet 330, the sixth most significant bits are similarly aggregated as asixth triplet 332, the seventh most significant bits are similarly aggregated as a seventh triplet 34, and the eighth most significant bits (i.e., least significant bits) are similarly aggregated as aneighth triplet 336, representing each descending bit weight being encoded according to the binary weighting paradigm defined at the outset. A display system must be capable of displaying bits in such a sequence, including a noncontiguous primary color light a, b, c processing sequence (where noncontiguous would be defined as a sequence violating the standard red-green-blue or similar sequential illumination of the display system to provide light to the matrix of pixels on the display surface). In this embodiment, there is no attempt to modulate the intensity of the illumination sources feeding the display system. All gray scales are generated solely by temporal means such as pulse width modulation of light flux by way of pixel actuation and deactuation. PWM is adequate to create and control digital gray scale values independently at each pixel for a suitably configured display system. - As shown in
- As shown in FIG. 3G as compared to FIG. 3E, the temporal rearrangement of the various bit weights comprising the total aggregate color gray scale to be produced by a given pixel during a full frame can be conducted along lines geared toward favoring the front-loading of the most significant bits. Whereas in FIG. 3E the most significant bits 301 a, 301 b, and 301 c are distributed in separate first, second, and third primary pulse durations 300 a, 300 b, and 300 c, respectively, in the embodiment illustrated in FIG. 3G the same bit weights are contiguous with one another, forming a first triplet 322 of all three most significant bits 301 a, 301 b, 301 c. The second most significant bits 302 a, 302 b, 302 c are similarly aggregated as a second triplet 324, the third most significant bits representing a third triplet 326, the fourth most significant bits representing a fourth triplet 328, and so on, such that the fifth, sixth, seventh, and eighth most significant bits are rearranged into fifth, sixth, seventh, and eighth triplets 330, 332, 334, and 336, respectively. - In a third embodiment, bit-splitting is deployed to divide higher order bits (e.g., the most significant bit (MSB) 101 ), which have the longest temporal duration, into smaller subunits that total the duration of the original pulse of
MSB 101. The split bits may be distributed and interleaved for all three tristimulus colors a, b, c, which are thus intermixed (unlike prior art bit-splitting, which is limited to bit-splitting within a single primary due to the exigencies of color wheel operation native to the displays incorporating such techniques). FIG. 4A illustrates the four most significant bits 101, 102, 103, and 104 of FIG. 1. FIG. 4B illustrates how the larger, more significant bits (e.g., bits 101, 102, and 103 of FIG. 4A) can be subdivided. In this example, bits 101, 102, and 103 are split into fractional durations each equal to the duration of the fourth most significant bit 104, as shown in FIG. 4B. The value of bit splitting as a method is not properly realized until the bits forming the single primary color gray scale are rearranged, such as illustrated in FIG. 4C, which attempts to provide an averaged weighting of each of the respective bit weights across the entire 1/180th of a second for the single primary color pulse duration 100. In FIGS. 4B and 4C, there are eight fractional durations 101′ comprising the most significant bit 101, four fractional durations 102′ comprising the second most significant bit 102, two fractional durations 103′ comprising the third most significant bit 103, and only one duration 104 comprising the fourth most significant bit 104. The lower order bits (bits 105 through 108 of FIG. 1) are omitted from FIGS. 4A-4C for clarity.
- In this third embodiment, bit-splitting across all three tristimulus colors may be combined with the principle of intensity modulation of the illumination source disclosed in the first embodiment to secure superior optical performance while gaining (a) additional aggregate pulse width truncation to further mitigate motional artifacts, (b) distribution of any time saved due to the addition of intensity modulation among all bits being temporally generated, to reduce the demand on pixel speeds in the display architecture, or (c) a combination of both motional artifact mitigation and reduction of pixel response requirements.
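The splitting and interleaving within a single primary described above can be sketched as follows; the round-robin interleave is an assumption used for illustration, not the exact slot layout of FIG. 4C:

```python
# Illustrative sketch: split the higher-order bits of one primary into fragments
# equal in length to the fourth most significant bit, then interleave the
# fragments across the 1/180 s subframe so each bit's energy is spread out.
SUBFRAME_S = 1 / 180
UNIT = SUBFRAME_S * 16 / 255        # duration of the 4th MSB (weight 16 of 255)

# bit label -> number of unit-length fragments (weights 128, 64, 32, 16)
fragments = {"b1": 8, "b2": 4, "b3": 2, "b4": 1}

remaining = dict(fragments)
slots = []
while any(remaining.values()):
    for bit in fragments:            # emit one fragment of each unfinished bit
        if remaining[bit]:
            slots.append(bit)
            remaining[bit] -= 1

# 8 + 4 + 2 + 1 = 15 unit slots cover the four highest-order bits of the primary.
assert len(slots) == 15
assert abs(len(slots) * UNIT - SUBFRAME_S * 240 / 255) < 1e-12
```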
- An embodiment of the present invention teaches the coordinated modulation of light source intensity with pulse width modulation to facilitate aggregate pulse width truncation.
FIG. 5A reproduces the primary time savings already disclosed in FIG. 3E, while FIG. 5B (for the sake of clarity) omits the four lowest order bits to set forth a variation on an intensity-modulated method as applied to the four highest order bits, namely 301 a, 302 a, 303 a, and 304 a. Note that the area of each of these respective bit weights represents the potential flux that can be generated by any given pixel in question (if the pixel were turned on for all such segments, the flux would be actual). In FIG. 5B, note that the respective primaries are still treated contiguously, such that 300 a represents one primary color light pulse being modulated by both intensity and pulse width modulation, and 300 b and 300 c represent the other two primary colored lights.
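The relationship between source intensity and pulse width can be captured in a few lines; the doubled-intensity figure is an assumption carried over from the earlier discussion, not a required value:

```python
# Illustrative sketch: a bit's gray-scale contribution is the area
# intensity x duration, so a bit can be emitted at higher source intensity
# for a proportionally shorter time without changing its weight.
SUBFRAME_S = 1 / 180

def slot(weight: int, intensity: float = 1.0) -> tuple[float, float]:
    """Return (duration, intensity) producing weight/255 of full-subframe flux."""
    duration = (weight / 255) * SUBFRAME_S / intensity
    return duration, intensity

msb_plain = slot(128)                 # ~2.79 ms at nominal intensity
msb_boosted = slot(128, intensity=2)  # same flux in ~1.39 ms at doubled intensity
time_saved_per_primary = msb_plain[0] - msb_boosted[0]
```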
- A fourth embodiment is a method that may include combining intensity modulation with PWM to create averaged primary weights across the aggregate full color three-primary frame duration via noncontiguous primary colored light sequencing. The fourth embodiment of the present disclosure distributes conventional pulse-width-modulated temporal pulses so as to average out the relative bit weights over the entire tristimulus primary color sequence comprising a full color frame, thereby interleaving the various tristimulus primary color components to provide the best image quality as empirically determined by actual visual performance of such systems. As shown in FIG. 5B, the respective primary colored light durations 300 a, 300 b, and 300 c are contiguous with one another. However, provided a display system (for example, the display disclosed in U.S. Pat. No. 5,319,491) is capable of noncontiguous primary color light sequencing, further possibilities for modifying and/or improving the visual quality of the video images being transduced by the display system can be marshaled. In this embodiment, the primary colored lights a, b, c may be interleaved as illustrated in FIG. 5C so as to provide average potential intensity across the entire aggregate three-primary pulse duration of 1/60th of a second. This differs from the effects of the bit-splitting-based averaging disclosed in FIG. 4C, which is limited to gray scale generation within a single primary color and does not involve the interleaving of bit weights across color boundaries such as is shown in FIG. 5C, where it should be noted that the suffixes refer to the three primaries (the most common association would be a for red, b for green, and c for blue, so that 302 b would represent the second most significant bit for the primary color green). Note that the ordering of the bits in FIG. 5C no longer follows the conventional sequence of red-green-blue, and so does not lend itself to displays that are tied to sequential processing of the respective primaries (such as obtains in the case of color-wheel-based illumination systems used in projection-based displays). In this instance, FIG. 5C secures the benefits of the approach alluded to in FIG. 4C, except that FIG. 5C employs coordinated manipulation of the illumination intensity to the display, the interleaving of the primary colors, and the implementation of noncontiguous sequencing to insure maximum averaging of light energy for any given primary over the entire three-primary duration of 1/60th of a second, which was arbitrarily selected at the outset as a reasonably nominal frame rate for a field sequential color display system.
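One possible noncontiguous ordering that spreads each primary's energy across the whole frame is sketched below; the rotation heuristic is an assumption for illustration and is not the specific sequence drawn in FIG. 5C:

```python
# Illustrative sketch: alternate the primaries slot by slot and rotate the
# starting primary each bit plane, so no primary is bunched into a single
# contiguous 1/180 s subframe (noncontiguous sequencing in the FIG. 5C spirit).
BITS = 4  # higher-order bits only, as in the figure discussion

schedule = []
for k in range(BITS):                                  # bit plane 0 = MSB
    order = "abc"[k % 3:] + "abc"[:k % 3]              # rotate a/b/c each plane
    schedule.extend((p, k) for p in order)

# e.g. [('a', 0), ('b', 0), ('c', 0), ('b', 1), ('c', 1), ('a', 1), ...]
```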
- A fifth embodiment is a method that may include combining intensity modulation with PWM to create hard-wired front-loaded bit weighting to further enhance perceived mitigation of motional artifacts using virtual aggregate pulse width truncation. The fifth embodiment of the present disclosure distributes conventional pulse-width-modulated temporal pulses so as to front-load all the most significant bits relative to the specific single video frame being generated, with the lower order bits, which contribute less information to the image being generated, relegated to the end of the aggregate tristimulus colored light pulse comprising the entire color frame. By front-loading (or back-loading, which would be optically equivalent to the human eye: the benefit arises from grouping the most important pulses of all colors as closely as possible) the most important bits, the image undergoes virtual truncation due to such temporal rearrangement of the pulses comprising it. This embodiment hardwires the bit sequence so that the most significant bits of all three primary colored lights are encoded first, followed by the next most significant bits of all three primary colored lights, and so on down to the least significant bits. Such virtual truncation can serve to mitigate motional breakup artifacts without, in fact, truly truncating the pulse in time, by means of such synchronous hardwired weighting of the respective bits to be displayed. The bulk of the information comprising the important features of the frame is displayed first, and the less important features are displayed later in time (and are likely to represent lower image value portions of the display and thus will be more difficult to resolve at video speed, causing undesirable artifacts to be less noticeable as a result).
- As referenced in
FIG. 5B, the unmodified method for implementing multiple intensity levels in conjunction with pulse width modulation is shown with the lower order bits omitted for the sake of visual and explanatory clarity. By implementing intensity modulation, this fifth embodiment already enjoys some actual pulse width truncation, as described for the earlier embodiments where such intensity modulation is taught herein. However, additional performance enhancements may be gained by introducing a new method to add virtual pulse width truncation. Virtual pulse width truncation involves the principles of visual perception, and exploits the fact that by rearranging the bits in an appropriate way, the bulk of the visual data comprising the important elements of a video frame will be displayed close together in time, forming a tighter-packed ensemble of bit weights, while the lower order bit weights are positioned after the more significant bits and, being lower order bits, are (in principle) less visible, tending to cause whatever artifacts may arise to be reduced in visual magnitude. -
FIG. 5D illustrates the hard-wired interleaving of primaries to achieve the desired virtual pulse width truncation. Note that the highest order bits, referred to as the most significant bits 301 a, 301 b, and 301 c for the three primary colored lights a, b, c, are displayed first as a first ensemble 501 comprising bits 301 a, 301 b, and 301 c. Thereafter, the second most significant bits 302 a, 302 b, 302 c are displayed second as a second ensemble 502, followed by a descending series of bit orders (i.e., the third most significant bits 303 a, 303 b, 303 c are displayed third as a third ensemble 503, followed by the fourth most significant bits 304 a, 304 b, 304 c displayed fourth as a fourth ensemble 504). This sequence is termed hard-wired since it invariably displays the bit planes comprising the respective gray scale decomposition of any given video frame according to this precise sequence. This sequence need not be synchronous (tied irrevocably to a clock) if some bit weights are not included in the program content, a circumstance which could lead to further temporal truncation. The bit weight order taught in FIG. 5D will tend to insure that the phenomenon of virtual pulse width truncation will occur for a majority of video frames being processed by the display system. One advantage of this approach is that it requires no real-time analysis of video content, by virtue of being hard-wired as a method for encoding the data. Therefore, this fifth embodiment enjoys not only the benefits of actual aggregate pulse width truncation but also the benefits of virtual pulse width truncation to further reduce the appearance of possible visual artifacts related to field sequential color display system operation. Note, too, that the method illustrated in FIG. 5D can be applied even if intensity modulation is not being applied, as previously illustrated in FIG. 3F. This embodiment of the present disclosure may also include any ordering of the bits within each of the ensembles 501, 502, 503, 504 illustrated in FIG. 5D, whether the flux is generated by a combination of intensity and pulse width modulation, solely by pulse width modulation, or solely by intensity modulation; for example, in ensemble 501 the bits may be in any order, such as 301 b, 301 c, 301 a. It should also be noted that the precise position within the 1/60th second overall frame being displayed for the most significant bits is not important; for example, while FIG. 5D suggests that the heaviest weighting (i.e., most significant bits) occurs at the left, early in the aggregate pulse for all the bit weights to be displayed, the most significant bits could just as easily have been situated at the right, and thus been back-loaded, with identical generation of the virtual pulse width truncation effect. Distribution of the heaviest weighting around the center of the 1/60th of a second frame period would be of equal potential value. The embodiment covers all such variations of the loading sequence, the point to be grasped for this method being an explicit concentration of the most significant bits as close in time as possible, regardless of where within the frame pulse said concentration is to occur. The fact that FIG. 5D illustrates this concentration at the front of the pulse, at the left, is not material to the embodiment but only represents one possible (and convenient) approach. Wherever the concentration of the highest bits is positioned, it may be hard-wired to always fall in that position insofar as this embodiment is concerned.
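A minimal expression of the hard-wired ensemble order of FIG. 5D, with the within-ensemble primary order left as a free (assumed) parameter, might look like this:

```python
# Illustrative sketch: the display always emits ensemble 501 (all three MSBs),
# then 502, 503, 504; the primary order inside an ensemble is a free choice.
BITS = 4

def hardwired_sequence(within_ensemble_order: str = "abc") -> list[tuple[str, int]]:
    """Flatten the fixed ensemble order; no per-frame content analysis needed."""
    return [(p, k) for k in range(BITS) for p in within_ensemble_order]

# hardwired_sequence("bca")[:3] -> [('b', 0), ('c', 0), ('a', 0)]  (ensemble 501)
```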
- A sixth embodiment is a method that may include combining intensity modulation with PWM to create a real-time dynamically-determined front-loaded bit weighting to further enhance perceived mitigation of motional artifacts using virtual aggregate pulse width truncation. The sixth embodiment of the present disclosure may apply the same general virtual truncation as the fifth embodiment does, and thereby also serves to mitigate motional breakup artifacts without, in fact, truly truncating the pulse in time. This sixth embodiment is dependent upon real-time evaluation of each frame being generated, because the distribution and weighting of the bits are no longer hardwired and synchronously fixed, but are determined in real time, video frame by video frame. The rearrangement of the bits to be displayed in this sixth embodiment must be calculated prior to generation of the frame for actual display to the viewer, which will likely require look-ahead capability in the video cache to preprocess each frame in regard to the required real-time analysis process. In contrast to the hardwired bit sequence of the fifth embodiment, the sixth embodiment dynamically adjusts the bit sequence in light of the exigencies of each frame's program content, thereby insuring that the correct bits are truly weighted to the front of the aggregate pulse being encoded across all primary colored lights, which cannot always be guaranteed in the case of the hardwired fifth embodiment.
- In regard to program material (video content), it is not always true that the highest order bits contain the video information that defines the key thresholds within a video image. The fifth embodiment, therefore, represents a very useful approximation and provides virtual aggregate pulse width truncation for about 75% or more of the information being displayed on a video system so equipped. But, for the remaining 25% of program content, the fifth embodiment may provide no apparent advantage or could even theoretically serve to extend rather than truncate the virtual perception of the pulse, therefore aggravating rather than mitigating motional artifacts during the display of the affected video frames.
- This sixth embodiment discloses an alternative method for determining the best order in which to display the bit weights comprising a video frame. Real-time video analysis software that is capable of calculating the visually significant gray scale levels in a full color frame at full operational speed can supply a display system with the means to re-order the bit weights on a frame-by-frame basis to insure that the bit weights that truly represent the most visually significant bits (not merely the bits that represent the largest arithmetic values) are displayed first. In most cases, these values are likely to match that for the hard-wired front-loaded bit weight encoding method disclosed in the fifth embodiment above, but any given frame may well deviate from this standard, as suggested in
FIG. 5E, where the most visually significant bits 301 a, 302 b, 304 a (as contrasted to the conventional, purely mathematical definition of most significant bit) are displayed first in a first ensemble 506, followed by the next most visually significant bits 304 b, 304 c, 302 a displayed second in a second ensemble 507, followed by the next most visually significant bits 302 c, 303 a, 301 c displayed third in a third ensemble 508, followed by the next most visually significant bits 301 b, 303 b, 303 c displayed fourth in ensemble 509 (e.g., the least visually significant bits in ensemble 509 for the 4-bit system illustrated in FIG. 5E), etc. Note that the constituent elements comprising the first ensemble 506 of most visually significant bits are clearly not the three most significant bits as they appeared in FIG. 5D. Because the video content may involve objects of one color moving against a background with a rather similar color, the distinction between the object and its background is likely to be determined not by the most significant bits mathematically considered, but rather by bits that define the shade of difference between object and background, which could readily be a lower order bit, as suggested by the composition of first ensemble 506 in FIG. 5E. The sample provided in FIG. 5E is an arbitrary example; implementation of this sixth embodiment would likely entail the customization of the bit weight sequence for every consecutive frame of video being encoded by the display driver circuitry, to provide both maximum actual aggregate pulse width truncation across all colors and maximum virtual pulse width truncation due to front-loading of the most visually significant bits, determined on a frame-by-frame basis by the system doing the video analysis and providing the driver circuitry the correct information to re-arrange the interleaving of the various primaries, their intensities, and their pulse widths to insure the desired result as elaborated above.
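One way such a per-frame re-ordering could be computed is sketched below; the scoring rule (counting how often a bit plane distinguishes foreground pixels from their background) is an assumption standing in for whatever real-time significance metric the display system actually uses:

```python
# Illustrative sketch: rank bit planes by how often they differ between a
# foreground region and its background, and emit the highest-scoring planes
# first (most visually significant, not necessarily arithmetically largest).
from itertools import product

def bitplane_scores(fg, bg, bits=4):
    """fg, bg: lists of (a, b, c) tuples of `bits`-bit channel values."""
    scores = {}
    for ch, k in product(range(3), range(bits)):
        mask = 1 << (bits - 1 - k)                    # k = 0 is the MSB
        scores[("abc"[ch], k)] = sum(
            (f[ch] ^ g[ch]) & mask != 0 for f, g in zip(fg, bg))
    return scores

def display_order(fg, bg, bits=4):
    s = bitplane_scores(fg, bg, bits)
    return sorted(s, key=s.get, reverse=True)         # most visually significant first
```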
- A seventh embodiment is a method that may include the real-time quantizing/posterizing of foreground objects moving relative to their background in a video frame sequence to gain actual and virtual aggregate pulse width truncation and concomitant mitigation of motional artifacts associated with FSC displays. The seventh embodiment of the present disclosure may extend a method of the fifth and sixth embodiments to a more elaborate level to achieve further gains in the realm of motional artifact mitigation in field sequential color displays. This embodiment is a variant of the previous method and includes determination (by intelligent real-time frame analysis software) of which portions of the image represent foreground objects in sufficiently rapid motion against the background being displayed to warrant the imposition of artifact mitigation methods. Motional artifacts are known to aggregate around such foreground objects at their borders in the direction of motion across the display. The method calls for the identified part of the frame that represents the moving foreground object to be converted, via an intelligent modulo method, into a reduced bit-depth image, with the bits comprising the object being targeted for front-loading for the frame in question. For example, an airplane against a blue sky would be identified as a foreground moving object by the video analysis software. Both airplane and blue sky are originally encoded as 24-bit color (8 bits per tristimulus primary). The revised frame in this method would re-encode the airplane portion of the frame in a reduced bit depth (e.g., 12-bit color, or 4 bits per tristimulus primary), and those moving-object-related bits for all relevant tristimulus primary colors would be front-loaded and transduced first, before processing the remaining 12 bits which define the sky's greater bit depth. Because the foreground object is in sufficiently rapid motion, the loss of bit depth is not apparent to the eye during its transit across the display. This real-time dynamic bit-depth adjusting method is invoked only as needed, based on program content, to mitigate and/or prevent the onset of motional color artifacts.
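A simple way to express the airplane-versus-sky example is to posterize only the pixels inside the detected moving region while leaving the background at full depth; the data layout and truncation rule here are assumptions for illustration:

```python
# Illustrative sketch: re-encode only the moving-object region at reduced depth
# (4 bits per primary here) while the static background keeps its 8-bit depth.
def posterize_pixel(rgb, keep_bits=4):
    """Drop the low-order bits of each 8-bit primary (simple truncation)."""
    drop = 8 - keep_bits
    return tuple((c >> drop) << drop for c in rgb)

def reencode_frame(frame, moving_mask, keep_bits=4):
    """frame: dict[(x, y)] -> (r, g, b); moving_mask: set of (x, y) in motion."""
    return {xy: posterize_pixel(rgb, keep_bits) if xy in moving_mask else rgb
            for xy, rgb in frame.items()}
```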
- This seventh embodiment is schematically illustrated in a flow chart depicted in
FIG. 6, which provides a method for preprocessing video data to be encoded using field sequential color pulse width modulation techniques. The mitigation of motional artifacts may be applied to specific regions of a video frame, namely those regions most likely to be affected by artifacts by virtue of relative motion between that region (which thus represents a moving foreground object) and the background. Motional artifacts tend to occur at the borders of the foreground object and appear as color smearing and decoupling in the direction of apparent motion of the object. The principles that inhere in this method may also be adapted to situations where motional breakup is caused by relative motion between the display screen and the viewer's head (such as might occur with an avionics display when the pilot's cockpit is significantly vibrating). In that instance, a desired preprocessing effect may be imposed on the entire display if vibration is detected by an appropriate sensor. In general use, however, the preprocessing effect under this method is applied to objects in the video frame sequence that are intelligently determined, via real-time video analysis, to be moving at sufficient speeds relative to their background as to cause a significant risk of motional artifact generation and the resulting temporal decoupling of the edges of the object being tracked by the observer's eyes.
- By way of background, there are means in the prior art for determining which parts of a video frame sequence represent an object in motion. Two methods for conducting such real-time analysis have been put forward by Zlokolica et al. ("Fuzzy logic recursive motion detection and denoising of video sequences," Journal of Electronic Imaging, April-June 2006, Vol. 15(2), 023008-1ff) and by Argyriou and Vlachos ("Extrapolation-free arbitrary-shape motion estimation using phase correlation," Journal of Electronic Imaging, January-March 2006, Vol. 15(1), 010501-1ff). Therefore, the implementation of such a step within the method disclosed hereunder shall be understood to represent either one or the other of such methods, a combination of them, or an equivalent or superior approach to that enunciated by these research teams, and as such reflects an existing but growing body of knowledge in the field of image analysis, particularly in regard to video analysis. Because motion detection under such systems requires several subsequent frames to be analyzed at once, this embodiment presupposes the existence of a suitable video cache capable of feeding such a real-time analysis system as might be assembled according to principles published by the researchers alluded to above.
- Further, this embodiment is based on selective posterization (or quantization) of the moving region(s), if any, in the video stream being analyzed. Posterization involves a deliberate reduction in the number of gray scales involved in depicting an image or a portion of an image. When a part of the video stream is detected to represent a foreground object moving rapidly with respect to the background, it may be advantageous for that object to undergo both actual and virtual aggregate pulse width truncation to mitigate the artifacts that could arise if the frame rates or blanking periods are not suitably configured. One way to induce such truncation is to reconfigure the moving object in regard to the bit weights comprising it, so that it may be represented by four or five bits rather than all eight bits of each primary color. This reduction in bit depth will cause a visual effect known as posterization, wherein gentle gradations within the object become more sharply defined and less smooth. Since the object is in rapid motion, this brief loss of gray scale depth is far less objectionable than motional color breakup artifacts (which even tend to elongate the moving object in the direction of motion, which is unacceptable for display systems involved in training pilots in simulator systems where target acquisition is premised on accurate shapes for moving objects on-screen).
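By way of example, one elementary rounding method for the posterization step is round-to-nearest quantization of each primary; this is only a stand-in for the more sophisticated methods the description contemplates:

```python
# Illustrative sketch: round each 8-bit primary to the nearest of 2**keep_bits
# levels and map it back onto the 0-255 scale.
def quantize_channel(value: int, keep_bits: int = 4) -> int:
    levels = (1 << keep_bits) - 1          # e.g. 15 coarse steps for 4 bits
    q = round(value / 255 * levels)        # nearest coarse level
    return round(q * 255 / levels)         # back to the 8-bit scale

assert quantize_channel(200, 4) == 204     # 200 -> level 12 -> 204
```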
- Therefore, by reducing the bit depth of the moving object and reconfiguring the display bit order for all primaries in light of the basic principles incorporated in the sixth embodiment (and reflected in the example given in
FIG. 5E in which the most visually significant bits take precedence over the mathematically most significant bits), a performance improvement in regard to motional artifact mitigation can be enjoyed by the display incorporating this method. - Referring to
FIG. 6, incoming video signal 601 is received in the video cache 602 that is used to feed data in appropriate increments frame-by-frame to the real-time motion analysis subsystem 603, a system such as was alluded to above in regard to the prior art in the field of real-time image analysis applied to video streams. Several frames are processed at once by the analyzer 603 to determine the status, in temporal context, of the first frame being processed. The system then determines 604 whether or not there is any relative motion between one or more foreground objects and the apparent background, and whether or not the motion is sufficiently rapid as to be likely to cause motional color breakup artifacts. If there is such relative motion, then the defined region(s) of the video frame that are detected to be in rapid relative motion are resampled/posterized/quantized 605 so as to intelligently reduce the number of gray scales defining the region(s) detected to be in motion (and thus representing moving objects against the displayed background in the video frame). The bit sequence for encoding may also be adjusted in real time if the principles of the sixth embodiment are being followed in the device, otherwise not. If there was no detected relative motion, then the video frame is not posterized or resampled but is passed on as-is to the primary encoding engine 606. If there was indeed such motion, the parameters for the video adjustment are handled at the analyzer 605 and any encoding-specific adjustments (if the sixth embodiment is also implemented) are passed as determinative parameters to the encoding engine 606, which then determines the order of the bits to be displayed across all three primary colors. - The video frame, whether or not it has undergone real-time selective dynamic posterization at 605, is displayed at 607; the video cache is queried 608 to determine if it is empty. If it is, information display would naturally cease 610; if there are more frames to process, then the
system increments 609 such that sufficient video frames are present in the cache 602 to insure that the real-time motion analysis system 603 can continue to correctly determine whether the frames in context exhibit sufficient relative motion within the program content to trigger the posterization step 605.
- The posterization step may involve any number of rounding methods to secure the desired result. One example of such a method is disclosed in "Adaptive color quantization using the baker's transformation" by Montagne et al. (Journal of Electronic Imaging, April-June 2006, Vol. 15(2), 023015-1ff); however, there are a host of methods for securing useful results. The purpose of the posterization step is to provide further gains in artifact mitigation by reducing the system overhead required for generating the full gray scale bit depth for the moving object. Instead of, for example, the object (which may be an aircraft being displayed against a blue sky) being displayed in 24-bit color, it may be displayed in 12-bit color if it is moving rapidly against the sky in the video sequence. The combination of the sixth and seventh embodiments herein will provide maximum actual and virtual truncation of the aggregate color pulses that generate these images.
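The overall FIG. 6 flow can be summarized as a simple loop; the helper callables stand in for the numbered subsystems in the figure and are assumptions, not an implementation of the claimed system:

```python
# Illustrative sketch of the FIG. 6 flow: cache frames (601/602), analyze motion
# (603/604), posterize moving regions (605), encode (606), display (607), and
# loop (608/609) until the cache is empty (610).
from collections import deque

def run_pipeline(incoming_frames, analyze_motion, posterize, encode, display,
                 look_ahead=5):
    cache = deque(incoming_frames)                       # 601 feeds 602
    while cache:                                         # 608: cache empty?
        window = list(cache)[:look_ahead]                # look-ahead for 603
        frame = cache.popleft()                          # 609: advance
        moving_regions = analyze_motion(window)          # 603/604
        if moving_regions:
            frame = posterize(frame, moving_regions)     # 605
        display(encode(frame, moving_regions))           # 606, 607
    # 610: display naturally ceases when no frames remain
```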
- It should be understood that the methods outlined here are not limited to the three tristimulus primaries but may be applied to extended gamut scenarios where additional visible primary colored light(s) are used to encode video data. The methods outlined here may also be applied to the interleaving of nonvisible primaries (e.g., infrared light) with visible primaries, and are not restricted in any way to the three well-known tristimulus primaries (e.g., red-green-blue). The same methods for intensity modulation coordinated with pulse width modulation and the other methods disclosed apply to all such display systems.
- It will be seen by those skilled in the art that many embodiments taking a variety of specific forms and reflecting changes, substitutions, and alterations can be made without departing from the spirit and scope of the invention. Therefore, the described embodiments illustrate but do not restrict the scope of the claims.
Claims (3)
1. A method comprising:
displaying a video image using field sequential color encoded in a plurality of video frames;
receiving the plurality of video frames for displaying a foreground object and a background image in the video image;
determining in the received plurality of video frames that the foreground object of a video image of at least one of the video frames is in motion relative to the background image of the video image;
modifying a gray scale of the at least one of the video frames associated with the foreground object that is in motion relative to the background image of the video image to reduce the number of gray scales defining the foreground object;
encoding the plurality of video frames including the modified video frame for image generation; and
displaying the encoded plurality of video frames, including the modified video frame;
wherein the plurality of video frames are displayed by temporally segregating primary color components of the video images by presenting each frame of video information by rapid consecutive generation of each primary component.
2. A method for removing field sequential color artifacts that arise in a display system that temporally segregates color components of a video image and presents each frame of video information by rapid consecutive generation of each color component when the color components of the video image making up a composite frame of video information do not all reach a same region of an observer's retina due to relative motion of the retina and the video image to be displayed, the method comprising:
receiving a video signal for displaying the video image comprising a foreground object and a background image, the video signal further comprising a plurality of video frames, each video frame comprising a plurality of bits representing a gray scale of one of a plurality of the color components used by the display system to display the video image;
determining that the foreground object is moving relative to the background image; and
modifying a video frame containing color gray scale information associated with display of the foreground object to reduce the number of gray scales defining the foreground object;
wherein the plurality of video frames are displayed by temporally segregating primary color components of the video images by presenting each frame of video information by rapid consecutive generation of each primary component.
3. A display system comprising:
a display panel for displaying a video image using field sequential color encoded in a plurality of video frames;
a video cache for receiving the plurality of video frames for determining whether the foreground object is moving relative to the background image;
a motion analyzer processing the plurality of video frames for determining whether the foreground object is moving relative to the background image;
the motion analyzer determining the foreground object is moving relative to the background image and further comprising circuitry for modifying a gray scale of the foreground object of at least one of the plurality of video frames in response thereto to reduce the number of gray scales defining the foreground object in the at least one of the plurality of video frames;
an encoder for encoding the plurality of video frames including the modified video frame for image generation; and
displaying the encoded plurality of video frames, including the modified video frame;
wherein the plurality of video frames are displayed by temporally segregating primary color components of the video image by presenting each frame of video information by rapid consecutive generation of each primary component.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/326,190 US20140375670A1 (en) | 2008-09-22 | 2014-07-08 | Field sequential color encoding for displays |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US9893108P | 2008-09-22 | 2008-09-22 | |
US12/564,894 US8405691B2 (en) | 2008-09-22 | 2009-09-22 | Field sequential color encoding for displays |
US13/772,457 US8773480B2 (en) | 2008-09-22 | 2013-02-21 | Field sequential color encoding for displays |
US14/326,190 US20140375670A1 (en) | 2008-09-22 | 2014-07-08 | Field sequential color encoding for displays |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/772,457 Continuation US8773480B2 (en) | 2008-09-22 | 2013-02-21 | Field sequential color encoding for displays |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140375670A1 true US20140375670A1 (en) | 2014-12-25 |
Family
ID=42037270
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/564,894 Expired - Fee Related US8405691B2 (en) | 2008-09-22 | 2009-09-22 | Field sequential color encoding for displays |
US13/772,457 Expired - Fee Related US8773480B2 (en) | 2008-09-22 | 2013-02-21 | Field sequential color encoding for displays |
US14/326,190 Abandoned US20140375670A1 (en) | 2008-09-22 | 2014-07-08 | Field sequential color encoding for displays |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/564,894 Expired - Fee Related US8405691B2 (en) | 2008-09-22 | 2009-09-22 | Field sequential color encoding for displays |
US13/772,457 Expired - Fee Related US8773480B2 (en) | 2008-09-22 | 2013-02-21 | Field sequential color encoding for displays |
Country Status (2)
Country | Link |
---|---|
US (3) | US8405691B2 (en) |
WO (1) | WO2010034025A2 (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010034025A2 (en) * | 2008-09-22 | 2010-03-25 | Uni-Pixel Displays, Inc. | Field sequential color encoding for displays |
US8893034B2 (en) * | 2010-01-27 | 2014-11-18 | Yahoo! Inc. | Motion enabled multi-frame challenge-response test |
CA2796519A1 (en) | 2010-04-16 | 2011-10-20 | Flex Lighting Ii, Llc | Illumination device comprising a film-based lightguide |
BR112012026329A2 (en) | 2010-04-16 | 2019-09-24 | Flex Lighting Ii Llc | signal comprising a film-based light guide |
ES2388413B1 (en) * | 2010-07-01 | 2013-08-22 | Telefónica, S.A. | METHOD FOR CLASSIFICATION OF VIDEOS. |
US20120025949A1 (en) * | 2010-07-29 | 2012-02-02 | Reed Matthew H | Concurrent Infrared Signal, Single Thread Low Power Protocol and System for Pet Control |
WO2013152439A1 (en) * | 2012-04-13 | 2013-10-17 | In Situ Media Corporation | Method and system for inserting and/or manipulating dynamic content for digital media post production |
US9124762B2 (en) | 2012-12-20 | 2015-09-01 | Microsoft Technology Licensing, Llc | Privacy camera |
US20140253702A1 (en) | 2013-03-10 | 2014-09-11 | OrCam Technologies, Ltd. | Apparatus and method for executing system commands based on captured image data |
US9213911B2 (en) * | 2013-03-15 | 2015-12-15 | Orcam Technologies Ltd. | Apparatus, method, and computer readable medium for recognizing text on a curved surface |
US9619867B2 (en) * | 2014-04-03 | 2017-04-11 | Empire Technology Development Llc | Color smear correction based on inertial measurements |
KR101577324B1 (en) | 2014-06-18 | 2015-12-29 | (주)티티에스 | Apparatus for treatmenting substrate |
CN105988272B (en) * | 2015-02-15 | 2019-03-01 | 深圳光峰科技股份有限公司 | Optical projection system and its control method |
CN106154714B (en) * | 2015-04-09 | 2018-11-13 | 深圳市光峰光电技术有限公司 | A kind of method and optical projection system of spatial light modulator modulation data |
US9874932B2 (en) * | 2015-04-09 | 2018-01-23 | Microsoft Technology Licensing, Llc | Avoidance of color breakup in late-stage re-projection |
US10074299B2 (en) | 2015-07-28 | 2018-09-11 | Microsoft Technology Licensing, Llc | Pulse width modulation for a head-mounted display device display illumination system |
US10338677B2 (en) | 2015-10-28 | 2019-07-02 | Microsoft Technology Licensing, Llc | Adjusting image frames based on tracking motion of eyes |
US10542596B1 (en) * | 2017-07-12 | 2020-01-21 | Facebook Technologies, Llc | Low power pulse width modulation by controlling bits order |
US20230067584A1 (en) * | 2021-08-27 | 2023-03-02 | Apple Inc. | Adaptive Quantization Matrix for Extended Reality Video Encoding |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6002412A (en) * | 1997-05-30 | 1999-12-14 | Hewlett-Packard Co. | Increased performance of graphics memory using page sorting fifos |
US6333725B1 (en) * | 1998-06-30 | 2001-12-25 | Daewoo Electronics, Co., Ltd. | Data interfacing apparatus of AC type plasma display panel system |
US20020012008A1 (en) * | 2000-04-21 | 2002-01-31 | Yuichi Takagi | Modulation circuit, image display using the same, and modulation method |
US20040164980A1 (en) * | 2002-12-04 | 2004-08-26 | Hewlett Gregory J. | Nonlinearity and reset conflicts in pulse width modulated displays |
US20050062765A1 (en) * | 2003-09-23 | 2005-03-24 | Elcos Microdisplay Technology, Inc. | Temporally dispersed modulation method |
US20060146389A1 (en) * | 2002-05-06 | 2006-07-06 | Uni-Pixel Displays, Inc. | Field sequential color efficiency |
US20060250423A1 (en) * | 2005-05-09 | 2006-11-09 | Kettle Wiatt E | Hybrid data planes |
US20070035707A1 (en) * | 2005-06-20 | 2007-02-15 | Digital Display Innovations, Llc | Field sequential light source modulation for a digital display system |
US20080137950A1 (en) * | 2006-12-07 | 2008-06-12 | Electronics And Telecommunications Research Institute | System and method for analyzing of human motion based on silhouettes of real time video stream |
US20080187219A1 (en) * | 2007-02-05 | 2008-08-07 | Chao-Ho Chen | Video Object Segmentation Method Applied for Rainy Situations |
US20080192065A1 (en) * | 2005-08-02 | 2008-08-14 | Uni-Pixel Displays, Inc. | Mechanism to Mitigate Color Breakup Artifacts in Field Sequential Color Display Systems |
US20100033516A1 (en) * | 2006-12-22 | 2010-02-11 | Koninklijke Philips Electronics N.V. | Method of adjusting the light output of a projector system, and system for adjusting the light output of a projector system |
US20100067863A1 (en) * | 2008-09-17 | 2010-03-18 | Wang Patricia P | Video editing methods and systems |
US7738002B2 (en) * | 2004-10-12 | 2010-06-15 | Koninklijke Philips Electronics N.V. | Control apparatus and method for use with digitally controlled light sources |
US8405691B2 (en) * | 2008-09-22 | 2013-03-26 | Rambus Inc. | Field sequential color encoding for displays |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3873544B2 (en) * | 1999-09-30 | 2007-01-24 | セイコーエプソン株式会社 | Electro-optical device and projection display device |
JP2002268034A (en) * | 2001-03-12 | 2002-09-18 | Matsushita Electric Ind Co Ltd | Liquid crystal display device and its driving method, and information display device |
2009
- 2009-09-22 WO PCT/US2009/057923 patent/WO2010034025A2/en active Application Filing
- 2009-09-22 US US12/564,894 patent/US8405691B2/en not_active Expired - Fee Related
2013
- 2013-02-21 US US13/772,457 patent/US8773480B2/en not_active Expired - Fee Related
2014
- 2014-07-08 US US14/326,190 patent/US20140375670A1/en not_active Abandoned
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6002412A (en) * | 1997-05-30 | 1999-12-14 | Hewlett-Packard Co. | Increased performance of graphics memory using page sorting fifos |
US6333725B1 (en) * | 1998-06-30 | 2001-12-25 | Daewoo Electronics, Co., Ltd. | Data interfacing apparatus of AC type plasma display panel system |
US20020012008A1 (en) * | 2000-04-21 | 2002-01-31 | Yuichi Takagi | Modulation circuit, image display using the same, and modulation method |
US20060146389A1 (en) * | 2002-05-06 | 2006-07-06 | Uni-Pixel Displays, Inc. | Field sequential color efficiency |
US20040164980A1 (en) * | 2002-12-04 | 2004-08-26 | Hewlett Gregory J. | Nonlinearity and reset conflicts in pulse width modulated displays |
US20050062765A1 (en) * | 2003-09-23 | 2005-03-24 | Elcos Microdisplay Technology, Inc. | Temporally dispersed modulation method |
US7738002B2 (en) * | 2004-10-12 | 2010-06-15 | Koninklijke Philips Electronics N.V. | Control apparatus and method for use with digitally controlled light sources |
US20060250423A1 (en) * | 2005-05-09 | 2006-11-09 | Kettle Wiatt E | Hybrid data planes |
US20070035707A1 (en) * | 2005-06-20 | 2007-02-15 | Digital Display Innovations, Llc | Field sequential light source modulation for a digital display system |
US20080192065A1 (en) * | 2005-08-02 | 2008-08-14 | Uni-Pixel Displays, Inc. | Mechanism to Mitigate Color Breakup Artifacts in Field Sequential Color Display Systems |
US20080137950A1 (en) * | 2006-12-07 | 2008-06-12 | Electronics And Telecommunications Research Institute | System and method for analyzing of human motion based on silhouettes of real time video stream |
US20100033516A1 (en) * | 2006-12-22 | 2010-02-11 | Koninklijke Philips Electronics N.V. | Method of adjusting the light output of a projector system, and system for adjusting the light output of a projector system |
US20080187219A1 (en) * | 2007-02-05 | 2008-08-07 | Chao-Ho Chen | Video Object Segmentation Method Applied for Rainy Situations |
US20100067863A1 (en) * | 2008-09-17 | 2010-03-18 | Wang Patricia P | Video editing methods and systems |
US8405691B2 (en) * | 2008-09-22 | 2013-03-26 | Rambus Inc. | Field sequential color encoding for displays |
US8773480B2 (en) * | 2008-09-22 | 2014-07-08 | Rambus Delaware Llc | Field sequential color encoding for displays |
Also Published As
Publication number | Publication date |
---|---|
WO2010034025A2 (en) | 2010-03-25 |
US20130235273A1 (en) | 2013-09-12 |
US8773480B2 (en) | 2014-07-08 |
WO2010034025A3 (en) | 2010-07-01 |
US20100073568A1 (en) | 2010-03-25 |
US8405691B2 (en) | 2013-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8773480B2 (en) | Field sequential color encoding for displays | |
US8643681B2 (en) | Color display system | |
US8077185B2 (en) | Mechanism to mitigate color breakup artifacts in field sequential color display systems | |
US8319699B2 (en) | Multiple display channel system with high dynamic range | |
JP5174309B2 (en) | Devices and techniques for increasing the dynamic range of projection devices | |
EP0666009B1 (en) | Matrix display systems and methods of operating such systems | |
EP1701332A2 (en) | Backlit display device with reduced flickering and blur | |
CN106356036B (en) | Reduced blur, low flicker display system | |
US7408527B2 (en) | Light emitting device driving method and projection apparatus so equipped | |
JP2000036969A (en) | Stereoscopic image display method and system | |
JP2000066632A (en) | Video image processing method for compensating influence of false contour and device therefor | |
US9165530B2 (en) | Three-dimensional image display apparatus | |
US8730399B2 (en) | Dynamic illumination control for laser projection display | |
US8305387B2 (en) | Adaptive pulse-width modulated sequences for sequential color display systems | |
US9230296B2 (en) | Spatial and temporal pulse width modulation method for image display | |
US20120320078A1 (en) | Image intensity-based color sequence reallocation for sequential color image display | |
JP5895446B2 (en) | Liquid crystal display element driving apparatus, liquid crystal display apparatus, and liquid crystal display element driving method | |
JP2003330410A (en) | Method of reducing image artifact on display panel caused by phosphor time response | |
JP2001202057A (en) | Image display device and image display method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |