WO2012089849A1 - Method and system for compensating effects in light emitting display devices - Google Patents

Info

Publication number
WO2012089849A1
WO2012089849A1 (application number PCT/EP2012/050028)
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
display
light
area
display device
Prior art date
Application number
PCT/EP2012/050028
Other languages
French (fr)
Inventor
Arnout Robert Leontine VETSUYPENS
Peter NOLLET
Saso MLADENOVSKI
Original Assignee
Barco N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Barco N.V. filed Critical Barco N.V.
Publication of WO2012089849A1 publication Critical patent/WO2012089849A1/en

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/20 Control arrangements or circuits for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G 3/2092 Details of a display terminal using a flat panel, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G 3/22 Such arrangements using controlled light sources
    • G09G 3/30 Such arrangements using electroluminescent panels
    • G09G 3/32 Such arrangements using electroluminescent panels, semiconductive, e.g. using light-emitting diodes [LED]
    • G09G 3/3208 Such arrangements using electroluminescent panels, semiconductive, organic, e.g. using organic light-emitting diodes [OLED]
    • G09G 3/34 Such arrangements by control of light from an independent source
    • G09G 3/36 Such arrangements by control of light from an independent source using liquid crystals
    • G02 OPTICS
    • G02F OPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
    • G02F 1/00 Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
    • G02F 1/01 Such devices for the control of the intensity, phase, polarisation or colour
    • G02F 1/13 Such devices based on liquid crystals, e.g. single liquid crystal display cells
    • G02F 1/133 Constructional arrangements; Operation of liquid crystal cells; Circuit arrangements
    • G02F 1/1333 Constructional arrangements; Manufacturing methods
    • G02F 1/1335 Structural association of cells with optical devices, e.g. polarisers or reflectors
    • G02F 1/133524 Light-guides, e.g. fibre-optic bundles, louvered or jalousie light-guides
    • G02F 1/135 Liquid crystal cells structurally associated with a photoconducting or a ferro-electric layer, the properties of which can be optically or electrically varied
    • G09G 2300/00 Aspects of the constitution of display devices
    • G09G 2300/04 Structural and physical details of display devices
    • G09G 2300/0421 Structural details of the set of electrodes
    • G09G 2300/0426 Layout of electrodes and connections
    • G09G 2300/043 Compensation electrodes or other additional electrodes in matrix displays related to distortions or compensation signals, e.g. for modifying TFT threshold voltage in column driver
    • G09G 2320/00 Control of display operating conditions
    • G09G 2320/02 Improving the quality of display appearance
    • G09G 2320/0233 Improving the luminance or brightness uniformity across the screen
    • G09G 2320/0242 Compensation of deficiencies in the appearance of colours
    • G09G 2320/029 Improving the quality of display appearance by monitoring one or more pixels in the display panel, e.g. by monitoring a fixed reference pixel
    • G09G 2320/04 Maintaining the quality of display appearance
    • G09G 2320/043 Preventing or counteracting the effects of ageing
    • G09G 2320/06 Adjustment of display parameters
    • G09G 2320/0693 Calibration of display systems
    • G09G 2360/00 Aspects of the architecture of display systems
    • G09G 2360/14 Detecting light within display terminals, e.g. using a single or a plurality of photosensors
    • G09G 2360/144 Detecting light within display terminals, the light being ambient light
    • G09G 2360/145 Detecting light within display terminals, the light originating from the display screen

Definitions

  • the present invention relates to a system and method for detecting and/or visualising and/or compensating optical effects of an image displayed on a display device. It applies more particularly, but not exclusively, to all sorts of fixed format displays, such as active matrix type displays, LED or OLED displays, LCD displays, plasma displays, etc. It applies particularly to fixed format displays that are typically, but not exclusively, intended for medical imaging.
  • the present invention relates to a display device such as a fixed format display that has a high contrast ratio, wide viewing angle, and accurate imaging.
  • Fixed format means that the displays comprise a matrix of light emitting cells or pixel structures that are individually addressable rather than using a scanning electron beam as in a CRT. Fixed format also relates to the fact that the display contains pixels to visualize the image as well as to the fact that individual parts of the image signal are assigned to specific pixels in the display.
  • the term "fixed format" is not related to whether the display is extendable, e.g. via tiling, to larger arrays, but to the fact that the display has a set of addressable pixels in an array or in groups of arrays or in a matrix. Making very large fixed format displays as single units manufactured on a single substrate is difficult.
  • the first method and system comprises the integration of a light sensor circuit in each individual pixel, which acts as feedback circuitry.
  • In a liquid crystal display with voltage-driven pixels, the driving voltage can be corrected by increasing or decreasing it in response to this feedback signal, to compensate for the loss or variation of luminance and/or performance.
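The feedback correction described above can be sketched as a simple control step: an in-pixel light sensor returns a measured luminance, and the driving voltage is nudged up or down until the output matches the target. All names and the gain constant below are illustrative assumptions, not values from the patent.

```python
def correct_drive_voltage(target_luminance, measured_luminance,
                          current_voltage, gain=0.05):
    """Return an adjusted driving voltage based on the feedback signal.

    If the pixel emits less light than intended (e.g. due to ageing),
    the voltage is increased; if it emits more, it is decreased.
    """
    error = target_luminance - measured_luminance
    return current_voltage + gain * error

# Example: an aged pixel measuring 90 cd/m^2 instead of the intended
# 100 cd/m^2 gets its drive voltage raised from 3.0 V to 3.5 V.
v = correct_drive_voltage(100.0, 90.0, 3.0)
```

In practice such a correction would be applied iteratively per frame, converging as the measured output approaches the target.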
  • a second method for detecting and compensating the differential effects of display devices is based on a "model" approach.
  • a prediction of the reduction in performance for each pixel can be made based on a model. This can be done by analysing the video content or by monitoring the on-current time of each pixel.
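The model approach above, monitoring the on-current time of each pixel, can be sketched as follows. The exponential decay form and the decay constant are illustrative assumptions chosen only to show the mechanism, not parameters from the patent.

```python
import math

class PixelAgeingModel:
    """Illustrative on-time-based ageing model: accumulate each pixel's
    on-time and predict its remaining relative luminance."""

    def __init__(self, decay_per_hour=1e-4):
        self.decay_per_hour = decay_per_hour
        self.on_hours = {}  # pixel index -> accumulated on-time in hours

    def accumulate(self, pixel, hours):
        self.on_hours[pixel] = self.on_hours.get(pixel, 0.0) + hours

    def predicted_efficiency(self, pixel):
        """Fraction of original luminance the model expects (1.0 = new)."""
        t = self.on_hours.get(pixel, 0.0)
        return math.exp(-self.decay_per_hour * t)

model = PixelAgeingModel()
model.accumulate(pixel=0, hours=1000)   # heavily used pixel
model.accumulate(pixel=1, hours=10)     # rarely used pixel
# The heavily used pixel is predicted to have degraded more: this is
# exactly the differential ageing the compensation must correct.
```

As the surrounding text notes, the accuracy of any such model is limited because environmental factors (temperature, moisture) are not captured by on-time alone.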
  • the second method represents a much cheaper and simpler solution, but its accuracy depends heavily on the quality of the model used. Environmental factors such as temperature and moisture during the time of use cannot be taken into account. In practice, therefore, this second method does not give very accurate results, and part of the differential ageing problem remains visible. Thus, this type of compensation would not be acceptable for display devices used in medical imaging.
  • the model approach is based on a uniform prediction of the degradation of the pixels fed into the model.
  • the compensation achieved is not as accurate as it is required for an application in medical imaging.
  • It is an object of the present invention to provide displays, such as fixed format displays, that can be intended for use in medical imaging; more particularly, display devices, such as, for example, fixed format displays, that have a high contrast ratio, wide viewing angle, and accurate imaging, and hence methods for detecting and/or visualising and/or compensating effects of an image displayed on such a display device.
  • This object is accomplished by a method and a system according to the present invention.
  • The present invention concerns using a first sensor in combination with a second, partially transparent sensor in order to make sure that light measurements are made correctly and/or that the display or any of the sensors is working correctly, or to compensate for ambient light.
  • the first sensor can be any of an external reference sensor, a sensor on or integrated in the display, or an ambient light sensor.
  • the present invention provides in a first aspect methods for detecting and/or visualising and/or compensating ambient light or errors such as ageing effects of pixel outputs displaying an image on a display device, or errors in the sensors themselves. Firstly a first image is displayed on an active display area on the display device having a first plurality of pixels.
  • the active display area is preferably the normal display area for images, i.e. in working operation.
  • Secondly, a second image is displayed on a sub-area of the display device having a second plurality of pixels; the active display area is larger than the sub-area, and the second image is smaller than the first image, having fewer pixels than the active display area.
  • the pixels of the sub-area are driven with pixel values that are representative of, or indicative of, the pixels in the active display area, and optical measurements are performed on light emitted from the active display area, using a substantially transparent sensor, and from the sub-area, generating optical measurement signals therefrom. Preferably, the display of the image on the active display area is then controlled in accordance with the optical measurement signals of the sub-area and the active area.
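The measure-and-control step above can be sketched as follows: the measured output of a representatively driven area is compared with the expected (calibrated) output, and a gain correction is derived for the drive values. The sensor interface, the multiplicative correction rule, and the 8-bit drive range are illustrative assumptions.

```python
def compensation_gain(expected_luminance, measured_luminance):
    """Gain to apply to the drive values so the measured output
    returns to the expected (calibrated) level."""
    if measured_luminance <= 0:
        raise ValueError("invalid measurement")
    return expected_luminance / measured_luminance

def apply_gain(pixel_values, gain, max_value=255):
    """Scale 8-bit drive values, clipping at the drive limit."""
    return [min(int(round(v * gain)), max_value) for v in pixel_values]

# Example: the sub-area measurement shows 20% luminance loss, so
# drive values are boosted by a factor of 1.25 (clipped at 255).
gain = compensation_gain(100.0, 80.0)
corrected = apply_gain([80, 160, 240], gain)
```

The clipping illustrates a real limitation of output-side compensation: pixels already driven near full scale cannot be boosted further.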
  • the first sensor is for measuring light originating from the environment, e.g. ambient light.
  • the first sensor can capture the ambient light, i.e. the light from the environment, or the light emitted from a sub-area of the display device.
  • the first sensor preferably is bidirectional.
  • the invention provides display devices comprising at least one display sub-area provided with a plurality of pixels, with for each display sub-area the first type of sensor that is an optical sensor unit that is located in front of the display and makes optical measurements on a light output from only a representative part of the display.
  • This first sensor can be either an external sensor, entirely separate from the display, or alternatively it can be integrated in the design of the display.
  • the external sensor can also be calibrated by using a reference sensor with known properties.
  • the first type of sensor can be a mini-spectrometer, for instance the ultra-compact mini-spectrometer C10988MA by Hamamatsu, which integrates MEMS and image sensor technologies and can be positioned directly above the sub-area to be measured; alternatively, it can be integrated into the bezel of the display, with a waveguide solution used to guide light emitted by the sub-area of the active area to the sensor.
  • this mini-spectrometer can be used as an ambient light sensor, for example by integrating it into the bezel or into the backlight. Using a mini-spectrometer (or a set of mini-spectrometers) inside the backlight enables capturing the spectrum of the impinging light through the optical stack and the liquid crystal layer.
  • the optical foils and the liquid crystal layer can impact the measured spectrum, such that a calibration of the measured spectrum will be required.
  • the liquid crystal layer preferably can be adapted to work in the transmitting state, such that as much light as possible is captured by the sensor (for instance a mini-spectrometer); the backlight of the display then needs to be switched off.
  • the first sensor in case it is used for measurements of ambient light can for instance be a Coronis ambient light compensation sensor. The latter is preferably integrated in the bezel of a display.
  • the first type of optical sensor is used for measuring light of a sub-area of the display, the first sensor comprising an optical aperture and a photodiode sensor and in between a light guide structure, which preferably guides light rays using total internal reflection.
  • the aperture collects the light, whereafter the light exits through a light exit plane.
  • the first type of optical sensor comprises an optical aperture and a first light sensor, with a bundle of optical fibres therebetween.
  • the optical fibres are preferably fixed together or bundled (e.g. glued), and the end surface is polished to accept light rays under a limited angle only (as defined in the attached claims).
  • the first optical sensor comprises a light guide.
  • the first optical sensor furthermore comprises an aperture at one extremity of the light guide, and a photodiode sensor or equivalent device at the other extremity of the light guide.
  • the light guide can have a non-uniform cross-section in order to concentrate light to a light exit plane.
  • light rays travel by total internal reflection through the light guide.
  • the first sensor does not have a size restriction, but it is typically used to measure a zone of about 1 cm by 1 cm, preferably capturing the combined light output of many pixels in that zone.
  • the second, partially transparent sensor is a full-display sensor, partially and most preferably substantially transparent, for detecting a property of light emitted from the full display area into a viewing angle of the display device.
  • the second sensor is located in such a way as to detect light from a front section of said display device in front of said display area. Further means for displaying a test pattern on the display area of a specific luminance and chromaticity can be provided, and the second sensor can detect properties of light emitted, more specifically luminance and chromaticity and means can be provided for comparing the output of the first sensor with that of the second sensor. Any difference may indicate a need to re-calibrate.
  • the second type of sensor can comprise at least one sub-sensor, whereby said sub-sensor is adapted to produce an individual measurement signal and can be positioned as part of a matrix structure.
  • one of said sub-sensors is placed in the light path of the light measured by the first type of sensor, i.e. there is an overlap. This enables obtaining the exact luminance value, compensated for the temperature drift of both sensors.
  • an alternative is to use a light source that, from any position, shines light on all the sub-sensors of the second sensor.
  • the sub sensors of the second sensor can be calibrated relative to one another.
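The relative calibration of the sub-sensors can be sketched as follows: a common light source (or the overlapping first sensor) provides a reference reading, and each sub-sensor receives a correction factor so that all of them report the same value for the same light. The interface below is an illustrative assumption.

```python
def relative_calibration(readings, reference):
    """Return one multiplicative correction factor per sub-sensor so
    that each corrected reading equals the reference value."""
    return [reference / r for r in readings]

# Example: three sub-sensors observe the same source but report
# slightly different raw values; after correction all agree.
raw = [98.0, 100.0, 104.0]
factors = relative_calibration(raw, reference=100.0)
equalised = [r * f for r, f in zip(raw, factors)]
```

Once the factors are stored, they are applied to every subsequent measurement of each sub-sensor, not only to the calibration readings.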
  • the second sensor can also consist of a single large sensor, able to capture light coming from the entire active area, or a major part of the active area.
  • the second optical sensor according to embodiments of the present invention preferably is a bidirectional sensor that measures both the light emitted from the display and the ambient light.
  • the second type of sensor is used to measure light emitted by the display, optical measurements on light emitted from the display area and for example the sub-area are made by the second and first types of sensors respectively.
  • Electronic measurement signals are generated from these sensor measurements.
  • the image on the display device can be controlled in accordance with the electronic measurement signals of the first and/or second sensors.
  • the active display area and the sub-area can be present in one single display device.
  • both sensors are integrated into the display.
  • an external sensor can be used (for instance as the first sensor) to recalibrate the second type of sensor, as the second type of sensor typically lacks a V(λ) filter, i.e. a spectral filter needed to measure luminance.
  • the first sensor can be used as a reference sensor, as it can be made to include a V(λ) filter, while the second sensor can be corrected to match the values measured by the first sensor.
  • Each radiant quantity has a corresponding luminous i.e. photometric quantity, which reflects the physics related to the visual perception of the human eye.
  • the V(λ) characteristic describes the spectral response function of the human eye in the wavelength range from 380 nm to 780 nm and is used to evaluate the corresponding radiometric quantity, which is a function of the wavelength λ.
  • the photometric value luminous flux Φv can be obtained by integrating the radiant power Φe(λ) using the following formula:

    Φv = Km · ∫ (from 380 nm to 780 nm) Φe(λ) · V(λ) dλ

  • the unit of luminous flux Φv is lumen [lm]
  • the unit of Φe is Watt [W]
  • the unit of V(λ) is [1/nm]
  • the scaling factor Km is 683 lm/W.
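The photometric conversion above can be sketched numerically: luminous flux is the V(λ)-weighted integral of the spectral radiant power, scaled by Km = 683 lm/W. The example spectrum and the Gaussian approximation of V(λ) used here are illustrative assumptions, not the tabulated CIE data.

```python
import math

KM = 683.0  # scaling factor Km in lm/W

def v_lambda(wl_nm):
    """Rough Gaussian approximation of the photopic V(lambda) curve,
    peaking at 555 nm (for illustration only)."""
    return math.exp(-0.5 * ((wl_nm - 555.0) / 45.0) ** 2)

def luminous_flux(spectrum, step_nm=1.0):
    """Riemann-sum approximation of Km * integral(Phi_e(l) V(l) dl).

    `spectrum` maps wavelength [nm] -> spectral radiant power [W/nm]
    sampled on a grid with spacing `step_nm`.
    """
    return KM * sum(p * v_lambda(wl) * step_nm
                    for wl, p in spectrum.items())

# Example: 1 W of radiant power concentrated at the V(lambda) peak
# (555 nm) yields the maximum luminous efficacy, 683 lm.
monochromatic = {555.0: 1.0}  # 1 W/nm over a 1 nm band = 1 W total
flux = luminous_flux(monochromatic)
```

A real V(λ) filter implements exactly this weighting optically, which is why a sensor without one cannot measure luminance directly.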
  • V(λ) filter: an optical spectral filter that, used in combination with a first sensor, results in a spectral sensitivity matching the V(λ) curve; such a filter is called a V(λ) filter.
  • if both sensors are imperfect at a certain moment in time, they can be recalibrated using a known reference light source, making sure that one of the sub-sensors of the second sensor is designed to measure the same value as the first sensor if the sub-sensor of the second sensor is unable to measure the light source.
  • both sensors can be recalibrated (if both give a different value, at least one of them measures incorrectly).
  • both sensors can be used in such a way that measurement results produced by both sensors can be compared such that imperfections over time of at least one of the two sensors can be detected. Once they differ, a recalibration of the sensors is required.
  • one of the two sensors can be a reference sensor, which is used to recalibrate the other sensor.
  • the first sensor will then be the reference sensor, e.g. when an external calibrated sensor is used.
  • the second sensor is then calibrated by comparing its measured values to those of the first sensor and correcting its measured values. This can, for instance, be done by combining measurements of the first sensor with an ageing model and predicting what the second sensor should measure across the screen.
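This recalibration step can be sketched as follows: the first (reference) sensor, combined with an ageing-model prediction, defines what the second sensor should read, and the second sensor's raw readings are scaled to match. The function name and the linear correction form are illustrative assumptions.

```python
def recalibrate_second_sensor(raw_second, reference_first,
                              predicted_degradation=1.0):
    """Return a multiplicative correction factor for the second sensor.

    `predicted_degradation` comes from the ageing model: the fraction
    of the reference reading that the second sensor's patch of the
    screen is still expected to emit (1.0 = no predicted ageing).
    """
    if raw_second <= 0:
        raise ValueError("invalid second-sensor reading")
    expected = reference_first * predicted_degradation
    return expected / raw_second

# Example: the reference sensor reads 100, the model predicts 5%
# ageing loss at the second sensor's location, and the second sensor
# reads 90; its readings must be scaled by 95/90 to match.
factor = recalibrate_second_sensor(90.0, 100.0,
                                   predicted_degradation=0.95)
```

Applying this factor to all subsequent raw readings of the second sensor keeps it traceable to the V(λ)-filtered reference sensor.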
  • the first type of sensor is integrated into the display device at a fixed position.
  • the first type of sensor is positioned at a corner of the display device, and faces the sub-area of the screen, whereby said sensor is either temperature insensitive, or alternatively compensation is included to avoid temperature dependence.
  • the I-Guard sensor of Barco N.V. can be used, for example as described in EP 1274066. In addition, any temperature dependence of the first sensor can be compensated.
  • if a representative pattern for the second type of sensor is also captured by the first type of sensor, and the light emitted by the display in that area degrades in the same way, the light values captured by a sub-sensor of the second sensor and its corresponding part of the first sensor are theoretically identical; the different sub-sensors of the second sensor can then all be matched individually to their corresponding part of the first sensor.
  • the patch of the display underneath the first sensor has a representative pattern for the different sub-sensors of the second sensor; during calibration, the representative parts are lit sequentially and individually so as to measure them at a suitable driving level.
  • This small representative image e.g. can be obtained by rescaling the display image on the display to a smaller size (second display image).
  • the exact scaling algorithm used is not considered a limitation of the present invention.
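Obtaining the small representative image by rescaling can be sketched as follows. Since the patent explicitly leaves the scaling algorithm open, block averaging is used here only as one illustrative choice.

```python
def downscale(image, factor):
    """Downscale a 2D list of grey values by averaging
    non-overlapping factor x factor blocks."""
    rows, cols = len(image), len(image[0])
    return [[sum(image[r + dr][c + dc]
                 for dr in range(factor) for dc in range(factor))
             // (factor * factor)
             for c in range(0, cols, factor)]
            for r in range(0, rows, factor)]

# Example: a 4x4 display image reduced to a 2x2 representative image
# whose pixels can drive the sub-area under the sensor.
full = [[0, 0, 64, 64],
        [0, 0, 64, 64],
        [128, 128, 255, 255],
        [128, 128, 255, 255]]
small = downscale(full, 2)
```

Any other resampling method (bilinear, area-weighted, content-aware selection of representative patches) would serve the same purpose of making the sub-area's driving history representative of the active area.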
  • the typical driving values can be identified for each pixel or a representative group of pixels of the sub-area and the actual behaviour of these pixels can be determined at any moment of the drive time.
  • the first sensor is preferably adapted such as to gauge light coming from one of the subparts of its total patch, which requires a calibration for the small patches.
  • the actual matching may require displaying dedicated patterns for a brief moment in time, depending on the specific embodiment of the sensor, as measuring luminance and chromaticity for any type of pattern may not be straightforward for the second sensor, as will be explained later on, especially when using a colour display.
  • the first sensor can also be made up of sub-sensors.
  • the measurements are preferably restricted to measurements on a thermally stabilised display, as a new calibration would be needed if the temperature changed again. This is typically the moment in time when the luminance output of the display stabilises, which can be checked using the first type of sensor.
  • as the display can comprise the active display area and the sub-area (patch) in one single display device, the optical effects caused by running the display as a whole, such as temperature changes and exposure to oxygen levels in the air, are typically similar for the pixels of the active display area and of the sub-area, which leads to a reasonable accuracy of the compensation method.
  • the ageing of the display can vary over the display's active area as well (for instance, the light source's output can age differently, or some parts of the light source can come to emit a different amount of light than other parts over time; another example is the position-dependent ageing over time of the optical foils used in the backlight). It is therefore clear that ageing will not depend only on the driving history of the pixel.
  • a model can be used to compensate for this position-dependent ageing over time, as it cannot be determined simply by referring to the representative pattern of the first sensor. This improved modelling will result in a better recalibration of the second sensor.
  • the second image can contain a pattern of predefined pixel values, acting as a generic reference for any possible content of the first image, i.e. indicative of ageing of pixels of the first area.
  • this can be combined with the "model" approach to compensate for optical effects more accurately.
  • the model can become more complex.
  • such patterns can be very small patches of 255 different grey levels which can be displayed continuously, but need to be measured sequentially, in combination with the history of the driving of the pixels measured by the sub-sensors of the second sensor. It is clear that a model is needed when using this data to calibrate the second sensor, as these measurements cannot be used directly to measure the degradation of the display, and accordingly the value the second sensor should measure. This is further explained below.
  • the display can be for instance an OLED display, an LCD display or a plasma display.
  • the method of the present invention provides a new approach for compensating optical effects, by using actual data derived from optical measurement signals, that correspond to areas of pixels that have been driven in a representative manner compared to measurements of specific areas of the display's active area.
  • the second image can be selected from parts of the first image in a way that the second image is representative of the first image but smaller in size.
  • the first and second sensors can also measure the same sub-area, i.e. the first and second sensors overlap such that the first sensor receives light through the second sensor or vice versa.
  • the second sensor can consist of a matrix of sub-sensors, whereby one of the sub-sensors of the matrix preferably overlaps with the first sensor, such that they can be used to measure light emitted by the same area.
  • the sub-area can again be divided into different parts which are driven with a pattern based on the actual display contents.
  • Typical driving values such as a dynamic pattern like moving images, or temporal dither patterns of the actual displayed image can be identified for this purpose and at least one part of the sub-area of the display device can be driven with that pattern.
  • the data on how the pixel has been driven over the lifetime of the display can be stored.
  • these parameters then can be used, in combination with information on how display pixels have been driven over the lifetime of the display, to predict the ageing behaviour of display pixels and to compare these results with what is measured by the second sensor.
  • a measurement of the current behaviour of a given class of pixels can be provided based on measurements of the second sensor, which is appropriately matched to the first sensor, instead of storing the complete driving behaviour of each pixel of the complete display and instead of an inaccurate estimation based on a model.
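The prediction of ageing behaviour from driving history mentioned above can be sketched as follows. The exponential-decay model, the gamma exponent and the decay constant are purely illustrative assumptions, not values from this disclosure; a real display would use decay characteristics measured for its specific emitter type.

```python
# Illustrative sketch: predict pixel ageing from accumulated driving history.
# The model form and all constants are hypothetical placeholders.
import math

def accumulate_stress(driving_levels, gamma=1.8):
    """Sum a simple stress metric over the drive history (one value per frame period)."""
    return sum((level / 255.0) ** gamma for level in driving_levels)

def predicted_relative_luminance(stress, decay_constant=1e-6):
    """Relative efficiency of the pixel after the given accumulated stress (1.0 = as new)."""
    return math.exp(-decay_constant * stress)

# A pixel driven at full white for 10000 periods is predicted to age
# more than one driven at mid grey for the same duration.
stress_white = accumulate_stress([255] * 10000)
stress_grey = accumulate_stress([128] * 10000)
```

The prediction can then be compared against what the second sensor actually measures, as the bullet above describes.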
  • the actual matching may require displaying dedicated patterns, depending on the specific embodiment of the sensor, as measuring luminance and chromaticity for any type of pattern may not be obvious for the second sensor, as will be explained later on, especially when using a color display.
  • the memory used to store the driving history of each of the sub-area pixels, or alternatively of classes of these pixels from parts of the sub-area, can be reduced.
  • the method for correction of an image is used in real time, i.e. in parallel with a running application.
  • the method is intervention-free; it does not require input from a user.
  • the optical measurements carried out are luminance measurements.
  • light output correction may comprise luminance and/or contrast correction.
  • the optical measurements carried out are colour measurements, in which case light output correction comprises colour correction of the displayed image.
  • the luminance measurements are carried out in sequences. For example, at time zero not all parts of the active display and the sub-area are used for measuring; it is also possible to reserve one part or zone of the active display and/or the sub-area which can be temporarily driven with zero. After 1000 hours, for example, the reserved part or zone can be used to start a new series of luminance measurements. With this reservation it is possible to measure the degradation of differently driven pixels and then make a more accurate prediction of the degradation behaviour of the pixels, such as OLED pixels.
  • the sub-area and the active display area can be used and measured continuously to compare the sub-area image with the complete active display at all times.
  • the optical measurement then is used to identify the remaining efficiency of every grey level and/or every colour. This degradation is stored in a table which shows degradation per grey level and/or colour over time.
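The degradation table mentioned in the last bullet can be sketched as a simple per-grey-level store of remaining efficiency; the class layout and method names are illustrative assumptions, since the disclosure only specifies that degradation per grey level and/or colour is stored over time.

```python
# Sketch of a per-grey-level degradation table (illustrative structure).
class DegradationTable:
    def __init__(self, levels=256):
        # Remaining efficiency per grey level; 1.0 means "as new".
        self.efficiency = {level: 1.0 for level in range(levels)}

    def update(self, level, measured, reference):
        """Store remaining efficiency as measured output over the as-new reference."""
        self.efficiency[level] = measured / reference

    def correction_gain(self, level):
        """Driving gain needed to compensate the measured loss at this grey level."""
        return 1.0 / self.efficiency[level]

table = DegradationTable()
table.update(128, measured=47.5, reference=50.0)  # a 5 % loss at grey level 128
```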
  • the present invention also provides a display device comprising at least one first display area provided with a plurality of pixels, with for each first display area a first sensor, being an optical sensor unit located in front of the display which makes optical measurements on the light output from only a representative part of the display, and a second, full-display sensor for detecting a property of light emitted from said full display area into a viewing angle of the display device, which second sensor is located in a front section of said display device in front of said display area.
  • Means may be provided for displaying a test pattern on the display area, for example of a specific colour, for detecting the property of light by the second sensor and for comparing the output of the second sensor with that of the first sensor.
  • the display device comprises an active display area for displaying the image, an image forming device and an electronic driving system for driving the image forming device.
  • the first optical sensor unit, for example, comprises an optical aperture and a light sensor having an optical axis, to make optical measurements on a light output from a sub-area of the active display area of the image forming device and to generate optical measurement signals therefrom.
  • the second substantially transparent sensor can be suitably applied to an inner face of a cover member.
  • the transparent cover member may be used as a substrate in the manufacturing of the second sensor.
  • an organic or inorganic substrate can be used that has sufficient thermal stability to withstand the operating temperature of vapour deposition and the high vacuum conditions, vapour deposition being a preferred way of depositing the layers constituting the second sensor.
  • Flexible substrates such as flexible polymeric substrates can also be used.
  • deposition techniques include chemical vapour deposition (CVD) and any type thereof for depositing inorganic semiconductors such as metal organic chemical vapour deposition (MOCVD) or thermal vapour deposition.
  • one can also apply low temperature deposition techniques such as printing and coating for depositing organic materials for instance.
  • the second sensor further comprises at least partially transparent electrical conductors for conducting a measurement signal from said second sensor within said viewing angle for transmission to a controller.
  • Substantially transparent conductor materials such as a tin oxide, e.g. indium tin oxide (ITO) or a substantially transparent conductive polymer such as polymeric Poly(3,4-ethylenedioxythiophene) poly(styrenesulfonate), typically referred to as PEDOT:PSS, are well known partially transparent electrical conductors.
  • a thin oxide layer or transparent conductive oxide can be used; for instance zinc oxide, which is known to be a good transparent conductor.
  • the second sensor is provided with transparent electrodes that are defined in one layer with the said conductors (also called a lateral configuration). This reduces the number of layers that inherently lead to additional absorption and to interfaces that might slightly disturb the display image.
  • the second sensor comprises an organic photoconductive sensor.
  • organic photoconductive sensors may be embodied as single layers, as bilayers and as general multilayer structures. They may be advantageously applied within the present display device. Particularly, the presence on the inner face of the cover member allows that the organic materials are present in a closed and controllable atmosphere, e.g. in a space between the cover member and the display, which will provide protection from any potential external damage.
  • a getter may for instance be present to reduce negative impact of humidity or oxygen.
  • An example of a getter material is CaO.
  • vacuum conditions or a predefined atmosphere, for instance pure nitrogen or another inert gas, can be used in this space.
  • a second sensor comprising an organic photoconductive sensor suitably further comprises a first and a second electrode that advantageously are located adjacent to each other.
  • the location adjacent to each other preferably defined within one layer, allows a design with finger-shaped electrodes that are mutually interdigitated.
  • charges generated in the photoconductive sensor are suitably collected by the electrodes.
  • the number of fingers per electrode is larger than 50, more preferably larger than 100, for instance in the range of 250-2000. However, the present invention is not limited to this amount.
  • an organic photoconductive sensor can be a mono layer, a bi-layer or in general a multiple (>2) layer structure.
  • the organic photoconductive sensor is a bilayer structure with an exciton generation layer and a charge transport layer, said charge transport layer being in contact with a first and a second electrode.
  • a bilayer structure is for instance known from Applied Physics Letters 93 "Lateral organic bilayer heterojunction photoconductors" by John C. Ho, Alexi Arango and Vladimir Bulovic.
  • the sensor described by J.C. Ho et al. relates to a non-transparent sensor, as it refers to gold electrodes which will absorb the impinging light entirely.
  • the bilayer comprises an EGL (PTCBI) or Exciton Generation Layer and an HTL (TPD) or Hole Transport Layer, the latter in contact with the electrodes.
  • second sensors comprising composite materials could be constructed.
  • nano/micro particles are proposed, either organic or inorganic, dissolved in the organic layers, or an organic layer consisting of a combination of different organic materials (dopants). Since the organic photosensitive particles often exhibit a strongly wavelength-sensitive absorption coefficient, this configuration can result in a less colored transmission spectrum, when suitable materials are selected and suitably applied, or can be used to improve the detection over the whole visible spectrum, or can improve the detection of a specific wavelength region.
  • hybrid structures using a mix of organic and inorganic materials can be used instead of using organic layers to generate charges and collect them with the electrodes.
  • a bilayer device that uses a quantum-dot exciton generation layer and an organic charge transport layer can be used.
  • colloidal Cadmium Selenide quantum dots and an organic charge transport layer comprising Spiro-TPD can be used.
  • a disadvantage could be that the second sensor only provides one output current per measurement for the entire spectrum. In other words, it is not evident to measure color online while using the display.
  • This could be avoided by using three independent photoconductive sensors that measure red, green and blue independently, as well as providing a suitable calibration for the three independent photoconductive sensors. They could be conceived similarly to the previous descriptions, and stacked on top of each other, or adjacent to each other on the substrate, to obtain an online color measurement. Offline color measurements can be made without the three independent photoconductive sensors, by calibrating the sensor to an external sensor which is able to measure tristimulus values (X, Y & Z), for a given spectrum.
  • I(λ) is the spectral power distribution of the captured light.
  • the luminance corresponds to the Y component of the CIE XYZ tristimulus values. Since a first sensor, according to embodiments of the present invention, has a characteristic spectral sensitivity curve that differs from the three color matching functions depicted above, it cannot be used as such to obtain any of the three tristimulus values.
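The tristimulus values referred to here are integrals of the captured spectral power distribution I(λ) against the CIE colour matching functions x̄(λ), ȳ(λ), z̄(λ), with Y giving the luminance. A minimal numeric sketch using trapezoidal integration over sampled tables (real colour matching function tables span roughly 380-780 nm in 1-5 nm steps; the table handling here is an illustrative assumption):

```python
# Numeric tristimulus computation from a sampled spectrum (illustrative sketch).
def tristimulus(wavelengths_nm, spd, cmf_x, cmf_y, cmf_z):
    """Trapezoidal integration of I(lambda) times each colour matching function."""
    def integrate(weights):
        total = 0.0
        for i in range(len(wavelengths_nm) - 1):
            dlam = wavelengths_nm[i + 1] - wavelengths_nm[i]
            total += 0.5 * (spd[i] * weights[i] + spd[i + 1] * weights[i + 1]) * dlam
        return total
    # X, Y, Z; Y is the luminance component.
    return integrate(cmf_x), integrate(cmf_y), integrate(cmf_z)
```

Because the first sensor's spectral sensitivity differs from these functions, its raw output cannot be fed into such an integral directly, which is why the calibration described below is needed.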
  • since the second sensor according to embodiments of the present invention is typically sensitive over the entire visible spectrum (with respect to the absorption spectrum of the sensor), or is at least sensitive to the spectral power distributions of a (typical) display's primaries, XYZ values can be obtained after calibration for a specific type of spectral light distribution emitted by the display.
  • Displays are typically either monochrome or color displays. Monochrome (e.g. grayscale) displays only have a single primary (e.g. white), and hence emit light with a single spectral power distribution. Color displays typically have three primaries, red (R), green (G) and blue (B), which have three distinct spectral power distributions.
  • a calibration step preferably is applied to match the XYZ tristimulus values corresponding to the spectral power distributions of the display's primaries to the measurements made by the second sensor according to embodiments of the present invention.
  • the basic idea is to match the XYZ tristimulus values of the specific spectral power distribution of the primaries to the values measured by the sensor, by capturing them both with the second sensor and an external reference sensor. Since the second sensor response according to embodiments of the present invention is non-linear, and the spectral power distribution associated with the primary may alter slightly depending on the digital driving level of the primary, it is insufficient to match them at a single level. Instead, they need to be matched ideally at every digital driving level. This will provide a relation between the actual tristimulus values and second sensor measurements in the entire range of possible values.
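The per-driving-level matching described above can be sketched as a calibration table that pairs raw second-sensor readings with reference XYZ triples captured by the external sensor, and then maps a later raw reading to XYZ by interpolating between the nearest calibration points. The data layout and the linear interpolation are illustrative assumptions; they merely show why matching at every driving level yields a relation over the entire range of values.

```python
# Illustrative per-primary calibration table: raw sensor reading -> XYZ.
import bisect

class PrimaryCalibration:
    def __init__(self):
        self.raw = []   # sorted raw second-sensor readings
        self.xyz = []   # matching reference (X, Y, Z) triples

    def add_point(self, raw_reading, xyz_reference):
        """Record one calibration pair, captured at one digital driving level."""
        idx = bisect.bisect(self.raw, raw_reading)
        self.raw.insert(idx, raw_reading)
        self.xyz.insert(idx, xyz_reference)

    def lookup(self, raw_reading):
        """Linearly interpolate XYZ between the two nearest calibration points."""
        if raw_reading <= self.raw[0]:
            return self.xyz[0]
        if raw_reading >= self.raw[-1]:
            return self.xyz[-1]
        i = bisect.bisect(self.raw, raw_reading)
        t = (raw_reading - self.raw[i - 1]) / (self.raw[i] - self.raw[i - 1])
        return tuple(a + t * (b - a) for a, b in zip(self.xyz[i - 1], self.xyz[i]))
```

One such table per primary (R, G, B) covers the non-linear sensor response and the slight spectral shift with driving level that the bullet above mentions.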
  • Y is directly a measure of brightness (luminance) of a color.
  • the chromaticity can be specified by two derived parameters, x and y. These parameters can be obtained from the XYZ tristimulus values using the following formulae: x = X / (X + Y + Z), y = Y / (X + Y + Z).
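The standard CIE conversion, x = X/(X+Y+Z) and y = Y/(X+Y+Z) with Y itself the luminance, can be written directly:

```python
# CIE xy chromaticity from XYZ tristimulus values.
def chromaticity(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s   # (x, y); Y separately carries the luminance
```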
  • the second sensor lacks a V(λ)-filter
  • the second sensor is preferably calibrated with a device that includes a V(λ) filter.
  • the additional sensor can qualify as such a sensor. This is a synergetic effect of both sensors.
  • a similar reasoning can be used for obtaining the X and Z components of the tristimulus values.
  • the second sensor lacks the required color filter, but the first sensor can include the necessary filter, so it can be used for calibrating the second sensor for measuring color. So, by using the calibration curves, the color coordinates and luminance can be determined if separate calibration curves are used for the X, Y and Z components. This is again spectrum dependent. So the simplest way is to do a separate calibration for each of the red, green and blue display primaries' light emission.
  • a feedback system can be provided for receiving the optical measurement signals of the second sensor and on the basis thereof controlling the electronic driving system.
  • This offline color measurement is enabled by calibrating the sensor to an external sensor which is able to measure tristimulus values (X, Y & Z).
  • At least one second sensor and optionally an at least partially transparent optical coupling device can be provided.
  • the addition of an extra component is less advantageous but is included within the scope of the present invention.
  • the at least one second sensor is designed for detecting a property of light emitted from the said display area into a viewing angle of the display device.
  • the second sensor can be located outside or at least partially outside the viewing angle.
  • the at least partially transparent optical coupling device is located in a front section of said display device. It comprises a light guide member for guiding at least one part of the light emitted from the said display area to the corresponding second sensor.
  • the coupling device further comprises an incoupling member for coupling the light into the light guide member.
  • the incoupling member solves the apparent contradiction of a waveguide parallel to the front surface that does not disturb a display image, and a signal-to-noise ratio sufficiently high for allowing real-time measurements.
  • An additional advantage is that any scattering eventually occurring at or in the incoupling member, is limited to a small number of locations over the front surface of the display image.
  • a display device comprising at least two display areas with a plurality of pixels.
  • a sensor and an at least partially transparent optical coupling device are provided for each display area.
  • the at least two sensors are designed for detecting a property of light emitted from the said display area into a viewing angle of the display device.
  • the sensor is located outside or at least partially outside the viewing angle.
  • the at least partially transparent optical coupling device is located in a front section of said display device. It comprises a light guide member for guiding at least one part of the light emitted from the said display area to the corresponding sensor.
  • the coupling device further comprises an incoupling member for coupling the light into the light guide member.
  • the use of the incoupling member solves the apparent contradiction of a waveguide parallel to the front surface that does not disturb a display image, and a signal-to-noise ratio sufficiently high for allowing real-time measurements.
  • An additional advantage is that any scattering eventually occurring at or in the incoupling member, is limited to a small number of locations over the front surface of the display image.
  • the light guide member is running in a plane, which is parallel to a front surface of the display device.
  • the incoupling member is suitably an incoupling member for laterally coupling the light into the light guide member of the coupling device.
  • the result is a substantially planar incoupling member. This has the advantage of minimum disturbance of displayed images.
  • the coupling device may be embedded in a layer or plate. It may be assembled to a cover member, i.e. front glass plate, of the display after its manufacturing, for instance by insert or transfer moulding. Alternatively, the cover member is used as a substrate for definition of the coupling device.
  • a plurality of light guide members is arranged as individual light guide members or part of a light guide member bundle. It is suitable that the light guide member is provided with a circular or rectangular cross-sectional shape when viewed perpendicular to the global propagation direction of light in a light guide member. A light guide with such a cross-section may be made adequately and moreover limits scattering of radiation.
  • the cover member is typically a transparent substrate, for instance of glass or polymer material.
  • the second sensor or the second sensors of the sensor system is/are located at a front edge of the display device.
  • the incoupling member of this embodiment may be present on top of the light guide member or effectively inside the light guide member.
  • One example of such location inside the light guide is that the incoupling member and the light guide member have a co-planar ground plane.
  • the incoupling member may then extend above the light guide member or remain below a top face of the light guide member or be coplanar with such top face.
  • the incoupling member may have an interface with the light guide member or may be integral with such light guide member
  • the or each incoupling member is cone-shaped.
  • the incoupling member herein has a tip and a ground plane.
  • the ground plane preferably has circular or oval shape.
  • the tip is preferably facing towards the display area.
  • the or each incoupling member and the or each guide member are suitably formed integrally.
  • the or each incoupling member is a diffraction grating.
  • the diffraction grating allows that radiation of a limited set of wavelengths is transmitted through the light guide member. Different wavelengths (e.g. different colours) may be incoupled with gratings having mutually different grating periods. The range of wavelengths is preferably chosen so as to represent the intensity of the light most adequately.
  • both the cone-shaped incoupling member and diffraction grating are present as incoupling members.
  • These two different incoupling members may be coupled to one common light guide member or to separate light guide members, one for each, and typically leading to different sensors.
  • with first and second incoupling members of different type on one common light guide member, light extraction, at least of certain wavelengths, may be increased, thus further enhancing the signal to noise ratio. Additionally, because of the different operation of the incoupling members, the second sensor may detect more specific variations.
  • the different types of incoupling members may be applied for different types of measurements: one type, such as the cone-shaped incoupling member, may be applied for one kind of measurement, while another type, such as the diffraction grating or the phosphor discussed below, may be applied for color measurements.
  • the one incoupling member (plus light guide member and sensor) may be coupled to a larger set of pixels than the other one.
  • the incoupling member comprises a transformer for transforming a wavelength of light emitted from the display area into a sensing wavelength.
  • the transformer is for instance based on a phosphor.
  • Such phosphor is suitably locally applied on top of the light guiding member.
  • the phosphor may alternatively be incorporated into a material of the light guiding member. It could furthermore be applied on top of another incoupling member (e.g. on top of or in a diffraction grating or a cone-shaped member or another incoupling member).
  • the sensing wavelength of the second sensor is suitably a wavelength in the infrared range.
  • This range has the advantage that the light of the sensing wavelength is not visible anymore. Incoupling into and transport through the light guide member is thus not visible. In other words, any scattering of light is made invisible, and therewith disturbance of the emitted image of the display is prevented. Such scattering typically occurs simultaneously with the transformation of the wavelength, i.e. upon reemission of the light from the phosphor.
  • the sensing wavelength is most suitably a wavelength in the near infrared range, for instance between 0.7 and 1.0 micrometers, and particularly between 0.75 and 0.9 micrometers. Such a wavelength can be suitably detected with commercially available photodetectors, for instance based on silicon.
  • a suitable phosphor for such transformation is for instance a Manganese Activated Zinc Sulphide Phosphor.
  • the phosphor is dissolved in a waveguide material, which is then spin coated on top of the substrate.
  • the substrate is typically a glass substrate, for example BK7 glass with a refractive index of 1.51.
  • the parts of the spin-coated layer which are undesired are removed, for example using lithography.
  • a rectangle is constructed which corresponds to the photosensitive area; in addition, the remainder of the waveguide, used to transport the generated optical signal towards the edges, is created in a second iteration of this lithographic process.
  • Another layer can be spin coated (without the dissolved phosphors) on the substrate, and the undesired parts are removed again using lithography.
  • Waveguide materials from Rohm&Haas, or PMMA, can be used.
  • Such a phosphor may emit in the desired wavelength region when the manganese concentration is greater than 2%.
  • other rare earth doped zinc sulfide phosphors can be used for infrared (IR) emission.
  • ZnS:ErF3 and ZnS:NdF3 thin film phosphors such as disclosed in J.Appl.Phys. 94(2003), 3147, which is incorporated herein by reference.
  • ZnS:TmxAgy with x between 100 and 1000 ppm and y between 10 and 100 ppm, as disclosed in US4499005.
  • a further embodiment of the coupling member and second sensor may be applied in addition to such a sensor solution.
  • the combination enhances sensing, and the different types of sensor solutions each have their benefits.
  • the one sensor solution may herein be coupled to a larger set of pixels than another sensor solution.
  • the number of display areas with a second sensor is preferably larger than one, for instance two, four, eight or any plurality. It is preferable that each display area of the display is provided with a second sensor solution, but that is not essential. For instance, merely one display area within a group of display areas could be provided with a second sensor solution.
  • the real-time detection is carried out on the signal generated by the sensor according to the preferred embodiment of this invention; this signal is generated, according to the sensor's physical characteristics, as a consequence of the light emitted by the display, according to its light emission characteristics, for any displayed pattern.
  • the detection of luminance and color (chromaticity) aspects may be carried out in a calibration mode, e.g. when the display is not in a display mode.
  • luminance and chromaticity detection may also be carried out real-time, in the display mode.
  • it can be suitable to do the measurements relative to a reference value.
  • the sensor does not exhibit an ideal spectral sensitivity according to the V(λ) curve, nor does it have suitable color filters to measure the tristimulus values. Therefore, real-time measurements are difficult, as the sensor will not be calibrated for every possible spectrum that results from the driving of the R, G & B subpixels which generate light impinging on the sensor.
  • the V(λ) curve describes the spectral response function of the human eye in the wavelength range from 380 nm to 780 nm and is used to establish the relation between a radiometric quantity that is a function of wavelength λ and the corresponding photometric quantity; a V(λ) sensor is a sensor following this curve.
  • measurements of luminance and illuminance require a spectral response that matches the V(λ) curve as closely as possible.
  • a sensor according to embodiments of the present invention is sensitive to the entire visible spectrum and does not have a spectral sensitivity over the visible spectrum that matches the V(λ) curve. Therefore, an additional spectral filter is needed to obtain the correct spectral response.
  • the sensor as described in a preferred embodiment also does not operate as an ideal luminance sensor.
  • the angular sensitivity is taken into account, as described in the following part.
  • the measured luminance corresponds to the light emitted by the pixel located directly under it (assuming that the sensor's sensitive area is parallel to the display's active area).
  • the sensor according to embodiments of the present invention captures the pixel under the point together with some light emitted by surrounding pixels. More specifically, the values captured by the sensor cover a larger area than the size of the sensor itself. Because of this, the patterns used do not correspond to the actual patterns, and therefore a correction has to be done in order to simulate the measurements of the sensor. To enable the latter, preferably the luminance emission pattern of a pixel is measured as a function of the angles of its spherical coordinates.
  • the range of the angles preferably is changed from -80 to 80 degrees with a step of 2 degrees for the inclination angle θ and from 0 to 180 degrees with a step of 5 degrees for the azimuth angle φ.
  • the distance preferably is kept constant over the measurements.
  • When a luminance sensor is positioned parallel to the display's active area, the latter corresponds to an inclination angle of 0, meaning that only an orthogonal light ray is considered.
  • the exact light sensitivity of the sensor can be characterised. These measurements can then be used in the optical simulation software to obtain the corrected pattern for the actual light the sensors will detect. Using this actual light output will provide an additional improvement and advantageous effect of the algorithm that will render more reliable results.
  • the first and second sensors can be used to remove the contribution of the ambient light from the measurements of the second sensor, or to quantify the luminance of the ambient light, depending on the type of first sensor used.
  • the light measured by the first sensor may exclude the ambient light by the way the sensor is constructed, its closeness to the display surface of the display device and its shading, which is typically the case when using an external reference sensor, or a sensor integrated at the corner of the display.
  • the second sensor measures a combination of both the ambient and the display light. By comparing the signals from the first and second sensors and especially their difference, the ambient light can be determined.
  • the impinging light should be close to uniform over the area where the matrix of the second sensor is located.
  • the first sensor can then also be used to calibrate the second sensor. For instance, we can put the average measured value of all the sub-sensors of the second sensor equal to the value measured by the ambient light sensor. The other sensors can then be used to obtain a spatial uniformity plot of the ambient light, by scaling the measured values to the average values. Also, in this methodology we implicitly assume the ambient light is more or less uniform over the display's active area, which is typical in a real-life environment.
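The scaling step described in this bullet can be sketched as follows: the matrix of sub-sensor readings is scaled so that their average equals the trusted ambient-light reading, and each sub-sensor is then expressed relative to the average to obtain the spatial uniformity plot. The list-based layout and function name are illustrative assumptions, and roughly uniform ambient light is assumed, as the bullet states.

```python
# Illustrative calibration of the second-sensor matrix against an ambient sensor.
def calibrate_matrix(sub_readings, ambient_reference):
    avg = sum(sub_readings) / len(sub_readings)
    scale = ambient_reference / avg
    calibrated = [r * scale for r in sub_readings]        # matched to the reference
    uniformity = [r / avg for r in sub_readings]          # 1.0 = average brightness
    return calibrated, uniformity
```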
  • ambient light also can be measured by performing two measurements: a first measurement with display active (measuring ambient light + display light) and then a measurement with display inactive (measuring purely ambient light). The difference between those two measurements gives an indication of the contribution of the ambient light in the measured signal, which allows eliminating it from the measured signal.
  • the drawback of this alternative solution is that it requires inactivating the display, while the combination of both sensors mentioned earlier does not require such an inactivation.
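The two ambient-light determinations above can be sketched side by side; variable names are illustrative. With both sensors active, the second sensor sees display plus ambient light while the first sensor sees display light only, so their difference is the ambient contribution. The fallback needs two measurements, with the display on and off.

```python
# Illustrative ambient-light separation, per the two approaches described above.
def ambient_from_two_sensors(second_sensor_reading, first_sensor_reading):
    """Second sensor measures display + ambient; first sensor measures display only."""
    return second_sensor_reading - first_sensor_reading

def ambient_from_on_off(reading_display_on, reading_display_off):
    """Display-off reading is pure ambient; subtracting it isolates the display light."""
    ambient = reading_display_off
    display_light = reading_display_on - reading_display_off
    return ambient, display_light
```

The first function works during normal operation; the second requires inactivating the display, which is the drawback noted above.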
  • continuous recording of the outputs of the first and second sensors can result in digital watermarking, e.g. after capturing and recording all the signals measured by all the sensors of the sensor system in a session (e.g. at the time of a diagnosis), it could be possible to re-create the same conditions which existed when an image was displayed in the session (e.g. used to perform the diagnosis), at a later date.
  • an image displayed in a display area is used for treatment of the corresponding sensed value or sensed values, as well as the sensor's properties.
  • aspects of the image that are taken into account are particularly its light properties, and more preferably light properties emitted by the individual pixels or an average thereof. Light properties of light emitted by individual pixels include their emission spectrum at every angle.
  • An algorithm may be used to calculate the expected response of the sensor based on digital driving levels provided to the display and the physical behaviour of the sensor (this includes its spectral sensitivity over angle, its non- linearity and so on).
  • This precorrection may be an additional precorrection which can be added onto a precorrection that for example corrects the driving of the display such that a uniform light output over the display's active area is obtained.
  • the difference between the sensing result and the theoretically calculated value is compared by a controller to a lower and/or an upper threshold value taking into account the reference. If the result is outside the accepted range of values, it is to be reviewed or corrected. One possibility for review is that one or more subsequent sensing results for the display area are calculated and compared by the controller. If more than a critical number of sensing values for one display area are outside the accepted range, then the setting for the display area is to be corrected so as to bring it within the accepted range. A critical number is for instance 2 out of 10: e.g. if 3 to 10 of the sensing values are outside the accepted range, the controller takes action. Else, if the number of sensing values outside the accepted range is above a monitoring value but not higher than the critical number, the controller may decide to continue monitoring.
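The review logic above can be sketched as follows (Python; the function name, the `monitor` parameter and the default values are illustrative assumptions, not part of the disclosure):

```python
def review_display_area(deviations, lower, upper, critical=2, monitor=0):
    """Decide whether a display area needs correction.

    Returns 'correct' if more than `critical` sensing values fall outside
    [lower, upper] (e.g. 3 or more out of 10 with critical=2), 'monitor'
    if the count exceeds `monitor` but not `critical`, and 'ok' otherwise.
    """
    out_of_range = sum(1 for d in deviations if d < lower or d > upper)
    if out_of_range > critical:
        return "correct"
    if out_of_range > monitor:
        return "monitor"
    return "ok"
```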
  • the controller may decide not to review all sensing results continuously, but to restrict the number of reviews to infrequent reviews with a specific time interval in between. Furthermore, this comparison process may be scheduled with a relatively low priority, such that it is only carried out when the processor is idle.
  • such sensing result is stored in a memory.
  • such set of sensing results may be evaluated.
  • One suitable evaluation is to find out whether the sensed values differ systematically, above or below a threshold value, from the light that, according to the settings specified by the driving of the display, should be emitted. If such a systematic difference exists, the driving of the display may be adapted accordingly.
  • certain sensing results may be left out of the set, such as for instance an upper and a lower value. Additionally, it may be that only values corresponding to a certain display setting are looked at. For instance, only sensing values corresponding to high (RGB) driving levels are looked at.
  • the sensed values of certain (RGB) driving levels may be evaluated, as these values are most reliable for reviewing driving level settings.
  • As high and low values one may think of light measurements when emitting a predominantly green image versus the light measurements when emitting a predominantly yellow image. Additional calculations can be based on said set of sensed values. For instance, instead of merely determining a difference between the sensed value and the theoretically calculated value of the light output, which is the originally calibrated value, the derivative may be reviewed. This can then be used to see whether the difference increases or decreases. Again, the timescale of determining such a derivative may be smaller or larger, preferably larger, than that of the absolute difference. It is not excluded that average values are used for determining the derivative over time.
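A minimal sketch of reviewing the derivative of the difference over time (illustrative Python; a simple averaged finite-difference slope is an assumption of this sketch):

```python
def drift_derivative(timestamps, differences):
    """Approximate the time derivative of the (sensed - calibrated)
    difference using successive finite differences; a positive average
    slope means the deviation from the originally calibrated value is
    increasing over time."""
    slopes = [
        (differences[i + 1] - differences[i]) / (timestamps[i + 1] - timestamps[i])
        for i in range(len(differences) - 1)
    ]
    return sum(slopes) / len(slopes)
```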
  • sets of sensed values obtained at a uniform driving of the display (or when applying another precorrection dedicated to achieving a uniform luminance output) for different display areas are compared to each other. In this manner, homogeneity of the display emittance (e.g. luminance) can be calculated.
  • the sensed value is suitably compared to a reference value for calibration purposes.
  • the calibration will be typically carried out per display area and compared to the output of the first sensor.
  • the calibration typically involves switching the backlight on and off to determine potential ambient light influences that might be measured during normal use of the display, for a display area and suitably one or more surrounding display areas. The difference between these measured values corresponds to the influence of the ambient light. This value needs to be determined because otherwise the calculated ideal value and the measured value will never match when the display is put in an environment that is not pitch black.
  • the calibration typically involves switching the display off, within a display area and suitably surrounding display areas including the sub-area measured by the first sensor.
  • the calibration is for instance carried out for a first time upon start up of the display. It may subsequently be repeated for display areas.
  • Moments for such calibration during real-time use include for instance short transition periods between a first block and a second block of images. In case of consumer displays, such transition period is for instance an announcement of a new and regular program, such as the daily news. In case of professional displays, such as displays for medical use, such transition periods are for instance periods between reviewing a first medical image (X-ray, MRI and the like) and a second medical image.
  • the controller will know or may determine such transition period. While the above method has been expressed in the claims as a use of the above mentioned sensor solutions, it is to be understood that the method is also applicable to any other sensor to be used with other display types. It is more generally a method of using a matrix of sensors in combination with a display.
  • the matrix of sensors is designed such that it is permanently integrated into the display's design. Therefore, a matrix of transparent organic photoconductive sensors is used preferably, suitably designed to preserve the display's visual quality to the highest possible degree.
  • the goal can be to assess either the luminance or the color uniformity of the spatial light emission of a display, based on at least two zones.
  • the present invention includes providing a sensing result by:
  • the average display settings as used herein are more preferably the ideally emitted luminance as discussed above.
  • the display device defines at least one display area and the display device may be of conventional technology, such as a liquid crystal display (LCD) with a backlight, for instance based on light emitting diodes (LEDs), or an electroluminescent device such as an organic light emitting diode (OLED) display.
  • the display device suitably further comprises an electronic driving system and a controller receiving optical measurement signals generated in the first and second sensors and controlling the electronic driving system on the basis of the received optical measurement signals.
  • when the additional sensor is an external sensor, which is not integrated into the design of the display, it is possible that the display does not directly receive the measurements, but that the measurements are sent to a computer, which interprets them.
  • the sub-area of the active display area is adapted to show an image that is representative or indicative of the image of the complete active display area.
  • the active display area and the sub-area are in one single display device.
  • the optical aperture of the first optical sensor unit preferably has an acceptance angle such that at least 50% of the light received by the sensor comes from light travelling within 15° of the optical axis of the first light sensor (that is the acceptance angle of the sensor is 30°).
  • the acceptance angle of the first sensor is such that the ratio between the amount of light used for control which is emitted or reflected from the display area at a subtended acceptance angle of 30° or less to the amount of light used for control which is emitted or reflected from the display area at a subtended acceptance angle of greater than 30° is X:1 where X is 1 or greater.
  • the optical aperture of the first optical sensor unit can have an acceptance angle such that light received at the first sensor at an angle with the optical axis of the first light sensor equal to or greater than 10° is attenuated by at least 25%, light received at an angle equal to or greater than 20° is attenuated by at least 50 or 55% and light arriving at an angle equal to or greater than 35° is attenuated by at least 80 or 85%.
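The example aperture specification above can be expressed as a simple compliance check (illustrative Python; the 55% and 85% alternatives mentioned above are omitted for brevity):

```python
def required_attenuation(angle):
    """Minimum attenuation (fraction of light blocked, 0..1) required at a
    given off-axis angle in degrees, per the example spec: at least 25%
    from 10 degrees, 50% from 20 degrees and 80% from 35 degrees."""
    if angle >= 35:
        return 0.80
    if angle >= 20:
        return 0.50
    if angle >= 10:
        return 0.25
    return 0.0

def meets_attenuation_spec(attenuation_by_angle):
    """attenuation_by_angle: dict mapping off-axis angle (deg) to the
    measured attenuation (0..1) of the optical aperture."""
    return all(att >= required_attenuation(a)
               for a, att in attenuation_by_angle.items())
```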
  • the optical measurements of the first and/or second sensors are luminance measurements.
  • the performance correction may then comprise luminance and/or contrast correction.
  • the optical measurements of at least the first sensor may also be colour measurements for instance retrieving color coordinates, in which case a colour correction may be carried out.
  • the feedback system preferably comprises a comparator/amplifier for comparing the optical measurement signals, measured luminance or colour values, with a reference value, and a regulator for regulating a backlight control and/or a video contrast control and/or a video brightness control and/or a colour temperature, so as to reduce the difference between the reference value and the measured value and bring this difference as close as possible to zero.
  • the first type optical sensor unit of the present invention can in one embodiment be a sensor integrated into the design of the display.
  • the first sensor preferably comprises a light guide between the optical aperture and the first light sensor.
  • This light guide may be e.g. a light pipe or an optical fibre.
  • the first type of sensor can be an external sensor, which is separated from the display.
  • the sub-area of the active display area of the image forming display device is less than 1 % of the total area of the active display area of the image forming device, preferably less than 0.1 %, and still more preferred less than 0.01 %.
  • the optical aperture of the first optical sensor unit masks a portion of the active display area, while the first light sensor itself does not mask any part of the active display area.
  • the light output from the front face of the active display area of a display device is continuously measured with a minimal coverage of the viewed image by the first sensor.
  • the first light sensor may be brought to the back of the display area or to a side thereof.
  • the sub-area measured on the screen by these embodiments of the first sensor is composed of a number of active pixels of the active display area.
  • the sub-area of active pixels measured on the screen is preferably not larger than 6 mm x 4 mm.
  • a measurement zone of 6 mm x 4 mm constitutes 0.6% of that active display area.
  • a measurement zone of 6 mm x 4 mm constitutes 0.0005% of that active display area.
  • a test patch may be generated and superimposed on the active pixels viewed by the first and/or second sensor.
  • a housing of the first optical sensor unit stands out above the active display area by a distance lower than 0.5 cm.
  • the present invention also includes a control unit to compensate for optical effects of pixels displaying an image on a display device, the control unit comprising:
  • the present invention also includes a computer program product comprising code segments adapted for execution on any type of computing device.
  • the code segments when executed on a computing device provide:
  • the present invention also includes a machine readable signal storage medium storing the computer program product.
  • the medium may be a disk medium such as a diskette or hard disk, a tape storage medium, a solid state memory such as RAM or a USB memory stick, an optical recording disk such as a CD-ROM or DVD-ROM, etc.
  • Fig. 1A is a top view and Fig. 1B is a front view of a part of an OLED screen provided with an optical sensor unit according to the present invention.
  • Fig. 2 shows a first embodiment of an optical sensor unit according to the present invention, the unit comprising a light guide being assembled of different pieces of PMMA.
  • Fig. 3 shows a second embodiment of an optical sensor unit according to the present invention, the unit comprising a light guide with optical fibres.
  • Fig. 4 shows a third embodiment of an optical sensor unit according to the present invention, the unit comprising a light guide made of one single piece of PMMA.
  • Fig. 5 shows the light guide of Fig. 4, this light guide being coated with a reflective coating.
  • Fig. 6 shows the light guide of Fig. 4, this light guide being partially coated with a reflective coating, and the light guide being shielded from ambient light by a housing.
  • Fig. 7 illustrates an example of an ambient light sensor.
  • Fig. 8a shows the first stage of amplification used for a display device with a sensor system.
  • Fig. 8b shows the second stage of amplification used for a display device with a sensor system.
  • Fig. 8c shows the third stage of amplification used for a display device with a sensor system.
  • Fig. 9 illustrates the overview of the data path from the sensor to the processor.
  • Fig. 10 is a schematic representation of a display system according to an embodiment of the present invention.
  • Fig. 11 is a schematic representation of embodiments of the present invention.
  • Fig. 12 is a schematic illustration of a display device with a sensor system according to a first embodiment of the invention.
  • Fig. 13 shows the coupling device of the sensor system illustrated in Fig. 12;
  • Fig. 14 shows a vertical sectional view of a sensor system for use in the display device according to a third embodiment of the invention.
  • Fig. 15 shows a horizontal sectional view of a display device with a sensor system according to a fourth embodiment of the invention.
  • Fig. 16 shows a side view of a display device with a sensor system according to a second embodiment of the invention.
  • Fig. 17 shows a schematic view of a network of sensors with a single layer of electrodes used in the display device.
  • the acceptance angle of a sensor refers to the angle subtended by the extreme light rays which can enter the sensor.
  • the angle between the optical axis and the extreme rays is therefore usually half of the acceptance angle.
  • the present invention makes use of a first and a second sensor.
  • the first sensor is an optical sensor unit that makes optical measurements on a light output from a representative part of a fixed format display such as an LCD, LED, OLED, plasma display or the like or does ambient light measurements in an alternative embodiment.
  • the first sensor can be a colour sensor. So a first color sensor makes optical color measurements on a light output from a representative part of the display, i.e. a small part, or performs illuminance measurements on the ambient light.
  • the second sensor is a sensor such as a panchromatic sensor (that is the sensor is not specific to a certain colour) which is substantially transparent and is placed at the front of a display screen.
  • the first sensor can be an ambient light sensor as well, but if it is not explicitly mentioned, the first sensor can be considered as preferably a sensor intended to measure the light output (luminance/chromaticity) of the display.
  • when the first type of sensor is a sensor integrated in the display that permanently obstructs a part of the active area, online measurements can readily be performed, since any desired pattern can be displayed beneath the obstructed area and measured at any time.
  • when the first type of sensor is an ambient light sensor, it is typically integrated into the display's bezel, which is also suitable for online measurements.
  • the partially transparent second sensor can be made large enough, e.g. up to the entire active area of the display (contrary to current sensors that only measure at the border of the active area of the display). This allows measurement of light intensity over the complete screen or in parts of the screen when the second sensor is divided into regions. Also, by doing this, an average over the entire screen could be measured, which is more reliable than limiting to a small patch, and furthermore the amplitude of the signal is increased, which results in more reliable measurements.
  • the pattern measured by the sensor is the pattern currently displayed on the display, which is entirely under the user's control. When a dedicated uniform patch needs to be measured, the display should hence be taken out of its normal operation mode.
  • the light captured by the second sensor is not limited to a small opening angle; light coming from larger angles can also be measured due to the sensor's design. This means that the second sensor does not only measure light from a limited acceptance angle, but also detects light coming from higher angles. This is different from a conventional luminance sensor, and consequently complex algorithms need to be implemented to obtain satisfactory measurement results, even for a grayscale display. For some embodiments of the sensor in combination with a color display, it may be unavoidable to perform offline measurements to obtain luminance measurements. In addition, some of the required patterns will possibly not be displayed during normal use, which raises the need to measure some patterns in a non-sequential mode, or to display them faster than the observer is able to see.
  • the first sensor is more suitable for tests that require continuous measurements (such as ensuring a continuous luminance output while the display is still in the process of thermal stabilization e.g. after it is started up).
  • the second sensor is by design more suited for offline measurements, which can be performed e.g. in a screen-saver mode or at time periods outside the active usage of the display, for example at night.
  • the second sensor typically lacks the necessary filters to do an X, Y and Z measurement, e.g. in a preferred embodiment using the organic photoconductive sensors positioned on top of the location to be measured, due to its design. It does, however, have a broad absorption spectrum (as it is a panchromatic sensor), which enables measuring tristimulus X, Y and Z values after a suitable calibration.
  • the second sensor can be a bidirectional sensor, able to measure light coming from both directions.
  • a filter which ensures that the sensor's spectral sensitivity matches the V(lambda) curve cannot be included in the sensor, as it would result in severe optical losses and coloring of the display.
  • the spectrum of the emitted light can have any spectrum.
  • the source can include significant IR light, which can partially be detected by the sensor, while human eyes are insensitive to this spectral range. This implies that there is no possibility to calibrate the second sensor to the ambient light in a generic way.
  • the first sensor is preferably matched to an ambient light sensor that includes a V(lambda) filter matching the spectral sensitivity of the human eye; this ambient light sensor can be a specific embodiment of the first sensor. Alternatively, a mini-spectrometer integrated in the display, more specifically in the bezel of a display, can be used.
  • the first sensor can be used for calibrating the second sensor, as it does not need to be at least partially transparent and therefore it can include the necessary filters to measure the X, Y and Z components of light of any spectrum. This can then be used to calibrate the second sensor for measuring the X, Y and Z components corresponding to the primaries' spectra.
  • a non-integrated first sensor is used and moved to all the positions of the sub-sensors of the second sensor, such that they measure the same value, after which they are all calibrated individually.
  • a display- integrated first sensor can be used, as described before.
  • when using the second sensor, it has to be calibrated using a reference sensor, which in accordance with embodiments of the present invention is the first sensor.
  • measurements are performed at different luminance values by the second sensor and the first sensor. Since the second sensor does not include a V(lambda) filter, the measurements are made using the display for which the second sensor is to be used.
  • the different luminance values can be obtained by depicting uniform images of certain driving levels on the display.
  • a LUT can be created that is to be applied to all the measurements of the second sensor. This calibration can be performed at regular intervals in the field. Either a sub-sensor of the second sensor can overlap with the first sensor (in the embodiment where the first sensor is integrated into the display) or the first sensor can be an external sensor that can be positioned at any sub-sensor of the second sensor.
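The LUT creation can be sketched as follows (Python; linear interpolation between the paired measurements taken at several uniform driving levels is an assumption of this sketch — the disclosure does not prescribe an interpolation method):

```python
def build_calibration_lut(second_sensor_readings, first_sensor_luminances):
    """Build a lookup function mapping a raw second-sensor reading to a
    calibrated luminance, from paired measurements of the second sensor
    and the first (reference) sensor at several driving levels."""
    pairs = sorted(zip(second_sensor_readings, first_sensor_luminances))

    def lut(raw):
        # Clamp readings outside the measured range
        if raw <= pairs[0][0]:
            return pairs[0][1]
        if raw >= pairs[-1][0]:
            return pairs[-1][1]
        # Linear interpolation between adjacent measured pairs
        for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
            if x0 <= raw <= x1:
                t = (raw - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)

    return lut
```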
  • the pixel content that is displayed at that time can be used to calibrate the display in real-time using the outputs of the first and second sensors, whereby the first sensor is an integrated sensor at the edge.
  • the first sensor is continuously used to measure selected grey levels and/or colours of the sub-area.
  • the second sensor is used to measure the behaviour of the active area of the display. These selected grey levels and/or colours are put there to follow in real-time the ageing of the pixels. At certain timeframes the remaining efficiency of every grey level and/or colour is measured for the pixels by only turning on that grey level and/or colour and measuring the response (luminance and/or chromaticity) with the first and second optical sensors.
  • This degradation is stored in a table (e.g. degradation per grey level and/or colour over time), e.g. in a memory of the display. Note that it is also possible to start several sequences of measuring degradation.
  • every pixel or every zone of the display can be tracked as to how long that pixel or zone has been driven at a certain grey level/colour (or current level).
  • the degradation of every pixel or zone of the display can be measured and normalised using the output of the first sensor.
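The per-zone driving-history tracking and first-sensor normalisation described above can be sketched as follows (illustrative Python; class and method names are assumptions of this sketch):

```python
class AgeingTracker:
    """Track, per display zone, how long that zone has been driven, and
    normalise a zone's measured degradation against the first
    (reference) sensor to remove measurement drift."""

    def __init__(self, zones):
        # Accumulated driving time per zone, e.g. in hours
        self.hours = {zone: 0.0 for zone in zones}

    def log(self, zone, hours):
        """Record additional driving time for a zone."""
        self.hours[zone] += hours

    @staticmethod
    def normalised_degradation(zone_measurement, reference_measurement):
        """Remaining efficiency of a zone, with drift of the measuring
        sensor removed by dividing by the first sensor's reading."""
        return zone_measurement / reference_measurement
```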
  • the second sensor can be calibrated by the first sensor. To do this, they can either measure at the same position when an external sensor is used, or, when the first sensor is integrated into the display, a pattern representative of the entire display is depicted, as described earlier.
  • the first sensor can be used for calibrating the second sensor, as it does not need to be semitransparent and therefore it can include the necessary filters to measure the X, Y and Z components of light of any spectrum. This can then be used to calibrate the second sensor for measuring the X, Y and Z components corresponding to the primaries' spectra.
  • a non-integrated first sensor is used, and moved to all the positions of the sub-sensors of the second sensor, such that they measure the same value, and then calibrate them all individually.
  • a display-integrated first sensor can be used, and a measurement can be made when the first sensor and a sub-sensor of the second sensor are designed to measure the same region on the display's active area.
  • the other sub-sensors of the second sensor can then be calibrated relative to the sub-sensor that has been calibrated, by doing a scaling.
  • an additional model may be required to compensate for the positional dependency.
  • the first sensor makes optical measurements on a light output from the representative part of the LCD display. So the first color sensor makes optical measurements on a light output from a representative part of the fixed format display, i.e. a small part, and this is combined with the output from a second, full-screen panchromatic sensor.
  • a calibration procedure is carried out such that a color (red, green or blue) is displayed and the outputs of the first and second sensors are compared.
  • the average value from the second sensor (determined over the whole screen) is compared with the representative value from the first sensor. While the screen is in use the first sensor can be used at the same time.
  • light emitted as a combination of different primaries is displayed on the whole display.
  • the luminance components can then be measured (assuming we have the acquired calibration data for that spectrum, or assuming we can measure the primaries independently and we have calibration data for them).
  • a second sensor can be positioned in the light path of emitted light that is measured by the first sensor, such that they should measure the same result after calibration. In addition, they will be at approximately the same temperature. Therefore, if we have a model that predicts the sensor's response, depending on temperature and actually emitted light, for both sensors, the temperature dependency can be eliminated. Mathematically, this results in solving a set of two equations with two unknowns:
  • L1 = f1(T, Lact) and L2 = f2(T, Lact), where L1 and L2 are the sensors' measurements, T is the temperature, and Lact is the actual light that should ideally be measured.
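As a minimal sketch, assuming purely for illustration that f1 and f2 are linear in both unknowns (L1 = a1·Lact + b1·T, L2 = a2·Lact + b2·T, with the coefficients determined during calibration), the two-equation system can be solved in closed form:

```python
def eliminate_temperature(l1, l2, a1, b1, a2, b2):
    """Solve the 2x2 linear system
        L1 = a1 * Lact + b1 * T
        L2 = a2 * Lact + b2 * T
    for the actual light Lact and the temperature T, eliminating the
    sensors' temperature dependency (linear model assumed)."""
    det = a1 * b2 - a2 * b1          # must be non-zero for a unique solution
    lact = (l1 * b2 - l2 * b1) / det
    temperature = (a1 * l2 - a2 * l1) / det
    return lact, temperature
```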
  • a sensor can be positioned in the light path of emitted light that is measured, such that they should measure the same result after calibration. By measuring regularly with both sensors, any variations between the sensors can be tracked. When the measured value starts to differ, at least one of the sensors measures incorrectly.
  • a LED based solution can be used. In this LED based solution, LEDs are positioned at the border of the display, outside the viewing angle (active area) of the display, at the viewers' side of the panel. These LEDs are intended to be used only for calibration purposes, are therefore used only rarely, and will not age significantly.
  • the relation between sensor output and LED driving can be determined for all sensors upfront (for various levels, as the second sensor's signal can depend non-linearly on the impinging light), and can be stored in the display. This can then be used to recalibrate the sensors, provided the LEDs remain stable over the display's lifetime.
  • the LEDs can also be measured using a separate measurement circuit.
  • the downside of this approach is that the sensor will be calibrated based on the spectrum of the LEDs; spectral changes of the light emitted by the display could still lead to imperfect calibrations.
  • LEDs positioned in the backlight can be used in a similar methodology.
  • the combination of the first and second sensor can be used for various applications.
  • Certain standardization organizations have published recommendations/guidelines on spatial uniformity for medical imaging, with which the combination of the first and second sensors can allow a fixed format display to comply. These methods comprise measurements limited to a zone of the display area (not the entire active area). It is clear to anyone skilled in the art that the sensors elaborated in this invention can be suitably used for this application.
  • Test patterns can also be applied especially for calibration. For example a pattern can be displayed, and sensed afterwards by the sub-sensors of the second sensor to match the spatial configuration required to verify if the display is compliant to these standards. Instead of merely verifying if the display is compliant to the standards, an automated correction of the behaviour of the display is included within the scope of the present invention. Techniques exist in the current state of the art that allow obtaining a highly uniform spatial light output of a display.
  • the combination of the first and second sensors can be used.
  • the first type of sensor can be used to calibrate the second sensor, and afterwards the second sensor can be put to good use by measuring with multiple transparent second sensors over the area of the screen and comparing the results with the output of the first sensor.
  • the pixels of the fixed format display can be driven in accordance with the luminance results obtained to compensate for aging on a pixel-by-pixel or region-by-region basis.
  • the sky is blue and is usually at the top of an image. This means the blue pixels age faster at the top than in the middle or bottom of the screen.
  • a well-known correction algorithm as disclosed in EP1424672 can be applied to compensate for non-uniformity and spatial noise of the display.
  • the display may be put through an initial calibration phase in which different grey levels and/or colours are displayed sequentially on the display system. For every displayed grey level and/or colour, the light output (luminance and/or colour information) is measured with the first sensor and with the second sensor at different locations on the display device. Interpolation can be used to estimate a measurement down to per display pixel. The relation between the second sensor response and the response of the first sensor is stored in a memory of the display.
  • This calibration phase allows checking a later second sensor response with a later first sensor response to see if luminance and/or chromaticity of the display has remained constant at various positions on the display.
  • an interpolated correction can be calculated and applied as a software precorrection to the display, similar to the correction as disclosed in EP'672, but using a limited number of interpolated measurements.
  • other correction methods can be used as disclosed in EP'672, applying a correction per zone, rather than per pixel. This method includes performing this correction once in production. But in addition the correction can be done at certain time intervals using the first and second sensors to ensure the correction remains correct at every point in time.
  • specific color patterns could be applied to the display and the first and second sensors used to measure the luminance and/or chromaticity. As a result non-uniformities can be corrected, for example in chromaticity.
  • the non-uniformity of the white point can also be measured, e.g. when the first and second sensors are calibrated to the display spectrum and the contributions of the different primaries are known. Measuring luminance non-uniformity only requires calibrating the second sensor with respect to the first sensor for that specific spectrum; measuring chromaticity non-uniformity additionally requires calibrating the X, Y and Z components of the spectra of the primaries, as described before. Compliance with the DICOM GSDF standard is one of the essential characteristics of medical displays. It is essential that DICOM compliance is maintained throughout the lifetime of the display. Using the second sensor, DICOM compliance of the display can be verified at multiple positions. The output of the first sensor can be used to confirm the result.
  • the first sensor can be used to check if the display remains DICOM compliant over time and this compared with the second sensor outputs on a zone-by-zone basis.
  • This allows a better accuracy of measurement as the second sensors are used to measure away from the border of the active area of the display while the first sensor can correct for drift of the second sensors.
  • the second sensors are able to measure at different positions on the active area of the display. DICOM compliance could be checked e.g. by measuring for 64 uniform patterns spread equally over the dynamic range of the display. If the measured values are within 10% deviation of the ideal DICOM curve, the display is considered to be DICOM compliant. Any drift in the second sensors (e.g. because of ageing) can be corrected by comparison with the output of the first sensor.
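The compliance check described above can be sketched as follows (illustrative Python; the measured and ideal luminance values per pattern are assumed to be given):

```python
def dicom_compliant(measured, ideal, tolerance=0.10):
    """Check DICOM GSDF compliance: every measured luminance (e.g. for
    64 uniform patterns spread equally over the dynamic range) must lie
    within `tolerance` (10% by default) of the ideal GSDF value."""
    return all(abs(m - i) / i <= tolerance
               for m, i in zip(measured, ideal))
```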
  • the entire DICOM calibration of the display can be performed. In practice, this can be done by altering the LUT that is applied on the incoming image to obtain a DICOM calibrated image. To obtain this LUT, the native behaviour of the display could be measured using the second sensors (without the initial DICOM calibration), the resulting values can be used in combination with the ideal DICOM curve to obtain the required LUT.
  • the first sensor output can be used to compensate for any drift in the second sensor outputs, using suitable embodiments as described in this invention.
  • the second sensor is able to detect the impact of the ambient light on the entire area of the screen, making it possible to overcome the limitations of current measurement methodologies.
  • This measurement methodology is valuable for example when the display is used in a room with significant ambient light. If a certain luminance ratio, for instance (L_white + L_ambient)/(L_black + L_ambient) > 250, is to be obtained over the entire active area of the screen, the backlight setting should be adapted to ensure compliance at every location on the screen.
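The per-zone contrast criterion above can be expressed directly. A minimal sketch, with an assumed function name and per-zone luminance lists as inputs:

```python
def ratio_compliant(l_white, l_black, l_ambient, ratio=250.0):
    """Check (L_white + L_amb) / (L_black + L_amb) > ratio in every
    measurement zone; arguments are per-zone lists of luminances (cd/m^2)."""
    return all((w + a) / (b + a) > ratio
               for w, b, a in zip(l_white, l_black, l_ambient))

# A zone with more ambient light can break compliance on its own,
# so the backlight must be set for the worst zone.
ok = ratio_compliant([500.0, 500.0], [0.5, 0.5], [0.5, 0.5])   # True
bad = ratio_compliant([500.0, 500.0], [0.5, 0.5], [0.5, 2.0])  # False
```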
  • One sensor alone cannot guarantee a sufficient luminance ratio when the ambient light is non-uniform.
  • the present invention provides a method for compensating for optical effects such as ageing effect, of an image displayed on a display device, e.g. a fixed format device with pixels.
  • Fig. 10 is a schematic representation of a display system, e.g. a fixed format display that can be used with the present invention, including a signal source 48, a controller unit 46, a driver 44 and a display 42 with a matrix of pixel elements that are driven by the driver 44.
  • the invention makes use of a sub-area (patch) of the screen in a way that is optimised for/adapted to emissive displays in combination with the image on the complete screen.
  • the sub-area is a first measurement zone that contains more than 1 pixel, and spatial intelligence is added to the content being shown in the first measurement area. In particular in one embodiment spatial partitioning is used.
  • the display comprises an array of pixels and a small portion of these pixels is used as a sub-area (patch) or first measurement zone.
  • the complete display forms a second measurement zone.
  • the pixels in the sub-area or measurement zone are driven in accordance with one or more algorithms, each of which is an embodiment of the present invention.
  • the pixels in the sub-area can be driven in the same way as pixels of the main part of the display, i.e. the active display area.
  • the active display area and the sub-area are in one single display device. In this way the pixels in the sub-area age at the same rate as pixels or pixel regions of the main display.
  • the pixels in the sub-area may also be driven at selected different levels and their ageing is measured continuously e.g. by measuring the selected different levels while darkening the rest of the sub area (it is obvious for anyone skilled in the art that the first sensor should be suitably calibrated to measure a sub-area for this application).
  • the ageing of the pixels in the sub-area can then be input into a model that relates pixel drive history to ageing effects. This model can be continuously or periodically updated based on the ageing effects of the pixels in the sub-area. In this way continuous, real- time values of the ageing properties of the complete display and its different pixel driving histories are obtained.
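One possible form of such a model is sketched below: a toy exponential ageing law whose rate constant is re-estimated from each sub-area measurement. The model form, class and names are assumptions for illustration; the invention does not prescribe any particular ageing law:

```python
import math

class AgeingModel:
    """Toy ageing law: relative luminance = exp(-k * accumulated_drive).
    k is re-estimated whenever the sub-area (patch) is measured, because
    the patch's drive history is known exactly."""
    def __init__(self, k=1e-6):
        self.k = k

    def update(self, accumulated_drive, measured_relative_luminance):
        # Invert the model for the patch with its known drive history.
        self.k = -math.log(measured_relative_luminance) / accumulated_drive

    def predict(self, accumulated_drive):
        """Predicted relative luminance for any pixel's own drive history."""
        return math.exp(-self.k * accumulated_drive)

model = AgeingModel()
model.update(accumulated_drive=1e5, measured_relative_luminance=0.95)
# The model can now predict ageing for every pixel from its drive history.
```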
  • the selected levels can be a function of what is shown in the visible area (i.e. the pixels in the sub-area are driven in a representative manner of the pixels in the active display area of the display), or a generic pattern that gives us information about a broad range of pixel levels (i.e. the pixels in the sub-area are driven in a way that is indicative of the ageing of the pixels in the active display area).
  • An advantage of the present invention in emissive displays is compensation of the ageing that is dependent on the history of the pixel driving. By giving the system access to a large collection of accurate ageing statistics, ageing can be accurately corrected.
  • a sub-area or measurement zone is provided on the display. Non- limiting embodiments of such a measurement zone are described below.
  • Fig.1 A and Fig. 1 B are a top view and a front view respectively of a part of a display device 1 provided with a first sensor and a second sensor (not shown).
  • the first sensor comprises an optical sensor unit 10 for use with an embodiment according to the present invention.
  • the first sensor is a display-integrated sensor at the edge of the display. Neither the arrangement of the sensor nor the type of sensor is considered to be a limitation on the present invention.
  • the second sensor is provided for display areas of the active display, this second sensor being substantially transparent. This second sensor is not shown in the figure for clarity reasons and will be described later in the preferred embodiment where organic photoconductive sensors are used.
  • a fixed format display device 1 comprises a fixed format panel 2 and an electronic driving system 4 for driving the fixed format panel 2 to generate and display an image.
  • the display device 1 has an active display area 6 on which the image is displayed as well as a sub-area 7 on which the same image is shown as on the whole display area 6.
  • the fixed format panel 2 is kept fixed in a fixed format panel bezel 8.
  • a display device 1 is provided with an optical sensor unit 10 to make optical measurements on a light output from a sub-area 7 of the display panel 2 and a second optical sensor for making optical measurements on display areas of the active display. Suitable electric signals are generated from these optical measurements.
  • a feedback system 12 receives the electric measurement signals 1 1 , and controls the electronic driving system 4 on the basis of these signals.
  • the first optical sensor unit 10 can for instance be a clip-on sensor that is attached to the display initially during production.
  • the whole of the first optical sensor unit 10 can be calibrated together and can also be interchangeable.
  • the first optical sensor unit 10 has a light entrance plane or optical aperture 21 and a light exit plane 23. It can also have internal reflection planes.
  • the light entrance plane 21 preferably has a stationary contact with the active display area 6 which is light tight for ambient light. If the contact is not light tight it may be necessary to compensate for ambient light by using an additional ambient light sensor which is used to compensate for the level of ambient light.
  • the optical sensor unit 10 stands out above the active display area at a distance D of 5 mm or less.
  • the optical sensor unit 10 comprises an optical aperture 21, a photodiode sensor 22 and, in between, a light guide 34 made from, for example, massive PMMA (polymethyl methacrylate) structures 14, 16, 18, 20, of which one presents an aperture 21 to collect light and one presents a light exit plane 23.
  • PMMA is a transparent (more than 90% transmission), hard and stiff material. The skilled person will appreciate that other materials may be used, e.g. glass.
  • the massive PMMA structures 14, 16, 18, 20 serve for guiding light rays using total internal reflection.
  • the PMMA structures 14 and 18 deflect a light bundle over 90°. The approximate path of two light rays 24, 26 is shown in Fig. 2.
  • the oblique parts of PMMA structures 14 and 18 are preferably metallised 28, 30 in order to serve as a mirror.
  • the other surfaces do not need to be metallised as light is travelling through the PMMA structure using total internal reflection.
  • the first optical sensor unit 10 comprises an optical aperture 21 and a first light sensor 22, with a bundle 32 of optical fibres there between.
  • the optical fibres are preferably fixed together or bundled (e.g. glued), and the end surface is polished to accept light rays under a limited angle only (as defined in the attached claims).
  • the first optical sensor unit 10 comprises a light guide 34 made of one piece of PMMA.
  • the first optical sensor unit 10 furthermore comprises an aperture 21 at one extremity of the light guide 34, and a photodiode sensor 22 or equivalent device at the other extremity of the light guide 34.
  • the light guide 34 can have a non-uniform cross-section in order to concentrate light to the light exit plane 23. Light rays travel by total internal reflection through the light guide 34. At 90° angles, the light rays are deflected by reflective areas 28, 30, which are for example metallised to serve as a mirror, as in the first embodiment.
  • this light guide 34 is rigid and simple to make.
  • a reflective coating 36 is applied directly or indirectly (i.e. non-separable or separable) to the outer surface of the light guide 34, with exception of the areas where light is coupled in (aperture 21 ) or out (light exit plane 23).
  • the reflection coefficient of this reflective coating material 36 is 0.9 or lower.
  • the coating lies at the surface of the light guide 34 and should not penetrate into it. In this case, ambient light is very well rejected for this specific embodiment of the first sensor.
  • the structure provides a narrow acceptance angle: light rays that enter the light guide 34 under a wide angle to the normal to the active display area 6, such as the ray represented by the dashed line 38, will be reflected and attenuated much more (because the reflection coefficient is 0.9 or lower and such rays undergo more reflections) than the ray represented by the dotted line 40, which enters the structure under a narrow angle to the normal to the active display area 6.
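The angular selectivity can be illustrated with a simple bounce-counting estimate: a ray entering at a larger angle undergoes more reflections at the lossy coating, and each reflection multiplies its intensity by the reflection coefficient r. The geometry and function below are illustrative assumptions, not a model given in this document:

```python
import math

def relative_transmission(entry_angle_deg, guide_length, guide_thickness, r=0.9):
    """Crude estimate of transmission through a coated light guide: a ray
    at angle theta to the guide axis bounces about L*tan(theta)/t times,
    and each bounce at the lossy coating multiplies the intensity by r."""
    theta = math.radians(entry_angle_deg)
    bounces = guide_length * math.tan(theta) / guide_thickness
    return r ** bounces

# Near-normal rays survive; wide-angle (ambient-like) rays are suppressed.
narrow = relative_transmission(5.0, guide_length=50.0, guide_thickness=5.0)
wide = relative_transmission(60.0, guide_length=50.0, guide_thickness=5.0)
```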
  • the structure can further be modified to change the acceptance angle, as shown in Fig. 6.
  • By selectively omitting the reflective layer 36 on the surface of the light guide 34, at places where the structure is not exposed to ambient light (e.g. where it is covered by a display housing 42), the light rays travelling under a large angle to the axis of the light guide 34 (or to the normal to the active display area 6) can be made to exit the optical sensor unit 10, while ambient light cannot enter the light guide 34.
  • light rays that enter the light guide 34 under a wide angle to the normal to the active display area 6, such as a light ray represented by dashed line 38, will be further attenuated and even be allowed to exit the light guide 34.
  • Light rays that enter the light guide 34 under a small angle to the normal of the active display area 6, such as a light ray represented by dotted line 40, will be less attenuated and will only leave the light guide 34 at the level of the light exit plane 23 and photodiode sensor 22. Therefore, the light guide 34 is much more selective as a function of entrance angle of the light rays. This means that this light guide 34 realises a narrow acceptance angle.
  • the first sensor is an ambient light sensor
  • Fig. 7 illustrates the Barco Coronis product range, which contains an ambient light sensor.
  • This sensor is also integrated into the display's bezel, and includes a V(λ) filter to match the measured light to the luminance sensitivity of the human eye.
  • the Barco Coronis sensor only measures ambient light coming from the external environment and not the light emitted by the display itself.
  • Fig. 12 shows the above display device 1 formed as a liquid crystal display device (LCD device) 2.
  • the first sensor is not shown but is identical to any of the first sensors described with reference to Figs. 1 to 6.
  • the display device can be any suitable fixed format display such as a plasma display device or any other kind of display device emitting light.
  • the display's active area 3 of the display device 1 is divided into a number of groups 4 of display areas 5, wherein each display area 5 comprises a plurality of pixels.
  • the active area 3 of this example comprises eight groups 4 of display areas 5; each group 4 comprises in this example ten display areas 5.
  • One of the display areas may be the sub-area described above that is measured by the first sensor.
  • Each of the display areas 5 is adapted for emitting light into a viewing angle of the display device to display an image to a viewer in front of the display device 1 .
  • Fig. 12 further shows a second sensor system 6 with a second sensor array 7 comprising, e.g. eight groups 8 of sensors, which corresponds to the embodiment where the actual sensing is made outside the visual area of the display, and hence the light needs to be guided towards the edge of the display.
  • This embodiment thus corresponds to a waveguide solution and not to the preferred organic photoconductive sensor embodiment, where the light is captured on top of (part of) the display area 5, and the generated electronic signal is guided towards the edge.
  • the actual sensor is created directly in front of the (part of) the sub area that needs to be sensed, and the consequentially generated electronic signal is guided towards the edge of the display, using semitransparent conductors.
  • One of the sensors 9 can be in addition to a first sensor (not shown), i.e. the first and second sensors overlap or the first and second sensors can be mutually exclusive. In the following it will be assumed for explanation purposes only that the first and second sensors overlap.
  • Each of said groups 8 comprises, e.g. ten sensors 9 (individual sensors 9 are shown in Figs. 14, 15 and 16) and corresponds to one of the groups 4 of display areas 5.
  • Each of the second sensors 9 corresponds to one corresponding display area 5.
  • the sensor system 6 further comprises coupling devices 10 for coupling a display area 5 with the corresponding second sensor 9.
  • Each coupling device 10 comprises a light guide member 12 and an incoupling member 13 for coupling the light into the light guide member 12, as shown in Fig. 13.
  • a specific incoupling member 13 shown in Fig. 13 is cone-shaped, with a tip and a ground plane. It is to be understood that the tip of the incoupling member 13 is facing the display area 5. Light emitted from the display area 5 and arriving at the incoupling member 13, is then refracted at the surface of the incoupling member 13.
  • the incoupling member 13 is formed, in one embodiment, as a laterally prominent incoupling member 14, which is delimited by two laterally coaxially aligned cones 15, 16, said cones 15, 16 having a mutual apex 17 and different apex angles α1, α2.
  • the diameter d of the cones 15, 16 delimiting the incoupling member 13 can for instance be equal or almost equal to the width of the light guide member 12. Said light was originally emitted (arrow 18) from the display area 5 into the viewing angle of the display device 1 , note that only light emitted in perpendicular direction is depicted, while a display typically emits in a broader opening angle.
  • the direction of this originally emitted light is perpendicular to the alignment of a longitudinal axis 19 of the light guide member 12. All light guide members 12 run parallel in a common plane 20 to the sensor array 7 at one edge 21 of the display device 1 . Said edge 21 and the sensor array 7 are outside the viewing angle of the display device 1 .
  • alternatively, a diffraction grating can be used as an incoupling member 13.
  • the grating is provided with a spacing, also known as the distance between the laterally prominent parts.
  • the spacing is in the order of the wavelength of the coupled light, particularly between 500 nm and 2 μm.
  • a phosphor is used. The size of the phosphor could be smaller than the wavelength of the light to detect.
  • the light guide members 12 alternatively can be connected to one single sensor 9. All individual display areas 5 can be detected by a time sequential detection mode, e.g. by sequentially displaying a patch to be measured on the display areas 5.
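The time-sequential detection mode with a single sensor can be sketched as follows. The display and sensor interfaces are stand-ins (assumptions), since the actual driving and read-out hardware is described elsewhere in this document; the fakes simply simulate lighting one area at a time:

```python
class FakeDisplay:
    """Stand-in for the panel driver; real hardware would show a white
    measurement patch on one display area and darken the rest."""
    def __init__(self):
        self.lit = None
    def show_patch(self, area):
        self.lit = area

class FakeSensor:
    """Stand-in returning the (simulated) luminance of the lit area."""
    def __init__(self, display, luminances):
        self.display = display
        self.luminances = luminances
    def read(self):
        return self.luminances[self.display.lit]

def measure_all_areas(display, sensor, areas):
    """Time-sequential detection with one sensor: display a measurement
    patch on each area in turn and record one reading per area."""
    readings = {}
    for area in areas:
        display.show_patch(area)
        readings[area] = sensor.read()
    return readings

display = FakeDisplay()
sensor = FakeSensor(display, {0: 480.0, 1: 495.0, 2: 470.0})
readings = measure_all_areas(display, sensor, [0, 1, 2])
```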
  • the light guide members 12 are for instance formed as transparent or almost transparent optical fibres 22 (or microscopic light conductors) absorbing just a small part of the light emitted by the specific display areas 5 of the display device 1 .
  • the optical fibres 22 should be so small that a viewer does not notice them but large enough to carry a measurable amount of light.
  • the light reduction due to the light guide members and the incoupling structures for instance is about 5% for any display area 5. More generally, optical waveguides may be applied instead of optical fibres, as discussed hereinafter.
  • the display devices 1 are constructed with a front transparent plate such as a glass plate 23 serving as a transparent medium 24 in a front section 25 of the display device 1 .
  • Other display devices 1 can be made rugged with other transparent media 24 in the front section 25.
  • the light guide member 12 is formed as a layer onto a transparent substrate such as glass.
  • a material suitable for forming the light guide member 12 is for instance PMMA (polymethylmethacrylate).
  • Another suitable material is for instance commercially available from Rohm&Haas under the tradename LightlinkTM, with product numbers XP-5202A Waveguide Clad and XP-6701 A Waveguide Core.
  • a waveguide has a thickness in the order of 2-10 micrometer and a width in the order of micrometers to millimeters or even centimetres.
  • the waveguide comprises a core layer that is defined between one or more cladding layers.
  • the core layer is for instance sandwiched between a first and a second cladding layer.
  • the core layer is effectively carrying the light to the second sensors.
  • the interfaces between the core layer and the cladding layers define surfaces of the waveguide at which reflection takes place so as to guide the light in the desired direction.
  • the incoupling member 13 is suitably defined so as to redirect light into the core layer of the waveguide.
  • parallel coupling devices 10 formed as fibres 22 with a higher refractive index are buried into the medium 24, especially the front glass plate 23.
  • above each area 5, the coupling device 10 is constructed on a predefined guide member 12 so that light from that area 5 can be transported to the edge 21 of the display device.
  • the second sensor array 7 captures light of each display area 5 on the display device 1 .
  • This array 7 would of course require the same pitch as the fibres 22 in the plane 20 if the fibers run straight to the edge, without being tightened or bent. While fibres are mentioned herein as an example, another light guide member, such as a waveguide, could be applied alternatively.
  • Fig. 12 the coupling devices 10 are displayed with different lengths. In reality, full length coupling devices 10 may be present.
  • the incoupling member 13 is therein present at the destination area 5 for coupling in the light (originally emitted from the corresponding display area 5 into the viewing angle of the display device 1 ) into the light guide member 12 of the coupling device 10.
  • the light is afterwards coupled from an end section of the light guide member 12 into the corresponding second sensor 9 of the sensor array at the edge 21 of the display device 1 .
  • the sensors 9 preferably only measure light coming from the coupling devices 10.
  • the difference between a property of light in the coupling device 10 and that in the surrounding front glass plate 23 is measured. This combination of measuring methods leads to the highest accuracy.
  • the property can be intensity or colour for example.
  • each coupling device 10 carries light that is representative for light coming out of a pre-determined area 5 of the display device 1 . Setting the display 3 full white or using a white dot jumping from one area to another area 5 gives exact measurements of the light output in each area 5.
  • Image information determines the value of the relevant property of light, e.g. how much light is coming out of a specific area 5 (for example a pixel of the display 3) or its colour.
  • the optical fibers 22 are shaped like a beam, i.e. with a rectangular cross-section, in the plane-parallel front glass plate 23, for instance a plate 23 made of fused silica.
  • To guide the light through the fibers 22, the light must be travelling in one of the conductive modes.
  • To get into a conductive mode a local alteration of the fiber 22 is needed. Such local alteration may be obtained in different manners, but in this case there are more requirements than just getting light inside the fiber 22.
  • the image displayed is hardly, not substantially or not at all disturbed.
  • the incoupling member 13 is a structure with limited dimensions applied locally at a location corresponding to a display area.
  • the incoupling member 13 has a surface area that is typically much smaller than that of the display area, for instance at most 1% of the display area, more preferably at most 0.1% of the display area.
  • the incoupling member is designed such that it leads light to a lateral direction.
  • the incoupling member may be designed to be optically transparent in at least a portion of its surface area for at least a portion of light falling upon it. In this manner the portion of the image corresponding to the location of the incoupling member is still transmitted to a viewer, and as a result the incoupling member will not be visible. It is observed for clarity that such partial transparency of the incoupling member is highly preferred, but not deemed essential. Alternatively, the incoupling member may be limited to a minor portion of the display area, for instance an edge region of the display area, or an area between a first and a second adjacent pixel. This is particularly feasible if the incoupling member is relatively small, e.g. at most 0.1% of the display area.
  • the incoupling member is provided with a ground plane that is circular, oval or is provided with rounded edges.
  • the ground plane of the incoupling member is typically the portion located at the side of the viewer. Hence, it matters most for visibility. By using a ground plane without sharp edges or corners, this visibility is reduced and any scattering on such sharp edges is prevented.
  • a perfect separation may be difficult to achieve, but with the sensor system 6 comprising the coupling device 10 shown in Fig. 13 a very good signal-to-noise-ratio (SNR) can be achieved.
  • a coupling device such as an incoupling member is not required.
  • organic photoconductive sensors can be used as the sensors.
  • the organic photoconductive sensors serve as sensors themselves (their resistivity alters depending on the impinging light) and because of that they can be placed directly on top of the location where they should measure (for instance, a voltage is put over the electrodes, and an impinging-light dependent current consequently flows through the sensor, which is measured by external electronics).
  • Light collected for a particular display area 5 does not need to be guided towards a sensor 9 at the periphery of the display (i.e. contrary to what is exemplified by Fig. 14).
  • light is collected by a transparent or semi-transparent sensor 101 placed on each display area 5.
  • this embodiment may also have a sensor array 7 comprising, e.g. a plurality of groups, such as eight groups 8 of sensors 9, 101 .
  • Each of said groups 8 comprises a plurality of sensors, e.g. ten sensors 9, and corresponds to one of the groups 4 of display areas 5.
  • Each of the sensors 9 corresponds to one corresponding display area 5, as illustrated in figure 17.
  • Fig. 16 shows a side view of a second sensor system 9 according to a further embodiment of the invention, which is for use with a first sensor (not shown).
  • the sensor system of this embodiment comprises transparent sensors 33 which are arranged in a matrix with rows and columns.
  • the sensors can for instance be photoconductive sensors, hybrid structures, composite sensors, etc.
  • the sensor 33 can be realized as a stack comprising two groups 34, 35 of parallel bands 36 in two different layers 37, 38 on a substrate 39, preferably the front glass plate 23.
  • An interlayer 40 is placed between the bands 36 of the different groups 34, 35. This interlayer is the photosensitive layer of this embodiment.
  • the bands (columns) of the first group 34 are running perpendicular to the bands (rows) of the second group 35 in a parallel plane.
  • the sensor system 6 divides the display area 1 into different zones by design, as will be clear to anyone skilled in the art, each zone with its own optical sensor 9 connected by transparent electrodes.
  • the addressing of the second sensors may be accomplished by any known array addressing method and/or devices.
  • a multiplexer (not shown) can be used to enable addressing of all second sensors.
  • a microcontroller is also present (not shown).
  • the display can be adapted, e.g. by a suitable software executed on a processing engine, to send a signal to the microcontroller (e.g. via a serial cable: RS232). This signal determines which second sensor's output signal is transferred.
  • a 16-channel analogue multiplexer ADG1606 (from Analog Devices) is used, which allows connection of a maximum of 16 sensors to one drain terminal (using a 4-bit input on 4 selection pins).
  • the multiplexer is preferably a low-noise multiplexer. This is important because the signal measured is typically a low-current analogue signal, and therefore very sensitive to noise.
  • the very low (4.5 Ω) on-resistance makes this multiplexer ideal for this application where low distortion is needed. This on-resistance is negligible in comparison to the resistance range of the sensor material itself (e.g. of the order of magnitude of MΩ to 100 GΩ). Moreover, the power consumption of this CMOS multiplexer is low.
  • a simple microcontroller can be used (e.g. Basic Stamp 2) that can be programmed with Basic code: i.e. its input is a selection between 1 and 16; its output goes to the 4 selection pins of the multiplexer.
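The channel selection performed by the microcontroller amounts to mapping a sensor index to the four address pins of the multiplexer. A Python sketch of this mapping; the bit ordering of the pins is an assumption based only on the 4-bit input mentioned above:

```python
def select_pins(channel):
    """Map a sensor channel (1..16) to the four multiplexer address pins
    A3..A0 (channel 1 -> 0000, channel 16 -> 1111); the pin ordering is
    an illustrative assumption."""
    if not 1 <= channel <= 16:
        raise ValueError("channel must be 1..16")
    code = channel - 1
    return [(code >> bit) & 1 for bit in (3, 2, 1, 0)]
```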
  • a layered software structure is foreseen.
  • the layered structure begins from the high-level implementation in QAWeb, which can access BarcoMFD, a Barco in-house software program, which can eventually communicate with the firmware of the display, which handles the low-level communication with the sensor.
  • the functionality can be accessed quite easily.
  • the communication with the second sensor is preferably a two-way communication.
  • the command to "measure” can be sent from the software layer and this will eventually be converted into a signal activating the sensor (e.g. a serial communication to the ADC to ask for a conversion) which puts the desired voltage signal over the sensor's electrodes.
  • the sensor (selected by the multiplexer at that moment in time) will respond with a signal depending on the incoming light, which will eventually result in a signal in the high-level software layer.
  • the analogue signal generated by the second sensor and selected by the multiplexer is preferably filtered, and/or amplified and/or digitized.
  • the types of amplifiers used are preferably low-noise amplifiers such as the LT2054 and LT2055: zero-drift, low-noise amplifiers.
  • Different stages of amplification can be used. For example in an embodiment stages 1 to 3 are illustrated in Fig. 8a to 8c respectively.
  • the current-to-voltage amplification has a first factor, e.g. a factor of 2.2×10⁶ Ω.
  • closed loop amplification is adjustable by a second factor, e.g. between about 1 and 140 (using a digital potentiometer).
  • low-pass filtering is enabled (first order, with f0 at about 50 Hz, cf. an RC constant of 22 ms).
  • Digitization can be by an analog-to-digital converter (ADC) such as an LTC2420, a 20-bit ADC which allows more than 10⁶ levels to be differentiated between a minimum and maximum value. For a typical maximum of 1000 cd/m² (white display, backlight driven at high current), it is possible to discriminate 0.001 cd/m² if no noise is present.
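The quoted resolution follows from simple arithmetic: 2^20 codes spread over a 1000 cd/m² full scale correspond to slightly less than 0.001 cd/m² per code (in the absence of noise):

```python
# 2**20 ADC codes over a 0..1000 cd/m^2 full scale: each code is worth
# just under 0.001 cd/m^2, matching the figures quoted in the text.
levels = 2 ** 20              # 20-bit converter
full_scale = 1000.0           # cd/m^2, white display, backlight at high current
step = full_scale / levels    # luminance per ADC code
```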
  • the current timing in the circuit is mainly determined by the settings of a ΔΣ-ADC such as the LTC2420.
  • the most important time is the conversion time from analogue to digital (about 160ms, internal clock is used with 50Hz signal rejection).
  • the output time of the 24 clock cycles needed to read the 20-bit digital raw value out of the serial register of the LTC2420 (e.g. over a serial 3-wire interface) is of secondary importance.
  • the choice of the ADC (and its setting) corresponds to the target of stable high resolution light signals (20bit digital value, averaged over a time of 160ms, using 50Hz filtering).
  • Fig. 9 illustrates an overview of the data path from the second sensor to the ADC.
  • the ADC output can be provided to a processor, e.g. in a separate controller or in the display.
  • the embodiments that utilize a transparent sensor positioned on top of the location where it should measure preferably require suitable transparent electrodes that allow the electronic signal to be guided towards the edge, where it can be analyzed by the external electronics.
  • Suitable materials for the transparent electrodes are for instance tin oxides such as ITO (Indium Tin Oxide), zinc oxide or poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (known in the art as PEDOT:PSS).
  • This sensor array 7 can be attached to the front glass or laminated on the front glass plate 23 of the display device 2, for instance an LCD.
  • the organic layer 101 is preferably an organic photoconductive sensor, and may be a monolayer, a bilayer, or a multiple layer structure. Most suitably, the organic layer(s) 101 comprise an exciton generation layer (EGL) and a charge transport layer (CTL). The charge transport layer (CTL) is in contact with a first and a second transparent electrode, between which electrodes a voltage difference may be applied.
  • the thickness of the CTL can be for instance in the range of 25 to 100 nm, f.i. 80 nm.
  • the EGL layer may have a thickness in the order of 5 to 50 nm, for instance 10nm.
  • the material for the EGL is for instance a perylene derivative.
  • One specific example is 3,4,9,10-perylenetetracarboxylic bisbenzimidazole (PTCBI).
  • the material for the CTL is typically a highly transparent p-type organic semiconductor material.
  • Various examples are known in the art of organic transistors and hole transport materials for use in organic light emitting diodes.
  • Examples include pentacene, poly(3-hexylthiophene) (P3HT), poly(2-methoxy-5-(2'-ethylhexyloxy)-1,4-phenylene vinylene) (MEH-PPV) and N,N'-bis(3-methylphenyl)-N,N'-diphenyl-1,1'-biphenyl-4,4'-diamine (TPD). Mixtures of small molecules and polymeric semiconductors in different blends could be used alternatively.
  • the materials for the CTL and the EGL are preferably chosen such that the energy levels of the orbitals (HOMO, LUMO) are appropriately matched, so that excitons dissociate at the interface of both layers.
  • a charge separation layer may be present between the CTL and the EGL in one embodiment.
  • Various materials may be used as the charge separation layer, for instance Al2O3.
  • a monolayer structure can also be used. This configuration is also tested in the referenced paper, with only an EGL. Again, in the paper, the electrodes are Au, whereas we made an embodiment with ITO electrodes, such that a (semi-)transparent sensor can be created. We also created embodiments with other organic layers, both for the EGL and the CTL, such as PTCDA, with ITO electrodes. In a preferred embodiment, PTCBi as EGL and TMPB as CTL were used.
  • the organic photoconductive sensor may be a patterned layer or may be a single sheet covering the entire display.
  • each of the display area 5 will have its own set of electrodes but they will share a common organic photosensitive layer (simple or multiple).
  • the added advantage of a single sheet covering the entire display is that the possible color-specific absorption by the organic layer will be uniform across the display. In the case where several islands of organic material are separated on the display, non-uniformity in luminance and/or color is more difficult to compensate.
  • the electrodes are provided with finger-shaped extensions, as presented in figure 17.
  • the extensions of the first and second electrode preferably form an interdigitated pattern.
  • the number of fingers may be anything between 2 and 5000, more preferably between 100 and 2500, suitably between 250 and 1000.
  • the surface area of a single transparent sensor may be in the order of square micrometers but is preferable in the order of square millimeters, for instance between 1 and 7000 square millimeters.
  • One suitable finger size is for instance 1500 by 80 micrometers, but a size of for instance 4 x 6 micrometers is not excluded either.
  • the gap in between the fingers can for instance be 15 micrometers in one suitable implementation.
  • Electrodes 36 are made of a transparent conducting material, like any of the materials described above, e.g. ITO (Indium Tin Oxide), and are covered by organic layer(s) 101.
  • the organic photoconductive sensor need not be limited laterally.
  • the organic layer may be a single sheet covering the entire display (not shown).
  • Each of the display areas 5 will have its own set of electrodes 36 (one of the electrodes can be shared in some embodiments where sensors are addressed sequentially) but they can share a common organic photosensitive layer (simple or multiple).
  • the added advantage of a single sheet covering the entire display is that the possible color specific absorption by the organic layer will be to a major extent uniform across the display. In the case where several islands of organic material are separated on the display, non-uniformity in luminance and/or color is more difficult to compensate.
  • the first and a second electrode with the interlayer may, on a higher level, be arranged in a matrix (i.e. the areas where the finger patterns are located are arranged over the display's active area according to a matrix) for appropriate addressing and read out, as known to the skilled person.
  • the interlayer organic layer(s) is/are deposited after provision of the electrodes.
  • the substrate may be provided with a planarization layer.
  • a transistor may be provided at the output of the photosensor, particularly for amplification of the signal for transmission over the conductors to a controller. Most suitably, use is made of an organic transistor. Electrodes may be defined in the same electrode material as those of the photodetector.
  • organic photoconductors serve as sensors and because of that, they can be placed directly on top of the location where they should measure. Consequently, light collected for a particular display area does not need to be guided towards a sensor at the periphery of the display. In the most preferred embodiment, light is collected by a transparent or semi-transparent second sensor placed on each display area. The conversion of photons into charge carriers is done at the display area and not at the periphery of the display, and therefore the sensor will be within the viewing angle.
  • second sensors comprising composite materials could be constructed.
  • nano/micro particles are proposed, either organic or inorganic dissolved in the organic layers, or an organic layer consisting of a combination of different organic materials (dopants). Since the organic photosensitive particles often exhibit a strongly wavelength sensitive absorption coefficient, this configuration can result in a less colored transmission spectrum, or can be used to improve the detection over the whole visible spectrum, or can improve the detection of a specific wavelength region.
  • a disadvantage could be that the sensor only provides one output current per measurement for the entire spectrum for all these embodiments.
  • the X, Y and Z tristimulus values of a given spectrum emitted by the display have to be measured sequentially. The latter can be enabled by measuring and calibrating the X, Y and Z components of light emitted with a certain spectrum as described earlier, in case the sensor is sensitive to the entire visible spectrum.
  • hybrid structures using a mix of organic and inorganic materials can be used instead of using organic layers to generate charges.
  • a bilayer device that uses a quantum-dot exciton generation layer and an organic charge transport layer can be used.
  • colloidal Cadmium Selenide quantum dots and an organic charge transport layer comprising Spiro-TPD can be used.
  • an organic photoconductor can be a mono layer, a bi-layer or in general a multiple (>2) layer structure.
  • An example of an organic bilayer photoconductor is known from Applied Physics Letters 93 "Lateral organic bilayer heterojunction photoconductors" by John C. Ho, Alexi Arango and Vladimir Bulovic.
  • the bilayer disclosed by J.C. Ho uses gold electrodes, which are non-transparent, and therefore this sensor is not usable as a transparent sensor.
  • the bilayer comprises an EGL (PTCBI) or Exciton Generation Layer and a CTL (TPD) or Charge Transport Layer.
  • an alternative sensor like the organic sensor described above, can be used.
  • the sensor can be panchromatic, meaning that it is sensitive to the entire visible spectrum. This implies that the sensor can be sensitive to the red, green and blue spectra emitted by the display.
  • the first downside of such a sensor is the lack of colour filters which are typically used for measuring the CIE XYZ components.
  • the sensor can also be used to measure the brightness of light with a certain spectrum after calibrating it with the first sensor that includes the required filters. This calibration step is crucial, as the measured brightness will be relative to the source due to the lack of a V(λ) filter.
  • the absorption spectrum of the exciton generation layer organic material
  • the luminance versus the digital driving level (DDL) curve can be calibrated for all three primaries, and will differ for displays having different spectra.
  • the luminance of the different primaries can be measured using a matrix of sensors in order to obtain the luminance of the color components.
  • the sensor has a design fundamentally based on a compromise between transparency and efficiency: light needs to be sensed, which implies that photons are to be absorbed, while we still desire that the sensor remains (almost) transparent. This effect adds to the lack of a V(λ) filter, such that (minor) errors can occur when the emitted spectrum is non-constant over the active area of the display.
  • the major advantage of the sensor is its ability for measuring over the entire active area of the display, which allows obtaining a global measurement result, instead of a local measurement near the border of the screen.
  • a second sensor having a first and a second electrode with the interlayer may, on a higher level, be arranged in a matrix for appropriate addressing and read out, as known to the skilled person.
  • the interlayer is deposited after provision of the electrodes.
  • the substrate may be provided with a planarization layer.
  • a transistor may be provided at the output of the photosensor, particularly for amplification of the signal for transmission over the conductors to a controller.
  • Electrodes may be defined in the same electrode material as those of the photodetector.
  • the organic layer 101 may be patterned to be limited to one display area 5, a group of display areas 5, or alternatively certain pixels within the display area 5.
  • the interlayer is substantially unpatterned. Any color specific absorption by the transparent sensor will then be uniform across the display.
  • the organic layer 101 may comprise nanoparticles or microparticles, either organic or inorganic and dissolved or dispersed in an organic layer.
  • organic layer(s) 101 comprising a combination of different organic materials.
  • the organic photosensitive particles often exhibit a strongly wavelength-dependent absorption coefficient, such that this configuration can result in a less colored transmission spectrum. It may further be used to improve detection over the whole visible spectrum, or to improve the detection of a specific wavelength range.
  • more than one transparent sensor may be present in a display area 5, as illustrated in Fig. 17. Additional second sensors may be used for improvement of the measurement, but also to provide different colour-specific measurements. Additionally, by covering substantially the full front surface with transparent sensors, any reduction in intensity of the emitted light due to absorption and/or reflection in the at least partially transparent sensor will be less visible or even invisible, because position-dependent variations over the active area can be avoided this way.
  • the sensor surface of the transparent sensor 30 is automatically divided in different zones.
  • a specific zone corresponds to a specific display area 5, preferably a zone consisting of a plurality of pixels, and can be addressed by placing the electric field across its columns and rows. The current that flows in the circuit at that given time is representative for the photonic current going through that zone.
  • This sensor system 6 cannot distinguish the direction of the light. Therefore the photocurrent going through the transparent sensor 30 can be caused either by a pixel of the display area 5 or by external (ambient) light. Therefore reference measurements with an inactive backlight device are suitably performed.
  • the transparent sensor is present in a front section between the front glass and the display.
  • the front glass provides protection from external humidity (e.g. water spilled on the front glass, the use of cleaning materials, etc.). Also, it provides protection from potential external damage to the sensor. In order to minimize the negative impact of any humidity present in said cavity between the front glass and the display, encapsulation of the sensor is preferred.
  • FIG. 14 shows another embodiment of the invention relating to a sensor system 6 for rear detection.
  • Fig. 14 is a simplified representation of an optical stack of the display 3 comprising (from left to right) a diffuser, several collimator foils, a dual brightness enhancement film (DBEF) and a LED display element in the front section 25 of a display device 1.
  • the second sensor 9 of the sensor system 6 is added to measure all the light in the display area 5.
  • a backlight device 27 is located between the second sensor 9 and the stack of the display 3.
  • the second sensor 9 is countersunk in a housing element (not shown) so only light close to the normal, perpendicular to the front surface 28, is detected.
  • the sensor system 6 shown in Fig. 14 can be used for performing an advantageous method for detecting a property of the light, e.g. the intensity or colour of the light emitted from at least one display area 5 of a liquid crystal display device 2 (LCD device) into the viewing angle of said display device 2, wherein said LCD device 2 comprises a backlight device 27 for lighting the display 3 formed as a liquid crystal display member of the display device 2.
  • Fig. 15 shows a horizontal sectional view of a display device 1 with a second sensor system 6 according to another embodiment of the present invention.
  • the present embodiment is a scanning sensor system.
  • the sensor system 6 is realized as a solid state scanning sensor system localized in the front section 25 of the display device 1.
  • the display device 1 is in this example a liquid crystal display, but that is not essential. This embodiment effectively provides an incoupling member.
  • the substrate or structures created therein may be used as light guide members.
  • the solid state scanning sensor system is a switchable mirror. Therewith, light may be redirected into a direction towards a second sensor.
  • the solid state scanning system in this manner integrates both the incoupling member and the light guide member.
  • the solid state scanning sensor system is based on a perovskite crystalline or polycrystalline material, and particularly the electro-optical materials. Typical examples of such materials include lead zirconate titanate (PZT), lanthanum-doped lead zirconate titanate (PLZT), lead titanate (PT), barium titanate (BaTiO3), barium strontium titanate (BaSrTiO3). Such materials may be further doped with rare earth materials and may be provided by chemical vapour deposition, by sol-gel technology and as particles to be sintered. Many variations hereof are known from the fields of capacitors, actuators and microactuators (MEMS).
  • An additional layer 29 can be added to the front glass plate 23 and may be an optical device 10 of the sensor system 6.
  • This layer is a conductive transparent layer such as a tin oxide, preferably an ITO layer 29 (ITO: Indium Tin Oxide), that is divided into line electrodes by at least one transparent isolating layer 30.
  • the isolating layer 30 is only a few micrometers thick and placed at an angle.
  • the isolating layer 30 is any suitable transparent insulating layer of which a PLZT layer (PLZT: lanthanum-doped lead zirconate titanate) is one example.
  • the insulating layer preferably has a refractive index similar to that of the conductive layer, or at least of an area of the conductive layer surrounding the insulating layer, e.g. 5% or less difference in refractive index. However, when using ITO and PLZT, this difference can be larger; a PLZT layer can have a refractive index of 2.48, whereas ITO has a refractive index of 1.7.
  • the isolating layer 31 is an electro-optical switchable mirror 31 for deflecting at least one part of the light emitted from the display area 5 to the corresponding sensor 9 and is driven by a voltage.
  • the insulating layer can be an assembly of at least one ITO sub-layer and at least one glass or IPMRA sub-layer.
  • a four layered structure was manufactured.
  • a substrate was used, for instance a Corning glass substrate.
  • a first transparent electrode layer was provided. This was for instance ITO in a thickness of 30 nm.
  • a PZT layer was grown, in this example by CVD technology. The layer thickness was approximately 1 micrometer.
  • the deposition of the PZT layer may be optimized with nucleation layers as well as the deposition of several subsequent layers, that do not need to have the same composition.
  • a further electrode layer was provided on top of the PZT layer, for instance in a thickness of 100 nm. In one suitable example, this electrode layer was patterned in fingered shapes. More than one electrode may be defined in this electrode layer.
  • a polymer was deposited.
  • the polymer was added to mask the ITO finger pattern.
  • when a voltage is applied between the bottom electrode and the fingers on top of the PZT, the refractive index of the PZT under each of the fingers will change. This change in refractive index will result in the appearance of a diffraction pattern.
  • the finger pattern of the top electrode is preferably chosen so that a diffraction pattern with the same period would diffract light into a direction that would undergo total internal reflection at the next interface of the glass with air.
  • the light is thereafter guided into the glass, which directs the light to the sensor positioned at the edge. Therewith it is achieved that diffraction orders higher than zero are coupled into the glass and remain in the glass.
  • specific light guiding structures e.g. waveguides may be applied in or directly on the substrate.
  • While ITO is here highly advantageous, it is observed that this embodiment of the invention is not limited to the use of ITO electrodes. Other partially transparent materials may be used as well. Furthermore, it is not excluded that an alternative electrode pattern is designed with which the perovskite layer may be switched so as to enable diffraction into the substrate or another light guide member.
  • the solid state scanning sensor system has no moving parts and is advantageous when it comes to durability. Another benefit is that the solid state scanning sensor system can be made quite thin and doesn't create dust when functioning.
  • An alternative solution can be the use of a reflecting surface or mirror 28 that scans (passes over) the display 3, thereby reflecting light in the direction of the sensor array 7.
  • Other optical devices may be used that are able to deflect, reflect, bend, scatter, or diffract the light towards the sensor or sensors.
  • the sensor array 7 can be a photodiode array 32 without or with filters to measure intensity or colour of the light. Capturing and optionally storing the measured light as a function of the mirror position results in an accurate light property map, e.g. a colour or luminance map of the output emitted by the display 3. A comparable result can be achieved by passing the detector array 9 itself over the different display areas 5.
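The electro-optical incoupling described in the preceding bullets hinges on a geometric condition: the electrode finger period must be chosen so that non-zero diffraction orders exceed the critical angle of the glass-air interface and are trapped by total internal reflection. The following minimal sketch checks that condition for the first order at normal incidence; the wavelength, period and refractive index values are illustrative assumptions, not values from this document.

```python
import math

def first_order_trapped(wavelength_um: float, period_um: float, n_glass: float) -> bool:
    """Check whether the first diffraction order, launched into the glass at
    normal incidence, undergoes total internal reflection at the glass-air
    interface. Grating equation inside the glass: n * sin(theta_1) = lambda / period."""
    s = wavelength_um / (period_um * n_glass)   # sin(theta_1) inside the glass
    if s > 1.0:
        return False                            # order is evanescent, never launched
    theta_1 = math.asin(s)                      # diffraction angle in the glass
    theta_c = math.asin(1.0 / n_glass)          # critical angle of the glass-air interface
    return theta_1 > theta_c

# Assumed example: green light, sub-wavelength electrode period, standard glass.
print(first_order_trapped(0.55, 0.40, 1.5))
```

With these assumed numbers the first order leaves the grating at about 66° inside the glass, well beyond the roughly 42° critical angle, so the light stays guided towards the edge sensor; a much coarser period (e.g. 5 µm) diffracts too close to the normal and escapes.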

Abstract

A system and method for detecting and/or visualising and/or compensating optical effects of an image displayed on a display device are described. They are more particularly, but not exclusively, applicable to all sorts of displays such as fixed format displays, active matrix type displays, LED or OLED displays, LCD displays, plasma displays, etc. The invention can be used with displays such as fixed format displays that are intended to be used for medical imaging, more particularly display devices such as fixed format displays that have a high contrast ratio, wide viewing angle, extremely fast response time, and accurate imaging, and hence for detecting and/or visualising and/or compensating effects of an image displayed on such a display device.

Description

METHOD AND SYSTEM FOR COMPENSATING
EFFECTS IN LIGHT EMITTING DISPLAY DEVICES
Technical field of the invention
The present invention relates to a system and method for detecting and/or visualising and/or compensating optical effects of an image displayed on a display device. It applies more particularly, but not exclusively, to all sorts of displays such as fixed format displays, e.g. active matrix type displays, LED or OLED displays, LCD displays, plasma displays, etc. It applies particularly to displays such as fixed format displays that are typically but not exclusively intended to be used for medical imaging.
More particularly, the present invention relates to a display device such as a fixed format display that has a high contrast ratio, wide viewing angle, and accurate imaging.
Background of the invention
Emissive, especially fixed format displays such as Field-Emission (FED), Plasma, EL and OLED displays have been used in situations where conventional CRT displays are too bulky and/or heavy and provide an alternative to non-emissive displays such as Liquid Crystal displays (LCD). Fixed format means that the displays comprise a matrix of light emitting cells or pixel structures that are individually addressable rather than using a scanning electron beam as in a CRT. Fixed format also relates to the fact that the display contains pixels to visualize the image as well as to the fact that individual parts of the image signal are assigned to specific pixels in the display. However, the term "fixed format" is not related to whether the display is extendable, e.g. via tiling, to larger arrays, but to the fact that the display has a set of addressable pixels in an array or in groups of arrays or in a matrix. Making very large fixed format displays as single units manufactured on a single substrate is difficult.
At present, it is known that displays such as OLED displays and liquid crystal displays can be equipped with means for compensating the loss or variation of light output, whereby such compensation in part is carried out in view of the differential outputs of the individual pixels. There are two types of compensation methods and systems known which address the problem of differential ageing of display devices. The first method and system comprises the integration of a light sensor circuit in each individual pixel that acts as a feedback circuitry. In the case of for example a liquid crystal display with voltage drive pixels, one can correct the driving voltage by increasing or decreasing the latter depending upon this feedback signal to compensate for the loss or variation of luminescence and/or performance.
A second method for detecting and compensating the differential effects of display devices is based on a "model" approach. By keeping track, e.g. in non-volatile storage, of how much each individual pixel was driven over the lifetime of the display device, a prediction of the reduction in performance for each pixel can be made based on a model. This can be done by analysing the video content or by monitoring the on-current time of each pixel. The second method represents a much cheaper and simpler solution, but its accuracy is heavily dependent on the quality of the model used. Environmental factors such as temperature and moisture during the time of use cannot be taken into account. Therefore, in practice this second method does not show very accurate results and still some part of the differential ageing problem remains visible. Thus, this type of compensation would not be acceptable for display devices used in medical imaging.
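The model approach can be illustrated with a toy sketch: the on-time of each pixel is accumulated, a predicted luminance loss is derived from an assumed exponential decay model, and the drive level is boosted accordingly. The decay constant and the exponential form are illustrative assumptions, not values or models from this document.

```python
import math

DECAY_HOURS = 50_000.0  # assumed characteristic decay time of the emitter

def predicted_efficiency(on_hours: float) -> float:
    """Fraction of original luminance the pixel still produces,
    under an assumed exponential ageing model."""
    return math.exp(-on_hours / DECAY_HOURS)

def compensated_drive(target_level: float, on_hours: float) -> float:
    """Boost the drive level so the aged pixel reaches the target output.
    Clipped at 1.0: once headroom is exhausted, the loss becomes visible."""
    return min(1.0, target_level / predicted_efficiency(on_hours))

# A pixel driven for 10,000 hours needs a higher drive level for the same output.
print(compensated_drive(0.5, 0.0), compensated_drive(0.5, 10_000.0))
```

The sketch also shows the limitation noted above: the correction is only as good as the model, and environmental influences on the decay constant are not captured.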
The model approach is based on the uniform prediction of degradation of the pixels put into the model. Thus, the compensation achieved is not as accurate as it is required for an application in medical imaging.
Summary of the invention
It is an object of the present invention to provide an alternative system and method for detecting and/or visualising and/or compensating optical effects of an image displayed on a display device, more particularly, but not exclusively, to all sorts of displays such as fixed format displays, active matrix type displays, LED or OLED displays, LCD displays, plasma displays etc. The object applies particularly to displays such as fixed format displays that can be intended to be used for medical imaging, more particularly, display devices such as, for example fixed format displays that have a high contrast ratio, wide viewing angle, and accurate imaging and hence for detecting and/or visualising and/or compensating effects of an image displayed on such a display device. This object is accomplished by a method and a system according to the present invention.
The present invention concerns using a first sensor in combination with a second, partially transparent sensor in order to make sure that light measurements are made correctly and/or that the display or any of the sensors is working correctly, or to compensate for ambient light. The first sensor can be any of an external reference sensor, a sensor on or integrated in the display, or an ambient light sensor. The present invention provides in a first aspect methods for detecting and/or visualising and/or compensating ambient light or errors such as ageing effects of pixel outputs displaying an image on a display device, or errors in the sensors themselves. Firstly a first image is displayed on an active display area on the display device having a first plurality of pixels. The active display area is preferably the normal display area for images, i.e. in working operation.
In some embodiments a second image is displayed on a sub-area of the display device, having a second plurality of pixels, the active display area being larger than the sub-area, and the second image being smaller than the first image, having fewer pixels than the active display area. In addition, the pixels of the sub-area are driven with pixel values that are representative or indicative for the pixels in the active display area and optical measurements are performed on light emitted from the active display area using a substantially transparent sensor and the sub-area and generating optical measurement signals therefrom. Then preferably the display of the image on the active display area is controlled in accordance with the optical measurement signals of the sub-area and the active area.
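The control step above amounts to a feedback loop: the sub-area is driven with values representative of the active area, the optical measurement is compared with the expected output, and the drive of the active area is corrected. A hypothetical sketch of such a loop follows; the function names, the gain cap and the numbers are illustrative assumptions only.

```python
def correction_gain(measured_luminance: float, expected_luminance: float,
                    max_gain: float = 2.0) -> float:
    """Gain applied to the active-area drive so that the measured output of the
    representative sub-area matches the expected output. Capped for safety."""
    if measured_luminance <= 0.0:
        raise ValueError("no light measured; sensor or display fault")
    return min(max_gain, expected_luminance / measured_luminance)

def control_display(pixel_values, measured, expected):
    """Scale all pixel drive values of the active area by the feedback gain,
    clipping at the maximum drive level of 1.0."""
    g = correction_gain(measured, expected)
    return [min(1.0, v * g) for v in pixel_values]

# Sub-area measured 80 cd/m2 where 100 cd/m2 was expected: boost drive by 25%.
print(control_display([0.2, 0.5, 0.9], measured=80.0, expected=100.0))
```

The clip at 1.0 mirrors the practical limit of any compensation scheme: pixels already at full drive cannot be boosted further.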
In some embodiments the first sensor is for measuring light originating from the environment, e.g. ambient light. In embodiments of the present invention, the first sensor can capture the ambient light thus the light from the environment or the light emitted from a sub-area of the display device. Thus the first sensor preferably is bidirectional.
In an embodiment of a second aspect the invention provides display devices comprising at least one display sub-area provided with a plurality of pixels, with for each display sub-area the first type of sensor that is an optical sensor unit that is located in front of the display and makes optical measurements on a light output from only a representative part of the display. This first sensor can be either an external sensor, entirely separate from the display, or alternatively it can be integrated in the design of the display. The external sensor can also be calibrated by using a reference sensor with known properties.
Preferably the first type of sensor can be a mini-spectrometer, like for instance the ultra-compact mini-spectrometer integrating MEMS and image sensor technologies with number C10988MA by Hamamatsu, which can be positioned directly above the sub-area that should be measured, or it can be integrated into the bezel of the display, in which case a waveguide solution can be used to guide light emitted by the sub-area of the active area to the sensor. In an alternative embodiment, this mini-spectrometer can be used as an ambient light sensor, for example by integrating it into the bezel or into the backlight. Using a (set of) mini-spectrometer(s) inside the backlight enables capturing the spectrum of the impinging light through the optical stack and the liquid crystal layer. Obviously, the optical foils and the liquid crystal layer can impact the measured spectrum, such that a calibration of the measured spectrum will be required. The liquid crystal layer preferably can be adapted such that it is working in the transmitting state, such that as much light as possible is captured by the sensor, for instance a mini-spectrometer, and the backlight of the display needs to be switched off. In an alternative embodiment, the first sensor, in case it is used for measurements of ambient light, can for instance be a Coronis ambient light compensation sensor. The latter is preferably integrated in the bezel of a display. In some embodiments the first type of optical sensor is used for measuring light of a sub-area of the display, the first sensor comprising an optical aperture and a photodiode sensor with a light guide structure in between, which preferably guides light rays using total internal reflection. Preferably the aperture collects the light, whereafter the light exits through a light exit plane. In addition, preferably there can be an air gap between different parts of the light guide structure.
At these interfaces, stray light (which is light not emitted by the display device) can enter the light guide as well.
Another type of first sensor that can be used with embodiments according to the present invention is a fiber-optic implementation. The first type of optical sensor comprises an optical aperture and a first light sensor, with a bundle of optical fibres there between. The optical fibres are preferably fixed together or bundled (e.g. glued), and the end surface is polished to accept light rays under a limited angle only (as defined in the attached claims).
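The limited acceptance angle of such a polished fibre bundle follows from its numerical aperture. The small sketch below computes the acceptance half-angle in air from assumed core and cladding indices; the index values are typical illustrative figures, not taken from this document.

```python
import math

def acceptance_half_angle_deg(n_core: float, n_clad: float) -> float:
    """Acceptance half-angle (in air) of a step-index fibre:
    NA = sqrt(n_core^2 - n_clad^2), half-angle = asin(NA)."""
    na = math.sqrt(n_core ** 2 - n_clad ** 2)
    return math.degrees(math.asin(min(1.0, na)))

# Assumed typical silica fibre indices: rays beyond ~14 degrees are rejected,
# which is what restricts the bundle to near-normal display light.
print(round(acceptance_half_angle_deg(1.48, 1.46), 1))
```

A smaller core-cladding index contrast narrows the cone further, which is exactly the angular selectivity the bullet above relies on.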
Another first type of optical sensor that can be used with embodiments according to the present invention comprises a light guide. The first optical sensor furthermore comprises an aperture at one extremity of the light guide, and a photodiode sensor or equivalent device at the other extremity of the light guide. The light guide can have a non-uniform cross-section in order to concentrate light to a light exit plane. Preferably, light rays travel by total internal reflection through the light guide. The first sensor doesn't have a size restriction, but it is typically used to measure a zone of about 1 by 1 cm, which preferably captures the combined light output of many pixels in that zone.
The second partially transparent sensor is a full-display, partially and most preferably substantially transparent sensor for detecting a property of light emitted from said full display area into a viewing angle of the display device. The second sensor is located in such a way as to detect light from a front section of said display device in front of said display area. Further means for displaying a test pattern of a specific luminance and chromaticity on the display area can be provided, and the second sensor can detect properties of the light emitted, more specifically luminance and chromaticity, and means can be provided for comparing the output of the first sensor with that of the second sensor. Any difference may indicate a need to re-calibrate.
Preferably the second type of sensor can comprise at least one sub-sensor, whereby said sub-sensor is adapted to produce an individual measurement signal and can be positioned as part of a matrix structure. In an embodiment of the present invention, one of said sub-sensors is placed in the light path of the light measured by the first type of sensor, i.e. there is an overlap. This enables obtaining the exact luminance value, compensated for the temperature drift of both sensors. In addition, to enable calibration of any sub-sensor, an alternative can be to use a light source that shines light on all the sub-sensors of the second sensor, at any position. Advantageously, by comparing the measured response of the sensors to the expected response, the sub-sensors of the second sensor can be calibrated relative to one another.
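The relative calibration step above can be sketched as follows: a shared light source illuminates all sub-sensors, and a per-sub-sensor gain is derived from the ratio of the expected to the measured response. The function names and numbers below are illustrative assumptions.

```python
def relative_gains(measured: list, expected: list) -> list:
    """Per-sub-sensor correction gains from a shared calibration light source."""
    return [e / m for m, e in zip(measured, expected)]

def apply_gains(readings: list, gains: list) -> list:
    """Correct raw sub-sensor readings with the stored gains."""
    return [r * g for r, g in zip(readings, gains)]

# Three sub-sensors viewing the same source that should read 100 units each.
gains = relative_gains(measured=[98.0, 102.0, 95.0], expected=[100.0, 100.0, 100.0])
print([round(g, 3) for g in gains])
```

After this step all sub-sensors agree with one another; tying them to absolute luminance still requires the first (reference) sensor, as described in the surrounding text.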
However, the second sensor can also comprise a single large sensor, which is able to capture light coming from the entire active area, or a major part of the active area.
The second optical sensor according to embodiments of the present invention preferably is a bidirectional sensor that measures both the light emitted from the display and the ambient light.
In the embodiment the second type of sensor is used to measure light emitted by the display, optical measurements on light emitted from the display area and for example the sub-area are made by the second and first types of sensors respectively.
Electronic measurement signals are generated from these sensor measurements. The image on the display device can be controlled in accordance with the electronic measurement signals of the first and/or second sensors.
The active display area and the sub-area can be present in one single display device. Preferably, both sensors are integrated into the display. Moreover, an external sensor can be used (for instance as the first sensor) to recalibrate the second type of sensor as the second type of sensor typically lacks a V(A) filter, which is a spectral filter needed to measure the luminance. Thus the first sensor can be used as a reference sensor, as it can be made to include a V(A) filter, while the second sensor can be corrected to match the values measured by the first sensor. Each radiant quantity has a corresponding luminous i.e. photometric quantity, which reflects the physics related to the visual perception of the human eye. A V(A) characteristic describes the spectral response function of the human eye in the wavelength range from 380 nm to 780 nm and is used to evaluate the corresponding radiometric quantity that is a function of wavelength A. As an example, the photometric value luminous flux can be obtained by integrating radiant power <t>e (A) using the following formula:
Φv = Km · ∫(380 nm → 780 nm) Φe(λ) · V(λ) dλ
where the unit of the luminous flux Φv is the lumen [lm], the unit of Φe is the watt [W] and that of V(λ) is [1/nm]; in addition, the scaling factor Km is 683 lm/W. This formula establishes the relationship between the (physical) radiometric unit watt and the (physiological) photometric unit lumen. All other photometric quantities are likewise obtained from the integral of their corresponding radiometric quantities weighted with the V(λ) curve, analogous to the previous formula. The table below lists important radiometric and photometric quantities:
[Table: radiometric quantities and their photometric counterparts, e.g. radiant flux [W] ↔ luminous flux [lm]; radiant intensity [W/sr] ↔ luminous intensity [cd]; radiance [W/(sr·m²)] ↔ luminance [cd/m²]; irradiance [W/m²] ↔ illuminance [lx]]
It is clear from the explanation above that measurements of luminance and illuminance preferably require a spectral response that matches the V(λ) curve as closely as possible. In general, a sensor that is sensitive to the entire visible spectrum does not have a spectral sensitivity over the visible spectrum that matches the V(λ) curve. Therefore, a first sensor comprising an additional spectral filter is preferred to obtain the correct spectral response. An optical spectral filter to be used in combination with a first sensor that results in a spectral sensitivity matching the V(λ) curve is called a V(λ) filter.
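The weighting with the V(λ) curve described above can be illustrated numerically. The sketch below is a minimal example: the Gaussian approximation of V(λ) and the flat 1 mW/nm spectrum are assumptions for illustration only; a real implementation would use the CIE tabulated V(λ) data and a measured spectrum.

```python
import numpy as np

# Wavelength grid over the visible range, 1 nm steps (nm)
wl = np.arange(380.0, 781.0, 1.0)

# Illustrative approximation of V(lambda) as a Gaussian peaking at
# 555 nm; the CIE tabulated curve would be used in practice.
V = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)

# Hypothetical spectral radiant power Phi_e(lambda): a flat
# (equal-energy) spectrum of 1 mW/nm, purely as an example.
phi_e = np.full_like(wl, 1e-3)

Km = 683.0  # lm/W, maximum luminous efficacy

# Phi_v = Km * integral of Phi_e(lambda) * V(lambda) d(lambda),
# approximated as a Riemann sum on the 1 nm grid.
phi_v = Km * np.sum(phi_e * V) * 1.0
print(phi_v, "lm")
```

The same weighted-integral pattern applies to every photometric quantity mentioned in the table above.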
In embodiments there are three general ways of combining the first and second sensors. In the first methodology, both sensors are imperfect at a certain moment in time, and they can be recalibrated using a known reference light source, making sure one of the sub-sensors of the second sensor is designed to measure the same value as the first sensor if that sub-sensor is unable to measure the light source directly. As a result both sensors can be recalibrated (if they give different values, at least one of them measures incorrectly). Alternatively, both sensors can be used in such a way that the measurement results they produce can be compared, such that imperfections over time of at least one of the two sensors can be detected. Once they differ, a recalibration of the sensors is required. In another embodiment, one of the two sensors can be a reference sensor, which is used to recalibrate the other sensor. Typically, the first sensor will then be the reference sensor, e.g. when an external calibrated sensor is used. The second sensor is then calibrated by comparing its measured values to those of the first sensor and correcting them. This can for instance be done by combining measurements of the first sensor with an ageing model and predicting what the second sensor should measure over the screen. In some embodiments, the first type of sensor is integrated into the display device at a fixed position. Preferably, the first type of sensor is positioned at a corner of the display device and faces the sub-area of the screen, whereby said sensor is either temperature insensitive or, alternatively, compensation is included to avoid temperature dependence. The I-Guard sensor of Barco N.V. can be used, as described for example in EP 1274066. In addition, any temperature dependence of the first sensor can be compensated.
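The reference-sensor methodology above can be sketched as a simple per-sub-sensor gain correction. This is a minimal illustration under invented numbers; it assumes one sub-sensor of the second sensor shares a light path with the first (reference) sensor, as described earlier.

```python
# Sketch: recalibrate a matrix of second-sensor sub-sensors against a
# first (reference) sensor. All readings are hypothetical examples.

def recalibrate(reference_value, sub_sensor_readings, overlap_index):
    """Scale every sub-sensor so that the sub-sensor overlapping the
    reference sensor reproduces the reference measurement."""
    gain = reference_value / sub_sensor_readings[overlap_index]
    return [gain * r for r in sub_sensor_readings]

# The first sensor measures 120.0 cd/m^2 on the patch it shares with
# sub-sensor 0; the raw second-sensor readings have drifted.
raw = [100.0, 98.0, 103.5, 97.2]
corrected = recalibrate(120.0, raw, overlap_index=0)
print(corrected)  # sub-sensor 0 now reads 120.0
```

A single shared gain is of course the simplest possible model; the positional ageing model discussed below refines this.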
When a representative pattern for the second type of sensor is captured by the first type of sensor as well, and the light emitted by the display in that area degrades in the same way, the light values captured by a sub-sensor of the second type of sensor and its corresponding part of the first sensor are theoretically identical. The different sub-sensors of the second sensor can then all individually be matched to their corresponding part of the first sensor. This implies that during normal use the patch of the display underneath the first sensor shows a representative pattern for the different sub-sensors of the second sensor, while during calibration the representative parts are lit sequentially and individually so that they can be measured at a suitable driving level. This small representative image can e.g. be obtained by rescaling the display image to a smaller size (second display image). The exact scaling algorithm used is not considered a limitation of the present invention. Based on the actual display contents, the typical driving values can be identified for each pixel or a representative group of pixels of the sub-area, and the actual behaviour of these pixels can be determined at any moment of the drive time. The first sensor is preferably adapted to gauge light coming from one of the subparts of its total patch, which requires a calibration for the small patches. The actual matching may require displaying dedicated patterns for a brief moment in time, depending on the specific embodiment of the sensor, as measuring luminance and chromaticity for any type of pattern may not be obvious for the second sensor, as will be explained later on, especially when using a color display. Thus the first sensor can also be made up of sub-sensors.
As the positional temperature dependency of the second sensor can be eliminated this way, the measurements are preferably restricted to measurements on a thermally stabilised display, as a new calibration would be needed if the temperature changed again. This is typically at the same moment in time when the luminance output of the display stabilises, which can be checked using the first type of sensor. As the display can comprise the active display area and the sub-area (patch) in one single display device, the optical effects caused by running the display as a whole, such as temperature changes and exposure to oxygen levels in the air, are typically similar for the pixels of the active display area and of the sub-area, which leads to a reasonable accuracy of the compensation method. However, due to various reasons, for instance heat management, the ageing of the display can vary over the display's active area as well (for instance, the light source's output can age differently, or some parts of the light source can emit a different amount of light than other parts over time; another example is the positionally dependent ageing over time of the optical foils used in the backlight). It is therefore clear that the ageing will not only depend on the history of the driving of the pixel. A model can be used to compensate for this positionally dependent ageing over time, as it cannot be determined by simply making a reference to the representative pattern of the first sensor. This improved modelling will result in a better recalibration of the second sensor.
Hence the second image can contain a pattern of predefined pixel values, acting as a generic reference for any possible content of the first image, i.e. indicative of ageing of pixels of the first area. Advantageously this can be combined with the "model" approach to compensate for optical effects more accurately. When more general patterns are used, instead of patterns representative of what the large second sensor measures, the model can become more complex. Such patterns can for example be very small patches of 255 different grey levels, which can be displayed continuously but need to be measured sequentially, in combination with the history of the driving of the pixels measured by the sub-sensors of the second sensor. It is clear that a model is needed when using this data to calibrate the second sensor, as these measurements cannot be used directly to measure the degradation of the display and, accordingly, the value the second sensor should measure. This is further explained below.
The display can for instance be an OLED display, an LCD display or a plasma display. The method of the present invention provides a new approach for compensating optical effects, by using actual data derived from optical measurement signals that correspond to areas of pixels that have been driven in a representative manner, compared against measurements of specific areas of the display's active area. Accordingly, the second image can be selected from parts of the first image in such a way that the second image is representative of the first image but smaller in size. The first and second sensors can also measure the same sub-area, i.e. the first and second sensors overlap such that the first sensor receives light through the second sensor or vice versa. As mentioned above, the second sensor can consist of a matrix of sub-sensors, whereby one of the sensors of the matrix preferably overlaps with the first sensor, such that they can be used to measure light emitted by the same area.
In a preferred embodiment of the present inventive method, the sub-area can again be divided into different parts which are driven with a pattern based on the actual display contents. Typical driving values, such as a dynamic pattern like moving images or temporal dither patterns of the actually displayed image, can be identified for this purpose, and at least one part of the sub-area of the display device can be driven with that pattern. At the same time, for each individual pixel of the sub-area, data on how the pixel has been driven over the lifetime of the display can be stored. By measuring characteristics of the test patterns and comparing them with the measurements from the second sensor, the parameters of a model can be better adjusted. These parameters can then be used, in combination with information on how display pixels have been driven over the lifetime of the display, to predict the ageing behaviour of display pixels and to compare these results with what is measured by the second sensor. In contrast to making an estimated prediction of the actual behaviour based on a model only, the method of the present invention now provides a measurement of the current behaviour of a given class of pixels based on measurements of the second sensor, which is appropriately matched to the first sensor, instead of storing the complete driving behaviour of each pixel of the complete display and instead of an inaccurate estimation based on a model. The actual matching may require displaying dedicated patterns, depending on the specific embodiment of the sensor, as measuring luminance and chromaticity for any type of pattern may not be obvious for the second sensor, as will be explained later on, especially when using a color display. Moreover, the memory used to store the driving history of each of the sub-area pixels, or alternatively of classes of these pixels from parts of the sub-area, can be reduced.
The method for correction of an image is used in real time, i.e. in parallel with a running application. The method is intervention-free: it does not require input from a user. Preferably, the optical measurements carried out are luminance measurements. In that case, light output correction may comprise luminance and/or contrast correction. Alternatively, the optical measurements carried out are colour measurements, in which case light output correction comprises colour correction of the displayed image.
According to another preferred embodiment of the invention, the luminance measurements are carried out in sequences. For example, at time zero not all parts of the active display and the sub-area are used for measuring; it is also possible to reserve one part or zone of the active display and/or the sub-area, which can be temporarily driven with zero. After 1000 hours, for example, the reserved part or zone can be used to start a new series of luminance measurements. With this reservation it is possible to measure the degradation of differently driven pixels and then make a more accurate prediction of the degradation behaviour of the pixels, such as OLED pixels.
Alternatively, the sub-area and the active display area can be used and measured continuously to compare the sub-area image with the complete active display at all times. The optical measurement is then used to identify the remaining efficiency of every grey level and/or every colour. This degradation is stored in a table which shows the degradation per grey level and/or colour over time.
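The degradation table described above can be sketched as a simple mapping from grey level and drive time to remaining efficiency. The structure and all values below are illustrative assumptions, not the patent's implementation.

```python
# Sketch: a degradation table storing remaining efficiency per grey
# level over drive time. Grey levels, hours and luminances are invented.
from collections import defaultdict

degradation = defaultdict(dict)  # grey_level -> {hours: efficiency}

def record(grey_level, hours, measured, reference):
    """Store the remaining efficiency (measured / initial reference)
    of a grey level at a given number of drive hours."""
    degradation[grey_level][hours] = measured / reference

# Hypothetical measurements of grey level 128 at 0 and 1000 hours:
# the initial output is 250.0 cd/m^2; after 1000 h it has dropped.
record(128, 0, measured=250.0, reference=250.0)
record(128, 1000, measured=235.0, reference=250.0)

print(degradation[128])
```

Looking up `degradation[g][t]` then gives the remaining efficiency used for the compensation.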
It is another preferred embodiment of the present inventive method to also track in time how a pixel of the sub-area and/or the active display area was driven, in contrast to tracking only a total drive time. This allows an even more accurate model because it also takes into account the exact degradation at a particular moment of the lifespan. For example, if a measurement includes all grey levels every 30 minutes, it is possible to look up, for every pixel of the sub-area and subsequently of the whole display area, what the degradation was when driving a pixel at a certain video level at a certain moment in time. This ultimately allows an accurate compensation, with environmental changes, e.g. in temperature or moisture levels, also included in the model. This more accurate model allows a more accurate recalibration of the second sensor because it incorporates additional effects.
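Tracking *how* a pixel was driven, rather than only its total drive time, can be sketched as accumulating time per video level. This is a minimal illustration; the interval lengths and levels are invented.

```python
# Sketch: per-pixel driving history as accumulated seconds per video
# level, instead of a single total drive time. Values are illustrative.
from collections import Counter

history = Counter()  # video level -> accumulated seconds for one pixel

def log_interval(video_level, seconds):
    history[video_level] += seconds

# Hypothetical history: 30 minutes at level 200, then 1 hour at level 64.
log_interval(200, 30 * 60)
log_interval(64, 60 * 60)

total_drive_time = sum(history.values())
print(dict(history), total_drive_time)
```

The per-level history feeds the ageing model; the plain total drive time is still recoverable as the sum over all levels.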
The present invention also provides a display device comprising at least one first display area provided with a plurality of pixels, with for each first display area a first sensor, being an optical sensor unit located in front of the display that makes optical measurements on a light output from only a representative part of the display, and a second, full-display sensor for detecting a property of light emitted from said full display area into a viewing angle of the display device, which second sensor is located in a front section of said display device in front of said display area. Means may be provided for displaying a test pattern on the display area, for example of a specific colour, for detecting the property of light by the second sensor and for comparing the output of the second sensor with that of the first sensor.
The display device comprises an active display area for displaying the image, an image forming device and an electronic driving system for driving the image forming device. The first optical sensor unit, for example, comprises an optical aperture and a light sensor having an optical axis, to make optical measurements on a light output from a sub-area of the active display area of the image forming device and generating optical measurement signals therefrom.
The second substantially transparent sensor can be suitably applied to an inner face of a cover member. The transparent cover member may be used as a substrate in the manufacturing of the second sensor. Particularly, an organic or inorganic substrate can be used that has sufficient thermal stability to withstand the operating temperature of vapour deposition and the high vacuum conditions, vapour deposition being a preferred way of depositing the layers constituting the second sensor. Flexible substrates such as flexible polymeric substrates can also be used. Specific examples of deposition techniques include chemical vapour deposition (CVD) and any variant thereof for depositing inorganic semiconductors, such as metal organic chemical vapour deposition (MOCVD), as well as thermal vapour deposition. In addition, one can also apply low-temperature deposition techniques such as printing and coating, for instance for depositing organic materials. Another method which can be used is organic vapour phase deposition. When depositing organic materials, the temperatures at the substrate level are not much lower than in the other vapour deposition techniques. Assembly is not excluded as a manufacturing technique. In addition, coating techniques can also be used on inorganic substrates such as glass substrates; for polymer substrates, however, one must keep in mind that the solvent can dissolve the substrate in some cases.
In a suitable embodiment hereof, the second sensor further comprises at least partially transparent electrical conductors for conducting a measurement signal from said second sensor within said viewing angle for transmission to a controller. Substantially transparent conductor materials such as a tin oxide, e.g. indium tin oxide (ITO), or a substantially transparent conductive polymer such as poly(3,4-ethylenedioxythiophene) poly(styrenesulfonate), typically referred to as PEDOT:PSS, are well-known partially transparent electrical conductors. Preferably, a thin oxide layer or transparent conductive oxide is used; zinc oxide, for instance, is known to be a good transparent conductor and can also be used. In one most suitable embodiment, the second sensor is provided with transparent electrodes that are defined in one layer with the said conductors (also called a lateral configuration). This reduces the number of layers, which inherently lead to additional absorption and to interfaces that might slightly disturb the display image.
In a preferred embodiment, the second sensor comprises an organic photoconductive sensor. Such organic materials have been the subject of advanced research over the past decades. Organic photoconductive sensors may be embodied as single layers, as bilayers and as general multilayer structures. They may be advantageously applied within the present display device. Particularly, the presence on the inner face of the cover member allows the organic materials to be present in a closed and controllable atmosphere, e.g. in a space between the cover member and the display, which provides protection from any potential external damage. A getter may for instance be present to reduce the negative impact of humidity or oxygen. An example of a getter material is CaO. Furthermore, vacuum conditions or a predefined atmosphere (for instance pure nitrogen or an inert gas) may be applied in said space upon assembly of the cover member to the display, i.e. an encapsulation of the sensor.
A second sensor comprising an organic photoconductive sensor suitably further comprises a first and a second electrode that advantageously are located adjacent to each other. The location adjacent to each other, preferably defined within one layer, allows a design with finger-shaped electrodes that are mutually interdigitated. Herewith, charges generated in the photoconductive sensor are suitably collected by the electrodes. Preferably the number of fingers per electrode is larger than 50, more preferably larger than 100, for instance in the range of 250-2000. However the present invention is not limited to this amount.
Furthermore, an organic photoconductive sensor can be a monolayer, a bilayer or in general a multiple (>2) layer structure. One preferred type of photoconductive sensor is one wherein the organic photoconductive sensor is a bilayer structure with an exciton generation layer and a charge transport layer, said charge transport layer being in contact with a first and a second electrode. Such a bilayer structure is for instance known from Applied Physics Letters 93, "Lateral organic bilayer heterojunction photoconductors" by John C. Ho, Alexi Arango and Vladimir Bulovic. The sensor described by J.C. Ho et al. is a non-transparent sensor, as it uses gold electrodes which absorb the impinging light entirely. The bilayer comprises an exciton generation layer or EGL (PTCBI) and a hole transport layer or HTL (TPD), the latter in contact with the electrodes.
Alternatively, second sensors comprising composite materials can be constructed. By composite materials, nano/micro particles are meant, either organic or inorganic, dissolved in the organic layers, or an organic layer consisting of a combination of different organic materials (dopants). Since organic photosensitive particles often exhibit a strongly wavelength-sensitive absorption coefficient, this configuration can result in a less colored transmission spectrum when suitable materials are selected and suitably applied, or can be used to improve the detection over the whole visible spectrum, or can improve the detection of a specific wavelength region.
Alternatively, instead of using organic layers to generate charges and collect them with the electrodes, hybrid structures using a mix of organic and inorganic materials can be used. A bilayer device that uses a quantum-dot exciton generation layer and an organic charge transport layer can be used, for instance colloidal cadmium selenide (CdSe) quantum dots and an organic charge transport layer comprising Spiro-TPD.
In a preferred embodiment where photoconductive sensors are used, a disadvantage could be that the second sensor only provides one output current per measurement for the entire spectrum. In other words, it is not evident to measure color online while using the display. This can be avoided by using three independent photoconductive sensors that measure red, green and blue independently, together with a suitable calibration for the three independent photoconductive sensors. They could be conceived similarly to the previous descriptions, and stacked on top of each other or placed adjacent to each other on the substrate, to obtain an online color measurement. Offline color measurements can be made without the three independent photoconductive sensors, by calibrating the sensor to an external sensor which is able to measure tristimulus values (X, Y and Z) for a given spectrum. It is important to note that uniform patches should be displayed here, as will become clear from the later description of the methodology to measure online. This can be understood as follows. A human observer is unable to distinguish the individual wavelengths of the light impinging on his retina. Instead, he possesses three distinct types of photoreceptors, sensitive to three distinct wavelength bands, that define his chromatic response. This chromatic response can be expressed mathematically by color matching functions. Accordingly, three color matching functions x̄(λ), ȳ(λ) and z̄(λ) were defined by the CIE in 1931. They can be considered physically as three independent spectral sensitivity curves of three independent optical detectors positioned at our retinas. These color matching functions can be used to determine the CIE 1931 XYZ tristimulus values, using the following formulae:
X = ∫ I(λ) · x̄(λ) dλ
Y = ∫ I(λ) · ȳ(λ) dλ
Z = ∫ I(λ) · z̄(λ) dλ
where I(λ) is the spectral power distribution of the captured light. The luminance corresponds to the Y component of the CIE XYZ tristimulus values. Since a first sensor according to embodiments of the present invention has a characteristic spectral sensitivity curve that differs from the three color matching functions depicted above, it cannot be used as such to obtain any of the three tristimulus values. The second sensor according to embodiments of the present invention is typically sensitive in the entire visible spectrum as regards its absorption spectrum or, alternatively, at least sensitive to the spectral power distributions of a (typical) display's primaries; hence XYZ values can be obtained after calibration for a specific type of spectral light distribution emitted by the display. Displays are typically either monochrome or color displays. Monochrome (e.g. grayscale) displays have only a single primary (e.g. white) and hence emit light with a single spectral power distribution. Color displays typically have three primaries, red (R), green (G) and blue (B), which have three distinct spectral power distributions. A calibration step preferably is applied to match the XYZ tristimulus values corresponding to the spectral power distributions of the display's primaries to the measurements made by the second sensor according to embodiments of the present invention. In this calibration step, the basic idea is to match the XYZ tristimulus values of the specific spectral power distribution of the primaries to the values measured by the sensor, by capturing them both with the second sensor and with an external reference sensor.
Since the second sensor response according to embodiments of the present invention is non-linear, and the spectral power distribution associated with the primary may alter slightly depending on the digital driving level of the primary, it is insufficient to match them at a single level. Instead, they ideally need to be matched at every digital driving level. This provides a relation between the actual tristimulus values and second sensor measurements over the entire range of possible values. To obtain a conversion between any measured value, as measured by the second sensor according to the preferred embodiment, and the desired tristimulus value, an interpolation is needed to obtain a continuous conversion curve. This results in three conversion curves per display primary that convert the measured value into the XYZ tristimulus values. In the case of a monochrome display, three conversion curves are obtained when using this calibration methodology, and obtaining the XYZ tristimulus values is then evident: the light to be measured can simply be generated on the display (in the form of uniform patches) and measured by the sensor according to embodiments of the present invention, using the different conversion curves.
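The interpolation step above can be sketched as follows. The calibration pairs below are invented, and only the Y (luminance) curve is shown; a full calibration would build analogous curves for X and Z and cover every digital driving level.

```python
# Sketch: build a continuous conversion curve from calibration pairs
# (raw second-sensor reading vs. tristimulus Y from the reference
# sensor at each driving level). All data points are hypothetical.
import numpy as np

# Hypothetical calibration at a few driving levels; note the
# non-linear sensor response.
sensor_readings = np.array([0.0, 0.08, 0.35, 0.70, 1.00])
reference_Y = np.array([0.0, 5.0, 40.0, 110.0, 180.0])  # cd/m^2

def to_Y(reading):
    """Convert a raw second-sensor reading to luminance Y by
    piecewise-linear interpolation of the calibration curve."""
    return np.interp(reading, sensor_readings, reference_Y)

print(to_Y(0.5))  # interpolated luminance for a new raw reading
```

Any smooth interpolation (e.g. spline) could replace the piecewise-linear one; the essential point is the continuous conversion curve per tristimulus component and per primary.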
In the case of a color display, this calibration needs to be done for each of the display's primaries. This results in nine conversion curves in the typical case where the display has three primaries. Note that a specific coloured patch with a specific driving of the red, green and blue primaries will have a specific spectrum, which is a superposition of the scaled spectra of the red, green and blue primaries, and hence every possible combination of the driving levels would need to be calibrated individually. Therefore, an alternative methodology can suitably be used: the red, green and blue primaries are calibrated individually for each digital driving level. During such a calibration a single primary patch is displayed while the other two channels (primaries) remain at the lowest possible driving level (emitting the least possible light, ideally no light at all). This methodology implies that the red, green and blue driving of the patch needs to be done sequentially. The correct three conversion curves corresponding to the specific primary then need to be applied to obtain the XYZ tristimulus values from the measured values. This results in three sets of tristimulus values: (XR, YR, ZR), (XG, YG, ZG) and (XB, YB, ZB). Since the XYZ tristimulus values are additive, the XYZ tristimulus values of the patch can be obtained using the following formulae:
Xpatch=XR+XG+XB
Ypatch=YR+YG+YB
Zpatch=ZR+ZG+ZB
Note that we assume the display has no crosstalk in these formulae. Two parts can be distinguished in the XYZ tristimulus values. Y is directly a measure of the brightness (luminance) of a color. The chromaticity, on the other hand, can be specified by two derived parameters, x and y, which can be obtained from the XYZ tristimulus values using the following formulae:
x = X / (X + Y + Z)
y = Y / (X + Y + Z)
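The additive combination of the per-primary tristimulus values and the derivation of the chromaticity coordinates can be sketched together. The per-primary XYZ values below are illustrative example numbers, not measurements.

```python
# Sketch: sum per-primary tristimulus values of a patch (no crosstalk
# assumed) and derive the (x, y) chromaticity coordinates.

def chromaticity(X, Y, Z):
    """Return (x, y) chromaticity from XYZ tristimulus values.
    Assumes X + Y + Z > 0 (a non-black patch)."""
    s = X + Y + Z
    return X / s, Y / s

# Illustrative per-primary contributions of a full-drive white patch.
XR, YR, ZR = 41.24, 21.26, 1.93   # red primary
XG, YG, ZG = 35.76, 71.52, 11.92  # green primary
XB, YB, ZB = 18.05, 7.22, 95.05   # blue primary

Xp, Yp, Zp = XR + XG + XB, YR + YG + YB, ZR + ZG + ZB
x, y = chromaticity(Xp, Yp, Zp)
print(round(x, 4), round(y, 4))
```

Yp is directly the luminance of the patch, while (x, y) capture its chromaticity, matching the split described in the text.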
As in some embodiments of the present invention the second sensor lacks a V(λ) filter, the second sensor is preferably calibrated with a device that includes a V(λ) filter. The additional (first) sensor can qualify as such a device; this is a synergetic effect of both sensors. A similar reasoning can be used for obtaining the X and Z components of the tristimulus values. The second sensor lacks the required color filter, but the first sensor can include the necessary filter, so it can be used for calibrating the second sensor for measuring color. Thus, by using separate calibration curves for the X, Y and Z components, the color coordinates and the luminance can be determined. This is again spectrum dependent, so the simplest way is to perform a separate calibration for the light emission of the red, green and blue display primaries.
Once the second sensor is properly calibrated according to the embodiments elaborated in this application, a feedback system can be provided for receiving the optical measurement signals of the second sensor and on the basis thereof controlling the electronic driving system.
This offline color measurement, which is enabled by calibrating the sensor to an external sensor able to measure tristimulus values (X, Y and Z), thus allows measuring brightness as well as chromaticity.
For each display area, at least one second sensor and optionally an at least partially transparent optical coupling device can be provided. The addition of an extra component is less advantageous but is included within the scope of the present invention. The at least one second sensor is designed for detecting a property of light emitted from the said display area into a viewing angle of the display device. The second sensor can be located outside or at least partially outside the viewing angle. The at least partially transparent optical coupling device is located in a front section of said display device. It comprises a light guide member for guiding at least one part of the light emitted from the said display area to the corresponding second sensor. The coupling device further comprises an incoupling member for coupling the light into the light guide member.
In accordance with embodiments of the present invention it is possible to detect a property such as the intensity or the colour of light emitted by at least one display area of a display device into the viewing angle of said display device without constraining the view on said display device. The use of the incoupling member solves the apparent contradiction between a waveguide parallel to the front surface that does not disturb the display image, and a signal-to-noise ratio sufficiently high for allowing real-time measurements. An additional advantage is that any scattering that may occur at or in the incoupling member is limited to a small number of locations over the front surface of the display image.
According to other embodiments of the invention, a display device is provided that comprises at least two display areas with a plurality of pixels. For each display area, a sensor and an at least partially transparent optical coupling device are provided. The at least two sensors are designed for detecting a property of light emitted from the said display area into a viewing angle of the display device. The sensor is located outside or at least partially outside the viewing angle. The at least partially transparent optical coupling device is located in a front section of said display device. It comprises a light guide member for guiding at least one part of the light emitted from the said display area to the corresponding sensor. The coupling device further comprises an incoupling member for coupling the light into the light guide member.
It is an advantage of the present invention to detect a property such as the brightness or the chromaticity of light emitted by at least two display areas of a display device into the viewing angle of said display device without notably degrading the display device's image quality. The use of the incoupling member solves the apparent contradiction between a waveguide parallel to the front surface that does not disturb the display image, and a signal-to-noise ratio sufficiently high for allowing real-time measurements. An additional advantage is that any scattering that may occur at or in the incoupling member is limited to a small number of locations over the front surface of the display image. However, when using waveguides a moiré pattern can be observed at the edge of the waveguides, which can be considered a high risk; to lower this risk, the described embodiments using organic photoconductive sensors can be applied. Preferably, the light guide member runs in a plane which is parallel to a front surface of the display device. The incoupling member is suitably an incoupling member for laterally coupling the light into the light guide member of the coupling device. The result is a substantially planar incoupling member. This has the advantage of minimum disturbance of displayed images. Furthermore, the coupling device may be embedded in a layer or plate. It may be assembled to a cover member, i.e. a front glass plate, of the display after its manufacturing, for instance by insert or transfer moulding. Alternatively, the cover member is used as a substrate for definition of the coupling device.
In one implementation, a plurality of light guide members is arranged as individual light guide members or as part of a light guide member bundle. It is suitable that the light guide member is provided with a circular or rectangular cross-sectional shape when viewed perpendicular to the global propagation direction of light in the light guide member. A light guide with such a cross-section can be manufactured readily and moreover limits scattering of radiation. The cover member is typically a transparent substrate, for instance of glass or polymer material.
In any of the above embodiments the second sensor or the second sensors of the sensor system is/are located at a front edge of the display device.
The incoupling member of this embodiment may be present on top of the light guide member or effectively inside the light guide member. One example of such location inside the light guide is that the incoupling member and the light guide member have a co-planar ground plane. The incoupling member may then extend above the light guide member or remain below a top face of the light guide member or be coplanar with such top face. Furthermore, the incoupling member may have an interface with the light guide member or may be integral with such light guide member.
In one particular embodiment, the or each incoupling member is cone-shaped. The incoupling member herein has a tip and a ground plane. The ground plane preferably has a circular or oval shape. The tip is preferably facing towards the display area.
The incoupling member may be formed as a laterally prominent incoupling member. Most preferably, it is delimited by two laterally coaxially aligned cones, said cones having a mutual apex and different apex angles. The difference between the apex angles Δα = α1 − α2 is smaller than twice the critical angle (θc) for total internal reflection (TIR): Δα < 2θc. Especially, the or each incoupling member fades seamlessly into the guide member of the coupling device. The or each incoupling member and the or each guide member are suitably formed integrally.
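Merely by way of illustration, the apex-angle condition can be checked numerically. The following Python sketch is not part of the disclosure; the refractive indices and function names are illustrative assumptions, and the critical angle follows from Snell's law, θc = arcsin(n_clad / n_guide).

```python
import math

def critical_angle(n_guide: float, n_clad: float) -> float:
    """Critical angle (radians) for total internal reflection at a
    guide/cladding interface: theta_c = arcsin(n_clad / n_guide)."""
    return math.asin(n_clad / n_guide)

def apex_angles_valid(alpha1_deg: float, alpha2_deg: float,
                      n_guide: float, n_clad: float = 1.0) -> bool:
    """Check the incoupling condition stated above: the difference
    between the cone apex angles must stay below twice the critical
    angle, i.e. alpha1 - alpha2 < 2 * theta_c."""
    theta_c_deg = math.degrees(critical_angle(n_guide, n_clad))
    return (alpha1_deg - alpha2_deg) < 2 * theta_c_deg

# For a PMMA-like guide (n ~ 1.49) in air, theta_c is about 42 degrees,
# so the apex-angle difference should stay below roughly 84 degrees.
```

For such a guide, an apex-angle difference of 70 degrees would satisfy the condition, whereas 90 degrees would not.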
In an alternative embodiment, the or each incoupling member is a diffraction grating. The diffraction grating allows radiation of a limited set of wavelengths to be transmitted through the light guide member. Different wavelengths (e.g. different colours) may be incoupled with gratings having mutually different grating periods. The range of wavelengths is preferably chosen so as to represent the intensity of the light most adequately.
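By way of illustration only, the relation between grating period and incoupled wavelength follows the classical grating equation m·λ = d·sin(θ). The sketch below, with purely illustrative values and function names, lists which visible wavelengths a given period couples at a given diffraction angle.

```python
import math

def diffracted_wavelengths(period_nm: float, angle_deg: float,
                           orders=(1, 2, 3)):
    """Visible wavelengths (nm) that a grating with the given period
    diffracts at the stated angle, via the grating equation
    m * lambda = d * sin(theta); only 380-780 nm results are kept."""
    s = period_nm * math.sin(math.radians(angle_deg))
    return [s / m for m in orders if 380.0 <= s / m <= 780.0]

# A 1500 nm period at a 30 degree diffraction angle couples ~750 nm
# light in the first order; higher orders fall outside the visible.
```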
In a further embodiment hereof, both the cone-shaped incoupling member and diffraction grating are present as incoupling members. These two different incoupling members may be coupled to one common light guide member or to separate light guide members, one for each, and typically leading to different sensors.
By using a first and a second incoupling member of different types on one common light guide member, light extraction, at least of certain wavelengths, may be increased, thus further enhancing the signal-to-noise ratio. Additionally, because of the different operation of the incoupling members, the second sensor may detect more specific variations.
By using a first and a second incoupling member of different type in combination with a first and a second light guide member respectively, the different types of incoupling members may be applied for different types of measurements. For instance, one type, such as the cone-shaped incoupling member, may be applied for luminance measurements, whereas the diffraction grating or the phosphor discussed below may be applied for color measurements. Alternatively, one type, such as the cone-shaped incoupling member, may be used for a relative measurement, whereas another type, such as the diffraction grating, is used for an absolute measurement. In this embodiment, the one incoupling member (plus light guide member and sensor) may be coupled to a larger set of pixels than the other one. One is for instance coupled to a display area comprising a set of pixels, the other one is coupled to a group of display areas. In a further embodiment, the incoupling member comprises a transformer for transforming a wavelength of light emitted from the display area into a sensing wavelength. The transformer is for instance based on a phosphor. Such phosphor is suitably locally applied on top of the light guiding member. The phosphor may alternatively be incorporated into a material of the light guiding member. It could furthermore be applied on top of another incoupling member (e.g. on top of or in a diffraction grating or a cone-shaped member or another incoupling member).
The sensing wavelength of the second sensor is suitably a wavelength in the infrared range. This range has the advantage that the light of the sensing wavelength is no longer visible. Incoupling into and transport through the light guide member is thus not visible. In other words, any scattering of light is made invisible, and therewith disturbance of the emitted image of the display is prevented. Such scattering typically occurs simultaneously with the transformation of the wavelength, i.e. upon reemission of the light from the phosphor. The sensing wavelength is most suitably a wavelength in the near infrared range, for instance between 0.7 and 1.0 micrometers, and particularly between 0.75 and 0.9 micrometers. Such a wavelength can be suitably detected with commercially available photodetectors, for instance based on silicon.
A suitable phosphor for such transformation is for instance a manganese-activated zinc sulphide phosphor. Preferably, the phosphor is dissolved in a waveguide material, which is then spin coated on top of the substrate. The substrate is typically a glass substrate, for example BK7 glass with a refractive index of 1.51. Using lithography, the undesired parts of the layer are removed. Preferably, a rectangle is constructed which corresponds to the photosensitive area; in addition, the remainder of the waveguide, used to transport the generated optical signal towards the edges, is created in a second iteration of this lithographic process. Another layer can be spin coated (without the dissolved phosphors) on the substrate, and the undesired parts are again removed using lithography. Waveguide materials from Rohm & Haas, or PMMA, can be used.
Such a phosphor may emit in the desired wavelength region, provided the manganese concentration is greater than 2%. Other rare earth doped zinc sulfide phosphors can also be used for infrared (IR) emission. Examples are ZnS:ErF3 and ZnS:NdF3 thin film phosphors, such as disclosed in J. Appl. Phys. 94 (2003), 3147, which is incorporated herein by reference. Another example is ZnS:TmxAgy, with x between 100 and 1000 ppm and y between 10 and 100 ppm, as disclosed in US4499005.
Instead of being an alternative to the aforementioned transparent second sensor solution, a further second sensor embodiment of the coupling member and second sensor may be applied in addition to such a sensor solution. The combination enhances sensing solutions, and the different types of sensor solutions each have their own benefits. The one sensor solution may herein be coupled to a larger set of pixels than another sensor solution.
While the foregoing description refers to the presence of at least one display area with a corresponding second sensor solution, the number of display areas with a second sensor is preferably larger than one, for instance two, four, eight or any plurality. It is preferable that each display area of the display is provided with a second sensor solution, but that is not essential. For instance, merely one display area within a group of display areas could be provided with a second sensor solution.
In a further aspect according to the invention, use of the said display devices for sensing a light property with the second sensor while displaying an image is provided.
Most suitably, the real-time detection is carried out for the signal generated by the sensor according to the preferred embodiment of this invention. This signal is generated according to the sensor's physical characteristics as a consequence of the light emitted by the display, according to its light emission characteristics for any displayed pattern. The detection of luminance and color (chromaticity) aspects may be carried out in a calibration mode, e.g. when the display is not in a display mode.
However, it is not excluded that luminance and chromaticity detection may also be carried out in real time, in the display mode. In some embodiments it can be suitable to perform the measurements relative to a reference value. As mentioned in a preferred embodiment of this invention, the sensor does not exhibit an ideal spectral sensitivity according to the V(λ) curve, nor does it have suitable color filters to measure the tristimulus values. Therefore, real-time measurements are difficult as the sensor will not be calibrated for every possible spectrum that results from the driving of the R, G & B subpixels which generate light impinging on the sensor. The V(λ) curve describes the spectral response function of the human eye in the wavelength range from 380 nm to 780 nm and is used to establish the relation between a radiometric quantity that is a function of wavelength λ and the corresponding photometric quantity. As a result, measurements of luminance and illuminance require a spectral response that matches the V(λ) curve as closely as possible. In general, a sensor according to embodiments of the present invention is sensitive to the entire visible spectrum and does not have a spectral sensitivity over the visible spectrum that matches the V(λ) curve. Therefore, an additional spectral filter is needed to obtain the correct spectral response.
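For illustration, the relation between a spectral radiometric quantity and the corresponding photometric quantity can be sketched as follows. The Gaussian stand-in for V(λ) used here (peak 1.0 at 555 nm, illustrative width) is an assumption of this sketch only; the actual curve is tabulated by the CIE.

```python
import math

def v_lambda(wavelength_nm: float) -> float:
    """Crude Gaussian approximation of the CIE photopic V(lambda)
    curve, peaking at 1.0 for 555 nm (width is an assumption)."""
    return math.exp(-0.5 * ((wavelength_nm - 555.0) / 45.0) ** 2)

def luminance(spectral_radiance, lo=380, hi=780, step=1):
    """Photometric luminance (cd/m^2) from a spectral radiance
    function in W / (m^2 sr nm):
    Lv = 683 * integral of Le(lambda) * V(lambda) d(lambda)."""
    return 683.0 * sum(spectral_radiance(w) * v_lambda(w) * step
                       for w in range(lo, hi + 1, step))
```

A monochromatic source at 555 nm is weighted fully, whereas contributions towards the edges of the visible range are strongly suppressed, which is why a sensor with a flat visible response needs an additional spectral filter.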
On top of this non-ideal spectral sensitivity, the sensor as described in a preferred embodiment also does not operate as an ideal luminance sensor.
As the sensor used is not a perfect luminance sensor, since it does not capture light only within a very small opening angle, the angular sensitivity is preferably taken into account, as described in the following part.
For a given point on an ideal luminance sensor, the measured luminance corresponds to the light emitted by the pixel located directly under it (assuming that the sensor's sensitive area is parallel to the display's active area). On the contrary, the sensor according to embodiments of the present invention captures the pixel under the point together with some light emitted by surrounding pixels. More specifically, the values captured by the sensor cover a larger area than the size of the sensor itself. Because of this, the patterns used do not correspond to the actual patterns, and therefore a correction has to be made in order to simulate the measurements of the sensor. To enable the latter, preferably the luminance emission pattern of a pixel is measured as a function of the angles of its spherical coordinates. The ranges of the angles preferably run from -80 to 80 degrees with a step of 2 degrees for the inclination angle Θ and from 0 to 180 degrees with a step of 5 degrees for the angle Φ. The distance is preferably kept constant over the measurements. When a luminance sensor is positioned parallel to the display's active area, the latter corresponds to an inclination angle of 0, meaning that only an orthogonal light ray is considered. In addition, the exact light sensitivity of the sensor can be characterised. These measurements can then be used in the optical simulation software to obtain the corrected pattern for the actual light the sensors will detect. Using this actual light output provides an additional improvement and advantageous effect of the algorithm, which will render more reliable results.
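The correction described above, in which the sensor reading is simulated as an angularly weighted sum over the pixel under the sensor and its surrounding pixels, can be sketched as follows. The geometry, the 80 degree cut-off and the sensitivity function are illustrative assumptions of this sketch.

```python
import math

def simulate_sensor(pixel_luminance, sensor_xy, pixel_pitch_mm,
                    height_mm, sensitivity):
    """Simulated reading of a non-ideal sensor: each surrounding pixel
    contributes its luminance weighted by the angular sensitivity at
    the inclination angle under which the sensor sees that pixel.

    pixel_luminance : dict mapping (ix, iy) pixel indices to luminance
    sensor_xy       : sensor position (mm) in the display plane
    sensitivity     : function of inclination angle (degrees) -> weight
    """
    sx, sy = sensor_xy
    total = 0.0
    for (ix, iy), lum in pixel_luminance.items():
        dx = ix * pixel_pitch_mm - sx
        dy = iy * pixel_pitch_mm - sy
        # inclination angle of the ray from pixel to sensor
        theta = math.degrees(math.atan2(math.hypot(dx, dy), height_mm))
        if theta <= 80.0:  # emission pattern measured for -80..80 deg
            total += lum * sensitivity(theta)
    return total
```

With a cosine-like sensitivity, the pixel directly under the sensor contributes in full, while laterally displaced pixels contribute progressively less, as described above.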
In another preferred embodiment, e.g. suitable for performing real-time measurements, the first and second sensors can be used to remove the contribution of the ambient light from the measurements of the second sensor, or to quantify the luminance of the ambient light, depending on the type of first sensor used. The light measured by the first sensor may exclude the ambient light by the way the sensor is constructed, its closeness to the display surface of the display device and its shading, which is typically the case when using an external reference sensor, or a sensor integrated at the corner of the display. On the other hand, the second sensor measures a combination of both the ambient and the display light. By comparing the signals from the first and second sensors, and especially their difference, the ambient light can be determined. The only limitation is that the impinging light should be close to uniform over the area where the matrix of the second sensor is located. In the specific embodiment where the first sensor is an ambient light sensor, the first sensor can then also be used to calibrate the second sensor. For instance, the average measured value of all the sub-sensors of the second sensor can be set equal to the value measured by the ambient light sensor. The other sensors can then be used to obtain a spatial uniformity plot of the ambient light, by scaling the measured values to the average values. Also, this methodology implicitly assumes the ambient light is more or less uniform over the display's active area, which is typical in a real-life environment.
Alternatively, ambient light also can be measured by performing two measurements: a first measurement with display active (measuring ambient light + display light) and then a measurement with display inactive (measuring purely ambient light). The difference between those two measurements gives an indication of the contribution of the ambient light in the measured signal, which allows eliminating it from the measured signal. The drawback of this alternative solution is that it requires inactivating the display, while the combination of both sensors mentioned earlier does not require such an inactivation.
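Both ambient-estimation approaches described above reduce to simple differences. The following sketch (function names purely illustrative) summarises them: the continuous method compares a shielded first sensor against the second sensor, while the alternative method uses two measurements with the display active and inactive.

```python
def ambient_from_sensor_pair(second_reading: float,
                             first_reading: float) -> float:
    """Continuous method: the second sensor sees display + ambient
    light, the shielded first sensor sees display light only, so
    their difference approximates the ambient contribution (assuming
    roughly uniform ambient light over the sensed area)."""
    return second_reading - first_reading

def ambient_from_on_off(reading_display_on: float,
                        reading_display_off: float) -> float:
    """Alternative method: with the display inactive the sensor
    measures purely ambient light."""
    return reading_display_off

def display_signal(second_reading: float, ambient: float) -> float:
    """Second-sensor signal with the ambient contribution removed."""
    return second_reading - ambient
```

The drawback of the on/off variant, as noted above, is that it requires inactivating the display, whereas the sensor-pair variant runs during normal use.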
In addition, in another embodiment, continuous recording of the outputs of the first and second sensors can result in digital watermarking: e.g. after capturing and recording all the signals measured by all the sensors of the sensor system in a session (e.g. at the time of a diagnosis), it could be possible to re-create, at a later date, the same conditions which existed when an image was displayed in the session (e.g. used to perform the diagnosis).
As a result, for appropriate real-time sensing while display of images is ongoing, further processing on sensed values is suitably carried out. Therein, an image displayed in a display area is used for treatment of the corresponding sensed value or values, as well as the sensor's properties. Aspects of the image that are taken into account are particularly its light properties, and more preferably the light properties emitted by the individual pixels or an average thereof. Light properties of light emitted by individual pixels include their emission spectrum at every angle.
An algorithm may be used to calculate the expected response of the sensor based on the digital driving levels provided to the display and the physical behaviour of the sensor (this includes its spectral sensitivity over angle, its non-linearity and so on). When comparing the result of this algorithm to the actually measured light of a pixel or a group of pixels, it is possible to improve the display's performance by implementing a precorrection on the display's driving levels to obtain the desired light output. This precorrection may be an additional precorrection which can be added onto a precorrection that, for example, corrects the driving of the display such that a uniform light output over the display's active area is obtained.
In one embodiment, the difference between the sensing result and the theoretically calculated value is compared by a controller to a lower and/or an upper threshold value, taking into account the reference. If the result is outside the accepted range of values, it is to be reviewed or corrected. One possibility for review is that one or more subsequent sensing results for the display area are calculated and compared by the controller. If more than a critical number of sensing values for one display area are outside the accepted range, then the setting for the display area is to be corrected so as to bring it within the accepted range. A critical number is for instance 2 out of 10: e.g. if 3 to 10 of the sensing values are outside the accepted range, the controller takes action. Else, if the number of sensing values outside the accepted range is above a monitoring value but not higher than the critical value, the controller may decide to continue monitoring.
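The controller decision just described, with a critical number and a monitoring number per display area, may be sketched as follows. The critical number of 2 follows the example in the text; the monitoring threshold and function names are illustrative assumptions.

```python
def review_display_area(deviations, accepted_range,
                        critical=2, monitor=1):
    """Controller decision for one display area: count sensing
    deviations outside the accepted range; above the critical number
    the settings are corrected, above the monitoring number the area
    is kept under observation, otherwise it is accepted.

    deviations     : differences between sensed and calculated values
    accepted_range : (lower, upper) threshold pair
    """
    lo, hi = accepted_range
    outside = sum(1 for d in deviations if d < lo or d > hi)
    if outside > critical:
        return "correct"
    if outside > monitor:
        return "monitor"
    return "ok"
```

With 3 of 10 values outside the range the controller takes corrective action; with only 2 it continues monitoring.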
In order to balance processing effort, the controller may decide not to review all sensing results continuously, but to restrict the number of reviews to infrequent reviews with a specific time interval in between. Furthermore, this comparison process may be scheduled with a relatively low priority, such that it is only carried out when the processor is idle.
In another embodiment, such a sensing result is stored in a memory. At the end of a monitoring period, such a set of sensing results may be evaluated. One suitable evaluation is to find out whether the sensed values of the difference in light are systematically above or below the threshold value that, according to the settings specified by the driving of the display, should be emitted. If such a systematic difference exists, the driving of the display may be adapted accordingly. In order to increase the robustness of the set of sensing results, certain sensing results may be left out of the set, such as for instance an upper and a lower value. Additionally, only values corresponding to a certain display setting may be considered. For instance, only sensing values corresponding to high (RGB) driving levels are looked at. This may be suitable to verify whether the display behaves at high (RGB) driving levels similarly to its behaviour at other settings, for instance low (RGB) driving levels. Alternatively, the sensed values of certain (RGB) driving levels may be evaluated, as these values are most reliable for reviewing driving level settings. Instead of high and low values, one may think of light measurements when emitting a predominantly green image versus the light measurements when emitting a predominantly yellow image. Additional calculations can be based on said set of sensed values. For instance, instead of merely determining a difference between the sensed value and the theoretically calculated value of the light output, which is the originally calibrated value, the derivative may be reviewed. This can then be used to see whether the difference increases or decreases. Again, the timescale of determining such a derivative may be smaller or larger, preferably larger, than that of the absolute difference. It is not excluded that average values are used for determining the derivative over time.
In another embodiment, sets of sensed values obtained at a uniform driving of the display (or when applying another precorrection dedicated to achieving a uniform luminance output) for different display areas are compared to each other. In this manner, homogeneity of the display emittance (e.g. luminance) can be calculated.
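By way of illustration, homogeneity over the measured zones can be expressed, for instance, as a min/max ratio or as per-zone deviations from the average; both metrics and function names below are illustrative choices, not prescribed by the disclosure.

```python
def luminance_uniformity(zone_values):
    """Homogeneity of the display emittance over the measured zones,
    expressed as a min/max ratio (1.0 = perfectly uniform)."""
    return min(zone_values) / max(zone_values)

def relative_deviation(zone_values):
    """Per-zone values scaled to their mean, e.g. for a spatial
    uniformity plot of the emitted luminance."""
    mean = sum(zone_values) / len(zone_values)
    return [v / mean for v in zone_values]
```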
It will be understood by the skilled reader that, for the said processing and calculations, use is made of storage of the display's theoretically calculated values and sensed values. An efficient storage protocol may further be implemented by the skilled person.
In the embodiment where the display is used in a room with ambient light, the sensed value is suitably compared to a reference value for calibration purposes. The calibration will typically be carried out per display area and compared to the output of the first sensor. In the case of using a display with a backlight, the calibration typically involves switching the backlight on and off to determine potential ambient light influences that might be measured during normal use of the display, for a display area and suitably one or more surrounding display areas. The difference between these measured values corresponds to the influence of the ambient light. This value needs to be determined because otherwise the calculated ideal value and the measured value will never match when the display is put in an environment that is not pitch black. In case of using a display without a backlight, the calibration typically involves switching the display off, within a display area and suitably surrounding display areas including the sub-area measured by the first sensor. The calibration is for instance carried out for a first time upon start-up of the display. It may subsequently be repeated for display areas. Moments for such calibration during real-time use, which do not disturb a viewer, include for instance short transition periods between a first block and a second block of images. In case of consumer displays, such a transition period is for instance an announcement of a new and regular program, such as the daily news. In case of professional displays, such as displays for medical use, such transition periods are for instance periods between reviewing a first medical image (X-ray, MRI and the like) and a second medical image. The controller will know or may determine such a transition period.
While the above method has been expressed in the claims as a use of the above mentioned sensor solutions, it is to be understood that the method is also applicable to any other sensor to be used with other display types. It is more generally a method of using a matrix of sensors in combination with a display. In the preferred embodiment, the matrix of sensors is designed such that it is permanently integrated into the display's design. Therefore, a matrix of transparent organic photoconductive sensors is used preferably, suitably designed to preserve the display's visual quality to the highest possible degree.
The goal can be to assess either the luminance or the color uniformity of the spatial light emission of a display, based on at least two zones.
The present invention includes providing a sensing result by:
Comparing the sensor value which is actually measured in the zone to the value which should ideally have been measured by the sensor for a specified display area with the applied display settings for said display area, corresponding to the moment in time on which the sensor determination is based. This can be based either on a mathematical algorithm or on an additional calibration step, depending on whether real-time measurements or offline measurements using uniform patches are made, and
Evaluating the sensing result and/or evaluating a set of sensing results for defining a display evaluation parameter;
If the display evaluation parameter is outside an accepted range, modify the display settings, or notify the user that the display is out of the desired operating range, and/or continue monitoring said display area.
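The three steps above may be sketched as follows. The accepted range and the use of the mean deviation as evaluation parameter are illustrative assumptions of this sketch, as are the function names.

```python
def sensing_result(measured: float, ideal: float) -> float:
    """Step 1: difference between the actually measured sensor value
    and the value that should ideally have been measured for the
    applied display settings."""
    return measured - ideal

def evaluate(results):
    """Step 2: condense a set of sensing results into a display
    evaluation parameter (here the mean deviation, an illustrative
    choice)."""
    return sum(results) / len(results)

def act(parameter: float, accepted=(-0.05, 0.05)) -> str:
    """Step 3: decide on the action when the evaluation parameter
    falls outside the accepted range (range illustrative)."""
    lo, hi = accepted
    if lo <= parameter <= hi:
        return "within range: continue monitoring"
    return "out of range: correct settings or notify user"
```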
The average display settings as used herein are more preferably the ideally emitted luminance as discussed above.
The display device defines at least one display area and may be of conventional technology, such as a liquid crystal display (LCD) with a backlight, for instance based on light emitting diodes (LEDs), or an electroluminescent device such as an organic light emitting diode (OLED) display. The display device suitably further comprises an electronic driving system and a controller receiving optical measurement signals generated in the first and second sensors and controlling the electronic driving system on the basis of the received optical measurement signals. In embodiments where the additional sensor is an external sensor, which is not integrated into the design of the display, it is possible that the display does not directly receive the measurements, but that the measurements are sent to a computer, which interprets the measurements.
The sub-area of the active display area is adapted to show an image that is representative or indicative of the image of the complete active display area.
The active display area and the sub-area are in one single display device. The optical aperture of the first optical sensor unit preferably has an acceptance angle such that at least 50% of the light received by the sensor comes from light travelling within 15° of the optical axis of the first light sensor (that is the acceptance angle of the sensor is 30°). In other words the acceptance angle of the first sensor is such that the ratio between the amount of light used for control which is emitted or reflected from the display area at a subtended acceptance angle of 30° or less to the amount of light used for control which is emitted or reflected from the display area at a subtended acceptance angle of greater than 30° is X:1 where X is 1 or greater. Under some circumstances it may be advantageous to have an acceptance angle such that at least 60%, alternatively at least 70% or at least 75% of the light received by the first light sensor comes from light travelling within 15° of the optical axis of the first light sensor. The optical aperture of the first optical sensor unit can have an acceptance angle such that light received at the first sensor at an angle with the optical axis of the first light sensor equal to or greater than 10° is attenuated by at least 25%, light received at an angle equal to or greater than 20° is attenuated by at least 50 or 55% and light arriving at an angle equal to or greater than 35° is attenuated by at least 80 or 85%.
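Merely for illustration, the attenuation figures above can be checked against a measured angular profile as follows; the function name and the sample profiles are illustrative assumptions.

```python
def meets_attenuation_spec(attenuation) -> bool:
    """Check an angular attenuation profile against the figures
    stated above: at least 25% attenuation at 10 degrees off-axis,
    at least 50% at 20 degrees and at least 80% at 35 degrees.

    attenuation : function angle_deg -> fractional attenuation (0..1)
    """
    return (attenuation(10) >= 0.25 and
            attenuation(20) >= 0.50 and
            attenuation(35) >= 0.80)
```

A profile that ramps linearly to full attenuation at 40 degrees satisfies the specification, whereas a much flatter profile does not.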
Preferably, the optical measurements of the first and/or second sensors are luminance measurements. The performance correction may then comprise luminance and/or contrast correction. The optical measurements of at least the first sensor may also be colour measurements for instance retrieving color coordinates, in which case a colour correction may be carried out. Once the second sensor is properly calibrated according to the embodiments elaborated in this application, a feedback system can be provided for receiving the optical measurement signals of the second sensor and on the basis thereof controlling the electronic driving system. The feedback system preferably comprises a comparator/amplifier for comparing the optical measurement signals, measured luminance or colour values, with a reference value, and a regulator for regulating a backlight control and/or a video contrast control and/or a video brightness control and/or a colour temperature, so as to reduce the difference between the reference value and the measured value and bring this difference as close as possible to zero.
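One illustrative iteration of such a comparator/regulator loop, with an assumed proportional gain and normalised drive range, might look as follows; the numeric values are not part of the disclosure.

```python
def regulate_backlight(measured: float, reference: float,
                       drive: float, gain: float = 0.1,
                       lo: float = 0.0, hi: float = 1.0) -> float:
    """One iteration of the feedback loop: compare the measured
    luminance with the reference value and nudge the backlight drive
    level so as to bring their difference closer to zero. The drive
    level is clamped to the valid range."""
    error = reference - measured
    new_drive = drive + gain * error / max(reference, 1e-9)
    return min(hi, max(lo, new_drive))
```

Repeated application of such an update drives the measured value towards the reference, as the feedback system described above is intended to do.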
The first type optical sensor unit of the present invention can in one embodiment be a sensor integrated into the design of the display. In this case, the first sensor preferably comprises a light guide between the optical aperture and the first light sensor. This light guide may be e.g. a light pipe or an optical fibre. In addition, in an alternative embodiment, the first type of sensor can be an external sensor, which is separated from the display. Preferably, the sub-area of the active display area of the image forming display device is less than 1% of the total area of the active display area of the image forming device, preferably less than 0.1%, and still more preferably less than 0.01%. According to these embodiments where the first sensor is not an ambient light sensor, the optical aperture of the first optical sensor unit masks a portion of the active display area, while the first light sensor itself does not mask any part of the active display area. The light output from the front face of the active display area of a display device is continuously measured with a minimal coverage of the viewed image by the first sensor. The first light sensor may be brought to the back of the display area or to a side thereof.
The sub-area measured on the screen by these embodiments of the first sensor is composed of a number of active pixels of the active display area. The sub-area of active pixels measured on the screen is preferably not larger than 6 mm x 4 mm. For example, for a mobile phone screen with typical active display area dimensions of 50 mm x 80 mm (third generation mobile phone), a measurement zone of 6 mm x 4 mm constitutes 0.6% of that active display area. For a laptop screen with an active display area with dimensions of 245.9 mm x 184.4 mm (a 12.1 inch screen), a measurement zone of 6 mm x 4 mm constitutes 0.05% of that active display area.
No dedicated test pixels are necessary; any pixels in the active display area can be used for carrying out optical measurements thereupon. A test patch may be generated and superimposed on the active pixels viewed by the first and/or second sensor.
Preferably, a housing of the first optical sensor unit stands out above the active display area by a distance lower than 0.5 cm.
The present invention also includes a control unit to compensate for optical effects of pixels displaying an image on a display device, the control unit comprising:
- means for allowing display of a first image on an active display area on the display device having a first plurality of pixels,
- means for allowing display of a second image on a sub-area of the active display area and having a second plurality of pixels, the active display area being larger than the sub-area and the second image being smaller than the first image and having fewer pixels than the active display area,
- means for controlling driving the pixels of the sub-area according to parts of the first image, and
- means for controlling the display of the image on the active display area in accordance with the optical measurement signals of the first image and the second image.
The present invention also includes computer program product comprising code segments adapted for execution on any type of computing device. The code segments when executed on a computing device provide:
- means for allowing display of a first image on an active display area on the display device having a first plurality of pixels,
- means for allowing display of a second image on a sub-area of the active display area and having a second plurality of pixels, the active display area being larger than the sub-area and the second image being smaller than the first image and having fewer pixels than the active display area,
- means for controlling driving the pixels of the sub-area according to parts of the first image, and
- means for controlling the display of the image on the active display area in accordance with the optical measurement signals of the first and second image.
The present invention also includes a machine readable signal storage medium storing the computer program product. The medium may be a disk medium such as a diskette or hard disk, a tape storage medium, a solid state memory such as RAM or a USB memory stick, an optical recording disk such as a CD-ROM or DVD-ROM, etc.
Other features and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1A is a top view and Fig. 1B is a front view of a part of an OLED screen provided with an optical sensor unit according to the present invention.
Fig. 2 shows a first embodiment of an optical sensor unit according to the present invention, the unit comprising a light guide being assembled of different pieces of PMMA.
Fig. 3 shows a second embodiment of an optical sensor unit according to the present invention, the unit comprising a light guide with optical fibres.
Fig. 4 shows a third embodiment of an optical sensor unit according to the present invention, the unit comprising a light guide made of one single piece of PMMA.
Fig. 5 shows the light guide of Fig. 4, this light guide being coated with a reflective coating.
Fig. 6 shows the light guide of Fig. 4, this light guide being partially coated with a reflective coating, and the light guide being shielded from ambient light by a housing.
In the different drawings, the same reference figures refer to the same or analogous elements.
Fig. 7 illustrates an example of an ambient light sensor.
Fig. 8a shows the first stage of amplification used for a display device with a sensor system;
Fig. 8b shows the second stage of amplification used for a display device with a sensor system;
Fig. 8c shows the third stage of amplification used for a display device with a sensor system;
Fig. 9 illustrates the overview of the data path from the sensor to the processor;
Fig. 10 is a schematic representation of a display system according to an embodiment of the present invention.
Fig. 11 is a schematic representation of embodiments of the present invention.
Fig. 12 is a schematic illustration of a display device with a sensor system according to a first embodiment of the invention;
Fig. 13 shows the coupling device of the sensor system illustrated in Fig. 12;
Fig. 14 shows a vertical sectional view of a sensor system for use in the display device according to a third embodiment of the invention;
Fig. 15 shows a horizontal sectional view of a display device with a sensor system according to a fourth embodiment of the invention;
Fig. 16 shows a side view of a display device with a sensor system according to a second embodiment of the invention;
Fig. 17 shows a schematic view of a network of sensors with a single layer of electrodes used in the display device.
Description of the illustrative embodiments
The present invention will be described with respect to particular embodiments and with reference to certain drawings but the invention is not limited thereto but only by the claims. The drawings described are only schematic and are non-limiting. In the following the acceptance angle of a sensor refers to the angle subtended by the extreme light rays which can enter the sensor. The angle between the optical axis and the extreme rays is therefore usually half of the acceptance angle.
The present invention makes use of a first and a second sensor. The first sensor is an optical sensor unit that makes optical measurements on a light output from a representative part of a fixed format display such as an LCD, LED, OLED or plasma display or the like, or, in an alternative embodiment, performs ambient light measurements. The first sensor can be a colour sensor. So a first color sensor makes optical color measurements on a light output from a representative part of the display, i.e. a small part, or performs illuminance measurements on the ambient light. The second sensor is a sensor such as a panchromatic sensor (that is, the sensor is not specific to a certain colour) which is substantially transparent and is placed at the front of a display screen. As mentioned earlier, the first sensor can be an ambient light sensor as well, but if this is not explicitly mentioned, the first sensor can be considered as preferably a sensor intended to measure the light output (luminance/chromaticity) of the display.
In the case where the first type of sensor is a sensor integrated in the display that permanently obstructs a part of the active area, it is straightforward to perform online measurements, since any desired pattern can be displayed beneath the obstructed area and measured at any time. In the case where the first type of sensor is an ambient light sensor, it is typically integrated into the display's bezel, which is also suitable for online measurements.
The partially transparent second sensor can be made large enough to cover, e.g., the entire active area of the display (contrary to current sensors that only measure at the border of the active area). This allows measurement of light intensity over the complete screen, or in parts of the screen when the second sensor is divided into regions. By doing this, an average over the entire screen can be measured, which is more reliable than limiting the measurement to a small patch; furthermore, the amplitude of the signal is increased, which results in more reliable measurements. However, as a result it is not obvious to perform online measurements, as the pattern measured by the sensor is the pattern currently displayed on the display, which is entirely under the user's control. When a dedicated uniform patch needs to be measured, the display must hence be taken out of its normal operation mode. Also, because of the sensor's design, the light captured by the second sensor is not limited to a small opening angle: light coming from larger angles is measured as well. This is different from a conventional luminance sensor, and consequently complex algorithms need to be implemented to obtain satisfactory measurement results even for a grayscale display. For some embodiments of the sensor in combination with a color display, it may be unavoidable to perform offline measurements to obtain luminance measurements. In addition, some of the required patterns will possibly not be displayed during normal use, which raises the need to measure some patterns in a non-sequential mode, or to display them faster than the observer is able to see.
Based on these arguments, one can conclude that the first sensor is more suitable for tests that require continuous measurements (such as ensuring a continuous luminance output while the display is still in the process of thermal stabilization e.g. after it is started up). The second sensor is by design more suited for offline measurements, which can be performed e.g. in a screen-saver mode or at time periods outside the active usage of the display, for example at night.
The second sensor typically lacks the necessary filters to perform an X, Y and Z measurement directly, e.g. in a preferred embodiment using the organic photoconductive sensors positioned on top of the location to be measured, due to its design. It does, however, have a broad absorption spectrum (as it is a panchromatic sensor), which enables measuring tristimulus X, Y and Z values after a suitable calibration.
The second sensor can be a bidirectional sensor, able to measure light coming from both directions. However, when measuring ambient light, it is not feasible to integrate a filter in the sensor which ensures that the sensor's spectral sensitivity matches the V(λ) curve, as this would result in severe optical losses and coloring of the display. Theoretically, the ambient light can have any spectrum. For example, the source can include significant IR light, which can partially be detected by the sensor, while human eyes are insensitive to this spectral range. This implies that there is no possibility to calibrate the second sensor to the ambient light in a generic way. Therefore, a first sensor that includes a V(λ) filter matching the spectral sensitivity of the human eye is preferably used; this ambient light sensor can be a specific embodiment of the first sensor. Alternatively, a mini-spectrometer integrated in the display, more specifically in the bezel of the display, can be used.
These two types of sensors with profoundly different capabilities and limitations can be neatly combined in various ways, which eventually enables various applications.
As mentioned earlier, the first sensor can be used for calibrating the second sensor, as it does not need to be at least partially transparent and therefore it can include the necessary filters to measure the X, Y and Z components of light of any spectrum. This can then be used to calibrate the second sensor for measuring the X, Y and Z components corresponding to the primaries' spectra. To enable calibration of the second sensor with a first sensor, preferably a non-integrated first sensor is used and moved to all the positions of the sub-sensors of the second sensor, such that they measure the same value, after which each sub-sensor is calibrated individually. In an alternative embodiment, a display-integrated first sensor can be used, as described before.
For example, when using the second sensor, it has to be calibrated using a reference sensor, which in accordance with embodiments of the present invention is the first sensor. To calibrate the luminance measurements, measurements are performed at different luminance values by the second sensor and the first sensor. Since the second sensor does not include a V(lambda) filter, the measurements are made using the display for which the second sensor is to be used. The different luminance values can be obtained by depicting uniform images of certain driving levels on the display. Based on the two obtained measurements for the second sensor and the first sensor, a LUT can be created that is to be applied to all the measurements of the second sensor. This calibration can be performed at regular intervals in the field. Either a sub-sensor of the second sensor can overlap with the first sensor (in the embodiment where the first sensor is integrated into the display) or the first sensor can be an external sensor that can be positioned at any sub-sensor of the second sensor.
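The LUT-based calibration described above can be sketched as follows: paired readings from the second sensor (raw values) and the first sensor (reference luminances) at several driving levels are turned into a table, and later second-sensor readings are interpolated through it. This is only an illustrative sketch; the function names, the sample values and the choice of piecewise-linear interpolation are assumptions, not part of the disclosed implementation.

```python
import bisect

def build_calibration_lut(second_readings, first_luminances):
    """Pair raw second-sensor readings with reference luminances from
    the first sensor, sorted by raw reading, to form a lookup table."""
    pairs = sorted(zip(second_readings, first_luminances))
    return [p[0] for p in pairs], [p[1] for p in pairs]

def calibrated_luminance(raw, lut):
    """Convert a raw second-sensor reading to a calibrated luminance by
    linear interpolation through the LUT (clamped at the ends)."""
    xs, ys = lut
    if raw <= xs[0]:
        return ys[0]
    if raw >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, raw)
    t = (raw - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])
```

For example, with hypothetical measurement pairs (0.1, 5.0), (0.4, 60.0), (0.9, 400.0), a raw reading of 0.65 would be mapped to 230.0 cd/m². In the field, the LUT would be rebuilt at regular intervals as described above.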
For example, instead of using specific patterns offline such that the display cannot be used for a (brief) period of time, the pixel content that is displayed at that time can be used to calibrate the display in real time using the outputs of the first and second sensors, whereby the first sensor is an integrated sensor at the edge. But as stressed earlier, this can be challenging due to the design of the second sensor, which can be less suitable for online measurements, especially for color displays.
In another example, during use of the display the first sensor is continuously used to measure selected grey levels and/or colours of the sub-area. The second sensor is used to measure the behaviour of the active area of the display. These selected grey levels and/or colours are put there to follow in real time the ageing of the pixels. At certain timeframes the remaining efficiency of every grey level and/or colour is measured for the pixels by only turning on that grey level and/or colour and measuring the response (luminance and/or chromaticity) with the first and second optical sensors. This degradation is stored in a table (e.g. degradation per grey level and/or colour over time), e.g. in a memory of the display. Note that it is also possible to start several sequences of measuring degradation. In other words: at time zero one could start measuring all 255 grey levels. But one could also reserve a zone of the sub-area to start later tests. That zone can be temporarily driven with a zero value. After a time, e.g. 1000 hours, one could use the reserved zone to start a new series of measurements of all grey levels, etc.
In addition, every pixel or every zone of the display can be tracked as to how long that pixel or zone has been driven at a certain grey level/colour (or current level). By measuring the degradation of the different grey levels and/or colours using the second sensor, the degradation of every pixel or zone of the display can be measured and normalised using the output of the first sensor. Note that there are two degradations which are of importance: one is the degradation of the light emitted by the display, which depends on the specific grey level, and the second is the degradation of the second sensor, which can be corrected by the first sensor as described earlier. Basically, by measuring all the grey levels with both the first and second sensors, the second sensor can be calibrated by the first sensor. To do this, they can either measure at the same position when an external sensor is used, or the first sensor can be integrated into the display and a pattern representative of the entire display depicted, as described earlier.
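The degradation table mentioned above could, as a minimal sketch, look as follows; the class name, the efficiency definition (measured luminance divided by the first measured luminance for that grey level) and the storage layout are assumptions for illustration only.

```python
class DegradationTable:
    """Store the measured remaining efficiency per grey level over time
    (efficiency = measured luminance / initially measured luminance)."""

    def __init__(self):
        self.initial = {}   # grey level -> first measured luminance
        self.history = {}   # grey level -> list of (hours, efficiency)

    def record(self, grey_level, hours, measured_luminance):
        """Record one measurement and return the remaining efficiency."""
        if grey_level not in self.initial:
            self.initial[grey_level] = measured_luminance
        efficiency = measured_luminance / self.initial[grey_level]
        self.history.setdefault(grey_level, []).append((hours, efficiency))
        return efficiency
```

For instance, measuring 200 cd/m² for grey level 128 at time zero and 180 cd/m² after 1000 hours would record a remaining efficiency of 0.9 for that level, which can later be used to normalise the per-zone drive-history tracking.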
The first sensor can be used for calibrating the second sensor, as it does not need to be semitransparent and therefore it can include the necessary filters to measure the X, Y and Z components of light of any spectrum. This can then be used to calibrate the second sensor for measuring the X, Y and Z components corresponding to the primaries' spectra. To enable calibration of the second sensor with a first sensor, preferably a non-integrated first sensor is used and moved to all the positions of the sub-sensors of the second sensor, such that they measure the same value, after which each sub-sensor is calibrated individually. In an alternative embodiment, a display-integrated first sensor can be used, and a measurement can be made when the first sensor and a sub-sensor of the second sensor are designed to measure the same region on the display's active area. The other sub-sensors of the second sensor can then be calibrated relative to the sub-sensor that has been calibrated, by doing a scaling. However, as the ageing of the display can vary over the display's active area as well, an additional model may be required to compensate for the positional dependency. The first sensor makes optical measurements on a light output from the representative part of the display. So the first color sensor makes optical measurements on a light output from a representative part of the fixed format display, i.e. a small part, and is combined with the output from a second full screen panchromatic sensor. Periodically a calibration procedure is carried out in which a color (red, green or blue) is displayed and the outputs of the first and second sensors are compared. The average value from the second sensor (determined over the whole screen) is compared with the representative value from the first sensor. While the screen is in use the first sensor can be used at the same time.
In one embodiment light emitted as a combination of different primaries is displayed on the whole display. The luminance components can then be measured (assuming we have the acquired calibration data for that spectrum, or assuming we can measure the primaries independently and we have calibration data for them).
Hence, in a sequential mode, specific patterns are applied to the display that allow quantifying, over its active area, the luminance non-uniformity of a certain spectrum emitted by the display as a result of driving the primaries at specific levels. For example if white is displayed, the non-uniformity of the white point can be measured. In addition, as explained above it is possible to measure the chromaticity and luminance of the primaries. This potential change in luminance and chromaticity can be compensated.
In an alternative embodiment, a second sensor can be positioned in the light path of emitted light that is measured by the first sensor, such that they should measure the same result after calibration. In addition, they will be at approximately the same temperature. Therefore, if we have a model that predicts the sensor's response, depending on temperature and actually emitted light, for both sensors, the temperature dependency can be eliminated. Mathematically, this results in solving a set of two equations with two unknowns:
L1 = f1(T, Lact)
L2 = f2(T, Lact)
where L1 and L2 are the sensors' measurements, T is the temperature, and Lact is the actual light that should ideally be measured.
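As a sketch of eliminating the temperature dependency, assume (purely for illustration) that each sensor's response is linear in both the actual light and the temperature; the pair of equations then becomes a 2x2 linear system that can be solved directly by Cramer's rule. The model coefficients and function names are hypothetical, not taken from the disclosure.

```python
def solve_temperature_and_light(L1, L2, model1, model2):
    """Solve the 2x2 linear system
        L1 = a1*Lact + b1*T + c1
        L2 = a2*Lact + b2*T + c2
    for the actual light Lact and temperature T (Cramer's rule).
    model1 and model2 are the (a, b, c) coefficient triples of the
    assumed linear response models of the two sensors."""
    a1, b1, c1 = model1
    a2, b2, c2 = model2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("sensor models are degenerate; system unsolvable")
    lact = ((L1 - c1) * b2 - (L2 - c2) * b1) / det
    temp = (a1 * (L2 - c2) - a2 * (L1 - c1)) / det
    return lact, temp
```

With hypothetical models (a1, b1, c1) = (1.0, 0.5, 2.0) and (a2, b2, c2) = (0.8, -0.2, 1.0), readings L1 = 117 and L2 = 75 resolve to Lact = 100 and T = 30, illustrating how the common temperature term drops out.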
Furthermore, since we have the freedom to position the sensors over the active area of the screen as desired, a sensor can be positioned in the light path of emitted light that is measured, such that both sensors should measure the same result after calibration. By measuring regularly with both sensors, any variations between the sensors can be tracked. When the measured values start to differ, at least one of the sensors measures incorrectly. To calibrate the remainder of the sensors, several techniques can be used. In one technique, a LED based solution can be used. In this LED based solution, LEDs are positioned at the border of the display, outside the viewing angle (active area) of the display, at the viewers' side of the panel. These LEDs are intended to be used only for calibration purposes, are therefore used only rarely, and will not age significantly. The relation between sensor output and LED driving can be determined for all sensors upfront (for various levels, as the second sensor's signal can depend non-linearly on the impinging light), and can be stored in the display. This can then be used to recalibrate the sensors, provided the LEDs remain stable over the display's lifetime. In a further specific embodiment, the LEDs can also be measured using a separate measurement circuit. The downside of this approach is that the sensors will be calibrated based on the spectrum of the LEDs; spectral changes of the light emitted by the display can still lead to imperfect calibrations. Alternatively, when using an LCD, a similar methodology can be used with LEDs positioned in the backlight. At least part of the changes in spectrum of the light emitted by the display will be captured when using this alternative embodiment.
The downside of this alternative embodiment is that not all potential changes in the display can be captured, as the light emitted by the LEDs passes through several optical layers that can alter, such as foils and the LC layer.
As mentioned earlier, the combination of the first and second sensor can be used for various applications. Certain standardization organizations have published recommendations/guidelines on spatial uniformity for medical imaging, and the first and second sensor can be combined to allow a fixed format display to comply with them. These methods comprise measurements limited to a zone of the display area (not the entire active area). It is clear to anyone skilled in the art that the sensors elaborated in this invention can be suitably used for this application.
The system according to the present invention can be used in real time, thus during display of a main application. Test patterns can also be applied, especially for calibration. For example a pattern can be displayed and afterwards sensed by the sub-sensors of the second sensor to match the spatial configuration required to verify whether the display is compliant with these standards. Instead of merely verifying whether the display is compliant with the standards, an automated correction of the behaviour of the display is included within the scope of the present invention. Techniques exist in the current state of the art that allow obtaining a highly uniform spatial light output of a display.
To guarantee the consistency of this correction over the entire lifetime of the display, the combination of the first and second sensors can be used. As mentioned earlier, the first type of sensor can be used to calibrate the second sensor, and afterwards the second sensor can be put to good use by measuring with multiple transparent second sensors over the area of the screen and comparing the results with the output of the first sensor. In a second step the pixels of the fixed format display can be driven in accordance with the luminance results obtained to compensate for ageing on a pixel-by-pixel or region-by-region basis. Example: the sky is blue and is usually at the top of an image. This means the blue pixels age faster at the top than in the middle or bottom of the screen. By using a completely blue screen this ageing effect can be detected by the second sensor, and aged pixels can be driven harder than less aged pixels.
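The per-region ageing compensation sketched in this example could look as follows; the gain clipping, the function name and the efficiency values are illustrative assumptions (a real implementation would respect the panel's actual drive headroom).

```python
def ageing_gains(efficiencies, max_gain=1.5):
    """Per-region drive gains compensating the measured remaining
    efficiency (1.0 = unaged). Gains are clipped at max_gain so that
    heavily aged regions are not overdriven beyond the headroom."""
    return [min(1.0 / e, max_gain) for e in efficiencies]
```

For example, regions with remaining efficiencies 1.0, 0.8 and 0.5 would be driven with gains 1.0, 1.25 and 1.5 (the last one clipped), restoring uniformity as far as the headroom allows.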
As a consequence it is beneficial to update the luminance uniformity correction over time if such situations arise. A well-known correction algorithm as disclosed in EP1424672 can be applied to compensate for non-uniformity and spatial noise of the display. The display may be put through an initial calibration phase in which different grey levels and/or colours are displayed sequentially on the display system. For every displayed grey level and/or colour, the light output (luminance and/or colour information) is measured with the first sensor and with the second sensor at different locations on the display device. Interpolation can be used to estimate a measurement down to the level of an individual display pixel. The relation between the second sensor response and the response of the first sensor is stored in a memory of the display. This calibration phase allows a later second sensor response to be checked against a later first sensor response to see whether the luminance and/or chromaticity of the display has remained constant at various positions on the display.
Based on a limited number of measurements from the second sensors, an interpolated correction can be calculated and applied as a software precorrection to the display, similar to the correction as disclosed in EP'672, but using a limited number of interpolated measurements. Other correction methods disclosed in EP'672 can also be used, applying a correction per zone rather than per pixel. This method includes performing the correction once during production. In addition the correction can be repeated at certain time intervals using the first and second sensors to ensure the correction remains correct at every point in time. In a non-sequential mode, specific color patterns could be applied to the display and the first and second sensors used to measure the luminance and/or chromaticity. As a result non-uniformities can be corrected, for example in chromaticity. This allows quantifying the luminance non-uniformity of a certain color component of the display using the second sensors and comparing this to the output of the first sensor. Using these measurements, the non-uniformity of the white point can also be measured, e.g. when the first and second sensors are calibrated to the display spectrum and the contributions of the different primaries are known. Measuring luminance non-uniformity only requires calibrating the second sensor with respect to the first sensor for that specific spectrum, whereas measuring chromaticity non-uniformity requires calibrating the X, Y and Z components of the spectra of the primaries as well, as described before. Compliance with the DICOM GSDF standard is one of the essential characteristics of medical displays. It is essential that DICOM compliance is maintained throughout the lifetime of the display. Using the second sensor, DICOM compliance of the display can be verified at multiple positions. The output of the first sensor can be used to confirm the result.
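The interpolated software precorrection could, for illustration, expand a coarse grid of per-zone gains (derived from the limited sub-sensor measurements) to a per-pixel gain map by bilinear interpolation. This is a sketch under assumed names and data, not the correction algorithm of EP'672 itself.

```python
def expand_gain_map(zone_gains, width, height):
    """Bilinearly interpolate a coarse rows-by-cols grid of per-zone
    gain factors (e.g. one per sub-sensor of the second sensor) to a
    full per-pixel gain map of size height x width."""
    rows, cols = len(zone_gains), len(zone_gains[0])
    out = []
    for y in range(height):
        gy = y * (rows - 1) / (height - 1) if height > 1 else 0.0
        y0 = min(int(gy), rows - 2)
        ty = gy - y0
        row = []
        for x in range(width):
            gx = x * (cols - 1) / (width - 1) if width > 1 else 0.0
            x0 = min(int(gx), cols - 2)
            tx = gx - x0
            # Weighted sum of the four surrounding zone gains.
            g = (zone_gains[y0][x0] * (1 - tx) * (1 - ty)
                 + zone_gains[y0][x0 + 1] * tx * (1 - ty)
                 + zone_gains[y0 + 1][x0] * (1 - tx) * ty
                 + zone_gains[y0 + 1][x0 + 1] * tx * ty)
            row.append(g)
        out.append(row)
    return out
```

A 2x2 grid of zone gains expanded to a 3x3 pixel map reproduces the zone values at the corners and blends them in between, which is the essence of the interpolated per-pixel (or per-zone) precorrection.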
Hence the first sensor can be used to check whether the display remains DICOM compliant over time, and this can be compared with the second sensor outputs on a zone-by-zone basis. This allows a better accuracy of measurement, as the second sensors are used to measure away from the border of the active area of the display while the first sensor can correct for drift of the second sensors. The second sensors are able to measure at different positions on the active area of the display. DICOM compliance could be checked e.g. by measuring 64 uniform patterns spread equally over the dynamic range of the display. If the measured values are within 10% deviation of the ideal DICOM curve, the display is considered to be DICOM compliant. Any drift in the second sensors (e.g. because of ageing) can be corrected by comparison with the output of the first sensor.
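The DICOM compliance check described here can be sketched as a simple tolerance test; the ideal luminances would in practice be computed from the DICOM PS 3.14 GSDF for the display's measured black and white levels, which is not reproduced here. Names and sample values are illustrative.

```python
def dicom_compliant(measured, ideal, tolerance=0.10):
    """Return True when every measured luminance lies within the given
    relative tolerance (10% by default) of the corresponding ideal
    luminance. In practice `ideal` holds the 64 GSDF target values for
    the 64 uniform patterns spread over the display's dynamic range."""
    return all(abs(m - i) <= tolerance * i for m, i in zip(measured, ideal))
```

With hypothetical ideal values [1.0, 2.0, 4.0], measurements [1.05, 1.9, 4.3] pass (all within 10%) while [1.2, 2.0, 4.0] fails on the first pattern, which would flag the display for recalibration.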
Instead of only determining if the display is still compliant to the DICOM standards, the entire DICOM calibration of the display can be performed. In practice, this can be done by altering the LUT that is applied on the incoming image to obtain a DICOM calibrated image. To obtain this LUT, the native behaviour of the display could be measured using the second sensors (without the initial DICOM calibration), the resulting values can be used in combination with the ideal DICOM curve to obtain the required LUT. The first sensor output can be used to compensate for any drift in the second sensor outputs, using suitable embodiments as described in this invention.
When using a single ambient light sensor as the first sensor, local variations of the reflections perceived on the screen cannot be detected. The ambient light could therefore be considered acceptable, while local peaks can be very disturbing for a user. Instead of measuring at only one location, the second sensor is able to detect the impact of the ambient light on the entire area of the screen, overcoming the limitation of the current measurement methodologies.
This measurement methodology is valuable for example when the display is used in a room with significant ambient light. If a certain luminance ratio, for instance (L_white + L_ambient)/(L_black + L_ambient) > 250, is to be obtained over the entire active area of the screen, the backlight setting should be adapted to ensure compliance at every location on the screen. One sensor alone cannot guarantee a sufficient luminance ratio when the ambient light is non-uniform.
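The per-location luminance-ratio check could be sketched as follows, assuming per-region measurements of white, black and ambient luminance delivered by the second sensor's regions; names and values are hypothetical.

```python
def ratio_compliant(regions, target=250.0):
    """For each screen region given as (L_white, L_black, L_ambient),
    check that (L_white + L_ambient) / (L_black + L_ambient) reaches
    the target luminance ratio."""
    return [(w + a) / (b + a) >= target for (w, b, a) in regions]
```

For example, a region with 500/0.5 cd/m² white/black luminance passes under 1 cd/m² of reflected ambient light (ratio about 334) but fails under 5 cd/m² (ratio about 92), so the backlight would have to be raised to restore compliance in the brighter-lit region.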
Making use of first and second optical sensors as described above, the present invention provides a method for compensating for optical effects, such as ageing effects, of an image displayed on a display device, e.g. a fixed format device with pixels.
In the following the first and second sensors will be described separately, but in embodiments of the present invention they are used together on the same display, as described above. Fig. 10 is a schematic representation of a display system, e.g. a fixed format display, that can be used with the present invention, including a signal source 48, a controller unit 46, a driver 44 and a display 42 with a matrix of pixel elements that are driven by the driver 44. The invention makes use of a sub-area (patch) of the screen in a way that is optimised for/adapted to emissive displays in combination with the image on the complete screen. The sub-area is a first measurement zone that contains more than one pixel, and spatial intelligence is added to the content being shown in the first measurement area. In particular, in one embodiment spatial partitioning is used. With reference to Fig. 11, specific embodiments of the patterns used for measurements to be made by the first and the second sensors are illustrated. The display comprises an array of pixels and a small portion of these pixels is used as a sub-area (patch) or first measurement zone. The complete display forms a second measurement zone. The pixels in the sub-area or measurement zone are driven in accordance with one or more algorithms, each of which is an embodiment of the present invention. The pixels in the sub-area can be driven in the same way as pixels of the main part of the display, i.e. the active display area. The active display area and the sub-area are in one single display device. In this way the pixels in the sub-area age at the same rate as pixels or pixel regions of the main display. The pixels in the sub-area may also be driven at selected different levels and their ageing measured continuously, e.g.
by measuring the selected different levels while darkening the rest of the sub-area (it is obvious to anyone skilled in the art that the first sensor should be suitably calibrated to measure a sub-area for this application). The ageing of the pixels in the sub-area can then be input into a model that relates pixel drive history to ageing effects. This model can be continuously or periodically updated based on the ageing effects of the pixels in the sub-area. In this way continuous, real-time values of the ageing properties of the complete display and its different pixel driving histories are obtained.
The selected levels can be a function of what is shown in the visible area (i.e. the pixels in the sub-area are driven in a representative manner of the pixels in the active display area of the display), or a generic pattern that gives us information about a broad range of pixel levels (i.e. the pixels in the sub-area are driven in a way that is indicative of the ageing of the pixels in the active display area).
An advantage of the present invention in emissive displays is compensation of the ageing that is dependent on the history of the pixel driving. By giving the system access to a large collection of accurate ageing statistics, ageing can be accurately corrected. To implement these ageing algorithms and models a sub-area or measurement zone is provided on the display. Non-limiting embodiments of such a measurement zone are described below.
Fig. 1A and Fig. 1B are a top view and a front view respectively of a part of a display device 1 provided with a first sensor and a second sensor (not shown). The first sensor comprises an optical sensor unit 10 for use with an embodiment according to the present invention. Note that in this embodiment the first sensor is a display-integrated sensor at the edge of the display. Neither the arrangement of the sensor nor the type of sensor is considered to be a limitation on the present invention. The second sensor is provided for display areas of the active display, this second sensor being substantially transparent. This second sensor is not shown in the figure for clarity reasons and will be described later in the preferred embodiment where organic photoconductive sensors are used.
A fixed format display device 1 comprises a fixed format panel 2 and an electronic driving system 4 for driving the fixed format panel 2 to generate and display an image. The display device 1 has an active display area 6 on which the image is displayed as well as a sub-area 7 on which the same image is shown as on the whole display area 6. The fixed format panel 2 is kept fixed in a fixed format panel bezel 8.
According to an aspect of the present invention, a display device 1 is provided with an optical sensor unit 10 to make optical measurements on a light output from a sub-area 7 of the display panel 2 and a second optical sensor for making optical measurements on display areas of the active display. Suitable electric signals are generated from these optical measurements. A feedback system 12 receives the electric measurement signals 11 and controls the electronic driving system 4 on the basis of these signals.
Several ways exist to realise the first optical sensor unit 10. It can for instance be a clip-on sensor that is attached to the display initially during production. The whole of the first optical sensor unit 10 can be calibrated together and can also be interchangeable. Typically, the first optical sensor unit 10 has a light entrance plane or optical aperture 21 and a light exit plane 23. It can also have internal reflection planes. The light entrance plane 21 preferably has a stationary contact with the active display area 6 which is light tight for ambient light. If the contact is not light tight it may be necessary to compensate for ambient light by using an additional ambient light sensor which is used to compensate for the level of ambient light.
Preferably, the optical sensor unit 10 stands out above the active display area at a distance D of 5 mm or less.
According to a first embodiment, as shown in Fig. 2, the optical sensor unit 10 comprises an optical aperture 21, a photodiode sensor 22 and, in between, a light guide 34 made from, for example, solid PMMA (polymethyl methacrylate) structures 14, 16, 18, 20, of which one presents an aperture 21 to collect light and one presents a light exit plane 23. PMMA is a transparent (more than 90% transmission), hard and stiff material. The skilled person will appreciate that other materials may be used, e.g. glass. The solid PMMA structures 14, 16, 18, 20 serve for guiding light rays using total internal reflection. The PMMA structures 14 and 18 deflect a light bundle over 90°. The approximate path of two light rays 24, 26 is shown in Fig. 2.
The oblique parts of PMMA structures 14 and 18 are preferably metallised 28, 30 in order to serve as a mirror. The other surfaces do not need to be metallised as light is travelling through the PMMA structure using total internal reflection. In between the different PMMA structures 14, 16, 18 and 20 there is an air gap. At these interfaces, stray light (which is light not emitted by the display device) can enter the light guide 34.
Another type of first optical sensor unit 10 that can be used with embodiments according to the present invention is shown in Fig. 3. It is a fibre-optic implementation. The first optical sensor unit 10 comprises an optical aperture 21 and a first light sensor 22, with a bundle 32 of optical fibres in between. The optical fibres are preferably fixed together or bundled (e.g. glued), and the end surface is polished so as to accept light rays under a limited angle only (as defined in the attached claims).
Another first optical sensor unit that can be used with embodiments according to the present invention is shown in Figs. 4 to 6. In this embodiment, the first optical sensor unit 10 comprises a light guide 34 made of one piece of PMMA. The first optical sensor unit 10 furthermore comprises an aperture 21 at one extremity of the light guide 34, and a photodiode sensor 22 or equivalent device at the other extremity of the light guide 34. The light guide 34 can have a non-uniform cross-section in order to concentrate light towards the light exit plane 23. Light rays travel by total internal reflection through the light guide 34. At 90° angles, the light rays are deflected by reflective areas 28, 30, which are for example metallised to serve as a mirror, as in the first embodiment. The structure of this light guide 34 is rigid and simple to make. In an improvement of the structure (see Fig. 5), a reflective coating 36 is applied directly or indirectly (i.e. non-separably or separably) to the outer surface of the light guide 34, with the exception of the areas where light is coupled in (aperture 21) or out (light exit plane 23). The reflection coefficient of this reflective coating material 36 is 0.9 or lower. The coating lies at the surface of the light guide 34 and may not penetrate into it. In this case, ambient light is very well rejected for this specific embodiment of the first sensor. At the same time, the structure provides a narrow acceptance angle: light rays that enter the light guide 34 under a wide angle to the normal to the active display area 6, such as the ray represented by the dashed line 38, will be reflected and attenuated much more (because the reflection coefficient is 0.9 or lower) than the ray represented by the dotted line 40, which enters the structure under a narrow angle to the normal to the active display area 6.
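The angular selectivity provided by the lossy coating can be illustrated with a simplified ray model: each bounce off the coated wall multiplies a ray's intensity by the reflection coefficient (0.9 or lower), and rays entering at a wide angle bounce more often over the same guide length. The function name, the guide dimensions and the one-bounce-per-period geometry are illustrative assumptions, not the claimed construction.

```python
import math

def transmitted_fraction(entry_angle_deg, guide_length, guide_thickness,
                         reflection_coeff=0.9):
    """Fraction of a ray's intensity surviving a straight light guide,
    assuming one lossy bounce off the coated wall every
    guide_thickness / tan(angle) of axial travel (simplified ray model)."""
    angle = math.radians(entry_angle_deg)
    if angle == 0.0:
        return 1.0  # an axial ray never touches the coated walls
    bounce_period = guide_thickness / math.tan(angle)  # axial distance per bounce
    n_bounces = int(guide_length / bounce_period)
    return reflection_coeff ** n_bounces

# A near-axial ray survives almost unattenuated ...
near_axial = transmitted_fraction(5.0, guide_length=50.0, guide_thickness=5.0)
# ... while a wide-angle ray bounces many times and is strongly attenuated.
wide = transmitted_fraction(60.0, guide_length=50.0, guide_thickness=5.0)
```

With these example dimensions the 60° ray undergoes 17 lossy bounces and retains under 20% of its intensity, which is the narrow-acceptance-angle effect described above.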
The structure can further be modified to change the acceptance angle, as shown in Fig. 6. By selectively omitting the reflective layer 36 on the surface of the light guide 34, at places where the structure is not exposed to ambient light (e.g. where it is covered by a display housing 42), the light rays travelling under a large angle to the axis of the light guide 34 (or to the normal to the active display area 6) can be made to exit the optical sensor unit 10, while ambient light cannot enter the light guide 34.
In this way, light rays that enter the light guide 34 under a wide angle to the normal to the active display area 6, such as a light ray represented by dashed line 38, will be further attenuated and even be allowed to exit the light guide 34. Light rays that enter the light guide 34 under a small angle to the normal of the active display area 6, such as a light ray represented by dotted line 40, will be less attenuated and will only leave the light guide 34 at the level of the light exit plane 23 and photodiode sensor 22. Therefore, the light guide 34 is much more selective as a function of entrance angle of the light rays. This means that this light guide 34 realises a narrow acceptance angle.
In another embodiment, the first sensor is an ambient light sensor. A practical example of this specific embodiment of the first sensor is presented in Fig. 7, which illustrates a Barco Coronis product range that contains an ambient light sensor. This sensor is also integrated into the display's bezel, and includes a V(λ) filter to match the measured light to the luminance sensitivity of the human eye. However, the Barco Coronis sensor only measures ambient light coming from the external environment and not the light emitted by the display itself.
Fig. 12 shows the above display device 1 formed as a liquid crystal display device (LCD device) 2. The first sensor is not shown but is identical to any of the first sensors described with reference to Figs. 1 to 6. The display device can be any suitable fixed format display such as a plasma display device or any other kind of display device emitting light. The active area 3 of the display device 1 is divided into a number of groups 4 of display areas 5, wherein each display area 5 comprises a plurality of pixels. The active area 3 of this example comprises eight groups 4 of display areas 5; each group 4 comprises in this example ten display areas 5. One of the display areas may be the sub-area described above that is measured by the first sensor. Each of the display areas 5 is adapted for emitting light into a viewing angle of the display device to display an image to a viewer in front of the display device 1. Fig. 12 further shows a second sensor system 6 with a second sensor array 7 comprising, e.g., eight groups 8 of sensors. This corresponds to the embodiment where the actual sensing is made outside the visual area of the display, and hence the light needs to be guided towards the edge of the display. This embodiment thus corresponds to a waveguide solution and not to the preferred organic photoconductive sensor embodiment, where the light is captured on top of (part of) the display area 5 and the generated electronic signal is guided towards the edge. In the preferred embodiment, which uses organic photoconductive sensors to detect light, the actual sensor is created directly in front of (part of) the sub-area that needs to be sensed, and the consequentially generated electronic signal is guided towards the edge of the display using semitransparent conductors. One of the sensors 9 can be in addition to a first sensor (not shown), i.e. the first and second sensors overlap, or the first and second sensors can be mutually exclusive.
In the following it will be assumed, for explanation purposes only, that the first and second sensors overlap. Each of said groups 8 comprises, e.g., ten sensors 9 (individual sensors 9 are shown in Figs. 14, 15 and 16) and corresponds to one of the groups 4 of display areas 5. Each of the second sensors 9 corresponds to one corresponding display area 5. In a specific embodiment the sensor system 6 further comprises coupling devices 10 for coupling a display area 5 with the corresponding second sensors 9. Each coupling device 10 comprises a light guide member 12 and an incoupling member 13 for coupling the light into the light guide member 12, as shown in Fig. 13. A specific incoupling member 13 shown in Fig. 13 is cone-shaped, with a tip and a ground plane. It is to be understood that the tip of the incoupling member 13 is facing the display area 5. Light emitted from the display area 5 and arriving at the incoupling member 13 is then refracted at the surface of the incoupling member 13. The incoupling member 13 is formed, in one embodiment, as a laterally prominent incoupling member 14, which is delimited by two laterally coaxially aligned cones 15, 16, said cones 15, 16 having a mutual apex 17 and different apex angles a1, a2. The diameter d of the cones 15, 16 delimiting the incoupling member 13 can for instance be equal or almost equal to the width of the light guide member 12. Said light was originally emitted (arrow 18) from the display area 5 into the viewing angle of the display device 1; note that only light emitted in the perpendicular direction is depicted, while a display typically emits into a broader opening angle. The direction of this originally emitted light is perpendicular to the alignment of a longitudinal axis 19 of the light guide member 12. All light guide members 12 run parallel in a common plane 20 to the sensor array 7 at one edge 21 of the display device 1. Said edge 21 and the sensor array 7 are outside the viewing angle of the display device 1.
Alternatively, use may be made of a diffraction grating as an incoupling member 13. Herein, the grating is provided with a spacing, i.e. the distance between the laterally prominent parts. The spacing is in the order of the wavelength of the coupled light, particularly between 500 nm and 2 μm. In a further embodiment, a phosphor is used. The size of the phosphor could be smaller than the wavelength of the light to detect.
The light guide members 12 alternatively can be connected to one single sensor 9. All individual display areas 5 can be detected by a time sequential detection mode, e.g. by sequentially displaying a patch to be measured on the display areas 5. The light guide members 12 are for instance formed as transparent or almost transparent optical fibres 22 (or microscopic light conductors) absorbing just a small part of the light emitted by the specific display areas 5 of the display device 1 . The optical fibres 22 should be so small that a viewer does not notice them but large enough to carry a measurable amount of light. The light reduction due to the light guide members and the incoupling structures for instance is about 5% for any display area 5. More generally, optical waveguides may be applied instead of optical fibres, as discussed hereinafter.
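The time-sequential detection mode with a single sensor 9 can be sketched as follows. `FakeDisplay` and `FakeSensor` are illustrative stand-ins for the display driver and the shared sensor, and the 5% tap-off fraction echoes the light reduction figure given above; none of these names come from the source.

```python
class FakeDisplay:
    """Minimal stand-in for the display driver (illustration only)."""
    def __init__(self, brightness_per_area):
        self.brightness = brightness_per_area  # nominal output per display area
        self.current = None
    def show_patch(self, area):
        self.current = area  # white measurement patch on this area only
    def clear(self):
        self.current = None

class FakeSensor:
    """Single sensor fed by all light guide members."""
    def __init__(self, display):
        self.display = display
    def read(self):
        # assume ~5% of the lit area's light is tapped off by its guide
        return 0.05 * self.display.brightness[self.display.current]

def measure_areas_sequentially(display, sensor, areas):
    """Show a patch on one display area at a time and record the reading."""
    readings = {}
    for area in areas:
        display.show_patch(area)
        readings[area] = sensor.read()
    display.clear()
    return readings

display = FakeDisplay({"A1": 400.0, "A2": 380.0})
sensor = FakeSensor(display)
readings = measure_areas_sequentially(display, sensor, ["A1", "A2"])
```

Because only one area is lit per step, a single sensor suffices to resolve all display areas, at the cost of the measurement being visible to the viewer.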
Most of the display devices 1 are constructed with a front transparent plate such as a glass plate 23 serving as a transparent medium 24 in a front section 25 of the display device 1. Other display devices 1 can be made rugged with other transparent media 24 in the front section 25. Suitably, the light guide member 12 is formed as a layer onto a transparent substrate such as glass. A material suitable for forming the light guide member 12 is for instance PMMA (polymethylmethacrylate). Another suitable material is for instance commercially available from Rohm & Haas under the tradename Lightlink™, with product numbers XP-5202A Waveguide Clad and XP-6701A Waveguide Core. Suitably, a waveguide has a thickness in the order of 2-10 micrometers and a width in the order of micrometers to millimeters or even centimetres. Typically, the waveguide comprises a core layer that is defined between one or more cladding layers. The core layer is for instance sandwiched between a first and a second cladding layer. The core layer effectively carries the light to the second sensors. The interfaces between the core layer and the cladding layers define surfaces of the waveguide at which reflection takes place so as to guide the light in the desired direction. The incoupling member 13 is suitably defined so as to redirect light into the core layer of the waveguide.
Alternatively, parallel coupling devices 10 formed as fibres 22 with a higher refractive index are buried into the medium 24, especially the front glass plate 23. Above each area 5 the coupling device 10 is constructed on a predefined guide member 12 so that light from that area 5 can be transported to the edge 21 of the display device. At the edge 21 the second sensor array 7 captures light of each display area 5 on the display device 1. This array 7 would of course require the same pitch as the fibres 22 in the plane 20 if the fibres run straight to the edge, without being tightened or bent. While fibres are mentioned herein as an example, another light guide member, such as a waveguide, could be applied alternatively.
In Fig. 12 the coupling devices 10 are displayed with different lengths. In reality, full length coupling devices 10 may be present. The incoupling member 13 is therein present at the destination area 5 for coupling the light (originally emitted from the corresponding display area 5 into the viewing angle of the display device 1) into the light guide member 12 of the coupling device 10. The light is afterwards coupled from an end section of the light guide member 12 into the corresponding second sensor 9 of the sensor array at the edge 21 of the display device 1. The sensors 9 preferably only measure light coming from the coupling devices 10. In addition, the difference between a property of light in the coupling device 10 and that in the surrounding front glass plate 23 is measured. This combination of measuring methods leads to the highest accuracy. The property can be intensity or colour, for example.
In one method, each coupling device 10 carries light that is representative for light coming out of a pre-determined area 5 of the display device 1 . Setting the display 3 full white or using a white dot jumping from one area to another area 5 gives exact measurements of the light output in each area 5.
However, with this method it is not possible to perform continuous measurements without the viewer noticing it. In this case the relevant output light property, e.g. colour or luminance, should be calculated depending on the image information, the radiation pattern of a pixel and the position of a pixel with respect to the coupling device 10. Image information determines the value of the relevant property of light, e.g. how much light is coming out of a specific area 5 (for example a pixel of the display 3) or its colour.
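The content-dependent estimate described here amounts to a weighted sum: each pixel contributes according to the image information and a pre-characterised weight combining its radiation pattern with its position relative to the coupling device. The sketch below assumes such weights are available from a one-off calibration; the function name is hypothetical.

```python
def expected_sensor_signal(pixel_values, weights):
    """Predicted coupling-device signal during normal image display.

    `pixel_values` are the driving levels of the pixels near the coupling
    device; `weights` are assumed per-pixel calibration factors folding in
    the radiation pattern and the pixel's position relative to the device.
    """
    return sum(v * w for v, w in zip(pixel_values, weights))

# Three nearby pixels with hypothetical calibration weights:
predicted = expected_sensor_signal([255, 128, 0], [0.5, 0.25, 0.1])
```

Comparing this prediction with the actually measured signal then allows drift to be estimated without displaying a dedicated test patch.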
Consider the example of optical fibers 22 shaped like a beam, i.e. with a rectangular cross-section, in the plane-parallel front glass plate 23, for instance a plate 23 made of fused silica. To guide the light through the fibers 22, the light must be travelling in one of the conductive modes. For light coming from outside the fibers 22 or from outside the plate 23, it is difficult to be coupled into one of the conductive modes. To get into a conductive mode a local alteration of the fiber 22 is needed. Such local alteration may be obtained in different manners, but in this case there are more important requirements than just getting light inside the fiber 22.
For accurate measuring it is important that only light from a specific direction (directed from the corresponding display area 5 into the viewing angle of the display device) enters into the corresponding coupling device 10 (fiber 22). Hence, light from outside the display device 1 ('noisy' light) will not interfere with the measurement.
Additionally, it is important that, upon insertion of the light guide member (e.g. fiber or waveguide), the displayed image is hardly, not substantially, or not at all disturbed.
Use can be made of an incoupling member 13 for coupling light into the light guiding member. The incoupling member 13 is a structure with limited dimensions applied locally at a location corresponding to a display area. The incoupling member 13 has a surface area that is typically much smaller than that of the display area, for instance at most 1% of the display area, more preferably at most 0.1% of the display area. Suitably, the incoupling member is designed such that it leads light into a lateral direction.
Additionally, the incoupling member may be designed to be optically transparent in at least a portion of its surface area for at least a portion of the light falling upon it. In this manner the portion of the image corresponding to the location of the incoupling member is still transmitted to a viewer, and as a result the incoupling member will not be visible. It is observed for clarity that such partial transparency of the incoupling member is highly preferred, but not deemed essential. Such minor portion is for instance in an edge region of the display area, or in an area between a first and a second adjacent pixel. This is particularly feasible if the incoupling member is relatively small, for instance at most 0.1% of the display area.
In a further embodiment, the incoupling member is provided with a ground plane that is circular, oval or provided with rounded edges. The ground plane of the incoupling member is typically the portion located at the side of the viewer. Hence, it is most essential for visibility. By using a ground plane without sharp edges or corners, this visibility is reduced and any scattering on such sharp edges is prevented.
A perfect separation may be difficult to achieve, but with the sensor system 6 comprising the coupling device 10 shown in Fig. 13 a very good signal-to-noise-ratio (SNR) can be achieved.
In another preferred embodiment a coupling device such as an incoupling member is not required. For example, organic photoconductive sensors can be used as the sensors. The organic photoconductive sensors serve as sensors themselves (their resistivity alters depending on the impinging light) and because of that they can be placed directly on top of the location where they should measure (for instance, a voltage is put over the sensor's electrodes, and an impinging-light dependent current consequentially flows through the sensor, which is measured by external electronics). Light collected for a particular display area 5 does not need to be guided towards a sensor 9 at the periphery of the display (i.e. contrary to what is exemplified by Fig. 14). In a preferred embodiment, light is collected by a transparent or semi-transparent sensor 101 placed on each display area 5. The conversion of photons into charge carriers is done at the display area 5 and not at the periphery of the display, and therefore the sensor, although semitransparent, will not be visible but will be within/inside the viewing angle. Just as for the sensor system 6 of Fig. 12, this embodiment may also have a sensor array 7 comprising, e.g., a plurality of groups, such as eight groups 8 of sensors 9, 101. Each of said groups 8 comprises a plurality of sensors, e.g. ten sensors 9, and corresponds to one of the groups 4 of display areas 5. Each of the sensors 9 corresponds to one corresponding display area 5, as illustrated in Fig. 17.
Fig. 16 shows a side view of a second sensor system 9 according to a further embodiment of the invention, which is for use with a first sensor (not shown). The sensor system of this embodiment comprises transparent sensors 33 which are arranged in a matrix with rows and columns. The sensors can for instance be photoconductive sensors, hybrid structures, composite sensors, etc. The sensor 33 can be realized as a stack comprising two groups 34, 35 of parallel bands 36 in two different layers 37, 38 on a substrate 39, preferably the front glass plate 23. An interlayer 40 is placed between the bands 36 of the different groups 34, 35. This interlayer is the photosensitive layer of this embodiment. The bands (columns) of the first group 34 run perpendicular to the bands (rows) of the second group 35 in a parallel plane. The sensor system 6 divides the display area 1 into different zones by design, as will be clear to anyone skilled in the art, each with its own optical sensor 9 connected by transparent electrodes.
The addressing of the second sensors may be accomplished by any known array addressing method and/or devices. For example, a multiplexer (not shown) can be used to enable addressing of all second sensors. In addition a microcontroller is also present (not shown). The display can be adapted, e.g. by suitable software executed on a processing engine, to send a signal to the microcontroller (e.g. via a serial cable: RS232). This signal determines which second sensor's output signal is transferred. For example, a 16-channel analogue multiplexer ADG1606 (from Analog Devices) is used, which allows connection of a maximum of 16 sensors to one drain terminal (using a 4-bit input on 4 selection pins).
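The 4-bit channel selection can be sketched as follows. The pin ordering (MSB first) and the 1-based channel numbering are assumptions made for illustration, not details taken from the ADG1606 datasheet.

```python
def selection_pins(channel):
    """Logic levels for the 4 selection pins of a 16-channel analogue
    multiplexer, for a channel numbered 1..16 (MSB first; ordering and
    numbering are illustrative assumptions)."""
    if not 1 <= channel <= 16:
        raise ValueError("channel must be 1..16")
    code = channel - 1  # 4-bit binary address 0..15
    return [(code >> bit) & 1 for bit in (3, 2, 1, 0)]
```

The microcontroller described below would drive these four levels onto the multiplexer's selection pins to route the chosen sensor to the single drain terminal.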
The multiplexer is preferably a low-noise multiplexer. This is important because the signal measured is typically a low-current analogue signal, and therefore very sensitive to noise. The very low (4.5 Ω) on-resistance makes this multiplexer ideal for this application where low distortion is needed. This on-resistance is negligible in comparison to the resistance range of the sensor material itself (e.g. of the order of magnitude of MΩ to 100 GΩ). Moreover, the power consumption of this CMOS multiplexer is low.
To control the multiplexer switching, a simple microcontroller can be used (e.g. Basic Stamp 2) that can be programmed with Basic code: i.e. its input is a selection between 1 and 16; its output goes to the 4 selection pins of the multiplexer.
To communicate with the second sensor, a layered software structure is foreseen. The layered structure begins from the high-level implementation in QAWeb, which can access BarcoMFD, a Barco in-house software program, which can eventually communicate with the firmware of the display, which handles the low-level communication with the sensor. In fact, by communicating with an object from upper levels, the functionality can be accessed quite easily.
The communication with the second sensor is preferably a two-way communication. For example, the command to "measure" can be sent from the software layer and this will eventually be converted into a signal activating the sensor (e.g. a serial communication to the ADC to ask for a conversion) which puts the desired voltage signal over the sensor's electrodes. The sensor (selected by the multiplexer at that moment in time) will respond with a signal depending on the incoming light, which will eventually result in a signal in the high-level software layer.
In order to reach the eventual high-level software layer, the analogue signal generated by the second sensor and selected by the multiplexer is preferably filtered, and/or amplified and/or digitized. The amplifiers used are preferably low-noise amplifiers such as the LT2054 and LT2055: zero-drift, low-noise amplifiers. Different stages of amplification can be used. For example, in an embodiment, stages 1 to 3 are illustrated in Figs. 8a to 8c respectively. In a first stage the current-to-voltage amplification has a first factor, e.g. a factor of 2.2×10⁶ Ω. In a second stage the closed-loop amplification is adjustable by a second factor, e.g. between about 1 and 140 (using a digital potentiometer). And finally, in a third stage, low-pass filtering is enabled (first order, with f0 at about 50 Hz, cf. an RC constant of 22 ms).
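The DC behaviour of the first two stages can be sketched numerically. Only the 2.2×10⁶ Ω transimpedance and the 1 to 140 gain range come from the text; the example photocurrent and gain setting are illustrative assumptions.

```python
def analogue_chain_output(sensor_current_a, closed_loop_gain):
    """Voltage after the first two amplification stages: a transimpedance
    stage (2.2e6 V/A) followed by an adjustable closed-loop gain (1..140
    via a digital potentiometer). The third stage only filters and does
    not change the DC level."""
    transimpedance_ohm = 2.2e6  # stage 1: current-to-voltage factor
    return sensor_current_a * transimpedance_ohm * closed_loop_gain

# A hypothetical 10 nA photocurrent at gain 50 yields about 1.1 V.
v_out = analogue_chain_output(10e-9, 50)
```

The adjustable second stage lets a small photocurrent from a dim display area be scaled into the ADC's input range.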
Digitization can be performed by an analog-to-digital converter (ADC) such as an LTC2420, a 20-bit ADC which allows differentiating more than 10⁶ levels between a minimum and a maximum value. For a typical maximum of 1000 Cd/m² (white display, backlight driven at high current), it is possible to discriminate 0.001 Cd/m² if no noise is present.
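The resolution arithmetic behind this claim is straightforward:

```python
# Resolution of a 20-bit ADC over a 1000 Cd/m2 full scale.
levels = 2 ** 20                           # 1 048 576 distinct codes, i.e. > 10^6
full_scale_cd_m2 = 1000.0                  # white display, backlight at high current
smallest_step = full_scale_cd_m2 / levels  # ~0.00095 Cd/m2, noise permitting
```

In other words, the quantisation step is just under 0.001 Cd/m², matching the discrimination figure stated above.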
In addition, the current timing in the circuit is mainly determined by the settings of a ΔΣ-ADC such as the LTC2420. Firstly, the most important time is the conversion time from analogue to digital (about 160 ms; the internal clock is used with 50 Hz signal rejection). Secondly, there is the output time of the 24 clock cycles needed to read the 20-bit digital raw value out of the serial register of the LTC2420, which is of secondary importance (e.g. over a serial 3-wire interface). The choice of the ADC (and its settings) corresponds to the target of stable high-resolution light signals (a 20-bit digital value, averaged over a time of 160 ms, using 50 Hz filtering).
Additionally, Fig. 9 illustrates an overview of the data path from the second sensor to the ADC. The ADC output can be provided to a processor, e.g. in a separate controller or in the display.
The embodiments that utilize a transparent sensor positioned on top of the location where it should measure preferably require suitable transparent electrodes that allow the electronic signal to be guided towards the edge, where it can be analyzed by the external electronics.
Suitable materials for the transparent electrodes are for instance tin oxides such as ITO (Indium Tin Oxide), zinc oxide, or poly-3,4-ethylenedioxythiophene polystyrene sulfonate (known in the art as PEDOT-PSS). This sensor array 7 can be attached to the front glass or laminated on the front glass plate 23 of the display device 2, for instance an LCD.
The difference between using a structure comprising an inorganic transparent conductive material such as ITO and, for instance, a thin structure such as proposed in the article of J.C. Ho et al. in Applied Physics Letters 93 is not only the use of an inherently transparent material such as ITO instead of an inherently non-transparent material such as gold electrodes. The work function of the electrode material influences the efficiency of the sensor. In the bilayer photoconductor created in the previously mentioned article, a material with a higher work function is most likely more efficient. Therefore, Au is used, which has a work function of around 5.1 eV, while ITO has a work function of typically 4.3-4.7 eV. This would result in a worse performance. These known designs thus seem to teach away from ITO, at least when one expects an efficient sensor. The article cited above uses gold as electrode; US 6348290 suggests the use of a number of metals including Indium or an alloy of Indium (see also column 7, lines 25-35 of US'290). Conductive tin oxide is not named. Furthermore, US 6348290 suggests using an alloy because of its superiority in e.g. electrical properties. However, when ITO is used instead of gold, it was an unexpected finding that the structure would work so well as to be usable for the monitoring of luminance in a display. Also, previously known designs did not aim to create a transparent sensor, since gold or other metal electrodes are used, which are highly light absorbing.
In accordance with embodiments of the present invention, use is made of an at least partially transparent electrode material. This is for instance ITO. Returning to Fig. 17, the organic layer 101 is preferably an organic photoconductive sensor, and may be a monolayer, a bilayer, or a multiple-layer structure. Most suitably, the organic layer(s) 101 comprise an exciton generation layer (EGL) and a charge transport layer (CTL). The charge transport layer (CTL) is in contact with a first and a second transparent electrode, between which electrodes a voltage difference may be applied. The thickness of the CTL can be for instance in the range of 25 to 100 nm, e.g. 80 nm. The EGL may have a thickness in the order of 5 to 50 nm, for instance 10 nm. The material for the EGL is for instance a perylene derivative. One specific example is 3,4,9,10-perylenetetracarboxylic bisbenzimidazole (PTCBI). The material for the CTL is typically a highly transparent p-type organic semiconductor material. Various examples are known in the art of organic transistors and of hole transport materials for use in organic light emitting diodes. Examples include pentacene, poly-3-hexylthiophene (P3HT), 2-methoxy-5-(2'-ethyl-hexyloxy)-1,4-phenylene vinylene (MEH-PPV), and N,N'-bis(3-methylphenyl)-N,N'-diphenyl-1,1'-biphenyl-4,4'-diamine (TPD). Mixtures of small molecules and polymeric semiconductors in different blends could be used alternatively. The materials for the CTL and the EGL are preferably chosen such that the energy levels of the orbitals (HOMO, LUMO) are appropriately matched, so that excitons dissociate at the interface of both layers. In addition to these two layers, a charge separation layer (CSL) may be present between the CTL and the EGL in one embodiment. Various materials may be used as charge separation layer, for instance Al2O3.
Instead of using a bilayer structure, a monolayer structure can also be used. This configuration is also tested in the referenced paper, with only an EGL. Again, in the paper the electrodes are Au, whereas we made an embodiment with ITO electrodes, such that a (semi)transparent sensor can be created. We also created embodiments with other organic layers, both for the EGL and the CTL, such as PTCDA, with ITO electrodes. In a preferred embodiment, PTCBI as EGL and TMPB as CTL were used. The organic photoconductive sensor may be a patterned layer or may be a single sheet covering the entire display. In the latter case, each of the display areas 5 will have its own set of electrodes but they will share a common organic photosensitive layer (simple or multiple). The added advantage of a single sheet covering the entire display is that the possible color-specific absorption by the organic layer will be uniform across the display. In the case where several islands of organic material are separated on the display, non-uniformity in luminance and/or color is more difficult to compensate.
In one further implementation, the electrodes are provided with finger-shaped extensions, as presented in Fig. 17. The extensions of the first and second electrode preferably form an interdigitated pattern. The number of fingers may be anything between 2 and 5000, more preferably between 100 and 2500, suitably between 250 and 1000. The surface area of a single transparent sensor may be in the order of square micrometers but is preferably in the order of square millimeters, for instance between 1 and 7000 square millimeters. One suitable finger shape is for instance 1500 by 80 micrometers in size, but a size of for instance 4 x 6 micrometers is not excluded either. The gap in between the fingers can for instance be 15 micrometers in one suitable implementation.
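A rough footprint estimate for such an interdigitated pattern follows from the finger and gap dimensions given above. The n-fingers-plus-(n-1)-gaps geometry is a simplifying assumption that ignores bus bars and margins, and the function name is hypothetical.

```python
def interdigitated_footprint_mm(n_fingers, finger_len_um=1500.0,
                                finger_w_um=80.0, gap_um=15.0):
    """Approximate footprint (length, width) in mm of an interdigitated
    electrode pattern: n fingers of width finger_w_um separated by
    (n - 1) gaps of gap_um (simplified geometric estimate)."""
    width_um = n_fingers * finger_w_um + (n_fingers - 1) * gap_um
    return finger_len_um / 1000.0, width_um / 1000.0

# 250 fingers of 1500 x 80 um with 15 um gaps span roughly 1.5 x 23.7 mm,
# comfortably within the square-millimetre sensor areas mentioned above.
length_mm, width_mm = interdigitated_footprint_mm(250)
```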
In connection with said further implementation, it is most suitable to build up the sensor on a substrate with said electrodes. The organic layer 101 therein overlies or underlies said electrodes. In other words, in Fig. 17 a network of sensors 9 with a single layer of electrodes 36 is illustrated. The electrodes 36 are made of a transparent conducting material, like any of the materials described above, e.g. ITO (Indium Tin Oxide), and are covered by organic layer(s) 101. In addition, the organic photoconductive sensor need not be limited laterally. The organic layer may be a single sheet covering the entire display (not shown). Each of the display areas 5 will have its own set of electrodes 36 (one of the electrodes can be shared in some embodiments where sensors are addressed sequentially) but they can share a common organic photosensitive layer (simple or multiple). The added advantage of a single sheet covering the entire display is that the possible color-specific absorption by the organic layer will be to a major extent uniform across the display. In the case where several islands of organic material are separated on the display, non-uniformity in luminance and/or color is more difficult to compensate.
The first and second electrodes with the interlayer may, on a higher level, be arranged in a matrix (i.e. the areas where the finger patterns are located are arranged over the display's active area according to a matrix) for appropriate addressing and read-out, as known to the skilled person. Most suitably, the interlayer organic layer(s) is/are deposited after provision of the electrodes. The substrate may be provided with a planarization layer. Optionally, a transistor may be provided at the output of the photosensor, particularly for amplification of the signal for transmission over the conductors to a controller. Most suitably, use is made of an organic transistor. Its electrodes may be defined in the same electrode material as those of the photodetector.
These organic photoconductors are the preferred embodiment of the second sensor. Unless indicated otherwise, the detailed descriptions of this patent are elaborated with this specific embodiment in mind.
These organic photoconductors serve as sensors and because of that they can be placed directly on top of the location where they should measure. Consequentially, light collected for a particular display area does not need to be guided towards a sensor at the periphery of the display. In the most preferred embodiment, light is collected by a transparent or semi-transparent second sensor placed on each display area. The conversion of photons into charge carriers is done at the display area and not at the periphery of the display, and therefore the sensor will be within/inside the viewing angle.
Alternatively, second sensors comprising composite materials could be constructed. Composite materials here denote organic or inorganic nano- or microparticles dissolved in the organic layers, or an organic layer consisting of a combination of different organic materials (dopants). Since the organic photosensitive particles often exhibit a strongly wavelength-dependent absorption coefficient, this configuration can result in a less coloured transmission spectrum, or can be used to improve the detection over the whole visible spectrum, or to improve the detection of a specific wavelength region. A disadvantage, however, is that in all these embodiments the sensor only provides one output current per measurement for the entire spectrum. In other words, the X, Y and Z tristimulus values of a given spectrum emitted by the display have to be measured sequentially; provided the sensor is sensitive to the entire visible spectrum, this can be enabled by measuring and calibrating the X, Y and Z components of light emitted with a certain spectrum as described earlier. Sequential measurement can be avoided by using three independent photoconductors that measure red, green and blue independently, each of which can be calibrated to measure X, Y and Z for a given spectrum. They could be conceived similarly to the previous descriptions, and stacked on top of each other, or placed adjacent to each other on the substrate, to obtain an online colour measurement.
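The calibration of X, Y and Z components described above can be illustrated numerically. The sketch below is a non-authoritative illustration: the function names and the use of a linear 3×3 model are assumptions, not taken from this application. It fits a matrix mapping the three photoconductor channel readings obtained under red, green and blue test patterns to the tristimulus values reported by a reference colorimeter:

```python
import numpy as np

def fit_calibration(sensor_readings, reference_xyz):
    """Fit M such that M @ sensor_channels ~= XYZ for this display's spectra.

    sensor_readings: 3x3 array, row i = the three channel readings while the
    display shows test pattern i (full-field red, green, blue).
    reference_xyz:   3x3 array, row i = XYZ measured by a reference colorimeter
    for the same pattern.
    """
    S = np.asarray(sensor_readings, dtype=float)
    T = np.asarray(reference_xyz, dtype=float)
    # We want M @ S[i] == T[i] for each pattern, i.e. M @ S.T == T.T
    return T.T @ np.linalg.pinv(S.T)

def measure_xyz(M, sensor_reading):
    """Convert one raw three-channel reading into tristimulus values."""
    return M @ np.asarray(sensor_reading, dtype=float)
```

With a single-output (panchromatic) sensor the same fit degenerates to three scalar gains measured sequentially, one per primary.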
In addition, instead of using organic layers to generate charges, hybrid structures using a mix of organic and inorganic materials can be used, for example a bilayer device with a quantum-dot exciton generation layer and an organic charge transport layer, for instance colloidal cadmium selenide quantum dots combined with a charge transport layer comprising Spiro-TPD.
Furthermore, an organic photoconductor can be a monolayer, a bilayer or, in general, a multiple (>2) layer structure. An example of an organic bilayer photoconductor is known from Applied Physics Letters 93, "Lateral organic bilayer heterojunction photoconductors" by John C. Ho, Alexi Arango and Vladimir Bulovic. However, the bilayer disclosed by J.C. Ho uses gold electrodes, which are non-transparent, and therefore this sensor is not usable as a transparent sensor. The bilayer comprises an EGL (PTCBI), or Exciton Generation Layer, and a CTL (TPD), or Charge Transport Layer. In another preferred embodiment, an alternative sensor, like the organic sensor described above, can be used. First of all, the sensor can be panchromatic, meaning that it is sensitive to the entire visible spectrum. This implies that the sensor can be sensitive to the red, green and blue spectra emitted by the display. The first downside of such a sensor is the lack of colour filters, which are typically used for measuring the CIE XYZ components. The sensor can nevertheless be used to measure the brightness of light with a certain spectrum after calibrating it with the first sensor that includes the required filters. This calibration step is crucial, as the measured brightness will be relative to the source due to the lack of a V(λ) filter. The absorption spectrum of the exciton generation layer (organic material) is linked directly to the spectral sensitivity of the sensor. Therefore, the luminance versus digital driving level (DDL) curve can be calibrated for all three primaries, and will differ for displays having different spectra. After the calibration, the luminance of the different primaries can be measured using a matrix of sensors in order to obtain the luminance of the colour components.
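The calibration of the luminance versus DDL behaviour against the filtered first sensor can be sketched as a lookup table with linear interpolation: during calibration the first sensor records true luminance while the panchromatic sensor records its raw photocurrent at a set of driving levels; afterwards a raw photocurrent is converted to luminance through the stored curve for the primary being shown. This is only an illustrative sketch; the class and method names are hypothetical and this application does not prescribe the data structure:

```python
import bisect

class DdlLuminanceCurve:
    """Calibrated photocurrent-to-luminance curve for one primary."""

    def __init__(self, photocurrents, luminances):
        # store calibration points sorted by photocurrent
        pairs = sorted(zip(photocurrents, luminances))
        self.i = [p for p, _ in pairs]
        self.l = [l for _, l in pairs]

    def luminance(self, photocurrent):
        # clamp outside the calibrated range, interpolate linearly inside
        if photocurrent <= self.i[0]:
            return self.l[0]
        if photocurrent >= self.i[-1]:
            return self.l[-1]
        k = bisect.bisect_left(self.i, photocurrent)
        f = (photocurrent - self.i[k - 1]) / (self.i[k] - self.i[k - 1])
        return self.l[k - 1] + f * (self.l[k] - self.l[k - 1])
```

One such curve would be stored per primary, since the sensor's spectral sensitivity makes the calibration display-spectrum dependent.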
The sensor design is fundamentally a compromise between transparency and efficiency: light needs to be sensed, which implies that photons must be absorbed, while the sensor should nevertheless remain (almost) transparent. This compounds the lack of a V(λ) filter, such that (minor) errors can occur when the emitted spectrum is non-constant over the active area of the display.
However, the major advantage of the sensor is its ability to measure over the entire active area of the display, which allows obtaining a global measurement result instead of a local measurement near the border of the screen.
Returning to Fig. 16 it is observed that a second sensor having a first and a second electrode with the interlayer may, on a higher level, be arranged in a matrix for appropriate addressing and read out, as known to the skilled person. Most suitably, the interlayer is deposited after provision of the electrodes. The substrate may be provided with a planarization layer.
Optionally, a transistor may be provided at the output of the photosensor, particularly for amplification of the signal for transmission over the conductors to a controller. Most suitably, use is made of an organic transistor. Electrodes may be defined in the same electrode material as those of the photodetector.
Alternatively, the organic layer 101 may be patterned to be limited to one display area 5, a group of display areas 5, or alternatively certain pixels within the display area 5. Alternatively, the interlayer is substantially unpatterned. Any color specific absorption by the transparent sensor will then be uniform across the display.
Alternatively, the organic layer 101, as illustrated in Fig. 17, may comprise nanoparticles or microparticles, either organic or inorganic, dissolved or dispersed in an organic layer. A further alternative is one or more organic layers 101 comprising a combination of different organic materials. As the organic photosensitive particles often exhibit a strongly wavelength-dependent absorption coefficient, such a configuration can result in a less coloured transmission spectrum. It may further be used to improve detection over the whole visible spectrum, or to improve the detection of a specific wavelength range.
Suitably, more than one transparent sensor may be present in a display area 5, as illustrated in Fig. 17. Additional second sensors may be used to improve the measurement, but also to provide different colour-specific measurements. Additionally, by covering substantially the full front surface with transparent sensors, any reduction in intensity of the emitted light due to absorption and/or reflection in the at least partially transparent sensor will be less visible or even invisible, because position-dependent variations over the active area can be avoided this way.
By constructing the second sensor 9 as shown in Fig. 16, the sensor surface of the transparent sensor 30 is automatically divided into different zones. A specific zone corresponds to a specific display area 5, preferably a zone covering a plurality of pixels, and can be addressed by placing the electric field across its columns and rows. The current that flows in the circuit at that given time is representative of the photonic current going through that zone.
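Addressing the zones one after the other and attributing the instantaneous circuit current to the addressed zone can be sketched as follows. The `read_current` callback is a hypothetical abstraction of the row/column driving electronics, not an interface defined in this application:

```python
def scan_zones(n_rows, n_cols, read_current):
    """Sequentially address each zone and record its photocurrent.

    read_current(row, col) returns the circuit current measured while the
    electric field is placed across that row/column pair, i.e. the photonic
    current attributed to that zone.
    """
    zone_map = {}
    for r in range(n_rows):
        for c in range(n_cols):
            zone_map[(r, c)] = read_current(r, c)
    return zone_map
```

The resulting map gives one reading per display area 5, which the controller can compare against target values per zone.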
This sensor system 6 cannot distinguish the direction of the light. The photocurrent going through the transparent sensor 30 can therefore originate either from a pixel of the display area 5 or from external (ambient) light. Reference measurements with an inactive backlight device are therefore suitably performed.
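The reference measurement with an inactive backlight amounts to a dark-frame subtraction. A minimal sketch, with illustrative function names not taken from this application: with the backlight off the sensor sees only ambient light, so subtracting that reading isolates the display's own contribution.

```python
def display_photocurrent(total_reading, ambient_reading):
    """Isolate the display's contribution from a combined reading.

    total_reading:   photocurrent with the backlight active (display + ambient)
    ambient_reading: photocurrent with the backlight inactive (ambient only)
    """
    # clamp at zero: noise can make the difference slightly negative
    return max(0.0, total_reading - ambient_reading)
```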
Suitably, the transparent sensor is present in a front section between the front glass and the display. The front glass provides protection from external humidity (e.g. water spilled on the front glass, the use of cleaning materials, etc.). It also provides protection from potential external damage to the sensor. In order to minimize the negative impact of any humidity present in the cavity between the front glass and the display, encapsulation of the sensor is preferred.
Turning to Fig. 14, this shows another embodiment of the invention relating to a sensor system 6 for rear detection. Fig. 14 is a simplified representation of an optical stack of the display 3 comprising (from left to right) a diffuser, several collimator foils, a dual brightness enhancement film (DBEF) and an LED display element in the front section 25 of a display device 1. At the backside 26 of the display 3 (left side) the second sensor 9 of the sensor system 6 is added to measure all the light in the display area 5. A backlight device 27 is located between the second sensor 9 and the stack of the display 3. The second sensor 9 is countersunk in a housing element (not shown) so that only light close to the normal, perpendicular to the front surface 28, is detected.
The sensor system 6 shown in Fig. 14 can be used for performing an advantageous method for detecting a property of the light, e.g. the intensity or colour of the light emitted from at least one display area 5 of a liquid crystal display device 2 (LCD device) into the viewing angle of said display device 2, wherein said LCD device 2 comprises a backlight device 27 for lighting the display 3 formed as a liquid crystal display member of the display device 2.
Fig. 15 shows a horizontal sectional view of a display device 1 with a second sensor system 6 according to another embodiment of the present invention. The present embodiment is a scanning sensor system. The sensor system 6 is realized as a solid state scanning sensor system localized in the front section 25 of the display device 1. The display device 1 is in this example a liquid crystal display, but that is not essential. This embodiment effectively provides an incoupling member. The substrate or structures created therein (waveguides, fibres) may be used as light guide members.
In accordance with this embodiment of the invention, the solid state scanning sensor system is a switchable mirror, with which light may be redirected towards a second sensor. The solid state scanning system in this manner integrates both the incoupling member and the light guide member. In one suitable embodiment, the solid state scanning sensor system is based on a perovskite crystalline or polycrystalline material, particularly an electro-optical material. Typical examples of such materials include lead zirconate titanate (PZT), lanthanum-doped lead zirconate titanate (PLZT), lead titanate (PT), barium titanate (BaTiO3) and barium strontium titanate (BaSrTiO3). Such materials may be further doped with rare earth materials and may be provided by chemical vapour deposition, by sol-gel technology or as particles to be sintered. Many variations hereof are known from the fields of capacitors, actuators and microactuators (MEMS).
In one example, use was made of PLZT. An additional layer 29 can be added to the front glass plate 23 and may be an optical device 10 of the sensor system 6. This layer is a conductive transparent layer such as a tin oxide, preferably an ITO layer 29 (ITO: Indium Tin Oxide), that is divided into line electrodes by at least one transparent isolating layer 30. The isolating layer 30 is only a few microns (μm) thick and placed at an angle β. The isolating layer 30 is any suitable transparent insulating layer, of which a PLZT layer (PLZT: lanthanum-doped lead zirconate titanate) is one example. The insulating layer preferably has a refractive index similar to that of the conductive layer, or at least of an area of the conductive layer surrounding the insulating layer, e.g. 5% or less difference in refractive index. However, when using ITO and PLZT, this difference can be larger: a PLZT layer can have a refractive index of 2.48, whereas ITO has a refractive index of 1.7. The isolating layer 31 is an electro-optical switchable mirror 31 for deflecting at least one part of the light emitted from the display area 5 to the corresponding sensor 9 and is driven by a voltage. The insulating layer can be an assembly of at least one ITO sub-layer and at least one glass or IPMRA sub-layer.
In one further example, a four-layered structure was manufactured. Starting from a substrate, e.g. a Corning glass substrate, a first transparent electrode layer was provided, for instance ITO in a thickness of 30 nm. Thereon, a PZT layer was grown, in this example by CVD technology, with a layer thickness of approximately 1 micrometer. The deposition of the PZT layer may be optimized with nucleation layers, as well as by the deposition of several subsequent layers, which do not need to have the same composition. A further electrode layer was provided on top of the PZT layer, for instance in a thickness of 100 nm. In one suitable example, this electrode layer was patterned in fingered shapes; more than one electrode may be defined in this electrode layer. Subsequently, a polymer was deposited to mask the ITO finger pattern. When a voltage is applied to this structure between the bottom electrode and the fingers on top of the PZT, the refractive index of the PZT under each of the fingers will change. This change in refractive index will result in the appearance of a diffraction pattern. The finger pattern of the top electrode is preferably chosen so that a diffraction pattern with the same period diffracts light into a direction that undergoes total internal reflection at the next interface of the glass with air. The light is thereafter guided into the glass, which directs the light to the sensor positioned at the edge. Therewith, it is achieved that diffraction orders higher than zero are coupled into the glass and remain in the glass. Optionally, specific light guiding structures, e.g. waveguides, may be applied in or directly on the substrate.
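The choice of grating period can be checked with the standard grating equation. The helper below is a back-of-envelope sketch under simplifying assumptions (normal incidence, and function names and example indices that are not from this application): the m-th diffraction order propagates in glass of index n at sin θ = mλ/(Λn), and is totally internally reflected at the glass/air interface when sin θ exceeds 1/n, which for the first order requires a period Λ below λ.

```python
import math

def first_order_angle_deg(period, lam, n_glass):
    """Angle of the first diffraction order inside the glass, in degrees.

    Grating equation for normal incidence: n_glass * sin(theta) = lam / period.
    Returns None when the first order is evanescent (no propagating order).
    """
    s = lam / (period * n_glass)
    if s > 1.0:
        return None
    return math.degrees(math.asin(s))

def undergoes_tir(period, lam, n_glass):
    """True when the first order is trapped by total internal reflection."""
    theta = first_order_angle_deg(period, lam, n_glass)
    if theta is None:
        return False
    critical = math.degrees(math.asin(1.0 / n_glass))
    return theta > critical
```

For green light (λ ≈ 550 nm) in glass of index 1.5, a 500 nm period traps the first order, while a 700 nm period lets it escape through the front face.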
While it will be appreciated that the use of ITO is here highly advantageous, it is observed that this embodiment of the invention is not limited to the use of ITO electrodes. Other partially transparent materials may be used as well. Furthermore, it is not excluded that an alternative electrode pattern is designed with which the perovskite layer may be switched so as to enable diffraction into the substrate or another light guide member.
The solid state scanning sensor system has no moving parts and is therefore advantageous in terms of durability. Further benefits are that it can be made quite thin and does not create dust when operating.
An alternative solution is the use of a reflecting surface or mirror 28 that scans (passes over) the display 3, thereby reflecting light in the direction of the sensor array 7. Other optical devices may be used that are able to deflect, reflect, bend, scatter or diffract the light towards the sensor or sensors.
The sensor array 7 can be a photodiode array 32, with or without filters, to measure the intensity or colour of the light. Capturing and optionally storing the measured light as a function of the mirror position results in an accurate light property map, e.g. a colour or luminance map of the output emitted by the display 3. A comparable result can be achieved by passing the detector array 9 itself over the different display areas 5.

Claims

1. A method for compensating for ambient light or ageing effects of pixel outputs displaying an image on a display device, comprising:
displaying a first image on an active display area (6) on the display device (1) having a first plurality of pixels,
displaying a second image on a sub-area (7) of the display device (1) and having a second plurality of pixels, the active display area (6) being larger than the sub-area (7) and the second image being smaller than the first image and having fewer pixels than the active display area (6),
driving the pixels of the sub-area (7) with pixel values that are representative or indicative for the pixels in the active display area (6),
making optical measurements on light emitted from the environment or from the active display area (6) using a substantially transparent sensor and the sub-area (7) and generating optical measurement signals (11) therefrom, and
controlling the display of the image on the active display area (6) in accordance with the optical measurement signals (11) of the environment or of the sub-area (7) and the active area (6).
2. A method according to claim 1, wherein the sub-area (7) can be divided into different parts which are driven with a pattern based on the actual display contents, or the sub-area (7) can be divided into different parts which are driven with a pattern based on a priori defined pixel values containing more than 1 driving level.
3. A method according to claim 1 or 2, wherein the optical measurements are luminance or colour measurements.
4. A method according to claim 3, wherein the colour or luminance measurements are carried out in sequences.
5. A method according to any of claims 1 to 4, further comprising displaying a test pattern of a specific colour on the display area, for detecting the property of light by the substantially transparent sensor and for comparing the output of the substantially transparent sensor with that of a sub-area sensor.
6. A method according to any of claims 1 to 5, wherein a step of tracking in time how a pixel of the sub-area was driven is included.
7. - A display device comprising at least one display area provided with a plurality of pixels, with a first sensor that is an optical sensor unit that is located in front of the display and makes optical measurements on a light output from only a representative part of the display, or is an ambient light sensor, and a second full display substantially transparent sensor for detecting a property of light emitted from said full display area into a viewing angle of the display device, which second sensor is located in a front section of said display device in front of said display area, and means for displaying a test pattern on the display area of a specific colour, and for detecting the property of light by the second sensor and for comparing the output of the second sensor with that of the first sensor.
8. A device according to claim 7, whereby said first or second sensor is located in front of the display, in a bezel of the display or can be located in the backlight of the display.
9. A device according to claim 7, whereby said first sensor is an external reference sensor which is used to calibrate the second sensor.
10. A device according to any of claims 7 to 9, whereby said second sensor comprises at least one sub-sensor which can be positioned in a matrix structure, whereby said sub-sensor is adapted to produce an individual measurement signal.
11. A device according to claim 10, whereby said sub-sensor is positioned in the light path of the light measured by the first sensor.
12. A device according to any of claims 7 to 11, whereby said second sensor is a bidirectional sensor that measures both the light emitted from the display device and the ambient light.
13. A device according to any of claims 7 to 12, wherein the property of light is luminance, or wherein the property of light is colour and whereby colour coordinates are obtained, or is ambient light.
14. The device according to any of claims 7 to 13, further comprising means for determining variations in the light output of that colour over the display area of the second sensor.
15. The display device as claimed in any of the claims 7 to 14, further comprising at least partially transparent electrical conductors for conducting a measurement signal from said second sensor within said viewing angle for transmission to a controller.
16. The display device according to any of claims 7 to 10, wherein light output correction comprises luminance and/or contrast and/or colour correction.
17. The display device according to any of claims 7 to 16, wherein the sub-area (7) of the active display area (6) of the image forming device (2) is less than 1% of the area of the active display area (6) of the image forming device (2), preferably less than 0.1%, still more preferred less than 0.01%.
18. The display device according to any of claims 7 to 17, wherein the first sensor comprises an optical sensor unit and the optical aperture (21) of the optical sensor unit (10) masks a portion of the active display area (6), while the light sensor (22) of the optical sensor unit does not mask any part of the active display area (6).
19. The display device according to claim 18, wherein the optical sensor unit (10) stands out above the active display area (6) by a distance of 5 mm or less.
20. The display device as claimed in any previous claim 7 to 19, wherein the first sensor comprises a photoconductor.
21. The display device as claimed in any previous claim 7 to 20, further comprising at least partially transparent electrical conductors for conducting a measurement signal from said first or second sensor within said viewing angle for transmission to a controller.
22. The display device as claimed in claim 21 , wherein the at least partially transparent electrodes comprise an electrically conductive oxide.
23. The display device as claimed in any previous claim 7 to 22, wherein the second sensor is a bilayer structure with an exciton generation layer and a charge transport layer, said charge transport layer being in contact with a first and a second electrode.
24. The display device according to any previous claim 7 to 23 wherein the first or second sensor comprises an at least partially transparent optical coupling device located in a front section of said display device comprising a light guide member for guiding at least one part of the light emitted from the said display area to the corresponding first or second sensor, wherein said coupling device further comprises an incoupling member for coupling the light into the light guide member.
25. The display device as claimed in claim 24, wherein the light guide member runs in a plane which is parallel to a front surface of the display device and wherein the incoupling member is an incoupling member for laterally coupling the light into the light guide member of the coupling device.
26. The display device as claimed in claim 24 or 25, wherein the light guide member is provided with a circular or rectangular cross-sectional shape when viewed in a plane normal to the front surface and normal to a main extension of the light guide member.
27. The display device as claimed in Claim 24, wherein the incoupling member is cone-shaped.
28. The display device as claimed in claim 27, wherein the incoupling member is formed as a laterally prominent incoupling member, which is delimited by two laterally coaxially aligned cones, said cones having a mutual apex and different apex angles (α1, α2).
29. The display device as claimed in Claim 24, wherein the incoupling member is a diffraction grating.
30. The display device as claimed in any of the Claims 24, or 27 to 29, wherein the incoupling member further transforms a wavelength of light emitted from the display area into a sensing wavelength.
31. The display device as claimed in claim 30, wherein the sensing wavelength is in the infrared range, particularly between 0.7 and 300 micrometers.
32. The display device as claimed in Claim 30 or 31 , wherein the incoupling member is provided with a phosphor for said transformation.
33. The display device as claimed in any of the claims 24 to 32, wherein the coupling device is part of a cover member having an inner face and an outer face opposed to the inner face, said inner face facing the at least one display area, wherein the coupling device is present at the inner face.
34. A control unit for controlling an image on a display device, the display device having at least one display area provided with a plurality of pixels, with, for the display area, a first sensor that is an optical sensor unit located in front of the display and making optical measurements on a light output from only a representative part of the display, or that is an ambient light sensor; and a second, full display, substantially transparent sensor for detecting a property of light emitted from said full display area into a viewing angle of the display device, which second sensor is located in a front section of said display device in front of said display area, the control unit comprising:
means for displaying a test pattern of a specific colour on the display area, and means for accessing each second sensor, for detecting the property of light by the second sensor and for comparing the output of the second sensor with that of the first sensor.
35. The control unit according to claim 34, further adapted to drive different parts of the sub-area with a pattern based on the actual display contents.
36. The control unit of claim 34 further adapted to drive different parts of the sub-area with a pattern based on a priori defined pixel values containing more than 1 driving level.
37. The control unit of any of claims 34 to 36, wherein the optical measurements are luminance measurements and the controller is adapted to carry out the luminance measurements in sequences.
38. The control unit of any of the claims 34 to 37, further comprising means to carry out optical measurements such that light is transmitted from within the sub-area of the active display area to outside the active display area.
39. The control unit of any of claims 34 to 38, further adapted to track in time how a pixel of the sub-area was driven.
40. The control unit of any of claims 34 to 39, further adapted to carry out light output correction by luminance and/or contrast correction.
PCT/EP2012/050028 2010-12-31 2012-01-02 Method and system for compensating effects in light emitting display devices WO2012089849A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB201022140A GB2486921A (en) 2010-12-31 2010-12-31 Compensating for age effects in active matrix displays
GB1022140.6 2010-12-31

Publications (1)

Publication Number Publication Date
WO2012089849A1 true WO2012089849A1 (en) 2012-07-05

Family

ID=43599143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2012/050028 WO2012089849A1 (en) 2010-12-31 2012-01-02 Method and system for compensating effects in light emitting display devices

Country Status (2)

Country Link
GB (1) GB2486921A (en)
WO (1) WO2012089849A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2591469A (en) * 2020-01-28 2021-08-04 Phoenix Systems Uk Ltd Method and apparatus for display monitoring

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4499005A (en) 1984-04-30 1985-02-12 Gte Laboratories Incorporated Infrared emitting phosphor
US6348290B1 (en) 1995-08-03 2002-02-19 Nippon Ink And Chemicals, Inc. Multilayer organic photoconductor including electrically conductive support having specific index of surface area
EP1274066A1 (en) 2001-07-03 2003-01-08 Barco N.V. Method and system for real time correction of an image
EP1424672A1 (en) 2002-11-29 2004-06-02 Barco N.V. Method and device for correction of matrix display pixel non-uniformities
EP2159783A1 (en) * 2008-09-01 2010-03-03 Barco N.V. Method and system for compensating ageing effects in light emitting diode display devices
WO2010081814A1 (en) * 2009-01-13 2010-07-22 Barco N.V. Display device and use thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060119592A1 (en) * 2004-12-06 2006-06-08 Jian Wang Electronic device and method of using the same


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
J. Appl. Phys., vol. 94, 2003, p. 3147
J. C. Ho et al., Applied Physics Letters
John C. Ho, Alexi Arango, Vladimir Bulovic, "Lateral organic bilayer heterojunction photoconductors", Applied Physics Letters 93

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8987652B2 (en) 2012-12-13 2015-03-24 Apple Inc. Electronic device with display and low-noise ambient light sensor with a control circuitry that periodically disables the display
US9997137B2 (en) 2015-09-30 2018-06-12 Apple Inc. Content-based statistics for ambient light sensing
US10438564B2 (en) 2015-09-30 2019-10-08 Apple Inc. Content-based statistics for ambient light sensing
WO2017216958A1 (en) * 2016-06-17 2017-12-21 三菱電機株式会社 Display light guide body and display device provided with same
CN110662983A (en) * 2017-05-11 2020-01-07 ams有限公司 Optical sensor device
CN109959449A (en) * 2017-12-26 2019-07-02 深圳市巨烽显示科技有限公司 Sense the device of display front face brightness
US11122243B2 (en) 2018-11-19 2021-09-14 Flightsafety International Inc. Method and apparatus for remapping pixel locations
US11595626B2 (en) 2018-11-19 2023-02-28 Flightsafety International Inc. Method and apparatus for remapping pixel locations
US11812202B2 (en) 2018-11-19 2023-11-07 Flightsafety International Inc. Method and apparatus for remapping pixel locations
US11029592B2 (en) 2018-11-20 2021-06-08 Flightsafety International Inc. Rear projection simulator with freeform fold mirror
US11709418B2 (en) 2018-11-20 2023-07-25 Flightsafety International Inc. Rear projection simulator with freeform fold mirror
US11282177B2 (en) * 2019-11-27 2022-03-22 Boe Technology Group Co., Ltd. Moire quantitative evaluation method and device, electronic device, storage medium
TWI774242B (en) * 2020-02-21 2022-08-11 日商Eizo股份有限公司 Method for detecting light emitted from display screen and display device
CN112133197A (en) * 2020-09-29 2020-12-25 厦门天马微电子有限公司 Display screen, and optical compensation method and system for an under-screen camera in the display screen
CN114724520A (en) * 2022-04-13 2022-07-08 南京巨鲨显示科技有限公司 Self-adaptive brightness adjustment method for gray-scale medical image
CN114724520B (en) * 2022-04-13 2024-02-20 南京巨鲨显示科技有限公司 Self-adaptive brightness adjustment method for gray-scale medical images
CN116052578A (en) * 2023-03-31 2023-05-02 深圳曦华科技有限公司 Method and device for synchronously controlling chip input and output in display chip system
CN116052578B (en) * 2023-03-31 2023-08-04 深圳曦华科技有限公司 Method and device for synchronously controlling chip input and output in display chip system

Also Published As

Publication number Publication date
GB2486921A (en) 2012-07-04
GB201022140D0 (en) 2011-02-02

Similar Documents

Publication Publication Date Title
EP2659306B1 (en) Display device and means to measure and isolate the ambient light
WO2012089849A1 (en) Method and system for compensating effects in light emitting display devices
US20130278578A1 (en) Display device and means to improve luminance uniformity
US9671643B2 (en) Display device and use thereof
CN101587256B (en) Electro-optical device and electronic apparatus
CN101688998B (en) Display device
US20160042676A1 (en) Apparatus and method of direct monitoring the aging of an oled display and its compensation
KR101692248B1 (en) Touch panel
US8432510B2 (en) Liquid crystal display device and light detector having first and second TFT ambient light photo-sensors alternatively arranged on the same row
CN101576673B (en) Liquid crystal display
US20190101779A1 (en) Display screen, electronic device, and light intensity detection method
GB2456771A (en) Spectrally compensating a light sensor
CN101447145A (en) Display device and electronic apparatus
TW201001390A (en) Display device and method for luminance adjustment of display device
CN110047906A (en) Display device, display panel and its manufacturing method based on clear photodiode
US20120327683A1 (en) Liquid micro-shutter display device
WO2023018834A1 (en) Systems and methods for ambient light sensor disposed under display layer
WO2012089847A2 (en) Stability and visibility of a display device comprising an at least transparent sensor used for real-time measurements
EP3503239B1 (en) Light emitting display panel
WO2013164015A1 (en) A display integrated semitransparent sensor system and use thereof
GB2489657A (en) A display device and sensor arrangement
JP5239293B2 (en) Liquid crystal device and electronic device
TWI479389B (en) Optical sensor device, display apparatus, and method for driving optical sensor device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12703461

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12703461

Country of ref document: EP

Kind code of ref document: A1