US20060238832A1 - Display system - Google Patents
- Publication number
- US20060238832A1 (application US11/472,758)
- Authority
- US
- United States
- Prior art keywords
- color image
- image data
- data
- display
- test
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3191—Testing thereof
- H04N9/3194—Testing thereof including sensor feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/06—Adjustment of display parameters
- G09G2320/0666—Adjustment of display parameters for control of colour parameters, e.g. colour temperature
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/14—Detecting light within display terminals, e.g. using a single or a plurality of photosensors
- G09G2360/145—Detecting light within display terminals, e.g. using a single or a plurality of photosensors the light originating from the display screen
Definitions
- the present invention relates to a display system for correcting the effect of optical flare and then displaying images.
- color characteristics are defined for color image devices or image data as space-coordinate-independent information. Using the color information, color reproduction is performed.
- the present invention has been made in view of the above-described background, and it is an object of the present invention to provide a display system capable of performing color reproduction as intended by reducing the effect of optical flare.
- a display system includes: a color image display device for displaying a color image; and an image correction device for producing corrected color image data to be outputted to the color image display device by correcting color image data.
- the image correction device calculates the corrected color image data from the color image data so as to correct optical flare of the color image display device on the basis of relationship(s) between one of a plurality of test color image data outputted to the color image display device and the spatial distribution of display colors of a test color image that has been displayed on the color image display device in accordance with the one of the plurality of test color image data.
- FIG. 1 is a schematic diagram showing a configuration of a display system according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing a configuration of an image correction device according to the above-described embodiment.
- FIG. 3 is a block diagram showing a configuration of a flare calculation device according to the above-described embodiment.
- FIG. 4 is a diagram showing image data of a geometric correction pattern outputted by a test image output device in the above-described embodiment.
- FIG. 5 is a diagram showing text data in which coordinate information on center positions of cross patterns is stored in the above-described embodiment.
- FIG. 6 is a diagram showing an area to be divided in test color image data outputted by the test image output device in the above-described embodiment.
- FIG. 7 is a diagram showing text data in which coordinate information on sub-areas into which an area is divided is stored in the above-described embodiment.
- FIG. 8 is a block diagram showing a configuration of a shot image input device according to the above-described embodiment.
- FIG. 9 is a diagram showing sample areas set to a shot image of the test color image in the above-described embodiment.
- FIG. 10 is a diagram showing sample areas of a light-emitting area and non-light-emitting areas.
- This principle provides corrected input image data so that the resulting display image data matches the original input image data, in cases where the input image data would otherwise be changed into different display image data by, for example, optical flare (hereinafter referred to simply as flare where appropriate) of a display device.
- Input image data with N pixels, inputted into a display device, is represented as (p_1, p_2, . . . , p_N)^t.
- the superscript t represents a transposition.
- The light distribution of the image actually displayed in response to the input image data is represented discretely as image data with the same number of pixels N, and the discretized display image data is denoted (g_1, g_2, . . . , g_N)^t.
- This display image data can be acquired by shooting an image displayed on the display device using, for example, a digital camera. In general, since this display image data is affected by flare of the display device, it does not correspond to the input image data.
- In the display image data, part of the light emitted in response to signals at other pixel locations is superposed onto the light emitted at each pixel location. Consequently, even if the value of an input image signal inputted into the display device is zero, the value of the display image data does not generally become zero.
- This zero-input component of the display image data is represented as a bias (o_1, o_2, . . . , o_N)^t. Taking the above-described effects into account, the relationship between the input image data and the display image data is modeled as equation 1.
- g_i = Σ_j m_ij p_j + o_i (i = 1, . . . , N) [Equation 1]
- Equation 2 expresses equation 1 compactly, using capital letters corresponding to the lowercase letters that represent the individual elements of the matrix form of equation 1.
- G = MP + O [Equation 2]
- Ideally, the above-described display image data G exactly corresponds to the above-described input image data P.
- the display image data is generally not the same as the input image data.
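The model of equation 2 can be illustrated with a small numerical sketch. All matrix and bias values below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Toy version of equation 2, G = MP + O, for an "image" of N = 4 pixels.
# M leaks 5% of each pixel's light into every other pixel (flare); O is a
# constant stray-light bias present even with zero input.
N = 4
E = np.eye(N)
M = E + 0.05 * (np.ones((N, N)) - E)   # display characteristics matrix
O = np.full(N, 0.02)                   # bias vector

P = np.array([1.0, 0.0, 0.0, 0.0])     # input: only pixel 0 is lit
G = M @ P + O                          # displayed image

# Pixels whose input was zero nevertheless receive light from flare and bias.
print(G)
```

Here G[1] through G[3] are nonzero although the corresponding inputs are zero, which is exactly the effect described above.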
- corrected display image data G′ that is display image data corresponding exactly to the corrected input image data P′ is as shown in the following equation 3.
- G′ = MP′ + O [Equation 3]
- Requiring the corrected display image data to equal the original input image data, G′ = P (equation 4), and solving equation 3 for P′ gives the correction P′ = M^−1 (P − O) (equation 5). The display characteristics M in equation 5 are represented as an N × N matrix.
- When a display device has 1280 × 1024 pixels, N = 1,310,720, so the data size of the matrix M generally becomes very large.
- Equation 7 can be acquired by replacing the input image data P in equation 6 (the convolution model G = M′ * P + O, where the asterisk denotes convolution and M′ is a shift-invariant kernel) with the corrected input image data P′ and by using the condition G′ = P shown in the above-described equation 4.
- P = M′ * P′ + O [Equation 7]
- The corrected input image data P′ can be calculated from equation 7 by deconvolution, a known technique for recovering one image (here, P′) from the convolution of two images (here, P − O) when the other image (here, M′) is known. For example, the method described in chapter 7 of document 1 (A. Rosenfeld and A. C. Kak, Digital Picture Processing, Academic Press, 1976; Japanese translation supervised by Makoto Nagao, Kindaikagaku, 1978) can be used.
- The correction method shown in the above-described equation 5 requires an inverse matrix operation on the matrix M, which has a very large number of elements.
- The correction method shown in the above-described equation 7 and document 1 requires inverse convolution operations. These complex operations place significant loads on a processor and lead to long processing times.
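The direct-inverse correction of equation 5 can be sketched as follows on a toy 4-pixel model with assumed values. For a real 1280 × 1024 display, M would have roughly 1.7 × 10^12 elements, which is why this route is costly:

```python
import numpy as np

# Equation 5 correction, P' = M^-1 (P - O), on an assumed 4-pixel display model.
N = 4
E = np.eye(N)
M = E + 0.05 * (np.ones((N, N)) - E)   # assumed display characteristics
O = np.full(N, 0.02)                   # assumed bias
P = np.array([1.0, 0.0, 0.5, 0.0])     # image we want the viewer to see

P_corr = np.linalg.solve(M, P - O)     # corrected input image data P'
G_corr = M @ P_corr + O                # what the display then produces

print(np.allclose(G_corr, P))          # True: displayed image equals the input
```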
- the display image data G is modeled by being represented as the sum of the input image data P, the bias O, and a flare component F that is data of flare distribution representing the effect of flare or the like.
- G = P + O + F [Equation 8]
- Using equation 2, the flare component F shown in equation 8 can be represented as the following equation 9, where E represents the N × N identity matrix.
- F = G − P − O = (M − E)P [Equation 9]
- The corrected display image data G′ can be represented as the following equation 10 by inputting P − F − M^−1 O, with F given by equation 9, into equation 3 as the corrected input image data P′.
- G′ = P − (M − E)^2 P [Equation 10]
- F can be acquired from the following equation 11 taking the above-described result into account.
- F = (M − E)P − (M − E)^2 P [Equation 11]
- The corrected display image data G′ can be represented as shown in the following equation 12 by inputting P − F − M^−1 O, with F given by the above-described equation 11, into equation 3 as the corrected input image data P′.
- G′ = P + (M − E)^3 P [Equation 12]
- Repeating this procedure, the flare F can be obtained as the following alternating series, equation 13, truncated in practice after a constant number of terms K.
- F = (M − E)P − (M − E)^2 P + (M − E)^3 P − . . . [Equation 13]
- The corrected input image data P′ for correcting the flare F shown in equation 13 is as shown in the following equation 14.
- P′ = P − F − M^−1 O [Equation 14]
- In this way, the flare can be removed as desired while lightening the load on the processing system.
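The series correction of equations 13 and 14 can be sketched as follows (toy values again; the point is that each series term needs only a matrix-vector product, avoiding an explicit inverse of M for the flare part):

```python
import numpy as np

# Truncated alternating series of equation 13 and correction of equation 14.
N = 4
E = np.eye(N)
M = E + 0.05 * (np.ones((N, N)) - E)   # assumed display characteristics
O = np.full(N, 0.02)
P = np.array([1.0, 0.0, 0.5, 0.0])
D = M - E                              # the (M - E) operator

K = 3                                  # number of series terms (operator-chosen)
F = np.zeros(N)
term = P.copy()
for k in range(1, K + 1):
    term = D @ term                    # (M - E)^k P via repeated products
    F += term if k % 2 == 1 else -term # alternating signs: +, -, +, ...

P_corr = P - F - np.linalg.solve(M, O)  # equation 14 (M^-1 O computed once)
G_corr = M @ P_corr + O

# For weak flare the truncated series already yields a near-exact correction.
print(np.max(np.abs(G_corr - P)) < 1e-3)
```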
- In the convolution representation, equation 13 can be replaced with the following equation 16.
- F = (M′ − E′) * P − (M′ − E′) * (M′ − E′) * P + (M′ − E′) * (M′ − E′) * (M′ − E′) * P − . . . [Equation 16]
where the letter E′ represents a kernel (column vector) in which the value of the component corresponding to the center position of the image is one, and the values of the other components are zero.
- The symbol Õ represents the value obtained by deconvolution of O using M′; the correction corresponding to equation 14 is then P′ = P − F − Õ [Equation 17], with F given by equation 16.
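In the convolution form, the same series is evaluated with repeated convolutions against a small kernel instead of N × N matrix products. Below is a one-dimensional sketch with an assumed flare kernel; the bias term is omitted for brevity:

```python
import numpy as np

# 1-D sketch of the series of equation 16. m is an assumed shift-invariant
# flare kernel (unit impulse plus weak symmetric leakage); e plays the role
# of E', the unit impulse at the kernel centre.
m = np.array([0.03, 0.06, 1.0, 0.06, 0.03])
e = np.zeros_like(m)
e[len(m) // 2] = 1.0
d = m - e                                   # (M' - E') as a kernel

P = np.zeros(64)
P[20], P[40] = 1.0, 0.5                     # toy 1-D "image"

K = 3
F = np.zeros_like(P)
term = P.copy()
for k in range(1, K + 1):
    term = np.convolve(term, d, mode="same")  # k-fold (M' - E') * P
    F += term if k % 2 == 1 else -term

P_corr = P - F                              # correction, bias term omitted
G_corr = np.convolve(P_corr, m, mode="same")

# The flare around the two bright pixels is largely cancelled.
print(np.max(np.abs(G_corr - P)) < 5e-3)
```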
- The value of a signal outputted from a display device, or from a color image measuring device such as a digital camera, sometimes has a nonlinear relationship with brightness.
- In that case, the above-described processing must be performed after the nonlinearity of each signal has been corrected.
- Since such nonlinearity correction is well known, its description is omitted here. That is, the above principle has been described in linear space, after the nonlinearity of each signal is corrected.
- FIG. 1 is a schematic diagram showing a configuration of a display system.
- This display system is configured with the following components:
- a projector 1, which is a color image display device, for projecting images;
- an image correction device 2 for producing corrected images to be projected by the projector 1;
- a screen 3, which is a color image display device, on which images are projected by the projector 1; and
- a test image shooting camera 4, which is test color image measuring means such as a digital color camera, disposed so as to shoot the whole image displayed on the screen 3.
- The test image shooting camera 4 is included in the image correction device in a broad sense and is provided with a circuit capable of correcting blur in an image due to the optical characteristics of the shooting lens and variations in the sensitivity of the image pickup devices. This correction is performed on the digital RGB image data before it is outputted from the test image shooting camera 4. In addition, the test image shooting camera 4 outputs a signal that responds linearly to incident light intensity.
- the image correction device 2 outputs predetermined test color image data that has been stored therein in advance to the projector 1 .
- the projector 1 projects the test color image data provided by the image correction device 2 on the screen 3 .
- the image correction device 2 controls the test image shooting camera 4 so that the test image shooting camera 4 shoots an image with the distribution of display colors corresponding to the test color image displayed on the screen 3 and transfers the data of the shot image to the image correction device 2 . Subsequently, the image correction device 2 receives the transferred color image data.
- the image correction device 2 calculates display characteristics data used for correcting color image data on the basis of the color image data having been acquired from the test image shooting camera 4 and the original test color image data having been provided to the projector 1 .
- the image correction device 2 stores, in advance, color image data that has been converted so that the color image data can have a linear relationship with brightness.
- the image correction device 2 corrects the color image data that has been stored therein in advance using the calculated display characteristics data and then stores the corrected color image data.
- When color image data to be displayed is selected by an operator, the image correction device 2 outputs the corrected color image data corresponding to the selected color image data to the projector 1.
- the projector 1 projects an image on the screen 3 on the basis of the corrected color image data having been provided by the image correction device 2 .
- An image in which the effect of flare has been corrected is thereby displayed on the screen 3, so that the person presenting the image can have viewers see it as intended.
- Each of the image data inputted into the projector 1, the image data outputted from the test image shooting camera 4, and the image data processed in the image correction device 2 is 1280 pixels wide × 1024 pixels high and is three-channel image data of the three colors RGB.
- FIG. 2 is a block diagram showing the configuration of the image correction device 2 .
- The image correction device 2 is configured with the following components:
- a flare calculation device 13 for outputting predetermined test color image data, stored therein in advance, to the projector 1, acquiring from the test image shooting camera 4 the color image data (shot image data) shot on the basis of the test color image data, and calculating display characteristics data on the basis of the acquired shot image data and the original test color image data;
- an image data storage device 11 for storing the color image data to be displayed as well as the corrected color image data acquired as a result of correcting the color image data by means of the flare correction device 12 (described later); and
- the flare correction device 12 for acquiring the color image data from the image data storage device 11, correcting it using the display characteristics data calculated by the flare calculation device 13, and outputting the corrected color image data back to the image data storage device 11 for storage.
- Operations for acquiring the display characteristics data are as follows.
- the flare calculation device 13 outputs the test color image data to the projector 1 so as to make the projector 1 display a test color image on the screen 3 .
- the flare calculation device 13 controls the test image shooting camera 4 so that the test image shooting camera 4 shoots the image displayed on the screen 3 and transfers the color image data of the shot image to the flare calculation device 13 .
- The flare calculation device 13 acquires the color image data of the shot image, calculates the display characteristics data on the basis of the acquired color image data and the original test color image data, and then stores the calculated display characteristics data.
- One is a matrix M of N ⁇ N defined by the above-described equation 1 or 2.
- Another is a matrix M′ used in equation 6.
- the image data is RGB three-channel image data
- The flare correction device 12 reads out the color image data stored in the image data storage device 11 and also reads in, from the flare calculation device 13, one of the two kinds of display characteristics data (M or M′) in accordance with the flare correction method. Subsequently, the flare correction device 12 performs a flare correction operation on the readout color image data using the readout display characteristics data and then calculates the corrected color image data.
- the color image data and the corrected color image data correspond to the input image data P and the corrected input image data P′ in the above-described principle, respectively.
- each pixel corresponds to RGB three-channel image data
- the letters P and P′ individually represent the RGB three-channel image data.
- the flare correction device 12 is configured with the following first to fourth correction modules.
- The flare correction device 12 is configured to use the display characteristics data M or M′ read out from the flare calculation device 13 for calculating the corrected color image data in these first to fourth correction modules.
- the first correction module calculates the corrected color image data P′ by inputting the display characteristics data M into equation 5. It is assumed that the bias O has been measured and stored in the flare correction device 12 in advance.
- The bias O is measured by projecting, from the projector 1 onto the screen 3, a test color image in which the values of all components are zero (an all-black image), and shooting the projected test color image displayed on the screen 3 using the test image shooting camera 4.
- the second correction module calculates the corrected color image data P′ by inputting the display characteristics data M′ into equation 7 and performing a deconvolution operation.
- the third correction module is flare calculating means and calculates the corrected color image data P′ by inputting the display characteristics data M into equation 14.
- the constant K can be arbitrarily set by an operator of the image correction device 2 .
- the fourth correction module is flare calculating means and calculates the corrected color image data P′ by inputting the display characteristics data M′ into equation 17.
- the number of terms of the part corresponding to equation 16 in equation 17 (the number of terms corresponds to the above-described constant K) can also be arbitrarily set by an operator of the image correction device 2 .
- the corrected color image data calculated by the flare correction device 12 is outputted from the flare correction device 12 to the image data storage device 11 and is then stored in the image data storage device 11 .
- An operator operates the image correction device 2 and selects desired corrected color image data stored in the image correction device 2 .
- the corrected color image data having been selected is read out from the image data storage device 11 and is then outputted to the projector 1 .
- the projector 1 receives the corrected color image data and then projects an image corresponding to the corrected color image data on the screen 3 , whereby the color image in which the effect of flare is reduced can be displayed and viewed on the screen 3 .
- FIG. 3 is a block diagram showing the configuration of the flare calculation device 13 .
- This flare calculation device 13 is configured with the following components:
- a test image output device 21 for storing the test color image data and geometric correction pattern image data (described later) and for outputting the stored image data to the projector 1, the shot image input device 22 (described later), and the correction data calculation device 23 (described later) as required;
- the shot image input device 22 for inputting the shot color image data from the test image shooting camera 4 by controlling the camera, calculating the coordinate transform table required for the geometric correction operation on the basis of the above-described geometric correction pattern image data, performing the geometric correction operation on the color image data inputted from the test image shooting camera 4 using the calculated coordinate transform table, and outputting the geometrically corrected color image data;
- the correction data calculation device 23, which is display characteristics calculating means, for calculating the display characteristics data M and M′ on the basis of the original test color image data acquired from the test image output device 21 and the shot and geometrically corrected color image data acquired via the shot image input device 22; and
- a correction data storage device 24 for storing the display characteristics data calculated by the correction data calculation device 23.
- The test image output device 21 outputs the test color image data used for measuring display characteristics to the projector 1 and also transmits, to the shot image input device 22, a signal indicating that it has outputted the test color image data.
- The test image output device 21 also outputs information on the test color image data that has been outputted to the projector 1 to the correction data calculation device 23.
- Upon receiving the above-described signal from the test image output device 21, the shot image input device 22 controls the test image shooting camera 4 so that the camera shoots the test color image projected on the screen 3 by the projector 1.
- the color image having been shot by the test image shooting camera 4 is transferred to the shot image input device 22 as shot image data.
- the shot image input device 22 outputs the acquired shot image data to the correction data calculation device 23 .
- the correction data calculation device 23 performs processing for calculating display characteristics data on the basis of the information on the original test color image data having been transmitted from the test image output device 21 and the shot image data having been transmitted from the shot image input device 22 .
- the correction data calculation device 23 is configured with two kinds of display characteristics data calculation module corresponding to the two kinds of display characteristics data M and M′, respectively.
- the first and second display characteristics data calculation modules calculate the display characteristics data M and M′, respectively.
- the correction data calculation device 23 is configured so that an operator of the image correction device 2 can select one of the display characteristics data calculation modules.
- FIG. 4 is a diagram showing image data of a geometric correction pattern outputted by the test image output device 21 .
- FIG. 5 is a diagram showing text data in which coordinate information on center positions of cross patterns is stored.
- the test image output device 21 outputs the image data of the geometric correction pattern, for example, shown in FIG. 4 to the projector 1 prior to outputting the test color image data.
- the image data of the geometric correction pattern outputted from the test image output device 21 is image data in which black cross patterns are evenly spaced in four rows and five columns against a white background.
- the coordinate information on a center position of each cross pattern is outputted from the test image output device 21 to the shot image input device 22 as text data in the form shown in FIG. 5 .
- The center position of the cross pattern in the upper left corner is defined as coordinate 1, and the center position of the cross pattern to its right as coordinate 2.
- The pixel locations from coordinate 1 through coordinate 20, which represents the center position of the cross pattern in the lower right corner, are listed in this way.
- a coordinate system that represents a coordinate of each pixel as, for example, (0, 0) in the case of the pixel in the upper left corner and (1279, 1023) in the case of the pixel in the lower right corner is employed.
- the shot image input device 22 produces the coordinate transform table that gives relationship(s) between space coordinates of both the test color image data and the image shot by the test image shooting camera 4 on the basis of this coordinate information and the shot image data of the geometric correction pattern image having been acquired from the test image shooting camera 4 .
- test image output device 21 When the production of the coordinate transform table for the geometric correction is completed, the test image output device 21 outputs the test color image data to the projector 1 .
- FIG. 6 is a diagram showing an area to be divided in test color image data outputted by the test image output device 21 .
- FIG. 7 is a diagram showing text data storing coordinate information on sub-areas into which an area is divided.
- The test color image data is configured so that only one of the sub-areas (a sub-area of 256 × 256 pixels) displays one of the RGB colors, for example at maximal brightness. Since each of the sub-areas into which the area is divided individually displays each of the RGB colors, sixty kinds of test color image data (20 sub-areas × 3 colors) are prepared and sequentially displayed.
- the coordinate information (pattern data) on the sub-areas in the test color image data is outputted from the test image output device 21 to the correction data calculation device 23 as text data in the form shown in FIG. 7 .
- The sub-area in the upper left corner is defined as pattern 1, and the sub-area to its right as pattern 2.
- The pixel locations from pattern 1 through pattern 20, which represents the sub-area in the lower right corner, are listed in this way.
- Pattern 1, represented as (0, 0, 256, 256), is the sub-area whose upper-left pixel location is (0, 0) and which extends (256, 256) pixels from that location.
- Pattern 20, represented as (1024, 768, 256, 256), is the sub-area whose upper-left pixel location is (1024, 768) and which extends (256, 256) pixels from that location.
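The sixty test images described above can be generated along these lines (a sketch using the stated 1280 × 1024 geometry and 256 × 256 sub-areas; the variable names are illustrative):

```python
import numpy as np

# Generate the test color images: 20 sub-areas (5 columns x 4 rows of
# 256 x 256 pixels) times 3 channels, each lit at maximal brightness.
W, H, S = 1280, 1024, 256
patterns = [(x, y, S, S) for y in range(0, H, S) for x in range(0, W, S)]

test_images = []
for x, y, w, h in patterns:
    for ch in range(3):                      # R, G, B in turn
        img = np.zeros((H, W, 3), dtype=np.uint8)
        img[y:y + h, x:x + w, ch] = 255      # only this sub-area, this channel
        test_images.append(img)

print(len(patterns), len(test_images))       # 20 60
```

Note that the last entry of `patterns` is (1024, 768, 256, 256), matching pattern 20 in FIG. 7.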
- the correction data calculation device 23 calculates the display characteristics data M or M′ on the basis of the coordinate information and the shot image data of the test color image having been acquired from the test image shooting camera 4 .
- FIG. 8 is a block diagram showing the configuration of the shot image input device 22 .
- The shot image input device 22 is configured with the following components:
- a camera control device 31 for controlling the test image shooting camera 4, in accordance with a signal transmitted from the test image output device 21, so that the camera performs a shooting operation;
- a shot image storage device 32 for storing the image data of a shot image taken by the test image shooting camera 4;
- a geometric correction data calculation device 33 for calculating a geometric correction table on the basis of the shot image of the geometric correction pattern image stored in the shot image storage device 32 and the coordinate information corresponding to that pattern image transmitted from the test image output device 21; and
- a geometric correction device 34 for performing a geometric correction operation on the image data of the test color image stored in the shot image storage device 32, on the basis of the geometric correction table calculated by the geometric correction data calculation device 33, and outputting the geometrically corrected image data to the correction data calculation device 23.
- Upon receiving, from the test image output device 21, a signal indicating that the test image output device 21 has outputted image data to the projector 1, the camera control device 31 outputs a command to the test image shooting camera 4 to make it perform a shooting operation.
- the shot image storage device 32 receives and stores the image data having been transmitted from the test image shooting camera 4 .
- the shot image storage device 32 outputs the shot image data to the geometric correction data calculation device 33 .
- the shot image storage device 32 outputs the shot image data to the geometric correction device 34 .
- The geometric correction data calculation device 33 receives the shot image of the geometric correction pattern image from the shot image storage device 32, as well as the coordinate information corresponding to the geometric correction pattern image from the test image output device 21, and then performs processing for calculating the geometric correction table.
- the geometric correction table is table data for converting the coordinates of the image data having been transmitted from the test image shooting camera 4 into the coordinates of the image data to be outputted from the test image output device 21 .
- the geometric correction table is calculated as follows.
- cross patterns are detected from the shot image of the geometric correction pattern image having been transmitted from the shot image storage device 32 , and then the coordinates of the center locations of the detected cross patterns are acquired.
- the geometric correction table is calculated on the basis of the relationship between the twenty groups of coordinates of the center locations of the acquired cross patterns and the coordinates corresponding to the geometric correction pattern image having been transmitted from the test image output device 21 .
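One way to realize such a table is to fit a mapping from camera coordinates to output-image coordinates through the twenty centre correspondences. The patent does not state the mapping model; the least-squares affine fit below is a simplifying assumption (a real implementation might use a projective or piecewise model), and the grid positions and camera distortion are made up for the example:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map taking src points to dst points (3x2 matrix)."""
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return T

# Cross-pattern centres in output-image coordinates (assumed 4 x 5 grid).
image_pts = np.array([[128 + 256 * c, 128 + 256 * r]
                      for r in range(4) for c in range(5)], dtype=float)

# Synthetic "camera" coordinates produced by a known affine distortion.
true_T = np.array([[0.9, 0.05], [-0.04, 0.95], [12.0, -7.0]])
camera_pts = np.hstack([image_pts, np.ones((20, 1))]) @ true_T

T = fit_affine(camera_pts, image_pts)          # camera -> image mapping
mapped = np.hstack([camera_pts, np.ones((20, 1))]) @ T
print(np.max(np.abs(mapped - image_pts)) < 1e-6)   # exact recovery here
```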
- the geometric correction table having been calculated by the geometric correction data calculation device 33 is outputted to the geometric correction device 34 .
- The geometric correction device 34 receives the calculated geometric correction table from the geometric correction data calculation device 33, as well as the shot image of the test color image data from the shot image storage device 32. Subsequently, the geometric correction device 34 performs a coordinate conversion operation on the shot image of the test color image data and then outputs the converted image data to the correction data calculation device 23.
- the correction data calculation device 23 calculates at least one of the display characteristics data M and M′ on the basis of the coordinate information on the test image having been transmitted from the test image output device 21 and the shot image of the test color image, upon which the geometric correction has been performed, having been transmitted from the shot image input device 22 , and then outputs the calculated display characteristics data to the correction data storage device 24 .
- FIG. 9 is a diagram showing sample areas set to the shot image of the test color image.
- FIG. 10 is a diagram showing sample areas of a light-emitting area and non-light-emitting areas.
- the correction data calculation device 23 acquires a signal value in a predetermined sample area from the shot image of the test color image upon which the geometric correction operation has been performed.
- sample areas are set as shown in FIG. 9. That is, each of the sample areas is set as an area of 9×9 pixels. These sample areas are evenly arranged in four rows and five columns so that they can individually be placed at locations corresponding to the twenty coordinates of the center locations of light-emitting areas in the test image shown in FIG. 5, and are defined as sample areas S1 through S20.
- the signal values of sample areas other than a light-emitting area in the test color image are individually acquired.
- the sum of signal values of pixels in each sample area (the sum of signal values of 81 pixels when one sample area has 9×9 pixels) is calculated and averaged.
- the mean value is set as the value of a flare signal in a coordinate of a center location of each sample area.
- the reason for adding and averaging the data of a plurality of pixels is that doing so improves the reliability of the data under circumstances in which the amount of light occurring owing to the effect of flare is not large. This processing is possible because it can be assumed that the flare does not include high-frequency components. By performing processing on the basis of the signal values of only the sample areas, processing time can be shortened.
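The averaging step above can be sketched in Python (a minimal illustration, assuming a single-channel shot image held as a NumPy array; the function name and all values are invented for illustration):

```python
import numpy as np

def sample_flare_signals(shot, centers, half=4):
    """Average each (2*half+1)-square sample area of a single-channel shot
    image down to one flare signal (81 pixels when half=4, i.e. 9x9)."""
    signals = []
    for r, c in centers:
        area = shot[r - half:r + half + 1, c - half:c + half + 1]
        signals.append(area.mean())  # averaging suppresses sensor noise
    return np.array(signals)

shot = np.full((40, 40), 0.02)            # synthetic low-level flare image
centers = [(10, 10), (10, 30), (30, 10)]  # sample-area center coordinates
signals = sample_flare_signals(shot, centers)
```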
- flare signals of all other pixel locations are acquired by performing interpolation processing using the nineteen flare signals.
- the flare signal of the light-emitting area is acquired by performing an extrapolation operation using a flare signal in an adjacent pixel location.
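The interpolation step above can be illustrated in one dimension with a short Python sketch (values are invented; the embodiment interpolates in two dimensions, and `np.interp` simply holds the end values outside the outermost sample centers, which is only a crude stand-in for the extrapolation described above):

```python
import numpy as np

# Flare signals measured at sparse sample-area centers are linearly
# interpolated to every pixel location in between.
centers = np.array([10.0, 40.0, 70.0])   # sample-area center coordinates
flare = np.array([0.02, 0.05, 0.03])     # flare signals measured there
pixels = np.arange(80, dtype=float)
flare_all = np.interp(pixels, centers, flare)
```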
- the distribution of twenty flare signals is acquired for each channel.
- the distribution is regarded as the distribution acquired when only the center pixel of each of the twenty light-emitting areas shown in FIG. 6 emits light (namely, only the center pixel of each light-emitting area is a light-emitting pixel).
- the sum of signal values of one light-emitting area is divided by 65536, and then the value acquired after the division is defined as the value of a flare signal of one light-emitting pixel.
- the distribution acquired when only the center pixel of each of the twenty light-emitting areas emits light is converted into the distribution of flare signals each of which is outputted from a light-emitting pixel.
- the distributions of flare signals at other pixel locations are calculated by performing interpolation processing using the distributions of flare signals at adjacent light-emitting pixel locations.
- the distribution of flare signals when all pixels exist in light-emitting pixel locations is calculated.
- the distribution of all of the 1310720 flare signals is calculated.
- the distribution configured with the flare signals of the 1310720 pixels is produced for each of the 1310720 light-emitting pixel locations, so that a one-to-one correspondence between flare-signal distributions and light-emitting pixel locations is achieved. Consequently, the display characteristics data M represented as a matrix of 1310720 rows and 1310720 columns can be provided. As described previously, the display characteristics data is produced for each of the three channels. In the matrix of the display characteristics data M, the index j of each element mij corresponds to the coordinate of a light-emitting pixel, and the index i corresponds to the coordinate of the pixel at which a flare signal is acquired.
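The structure of the matrix described above can be illustrated at toy scale in Python (four pixels instead of 1310720; the flare values are invented for illustration):

```python
import numpy as np

# Column j of the display characteristics matrix M holds the flare
# distribution observed when only light-emitting pixel j emits a unit
# signal, so element m_ij is the signal measured at pixel i when pixel j
# emits light.
flare_dists = [np.array([1.0, 0.1, 0.0, 0.0]),
               np.array([0.1, 1.0, 0.1, 0.0]),
               np.array([0.0, 0.1, 1.0, 0.1]),
               np.array([0.0, 0.0, 0.1, 1.0])]
M = np.column_stack(flare_dists)
```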
- the display characteristics data M′ that is calculated by the second display characteristics data calculation module of the correction data calculation device 23 and is then used by the second or fourth correction module of the flare correction device 12 , is calculated as follows.
- the distribution of twenty flare signals corresponding to twenty light-emitting areas is moved to a coordinate that enables the coordinate of the center position of a light-emitting area to correspond to the coordinate of the center position of an image, and then the twenty flare signals are averaged, whereby the display characteristics data M′ can be acquired.
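The shift-and-average computation of M′ described above can be sketched in Python (an illustrative stand-in: a circular shift via `np.roll` replaces whatever boundary handling the embodiment uses, and all array values are invented):

```python
import numpy as np

def average_shifted_flares(flare_dists, centers, image_center):
    """Shift each measured flare distribution so that its light-emitting
    center lands on the image center, then average them to obtain a single
    location-independent distribution (a sketch of computing M')."""
    acc = np.zeros_like(flare_dists[0], dtype=float)
    for dist, (r, c) in zip(flare_dists, centers):
        shift = (image_center[0] - r, image_center[1] - c)
        acc += np.roll(dist, shift, axis=(0, 1))  # circular shift stand-in
    return acc / len(flare_dists)

d1 = np.zeros((5, 5)); d1[1, 1] = 1.0  # flare peaked at its emitting pixel
d2 = np.zeros((5, 5)); d2[3, 3] = 1.0
m_prime = average_shifted_flares([d1, d2], [(1, 1), (3, 3)], (2, 2))
```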
- the flare correction device 12 performs a correction operation on the color image data using the display characteristics data M or M′ that has been calculated and then outputs the corrected color image data to the image data storage device 11 .
- a gradation correction operation is performed on the acquired corrected color image data taking gradation characteristics of a projector into account.
- the gradation correction technique is known as a technique for color reproduction processing, so the description thereof will be omitted.
- the present invention is not limited thereto.
- an arbitrary image display device for example, a CRT or a liquid crystal panel, can be applied to the present invention.
- a test image shooting camera configured with an RGB digital camera is used in the above description.
- a monochrome camera or a multiband camera with four or more bands may be used.
- a measuring device for performing spot measurement, such as a spectroradiometer, a luminance meter, or a colorimeter, may be used as means for acquiring the spatial distribution of display colors instead of the camera. In this case, the accuracy of measurement can be expected to increase.
- an example in which both the image data projected by the projector and the image data acquired by the test image shooting camera are 1280 pixels wide × 1024 pixels high has been illustrated, but the number of pixels may be changed.
- the number of pixels for displaying and the number of pixels for shooting may be different from each other.
- the combination of the number of pixels for displaying and the number of pixels for shooting can be arbitrarily selected.
- the calculation of the display characteristics data is performed in accordance with the size of the corrected color image data.
- the number of cross patterns in the geometric correction pattern, the number of light-emitting areas in the test color image, and the number of sample areas for flare signal measurement are set to twenty, but each number may be set to an arbitrary number.
- the operator of the image correction device may set each number to a desired number considering the accuracy of measurement and measurement time.
- the image data upon which a flare correction operation has been performed is stored in advance, and then the corrected image data is used when the image data is projected onto a screen.
- the image data inputted from an image source may be flare-corrected and then be displayed in real time.
- a display system performs processing as hardware.
- the same function may be achieved by causing a computer, to which a display device such as a monitor and a measurement device such as a digital camera are connected, to execute a display program, or may be achieved by a display method applied to a system that has the above-described configuration.
- the effect of light from another pixel location upon a display color in an arbitrary pixel location is preferably reduced, whereby a display system capable of displaying a color image with high color reproducibility can be achieved.
- since test color image measuring means for measuring the spatial distribution of display colors corresponding to test color image data is provided in this embodiment, the display characteristics of the color image display device can be accurately and simply measured. Consequently, changes in the color image display device over time (secular change) can be accommodated.
- the spatial distribution of display colors can be more easily acquired.
- the display characteristics can be more accurately measured.
- by using a luminance meter, a colorimeter, or a spectroradiometer as the test color image measuring means, the display characteristics can be more accurately measured.
- by using a monochrome camera as the test color image measuring means, a low-cost device configuration can be achieved.
- by using a multiband camera as the test color image measuring means, not only acquisition of accurate display characteristics but also accurate spatial measurement can be achieved.
- flare correction based on a flare model can be accurately performed.
- since the corrected color image data is calculated on the basis of flare distribution data having been calculated by the flare calculating means, the calculation of the corrected image data can be easily performed.
- optimal flare correction in which the loads and accuracy of calculation are taken into account can be performed by setting the constant K in equation 13 to an appropriate value.
- optimal flare correction in which the loads and accuracy of calculation are taken into account can be performed by setting the number of terms on the right side to an appropriate value.
- a display system capable of performing color reproduction operations as intended by reducing the effect of optical flare can be provided.
Abstract
A display system according to an embodiment of the present invention includes: a color image display device for displaying a color image; and an image correction device for producing corrected color image data to be outputted to the color image display device by correcting color image data. In the display system, the image correction device calculates the corrected color image data from the color image data so as to correct optical flare of the color image display device on the basis of relationship(s) between one of a plurality of test color image data outputted to the color image display device and the spatial distribution of display colors of a test color image that has been displayed on the color image display device in accordance with the one of the plurality of test color image data.
Description
- This application is a continuation application of PCT/JP2004/019410 filed on Dec. 24, 2004 and claims benefit of Japanese Application No. 2003-431384 filed in Japan on Dec. 25, 2003, the entire contents of which are incorporated herein by this reference.
- 1. Field of the Invention
- The present invention relates to a display system for correcting the effect of optical flare and then displaying images.
- 2. Description of the Related Art
- Recently, techniques for reproducing an image of a subject on a display with accurate color reproduction have been actively studied so as to facilitate electronic commerce, electronic art galleries, electronic museums, etc.
- In these studies, color characteristics of an image input device and an image display device are measured, and using the information on the color characteristics of these devices, the correction of a color signal is performed. It is important to standardize the format of information on color characteristics of devices so as to enable color reproduction systems to become popular. The International Color Consortium (ICC) defines color characteristics information on devices as device profiles (see, http://www.color.org).
- In the above-described ICC's device profiles and current color image systems, color characteristics are defined for color image devices or image data as space-coordinate-independent information. Using the color information, color reproduction is performed.
- The above-described ICC's device profiles and current color image systems cannot take into account the effect that a color that exists in one position in an image has upon a color that exists in another position in the image, for example, the effect of optical flare occurring in a display device. Therefore, in the display device affected by optical flare, exact color reproduction cannot be adequately performed.
- The present invention has been made in view of the above-described background, and it is an object of the present invention to provide a display system capable of performing color reproduction as intended by reducing the effect of optical flare.
- According to an embodiment of the present invention, there is provided a display system includes: a color image display device for displaying a color image; and an image correction device for producing corrected color image data to be outputted to the color image display device by correcting color image data. In the display system, the image correction device calculates the corrected color image data from the color image data so as to correct optical flare of the color image display device on the basis of relationship(s) between one of a plurality of test color image data outputted to the color image display device and the spatial distribution of display colors of a test color image that has been displayed on the color image display device in accordance with the one of the plurality of test color image data.
-
FIG. 1 is a schematic diagram showing a configuration of a display system according to an embodiment of the present invention. -
FIG. 2 is a block diagram showing a configuration of an image correction device according to the above-described embodiment. -
FIG. 3 is a block diagram showing a configuration of a flare calculation device according to the above-described embodiment. -
FIG. 4 is a diagram showing image data of a geometric correction pattern outputted by a test image output device in the above-described embodiment. -
FIG. 5 is a diagram showing text data in which coordinate information on center positions of cross patterns is stored in the above-described embodiment. -
FIG. 6 is a diagram showing an area to be divided in test color image data outputted by the test image output device in the above-described embodiment. -
FIG. 7 is a diagram showing text data in which coordinate information on sub-areas into which an area is divided is stored in the above-described embodiment. -
FIG. 8 is a block diagram showing a configuration of a shot image input device according to the above-described embodiment. -
FIG. 9 is a diagram showing sample areas set to a shot image of the test color image in the above-described embodiment. -
FIG. 10 is a diagram showing sample areas of a light-emitting area and non-light-emitting areas.
- First, a principle used in the present invention will be described in detail prior to the detailed description of an embodiment of the present invention. This principle is for acquiring display image data that is the same as the original input image data by providing corrected input image data in a case where the input image data has been undesirably changed into different display image data by being affected by, for example, optical flare (hereinafter referred to as flare where appropriate) of a display device.
- [Principle]
- Input image data having the number of pixels N, which is inputted into a display device, is represented as (p1, p2, . . . , pN)t. The superscript t represents transposition. The light distribution of the image actually displayed with respect to the input image data is represented discretely as image data that has the number of pixels N, and the discretized display image data is assumed to be (g1, g2, . . . , gN)t. This display image data can be acquired by shooting an image displayed on the display device using, for example, a digital camera. In general, since this display image data is affected by flare of the display device, it does not correspond to the input image data. In the display image data, part of the light emitted by means of signals outputted from other pixel locations is superposed onto the light emitted in one pixel location. Even if the value of an input image signal inputted into the display device is zero, the value of the display image data does not generally become zero. In this case, the display image data is represented as a bias (o1, o2, . . . , oN)t. Taking the above-described effects into account, the relationship between the input image data and the display image data is modeled as shown in the following equation 1.
gi=mi1p1+mi2p2+ . . . +miNpN+oi (i=1, . . . , N) [Equation 1]
- The following equation 2 represents equation 1 in a simple manner using capital letters corresponding to the lowercase letters that represent the individual elements in the matrix form of equation 1.
G=MP+O [Equation 2]
- In equation 2 and the following equations, matrices and vectors are represented in bold letters. However, in the text, these matrices and vectors are represented in normal-width letters for convenience in writing.
- In the above-described equations 1 and 2, the matrix M including the elements mij (i=1 to N, j=1 to N) is referred to as the display characteristics of the display device.
- It is desired that the above-described display image data G exactly correspond to the above-described input image data P. However, as described previously, since the display image is affected by flare or the like, the display image data is generally not the same as the input image data.
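The display model above can be illustrated numerically with a toy Python sketch (N = 3 instead of 1310720; the matrix and bias values are invented for illustration):

```python
import numpy as np

# Equations 1 and 2 at toy scale: the displayed image G is the input
# image P mixed by the display characteristics M, plus the bias O.
M = np.array([[1.00, 0.05, 0.02],
              [0.05, 1.00, 0.05],
              [0.02, 0.05, 1.00]])  # off-diagonal elements model flare
O = np.full(3, 0.01)               # bias: output for an all-zero input
P = np.array([0.2, 0.8, 0.4])      # input image data
G = M @ P + O                      # equation 2: G = MP + O
```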
- Accordingly, in order to make the display image data correspond exactly to or be very similar to the original input image data by correcting the input image data and using the corrected input image data, a method of calculating the corrected input image data will be considered. When the corrected input image data is represented as P′, corrected display image data G′ that is display image data corresponding exactly to the corrected input image data P′ is as shown in the following
equation 3.
G′=MP′+O [Equation 3] - A conditional equation for making the corrected display image data G′ shown in
equation 3 correspond exactly to the original input image data P is as shown in the followingequation 4.
G′=P [Equation 4] - In order to satisfy
equation 4, the corrected input image data P′ shown in the followingequation 5, which is calculated usingequation 3, can be used.
P′=M⁻¹[P−O] [Equation 5] - where the display characteristics M are known.
- As described previously, the display characteristics M shown in
equation 5 are represented as a matrix of N×N. For example, in a case where a display device has 1280×1024 pixels, the value of N becomes 1280×1024=1310720. As is apparent from this case, data size generally becomes very large. - On the other hand, when the display characteristics M have special characteristics, computing may be more easily performed. For example, the case in which the spreading of light occurring due to flare or the like of the display device does not depend on pixel locations and is evenly distributed will be considered. When the display characteristics in this case are represented as M′(m′1, m′2, . . . , m′N), the above-described
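At toy scale, the correction of equation 5 can be sketched in Python (illustrative values; `np.linalg.solve` is used instead of forming the inverse, and even that is impractical at the full N = 1310720, which motivates the approximations that follow):

```python
import numpy as np

# Equation 5 at toy scale: P' = M^{-1}(P - O).
M = np.array([[1.00, 0.05],
              [0.05, 1.00]])
O = np.array([0.01, 0.01])
P = np.array([0.5, 0.3])           # desired display image
P_corr = np.linalg.solve(M, P - O) # corrected input image data P'
G_corr = M @ P_corr + O            # equation 3
assert np.allclose(G_corr, P)      # equation 4: G' = P is satisfied
```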
equation equation 6 using a convolution operation of the display characteristics M′ and the input image data P.
G=M′*P+O [Equation 6] - where the sign “*” represents a convolution operation.
- The following equation 7 can be acquired by replacing the input image data P in
equation 6 with the corrected input image data P′ and by using the condition shown in the above-describedequation 4.
P=M′*P′+O [Equation 7] - where, when the display characteristics M′ are known, the corrected input image data P′ can be calculated using equation 7. That is, a technique of deconvolution for calculating one image (here, P′) using another known image (here, M′) from a convolution image of two images (here, P−O) is known. For example, a method described in chapter 7 of document 1 (Rosenfeld and A. C. Kak, Digital Picture Processing, Academic Press 1976 (whose translation was supervised by Makoto Nagao, kindaikagaku, 1978)) can be used.
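A minimal numerical sketch of such a deconvolution follows (assuming circular boundary conditions so the Fourier transform diagonalizes the convolution, and a kernel whose spectrum has no zeros; all values are invented and this is not the method of document 1):

```python
import numpy as np

# Equation 7 solved in the Fourier domain: if P - O = M' * P', then
# P' = IFFT(FFT(P - O) / FFT(M')).
Mp = np.array([0.8, 0.1, 0.0, 0.1])        # even, location-independent spread
P_target = np.array([0.4, 0.2, 0.6, 0.3])  # desired display image
O = 0.01
P_corr = np.real(np.fft.ifft(np.fft.fft(P_target - O) / np.fft.fft(Mp)))
# check: circular convolution of M' and P' plus O reproduces the target
G = np.real(np.fft.ifft(np.fft.fft(Mp) * np.fft.fft(P_corr))) + O
assert np.allclose(G, P_target)
```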
- The correction method shown in the above-described
equation 5 includes an inverse matrix operation of a matrix M that has many elements. The correction method shown in the above-described equation 7 anddocument 1 includes convolution inverse operations. Accordingly, these complex operations lead to significant loads on a processor and long processing time. - On the other hand, as a simpler and easier correction method, the method of calculating the corrected input image data by subtracting the amount of flare from the input image data will be considered. Here, the display image data G is modeled by being represented as the sum of the input image data P, the bias O, and a flare component F that is data of flare distribution representing the effect of flare or the like.
G=P+O+F [Equation 8] - The flare component F shown in equation 8 can be represented as shown in the following equation 9 using
equation 2.
F=G−P−O=MP−P=(M−E)P [Equation 9] - where the letter E represents a unit matrix of N×N.
- The corrected display image data G′ can be represented as shown in the following equation 10 by inputting P−F−M⁻¹O, which is acquired by using F in equation 9, into equation 3 as the corrected input image data P′.
G′=P−(M−E)²P [Equation 10]
equation 10 becomes smaller than the value of (M−E)P in equation 9. The value of −(M−E)2P represents a flare correction error of the corrected display image data G′, and the value of (M−E)P represents the effect of flare upon the display image data G before the correction. Accordingly, this shows the improvement of the effect of flare. - In addition, F can be acquired from the following
equation 11 taking the above-described result into account.
F=(M−E)P−(M−E)²P [Equation 11]
The corrected display image data G′ can be represented as shown in the following equation 12 by inputting P−F−M⁻¹O, which is acquired by using the above-described F, into equation 3 as the corrected input image data P′.
G′=P+(M−E)³P [Equation 12]
- The value of (M−E)³P of the second term of equation 12 represents a flare correction error. As is clear from the fact that the order thereof is three, the effect of flare becomes further smaller.
equation 13. - Accordingly, the corrected input image data P′ for correcting the flare F shown in
equation 13 is as shown in the following equation 14. - When the value of K in equation 14 becomes larger, the correction for increasingly reducing the effect of flare can be performed. Alternatively, by setting the value of K to an appropriate value taking calculation complexity and calculation time into account, the flare can be desirably removed lightening the load on a processing system.
- Even if the flare correction is performed in accordance with the above-described method, when the spread of light occurring due to flare or the like does not depend on pixel locations and is evenly distributed, the letter F in equation 9 can be replaced as shown in the following
equation 15 using a convolution operation.
F=M′*P−P [Equation 15] - Similarly,
equation 13 can be replaced with the followingequation 16.
F=(M′−E′)*P−(M′−E′)*(M′−E′)*P+(M′−E′)*(M′−E′)*(M′−E′)*P−. . . [Equation 16]
where the letter E′ represents a column vector in which the value of a component corresponding to a center position of an image is one, and the values of other components are zero. - Accordingly, the corrected input image data P′ corresponding to equation 14 is obtained as shown in the following
equation 17.
P′=P−Õ−(M′−E′)*P+(M′−E′)*(M′−E′)*P−(M′−E′)*(M′−E′)*(M′−E′)*P+. . . [Equation 17] - where Õ represents the value obtained by deconvolution of O using M′.
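The convolution form of equations 16 and 17 can likewise be sketched in Python (assumptions: circular convolution via the FFT, the delta kernel E′ placed at index 0 rather than at the image center, and Õ set to zero; all values are invented):

```python
import numpy as np

def conv_series_corrected(Mp, P, O_tilde, K):
    """Equation 17 sketch: P' = P - O~ - (M'-E')*P + (M'-E')*(M'-E')*P - ..."""
    N = len(P)
    Ep = np.zeros(N); Ep[0] = 1.0        # delta kernel (assumed at index 0)
    fA = np.fft.fft(Mp - Ep)             # spectrum of M' - E'
    F = np.zeros(N)
    term = P.astype(float)
    for k in range(1, K + 1):
        term = np.real(np.fft.ifft(fA * np.fft.fft(term)))  # (M'-E') * term
        F += ((-1) ** (k + 1)) * term    # alternating series of equation 16
    return P - O_tilde - F

Mp = np.array([0.9, 0.05, 0.0, 0.05])    # location-independent flare kernel
P = np.array([0.4, 0.2, 0.6, 0.3])       # desired display image (O~ = 0)
P_corr = conv_series_corrected(Mp, P, np.zeros(4), 20)
```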
- The description has been given with reference to the case in which image data is handled as one-channel data. However, in the case of color images, the color image data thereof is generally handled as three-channel data. Therefore, in this case, the above-described display characteristics M or M′ are calculated for each of R, G, and B channels, and the flare correction is performed on the basis of the calculated display characteristics.
- Furthermore, when a color image is displayed by means of image data that has four or more primary colors, the above-described flare correction processing is performed on an image of each channel, whereby the color image can be also displayed as intended.
- The value of a signal outputted from a display device or a color image display device such as a digital camera sometimes has a nonlinear relationship with brightness. In this case, the above-described processing is required to be performed after the nonlinearity of each signal is corrected. However, since techniques for correcting gradation characteristics are known, the description thereof will be omitted. That is, the above principle has been described as the principle in linear space after the nonlinearity of each signal is corrected.
- An embodiment of the present invention will be described in detail with reference to the accompanying drawings.
- An embodiment of the present invention is shown in
FIGS. 1 through 10. FIG. 1 is a schematic diagram showing a configuration of a display system.
projector 1, which is a color image display device, for projecting images; animage correction device 2 for producing corrected images to be projected by theprojector 1; ascreen 3, which is a color image display device, on which images are projected by theprojector 1; and a testimage shooting camera 4 that is test color image measuring means such as a digital color camera and is disposed so as to shoot a whole image displayed on thescreen 3. - The test
image shooting camera 4 is included in the image correction device in a broad sense and is provided with a circuit capable of correcting blurs on an image due to the optical characteristics of a shooting lens and the variations of sensitivities of image pickup devices. For example, before digital image data of RGB is outputted from the testimage shooting camera 4, the correction is performed on the digital image data. In addition, the testimage shooting camera 4 outputs a linear response signal depending on incident light intensity. - Operations of this display system will now be described.
- In the display system, operations for acquiring display characteristics data that is required for correcting optical flare are as follows.
- The
image correction device 2 outputs predetermined test color image data that has been stored therein in advance to theprojector 1. - The
projector 1 projects the test color image data provided by theimage correction device 2 on thescreen 3. - The
image correction device 2 controls the testimage shooting camera 4 so that the testimage shooting camera 4 shoots an image with the distribution of display colors corresponding to the test color image displayed on thescreen 3 and transfers the data of the shot image to theimage correction device 2. Subsequently, theimage correction device 2 receives the transferred color image data. - The
image correction device 2 calculates display characteristics data used for correcting color image data on the basis of the color image data having been acquired from the testimage shooting camera 4 and the original test color image data having been provided to theprojector 1. - Next, operations for projecting general images by means of the display system that has acquired the display characteristics data are as follows. When these general images are projected, since the above-described test
image shooting camera 4 is not required, it may be removed. - The
image correction device 2 stores, in advance, color image data that has been converted so that the color image data can have a linear relationship with brightness. - As describe previously, the
image correction device 2 corrects the color image data that has been stored therein in advance using the calculated display characteristics data and then stores the corrected color image data. - When color image data to be displayed is selected by an operator, the
image correction device 2 outputs corrected color image data corresponding to the selected color image data to theprojector 1. - The
projector 1 projects an image on thescreen 3 on the basis of the corrected color image data having been provided by theimage correction device 2. - Consequently, an image for which the effect of flare can be corrected is displayed on the
screen 3, whereby a person who has displayed the image can have a viewer see the image as intended. - In this embodiment, each of the image data inputted into the
projector 1, the image data outputted from the testimage shooting camera 4, and the image data processed in theimage correction device 2 are 1280 pixels wide×1024 pixels high and are a three-channel image data of three colors, RGB. -
FIG. 2 is a block diagram showing the configuration of theimage correction device 2. - The
image correction device 2 is configured with the following components: aflare calculation device 13 for outputting predetermined test color image data that has been stored therein in advance to theprojector 1 and for acquiring color image data (shot image data) having been shot on the basis of the test color image data from the testimage shooting camera 4 and for calculating display characteristics data on the basis of the acquired shot image data and the original test color image data; an imagedata storage device 11 for storing color image data to be displayed as well as corrected color image data that is acquired as a result of correcting the color image data by means of a flare correction device 12 (described later); and theflare correction device 12 for acquiring the color image data from the imagedata storage device 11 and for correcting the acquired color image data using the display characteristics data having been calculated by theflare calculation device 13 and for outputting the corrected color image data to the imagedata storage device 11 again so as to make the imagedata storage device 11 store the corrected color image data. - Next, operations of the
image correction device 2 will be described. - Operations for acquiring the display characteristics data are as follows.
- The
flare calculation device 13 outputs the test color image data to theprojector 1 so as to make theprojector 1 display a test color image on thescreen 3. In synchronization with this operation, theflare calculation device 13 controls the testimage shooting camera 4 so that the testimage shooting camera 4 shoots the image displayed on thescreen 3 and transfers the color image data of the shot image to theflare calculation device 13. Theflare calculation device 13 acquires the color image data of the shot image and calculates the display characteristics data on the basis of the acquired color image data and the original test color image data-and then stores the calculated display characteristics data. There are two kinds of display characteristics data calculated by theflare calculation device 13. One is a matrix M of N×N defined by the above-describedequation equation 6. Since the image data is RGB three-channel image data, it is assumed that the representation of these equations includes all data of the RGB three-channel image data. Since the number of pixels of the image data is 1280×1024=1310720, the value of N becomes 1310720 in this case. Operations for calculating the display characteristics data M and M′ by theflare calculation device 13 will be described later with reference toFIG. 3 . - Next, operations for correcting the color image data are as follows.
- The
flare correction device 12 reads out the color image data stored in the image data storage device 11 and also inputs, from the flare calculation device 13, one of the two kinds of display characteristics data M and M′ in accordance with the flare correction method. Subsequently, the flare correction device 12 performs a flare correction operation on the read-out color image data using the read-out display characteristics data and then calculates the corrected color image data. - In the description of this embodiment, the color image data and the corrected color image data correspond to the input image data P and the corrected input image data P′ in the above-described principle, respectively. As described previously, since, in the color image data and the corrected color image data, each pixel corresponds to RGB three-channel image data, it is assumed that the letters P and P′ individually represent the RGB three-channel image data.
- The
flare correction device 12 is configured with the following first to fourth correction modules. The flare correction device 12 uses the display characteristics data M or M′ read out from the flare calculation device 13 for calculating the corrected color image data in these first to fourth correction modules. - The first correction module calculates the corrected color image data P′ by inputting the display characteristics data M into
equation 5. It is assumed that the bias O has been measured and stored in the flare correction device 12 in advance. For example, the bias O can be measured by projecting a test color image in which the values of all components are zero from the projector 1 onto the screen 3 and shooting the projected test color image displayed on the screen 3 using the test image shooting camera 4. - The second correction module calculates the corrected color image data P′ by inputting the display characteristics data M′ into equation 7 and performing a deconvolution operation.
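As an illustration, the first correction module can be sketched numerically, assuming that equation 5 takes the form P′ = M⁻¹(P − O) implied by the linear flare model; the 4-pixel matrix, bias, and image values below are toy stand-ins rather than the 1310720-dimensional quantities of the embodiment:

```python
import numpy as np

# Toy flare model for a 4-pixel image: each pixel receives a small
# flare contribution (5%) from every other pixel.
N = 4
M = np.eye(N) + 0.05 * (np.ones((N, N)) - np.eye(N))
O = np.full(N, 2.0)                      # bias measured from an all-zero test image
P = np.array([100.0, 50.0, 80.0, 10.0])  # desired color image data

# Assumed form of equation 5: solve M @ P_corr = P - O
# (solving the linear system is preferable to forming the explicit inverse).
P_corr = np.linalg.solve(M, P - O)

# Displaying P_corr through the flare model reproduces the desired P.
assert np.allclose(M @ P_corr + O, P)
```

At the full resolution of the embodiment, M is far too large to invert densely; the series-based third and fourth modules described below exist precisely to avoid that cost.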
- The third correction module is flare calculating means and calculates the corrected color image data P′ by inputting the display characteristics data M into equation 14. The constant K can be arbitrarily set by an operator of the
image correction device 2. - The fourth correction module is flare calculating means and calculates the corrected color image data P′ by inputting the display characteristics data M′ into
equation 17. The number of terms of the part corresponding to equation 16 in equation 17 (the number of terms corresponds to the above-described constant K) can also be arbitrarily set by an operator of the image correction device 2. - Thus, the corrected color image data calculated by the
flare correction device 12 is outputted from the flare correction device 12 to the image data storage device 11 and is then stored in the image data storage device 11.
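The series-based correction of the third module can be sketched as follows, on the assumption that equation 14 computes the flare distribution data F as a truncated alternating series in (M − E), so that P′ = P − F approaches M⁻¹P as the constant K grows; both the series form and the toy sizes are assumptions made for illustration:

```python
import numpy as np

N = 4
E = np.eye(N)
M = E + 0.05 * (np.ones((N, N)) - E)       # weak-flare toy model
P = np.array([100.0, 50.0, 80.0, 10.0])

def flare_series(M, P, K):
    """Assumed form of equation 14: F = sum_{k=1..K} (-1)**(k+1) (M-E)^k P."""
    D = M - np.eye(M.shape[0])
    F = np.zeros_like(P)
    term = P
    for k in range(1, K + 1):
        term = D @ term                     # now equals (M - E)^k P
        F += (-1.0) ** (k + 1) * term
    return F

P_corr = P - flare_series(M, P, K=12)       # third correction module

# With enough terms the result approaches the exact inverse-based correction.
assert np.allclose(P_corr, np.linalg.solve(M, P), atol=1e-6)
```

Because each term only needs one matrix-vector product, the operator can trade accuracy against computation time through K, which matches the remark about the constant K above.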
- An operator operates the
image correction device 2 and selects desired corrected color image data stored in the image correction device 2. The corrected color image data having been selected is read out from the image data storage device 11 and is then outputted to the projector 1. The projector 1 receives the corrected color image data and then projects an image corresponding to the corrected color image data on the screen 3, whereby the color image in which the effect of flare is reduced can be displayed and viewed on the screen 3.
FIG. 3 is a block diagram showing the configuration of the flare calculation device 13.
- Operations of the
flare calculation device 13 will be described. - The test
image output device 21 outputs the test color image data used for measuring display characteristics to the projector 1 as well as transmits a signal showing that it has outputted the test color image data to the shot image input device 22. - Furthermore, the test
image output device 21 outputs, to the correction data calculation device 23, information on the test color image data having been outputted to the projector 1. - Upon receiving the above-described signal from the test
image output device 21, the shot image input device 22 controls the test image shooting camera 4 so that the test image shooting camera 4 shoots the test color image projected on the screen 3 by the projector 1. The color image having been shot by the test image shooting camera 4 is transferred to the shot image input device 22 as shot image data. The shot image input device 22 outputs the acquired shot image data to the correction data calculation device 23. - The correction
data calculation device 23 performs processing for calculating display characteristics data on the basis of the information on the original test color image data having been transmitted from the test image output device 21 and the shot image data having been transmitted from the shot image input device 22. - That is, the correction
data calculation device 23 is configured with two display characteristics data calculation modules corresponding to the two kinds of display characteristics data M and M′, respectively. The first and second display characteristics data calculation modules calculate the display characteristics data M and M′, respectively. The correction data calculation device 23 is configured so that an operator of the image correction device 2 can select one of the display characteristics data calculation modules. -
FIG. 4 is a diagram showing image data of a geometric correction pattern outputted by the test image output device 21. FIG. 5 is a diagram showing text data in which coordinate information on center positions of cross patterns is stored. - The test
image output device 21 outputs the image data of the geometric correction pattern, for example, shown in FIG. 4 to the projector 1 prior to outputting the test color image data. - The image data of the geometric correction pattern outputted from the test
image output device 21 is image data in which black cross patterns are evenly spaced in four rows and five columns against a white background. - The coordinate information on a center position of each cross pattern (geometric correction pattern data) is outputted from the test
image output device 21 to the shot image input device 22 as text data in the form shown in FIG. 5 . - In the example shown in
FIG. 5 , the center position of a cross pattern in the upper left corner is defined as a coordinate 1, and the center position of a cross pattern on the right side of the coordinate 1 is defined as a coordinate 2. Thus, pixel locations from the coordinate 1 to a coordinate 20 that represents the center position of a cross pattern in the lower right corner are displayed. Here, a coordinate system that represents a coordinate of each pixel as, for example, (0, 0) in the case of the pixel in the upper left corner and (1279, 1023) in the case of the pixel in the lower right corner is employed. - As described later, the shot
image input device 22 produces the coordinate transform table that gives the relationship between the space coordinates of the test color image data and those of the image shot by the test image shooting camera 4, on the basis of this coordinate information and the shot image data of the geometric correction pattern image having been acquired from the test image shooting camera 4. - When the production of the coordinate transform table for the geometric correction is completed, the test
image output device 21 outputs the test color image data to the projector 1. -
FIG. 6 is a diagram showing an area to be divided in test color image data outputted by the test image output device 21. FIG. 7 is a diagram showing text data storing coordinate information on sub-areas into which an area is divided. - As shown in
FIG. 6 , an area having 1280×1024 pixels is evenly divided into four rows and five columns. The test color image data is configured so that, in only one of the sub-areas (a sub-area with 256×256 pixels), one of the RGB colors is displayed, for example, at maximal brightness. Since all of the sub-areas into which the area is divided individually display each of the RGB colors, sixty kinds of test color image data are prepared and are sequentially displayed.

If processing were performed on each pixel, all pixels would sequentially be made to emit light in each of the RGB colors on a pixel-by-pixel basis, and the time taken to acquire data would become too long. In addition, when light is emitted on a pixel-by-pixel basis, it is difficult to measure the effect of flare that one pixel location exerts on another owing to an insufficient amount of light. Furthermore, owing to variations in the maximal brightness of individual pixels, the stability of the data may be low. For these reasons, the area is divided into twenty sub-areas in all. By performing processing on a block-by-block basis, short processing times can be achieved using stable data acquired under the condition of a sufficient amount of light.
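The sixty test images described above can be generated with a short sketch (the sizes follow the 1280×1024 layout with 256×256 blocks given in the text; the value 255 stands in for "maximal brightness", and the generator name is illustrative):

```python
import numpy as np

W, H = 1280, 1024
BW = BH = 256                        # block size: five columns by four rows

def generate_test_images():
    """Yield the 60 test color images: in each image exactly one
    256x256 sub-area is lit in one RGB channel at maximal brightness."""
    for row in range(4):
        for col in range(5):
            for ch in range(3):      # R, G, B
                img = np.zeros((H, W, 3), dtype=np.uint8)
                y, x = row * BH, col * BW
                img[y:y + BH, x:x + BW, ch] = 255
                yield (row, col, ch), img

count, first_sum = 0, None
for _key, img in generate_test_images():
    if first_sum is None:
        first_sum = int(img.sum())   # 255 * 65536 for the single lit block
    count += 1

assert count == 60                   # 20 sub-areas x 3 channels
assert first_sum == 255 * 256 * 256
```

Generating the images lazily avoids holding all sixty full-resolution frames in memory at once, which matches the sequential display described in the text.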
- The coordinate information (pattern data) on the sub-areas in the test color image data is outputted from the test
image output device 21 to the correction data calculation device 23 as text data in the form shown in FIG. 7 . - Referring to the example shown in
FIG. 7 , the same coordinate system as that used in FIG. 5 is used. The sub-area in the upper left corner is defined as a pattern 1, and the sub-area on the right side of the pattern 1 is defined as a pattern 2. Thus, pixel locations from the pattern 1 to a pattern 20 that represents the sub-area in the lower right corner are displayed. - More specifically, the
pattern 1 represented as (0, 0, 256, 256) shows that the pixel location thereof in the upper left corner is (0, 0), and the sub-area thereof corresponds to the area between (0, 0) and the coordinate placed at a distance of (256, 256) from (0, 0). Accordingly, for example, the pattern 20 represented as (1024, 768, 256, 256) shows that the pixel location thereof in the upper left corner is (1024, 768), and the sub-area thereof corresponds to the area between (1024, 768) and the coordinate placed at a distance of (256, 256) from (1024, 768). - As described later, the correction
data calculation device 23 calculates the display characteristics data M or M′ on the basis of the coordinate information and the shot image data of the test color image having been acquired from the test image shooting camera 4. -
FIG. 8 is a block diagram showing the configuration of the shot image input device 22. - The shot
image input device 22 is configured with the following components: a camera control device 31 for controlling the test image shooting camera 4 in accordance with a signal transmitted from the test image output device 21 so that the test image shooting camera 4 performs a shooting operation; a shot image storage device 32 for storing the image data of a shot image having been shot by the test image shooting camera 4; a geometric correction data calculation device 33 for calculating a geometric correction table on the basis of the shot image of a geometric correction pattern image stored in the shot image storage device 32 and the coordinate information corresponding to the geometric correction pattern image having been transmitted from the test image output device 21; and a geometric correction device 34 for performing a geometric correction operation on the image data of the test color image stored in the shot image storage device 32 on the basis of the geometric correction table having been calculated by the geometric correction data calculation device 33 and outputting the geometrically corrected image data to the correction data calculation device 23. - Operations of the shot
image input device 22 will be described. - Upon receiving a signal showing that the test
image output device 21 has outputted image data to the projector 1, the camera control device 31 outputs, to the test image shooting camera 4, a command for controlling the test image shooting camera 4 and making it perform a shooting operation. - The shot
image storage device 32 receives and stores the image data having been transmitted from the test image shooting camera 4. When the shot image data is for the geometric correction pattern image, the shot image storage device 32 outputs the shot image data to the geometric correction data calculation device 33. When the shot image data is for the test color image data, the shot image storage device 32 outputs the shot image data to the geometric correction device 34. - The geometric correction
data calculation device 33 receives the shot image of the geometric correction pattern image from the shot image storage device 32, as well as the coordinate information corresponding to the geometric correction pattern image from the test image output device 21, and then performs processing for calculating a geometric correction table. - The geometric correction table is table data for converting the coordinates of the image data having been transmitted from the test
image shooting camera 4 into the coordinates of the image data to be outputted from the test image output device 21. The geometric correction table is calculated as follows. - First, cross patterns are detected from the shot image of the geometric correction pattern image having been transmitted from the shot
image storage device 32, and then the coordinates of the center locations of the detected cross patterns are acquired. Next, the geometric correction table is calculated on the basis of the relationship between the twenty groups of coordinates of the center locations of the acquired cross patterns and the coordinates corresponding to the geometric correction pattern image having been transmitted from the testimage output device 21. - Many techniques for detecting the cross patterns and for calculating the geometric correction table on the basis of the relationship of the twenty groups of sample coordinates are known. These techniques can be employed as required, but the description thereof will be omitted.
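The text leaves the fitting technique open; one common choice is a least-squares affine map from the twenty camera-coordinate centers to the corresponding projector coordinates, as in this sketch (the affine model and the toy distortion are assumptions made for illustration; a projective/homography fit is equally applicable):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform: dst ~ [src, 1] @ A, with src and
    dst given as (N, 2) arrays of corresponding coordinates."""
    X = np.hstack([src, np.ones((len(src), 1))])     # (N, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)      # A is (3, 2)
    return A

# Twenty cross-pattern centers on a 5x4 grid (projector coordinates)...
proj = np.array([[128 + 256 * c, 128 + 256 * r]
                 for r in range(4) for c in range(5)], dtype=float)
# ...observed by the camera under a known toy affine distortion.
cam = proj @ np.array([[0.9, 0.02], [-0.03, 1.1]]) + np.array([15.0, -7.0])

A = fit_affine(cam, proj)                            # camera -> projector
mapped = np.hstack([cam, np.ones((20, 1))]) @ A
assert np.allclose(mapped, proj, atol=1e-6)
```

The fitted map can then be tabulated over all camera pixels to produce the coordinate transform table described above.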
- The geometric correction table having been calculated by the geometric correction
data calculation device 33 is outputted to the geometric correction device 34. - As described previously, the
geometric correction device 34 receives the geometric correction table having been calculated from the geometric correction data calculation device 33, as well as the shot image of the test color image data from the shot image storage device 32. Subsequently, the geometric correction device 34 performs a coordinate conversion operation on the shot image of the test color image data and then outputs the converted image data to the correction data calculation device 23. - The correction
data calculation device 23 calculates at least one of the display characteristics data M and M′ on the basis of the coordinate information on the test image having been transmitted from the test image output device 21 and the shot image of the test color image, upon which the geometric correction has been performed, having been transmitted from the shot image input device 22, and then outputs the calculated display characteristics data to the correction data storage device 24. - Operations of the correction
data calculation device 23 will be described with reference to FIGS. 9 and 10 . FIG. 9 is a diagram showing sample areas set to the shot image of the test color image. FIG. 10 is a diagram showing sample areas of a light-emitting area and non-light-emitting areas. - In order to obtain the display characteristics data, the correction
data calculation device 23 acquires a signal value in a predetermined sample area from the shot image of the test color image upon which the geometric correction operation has been performed. - The sample areas are set as shown in
FIG. 9 . That is, each of the sample areas is set as an area with 9×9 pixels. These sample areas are evenly arranged in four rows and five columns so that these sample areas can individually be placed at locations corresponding to the twenty coordinates of the center locations of light-emitting areas in the test image shown in FIG. 5 , and are defined as sample areas S1 through S20. - As shown in
FIG. 10 , the signal values of sample areas other than a light-emitting area in the test color image (in the example shown in FIG. 10 , sample areas S2 through S20 other than the light-emitting area in the upper left corner) are individually acquired. The sum of signal values of pixels in each sample area (the sum of signal values of 81 pixels when one sample area has 9×9 pixels) is calculated and averaged. The mean value is set as the value of a flare signal at the coordinate of the center location of each sample area. Thus, first, the distribution of flare signals at the coordinates of only the center locations of the nineteen sample areas other than the light-emitting area is calculated. The reason for summing and averaging the data of a plurality of pixels is that doing so improves the reliability of the data when the amount of light produced by the effect of flare is small. This kind of processing is possible because the flare can be assumed to contain no high-frequency components. By performing processing on the basis of the signal values of only the sample areas, processing time can be shortened. - Next, flare signals of all other pixel locations are acquired by performing interpolation processing using the nineteen flare signals. As shown in the example of
FIG. 10 , when the light-emitting area exists in one of the corners, the flare signal of the light-emitting area is acquired by performing an extrapolation operation using a flare signal in an adjacent pixel location. - Thus, all flare signals of one test color image are acquired. The above-described processing is performed on all of the twenty test color images shown in
FIG. 5 . In this specification, since a three-channel RGB color image has been described by way of example, the above-described processing is performed on sixty test color images in all. - The distribution of twenty flare signals is acquired for each channel. The distribution is regarded as the distribution acquired when only the center pixel of each of the twenty light-emitting areas shown in
FIG. 6 emits light (namely, only the center pixel of each light-emitting area is a light-emitting pixel). In fact, since the entire light-emitting area with 256×256 pixels emits light, the sum of signal values of one light-emitting area is divided by 65536, and the value acquired after the division is defined as the value of a flare signal of one light-emitting pixel. Thus, the distribution acquired when only the center pixel of each of the twenty light-emitting areas emits light is converted into the distribution of flare signals each of which is outputted from a light-emitting pixel. The distribution of flare signals of other pixel locations is calculated by performing interpolation processing using the distribution of flare signals of adjacent light-emitting pixel locations. - Thus, the distribution of flare signals when all pixels exist in light-emitting pixel locations is calculated. As described previously, when the entire area is configured with 1280×1024 pixels, the distribution of all of the 1310720 flare signals is calculated.
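The numerical steps above, averaging a 9×9 sample area, dividing a block response by 256×256 = 65536 to obtain the flare attributable to a single lit pixel, and interpolating between block centers, can be sketched as follows (the bilinear scheme and the helper names are assumptions; the text only specifies "interpolation processing"):

```python
import numpy as np

def sample_mean(shot, cx, cy, half=4):
    """Mean signal over the 9x9 sample area centred on (cx, cy)."""
    win = shot[cy - half:cy + half + 1, cx - half:cx + half + 1]
    return float(win.mean())              # average of 81 pixels

def per_pixel_flare(block_sum, block_pixels=256 * 256):
    """Flare attributable to one lit pixel of a 256x256 block."""
    return block_sum / block_pixels       # divide by 65536

def bilinear(grid, fy, fx):
    """Bilinear interpolation on the 4x5 grid of block-centre values."""
    y0, x0 = int(fy), int(fx)
    y1 = min(y0 + 1, grid.shape[0] - 1)
    x1 = min(x0 + 1, grid.shape[1] - 1)
    dy, dx = fy - y0, fx - x0
    top = grid[y0, x0] * (1 - dx) + grid[y0, x1] * dx
    bot = grid[y1, x0] * (1 - dx) + grid[y1, x1] * dx
    return top * (1 - dy) + bot * dy

shot = np.full((64, 64), 3.0)                       # toy flare-only shot image
assert abs(sample_mean(shot, 10, 10) - 3.0) < 1e-12

grid = np.full((4, 5), per_pixel_flare(131072.0))   # 131072 / 65536 = 2.0
assert abs(bilinear(grid, 1.5, 2.5) - 2.0) < 1e-12
```

The averaging step is what makes the weak flare signal usable, and the interpolation step relies on the same assumption stated above, namely that flare contains no high-frequency components.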
- The distribution of the flare signals of the 1310720 pixels is produced for each of the 1310720 light-emitting pixel locations, so that a one-to-one correspondence between the 1310720 flare-signal distributions and the 1310720 light-emitting pixel locations is achieved. Consequently, the display characteristics data M represented as a matrix of 1310720 rows and 1310720 columns can be provided. As described previously, the display characteristics data is produced for each of the three channels. In the matrix of the display characteristics data M, the subscript j of each element m_ij corresponds to the coordinate of a light-emitting pixel, and the subscript i corresponds to the coordinate of a pixel for which a flare signal is acquired.
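This structure of M can be verified on a toy scale: column j of the matrix is exactly the flare distribution measured when pixel j alone emits light (a sketch with N = 9 and a single reused flare value, not the per-pixel data of the embodiment):

```python
import numpy as np

# Toy 3x3 display flattened to N = 9 pixels.
N = 9
M = np.zeros((N, N))
for j in range(N):
    col = np.full(N, 0.01)    # small flare reaching every pixel...
    col[j] = 1.0              # ...plus the direct signal at pixel j itself
    M[:, j] = col             # element m_ij: lit pixel j -> signal at pixel i

# Lighting pixel 3 alone reproduces column 3 of M.
e = np.zeros(N)
e[3] = 1.0
assert np.allclose(M @ e, M[:, 3])
assert M[3, 3] == 1.0 and M[0, 3] == 0.01
```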
- The display characteristics data M′ that is calculated by the second display characteristics data calculation module of the correction
data calculation device 23 and is then used by the second or fourth correction module of the flare correction device 12, is calculated as follows. Each of the twenty flare-signal distributions corresponding to the twenty light-emitting areas is shifted so that the coordinate of the center position of its light-emitting area coincides with the coordinate of the center position of the image, and then the twenty distributions are averaged, whereby the display characteristics data M′ can be acquired. - As described previously, the
flare correction device 12 performs a correction operation on the color image data using the display characteristics data M or M′ that has been calculated and then outputs the corrected color image data to the image data storage device 11. - As in general image display devices, a gradation correction operation is performed on the acquired corrected color image data, taking the gradation characteristics of the projector into account. However, the gradation correction technique is known as a technique for color reproduction processing, so the description thereof will be omitted.
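The deconvolution performed by the second correction module with the shift-invariant data M′ can be sketched in the frequency domain (a toy 16×16 single-channel example; treating equation 7 as division by the Fourier transform of M′ is an assumed reading, and real implementations would regularize near-zero frequencies):

```python
import numpy as np

def deconvolve(P, kernel):
    """Frequency-domain deconvolution of the flare kernel M'."""
    K = np.fft.fft2(np.fft.ifftshift(kernel))
    return np.real(np.fft.ifft2(np.fft.fft2(P) / K))

# Toy centred kernel M': direct signal plus faint flare on the neighbours.
H = W = 16
kernel = np.zeros((H, W))
kernel[H // 2 - 1:H // 2 + 2, W // 2 - 1:W // 2 + 2] = 0.01
kernel[H // 2, W // 2] = 1.01

P = np.zeros((H, W))
P[5, 5] = 100.0                       # desired image
P_corr = deconvolve(P, kernel)        # second correction module

# Re-convolving the corrected image with M' restores the desired P.
K = np.fft.fft2(np.fft.ifftshift(kernel))
restored = np.real(np.fft.ifft2(np.fft.fft2(P_corr) * K))
assert np.allclose(restored, P, atol=1e-8)
```

The shift-invariant M′ trades the per-pixel accuracy of the full matrix M for a drastically smaller data set and an FFT-speed correction, which is the practical motivation for keeping both kinds of display characteristics data.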
- Although this embodiment has been described by using a projector as an example of a color display device, the present invention is not limited thereto. For example, an arbitrary image display device, for example, a CRT or a liquid crystal panel, can be applied to the present invention.
- As means for acquiring the spatial distribution of display colors corresponding to test color image data, a test image shooting camera (color camera) configured with an RGB digital camera is used in the above description. However, a monochrome camera or a multiband camera with four or more bands may be used. Alternatively, like the example shown in
FIG. 9 , when the number of samples to be measured is relatively small, a measuring device for performing spot measurement, such as a spectroradiometer, a luminance meter, or a colorimeter, may be used as means for acquiring the spatial distribution of display colors instead of the camera. In this case, the accuracy of measurement can be expected to be increased.
- In the above description, the case in which both the image data projected by a projector and the image data acquired by a test image shooting camera are 1280 pixels wide×1024 pixels high is illustrated, but the number of pixels may be changed. In addition, the number of pixels for displaying and the number of pixels for shooting may be different from each other. In general, the combination of the number of pixels for displaying and the number of pixels for shooting can be arbitrarily selected. In this case, the calculation of the display characteristics data is performed in accordance with the size of the corrected color image data.
- Furthermore, the number of cross patterns in the geometric correction pattern, the number of light-emitting areas in the test color image, and the number of sample areas for flare signal measurement are set to twenty, but each number may be set to an arbitrary number. Alternatively, the operator of the image correction device may set each number to a desired number considering the accuracy of measurement and measurement time.
- In the above description, the image data upon which a flare correction operation has been performed is stored in advance, and then the corrected image data is used when the image data is projected onto a screen. In a case where sufficient processing speed can be ensured, the image data inputted from an image source may be flare-corrected and then be displayed in real time.
- Furthermore, in the above description, the case in which a display system performs processing as hardware has been given. However, the same function may be achieved by making a computer, to which a display device such as a monitor and a measurement device such as a digital camera are connected, execute a display program, or may be achieved by a display method applied to a system that has the above-described configuration.
- According to the above-described embodiment, the effect of light from another pixel location upon the display color at an arbitrary pixel location is effectively reduced, whereby a display system capable of displaying a color image with high color reproducibility can be achieved.
- Since test color image measuring means for measuring the spatial distribution of display colors corresponding to test color image data is provided in this embodiment, the display characteristics of the color image display device can be accurately and simply measured. Consequently, secular changes (aging) of the color image display device can be accommodated.
- In particular, by employing a color camera such as a digital camera as the test color image measuring means, the spatial distribution of display colors can be more easily acquired.
- On the other hand, by employing a luminance meter, a colorimeter, or a spectroradiometer as the test color image measuring means, the display characteristics can be more accurately measured. Alternatively, by employing a monochrome camera as the test color image measuring means, a low-cost device configuration can be achieved. Furthermore, by employing a multiband camera as the test color image measuring means, not only acquisition of accurate display characteristics but also accurate spatial measurement can be achieved.
- By calculating and using the display characteristics data of the color image display device on the basis of the test color image data and the spatial distribution of display colors corresponding to the test color image data, flare correction based on a flare model can be accurately performed.
- Moreover, since the corrected color image data is calculated on the basis of flare distribution data having been calculated by the flare calculating means, the calculation of the corrected image data can be easily performed.
- Since the representation of
equation 13 is used as the vector F that represents the flare distribution data, optimal flare correction in which the computational load and the accuracy of calculation are taken into account can be performed by setting the constant K in equation 13 to an appropriate value. Similarly, when the representation of equation 16 is used as the vector F that represents the flare distribution data, optimal flare correction in which the computational load and the accuracy of calculation are taken into account can be performed by setting the number of terms on the right side to an appropriate value. - Thus, a display system capable of performing color reproduction operations as intended by reducing the effect of optical flare can be provided.
- It should be understood that the present invention is not limited to the above-described embodiment, and various modifications and variations may be made to the present invention without departing from the scope and spirit of the present invention.
Claims (14)
1. A display system comprising:
a color image display device for displaying a color image; and
an image correction device for producing corrected color image data to be outputted to the color image display device by correcting color image data,
wherein the image correction device calculates the corrected color image data from the color image data so as to correct optical flare of the color image display device on the basis of relationship(s) between one of a plurality of test color image data outputted to the color image display device and the spatial distribution of display colors of a test color image that has been displayed on the color image display device in accordance with the one of the plurality of test color image data.
2. The display system according to claim 1 , wherein the image correction device comprises test color image measuring means for measuring the spatial distribution of display colors of the test color image that has been displayed on the color image display device in accordance with the test color image data.
3. The display system according to claim 2 , wherein the test color image measuring means comprises at least one of a luminance meter, a colorimeter, a spectroradiometer, a monochrome camera, a color camera, and a multiband camera.
4. The display system according to claim 3 , wherein the image correction device comprises display characteristics calculating means for calculating display characteristics data of the color image display device on the basis of the test color image data and the spatial distribution of display colors of the test color image that has been displayed on the color image display device in accordance with the test color image data, the image correction device calculating the corrected color image data on the basis of the display characteristics data having been calculated by the display characteristics calculating means.
5. The display system according to claim 4 , wherein the image correction device further comprises flare calculating means for calculating the corrected color image data by calculating flare distribution data of the color image data using the display characteristics data, and subtracting the calculated flare distribution data from the color image data.
6. The display system according to claim 5 , wherein, when a vector having data of each pixel of a plurality of pixels included in the color image data as a component is defined as P, the number of components of the vector P is defined as N (N is a natural number), a unit matrix of N×N is defined as E, the matrix representation of N×N showing the display characteristics of the color image display device is defined as M, and an arbitrary constant is defined as K, a vector F representing the flare distribution data is acquired by the following equation.
7. The display system according to claim 2 , wherein the image correction device comprises display characteristics calculating means for calculating display characteristics data of the color image display device on the basis of the test color image data and the spatial distribution of display colors of the test color image that has been displayed on the color image display device in accordance with the test color image data, the image correction device calculating the corrected color image data on the basis of the display characteristics data having been calculated by the display characteristics calculating means.
8. The display system according to claim 7 , wherein the image correction device further comprises flare calculating means for calculating the corrected color image data by calculating flare distribution data of the color image data using the display characteristics data, and subtracting the calculated flare distribution data from the color image data.
9. The display system according to claim 8 , wherein, when a vector having data of each pixel of a plurality of pixels included in the color image data as a component is defined as P, the number of components of the vector P is defined as N (N is a natural number), a unit matrix of N×N is defined as E, the matrix representation of N×N showing the display characteristics of the color image display device is defined as M, and an arbitrary constant is defined as K, a vector F representing the flare distribution data is acquired by the following equation.
10. The display system according to claim 1 , wherein the image correction device comprises display characteristics calculating means for calculating display characteristics data of the color image display device on the basis of the test color image data and the spatial distribution of display colors of the test color image that has been displayed on the color image display device in accordance with the test color image data, the image correction device calculating the corrected color image data on the basis of the display characteristics data having been calculated by the display characteristics calculating means.
11. The display system according to claim 10 , wherein the image correction device further comprises flare calculating means for calculating the corrected color image data by calculating flare distribution data of the color image data using the display characteristics data, and subtracting the calculated flare distribution data from the color image data.
12. The display system according to claim 11 , wherein, when a vector having data of each pixel of a plurality of pixels included in the color image data as a component is defined as P, the number of components of the vector P is defined as N (N is a natural number), a unit matrix of N×N is defined as E, the matrix representation of N×N showing the display characteristics of the color image display device is defined as M, and an arbitrary constant is defined as K, a vector F representing the flare distribution data is acquired by the following equation.
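The flare-calculating means of claims 5, 6, 8, 9, 11, and 12 can be illustrated with a short sketch. The claims define P (pixel-data vector with N components), E (N×N unit matrix), M (N×N display-characteristics matrix), and K (arbitrary constant), but the claimed equation itself is not reproduced in this text; the sketch below assumes one plausible form consistent with those definitions, F = (M − K·E)·P, and then subtracts F from P as claim 5 describes. The function name and the assumed equation are illustrative, not the patent's actual formula.

```python
import numpy as np

def flare_correct(P, M, K):
    """Sketch of flare correction in the spirit of claims 5-6.

    P: length-N vector of pixel data (vector P in the claims)
    M: N x N display-characteristics matrix (matrix M)
    K: scalar constant (constant K)

    Assumes the flare model F = (M - K*E) @ P, where E is the N x N
    unit matrix; the claim's actual equation is not given in the text.
    """
    N = P.shape[0]
    E = np.eye(N)                 # unit matrix E
    F = (M - K * E) @ P           # flare distribution data (assumed form)
    return P - F                  # corrected data: flare subtracted (claim 5)
```

With M close to the identity, F captures only the off-ideal (flare) part of the display response, so subtracting it pre-compensates the input data.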
13. A display program for causing a computer to execute predetermined steps, comprising:
a first step of outputting a plurality of test color image data to a color image display device and making the color image display device display a plurality of test color images;
a second step of acquiring the spatial distribution of display colors of each of the plurality of test color images having been displayed on the color image display device in accordance with the first step;
a third step of calculating corrected color image data from color image data so as to correct optical flare of the color image display device on the basis of the plurality of test color image data, and the spatial distribution of display colors of the test color image having been acquired in accordance with each of the plurality of test color image data by means of the second step; and
a fourth step of outputting the corrected color image data having been calculated in accordance with the third step to the color image display device and making the color image display device display the corrected color image data.
14. A display method comprising:
a first step of outputting a plurality of test color image data to a color image display device and making the color image display device display a plurality of test color images;
a second step of acquiring the spatial distribution of display colors of each of the plurality of test color images having been displayed on the color image display device in accordance with the first step;
a third step of calculating corrected color image data from color image data so as to correct optical flare of the color image display device on the basis of the plurality of test color image data, and the spatial distribution of display colors of the test color image having been acquired in accordance with each of the plurality of test color image data by means of the second step; and
a fourth step of outputting the corrected color image data having been calculated in accordance with the third step to the color image display device and making the color image display device display the corrected color image data.
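The four-step method of claims 13 and 14 can be sketched end to end: display test images, measure their spatial color distributions, estimate the display characteristics, correct the input data, and display the result. Everything below is an assumption-laden illustration: `display` and `measure` stand in for the color image display device and the colorimetric acquisition of the second step, the least-squares estimate of M and the flare form (M − E)·P are one plausible realization, and "images" are flattened to 1-D vectors for brevity.

```python
import numpy as np

def calibrate_and_display(display, measure, test_images, image):
    # Steps 1-2: show each test color image and acquire the spatial
    # distribution of its displayed colors.
    responses = [measure(display(t)) for t in test_images]

    # Step 3 (one plausible realization): stack test images and responses
    # as columns and solve for the display-characteristics matrix M by
    # least squares, then compute flare and the corrected data.
    T = np.column_stack([t.ravel() for t in test_images])
    R = np.column_stack([r.ravel() for r in responses])
    M = R @ np.linalg.pinv(T)                      # estimated characteristics
    F = (M - np.eye(M.shape[0])) @ image.ravel()   # assumed flare model
    corrected = image.ravel() - F

    # Step 4: output the corrected color image data to the display.
    return display(corrected.reshape(image.shape))
```

If the test images span the input space (e.g. one basis vector per pixel), M recovers the device's linear response, and the first-order correction largely cancels the flare added by the display.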
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003431384A JP2005189542A (en) | 2003-12-25 | 2003-12-25 | Display system, display program and display method |
JP2003-431384 | 2003-12-25 | ||
PCT/JP2004/019410 WO2005064584A1 (en) | 2003-12-25 | 2004-12-24 | Display system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/019410 Continuation-In-Part WO2005064584A1 (en) | 2003-12-25 | 2004-12-24 | Display system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060238832A1 true US20060238832A1 (en) | 2006-10-26 |
Family
ID=34736429
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/472,758 Abandoned US20060238832A1 (en) | 2003-12-25 | 2006-06-21 | Display system |
Country Status (4)
Country | Link |
---|---|
US (1) | US20060238832A1 (en) |
EP (1) | EP1699035A4 (en) |
JP (1) | JP2005189542A (en) |
WO (1) | WO2005064584A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005352437A (en) * | 2004-05-12 | 2005-12-22 | Sharp Corp | Liquid crystal display device, color management circuit, and display control method |
JP4901246B2 (en) * | 2006-03-15 | 2012-03-21 | 財団法人21あおもり産業総合支援センター | Spectral luminance distribution estimation system and method |
WO2019045010A1 (en) * | 2017-08-30 | 2019-03-07 | 株式会社オクテック | Information processing device, information processing system, and information processing method |
WO2020065792A1 (en) * | 2018-09-26 | 2020-04-02 | Necディスプレイソリューションズ株式会社 | Video reproduction system, video reproduction device, and calibration method for video reproduction system |
JP7270025B2 (en) * | 2019-02-19 | 2023-05-09 | 富士フイルム株式会社 | PROJECTION DEVICE, CONTROL METHOD AND CONTROL PROGRAM THEREOF |
KR20230012909A (en) * | 2021-07-16 | 2023-01-26 | 삼성전자주식회사 | Electronic apparatus and control method thereof |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6456339B1 (en) * | 1998-07-31 | 2002-09-24 | Massachusetts Institute Of Technology | Super-resolution display |
US20030020836A1 (en) * | 2001-07-30 | 2003-01-30 | Nec Viewtechnology, Ltd. | Device and method for improving picture quality |
US6522313B1 (en) * | 2000-09-13 | 2003-02-18 | Eastman Kodak Company | Calibration of softcopy displays for imaging workstations |
US20050103976A1 (en) * | 2002-02-19 | 2005-05-19 | Ken Ioka | Method and apparatus for calculating image correction data and projection system |
US20060262147A1 (en) * | 2005-05-17 | 2006-11-23 | Tom Kimpe | Methods, apparatus, and devices for noise reduction |
US20070035706A1 (en) * | 2005-06-20 | 2007-02-15 | Digital Display Innovations, Llc | Image and light source modulation for a digital display system |
US20080024868A1 (en) * | 2004-06-15 | 2008-01-31 | Olympus Corporation | Illuminating Unit and Imaging Apparatus |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000241791A (en) * | 1999-02-19 | 2000-09-08 | Victor Co Of Japan Ltd | Projector device |
JP2001054131A (en) * | 1999-05-31 | 2001-02-23 | Olympus Optical Co Ltd | Color image display system |
JP3695374B2 (en) * | 2001-09-25 | 2005-09-14 | 日本電気株式会社 | Focus adjustment device and focus adjustment method |
JP2003283964A (en) * | 2002-03-26 | 2003-10-03 | Olympus Optical Co Ltd | Video display apparatus |
JP2005020314A (en) * | 2003-06-25 | 2005-01-20 | Olympus Corp | Calculating method, calculating program and calculating apparatus for display characteristic correction data |
2003
- 2003-12-25 JP JP2003431384A patent/JP2005189542A/en active Pending

2004
- 2004-12-24 WO PCT/JP2004/019410 patent/WO2005064584A1/en not_active Application Discontinuation
- 2004-12-24 EP EP04807766A patent/EP1699035A4/en not_active Withdrawn

2006
- 2006-06-21 US US11/472,758 patent/US20060238832A1/en not_active Abandoned
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7362336B2 (en) * | 2005-01-12 | 2008-04-22 | Eastman Kodak Company | Four color digital cinema system with extended color gamut and copy protection |
US20060152524A1 (en) * | 2005-01-12 | 2006-07-13 | Eastman Kodak Company | Four color digital cinema system with extended color gamut and copy protection |
US20060197775A1 (en) * | 2005-03-07 | 2006-09-07 | Michael Neal | Virtual monitor system having lab-quality color accuracy |
US20100060728A1 (en) * | 2006-12-05 | 2010-03-11 | Daniel Bublitz | Method for producing high-quality reproductions of the front and/or rear sections of the eye |
US8289382B2 (en) * | 2006-12-05 | 2012-10-16 | Carl Zeiss Meditec Ag | Method for producing high-quality reproductions of the front and/or rear sections of the eye |
US9100556B2 (en) * | 2009-07-13 | 2015-08-04 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US20110007176A1 (en) * | 2009-07-13 | 2011-01-13 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US8350954B2 (en) * | 2009-07-13 | 2013-01-08 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method with deconvolution processing for image blur correction |
US20130113963A1 (en) * | 2009-07-13 | 2013-05-09 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US20110242352A1 (en) * | 2010-03-30 | 2011-10-06 | Nikon Corporation | Image processing method, computer-readable storage medium, image processing apparatus, and imaging apparatus |
CN102236896A (en) * | 2010-03-30 | 2011-11-09 | 株式会社尼康 | Image processing method, computer-readable storage medium, image processing apparatus, and imaging apparatus |
US8989436B2 (en) * | 2010-03-30 | 2015-03-24 | Nikon Corporation | Image processing method, computer-readable storage medium, image processing apparatus, and imaging apparatus |
US8531474B2 (en) | 2011-11-11 | 2013-09-10 | Sharp Laboratories Of America, Inc. | Methods, systems and apparatus for jointly calibrating multiple displays in a display ensemble |
US20150304618A1 (en) * | 2014-04-18 | 2015-10-22 | Fujitsu Limited | Image processing device and image processing method |
US9378429B2 (en) * | 2014-04-18 | 2016-06-28 | Fujitsu Limited | Image processing device and image processing method |
CN105448272A (en) * | 2014-08-06 | 2016-03-30 | 财团法人资讯工业策进会 | Display system and image compensation method |
US10873731B2 (en) | 2019-01-08 | 2020-12-22 | Seiko Epson Corporation | Projector, display system, image correction method, and colorimetric method |
US11303862B2 (en) | 2019-01-08 | 2022-04-12 | Seiko Epson Corporation | Projector, display system, image correction method, and colorimetric method |
US20210248948A1 (en) * | 2020-02-10 | 2021-08-12 | Ebm Technologies Incorporated | Luminance Calibration System and Method of Mobile Device Display for Medical Images |
US11580893B2 (en) * | 2020-02-10 | 2023-02-14 | Ebm Technologies Incorporated | Luminance calibration system and method of mobile device display for medical images |
Also Published As
Publication number | Publication date |
---|---|
WO2005064584A1 (en) | 2005-07-14 |
EP1699035A1 (en) | 2006-09-06 |
JP2005189542A (en) | 2005-07-14 |
EP1699035A4 (en) | 2008-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20060238832A1 (en) | Display system | |
US8390644B2 (en) | Methods and apparatus for color uniformity | |
US8115830B2 (en) | Image processing apparatus | |
US8777418B2 (en) | Calibration of a super-resolution display | |
US7184054B2 (en) | Correction of a projected image based on a reflected image | |
JP3766672B2 (en) | Image correction data calculation method | |
KR20080015101A (en) | Color transformation luminance correction method and device | |
WO2020028872A1 (en) | Method and system for subgrid calibration of a display device | |
US9489881B2 (en) | Shading correction calculation apparatus and shading correction value calculation method | |
JP2000253263A (en) | Color reproduction system | |
US7639401B2 (en) | Camera-based method for calibrating color displays | |
US9437160B2 (en) | System and method for automatic color matching in a multi display system using sensor feedback control | |
JP2006220714A (en) | Liquid crystal display apparatus, display control method thereof, and display control program for liquid crystal display apparatus | |
US7639260B2 (en) | Camera-based system for calibrating color displays | |
US20210407046A1 (en) | Information processing device, information processing system, and information processing method | |
JP2005150779A (en) | Method for calculating display characteristics correction data of image display apparatus, display characteristic correction data program, and apparatus for calculating display characteristics correction data | |
Karr et al. | High dynamic range digital imaging of spacecraft | |
CN113870768B (en) | Display compensation method and device | |
JPH06105185A (en) | Brightness correction method | |
US20090207188A1 (en) | Image display device, highlighting method | |
CN107799090B (en) | Method for simulating display characteristics of display and display | |
KR102602543B1 (en) | Simulation of biaxial stretching and compression in stretchable displays | |
JP2003179946A (en) | Chromaticity correcting equipment | |
JP7189515B2 (en) | Two-dimensional flicker measuring device, two-dimensional flicker measuring method, and two-dimensional flicker measuring program | |
KR20240022084A (en) | Apparatus and Method for Compensating Mura |
Legal Events
- AS (Assignment)
  - Owner name: NATIONAL INSTITUTE OF INFORMATION AND COMMUNICATIO
  - Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OHSAWA, KENRO;REEL/FRAME:018009/0425
  - Effective date: 20060614
  - Owner name: OLYMPUS CORPORATION, JAPAN
  - Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OHSAWA, KENRO;REEL/FRAME:018009/0425
  - Effective date: 20060614
- STCB (Information on status: application discontinuation)
  - Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION