WO2005064584A1 - Display System - Google Patents
Display System
- Publication number
- WO2005064584A1 (PCT/JP2004/019410)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- color image
- image data
- test
- display
- data
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3191—Testing thereof
- H04N9/3194—Testing thereof including sensor feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/06—Adjustment of display parameters
- G09G2320/0666—Adjustment of display parameters for control of colour parameters, e.g. colour temperature
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/14—Detecting light within display terminals, e.g. using a single or a plurality of photosensors
- G09G2360/145—Detecting light within display terminals, e.g. using a single or a plurality of photosensors the light originating from the display screen
Definitions
- the present invention relates to a display system for displaying an image by correcting the effect of optical flare.
- color characteristics are defined, for a color image device or for image data, as information that does not depend on the spatial coordinates of the image, and color reproduction is performed based on this color characteristic information.
- the present invention has been made in view of the above circumstances, and has as its object to provide a display system capable of reducing the influence of optical flare and performing the intended color reproduction.
- the display system of the present invention includes a color image display device for displaying a color image, and an image correction device that corrects color image data to generate corrected color image data for output to the color image display device.
- the image correction device calculates, from the color image data, corrected color image data that corrects the optical flare of the color image display device, based on the relationship between a plurality of test color image data output to the color image display device and the spatial distribution of display colors of the test color images displayed on the color image display device in correspondence with those test color image data.
- FIG. 1 is a diagram showing an outline of a configuration of a display system according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing a configuration of an image correction device according to the embodiment.
- FIG. 3 is a block diagram showing a configuration of a flare calculation device in the embodiment.
- FIG. 4 is a diagram showing geometric correction pattern image data output by the test image output device in the embodiment.
- FIG. 5 is a view showing text data in which coordinate information of a center position of a cross pattern is recorded in the embodiment.
- FIG. 6 is a diagram showing a state of a divided region in test color image data output by a test image output device in the embodiment.
- FIG. 7 is a diagram showing text data in which coordinate information of a divided area is recorded in the embodiment.
- FIG. 8 is a block diagram showing a configuration of a captured image input device in the embodiment.
- FIG. 9 is a diagram showing a state of a sample area set for a captured image related to a test color image in the embodiment.
- FIG. 10 is a diagram showing a state of a light emitting region and a sample region other than the light emitting region in the embodiment.
- this principle addresses the following question: when input image data yields different display image data under the influence of the optical flare of the display device (hereinafter simply referred to as flare), what corrected input image data should be entered so that display image data matching the original input image data is obtained?
- the input image data of N pixels input to the display device is denoted P = (p_1, p_2, ..., p_N)^t.
- the suffix t on the right shoulder indicates transposition.
- the display image data, which represents the light distribution of the actually displayed image as image data having N pixels, is denoted G = (g_1, g_2, ..., g_N)^t.
- the display image data can be acquired, for example, by photographing the image displayed on the display device with a digital camera. Since this display image data is affected by flare or the like of the display device, it generally does not match the input image data: a part of the light produced by signals at other pixel positions is superimposed. Further, even when the input image signal to the display device is 0, the display image data generally does not become 0; the display image data at this time is the bias O = (o_1, o_2, ..., o_N)^t.
- Equation 1 models the relationship between the input image data and the display image data in consideration of these effects: g_i = Σ_j m_ij · p_j + o_i (i = 1, ..., N).
- Equation 2 is a simplified form of Equation 1, using capital letters for the matrix and vectors formed from the elements of Equation 1: G = M · P + O, where M = (m_ij) is the N × N display characteristic matrix.
- when M is the unit matrix and the bias O is zero, the display image data G coincides with the input image data P.
- in practice, the display is affected by flare and the like, so the input image data and the display image data differ.
- the input image data is corrected so that the display image data acquired using the corrected input image data matches or approximates the original input image data. Denoting the corrected input image data by P', the corrected display image data G', which is the display image data corresponding to P', is given by Equation 3: G' = M · P' + O.
- in order to satisfy Equation 4, G' = P, it follows from Equation 3 that it is sufficient to use the corrected input image data P' of Equation 5 below: P' = M^-1 · (P − O).
- the display characteristic M appearing in Equation 5 is an N × N matrix as described above, so its data size is generally very large.
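- as a concrete illustration of Equation 5, the sketch below applies P' = M^-1 · (P − O) to a toy 4-pixel image; the matrix M, bias O, and image P are invented stand-ins (in the full-resolution case M would be a 1310720 × 1310720 matrix, which is why the approximations discussed later matter):

```python
import numpy as np

# Toy instance of Equation 5: P' = M^-1 (P - O).
# A 4-pixel "image" stands in for the real 1280x1024 case; M and O are
# hypothetical values, not measured display characteristics.
N = 4
M = np.eye(N) + 0.05 * (np.ones((N, N)) - np.eye(N))  # small uniform flare leak
O = np.full(N, 0.02)                  # bias measured with an all-zero input
P = np.array([0.8, 0.2, 0.5, 0.1])    # desired linear-space input image data

P_corr = np.linalg.solve(M, P - O)    # corrected input image data P'
G = M @ P_corr + O                    # display model of Equations 1 and 2
assert np.allclose(G, P)              # the displayed image matches the target
```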
- when the display characteristic M has a special structure, however, the calculation can sometimes be performed more easily.
- for example, consider a case in which the spread of light due to flare or the like in the display device can be approximated by the same distribution almost independently of the pixel position. In this case, let M' = (m'_1, m'_2, ..., m'_N) denote the display characteristic treated as a convolution kernel; the display image data is then given by Equation 6 below.
- Equation 6: G = M' * P + O
- the symbol "*" represents a convolution operation
- by using corrected input image data P' in place of the input image data P in Equation 6, and further using the condition shown in Equation 4, the following Equation 7 is obtained: M' * P' = P − O.
- if the display characteristic M' is known, the corrected input image data P' can be calculated from Equation 7. Techniques for deconvolution, in which one image is computed when the convolution result and the other image are known, are well established; for example, the method described in Chapter 7 of Reference 1 (A. Rosenfeld and A. C. Kak, Digital Picture Processing, Academic Press, 1976; Japanese translation: Digital Image Processing, Kindai Kagaku Sha, 1978) can be used.
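- the deconvolution route of Equation 7 can be sketched for the shift-invariant case with a one-dimensional toy signal; the flare kernel, bias, and signal below are invented for illustration, and the frequency-domain division is the classic inverse-filter approach discussed in Reference 1:

```python
import numpy as np

# Sketch of Equation 7 in the shift-invariant case: G = M' * P + O with "*"
# a (circular) convolution. Given the kernel M', the corrected input P' is
# obtained by deconvolution, done here as division in the frequency domain.
# Kernel, bias, and signal are invented 1-D stand-ins.
size = 32
kernel = np.zeros(size)
kernel[0] = 1.0                       # direct light
kernel[1] = kernel[-1] = 0.05         # small symmetric flare spread
O = 0.01
P = np.zeros(size); P[10:20] = 1.0    # a bright bar on a dark field

Kf = np.fft.fft(kernel)               # this kernel has no zero response
P_corr = np.real(np.fft.ifft(np.fft.fft(P - O) / Kf))   # solve Equation 7
G = np.real(np.fft.ifft(np.fft.fft(P_corr) * Kf)) + O   # re-displayed image
assert np.allclose(G, P, atol=1e-8)   # display matches the intended image
```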
- the correction method of Equation 5 involves the inverse of the matrix M, which has a very large number of elements, and the correction method of Equation 7 and Reference 1 involves the inverse operation of a convolution; both calculations are therefore complicated, and may increase the load on the processing system or require long processing times.
- the display image data G is modeled as the sum of the input image data P, the bias O, and the flare component F, which is the flare distribution data caused by the influence of flare and the like, as shown in Equation 8: G = P + F + O.
- E represents an N ⁇ N unit matrix.
- the corrected display image data G ′ is obtained from Expression 3 as shown in Expression 12 below.
- the term (M − E)^3 · P in Equation 12 is the flare correction error; since it is of third order in (M − E), it indicates that the influence of the flare is greatly reduced.
- the corrected input image data P ′ for correcting the flare F as shown in Expression 13 is as shown in Expression 14 below.
- the flare component F of Equation 9 can be expressed using a convolution, and the expression can be replaced as shown in Equation 15 below.
- E ' is a column vector in which the component corresponding to the center position of the image is 1 and the other components are 0.
- O is the deconvolution of ⁇ with ⁇
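- the series-based correction around Equations 12 to 14 can be sketched as a truncated power series for M^-1: writing D = M − E, using P' = (E − D + D^2 − ...) · (P − O) truncated after K terms leaves a residual display error of order D^K (third order for K = 3, as noted for Equation 12). The sizes and values below are illustrative stand-ins:

```python
import numpy as np

# Truncated-series correction sketch: with D = M - E and K = 3,
# P' = (E - D + D^2) (P - O), so the displayed image is
# G = P + D^3 (P - O); the flare error is reduced to third order.
# M, O, and P are illustrative stand-ins.
N = 4
M = np.eye(N) + 0.02 * (np.ones((N, N)) - np.eye(N))
O = np.full(N, 0.02)
P = np.array([0.8, 0.2, 0.5, 0.1])

D = M - np.eye(N)
K = 3
P_corr = np.zeros(N)
term = (P - O).copy()
for _ in range(K):
    P_corr += term
    term = -D @ term                  # accumulate (-D)^k (P - O) terms

G = M @ P_corr + O
err = np.abs(G - P).max()             # residual of order D^3: small, nonzero
assert 0 < err < 1e-3
```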
- in the description above, the image data is handled as one-channel data.
- in practice, color image data is generally handled as three-channel data. In this case, the display characteristic M or the display characteristic M' described above is calculated for each of the RGB channels, and flare correction is performed per channel based on the calculated display characteristics.
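- a minimal sketch of this per-channel handling, assuming hypothetical per-channel display characteristics for R, G, and B and applying the Equation 5 correction independently to each channel (all values below are invented stand-ins):

```python
import numpy as np

# Per-channel flare correction sketch: a separate display characteristic is
# assumed for each of R, G, and B, and Equation 5 is applied channel by
# channel. Matrices, bias, and image values are hypothetical.
N = 4
rng = np.random.default_rng(0)
M_rgb = [np.eye(N) + 0.03 * rng.random((N, N)) for _ in range(3)]
O_rgb = [np.full(N, 0.02) for _ in range(3)]
P = rng.random((N, 3))                       # N pixels x RGB, linear space

P_corr = np.empty_like(P)
for ch in range(3):                          # independent correction per channel
    P_corr[:, ch] = np.linalg.solve(M_rgb[ch], P[:, ch] - O_rgb[ch])

for ch in range(3):                          # each displayed channel matches
    assert np.allclose(M_rgb[ch] @ P_corr[:, ch] + O_rgb[ch], P[:, ch])
```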
- the signal value of a color image device such as a display device or a digital camera has a nonlinear relationship with the corresponding luminance.
- since correction of gradation characteristics is a known technique, its description is omitted here. That is, the above description explains the principle in a linear space, after the nonlinearity of the signal has been corrected.
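- as a sketch of this linearization step, assuming a simple power-law (gamma) tone curve with gamma = 2.2 (the actual gradation characteristic of the devices is not specified here), signals are converted to linear space before flare correction and re-encoded afterwards:

```python
import numpy as np

# Linearization sketch: device signals are assumed to follow a power-law
# (gamma) tone curve; gamma = 2.2 is an illustrative assumption. Flare
# correction is applied in the linear space, then the result is re-encoded.
GAMMA = 2.2

def to_linear(signal):            # 0..1 encoded signal -> linear luminance
    return np.clip(signal, 0.0, 1.0) ** GAMMA

def to_encoded(linear):           # linear luminance -> 0..1 encoded signal
    return np.clip(linear, 0.0, 1.0) ** (1.0 / GAMMA)

s = np.array([0.0, 0.25, 0.5, 1.0])
assert np.allclose(to_encoded(to_linear(s)), s)   # round trip is lossless
```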
- FIG. 1 to 10 show an embodiment of the present invention
- FIG. 1 is a diagram showing an outline of a configuration of a display system.
- the display system includes a projector 1 as a color image display device for projecting an image, an image correction device 2 for generating the corrected image projected by the projector 1, a screen 3 onto which the image from the projector 1 is projected, and a test image capturing camera 4, a digital camera or the like serving as test color image measuring means, arranged so as to capture the entire area of the image displayed on the screen 3.
- the test image photographing camera 4 is included in the image correction device in a broad sense. It is assumed to incorporate a circuit or the like capable of correcting in-plane unevenness of the captured image caused by the optical characteristics of the photographing lens or by in-plane sensitivity unevenness of the image sensor, so that the digital image data it outputs, for example RGB data, has already been corrected. Further, the test image capturing camera 4 outputs signals that respond linearly to the input light intensity.
- the image correction device 2 outputs to the projector 1 predetermined test color image data stored internally in advance.
- the projector 1 projects the test color image data supplied from the image correction device 2 on a screen 3.
- the image correction device 2 controls the test image capturing camera 4 to photograph the display color distribution corresponding to the test color image data displayed on the screen 3, and transfers and acquires the captured image.
- the image correction device 2 calculates the display characteristic data used for correcting color image data, based on the color image data obtained from the test image capturing camera 4 and the original test color image data supplied to the projector 1.
- the operation of the display system when projecting a normal image after acquiring the display characteristic data is as follows. When projecting a normal image, the test image photographing camera 4 is unnecessary and may, for example, be removed.
- the image correction device 2 corrects the color image data stored in it in advance using the display characteristic data calculated as described above, and stores the result internally as corrected color image data. Thereafter, when the operator selects color image data to be displayed, the corresponding corrected color image data is output to the projector 1.
- the projector 1 projects and displays an image on the screen 3 based on the corrected color image data supplied from the image correction device 2.
- the image data input to the projector 1, the image data output from the test image capturing camera 4, and the image data processed by the image correction device 2 are all assumed to be three-channel image data of 1280 horizontal × 1024 vertical pixels, with each pixel composed of the three colors RGB.
- FIG. 2 is a block diagram showing a configuration of the image correction device 2.
- the image correction device 2 includes: a flare calculation device 13 that outputs predetermined test color image data stored in advance to the projector 1, acquires the captured color image data (photographed image data) related to the test color image data from the test image photographing camera 4, and calculates display characteristic data based on the acquired photographed image data and the original test color image data; an image data storage device 11 that stores color image data for display and also stores the corrected color image data obtained by correcting that color image data by a flare correction device 12 described later; and the flare correction device 12, which acquires color image data from the image data storage device 11, corrects it using the display characteristic data calculated by the flare calculation device 13, and outputs the corrected color image data to the image data storage device 11.
- the operation is performed as follows.
- the flare calculation device 13 outputs the test color image data to the projector 1 and displays the test color image on the screen 3.
- the flare calculating device 13 controls the test image capturing camera 4 to capture a display image on the screen 3.
- the flare calculating device 13 acquires the captured color image data by transfer and, based on the acquired color image data and the original test color image data, calculates and stores display characteristic data.
- the process in which the flare calculating device 13 calculates the display characteristic data M and M' will be described later in more detail with reference to FIG.
- the flare correction device 12 reads out the color image data stored in the image data storage device 11, reads one of the two types of display characteristic data, M or M', from the flare calculation device 13 according to the flare correction method, and performs flare correction based on the read display characteristic data to calculate corrected color image data.
- the color image data corresponds to the input image data P in the description of the principle
- the corrected color image data corresponds to the corrected input image data P ′ in the description of the principle.
- the color image data and the corrected color image data are each per-pixel RGB three-channel image data, so "P" and "P'" each represent RGB three-channel data.
- the flare correction device 12 includes first to fourth correction modules as described below, and calculates corrected color image data from the display characteristic data M or M' input from the flare calculation device 13 using one of these modules.
- the first correction module receives the display characteristic data M and calculates the corrected color image data P' using Equation 5.
- the bias O is measured in advance by projecting, from the projector 1 onto the screen 3, a test color image whose components are all 0, photographing the image on the screen 3 with the test image shooting camera 4 described above, and storing the result in the flare correction device 12.
- the second correction module receives the display characteristic data M' and performs a deconvolution operation based on Equation 7 to calculate the corrected color image data P'.
- the third correction module is a flare calculating means, which receives the display characteristic data M and calculates the corrected color image data P' using Equation 14. The constant K can be set arbitrarily by the operator of the image correction device 2.
- the fourth correction module is a flare calculating means, which receives the display characteristic data M' and calculates the corrected color image data P' using Equation 17. In this case as well, the number of terms of Equation 16 used in Equation 17 (corresponding to the constant K above) can be set arbitrarily by the operator of the image correction device 2.
- the corrected color image data calculated by the flare correction device 12 is output from the flare correction device 12 to the image data storage device 11 and stored by the image data storage device 11.
- the operator operates the image correction device 2 to select desired corrected color image data stored in the image correction device 2. Then, the corrected color image data corresponding to the selection is read from the image data storage device 11 and output to the projector 1.
- the projector 1 projects an image based on the input corrected color image data onto the screen 3, so that the color image displayed on the screen 3 can be observed with the influence of optical flare reduced.
- FIG. 3 is a block diagram showing a configuration of the flare calculating device 13.
- the flare calculating device 13 includes a test image output device 21, a photographed image input device 22, a correction data calculation device 23, and a correction data storage device 24 described later. The test image output device 21 stores test color image data and geometric correction pattern image data as described later, and outputs them as necessary to the projector 1, the photographed image input device 22, and the correction data calculation device 23. The photographed image input device 22 controls the test image photographing camera 4 and receives from it the photographed color image data, including the photographed geometric correction pattern image.
- the test image output device 21 outputs test color image data for measuring display characteristics to the projector 1 and transmits a signal to the effect that the test color image data has been output to the photographed image input device 22.
- the test image output device 21 also outputs information on the test color image data output to the projector 1 to the correction data calculation device 23.
- when the photographed image input device 22 receives the above signal from the test image output device 21, it controls the test image photographing camera 4 to photograph the test color image projected on the screen 3 from the projector 1. The color image captured by the test image capturing camera 4 is transferred to the photographed image input device 22 as photographed image data, and the photographed image input device 22 outputs the acquired photographed image data to the correction data calculation device 23.
- the correction data calculation device 23 calculates display characteristic data based on the information on the original test color image data input from the test image output device 21 and the captured image data input from the captured image input device 22.
- the correction data calculation device 23 includes two display characteristic data calculation modules corresponding to the two types of display characteristic data M and M': the first display characteristic data calculation module calculates the display characteristic data M, and the second display characteristic data calculation module calculates the display characteristic data M'. The operator of the image correction device 2 can select which display characteristic data calculation module to use.
- FIG. 4 is a diagram showing geometric correction pattern image data output by the test image output device 21, and
- FIG. 5 is a diagram showing text data in which coordinate information of the center position of the cross pattern is recorded.
- the test image output device 21 first outputs, for example, geometric correction pattern image data as shown in FIG. 4 to the projector 1 before outputting the test color image data.
- the geometric correction pattern image data output by the test image output device 21 is, as shown in FIG. 4, image data in which black cross patterns are arranged at equal intervals, 4 vertically × 5 horizontally, on a white background.
- the coordinate information (geometric correction pattern data) of the center position of each cross pattern is output from the test image output device 21 to the photographed image input device 22 as text data in a format as shown in FIG. 5.
- the center position of the cross pattern located at the upper left is coordinate 1,
- the center position of the cross pattern located to its right is coordinate 2,
- and so on; the pixel positions are described up to the center position of the cross pattern at coordinate 20.
- the coordinate system here represents the coordinates of each pixel, with the pixel at the upper left corner at coordinates (0, 0) and the pixel at the lower right corner at coordinates (1279, 1023).
- based on this coordinate information and the captured image data of the geometric correction pattern image acquired from the test image capturing camera 4, the captured image input device 22 creates a coordinate conversion table that gives the correspondence between the spatial coordinates of the test color image data and the spatial coordinates of the image captured by the test image capturing camera 4.
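- this step can be sketched as follows: the 20 cross-pattern centers give point pairs between camera coordinates and projector coordinates, from which a coordinate mapping can be fitted. The text does not specify the mapping model; the least-squares affine fit and the point data below are illustrative assumptions:

```python
import numpy as np

# Sketch of building the coordinate conversion from the 20 cross-pattern
# centers. The pattern centers in projector coordinates and the camera's
# view of them are synthetic; a least-squares affine fit is one simple
# model for the conversion.
xs = np.linspace(100.0, 1180.0, 5)               # 5 columns of crosses
ys = np.linspace(100.0, 900.0, 4)                # 4 rows of crosses
proj_pts = np.array([(x, y) for y in ys for x in xs])   # 20 centers

A_true = np.array([[0.9, 0.02], [-0.01, 0.95]])  # hypothetical camera geometry
t_true = np.array([12.0, -7.0])
cam_pts = proj_pts @ A_true.T + t_true           # simulated detections

# Fit the camera -> projector affine map by least squares.
X = np.hstack([cam_pts, np.ones((len(cam_pts), 1))])
coef, *_ = np.linalg.lstsq(X, proj_pts, rcond=None)
recovered = X @ coef
assert np.allclose(recovered, proj_pts, atol=1e-6)   # mapping recovered
```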
- when the creation of the coordinate conversion table for geometric correction is completed, the test image output device 21 next outputs the test color image data to the projector 1.
- FIG. 6 is a diagram showing a state of a divided region in the test color image data output by the test image output device 21, and
- FIG. 7 is a diagram showing text data in which coordinate information of the divided region is recorded.
- this test color image data is obtained by dividing the entire 1280 × 1024-pixel area equally into 4 vertical × 5 horizontal regions (each a 256 × 256-pixel area); in each test image, only one of the RGB colors is displayed at maximum luminance in only one region. To display each of the three RGB colors for each of the divided regions, 60 types of test color image data are prepared and displayed in sequence.
- the reason for dividing into a total of 20 regions in this way, rather than processing pixel by pixel, is twofold. If the three colors R, G, and B were emitted sequentially in time series one pixel at a time, the time required to acquire the data would become far too long; moreover, with only a single pixel emitting, the amount of light would be insufficient for detecting the effect of flare at other pixel positions, and data stability could suffer from, for example, unevenness in the maximum brightness of individual pixels. By processing in units of blocks, the processing can be done in a short time based on stable data with a sufficient amount of light.
- the coordinate information (pattern data) of the test color image data regions is output from the test image output device 21 to the correction data calculation device 23 as text data in a format as shown in FIG. 7.
- the same coordinate system as in FIG. 5 is used: the region at the upper left corner is pattern 1, the region to its right is pattern 2, and the pixel positions are described through the region of pattern 20 at the lower right corner.
- the correction data calculation device 23 calculates the display characteristic data M or the display characteristic data M' using this coordinate information and the captured image data of the test color images acquired from the test image capturing camera 4.
- FIG. 8 is a block diagram showing the configuration of the captured image input device 22.
- the photographed image input device 22 includes: a camera control device 31 that controls the test image photographing camera 4 to perform photographing based on a signal from the test image output device 21; a photographed image storage device 32 that receives and stores the image data captured by the camera; a geometric correction data calculating device 33 that calculates a geometric correction table based on the captured image of the geometric correction pattern stored in the photographed image storage device 32 and the coordinate information corresponding to the geometric correction pattern image from the test image output device 21; and a geometric correction device 34 that geometrically corrects the image data related to the test color images stored in the photographed image storage device 32 based on the geometric correction table calculated by the geometric correction data calculating device 33, and outputs the result to the correction data calculation device 23.
- the camera control device 31 issues a command for photographing and outputs it to the test image photographing camera 4.
- the captured image storage device 32 inputs and stores the image data transmitted from the test image capturing camera 4.
- the photographed image storage device 32 outputs the photographed image data to the geometric correction data calculation device 33 when the data relates to the geometric correction pattern image, and to the geometric correction device 34 when the data relates to the test color image data.
- the geometric correction data calculation device 33 inputs a captured image relating to the geometric correction pattern image from the captured image storage device 32, and inputs coordinate information corresponding to the geometric correction pattern image from the test image output device 21. A process for calculating a geometric correction table is performed.
- the geometric correction table is table data for converting the coordinates of the image data input from the test image capturing camera 4 into the coordinates of the image data output from the test image output device 21, Specifically, it is calculated as follows.
- a cross pattern is detected from a photographed image related to a geometric correction pattern image input from the photographed image storage device 32, and the center coordinates thereof are obtained.
- a geometric correction table is calculated based on the correspondence between the center coordinates of the obtained 20 sets of cross-shaped patterns and the coordinates corresponding to the geometric correction pattern image input from the test image output device 21.
- the geometric correction table calculated in this way by the geometric correction data calculation device 33 is output to the geometric correction device 34.
- the geometric correction device 34 receives the geometric correction table calculated in advance by the geometric correction data calculation device 33 as described above, receives from the captured image storage device 32 the captured images related to the test color image data, performs coordinate conversion of those captured images with reference to the geometric correction table, and outputs the converted image data to the correction data calculating device 23.
- from the coordinate information of the test images input from the test image output device 21 and the geometrically corrected captured images of the test color images input from the captured image input device 22, the correction data calculation device 23 calculates at least one of the display characteristic data M and the display characteristic data M' and outputs it to the correction data storage device 24.
- FIG. 9 is a diagram illustrating a state of a sample region set for a captured image related to a test color image
- FIG. 10 is a diagram illustrating a state of a light emitting region and a sample region other than the light emitting region.
- the correction data calculation device 23 acquires a signal value in a predetermined sample area from a captured image of a test color image after geometric correction in order to obtain display characteristic data.
- this sample area is set, for example, as shown in FIG. 9. Each sample area is a 9 × 9-pixel region positioned at the central coordinates of each of the 20 light emission regions of the test images shown in FIG. 6,
- so that the sample areas are arranged at equal intervals, 4 vertically × 5 horizontally, forming sample areas S1-S20.
- for the sample areas other than the light-emitting region of the test color image (in the example shown in FIG. 10, the upper left corner is the light-emitting region, so the areas S2-S20), the signal values are acquired; for each such sample area, the sum of the signal values of its pixels (the sum over 81 pixels for a 9 × 9-pixel sample area) is computed and the average value is taken, and these averages are used as the flare signals at the center coordinates of the respective sample areas. In this way, a flare signal distribution is first obtained only at the center coordinates of the 19 sample regions outside the light-emitting region.
- the reason for adding and averaging the data of a plurality of pixels in this way is that the amount of light due to flare is not large, so averaging the data improves reliability; such processing is possible because the flare can be assumed to contain no high-frequency components. Performing the processing based on the signal values of only the sample areas also shortens the processing time.
- thereafter, interpolation based on these 19 flare signals yields the flare signals at all other pixel positions. When the light-emitting region is at one of the four corners of the image, as in the example shown in FIG. 10, the flare signal there is obtained by extrapolation from the flare signals at neighboring pixel positions.
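- the sampling step can be sketched as follows; the captured image is a random stand-in, and the 9 × 9 averaging at each region center follows the description (excluding the light-emitting region's own sample area and interpolating to all other pixels are omitted for brevity):

```python
import numpy as np

# Sketch of the flare-signal sampling: 9x9-pixel sample areas centered on
# the 20 region centers are summed (81 pixels each) and averaged. The
# captured image here is a random stand-in for a real camera shot.
H, W, B = 1024, 1280, 256
captured = np.random.default_rng(0).random((H, W))

centers = [(r*B + B//2, c*B + B//2) for r in range(4) for c in range(5)]

def sample_mean(img, cy, cx, half=4):
    """Average of the 9x9 sample area centered at (cy, cx)."""
    patch = img[cy-half:cy+half+1, cx-half:cx+half+1]
    return patch.sum() / patch.size              # sum of 81 pixels, averaged

flare_at_centers = {c: sample_mean(captured, *c) for c in centers}
assert len(flare_at_centers) == 20               # one flare signal per center
```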
- In this way, the flare signals of all pixels are obtained for one test color image, and this processing is performed for all 20 test color images shown in FIG. Note that, since a 3-channel RGB color image is taken as an example here, the above processing is performed on a total of 60 test color images.
- Each of the 20 flare signal distributions obtained per channel is regarded as the flare signal distribution produced when only the center pixel of the corresponding light-emitting region emits light, as shown in FIG. Since the actual measurement is performed with the entire light-emitting area of 256 pixels × 256 pixels emitting light, the distribution is divided by 65536 to convert it into a flare signal distribution per single light-emitting pixel.
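The conversion to a per-pixel flare distribution is a simple scaling, sketched below with hypothetical values:

```python
import numpy as np

# Illustrative sketch: a measured flare distribution is treated as the
# response to the whole 256x256 light-emitting area, so dividing by
# 65536 (= 256 * 256) gives the flare distribution per light-emitting pixel.
AREA_PIXELS = 256 * 256  # 65536

measured_distribution = np.full((1024, 1280), 655.36)  # hypothetical values
per_pixel_distribution = measured_distribution / AREA_PIXELS
```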
- The flare signal distributions corresponding to the other light-emitting pixel positions are calculated by interpolation using the flare signal distributions of neighboring light-emitting pixel positions.
- In this way, 1310720 flare signal distributions, each containing the flare signals of 1310720 pixels, are created in one-to-one correspondence with the light-emitting positions of the 1310720 pixels, yielding the display characteristic data M expressed as a matrix of 1310720 rows and 1310720 columns. Such display characteristic data is generated for each of the three channels as described above.
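The role of the display characteristic data M can be illustrated on a toy image; a full 1310720 × 1310720 matrix is impractical to materialize, so the sizes and values below are assumptions for illustration only.

```python
import numpy as np

# Toy example: a "image" of 4 pixels, so M is 4x4.
# Element m[i, j] gives the flare signal observed at pixel i when
# light-emitting pixel j emits a unit amount of light; the diagonal
# carries the direct (non-flare) response.
n = 4
M = np.eye(n) + 0.01 * (np.ones((n, n)) - np.eye(n))  # weak off-diagonal flare

x = np.array([100.0, 0.0, 0.0, 0.0])  # only pixel 0 emits
observed = M @ x                       # displayed signal including flare

# Flare correction then amounts to solving M @ x_corr = target
# so that the display, including flare, reproduces the target image.
target = np.array([100.0, 0.0, 0.0, 0.0])
x_corr = np.linalg.solve(M, target)
```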
- In each element m of the matrix, j corresponds to the coordinates of the light-emitting pixel, and i corresponds to the coordinates of the pixel at which the flare signal is obtained.
- The display characteristic data M' calculated by the second display characteristic data calculation module of the correction data calculation device 23, and used in the second or fourth correction module of the flare correction device 12, is obtained by shifting each of the 20 flare distributions corresponding to the 20 light-emitting areas so that the center coordinates of the light-emitting area coincide with the center coordinates of the image, and then calculating the average of these 20 shifted distributions.
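The shift-and-average construction described above can be sketched on toy data. Here `np.roll` stands in for the recentering shift (border handling is simplified), and the function name and array sizes are assumptions.

```python
import numpy as np

def shift_invariant_flare(distributions, centers, image_center):
    """Average flare distributions after recentering each on the image center.

    distributions : 2-D flare distributions, one per light-emitting area
    centers       : (row, col) center of each light-emitting area
    image_center  : (row, col) center of the image
    """
    shifted = []
    for dist, (r, c) in zip(distributions, centers):
        dr, dc = image_center[0] - r, image_center[1] - c
        # Move each distribution so its light-emitting center lands on
        # the image center, then average the recentered distributions.
        shifted.append(np.roll(np.roll(dist, dr, axis=0), dc, axis=1))
    return np.mean(shifted, axis=0)

# Hypothetical 8x8 distributions centered at two light-emitting positions
d1 = np.zeros((8, 8)); d1[2, 2] = 1.0
d2 = np.zeros((8, 8)); d2[6, 6] = 1.0
m_prime = shift_invariant_flare([d1, d2], [(2, 2), (6, 6)], (4, 4))
```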
- The flare correction device 12 corrects the color image data as described above using the display characteristic data M or M' calculated in this way, and outputs the corrected color image data to the image data storage device 11.
- Gradation correction taking the gradation characteristics of the projector into account is then performed on the corrected color image data; since this is a known technique, its description is omitted here.
- In the above description, a projector is described as an example of a color display device, but the present invention is not limited to this and can be applied to any image display device, such as a CRT or a liquid crystal panel.
- Although a test image photographing camera composed of an RGB digital camera was used in the above description, a monochrome camera or a multi-band camera with four or more bands can also be used. Alternatively, when the number of measurement samples is relatively small, as in the example shown in FIG. 9, a spot-measurement device such as a spectroradiometer, luminance meter, or colorimeter can be used instead of a camera. In this case, improved measurement accuracy can be expected.
- In the above description, the image data projected by the projector and the image data acquired by the test image photographing camera both measure 1280 horizontal pixels by 1024 vertical pixels, but the number of displayed pixels and the number of captured pixels may differ, and any combination of the two can be used. In that case, the display characteristic data is calculated according to the size of the corrected color image data.
- The number of cross patterns in the geometric correction pattern, the number of light-emitting areas in the test color image, and the number of sample areas for flare signal measurement were all 20 in the above example, but they are not limited thereto and can each be set independently to any number. The configuration may also allow the operator of the image correction apparatus to set them as desired in consideration of measurement accuracy, measurement time, and the like. [0125] In the above description, the image data is stored after flare correction has been applied in advance; however, if a processing speed sufficient to use the corrected image data can be secured, the image data input from the video source may be flare-corrected and displayed in real time.
- In the above description, a display system that performs processing in hardware has been described as an example, but the present invention is not limited to this. An equivalent function may be realized by executing a program on a display device such as a monitor or on a computer to which a measuring device such as a digital camera is connected, or a display method applied to a system having such a configuration may be used.
- As described above, according to the present invention, the influence of light emitted at an arbitrary pixel position on the display chromaticity at other pixel positions can be satisfactorily reduced, and a color image display with high color reproducibility can be realized.
- The test color image measuring means for measuring the spatial distribution of display colors corresponding to the test color image data makes it possible to measure the display characteristics of the color image display device accurately and simply, so that temporal changes of the color image display device can also be accommodated.
- By using a color camera such as a digital camera as the test color image measuring means, the spatial distribution of display colors can be acquired more easily.
- When a luminance meter, colorimeter, spectroradiometer, or the like is used as the test color image measuring means, the display characteristics can be measured more accurately.
- By using a monochrome camera as the test color image measuring means, a low-cost device configuration can be achieved.
- By using a multi-band camera as the test color image measuring means, the display characteristics can be obtained with high accuracy, and spatial measurement can also be performed with high accuracy.
- Further, since the display characteristic data of the color image display device is calculated and used, accurate flare correction based on a model of the flare can be performed.
- In addition, since the flare distribution data is calculated by the flare calculation means and the corrected color image data is calculated based on the calculated flare distribution data, the corrected image data can be calculated easily.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Controls And Circuits For Display Device (AREA)
- Processing Of Color Television Signals (AREA)
- Video Image Reproduction Devices For Color Tv Systems (AREA)
- Liquid Crystal Display Device Control (AREA)
- Control Of Indicators Other Than Cathode Ray Tubes (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04807766A EP1699035A4 (en) | 2003-12-25 | 2004-12-24 | DISPLAY SYSTEM |
US11/472,758 US20060238832A1 (en) | 2003-12-25 | 2006-06-21 | Display system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-431384 | 2003-12-25 | ||
JP2003431384A JP2005189542A (ja) | 2003-12-25 | 2003-12-25 | 表示システム、表示プログラム、表示方法 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/472,758 Continuation-In-Part US20060238832A1 (en) | 2003-12-25 | 2006-06-21 | Display system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005064584A1 true WO2005064584A1 (ja) | 2005-07-14 |
Family
ID=34736429
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/019410 WO2005064584A1 (ja) | 2003-12-25 | 2004-12-24 | 表示システム |
Country Status (4)
Country | Link |
---|---|
US (1) | US20060238832A1 (ja) |
EP (1) | EP1699035A4 (ja) |
JP (1) | JP2005189542A (ja) |
WO (1) | WO2005064584A1 (ja) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005352437A (ja) * | 2004-05-12 | 2005-12-22 | Sharp Corp | 液晶表示装置、カラーマネージメント回路、及び表示制御方法 |
US7362336B2 (en) * | 2005-01-12 | 2008-04-22 | Eastman Kodak Company | Four color digital cinema system with extended color gamut and copy protection |
US20060197775A1 (en) * | 2005-03-07 | 2006-09-07 | Michael Neal | Virtual monitor system having lab-quality color accuracy |
JP4901246B2 (ja) * | 2006-03-15 | 2012-03-21 | 財団法人21あおもり産業総合支援センター | 分光輝度分布推定システムおよび方法 |
DE102006057190A1 (de) * | 2006-12-05 | 2008-06-12 | Carl Zeiss Meditec Ag | Verfahren zur Erzeugung hochqualitativer Aufnahmen der vorderen und/oder hinteren Augenabschnitte |
JP5173954B2 (ja) * | 2009-07-13 | 2013-04-03 | キヤノン株式会社 | 画像処理装置及び画像処理方法 |
US8989436B2 (en) * | 2010-03-30 | 2015-03-24 | Nikon Corporation | Image processing method, computer-readable storage medium, image processing apparatus, and imaging apparatus |
US8531474B2 (en) | 2011-11-11 | 2013-09-10 | Sharp Laboratories Of America, Inc. | Methods, systems and apparatus for jointly calibrating multiple displays in a display ensemble |
JP6260428B2 (ja) * | 2014-04-18 | 2018-01-17 | 富士通株式会社 | 画像処理装置、画像処理方法、及びプログラム |
TWI571844B (zh) * | 2014-08-06 | 2017-02-21 | 財團法人資訊工業策進會 | 顯示系統、影像補償方法與其電腦可讀取記錄媒體 |
WO2019045010A1 (ja) * | 2017-08-30 | 2019-03-07 | 株式会社オクテック | 情報処理装置、情報処理システムおよび情報処理方法 |
WO2020065792A1 (ja) * | 2018-09-26 | 2020-04-02 | Necディスプレイソリューションズ株式会社 | 映像再生システム、映像再生機器、及び映像再生システムのキャリブレーション方法 |
CN111416968B (zh) | 2019-01-08 | 2022-01-11 | 精工爱普生株式会社 | 投影仪、显示系统、图像校正方法 |
JP7270025B2 (ja) * | 2019-02-19 | 2023-05-09 | 富士フイルム株式会社 | 投影装置とその制御方法及び制御プログラム |
TWI720813B (zh) * | 2020-02-10 | 2021-03-01 | 商之器科技股份有限公司 | 醫療影像用行動裝置顯示器亮度校正系統與方法 |
KR20230012909A (ko) * | 2021-07-16 | 2023-01-26 | 삼성전자주식회사 | 전자 장치 및 이의 제어 방법 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000241791A (ja) * | 1999-02-19 | 2000-09-08 | Victor Co Of Japan Ltd | プロジェクタ装置 |
JP2003046810A (ja) * | 2001-07-30 | 2003-02-14 | Nec Viewtechnology Ltd | 画質改善装置および画質改善方法 |
JP2003098599A (ja) * | 2001-09-25 | 2003-04-03 | Nec Corp | フォーカス調整装置およびフォーカス調整方法 |
WO2003071794A1 (fr) * | 2002-02-19 | 2003-08-28 | Olympus Corporation | Procede et dispositif de calcul de donnees de correction d'image et systeme de projection |
JP2005020314A (ja) * | 2003-06-25 | 2005-01-20 | Olympus Corp | 表示特性補正データの算出方法、表示特性補正データの算出プログラム、表示特性補正データの算出装置 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6456339B1 (en) * | 1998-07-31 | 2002-09-24 | Massachusetts Institute Of Technology | Super-resolution display |
JP2001054131A (ja) * | 1999-05-31 | 2001-02-23 | Olympus Optical Co Ltd | カラー画像表示システム |
US6522313B1 (en) * | 2000-09-13 | 2003-02-18 | Eastman Kodak Company | Calibration of softcopy displays for imaging workstations |
JP2003283964A (ja) * | 2002-03-26 | 2003-10-03 | Olympus Optical Co Ltd | 映像表示装置 |
WO2005124299A1 (ja) * | 2004-06-15 | 2005-12-29 | Olympus Corporation | 照明ユニット及び撮像装置 |
US7639849B2 (en) * | 2005-05-17 | 2009-12-29 | Barco N.V. | Methods, apparatus, and devices for noise reduction |
US7404645B2 (en) * | 2005-06-20 | 2008-07-29 | Digital Display Innovations, Llc | Image and light source modulation for a digital display system |
- 2003-12-25 JP JP2003431384A patent/JP2005189542A/ja active Pending
- 2004-12-24 WO PCT/JP2004/019410 patent/WO2005064584A1/ja not_active Application Discontinuation
- 2004-12-24 EP EP04807766A patent/EP1699035A4/en not_active Withdrawn
- 2006-06-21 US US11/472,758 patent/US20060238832A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
See also references of EP1699035A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP1699035A4 (en) | 2008-12-10 |
JP2005189542A (ja) | 2005-07-14 |
EP1699035A1 (en) | 2006-09-06 |
US20060238832A1 (en) | 2006-10-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20060238832A1 (en) | Display system | |
JP4681033B2 (ja) | 画像補正データ生成システム、画像データ生成方法及び画像補正回路 | |
JP3766672B2 (ja) | 画像補正データ算出方法 | |
KR100648592B1 (ko) | 화상 처리 시스템, 프로젝터 및 화상 처리 방법 | |
JP4974586B2 (ja) | 顕微鏡用撮像装置 | |
JP2009171008A (ja) | 色再現装置および色再現プログラム | |
JP2016050982A (ja) | 輝度補正装置及びこれを備えるシステム並びに輝度補正方法 | |
KR20180061792A (ko) | 표시 패널의 보상 데이터 생성 방법 및 장치 | |
CN105185302A (zh) | 单色图像间灯点位置偏差修正方法及其应用 | |
KR20170047449A (ko) | 표시 장치 및 이의 휘도 보정방법 | |
JP2019149764A (ja) | 表示装置用校正装置、表示装置用校正システム、表示装置の校正方法、及び、表示装置 | |
US7477294B2 (en) | Method for evaluating and correcting the image data of a camera system | |
KR20180062571A (ko) | 표시 패널의 보상 데이터 생성 방법 및 장치 | |
JP2005189542A5 (ja) | ||
JP2002267574A (ja) | カラー表示装置の測色装置、測色方法、画質調整・検査装置、画質調整・検査方法 | |
JP2010139324A (ja) | 色ムラ測定方法、および色ムラ測定装置 | |
JP2022066278A (ja) | カメラテストシステムおよびカメラテスト方法 | |
JP2021167814A (ja) | ディスプレイ装置分析システムおよびそれの色分析方法{analysis system for display device and color analysis method thereof} | |
KR20080056624A (ko) | 디스플레이의 그레이 레벨 대 휘도 곡선을 신속히 생성하는방법 및 장치 | |
JP5362753B2 (ja) | 画質調整装置及び画像補正データ生成プログラム | |
JP2010066352A (ja) | 測定装置、補正データ生成装置、測定方法、補正データ生成方法、および補正データ生成プログラム | |
JP2008139709A (ja) | 色処理装置およびその方法 | |
CN110300291B (zh) | 确定色彩值的装置和方法、数字相机、应用和计算机设备 | |
JP2015070348A (ja) | 色むら補正方法及び色むら補正処理部を有した撮像装置 | |
JP4547208B2 (ja) | ディスプレイの較正方法,較正装置,較正テーブル及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2004807766 Country of ref document: EP Ref document number: 11472758 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2004807766 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 11472758 Country of ref document: US |