JP2006215756A - Image processing apparatus, image processing method, and program for the same


Info

Publication number
JP2006215756A
Authority
JP
Japan
Prior art keywords
image
pixel
value
white point
tristimulus value
Legal status
Withdrawn
Application number
JP2005026940A
Other languages
Japanese (ja)
Inventor
Daisuke Ishihara
Toru Ishii
Yoichi Miyake
Toshiya Nakaguchi
Masami Shishikura
Norimichi Tsumura
Original Assignee
Dainippon Ink & Chem Inc
Application filed by Dainippon Ink & Chem Inc
Priority to JP2005026940A
Publication of JP2006215756A

Abstract

PROBLEM TO BE SOLVED: To provide an image processing apparatus, an image processing method, and a program therefor with which the impression a human receives when viewing an actual scene and the impression received when viewing a virtual scene displaying that scene on a screen are closer than before.
SOLUTION: According to the present invention, the tristimulus values of each pixel's color are expressed relative to a white point chosen according to whether the pixel value of the original image is below or above a threshold. A JCh image is then calculated from the tone-mapped image by reflecting, in addition to the per-pixel white point tristimulus values, the average luminance of the background and surround areas and the average illuminance in the virtual environment. Further, in calculating the output image from the JCh image, the tristimulus values of the white point of the display device and the average illuminance of the surrounding environment when observing the display device are reflected.
[Selected drawing] Figure 1

Description

  The present invention relates to an image processing device, an image processing method, and a program for displaying an HDR (High Dynamic Range) image on a display device.

  When creating an image of a virtual object such as a metal can by simulation using general computer graphics technology, the following are required: 1) shape data of the virtual object; 2) plane design data that adds a design to the shape; 3) color designation data for coloring the plane design data; 4) material data describing what kind of material is used (for example, texture data for ink printed on an aluminum plate); 5) virtual environment data, including the illumination color and illumination position of the room in which the virtual object is placed; 6) virtual scene data recording the arrangement relationship between the virtual environment and the virtual object; and 7) virtual camera data recording from which viewpoint direction and with what viewing angle the virtual scene is observed.

  Here, as methods of specifying the shape data of the virtual object, there are a method of creating a drawing with general CAD (computer aided design) software and a method of actually measuring the shape of an object with a three-dimensional digitizer. As methods of specifying the plane design data, there are a method of creating an illustration or a logo using general drawing software and a method of capturing an existing design image using a flatbed scanner. In general computer graphics techniques, the color designation data specifies colors by red (R), green (G), and blue (B), the three primary colors of the display. For the texture data, as found in general computer graphics software, there is a method of empirically setting parameters based on existing reflection models such as the Phong model and the Blinn model published in Non-Patent Document 1 and Non-Patent Document 2.

  In addition, as a method for acquiring the color designation data and the texture data, there is a method, disclosed in Non-Patent Document 3, in which an actual material is measured by gonio-spectrophotometry and the gonio-spectral characteristics of the material are used.

  The virtual environment data can be specified by photographing the actual environment with a camera or the like and projecting the captured image data onto a cube or hemisphere, or by numerically specifying the illumination color and illumination position as three-dimensional spatial coordinate values. As a method of specifying the virtual scene data, there is a method of displaying the specified virtual environment data on a computer screen, arranging a virtual object with a pointing device such as a mouse, and manually specifying the position and tilt angle of the virtual object numerically as three-dimensional spatial coordinate values. As a method of specifying the virtual camera data, there is a method of displaying the specified virtual scene data on a computer screen and specifying the viewpoint and field of view with a pointing device such as a mouse, or specifying them numerically as coordinate values.

  Then, by rendering processing, screen image data as photographed by the virtual camera is obtained by numerical calculation using the specified shape data, plane design data, color designation data, texture data, virtual environment data, virtual scene data, and virtual camera data. The screen image data is obtained as luminance values of red (R), green (G), and blue (B), the three primary colors of a display device such as a monitor, or as spectral luminance values. The obtained screen image data is displayed in a designated window size on the display device. As a result, a scene image using the data 1) to 7) can be expressed on a computer.

  When the screen image data consists of luminance values other than the red (R), green (G), and blue (B) luminance values of the display's three primaries, for example spectral luminance values, the obtained screen image data needs to be converted into display signal data. Furthermore, if the color designation data and texture data are obtained by gonio-spectrophotometric measurement of the actual material and the gonio-spectral reflectance characteristics of the material are used, the luminance values obtained by the calculation may be larger than the luminance range reproducible on the display device (hereinafter referred to as the dynamic range). A specific example is the case where, in a highlight portion seen when the metal can is viewed from the specular reflection direction of the light striking the can, the luminance value of the screen image data obtained by the above calculation is larger than the dynamic range of the display device. In this case, general computer graphics software has handled such pixels by setting red (R), green (G), and blue (B) all to their maximum values, that is, to white on the screen. In other words, when an image of a metal can is created in general computer graphics software, the highlight portion is displayed in white.

However, in the case of a metallic can printed directly on an aluminum plate, the highlight portion in the specular reflection direction of the light on an actual metal can is perceived not as white but as a color tinged with the hue of the ink, owing to the transparency of the printed ink. For this reason, the highlight portion displayed by computer graphics software differs from the highlight portion seen when actually viewing a metallic can printed directly on an aluminum plate, so the computer graphics image differs from the impression a human receives when actually viewing the metallic can.
Note that Non-Patent Document 4, Non-Patent Document 5, Non-Patent Document 6, and Non-Patent Document 7 are disclosed as related prior art.
Non-Patent Document 1: Bui-Tuong Phong, "Illumination for Computer Generated Pictures", CACM, 18(6), June 1975, 311-317; also in BEAT82, 449-455.
Non-Patent Document 2: Blinn, J. F., "Models of Light Reflection for Computer Synthesized Pictures", SIGGRAPH 77, 192-198; also in FREE80, 316-322.
Non-Patent Document 3: Nicodemus, F. E., J. C. Richmond, J. J. Hsia, I. W. Ginsberg, and T. Limperis, "Geometrical Considerations and Nomenclature for Reflectance", NBS Monograph 160, U.S. Department of Commerce, Washington, D.C., October 1977.
Non-Patent Document 4: Nathan Moroney and five others, "The CIECAM02 Color Appearance Model", IS&T/SID 10th Color Imaging Conference, Scottsdale, 23-27 (2002).
Non-Patent Document 5: Mark D. Fairchild and one other, "Meet iCAM: A Next-Generation Color Appearance Model", IS&T/SID 10th Color Imaging Conference, Scottsdale, 33-38 (2002).
Non-Patent Document 6: Erik Reinhard and three others (School of Computing, University of Utah), "Photographic Tone Reproduction for Digital Images", ACM Transactions on Graphics 21(3), pp. 267-276, July 2002 (Proceedings of SIGGRAPH 2002).
Non-Patent Document 7: Garrett M. Johnson and one other, "Rendering HDR Images", IS&T/SID 11th Color Imaging Conference, Scottsdale, 36-41 (2003).

  The problem that a computer graphics image differs from the impression received when viewing the real object in real space is an important one for manufacturers of beverage cans and vehicles. For example, a beverage can manufacturer or a car manufacturer can check the color of a beverage can or a car body based on the impression of the object displayed on the screen using computer graphics, without actually manufacturing the product, thereby reducing the labor required to determine colors. However, if the impression of an object seen in computer graphics differs from the impression of the actually manufactured article, the process of determining colors using computer graphics becomes useless.

  Accordingly, when the screen image data calculated by the rendering process is a high dynamic range image (hereinafter, HDR image) and an image constrained to the dynamic range of the display device is a low dynamic range image (hereinafter, LDR image), a technique (tone mapping technique) for creating, from the HDR image, an LDR image that gives the same impression as when a human actually sees the object, within the dynamic range reproduction capability of the display device, is indispensable in order to reduce the difference between the impression of the object seen in computer graphics and the impression of the actually manufactured product.

  Also, when performing processing such as changing the color (spectral distribution) of illumination in a virtual environment expressed by computer graphics, it is necessary to display the result in consideration of the chromatic adaptation effect of human vision. For example, when the tristimulus values of a virtual object are obtained using the spectral radiance characteristics of an incandescent light source and the spectral reflectance characteristics of the material, and physically accurate color reproduction is performed as RGB signal values on the display device, the image is rendered reddish, including the highlight portion, because the spectral radiance of the light source is reddish, even if the spectral reflectance of the material tends toward white. The virtual scene is therefore displayed on the screen of the display device as entirely reddish. In real space, however, when a human views a real scene in which the actual object is placed under an incandescent light source, the human perceives the material as close to white even in that environment (in this case, under the incandescent light source): through chromatic adaptation, the entire scene is perceived as more whitish than the virtual image obtained by the calculation. In other words, even if physical color reproduction is rendered accurately, the impression perceived by humans differs depending on the environment (such as the spectral distribution of the illumination).

Furthermore, when a virtual image of an object is displayed on the screen as a still image, it is difficult to determine whether a highlight portion, an important factor in expressing texture, is a design color or a highlight produced by the illumination. On the other hand, if the positions and orientations of the lighting, the viewpoint, and the object can be manipulated interactively during observation, the highlight portion moves according to the operation, so it can be recognized as a highlight, and the texture can be expressed realistically.
In other words, in order to create and display a rendered image of a virtual object with more realistic color and texture, it is necessary to consider: improvement of the technology (tone mapping technology) for creating, from an HDR image, an LDR image that gives the impression a human receives when actually viewing the object, within the dynamic range reproduction capability of the display device; reflection of the human chromatic adaptation effect in the virtual scene when the illumination color (spectral distribution) is changed; and high-speed calculation performance that allows observation while manipulating the illumination, the viewpoint, and the position and orientation of the object.

  Therefore, an object of the present invention is to provide an image processing apparatus, an image processing method, and a program therefor with which, by improving the tone mapping technology and by reflecting the human chromatic adaptation effect in the virtual scene when the illumination color (spectral distribution) is changed, the impression a human receives when viewing an actual scene and the impression received when viewing a virtual scene displaying that scene on a screen are closer than ever before.

  The present invention has been made to solve the above problems, and is an image processing apparatus that generates, from an input HDR image, an LDR image that can be displayed within the luminance range of a display device, comprising: first white point image calculating means for calculating, for each pixel of a first tristimulus value image holding the tristimulus values of the HDR image, the white point of the virtual light source when the tristimulus value is less than a predetermined threshold, and a white point increased according to the tristimulus value when the tristimulus value is greater than or equal to the predetermined threshold, thereby calculating a first white point image holding the tristimulus values of the white points; second tristimulus value image calculating means for calculating a second tristimulus value image of tristimulus values normalized using each pixel of the first white point image; first luminance image calculating means for calculating a first luminance image by averaging the tristimulus value Y over all pixels in the pixel group of the background area centered on each pixel of the first tristimulus value image; second luminance image calculating means for calculating a second luminance image by averaging the tristimulus value Y over all pixels in the pixel group of the surround area, which is centered on each pixel of the first tristimulus value image and lies outside the background area; first observation condition parameter calculating means for calculating, based on the first luminance image, the second luminance image, and the average illuminance in the virtual environment, a first observation condition parameter for calculating a JCh image indicating perceptual correlation values; second white point image calculating means for calculating a second white point image obtained by normalizing the first white point image; and JCh image calculating means for calculating, for each pixel, the JCh image indicating the perceptual correlation values using the second tristimulus value image, the second white point image, and the first observation condition parameter.

  The present invention also comprises: second observation condition parameter calculating means for calculating, based on the tristimulus values of the white point of the display device and the average illuminance of the surrounding environment when observing the display device, a second observation condition parameter for calculating the RGB signal values of an image whose pixels, when displayed on the display device in the environment in which the image is displayed and observed, have the same appearance as the JCh image indicating the perceptual correlation values; and image output means for calculating the RGB signal values of the image that gives the same appearance as the JCh image, using the JCh image, the tristimulus values of the white point of the display device, and the second observation condition parameter, and outputting them to the display device.

  According to the present invention, the background area is an area indicating a pixel group within a predetermined range from the center pixel.

  Further, the present invention is characterized in that the surround area is an area indicating a pixel group that lies outside the background area, within a predetermined range wider than the background area.

  The present invention is also an image processing method in an image processing apparatus that generates, from an input HDR image, an LDR image that can be displayed within the luminance range of a display device, wherein: the image processing apparatus calculates, for each pixel of a first tristimulus value image holding the tristimulus values of the HDR image, the white point of the virtual light source when the tristimulus value is less than a predetermined threshold, and a white point increased according to the tristimulus value when the tristimulus value is greater than or equal to the predetermined threshold, thereby calculating a first white point image holding the tristimulus values of the white points; the image processing apparatus calculates a second tristimulus value image of tristimulus values normalized using each pixel of the first white point image; the image processing apparatus calculates a first luminance image by averaging the tristimulus value Y over all pixels in the pixel group of the background area centered on each pixel of the first tristimulus value image; the image processing apparatus calculates a second luminance image by averaging the tristimulus value Y over all pixels in the pixel group of the surround area, which is centered on each pixel of the first tristimulus value image and lies outside the background area; the image processing apparatus calculates, based on the first luminance image, the second luminance image, and the average illuminance in the virtual environment, a first observation condition parameter for calculating a JCh image indicating perceptual correlation values; the image processing apparatus calculates a second white point image obtained by normalizing the first white point image; and the image processing apparatus calculates, for each pixel, the JCh image indicating the perceptual correlation values using the second tristimulus value image, the second white point image, and the first observation condition parameter.

  Further, in the above image processing method according to the present invention, the image processing apparatus calculates, based on the tristimulus values of the white point of the display device and the average illuminance of the surrounding environment when observing the display device, a second observation condition parameter for calculating the RGB signal values of an image whose pixels, when displayed on the display device in the environment in which the image is displayed and observed, have the same appearance as the JCh image indicating the perceptual correlation values; and the image processing apparatus calculates the RGB signal values of the image that gives the same appearance as the JCh image, using the JCh image, the tristimulus values of the white point of the display device, and the second observation condition parameter, and outputs them to the display device.

  The present invention is also a program executed by a computer of an image processing apparatus that generates, from an input HDR image, an LDR image that can be displayed within the luminance range of a display device, the program causing the computer to execute: a process of calculating, for each pixel of a first tristimulus value image holding the tristimulus values of the HDR image, the white point of the virtual light source when the tristimulus value is less than a predetermined threshold, and a white point increased according to the tristimulus value when the tristimulus value is greater than or equal to the predetermined threshold, thereby calculating a first white point image holding the tristimulus values of the white points; a process of calculating a second tristimulus value image of tristimulus values normalized using each pixel of the first white point image; a process of calculating a first luminance image by averaging the tristimulus value Y over all pixels in the pixel group of the background area centered on each pixel of the first tristimulus value image; a process of calculating a second luminance image by averaging the tristimulus value Y over all pixels in the pixel group of the surround area, which is centered on each pixel of the first tristimulus value image and lies outside the background area; a process of calculating, based on the first luminance image, the second luminance image, and the average illuminance in the virtual environment, a first observation condition parameter for calculating a JCh image indicating perceptual correlation values; a process of calculating a second white point image obtained by normalizing the first white point image; and a process of calculating, for each pixel, the JCh image indicating the perceptual correlation values using the second tristimulus value image, the second white point image, and the first observation condition parameter.

  Further, in addition to the above processes, the program according to the present invention causes the computer to execute: a process of calculating, based on the tristimulus values of the white point of the display device and the average illuminance of the surrounding environment when observing the display device, a second observation condition parameter for calculating the RGB signal values of an image whose pixels, when displayed on the display device in the environment in which the image is displayed and observed, have the same appearance as the JCh image indicating the perceptual correlation values; and a process of calculating the RGB signal values of the image that gives the same appearance as the JCh image, using the JCh image, the tristimulus values of the white point of the display device, and the second observation condition parameter, and outputting them to the display device.

  According to the present invention, the output image is calculated by determining the white point that serves as the reference for human color perception according to whether the pixel value of the original image is below or above a threshold. As a result, in the tone mapping process, white point image data is generated in which the tristimulus values of the white point, the reference for human color perception, differ between highlight pixels and non-highlight pixels, so colors referenced to a per-pixel white point can be expressed, and an image closer to the color impression a person perceives in real space can be generated. In the chromatic adaptation process, the average luminance values of the background and surround areas and the average illuminance in the virtual environment are reflected in the tone-mapped image. Therefore, a JCh image representing colors closer to the impression received when a human views the object in real space can be calculated.

  Further, according to the present invention, the output image is calculated using the tristimulus values of the white point of the display device and the average illuminance of the surrounding environment when observing the display device. It is thus possible to calculate an image for screen display that gives the same appearance as the JCh image indicating the perceptual correlation values under the observation environment. Thereby, an image can be calculated in which the impression a human receives when viewing the actual scene and the impression received when viewing the virtual scene displaying that scene on the screen are closer than before.

Hereinafter, an image processing apparatus according to an embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is a block diagram showing the configuration of an image processing apparatus according to an embodiment of the present invention. In this figure, reference numeral 1 denotes an input processing device such as a mouse or a keyboard. Reference numeral 2 denotes a data storage device for storing various data. The data storage device 2 stores shape data of virtual objects, plane design data indicating the patterns of virtual objects, color designation data of virtual objects, texture data of virtual objects, virtual scene data, captured image data of the virtual environment, illumination position data of the virtual environment, illumination type data of the virtual environment, illumination color data of the virtual environment, and position data of the virtual camera. Reference numeral 3 denotes an input processing unit that transfers information received from the input processing device 1 and information read from the data storage device 2 to the graphics processing device. Reference numeral 4 denotes a graphics processing device such as a GPU (Graphics Processing Unit). Reference numeral 5 denotes an output processing unit that performs image output processing. Reference numeral 6 denotes a display device such as a liquid crystal display.

  In the graphics processing device 4, reference numeral 41 denotes a rendering processing unit that generates screen image data (HDR image data) from the input data by a conventional method, using the information received from the input processing unit 3. Reference numeral 42 denotes a tone mapping processing unit that, from the screen image data (HDR image data) generated by the rendering processing unit 41, generates display image data (LDR image data) that gives the same impression as when a human actually sees the object, within the dynamic range reproduction capability of the display device. Reference numeral 43 denotes a chromatic adaptation processing unit that performs processing to reflect, in the virtual image displayed after changing the illumination color (spectral distribution) on the computer, the influence of the chromatic adaptation effect that humans perceive in real space.

  The image processing apparatus according to the present embodiment treats the screen image data calculated by the rendering processing unit 41 as an HDR image and the image finally displayed on the display device 6, which depends on the dynamic range of the display device 6, as an LDR image. Through the tone mapping process, the HDR image is processed so that the difference between the impression of the object seen in computer graphics and the impression of the actually produced object is smaller than before, and through the chromatic adaptation process, an LDR image is created from the HDR image that gives the same impression as when a human actually sees the object within the dynamic range of the display device 6, thereby displaying an image of the virtual object with more realistic color and texture than before.

Next, the processing flow of the image processing apparatus will be described.
First, when the input processing unit 3 receives from the input processing device 1 a designation of the data stored in the data storage device 2, the input processing unit 3 reads from the data storage device 2 the shape data of the virtual object, the plane design data indicating the pattern of the virtual object, the color designation data of the virtual object, the texture data of the virtual object, the virtual scene data, the captured image data of the virtual environment, the illumination position data of the virtual environment, the illumination type data of the virtual environment, the illumination color data of the virtual environment, and the position data of the virtual camera.

  Here, the shape data of the virtual object is, for example, data indicating the 3D modeling coordinates of a beverage can. The plane design data is data indicating the pattern of the beverage can as two-dimensional plane vectors. The color designation data of the virtual object is gonio-spectral reflectance data indicating the color of the aluminum of the beverage can. The texture data of the virtual object is gonio-spectral reflectance data indicating the glossy color of the aluminum. The virtual scene data is position information for placing the beverage can at a predetermined position in the background image, that is, data on the coordinates indicating the arrangement relationship between the shape data of the beverage can and the background image. The captured image data of the virtual environment is photographic data to be combined as the background image on which the beverage can is arranged. The illumination position data of the virtual environment is the coordinate data, in three-dimensional space, of the light source that illuminates the beverage can. The illumination type data of the virtual environment is, for example, data indicating the illuminance of the illumination in the virtual environment in which the input image is displayed. The illumination color data of the virtual environment is data indicating the spectral distribution of the light source in the virtual environment in which the input image is displayed. The position data of the virtual camera comprises the camera position and focal length parameters with which the input image is captured (viewed) in the virtual environment of the input image.

  Then, the input processing unit 3 notifies the rendering processing unit 41 of each piece of data read from the data storage device 2 and instructs it to start the rendering process. Upon receiving the instruction, the rendering processing unit 41 starts the rendering process and generates an HDR image. This rendering process is the same as that of a conventional rendering apparatus. The image obtained as a result is the screen image (HDR image) data, which is image data not yet tone-mapped to the dynamic range that the display device 6 can express. Next, the tone mapping processing unit 42 performs tone mapping on the HDR image generated by the rendering processing unit to create an LDR image that gives the same impression as when a human actually sees the object.

FIG. 2 is a diagram showing a processing flow of JCh image calculation.
Next, the processing of the tone mapping processing unit 42 will be described with reference to FIG. 2.
First, the HDR image generated by the rendering processing unit 41 is taken as the input image; the input image data holds the RGB values of each pixel of the image. The tone mapping processing unit 42 reads the tone map processing data from the data storage device 2. Here, the tone map processing data consists of X_E Y_E Z_E: the CIEXYZ tristimulus values of the light source in the virtual environment in which the input image is displayed (the illumination color data of the virtual environment), and θ: a threshold parameter (scalar value) that determines, in the white point image calculation, whether to use the tristimulus values of the light source or the photographic tone mapping technique described in Non-Patent Document 6.

When the tone mapping processing unit 42 has read the input image and the tone map processing data, it converts the RGB values of each pixel of the input image into CIEXYZ values and calculates an XYZ image (first tristimulus value image) (step S1). Here, the pixel positions of the image are denoted (x, y), x = 1, 2, 3, ..., M, y = 1, 2, 3, ..., N. F denotes the two-dimensional Fourier transform of an image, and F^-1 the two-dimensional inverse Fourier transform. This XYZ image calculation depends on the display device 6. For example, in the case of a display device 6 conforming to the sRGB color space, with the RGB signal values of the input image denoted R_m G_m B_m, the processing of the following equation (1) is applied to each pixel of the input image.
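
As a reference, the following is a minimal sketch of the step S1 conversion for an sRGB display. Equation (1) appears only as a figure in the original, so the standard IEC 61966-2-1 sRGB definition is assumed here; the array shapes and function name are illustrative.

import numpy as np

def srgb_to_xyz(rgb_m):
    """Step S1 sketch: rgb_m is an (M, N, 3) array of R_m G_m B_m in [0, 1]."""
    # Undo the sRGB transfer curve to obtain linear RGB.
    linear = np.where(rgb_m <= 0.04045,
                      rgb_m / 12.92,
                      ((rgb_m + 0.055) / 1.055) ** 2.4)
    # Standard sRGB (D65) linear RGB -> CIEXYZ matrix.
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    return linear @ m.T * 100.0  # scaled so that display white has Y = 100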

Next, the tone mapping processing unit 42 uses the XYZ image calculated in step S1, the tristimulus values X_E Y_E Z_E of the light source, and the threshold parameter θ to calculate an X_W Y_W Z_W image (an image holding, in each pixel, the CIEXYZ values of the white point for that pixel: the first white point image) (step S2). The following equation (2) is used in the processing of step S2.

Next, the tone mapping processing unit 42 uses the XYZ image calculated in step S1 and the X_W Y_W Z_W image calculated in step S2 to calculate an X'Y'Z' image (an XYZ image normalized, in each pixel, by the X_W Y_W Z_W image: the second tristimulus value image) (step S3). The following equation (3) is used in the processing of step S3.

Here, in step S1, the XYZ values of each pixel of the input image are calculated. In step S2, for pixels whose values are at or below the white point threshold parameter θ, the white point tristimulus values X_W Y_W Z_W are taken from the tristimulus values X_E Y_E Z_E of the light source; for pixels whose values exceed the threshold parameter θ, the white point tristimulus values X_W Y_W Z_W are calculated using the photographic tone mapping technique. As a result, white point image data can be generated in which the tristimulus values X_W Y_W Z_W of the white point, the reference for human color perception, differ between highlight pixels and non-highlight pixels. Human color perception has the property of judging the colors of other regions with white as the reference. Therefore, through the processing of step S3, by normalizing the XYZ image by the X_W Y_W Z_W image, an X'Y'Z' image can be calculated that takes into account the influence of the white point tristimulus values X_W Y_W Z_W, the reference for human color perception, in both the highlight portions and the portions other than highlights. In other words, in the tone mapping process, colors referenced to the per-pixel white point can be expressed in consideration of that influence, and an image can be generated that gives the same impression as when a human actually sees the object. This completes the tone mapping process. The threshold parameter θ is an appropriate value obtained through experiments. When the tone mapping process is completed, the tone mapping processing unit 42 transmits the X_W Y_W Z_W image calculated in step S2 and the X'Y'Z' image calculated in step S3 to the chromatic adaptation processing unit 43.
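
Equations (2) and (3) appear only as figures, so the sketch below encodes one plausible reading of steps S2 and S3: below the threshold θ each pixel's white point is the light-source white point X_E Y_E Z_E, and above it the white point is raised in proportion to the pixel's own luminance, in the spirit of the photographic tone mapping of Non-Patent Document 6. The exact highlight boost is an assumption.

import numpy as np

def white_point_image(xyz, xyz_e, theta):
    """Step S2 sketch: xyz is the (M, N, 3) first tristimulus value image,
    xyz_e the light-source white point (3,), theta the threshold (scalar)."""
    y = xyz[..., 1:2]                   # tristimulus value Y of each pixel
    boost = np.maximum(y / theta, 1.0)  # assumed increase above the threshold
    return np.where(y < theta, xyz_e, xyz_e * boost)

def normalize_by_white(xyz, xyz_w):
    """Step S3 sketch: the X'Y'Z' image, normalized per pixel by the
    white point image."""
    return xyz / xyz_w * 100.0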

Next, chromatic adaptation processing will be described.
The chromatic adaptation processing unit 43 receives the X_W Y_W Z_W image and the X'Y'Z' image from the tone mapping processing unit 42 and starts the chromatic adaptation process. First, the chromatic adaptation processing unit 43 reads the chromatic adaptation processing data from the data storage device 2. This chromatic adaptation processing data consists of E: the average illuminance [lux] in the virtual environment in which the input image is displayed (the illumination type data of the virtual environment), θ: the threshold parameter (scalar value) that determines whether the tristimulus values of the light source or photographic tone mapping are used in the white point image calculation, and λ: the focal length parameter (scalar value) of the camera with which the input image is captured (viewed) in the virtual environment of the input image. The chromatic adaptation processing unit 43 then normalizes each pixel of the X_W Y_W Z_W image so that the luminance Y_W becomes 100 (step S4). The following equation (4) is used in the processing of step S4. The image data obtained by performing the normalization of step S4 on each pixel is taken as the X'_W Y'_W Z'_W image (second white point image).

Next, the chromatic adaptation processing unit 43 uses the XYZ image and the focal length parameter λ to calculate a Y_b image (first luminance image) over the entire image (step S5). The Y_b image is an image obtained by averaging the tristimulus value Y over the pixel group of the background region centered on each pixel of the XYZ image (background region: for example, the range in which the viewing angle centered on the pixel, with the camera's field of view as the reference, is between 2 degrees and 10 degrees), for all pixels of the XYZ image. The processing of step S5 is performed by the following equation (5).

  Next, the chromatic adaptation processing unit 43 uses the XYZ image and the focal length parameter λ to calculate a surround image (second luminance image) over the entire image (step S6). The surround image is an image obtained by averaging the tristimulus value Y over the pixel group of the surround region centered on each pixel of the XYZ image (surround region: for example, the range outside the background region, with the viewing angle centered on the pixel and the camera's field of view as the reference), for all pixels of the XYZ image. The processing of step S6 is performed by the following equation (6).
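
A sketch of the averaging in steps S5 and S6, assuming that equations (5) and (6) amount to a local mean of the Y channel computed with the Fourier transform pair F and F^-1 defined above. The conversion from the viewing angles and the focal length parameter λ to a pixel radius is left as a hypothetical helper, radius_from_angle, which is not defined in the original.

import numpy as np

def local_mean_y(xyz, radius):
    """Mean of the tristimulus value Y over a disk of `radius` pixels,
    computed as a circular convolution via the 2-D Fourier transform."""
    y = xyz[..., 1]
    rows, cols = y.shape
    yy, xx = np.ogrid[:rows, :cols]
    kernel = (((yy - rows // 2) ** 2 + (xx - cols // 2) ** 2)
              <= radius ** 2).astype(float)
    kernel /= kernel.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(y) *
                                np.fft.fft2(np.fft.ifftshift(kernel))))

# Usage sketch (radius_from_angle is a hypothetical helper):
# y_b      = local_mean_y(xyz, radius_from_angle(10.0, lam))   # step S5
# surround = local_mean_y(xyz, radius_from_angle(20.0, lam))   # step S6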

FIG. 3 is a diagram showing an overview of the viewing angle of the background area and the viewing angle of the surround area.
Here, in steps S5 and S6, an image holding the tristimulus value Y of the background region in each pixel (the Y_b image) and an image holding the tristimulus value Y of the surround region in each pixel (the surround image) are generated in order to reflect the influence of the luminance of the surrounding background and surround regions when the color of each pixel is judged.

  Next, the chromatic adaptation processing unit 43 generates an RGB image holding, in each pixel, cone responses optimized for the chromatic adaptation prediction calculation (step S7). In this process, the following equation (7) is applied to each pixel of the X'Y'Z' image obtained in step S3.
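
Equation (7) is shown only as a figure; assuming the CIECAM02 model of Non-Patent Document 4, the cone response "optimized for chromatic adaptation prediction" would be the CAT02 transform sketched below.

import numpy as np

# CAT02 matrix from CIECAM02 (Non-Patent Document 4).
M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def to_cat02_rgb(xyz_prime):
    """Step S7 sketch: per-pixel cone responses for adaptation prediction."""
    return xyz_prime @ M_CAT02.T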

Next, for the white point image generated in step S4, the chromatic adaptation processing unit 43 generates an R_W G_W B_W image holding, in each pixel, cone responses optimized for the chromatic adaptation prediction calculation (step S8). In this process, the following equation (8) is applied to each pixel of the X'_W Y'_W Z'_W image.

Next, the chromatic adaptation processing unit 43 calculates the observation condition parameters (first observation condition parameters), consisting of the c image, N_c image, F_L value, N_bb image, N_cb image, z image, n image, and D image, as well as the other variables used in the observation condition parameter calculation, the L_A value, the F image, and the k value, by the following equations (9) to (18) (step S9). The L_A value (a calculation variable), the k value (a calculation variable), and the F_L value are scalar values and, once obtained, need not be computed for each pixel.
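
A sketch of step S9 under the assumption that equations (9) to (18) follow the CIECAM02 viewing-condition formulas; the surround constants F, c, and N_c are taken here as the "average" surround values, and deriving the adapting luminance L_A from the average illuminance E is also an assumption. In the patent, n, N_bb, N_cb, z, and D are per-pixel images because the Y_b image and the surround image vary across the image.

import numpy as np

def observation_params(e_lux, y_b, y_w=100.0, f=1.0, c=0.69, n_c=1.0):
    """Step S9 sketch: y_b may be a scalar or the per-pixel Y_b image."""
    l_a = e_lux / (5.0 * np.pi)   # assumed adapting-field luminance from E
    k = 1.0 / (5.0 * l_a + 1.0)
    f_l = (0.2 * k**4 * (5.0 * l_a)
           + 0.1 * (1.0 - k**4)**2 * (5.0 * l_a)**(1.0 / 3.0))
    n = y_b / y_w                 # per-pixel when y_b is the Y_b image
    n_bb = n_cb = 0.725 * (1.0 / n)**0.2
    z = 1.48 + np.sqrt(n)
    d = f * (1.0 - (1.0 / 3.6) * np.exp(-(l_a + 42.0) / 92.0))
    return c, n_c, f_l, n_bb, n_cb, z, n, d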

Next, the chromatic adaptation processing unit 43 generates an R_C G_C B_C image holding, in each pixel, the cone responses after the chromatic adaptation prediction calculation (step S10). This process computes each pixel by the following equation, using the RGB image calculated in step S7, the X'_W Y'_W Z'_W image calculated in step S4, the R_W G_W B_W image calculated in step S8, and the D image of the observation condition parameters.
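
Assuming the CIECAM02 chromatic adaptation transform for the equation of step S10 (shown only as a figure): each cone channel is shifted toward complete adaptation according to the degree of adaptation D, using the white point cone responses R_W G_W B_W. With Y_W normalized to 100 in step S4, this reads:

def adapt_cone_response(rgb, rgb_w, d, y_w=100.0):
    """Step S10 sketch: rgb and rgb_w are (..., 3) cone responses; d is the
    degree of adaptation (a scalar, or an array broadcastable against rgb,
    e.g. shape (M, N, 1))."""
    return ((y_w * d / rgb_w) + (1.0 - d)) * rgb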

Next, for the white point image, the chromatic adaptation processing unit 43 calculates an R_CW G_CW B_CW image holding, in each pixel, the cone responses after the chromatic adaptation prediction calculation (step S11). This process computes each pixel by the following equation, using the R_W G_W B_W image calculated in step S8, the X'_W Y'_W Z'_W image calculated in step S4, and the D image of the observation condition parameters calculated in step S9.

Next, the chromatic adaptation processing unit 44 calculates an R′G′B ′ image having a cone response optimized for perceptual correlation value calculation in a pixel (step S12). This process is calculated by the following formula for each pixel of the R C G C B C image calculated in step S10.

Next, for the white point image, the chromatic adaptation processing unit 43 calculates an R'_W G'_W B'_W image holding, in each pixel, cone responses optimized for the perceptual correlation value calculation (step S13). This process computes each pixel of the R_CW G_CW B_CW image calculated in step S11 by the following equation.

Next, the chromatic adaptation processing unit 43 calculates an R'_a G'_a B'_a image holding, in each pixel, values to which the nonlinearity of perceived cone response has been applied (step S14). This process uses the R'G'B' image calculated in step S12 and the F_L value of the observation condition parameters calculated in step S9, and computes each pixel of the R'G'B' image by the following equation.
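
A sketch of the step S14 nonlinearity, assuming the CIECAM02 post-adaptation compression (for non-negative responses); F_L is the scalar from step S9.

def perceptual_nonlinearity(rgb_prime, f_l):
    """Step S14 sketch: applied channel-wise to the R'G'B' image."""
    t = (f_l * rgb_prime / 100.0) ** 0.42
    return 400.0 * t / (t + 27.13) + 0.1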

Next, for the white point image, the chromatic adaptation processing unit 43 calculates an R'_aW G'_aW B'_aW image holding, in each pixel, values to which the nonlinearity of perceived cone response has been applied (step S15). This process uses the R'_W G'_W B'_W image calculated in step S13 and the F_L value of the observation condition parameters calculated in step S9, and computes each pixel of the R'_W G'_W B'_W image by the following equation.

Next, the chromatic adaptation processing unit 43 calculates a JCh image holding perceptual correlation values in each pixel (J is lightness, C is chroma, and h is hue) (step S16). This process uses the R'_a G'_a B'_a image calculated in step S14, the R'_aW G'_aW B'_aW image calculated in step S15, and the N_c image, N_cb image, N_bb image, c image, z image, and n image of the observation condition parameters calculated in step S9, and computes the following equations for each corresponding pixel of the R'_a G'_a B'_a image and the R'_aW G'_aW B'_aW image.
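
A sketch of the step S16 correlates, assuming the CIECAM02 formulas of Non-Patent Document 4. Here rgba is the R'_a G'_a B'_a image, rgba_w the R'_aW G'_aW B'_aW image, and the remaining arguments are the step S9 parameters (per-pixel images or scalars).

import numpy as np

def jch_image(rgba, rgba_w, n_c, n_cb, n_bb, c, z, n):
    r, g, b = rgba[..., 0], rgba[..., 1], rgba[..., 2]
    rw, gw, bw = rgba_w[..., 0], rgba_w[..., 1], rgba_w[..., 2]
    # Opponent responses and hue angle h.
    a = r - 12.0 * g / 11.0 + b / 11.0
    bb = (r + g - 2.0 * b) / 9.0
    h = np.degrees(np.arctan2(bb, a)) % 360.0
    e_t = 0.25 * (np.cos(np.radians(h) + 2.0) + 3.8)
    # Achromatic responses for the stimulus and for the white.
    big_a = (2.0 * r + g + b / 20.0 - 0.305) * n_bb
    big_a_w = (2.0 * rw + gw + bw / 20.0 - 0.305) * n_bb
    j = 100.0 * (big_a / big_a_w) ** (c * z)            # lightness J
    t = ((50000.0 / 13.0) * n_c * n_cb * e_t * np.hypot(a, bb)
         / (r + g + 21.0 * b / 20.0))
    cc = t ** 0.9 * np.sqrt(j / 100.0) * (1.64 - 0.29 ** n) ** 0.73  # chroma C
    return np.stack([j, cc, h], axis=-1)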

Here, a perceptual correlation value (JCh) in general means a value that quantifies an arbitrary color under arbitrary observation conditions. The perceptual correlation values computed by the above equations quantify the appearance of the scene as if the observer were standing in the scene of the input image; that is, they are calculated by the equations under the observation condition of being in the scene of the input image. Through the above processing, in steps S1 to S3 a tone mapping process is performed that takes into account the influence of the white point tristimulus values X_W Y_W Z_W, the reference for human color perception, in the highlight portions and the portions other than highlights, and in steps S4 to S15 a process is performed that takes into account the chromatic adaptation by which a human perceives other colors with the white of the light source in the virtual environment as the reference. A JCh image can thus be calculated that represents colors closer to the impression received when a human views the object in real space.

FIG. 4 is a diagram showing a processing flow for calculating an output image based on a JCh image.
As shown in FIG. 4, the chromatic adaptation processing unit 43 next performs the process of determining the RGB values of the output image to be output to the display device, based on the JCh image calculated in step S16. That is, it calculates the RGB signal values that give the same appearance as the perceptual correlation values calculated in step S16, under the environment in which the image is displayed on the display device and observed. Here, the data storage device 2 records the X_W2 Y_W2 Z_W2 values, which are the CIEXYZ values of the white point of the display device, and E_2: the average illuminance [lux] of the surrounding environment when observing the display device.

The chromatic adaptation processing unit 43 reads the tristimulus values X_W2 Y_W2 Z_W2 of the white point of the display device and calculates R_W2 G_W2 B_W2, the cone responses of the white point optimized for the chromatic adaptation prediction calculation (step S17). This process is computed by the following equation.

Next, the chromatic adaptation processing unit 43 calculates the observation condition parameters (second observation condition parameters): the c_2 value, N_c2 value, F_L2 value, N_bb2 value, N_cb2 value, z_2 value, n_2 value, and D_2 value (step S18). This process is computed by the following equations using the Y_b2 value, the surround_2 value, and the average illuminance E_2 of the surrounding environment when observing the display device.

Next, the chromatic adaptation processing unit 43 calculates the R_CW2 G_CW2 B_CW2 values, the cone responses of the white point after the chromatic adaptation prediction calculation (step S19). This process is computed by the following equation using the R_W2 G_W2 B_W2 values calculated in step S17, the white point X_W2 Y_W2 Z_W2 values of the display device, and the observation condition parameter D_2.

Next, the chromatic adaptation processing unit 43 calculates the R'_W2 G'_W2 B'_W2 values, the cone responses of the white point optimized for the perceptual correlation value calculation (step S20). This process is computed by the following equation using the R_CW2 G_CW2 B_CW2 values calculated in step S19.

Next, the chromatic adaptation processing unit 43 calculates the R'_aW2 G'_aW2 B'_aW2 values, obtained by applying the nonlinearity of perceived cone response to the cone responses of the white point (step S21). This process is computed by the following equation using the R'_W2 G'_W2 B'_W2 values calculated in step S20 and the observation condition parameter F_L2 value.

Next, the chromatic adaptation processing unit 43 calculates an R'_a2 G'_a2 B'_a2 image holding, in each pixel, values to which the nonlinearity of perceived cone response has been applied (step S22). This process performs the following computation on each pixel of the JCh image, using the JCh image calculated in step S16, the R'_aW2 G'_aW2 B'_aW2 values calculated in step S21, and the observation condition parameters N_c2, N_cb2, N_bb2, z_2, and n_2.

Next, the chromatic adaptation processing unit 43 calculates an R'_2 G'_2 B'_2 image holding, in each pixel, cone responses optimized for the perceptual correlation value calculation (step S23). This process performs the following computation on each pixel of the R'_a2 G'_a2 B'_a2 image calculated in step S22, using the observation condition parameter F_L2 value.
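
Step S23 undoes the nonlinearity of step S14; assuming the CIECAM02 form given in the step S14 sketch, the inverse is:

import numpy as np

def inverse_nonlinearity(rgba2, f_l2):
    """Step S23 sketch: recover R'_2 G'_2 B'_2 from R'_a2 G'_a2 B'_a2."""
    t = np.maximum(rgba2 - 0.1, 1e-9)  # clamp to keep the root real
    return (100.0 / f_l2) * (27.13 * t / (400.0 - t)) ** (1.0 / 0.42)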

Next, the chromatic adaptation processing unit 43 calculates an R_C2 G_C2 B_C2 image holding, in each pixel, the cone responses after the chromatic adaptation prediction calculation (step S24). This process performs the following computation on each pixel of the R'_2 G'_2 B'_2 image calculated in step S23.

Next, the chromatic adaptation processing unit 43 calculates an R_2 G_2 B_2 image holding, in each pixel, cone responses optimized for the chromatic adaptation prediction calculation (step S25). This process performs the following computation on each pixel of the R_C2 G_C2 B_C2 image, using the R_C2 G_C2 B_C2 image calculated in step S24, the white point X_W2 Y_W2 Z_W2 values of the display device, and the observation condition parameter D_2.

Next, the chromatic adaptation processing unit 43 calculates an X_2 Y_2 Z_2 image holding CIEXYZ values in each pixel (step S26). This process performs the following computation on each pixel of the R_2 G_2 B_2 image calculated in step S25.

The X_2 Y_2 Z_2 image calculated in step S26 is the display image data (LDR image). The chromatic adaptation processing unit 43 then transfers the X_2 Y_2 Z_2 image calculated in step S26 to the output processing unit 5. Next, the output processing unit 5 uses the X_2 Y_2 Z_2 image to calculate the output image holding, in each pixel, computer RGB signals for screen display (not cone-response RGB) (step S27). This process depends on the display device. For example, in the case of a display device conforming to the sRGB color space, the following computation is performed on each pixel of the X_2 Y_2 Z_2 image, where the computer RGB signals for screen display (the final output) are denoted R_m2 G_m2 B_m2.
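
For step S27 with an sRGB display (the display-dependent case the text names), a minimal sketch using the standard XYZ-to-linear-RGB matrix followed by sRGB gamma encoding; simply clipping out-of-gamut values is an assumption.

import numpy as np

def xyz_to_srgb(xyz2):
    """Step S27 sketch: X_2 Y_2 Z_2 image -> R_m2 G_m2 B_m2 output image."""
    m = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    linear = np.clip((xyz2 / 100.0) @ m.T, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1.0 / 2.4) - 0.055)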

Through the above processing, the output image is calculated using the X_W2 Y_W2 Z_W2 values, the tristimulus values of the white point of the display device, and the average illuminance E_2 of the surrounding environment when observing the display device. It is therefore possible to calculate an image for screen display that gives the same appearance as the perceptual correlation values calculated in step S16 under the environment in which the image is displayed on the display device and observed. Thereby, an image can be calculated in which the impression a human receives when viewing the actual scene and the impression received when viewing the virtual scene displaying that scene on the screen are closer than before.

  Next, the differences between the techniques of Non-Patent Document 4, Non-Patent Document 5, and Non-Patent Document 7 and the invention described above will be explained. The techniques of Non-Patent Document 5 and Non-Patent Document 7 are color appearance models to which the technique of Non-Patent Document 4 is applied. Non-Patent Document 4, Non-Patent Document 5, and Non-Patent Document 7 also calculate perceptual correlation values and calculate LDR images based on the calculated perceptual correlation values, as in the present invention. In the technique of the present invention, however, the threshold parameter θ is used to distinguish highlight portions of the input image from non-highlight portions according to whether the luminance of a pixel exceeds the threshold parameter θ, and the white point used to judge the color of the pixel is changed accordingly. That is, for pixels not exceeding the threshold parameter θ (portions other than highlights), the color of the pixel is judged with the white point of the light-source tristimulus values as the reference, and for pixels exceeding the threshold parameter θ (highlight portions), the color of each pixel is judged with a value at or above the white point of the light-source tristimulus values as the reference. As a result, in the tone mapping of the input image, portions other than highlights are reproduced colorimetrically, while photographic tone mapping, which is known to give good image quality, is applied to the highlight portions. In the chromatic adaptation process, instead of estimating the tristimulus values of the light source from the input image, the light source information of the virtual environment is used, so that an image closer to the color impression humans perceive in real space can be generated.

Also, in the chromatic adaptation process of the technique of the present invention, an image holding the tristimulus value Y of the background region in each pixel (the Y_b image) and an image holding the tristimulus value Y of the surround region in each pixel (the surround image) are generated, the observation condition parameters are made to reflect the luminance of the surrounding background and surround regions when the color of each pixel is judged, and the JCh image is calculated using those observation condition parameters. This is therefore a chromatic adaptation process that takes into account the chromatic adaptation by which a human perceives other colors with the white of the light source in the virtual environment as the reference, and a JCh image representing colors closer to the impression received when a human views the object in real space can be calculated.

Further, in the technique of the present invention, as described above, the output image is calculated using the X_W2 Y_W2 Z_W2 values, the tristimulus values of the white point of the display device, and the average illuminance E_2 of the surrounding environment when observing the display device, so an image for screen display can be calculated that gives the same appearance as the perceptual correlation values calculated in step S16 under the environment in which the image is displayed on the display device and observed. Moreover, the JCh image used to calculate the output image is itself a JCh image representing colors closer to the impression received when a human views the object in real space; therefore, an image can be calculated in which the impression a human receives when viewing the actual scene and the impression received when viewing the virtual scene displaying that scene on the screen are closer than before.

  Next, the results of subjective evaluation experiments comparing LDR images displayed by the method of Non-Patent Document 5 and Non-Patent Document 7 (iCAM) with LDR images displayed by the method of the present invention will be described.

(Experiment 1)
In Experiment 1, the iCAM method and the method of the present invention are compared using one LDR still image. Here, the LDR still image is an image in which a beverage can is placed under illuminant A (350 [lux]) so that no highlight is visible, and the comparison concerns how the image displayed as a result of the image processing of the beverage can appears through the visual adaptation effect. The LDR still image used as the input image is an image (computer RGB signal values) created by computer graphics technology using the gonio-spectral reflection characteristics and shape of the beverage can.

In the image processing using the method of the present invention, a total of 30 images are created in which the appearance of the beverage can is predicted by changing the following parameters.
θ: threshold parameter for white point image calculation <0, 10, 20, 30, 50, 100 (6 types in total)>
λ: Camera focal length parameter <1, 5, 10, 20, 50 (5 types in total)>
In the method of the present invention, the input values other than the input image and the above parameters are set as follows according to the observation conditions.
X_E Y_E Z_E: CIEXYZ values of the light source (illuminant A) in the scene of the input image [103.8 100 49.7]
E: Average illuminance in the scene of the input image [350 lux]
X_W Y_W Z_W: CIEXYZ values of the white point of the display device [93.85 100 108.50] (measured)
E_2: Average illuminance of the surrounding environment when observing the display device [350 lux]

Further, in the image processing using the iCAM technique of Non-Patent Document 5 and Non-Patent Document 7, a total of 18 images in which the appearance of a beverage can is predicted are created by changing the following parameters.
Low-pass filter size parameter σ: 1/4, 1/8, 1/16 of the image size (the larger of width and height) (3 types in total)
Calculation method of R_W G_W B_W: Formula (67) (1 type in total)
Color adaptation parameter D: 0.1, 0.5, 1.0 (3 types in total)
Coefficient by which F_L is multiplied: 1/1.0, 1/1.3 (2 types in total)
Parameter Clipsize for determining the maximum RGB signal value: 100 (1 type in total)
In the image processing using the iCAM method of Non-Patent Documents 5 and 7, input values other than the input image and the above parameters were set as follows according to the observation conditions.
X_W Y_W Z_W: CIEXYZ value of the display device white point [93.85 100 108.50] (measured)

Then, one image was selected at random from the image group (30 images) in which the appearance of the beverage can was predicted by the technique of the present invention and the image group (18 images) in which it was predicted by the iCAM technique, and displayed on the display device. The subject then selected, from the following five evaluation criteria, how close the appearance of the displayed image was to the appearance of the actual beverage can.
1. Similar  2. Somewhat similar  3. Cannot say either way  4. Not very similar  5. Not similar

In the comparison between the iCAM technique and the technique of the present invention in Experiment 1, a binocular separation (septum) method was used in which the actual beverage can was observed with one eye and the image displayed on the display device with the other eye. The number of subjects was 10. The evaluation results for each image were analyzed by the method of successive categories, and the evaluation value of each image was normalized so that the highest-rated image among all images became 1 and the lowest-rated image became 0.
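Only the final rescaling of that analysis is shown below; the interval-scale scores produced by the method of successive categories (the assumed input `scale_values`, one score per image) are mapped so the best image becomes 1 and the worst 0:

```python
import numpy as np

def normalize_evaluations(scale_values) -> np.ndarray:
    """Min-max rescale interval-scale scores to [0, 1] across all images."""
    v = np.asarray(scale_values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())
```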

FIG. 5 is a diagram showing the results of Experiment 1.
As shown in FIG. 5, the evaluation value of the highest-rated image in the image group in which the appearance of the beverage can was predicted by the method of the present invention was 1, while the evaluation value of the highest-rated image in the image group predicted by the iCAM method was 0.37. This shows that, compared with the conventional method, the technique of the present invention brings the impression of viewing a virtual scene displayed on a screen closer to the impression a human receives when viewing the actual scene.

The following are the instructions given to the subjects in Experiment 1.
(1) In this experiment, we will evaluate the appearance of the design part of the beverage can displayed on the display and the appearance of the design part of the actual beverage can.
(2) Select the evaluation from the following five levels.
(3) The evaluation target is only the design part of the beverage can, and the background and the silver lid and bottom part are excluded from the evaluation.
(4) There are 48 images in total.

(Experiment 2)
In Experiment 2, the iCAM method and the method of the present invention are compared using a single HDR still image. The HDR still image shows a beverage can placed under light source A (350 [lux]) so that the highlight is visible, and the experiment compares how the images displayed as a result of image processing appear through the visual adaptation effect. The input HDR still image is an image (RGB signal values for a computer) created by computer graphics technology using the gonio-spectral reflection characteristics and the shape of the beverage can.

In the image processing using the technique of the present invention, a total of 35 images predicting the appearance of the beverage can are created by changing the following parameters.
θ: threshold parameter for white point image calculation <0, 10, 20, 30, 50, 80, 100 (7 types in total)>
λ: Camera focal length parameter <1, 5, 10, 20, 50 (5 types in total)>
In the method of the present invention, input values other than the input image and the above parameters are set as follows according to the observation conditions.
X_E Y_E Z_E: CIEXYZ value of the light source (light source A) in the input image scene [103.8 100 49.7]
E: Average illuminance in the scene of the input image [350 lux]
X_W Y_W Z_W: CIEXYZ value of the display device white point [93.85 100 108.50] (measured)
E_2: Average illuminance of the surrounding environment when observing the display device [350 lux]

In the image processing using the iCAM method of Non-Patent Documents 5 and 7, a total of 36 images predicting the appearance of the beverage can are created by changing the following parameters.
Low-pass filter size parameter σ: 1/4, 1/8, 1/16 of the image size (the larger of width and height) (3 types in total)
Calculation method of R_W G_W B_W: Formula (67), Formula (68) (2 types in total)
Color adaptation parameter D: 0.1, 0.5, 1.0 (3 types in total)
Coefficient by which F_L is multiplied: 1/1.0, 1/1.3 (2 types in total)
Parameter Clipsize for determining the maximum RGB signal value: 99 (1 type in total)
In the image processing using the iCAM method of Non-Patent Documents 5 and 7, input values other than the input image and the above parameters were set as follows according to the observation conditions.
X_W Y_W Z_W: CIEXYZ value of the display device white point [93.85 100 108.50] (measured)

Then, one image was selected at random from the image group (35 images) in which the appearance of the beverage can was predicted by the method of the present invention and the image group (36 images) in which it was predicted by the iCAM method, and displayed on the display device. The subject then selected, from the following five evaluation criteria, how close the appearance of the displayed image was to the appearance of the actual beverage can.
1. Similar  2. Somewhat similar  3. Cannot say either way  4. Not very similar  5. Not similar

In the comparison between the iCAM method and the method of the present invention in Experiment 2, a binocular separation (septum) method was used in which the actual beverage can was observed with one eye and the image displayed on the display device with the other eye. The number of subjects was 10. The evaluation results for each image were analyzed by the method of successive categories, and the evaluation value of each image was normalized so that the highest-rated image among all images became 1 and the lowest-rated image became 0.

FIG. 6 is a diagram showing the results of Experiment 2.
As shown in FIG. 6, the evaluation value of the highest-rated image in the image group in which the appearance of the beverage can was predicted by the method of the present invention was 1, while the evaluation value of the highest-rated image in the image group predicted by the iCAM method was 0.58. This shows that, compared with the conventional method, the technique of the present invention brings the impression of viewing a virtual scene displayed on a screen closer to the impression a human receives when viewing the actual scene.

The following are the instructions given to the subjects in Experiment 2.
(1) In this experiment, the appearance of the design and gloss of the beverage can displayed on the display and the appearance of the design and gloss of the actual beverage can will be evaluated.
(2) Select the evaluation from the following five levels.
(3) Please evaluate both the appearance of the design part and the appearance of the glossy part, and evaluate them comprehensively.
(4) The evaluation target is only the design part of the beverage can, and the background and the silver lid and bottom part are excluded from the evaluation.
(5) There are 71 images in total.

(Experiment 3)
In Experiment 3, the iCAM method and the method of the present invention are compared using a moving image in which the angle of the object changes according to the specified camera angle. The image shows a beverage can placed under light source A (350 [lux]), and the experiment compares how the images displayed as a result of image processing at the angle specified by the user appear through the visual adaptation effect. The input image is created by computer graphics technology, frame by frame at screen drawing time, using the gonio-spectral reflection characteristics and the shape of the beverage can (RGB signal values for a computer, but holding HDR information as numerical values). The actual beverage can could be moved with one hand back and forth between a state where the highlight is visible and a state where it is not, and the angle of the displayed image could likewise be adjusted freely by mouse operation between those two states.
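The per-frame flow of this interactive comparison can be sketched as follows; `render_hdr`, `predict_appearance`, and the I/O callables are hypothetical stand-ins, since the patent does not name them:

```python
def run_session(num_frames, get_mouse_angle, render_hdr, predict_appearance, display):
    """Each frame: render HDR values for the user's angle, then tone map and adapt."""
    for _ in range(num_frames):
        angle = get_mouse_angle()        # user-controlled can angle
        hdr = render_hdr(angle)          # RGB values holding HDR information
        ldr = predict_appearance(hdr)    # the present method's processing
        display(ldr)
```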

In the image processing using the method of the present invention, a total of 21 images predicting the appearance of the beverage can are created by changing the following parameters.
θ: threshold parameter for white point image calculation <0, 10, 20, 30, 50, 80, 100 (7 types in total)>
λ: Camera focal length parameter <1, 5, 50 (3 types in total)>
In the method of the present invention, input values other than the input image and the above parameters are set as follows according to the observation conditions.
X_E Y_E Z_E: CIEXYZ value of the light source (light source A) in the input image scene [103.8 100 49.7]
E: Average illuminance in the scene of the input image [350 lux]
X_W Y_W Z_W: CIEXYZ value of the display device white point [93.85 100 108.50] (measured)
E_2: Average illuminance of the surrounding environment when observing the display device [350 lux]

Further, in the image processing using the iCAM technique of Non-Patent Documents 5 and 7, a total of 18 images predicting the appearance of the beverage can are created by changing the following parameters.
Low-pass filter size parameter σ: 1/4, 1/8, 1/16 of the image size (the larger of width and height) (3 types in total)
Calculation method of R_W G_W B_W: Formula (67), Formula (68) (2 types in total)
Color adaptation parameter D: 0.1, 0.5, 1.0 (3 types in total)
Coefficient by which F_L is multiplied: 1/1.0 (1 type in total)
Parameter Clipsize for determining the maximum RGB signal value: 99 (1 type in total)
In the image processing using the iCAM method of Non-Patent Documents 5 and 7, input values other than the input image and the above parameters were set as follows according to the observation conditions.
X_W Y_W Z_W: CIEXYZ value of the display device white point [93.85 100 108.50] (measured)

Then, one image was selected at random from the image group (21 images) in which the appearance of the beverage can was predicted by the method of the present invention and the image group (18 images) in which it was predicted by the iCAM method, and displayed on the display device. The subject then selected, from the following five evaluation criteria, how close the appearance of the displayed image was to the appearance of the actual beverage can.
1. Similar  2. Somewhat similar  3. Cannot say either way  4. Not very similar  5. Not similar

In the comparison between the iCAM method and the method of the present invention in Experiment 3, a binocular separation (septum) method was used in which the actual beverage can was observed with one eye and the image displayed on the display device with the other eye. The number of subjects was 10. The evaluation results for each image were analyzed by the method of successive categories, and the evaluation value of each image was normalized so that the highest-rated image among all images became 1 and the lowest-rated image became 0.

FIG. 7 is a diagram showing the results of Experiment 3.
As shown in FIG. 7, the evaluation value of the highest-rated image in the image group in which the appearance of the beverage can was predicted by the method of the present invention was 1, while the evaluation value of the highest-rated image in the image group predicted by the iCAM method was 0.19. This shows that, compared with the conventional method, the technique of the present invention brings the impression of viewing a virtual scene displayed on a screen closer to the impression a human receives when viewing the actual scene.

The following are the instructions given to the subjects in Experiment 3.
(1) In this experiment, the appearance of the design and gloss of the beverage can displayed on the display and the appearance of the design and gloss of the actual beverage can will be evaluated.
(2) Select the evaluation from the following five levels.
(3) Please evaluate both the appearance of the design part and the appearance of the glossy part, and evaluate them comprehensively.
(4) The evaluation target is only the design part of the beverage can, and the background and the silver lid and bottom part are excluded from the evaluation.
(5) During the evaluation, perform the evaluation while operating the beverage can so that both the design part and the glossy part can be evaluated.
(6) Please evaluate within 30 seconds for each image.
(7) There are 39 images in total.

The image processing apparatus described above contains a computer system. The processes described above are stored in a computer-readable recording medium in the form of a program, and are performed by the computer reading and executing this program. Here, the computer-readable recording medium means a magnetic disk, a magneto-optical disk, a CD-ROM, a DVD-ROM, a semiconductor memory, or the like. Alternatively, the computer program may be distributed to the computer via a communication line, and the computer that has received the distribution may execute the program.

The program may realize only a part of the functions described above. Furthermore, it may be a so-called difference file (difference program) that realizes the functions described above in combination with a program already recorded in the computer system.

FIG. 1 is a block diagram illustrating the configuration of an image processing apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram showing the processing flow of JCh image calculation according to an embodiment of the present invention.
FIG. 3 is a diagram showing the outline of the viewing angle of the background region and the viewing angle of the surround region according to an embodiment of the present invention.
FIG. 4 is a diagram showing the processing flow for calculating an output image based on the JCh image according to an embodiment of the present invention.
FIG. 5 is a diagram showing the results of Experiment 1.
FIG. 6 is a diagram showing the results of Experiment 2.
FIG. 7 is a diagram showing the results of Experiment 3.

Explanation of symbols

1 ... Input processing device
2 ... Data storage device
3 ... Input processing unit
4 ... Graphic processing device
5 ... Output processing unit
6 ... Display device
41 ... Rendering processing unit
42 ... Tone mapping processing unit
43 ... Color adaptation processing unit

Claims (8)

1. An image processing device that generates an LDR (low dynamic range) image that can be displayed within a luminance range of a display device from an input HDR (high dynamic range) image, comprising:
    first white point image calculating means for calculating, for each pixel of a first tristimulus value image indicating the tristimulus values of the HDR image, the white point of a virtual light source when the tristimulus value of the pixel is less than a predetermined threshold, or a white point increased according to the tristimulus value when the tristimulus value is greater than or equal to the predetermined threshold, and for calculating a first white point image that holds the tristimulus values of those white points;
    second tristimulus value image calculating means for calculating a second tristimulus value image whose tristimulus values are normalized using each pixel of the first white point image;
    first luminance image calculating means for calculating a first luminance image by performing, for every pixel in the first tristimulus value image, an averaging process of the tristimulus value Y over the pixel group in the background region centered on that pixel;
    second luminance image calculating means for calculating a second luminance image by performing, for every pixel in the first tristimulus value image, an averaging process of the tristimulus value Y over the pixel group in the surround region, which is a region centered on that pixel and outside the background region;
    first observation condition parameter calculating means for calculating, based on the first luminance image, the second luminance image, and the average illuminance in the virtual environment, a first observation condition parameter for calculating a JCh image indicating perceptual correlate values;
    second white point image calculating means for calculating a second white point image obtained by normalizing the first white point image; and
    JCh image calculating means for calculating a JCh image indicating a perceptual correlate value for each pixel using the second tristimulus value image, the second white point image, and the first observation condition parameter.
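As a reading aid only, the first-white-point-image element of claim 1 might look like the sketch below. The exact rule for "increasing" the white point with the tristimulus value is not spelled out in the claim, so the proportional scaling used here is an assumption:

```python
import numpy as np

def first_white_point_image(Y, light_white, theta):
    """Y: (H, W) tristimulus Y of the HDR image; light_white: (X_E, Y_E, Z_E)."""
    wp = np.broadcast_to(np.asarray(light_white, dtype=float),
                         Y.shape + (3,)).copy()
    bright = Y >= theta
    # Assumed rule: scale the light-source white with Y/θ for bright pixels.
    scale = Y[bright] / max(theta, 1e-6)   # guard against the θ = 0 setting
    wp[bright] *= scale[:, None]
    return wp
```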
2. The image processing apparatus according to claim 1, further comprising:
    second observation condition parameter calculating means for calculating, based on the tristimulus value of the white point of the display device and the average illuminance of the surrounding environment when observing the display device, a second observation condition parameter for calculating RGB signal values of an image whose pixels, in the environment where the image is displayed on the display device and observed, give the same appearance as the JCh image indicating the perceptual correlate values; and
    means for calculating, using the JCh image, the tristimulus value of the white point of the display device, and the second observation condition parameter, the RGB signal values of the image that gives the same appearance as the JCh image, and outputting them to the display device.
  3.   The image processing apparatus according to claim 1, wherein the background area is an area indicating a pixel group within a predetermined range from the center pixel.
4.   The image processing apparatus according to claim 1, wherein the surround area is an area indicating a pixel group that is outside the background area and within a predetermined range wider than the background area.
5. An image processing method in an image processing apparatus that generates an LDR (low dynamic range) image that can be displayed within a luminance range of a display device from an input HDR (high dynamic range) image, wherein:
    the image processing device calculates, for each pixel of a first tristimulus value image indicating the tristimulus values of the HDR image, the white point of a virtual light source when the tristimulus value of the pixel is less than a predetermined threshold, or a white point increased according to the tristimulus value when the tristimulus value is greater than or equal to the predetermined threshold, and calculates a first white point image holding the tristimulus values of those white points;
    the image processing device calculates a second tristimulus value image whose tristimulus values are normalized using each pixel of the first white point image;
    the image processing device calculates a first luminance image by performing, for every pixel in the first tristimulus value image, an averaging process of the tristimulus value Y over the pixel group in the background region centered on that pixel;
    the image processing device calculates a second luminance image by performing, for every pixel in the first tristimulus value image, an averaging process of the tristimulus value Y over the pixel group in the surround region, which is a region centered on that pixel and outside the background region;
    the image processing device calculates, based on the first luminance image, the second luminance image, and the average illuminance in the virtual environment, a first observation condition parameter for calculating a JCh image indicating perceptual correlate values;
    the image processing device calculates a second white point image obtained by normalizing the first white point image; and
    the image processing device calculates a JCh image indicating a perceptual correlate value for each pixel using the second tristimulus value image, the second white point image, and the first observation condition parameter.
6. The image processing method according to claim 5, wherein the image processing device calculates, based on the tristimulus value of the white point of the display device and the average illuminance of the surrounding environment when observing the display device, a second observation condition parameter for calculating RGB signal values of an image whose pixels, in the environment where the image is displayed on the display device and observed, give the same appearance as the JCh image indicating the perceptual correlate values; and
    the image processing device calculates, using the JCh image, the tristimulus value of the white point of the display device, and the second observation condition parameter, the RGB signal values of the image that gives the same appearance as the JCh image, and outputs them to the display device.
7. A program to be executed by a computer of an image processing apparatus that generates an LDR (low dynamic range) image that can be displayed within a luminance range of a display device from an input HDR (high dynamic range) image, the program causing the computer to execute:
    a process of calculating, for each pixel of a first tristimulus value image indicating the tristimulus values of the HDR image, the white point of a virtual light source when the tristimulus value of the pixel is less than a predetermined threshold, or a white point increased according to the tristimulus value when the tristimulus value is greater than or equal to the predetermined threshold, and calculating a first white point image that holds the tristimulus values of those white points;
    a process of calculating a second tristimulus value image whose tristimulus values are normalized using each pixel of the first white point image;
    a process of calculating a first luminance image by performing, for every pixel in the first tristimulus value image, an averaging process of the tristimulus value Y over the pixel group in the background region centered on that pixel;
    a process of calculating a second luminance image by performing, for every pixel in the first tristimulus value image, an averaging process of the tristimulus value Y over the pixel group in the surround region, which is a region centered on that pixel and outside the background region;
    a process of calculating, based on the first luminance image, the second luminance image, and the average illuminance in the virtual environment, a first observation condition parameter for calculating a JCh image indicating perceptual correlate values;
    a process of calculating a second white point image obtained by normalizing the first white point image; and
    a process of calculating a JCh image indicating a perceptual correlate value for each pixel using the second tristimulus value image, the second white point image, and the first observation condition parameter.
8. The program according to claim 7, further causing the computer to execute:
    a process of calculating, based on the tristimulus value of the white point of the display device and the average illuminance of the surrounding environment when observing the display device, a second observation condition parameter for calculating RGB signal values of an image whose pixels, in the environment where the image is displayed on the display device and observed, give the same appearance as the JCh image indicating the perceptual correlate values; and
    a process of calculating, using the JCh image, the tristimulus value of the white point of the display device, and the second observation condition parameter, the RGB signal values of the image that gives the same appearance as the JCh image, and outputting them to the display device.
JP2005026940A 2005-02-02 2005-02-02 Image processing apparatus, image processing method, and program for the same Withdrawn JP2006215756A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005026940A JP2006215756A (en) 2005-02-02 2005-02-02 Image processing apparatus, image processing method, and program for the same


Publications (1)

Publication Number Publication Date
JP2006215756A true JP2006215756A (en) 2006-08-17

Family

ID=36978958




Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8521737B2 (en) 2004-10-01 2013-08-27 Ricoh Co., Ltd. Method and system for multi-tier image matching in a mixed media environment
US8335789B2 (en) 2004-10-01 2012-12-18 Ricoh Co., Ltd. Method and system for document fingerprint matching in a mixed media environment
US8332401B2 (en) 2004-10-01 2012-12-11 Ricoh Co., Ltd Method and system for position-based image matching in a mixed media environment
US9063953B2 (en) 2004-10-01 2015-06-23 Ricoh Co., Ltd. System and methods for creation and use of a mixed media environment
US8600989B2 (en) 2004-10-01 2013-12-03 Ricoh Co., Ltd. Method and system for image matching in a mixed media environment
US8838591B2 (en) 2005-08-23 2014-09-16 Ricoh Co., Ltd. Embedding hot spots in electronic documents
US8156427B2 (en) 2005-08-23 2012-04-10 Ricoh Co. Ltd. User interface for mixed media reality
US8195659B2 (en) 2005-08-23 2012-06-05 Ricoh Co. Ltd. Integration and use of mixed media documents
US9020966B2 (en) 2006-07-31 2015-04-28 Ricoh Co., Ltd. Client device for interacting with a mixed media reality recognition system
US9063952B2 (en) 2006-07-31 2015-06-23 Ricoh Co., Ltd. Mixed media reality recognition with image tracking
US8510283B2 (en) 2006-07-31 2013-08-13 Ricoh Co., Ltd. Automatic adaption of an image recognition system to image capture devices
US8156116B2 (en) 2006-07-31 2012-04-10 Ricoh Co., Ltd Dynamic presentation of targeted information in a mixed media reality recognition system
US8201076B2 (en) 2006-07-31 2012-06-12 Ricoh Co., Ltd. Capturing symbolic information from documents upon printing
US8489987B2 (en) 2006-07-31 2013-07-16 Ricoh Co., Ltd. Monitoring and analyzing creation and usage of visual content using image and hotspot interaction
US9176984B2 (en) 2006-07-31 2015-11-03 Ricoh Co., Ltd Mixed media reality retrieval of differentially-weighted links
US8868555B2 (en) 2006-07-31 2014-10-21 Ricoh Co., Ltd. Computation of a recognizability score (quality predictor) for image retrieval
US8676810B2 (en) 2006-07-31 2014-03-18 Ricoh Co., Ltd. Multiple index mixed media reality recognition using unequal priority indexes
US8825682B2 (en) 2006-07-31 2014-09-02 Ricoh Co., Ltd. Architecture for mixed media reality retrieval of locations and registration of images
US8369655B2 (en) 2006-07-31 2013-02-05 Ricoh Co., Ltd. Mixed media reality recognition using multiple specialized indexes
US8856108B2 (en) 2006-07-31 2014-10-07 Ricoh Co., Ltd. Combining results of image retrieval processes
US8238609B2 (en) 2007-01-18 2012-08-07 Ricoh Co., Ltd. Synthetic image and video generation from ground truth data
JP2008176791A (en) * 2007-01-18 2008-07-31 Ricoh Co Ltd Synthetic image and video generation from ground truth data
US8989431B1 (en) 2007-07-11 2015-03-24 Ricoh Co., Ltd. Ad hoc paper-based networking with mixed media reality
US8144921B2 (en) 2007-07-11 2012-03-27 Ricoh Co., Ltd. Information retrieval using invisible junctions and geometric constraints
US8184155B2 (en) 2007-07-11 2012-05-22 Ricoh Co. Ltd. Recognition and tracking using invisible junctions
US8156115B1 (en) 2007-07-11 2012-04-10 Ricoh Co. Ltd. Document-based networking with mixed media reality
US9373029B2 (en) 2007-07-11 2016-06-21 Ricoh Co., Ltd. Invisible junction feature recognition for document security or annotation
US9530050B1 (en) 2007-07-11 2016-12-27 Ricoh Co., Ltd. Document annotation sharing
US8276088B2 (en) 2007-07-11 2012-09-25 Ricoh Co., Ltd. User interface for three-dimensional navigation
US10192279B1 (en) 2007-07-11 2019-01-29 Ricoh Co., Ltd. Indexed document modification sharing with mixed media reality
US8176054B2 (en) 2007-07-12 2012-05-08 Ricoh Co. Ltd Retrieving electronic documents by converting them to synthetic text
JP2009135602A (en) * 2007-11-28 2009-06-18 Canon Inc Image processing method and apparatus thereof, program, and storage medium
US8385589B2 (en) 2008-05-15 2013-02-26 Berna Erol Web-based content detection in images, extraction and recognition
US8411944B2 (en) 2008-08-21 2013-04-02 Canon Kabushiki Kaisha Color processing apparatus and method thereof
JP2010055404A (en) * 2008-08-28 2010-03-11 Canon Inc Image processing method and image processing apparatus
JP2010062673A (en) * 2008-09-01 2010-03-18 Canon Inc Image processing apparatus, and method thereof
US8385660B2 (en) 2009-06-24 2013-02-26 Ricoh Co., Ltd. Mixed media reality indexing and retrieval for repeated content
KR101739432B1 (en) 2009-06-29 2017-05-24 톰슨 라이센싱 Zone-based tone mapping
JP2012532335A (en) 2009-06-29 2012-12-13 Thomson Licensing Zone-based tone mapping
KR101664123B1 (en) 2010-06-14 2016-10-11 삼성전자주식회사 Apparatus and method of creating high dynamic range image empty ghost image by using filtering
KR20110136152A (en) * 2010-06-14 2011-12-21 삼성전자주식회사 Apparatus and method of creating high dynamic range image empty ghost image by using filtering
WO2017051612A1 (en) * 2015-09-25 2017-03-30 ソニー株式会社 Image processing device and image processing method


Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20080513