WO2011148776A1 - Image processing apparatus and image processing method - Google Patents
- Publication number
- WO2011148776A1 (PCT/JP2011/060691)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tristimulus values
- white point
- light source
- adaptation
- pixel position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/80—Shading
Definitions
- the present invention relates to an image display technique.
- the three-dimensional computer graphics (3D-CG) technique
- the appearance of an object from various viewpoints can be simulated by rotating, enlarging, or reducing the object.
- optical information such as a reflectance, radiance, refractive index, or transmittance is set for an object such as a 3D object or light source, and physical colors to be displayed are calculated based on tristimulus values (for example, XYZ values).
- the diffuse and specular reflection components are calculated respectively.
- Japanese Patent Laid-Open No. 2008-146260 discloses a method of generating a satisfactory image based on the adaptation state of human vision.
- the present invention has been made in consideration of the aforementioned problems, and provides a technique for determining a reflection state of a light source onto an object, and generating an output image using adaptation white points calculated according to the determination result.
- an image processing apparatus comprising: a first acquisition unit configured to acquire information of an object and information of a light source, which are laid out on a virtual space;
- a second acquisition unit configured to acquire a viewing condition required to view the object
- a first decision unit configured to decide a state of reflection of the light source on the object upon viewing the object based on the viewing condition
- a second decision unit configured to decide an adaptation white point upon displaying the object by an image output unit, based on the state of reflection.
- an image processing method comprising: a first acquisition step of acquiring information of an object and information of a light source, which are laid out on a virtual space; a second acquisition step of acquiring a viewing condition required to view the object; a first decision step of deciding a state of reflection of the light source on the object upon viewing the object based on the viewing condition; and a second decision step of deciding an adaptation white point upon displaying the object by an image output unit, based on the state of reflection.
- Fig. 1 is a block diagram showing an example of the functional arrangement of an image processing apparatus 1 and its peripheral devices;
- Fig. 2 is a flowchart of processing executed by the image processing apparatus 1;
- Fig. 3 is a view showing an example of a virtual object;
- Fig. 4 is a view for explaining BRDF characteristics;
- Fig. 5 is a view showing an example of a virtual space upon generation of a rendering image;
- Figs. 6A and 6B are views showing examples of different thresholds m for different BRDF characteristics;
- Fig. 7 is a view showing an example of a device profile of an image output device 2;
- Fig. 8 is a block diagram showing an example of the functional arrangement of an image processing apparatus 81 and its peripheral devices;
- Fig. 9 is a flowchart of processing executed by the image processing apparatus 81; and
- Fig. 10 is a conceptual graph of the relationship among white points.
- This image output device 2 is a display device such as a CRT or liquid crystal panel in this embodiment.
- the image output device 2 may be other devices such as a printer as long as it can output an image.
- the memory 3 stores, for example, virtual space information required to generate an image of a virtual space (an image of a virtual object), and a device profile of the image output device 2.
- An object information acquisition unit 101 reads out information (object information) associated with the virtual object from the memory 3.
- a light source information acquisition unit 102 reads out information associated with the light source from the memory 3.
- a viewing information acquisition unit 103 reads out information (viewing information) associated with the viewpoint from the memory 3.
- a rendering image generation unit 104 generates an image of the virtual object viewed from the viewpoint as a rendering image, using the object information read out by the object information acquisition unit 101, the light source information read out by the light source information acquisition unit 102, and the viewing information read out by the viewing information acquisition unit 103.
- the rendering image generation unit 104 generates a rendering image by projecting light, which comes from the light source, is reflected by the surface of the virtual object, and is received at the position of the viewpoint, onto a projection plane set on the virtual space.
- a light source reflection region determination unit 105 determines specular reflection components of respective pixels which form the rendering image, and calculates a ratio of the light source reflection region to the entire projection plane.
- an adaptation white point calculation unit 106 calculates tristimulus values of an adaptation white point of the image output device 2 and those of an adaptation white point of the virtual object using tristimulus values of a white point of the image output device 2, those of a white point of the virtual object, which are determined according to the ratio, and those of a reference white point.
- a color conversion unit 107 settles RGB values (device values) of respective pixels which form the rendering image generated by the rendering image generation unit 104 using the adaptation white points calculated by the adaptation white point calculation unit 106.
- An image output unit 108 outputs the rendering image, whose RGB values of all the pixels are settled by the color conversion unit 107, to the image output device 2.
- Fig. 2 shows the flowchart of that processing.
- the processing according to the flowchart shown in Fig. 2 is executed for each frame.
- in step S1, the object information acquisition unit 101 acquires the aforementioned object information from the memory 3.
- this object information includes, for each polygon, color information of the polygon, position information of vertices which configure the polygon, normal vector information of the polygon, and reflection characteristic information of the polygon.
- the object information also includes position and orientation information indicating a layout position and orientation of the virtual object on the virtual space.
- Such object information is prepared in advance by, for example, CAD software, and is stored in the memory 3.
- the reflection characteristic information is measurement data of BRDF (Bidirectional Reflectance Distribution Function) characteristics of a polygon, and is measured in advance.
- the BRDF characteristic is a function unique to a reflection point, which represents how strongly light is reflected in respective directions when light coming from a light source 402 strikes a certain point on a reflection surface 401 from an arbitrary direction, as shown in Fig. 4. A minimal lookup sketch follows.
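Measured BRDF data of this kind is naturally stored as a table indexed by angles. The following Python sketch shows one plausible lookup for an isotropic BRDF sampled in 1-degree steps; the table contents and the helper name are illustrative assumptions, not structures taken from the patent.

```python
import numpy as np

# Hypothetical measured BRDF table for an isotropic surface: rows index the
# incident angle, columns the outgoing angle, both in 1-degree steps (0-90).
brdf_table = np.random.rand(91, 91)  # stand-in for real measurement data

def brdf_lookup(theta_in_deg, theta_out_deg):
    """Bilinear lookup of the measured BRDF at arbitrary angles (degrees)."""
    i = float(np.clip(theta_in_deg, 0.0, 90.0))
    o = float(np.clip(theta_out_deg, 0.0, 90.0))
    i0, o0 = int(i), int(o)
    i1, o1 = min(i0 + 1, 90), min(o0 + 1, 90)
    di, do = i - i0, o - o0
    return ((1 - di) * (1 - do) * brdf_table[i0, o0]
            + di * (1 - do) * brdf_table[i1, o0]
            + (1 - di) * do * brdf_table[i0, o1]
            + di * do * brdf_table[i1, o1])
```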
- in step S2, the light source information acquisition unit 102 acquires the aforementioned light source information from the memory 3.
- This light source information includes information indicating a layout position and orientation of the light source on the virtual space, and spectral radiance information of the light source as a function of the wavelength λ of light.
- the spectral radiance information of the light source represents pieces of radiance information at respective wavelengths λ from 380 to 780 nm.
- the light source information may include other kinds of information associated with the light source (for example, a color of light emitted by the light source) in addition to the aforementioned pieces of information.
- in step S3, the viewing information acquisition unit 103 acquires the aforementioned viewing information from the memory 3.
- this viewing information includes position and orientation information of the viewpoint on the virtual space.
- in step S4, the rendering image generation unit 104 generates a rendering image of the virtual object using the object information acquired by the object information acquisition unit 101, the light source information acquired by the light source information acquisition unit 102, and the viewing information acquired by the viewing information acquisition unit 103.
- in step S5, the light source reflection region determination unit 105 determines specular reflection components of respective pixels (respective pixel positions on the projection plane) which form the rendering image generated in step S4, and calculates the aforementioned ratio. Details of the processing in step S5 will be described later.
- in step S6, the adaptation white point calculation unit 106 calculates tristimulus values of an adaptation white point of the image output device 2 and those of an adaptation white point of the virtual object using tristimulus values of a white point of the image output device 2, those of a white point of the virtual object, which are determined according to the ratio, and those of a reference white point.
- in step S7, the color conversion unit 107 selects one pixel, which is not selected yet, from the rendering image.
- in step S8, the color conversion unit 107 settles RGB values of the pixel selected in step S7 using the adaptation white points calculated by the adaptation white point calculation unit 106. If all pixels in the rendering image have been selected in step S7, the process advances to step S10 via step S9; if pixels to be selected still remain, the process returns to step S7 via step S9.
- in step S10, the image output unit 108 outputs the rendering image whose RGB values of all the pixels are settled by the color conversion unit 107 to the image output device 2.
- a viewpoint 52, projection plane 51, virtual object 54, and light source 53 have to be laid out on the virtual space, as shown in Fig. 5.
- as a method of generating a rendering image, a case using a known ray-tracing method will be explained.
- processing for calculating a pixel value is executed for respective pixel positions on the projection plane 51, thereby deciding the pixel values at the respective pixel positions on the projection plane 51.
- let θ be an angle (incident angle) that a normal vector n to the polygon 59 and a direction vector L of a light ray coming from the light source 53 make.
- an angle (reflection angle) that a direction vector S of specular reflected light of this light ray at the polygon 59 and the normal vector n make is also θ.
- the rendering image generation unit 104 calculates the shift angle ρ (the angle that the direction vector S and the direction toward the viewpoint make) and the reflection angle θ in association with the specified polygon 59 (second calculation). As described above, the reflection characteristic information is defined for each polygon. Therefore, the reflection angle θ and the reflection characteristic information are obtained for the pixel position P. The same applies to other pixel positions; a vector sketch follows.
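In vector terms, the incident angle θ, the specular direction S, and the shift angle ρ can be computed as in the following sketch; it assumes unit-length direction vectors, and the function name is illustrative.

```python
import numpy as np

def angles_at_point(n, L, V):
    """n: unit surface normal at the reflection point.
    L: unit vector from the reflection point toward the light source 53.
    V: unit vector from the reflection point toward the viewpoint 52.
    Returns the incident angle theta and the shift angle rho, in radians."""
    n, L, V = (np.asarray(a, dtype=float) for a in (n, L, V))
    theta = np.arccos(np.clip(np.dot(n, L), -1.0, 1.0))  # incident angle
    S = 2.0 * np.dot(n, L) * n - L                       # specular direction
    rho = np.arccos(np.clip(np.dot(S, V), -1.0, 1.0))    # shift angle
    return theta, rho
```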
- x̄(λ), ȳ(λ), and z̄(λ) are color matching functions.
- k is a quantity proportional to the brightness of the illuminating light.
- the tristimulus values X, Y, and Z expressed by these equations are calculated by multiplying the luminance spectrum I(λ, θ, ρ) of the viewed reflected light by the color matching functions x̄(λ), ȳ(λ), and z̄(λ), respectively, and integrating the products within the wavelength range (380 nm to 780 nm) of visible light, as written out below.
- the stimulus value Y is normalized and can assume a value ranging from 0 to 1.
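Written out, these relations take the standard colorimetric form (a reconstruction consistent with the surrounding definitions, with the integration bounds being the visible range named above):

```latex
X = k \int_{380}^{780} I(\lambda,\theta,\rho)\,\bar{x}(\lambda)\,d\lambda,\quad
Y = k \int_{380}^{780} I(\lambda,\theta,\rho)\,\bar{y}(\lambda)\,d\lambda,\quad
Z = k \int_{380}^{780} I(\lambda,\theta,\rho)\,\bar{z}(\lambda)\,d\lambda
```

Here k is chosen so that Y falls in the range 0 to 1, as stated above.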
- the tristimulus values XYZ are calculated for the pixel position P.
- <Details of Processing in Step S5 Executed by Light Source Reflection Region Determination Unit 105>
- the reflection characteristic information and the shift angle ρ at each pixel position on the projection plane are used.
- if the shift angle ρ at a pixel position of interest is larger than a threshold m decided based on the reflection characteristic information, a specular reflection component of the pixel at the pixel position of interest can be assumed to be "0".
- Figs. 6A and 6B show examples of different thresholds m in case of different BRDF characteristics.
- Fig. 6A shows BRDF characteristics with a high image clarity.
- Fig. 6B shows BRDF characteristics with a low image clarity.
- the light source reflection region determination unit 105 determines, for respective pixel positions on the projection plane, whether or not the specular reflection components can be assumed to be "0". The unit then counts the number of pixels whose specular reflection components cannot be assumed to be "0".
- the light source reflection region determination unit 105 calculates, as the aforementioned ratio, a value obtained by dividing the counted number of pixels by the total number of pixels on the projection plane. This ratio naturally assumes a value ranging from 0 to 1 (a counting sketch follows).
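As a concrete sketch of this counting in step S5, assuming the per-pixel shift angles and BRDF-derived thresholds are already available as arrays (the names are illustrative):

```python
import numpy as np

def light_source_reflection_ratio(rho, m):
    """rho: shift angle for each pixel position on the projection plane.
    m:   threshold decided from the reflection characteristic information,
         per pixel (compare Figs. 6A and 6B).
    A pixel with rho <= m is one whose specular reflection component cannot
    be assumed to be 0, i.e. it lies in the light source reflection region."""
    specular = np.asarray(rho) <= np.asarray(m)
    return float(np.count_nonzero(specular)) / specular.size
```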
- a calculation formula that derives the tristimulus values of the partial adaptation white points of the virtual object and the image output device 2 from the tristimulus values of the white points of the virtual object and the image output device 2, and those of a reference white point, is used as a partial adaptation model.
- this partial adaptation model will be described in detail later.
- the adaptation white point calculation unit 106 gives the tristimulus values of the white point of the image output device 2, and those of the white point of the virtual object, which are decided according to the value of the ratio, to the partial adaptation model. With this computation, the tristimulus values of the partial adaptation white point of the virtual object and those of the partial adaptation white point of the image output device 2 are calculated.
- the adaptation white point calculation unit 106 decides the tristimulus values of the white point of the virtual object to be given to the partial adaptation model according to the ratio calculated by the light source reflection region determination unit 105.
- when the ratio is "0", the adaptation white point calculation unit 106 gives tristimulus values of the perfect reflecting diffuser to the partial adaptation model as those of the white point of the virtual object.
- when the ratio is "1", tristimulus values at a pixel position with a highest luminance level may be used as those of the white point of the virtual object. In this case, the adaptation white point calculation unit 106 sets a luminance value Y of a specular reflection component having a highest luminance level as the luminance value of the white point of the virtual object, and the chromaticities of the white point of the virtual object are set to be the same as those of the tristimulus values of the perfect reflecting diffuser.
- when the ratio assumes an intermediate value, the tristimulus values for the two cases above are combined according to the value of the ratio, and the adaptation white point calculation unit 106 gives this addition result (combined result) to the partial adaptation model as the tristimulus values of the white point of the virtual object (see the sketch below).
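A minimal sketch of this combination, assuming the two endpoint white points are already known as XYZ triples and that the combination is the linear mix described above:

```python
import numpy as np

def combine_white_points(xyz_ratio0, xyz_ratio1, ratio):
    """xyz_ratio0: white point used when the ratio is 0
                   (perfect-reflecting-diffuser white).
    xyz_ratio1: white point used when the ratio is 1 (specular luminance Y
                with the chromaticities of the perfect reflecting diffuser).
    ratio:      area ratio of the light source reflection region, in [0, 1].
    Returns the combined white point given to the partial adaptation model."""
    xyz_ratio0 = np.asarray(xyz_ratio0, dtype=float)
    xyz_ratio1 = np.asarray(xyz_ratio1, dtype=float)
    return (1.0 - ratio) * xyz_ratio0 + ratio * xyz_ratio1
```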
- the adaptation white point calculation unit 106 can calculate the tristimulus values of the partial adaptation white point of the virtual object and those of the partial adaptation white point of the image output device 2 in this manner.
- the color conversion unit 107 reads out the device profile of the image output device 2 from the memory 3.
- Fig. 7 shows an example of the device profile of the image output device 2. As shown in Fig. 7, the device profile describes correspondences between RGB device values of respective grid points and tristimulus values XYZ measured for these grid points.
- the color conversion unit 107 converts the tristimulus values XYZ of the respective grid points to JCh values. Then, the color conversion unit 107 specifies outermost grid points with reference to the JCh values of the grid points, thus calculating a color reproduction range of the image output device 2.
- the color conversion unit 107 converts the tristimulus values XYZ calculated for respective pixel positions on the projection plane into JCh values by the CIECAM02 color adaptation conversion using the "tristimulus values of the partial adaptation white point of the virtual object" calculated in step S6.
- note that other chromatic adaptation conversion methods such as a Von Kries chromatic adaptation formula may be used; a sketch follows.
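For reference, a minimal Von Kries-type adaptation sketch in Python; it uses the CAT02 matrix (the sharpened LMS-like space used inside CIECAM02), which is an implementation choice here, not a matrix stated by the patent:

```python
import numpy as np

# CAT02 matrix: XYZ -> sharpened LMS-like space used by CIECAM02.
M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def von_kries_adapt(xyz, xyz_src_white, xyz_dst_white):
    """Von Kries chromatic adaptation: scale each cone-like channel by the
    ratio of the destination white to the source white, then return to XYZ."""
    lms = M_CAT02 @ np.asarray(xyz, dtype=float)
    lms_src = M_CAT02 @ np.asarray(xyz_src_white, dtype=float)
    lms_dst = M_CAT02 @ np.asarray(xyz_dst_white, dtype=float)
    lms_adapted = lms * (lms_dst / lms_src)  # diagonal (Von Kries) scaling
    return np.linalg.solve(M_CAT02, lms_adapted)
```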
- the color conversion unit 107 executes color compression (colorimetric gamut compression) of the JCh values calculated for respective pixel positions on the projection plane.
- the colorimetric gamut compression is a method of keeping colors within the color reproduction range of the image output device 2.
- the color conversion unit 107 applies the CIECAM02 inverse color adaptation conversion to the color-compressed JCh values using the "tristimulus values of the partial adaptation white point of the image output device 2" calculated in step S6.
- the JCh values can be converted into tristimulus values XYZ for respective pixel positions on the projection plane.
- the color conversion unit 107 converts these tristimulus values XYZ into RGB values using the device profile.
- the RGB values are calculated from the tristimulus values XYZ of grid points around the tristimulus values XYZ of interest using interpolation processing (one possible realization is sketched below).
- since RGB values for respective pixel positions on the projection plane can be settled in this way, a rendering image configured by pixels having these RGB values can be formed on the projection plane.
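One plausible realization of this inverse lookup is scattered linear interpolation over the measured profile points; the following sketch uses scipy's griddata, which is an assumption about tooling, not the patent's stated method:

```python
import numpy as np
from scipy.interpolate import griddata

def xyz_to_rgb_via_profile(xyz_query, profile_xyz, profile_rgb):
    """profile_xyz: (N, 3) tristimulus values XYZ of the profile grid points.
    profile_rgb: (N, 3) RGB device values of the same grid points.
    xyz_query:   (M, 3) tristimulus values to convert.
    RGB is interpolated, channel by channel, from the grid points
    surrounding each query point."""
    xyz_query = np.atleast_2d(np.asarray(xyz_query, dtype=float))
    rgb = np.empty((xyz_query.shape[0], 3))
    for c in range(3):  # interpolate R, G, B separately
        lin = griddata(profile_xyz, profile_rgb[:, c], xyz_query, method='linear')
        # Outside the convex hull of the measured points, fall back to nearest.
        near = griddata(profile_xyz, profile_rgb[:, c], xyz_query, method='nearest')
        rgb[:, c] = np.where(np.isnan(lin), near, lin)
    return np.clip(rgb, 0.0, 1.0)
```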
- Fig. 10 is a conceptual graph of the relationship among white points.
- a human visual system cannot completely discount the color of a light source, either at the time of viewing a monitor or at the time of viewing the virtual object. Therefore, incomplete adaptation has to be corrected.
- white which is a point on a blackbody radiation locus having a color temperature of 8500 K is used as the reference white point.
- the reference white is not limited to this white point, and another white point, for example, a white point on a daylight locus, which has a color temperature higher than that of equi-energy white, may be set.
- X_Wm, Y_Wm, and Z_Wm are tristimulus values of the monitor white point.
- X_Wp, Y_Wp, and Z_Wp are tristimulus values of the virtual object white point.
- a color temperature T_Wm of the monitor white point corresponding to the chromaticities of the monitor white point, and a color temperature T_Wp of the virtual object white point corresponding to the chromaticities of the virtual object white point, are acquired from, for example, a chromaticity - color temperature table stored in the memory 3.
- the reason why the reciprocal of the color temperature is used is that a color temperature difference does not correspond to a color difference that one can perceive, whereas the reciprocal of the color temperature corresponds nearly uniformly to human perception.
- K_i is a partial adaptation coefficient (i = m for the monitor, i = p for the virtual object).
- K_i' = 1 - K_i.
- L*_i is a weighting coefficient based on the luminance level of the white point.
- in this embodiment, the partial adaptation coefficient uses a value set by the user, but it can be automatically calculated using, for example, a function. Also, based on a concept that one adapts more to a white point having a higher luminance level, the weighting coefficient L*_i can be calculated from the luminance level of each white point.
- chromaticities u" Wm and v" Wm corresponding to the color temperature of the monitor white point for partial adaptation correction chromaticities u" Wp and v" Wp corresponding to the color temperature of the virtual object white point for partial adaptation correction are inversely calculated.
- tristimulus values X" Wm , Y"w m , and Z" Wm of the monitor white point for partial adaptation correction are calculated by:
- the adaptation white points are calculated according to the ratio of (the number of pixels whose specular reflection components cannot be assumed to be zero) to (the total number of pixels on the projection plane), that is, the area of the light source reflection region.
- in the second embodiment, the adaptation white points are calculated according to distances from a reflection position of a central point of the light source for respective pixel positions on the projection plane.
- an example of the functional arrangement of an image processing apparatus 81 according to this embodiment and its peripheral devices will be described with reference to Fig. 8.
- the same reference numerals in Fig. 8 denote the same parts as in Fig. 1, and a description thereof will not be repeated.
- a light source central reflection position determination unit 815 determines distances from a reflection position of a central point of the light source for respective pixel positions within a region required to form a rendering image on the projection plane (rendering image region).
- An adaptation white point calculation unit 816 calculates adaptation white points for respective pixels within the rendering image region according to the determination result of the light source central reflection position determination unit 815.
- Fig. 9 shows the flowchart of that processing.
- the processing according to the flowchart shown in Fig. 9 is executed for each individual frame.
- since steps S901 to S904 are the same as steps S1 to S4 described above, a description thereof will not be repeated. Note, however, that processing for calculating information which is not required in the following processing may be skipped.
- in step S905, the light source central reflection position determination unit 815 selects one pixel, which is not selected yet, from a pixel group in the rendering image region.
- in step S906, the light source central reflection position determination unit 815 calculates a reflection position of the central point of the light source on the projection plane, and calculates a distance from the calculated position to the position (selected pixel position) of the pixel (selected pixel) selected in step S905. Details of the processing in this step will be described later.
- in step S907, the adaptation white point calculation unit 816 executes the following processing.
- the adaptation white point calculation unit 816 calculates, based on the distance calculated in step S906 for the selected pixel, tristimulus values of an adaptation white point of the image output device 2 and those of an adaptation white point of the selected pixel, using tristimulus values of a white point of the image output device 2, those of a white point of a virtual object, and those of a reference white point.
- in step S908, a color conversion unit 107 settles RGB values of the selected pixel using the adaptation white points calculated by the adaptation white point calculation unit 816.
- the processing in step S908 is the same as that in step S8 described above, except that color conversion processing is executed for each pixel.
- if all pixels in the rendering image region have been selected in step S905, the process advances to step S910 via step S909; if pixels to be selected still remain, the process returns to step S905 via step S909.
- in step S910, an image output unit 108 outputs a rendering image whose RGB values of all pixels are settled by the color conversion unit 107 to the image output device 2.
- <Details of Processing in Step S906 Executed by Light Source Central Reflection Position Determination Unit 815>
- the light source central reflection position determination unit 815 specifies, from respective pixel positions on the projection plane, a pixel position C where the direction vector S of the specular reflected light of a light ray coming from the central point of the light source and the line of sight V from a viewpoint 52 are parallel to each other (the shift angle ρ is closest to "0"). That is, the unit 815 specifies this pixel position C as a "light source central reflection position". In this specifying processing of the pixel position C, the size of the projection plane is set to be sufficiently larger than that of the rendering image.
- the light source central reflection position determination unit 815 calculates a distance between the pixel position C and the position of the selected pixel. The light source central reflection position determination unit 815 then calculates the value of a fraction having this calculated distance as a numerator and a diagonal distance of the rendering image region as a denominator. Note that the denominator is not limited to this distance.
- when the calculated value of the fraction is larger than "1", the light source central reflection position determination unit 815 outputs "0" as a determination result. When the calculated value of the fraction is "1", the unit 815 also outputs "0" as a determination result.
- when the calculated value of the fraction is "0", the unit 815 outputs "1" as a determination result.
- letting r (0 < r < 1) be the calculated value of the fraction otherwise, the unit 815 outputs (1 - r) as a determination result; this is transcribed below.
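Putting the rules of step S906 together (a direct transcription of the cases above, with r the calculated value of the fraction):

```python
def determination_result(distance, diagonal):
    """Distance-based determination result of unit 815.
    Returns 1 at the light source central reflection position, decreases
    linearly with distance, and is 0 at or beyond the diagonal distance."""
    r = distance / diagonal
    if r >= 1.0:
        return 0.0
    return 1.0 - r  # equals 1 when r == 0, and (1 - r) in between
```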
- <Details of Processing in Step S907 Executed by Adaptation White Point Calculation Unit 816>
- in the first embodiment, the partial adaptation white points are fixed over the entire rendering image.
- in this embodiment, since the determination results differ for respective pixels, the partial adaptation white points are also different for respective pixels.
- when the determination result is "1", a luminance value Y of the specular reflected light S at the pixel position C is set as a luminance value of the white point of the virtual object. Also, the chromaticities of the white point of the virtual object are set to be the same as those of the tristimulus values of the perfect reflecting diffuser.
- the determination result of the light source central reflection position determination unit 815 assumes a value ranging from "0" to "1".
- the tristimulus values when the determination result is "0" and those when the determination result is "1" are combined according to the value of the determination result, as in the first embodiment.
- letting f (0 < f < 1) be the determination result, the two sets of tristimulus values are combined at a ratio of (1 - f) to f.
- the adaptation white point calculation unit 816 gives this addition result (combined result) to the partial adaptation model as the tristimulus values of the white point of the selected pixel.
- in the description of the first embodiment, all the units which configure the image processing apparatus 1 shown in Fig. 1 are implemented by hardware. Also, in the description of the second embodiment, all the units which configure the image processing apparatus 81 shown in Fig. 8 are implemented by hardware. However, these units may also be implemented by software.
- in this case, the software is executed by a computer which includes an execution unit such as a CPU, and memories such as a RAM, a ROM, and a hard disk.
- the software is stored in such a memory, and is executed by the execution unit.
- the arrangement of the computer which executes this software is not particularly limited.
- aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s).
- for this purpose, the program is provided to the computer, for example, via a network or from a recording medium of various types serving as the memory device (for example, a computer-readable medium).
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/582,099 US20120327086A1 (en) | 2010-05-24 | 2011-04-27 | Image processing apparatus and image processing method |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2010118769A JP5615041B2 (ja) | 2010-05-24 | 2010-05-24 | Image processing apparatus and image processing method |
| JP2010-118769 | 2010-05-24 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2011148776A1 true WO2011148776A1 (en) | 2011-12-01 |
Family
ID=45003768
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2011/060691 Ceased WO2011148776A1 (en) | 2010-05-24 | 2011-04-27 | Image processing apparatus and image processing method |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US20120327086A1 (en) |
| JP (1) | JP5615041B2 (ja) |
| WO (1) | WO2011148776A1 (en) |
Families Citing this family (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP6506507B2 (ja) * | 2013-05-15 | 2019-04-24 | Canon Inc. | Measuring device and control method therefor |
| JP6100086B2 (ja) * | 2013-05-15 | 2017-03-22 | Canon Inc. | Image processing apparatus and image processing method |
| US9489918B2 (en) * | 2013-06-19 | 2016-11-08 | Lenovo (Beijing) Limited | Information processing methods and electronic devices for adjusting display based on ambient light |
| WO2015008373A1 (ja) * | 2013-07-19 | 2015-01-22 | Fujitsu Limited | Information processing device, method for calculating inspection range, and program |
| US9266287B2 (en) | 2013-09-18 | 2016-02-23 | Disney Enterprises, Inc. | 3D printing with custom surface reflectance |
| JP6604744B2 (ja) * | 2015-05-03 | 2019-11-13 | Canon Inc. | Image processing apparatus, image processing method, image forming system, and program |
| US10850495B2 (en) * | 2016-01-29 | 2020-12-01 | Massachusetts Institute Of Technology | Topology optimization with microstructures |
| GB2605154B (en) | 2021-03-24 | 2023-05-24 | Sony Interactive Entertainment Inc | Image rendering method and apparatus |
| GB2605152B (en) | 2021-03-24 | 2023-11-08 | Sony Interactive Entertainment Inc | Image rendering method and apparatus |
| GB2605157B (en) | 2021-03-24 | 2023-08-23 | Sony Interactive Entertainment Inc | Image rendering method and apparatus |
| GB2605158B (en) | 2021-03-24 | 2023-05-17 | Sony Interactive Entertainment Inc | Image rendering method and apparatus |
| GB2605155B (en) | 2021-03-24 | 2023-05-17 | Sony Interactive Entertainment Inc | Image rendering method and apparatus |
| GB2605171B (en) * | 2021-03-24 | 2023-05-24 | Sony Interactive Entertainment Inc | Image rendering method and apparatus |
| GB2605156B (en) | 2021-03-24 | 2023-11-08 | Sony Interactive Entertainment Inc | Image rendering method and apparatus |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6226006B1 (en) * | 1997-06-27 | 2001-05-01 | C-Light Partners, Inc. | Method and apparatus for providing shading in a graphic display system |
| AU2006200969A1 (en) * | 2006-03-07 | 2007-09-27 | Canon Information Systems Research Australia Pty Ltd | Print representation |
| WO2008062874A1 (fr) * | 2006-11-22 | 2008-05-29 | Nikon Corporation | Image processing method, image processing program, image processing device, and camera |
2010
- 2010-05-24 JP JP2010118769A patent/JP5615041B2/ja not_active Expired - Fee Related
2011
- 2011-04-27 WO PCT/JP2011/060691 patent/WO2011148776A1/en not_active Ceased
- 2011-04-27 US US13/582,099 patent/US20120327086A1/en not_active Abandoned
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH10143675A (ja) * | 1996-11-15 | 1998-05-29 | Fuji Photo Film Co Ltd | Computer graphics reproduction method |
| JP2000113215A (ja) * | 1998-10-08 | 2000-04-21 | Dainippon Screen Mfg Co Ltd | Image processing apparatus and recording medium storing a program for executing the processing |
| JP2008146260A (ja) * | 2006-12-07 | 2008-06-26 | Canon Inc | Image generation device and image generation method |
Also Published As
| Publication number | Publication date |
|---|---|
| US20120327086A1 (en) | 2012-12-27 |
| JP2011248476A (ja) | 2011-12-08 |
| JP5615041B2 (ja) | 2014-10-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2011148776A1 (en) | Image processing apparatus and image processing method | |
| US11361474B2 (en) | Method and system for subgrid calibration of a display device | |
| JP4231661B2 (ja) | Color reproduction device | |
| JP6004481B2 (ja) | Color image processing method, color image processing apparatus, and color image processing program | |
| EP0423653B1 (en) | Method and apparatus for compensating for color in color images | |
| JP2006215756A (ja) | Image processing apparatus, image processing method, and program therefor | |
| US20150294475A1 (en) | Image processing apparatus, image processing method and program | |
| US20110007333A1 (en) | Color processing apparatus, and method therefor | |
| JP5672848B2 (ja) | Display image adjustment method | |
| CN102301391A (zh) | Color image processing method, color image processing device, and color image processing program | |
| JP2003348501A (ja) | Image display device | |
| US20080193011A1 (en) | Pixel Processor | |
| US8331665B2 (en) | Method of electronic color image saturation processing | |
| JP5517594B2 (ja) | Image display device and image display method | |
| JPH0540833A (ja) | Color image control method | |
| JP7191208B2 (ja) | Method for measuring a color space specific to an individual and method for correcting digital images according to the color space specific to the individual | |
| JP2023102943A (ja) | Image composition display device, method, and program | |
| JP2008511850A (ja) | Method and system for displaying digital images in faithful colors | |
| Zhang | Lightness, Brightness, and Transparency in Optical See-Through Augmented Reality | |
| Navvab et al. | Evaluation of historical museum interior lighting system using fully immersive virtual luminous environment | |
| JP6335483B2 (ja) | Correction method, correction device, and program | |
| Penczek et al. | Evaluating the optical characteristics of stereoscopic immersive display systems | |
| JP2007165995A (ja) | Image generation device, image generation method, and image generation program | |
| JP4225414B2 (ja) | Image color correction device, method, program, and recording medium | |
| Bärz et al. | Validating photometric and colorimetric consistency of physically-based image synthesis | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11786476 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 13582099 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 11786476 Country of ref document: EP Kind code of ref document: A1 |