EP3588930A1 - Image processing apparatus, image processing method, and program - Google Patents
Image processing apparatus, image processing method, and program
- Publication number
- EP3588930A1 (application number EP19182062.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- luminance
- correction
- image
- conversion
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N1/60: Colour correction or control
- H04N1/6058: Reduction of colour to a range of reproducible colours, e.g. to an ink-reproducible colour gamut
- H04N1/6063: Reduction of colour to a range of reproducible colours, dependent on the contents of the image to be reproduced
- H04N1/6069: Reduction of colour to a range of reproducible colours, spatially varying within the image
- H04N1/6005: Corrections within particular colour systems with luminance or chrominance signals, e.g. LC1C2, HSL or YUV
- H04N1/6008: Corrections within particular colour systems with primary colour signals, e.g. RGB or CMY(K)
- H04N1/6027: Correction or control of colour gradation or colour contrast
- H04N1/00127: Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals
- H04N1/00132: Connection or combination of a still picture apparatus with another apparatus in a digital photofinishing system, i.e. a system where digital photographic images undergo typical photofinishing processing, e.g. printing ordering
- H04N1/00167: Processing or editing
- H04N1/401: Compensating positionally unequal response of the pick-up or reproducing head
- G06T5/90: Dynamic range modification of images or parts thereof
- G06T5/92: Dynamic range modification of images or parts thereof based on global image properties
Description
- The present invention relates to an image processing apparatus, an image processing method, and a program.
- In recent years, HDR (High Dynamic Range) contents having a high luminance and a wide color gamut reproduction range have become popular. An HDR content is expressed using a peak luminance of 1,000 nit (1,000 cd/m²) or more within the color gamut range of BT.2020 (Rec. 2020). When printing is performed by a printing apparatus using HDR image data, the dynamic range (to be referred to as a "D range" hereinafter) of the luminance needs to be compressed, by D range compression using a tone curve or the like, to a dynamic range that the printing apparatus can reproduce. For example, as shown in Fig. 1, the contrast of an area with a high luminance is reduced, thereby performing D range compression (see, for example, Japanese Patent Laid-Open No. 2011-86976).
- The image data that has undergone the D range compression to the luminance range of the printing apparatus then needs to be subjected to gamut mapping to the color gamut of the printing apparatus. Fig. 2A shows the color gamut of BT.2020 within a luminance range of 1,000 nit. Fig. 2B shows the color gamut of the printing apparatus. In Figs. 2A and 2B, the abscissa represents y of the xy chromaticity, and the ordinate represents the luminance. When the color gamut of BT.2020 is compared with that of the printing apparatus, the color gamut shapes are not similar because the color materials in use are different. Hence, when printing an HDR content by the printing apparatus, the degree of luminance compression needs to be changed in accordance with the chromaticity, instead of compressing the D range evenly.
- At this time, in a case in which the shape of the color gamut of the input image data and the shape of the color gamut of the printing apparatus are largely different, the lowering of the contrast caused by the difference in the color reproduction ranges is not sufficiently compensated even when contrast correction is performed using the method of Japanese Patent Laid-Open No. 2011-86976.
- The present invention in its first aspect provides an image processing apparatus as specified in claims 1 to 13.
- The present invention in its second aspect provides an image processing method as specified in claim 14.
- The present invention in its third aspect provides a program as specified in claim 15.
- According to the present invention, it is possible to provide contrast correction that takes into account the lowering of contrast caused by the difference in the color reproduction range between an input and an output.
- Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
- Fig. 1 is a view for explaining D range conversion;
- Figs. 2A, 2B, 2C, and 2D are views for explaining the difference in color gamut between BT.2020 and a printing apparatus;
- Fig. 3 is a block diagram showing an example of the hardware arrangement of a system according to the present invention;
- Fig. 4 is a block diagram showing an example of a software configuration concerning contrast correction according to the present invention;
- Fig. 5 is a view for explaining gamut mapping according to the present invention;
- Fig. 6 is a view for explaining a Gaussian filter;
- Fig. 7 is a view for explaining a visual transfer function according to the present invention;
- Fig. 8 is a flowchart showing the processing of an output image characteristic obtaining module according to the present invention;
- Fig. 9 is a flowchart showing the processing of a contrast correction module according to the present invention;
- Fig. 10 is a flowchart showing a contrast correction method according to the first embodiment;
- Fig. 11 is a flowchart showing a contrast correction method according to the second embodiment;
- Fig. 12 is a flowchart showing a contrast correction method according to the third embodiment;
- Fig. 13 is a flowchart showing a contrast correction method according to the fourth embodiment;
- Fig. 14 is a view for explaining a correction intensity generation method according to the fifth embodiment;
- Fig. 15 is a schematic view of an example of a UI configuration screen according to the sixth embodiment;
- Fig. 16 is a block diagram showing an example of a software configuration concerning contrast correction according to the sixth embodiment;
- Fig. 17 is a view showing an example of a luminance-high sensitivity frequency conversion table according to the sixth embodiment;
- Fig. 18 is a view showing a table of high sensitivity frequencies on a luminance basis according to the sixth embodiment;
- Fig. 19 is a flowchart showing the procedure of processing according to the eighth embodiment;
- Fig. 20 is an explanatory view of correction determination in the processing according to the eighth embodiment;
- Fig. 21 is a flowchart showing the procedure of processing according to the ninth embodiment;
- Fig. 22 is a view illustrating an approach to modeling contrast sensitivity used in the ninth embodiment; and
- Fig. 23 is a flowchart showing the procedure of processing according to the tenth embodiment.
- Fig. 3 is a block diagram showing an example of the arrangement of a system to which the present invention can be applied. The system includes an image processing apparatus 300 and a printing apparatus 310. The image processing apparatus 300 is formed by, for example, a host PC functioning as an information processing apparatus. The image processing apparatus 300 includes a CPU 301, a RAM 302, an HDD 303, a display I/F 304, an operation unit I/F 305, and a data transfer I/F 306, and these components are communicably connected via an internal bus.
- The CPU 301 executes various kinds of processing using the RAM 302 as a work area in accordance with a program held by the HDD 303. The RAM 302 is a volatile storage area and is used as a work memory or the like. The HDD 303 is a nonvolatile storage area and holds a program according to this embodiment, an OS (Operating System), and the like. The display I/F 304 is an interface configured to perform data transmission/reception between a display 307 and the main body of the image processing apparatus 300. The operation unit I/F 305 is an interface configured to input an instruction, entered using an operation unit 308 such as a keyboard or a mouse, to the main body of the image processing apparatus 300. The data transfer I/F 306 is an interface configured to transmit/receive data to/from an external apparatus.
- For example, the CPU 301 generates image data printable by the printing apparatus 310 in accordance with an instruction (a command or the like) input by a user using the operation unit 308 or a program held by the HDD 303, and transfers the image data to the printing apparatus 310. In addition, the CPU 301 performs predetermined processing for image data received from the printing apparatus 310 via the data transfer I/F 306 in accordance with a program stored in the HDD 303, and displays the result or various kinds of information on the display 307.
- The printing apparatus 310 includes an image processing accelerator 311, a data transfer I/F 312, a CPU 313, a RAM 314, a ROM 315, and a printing unit 316, and these components are communicably connected via an internal bus. Note that the printing method of the printing apparatus 310 is not particularly limited. For example, an inkjet printing apparatus may be used, or an electrophotographic printing apparatus may be used. The following description will be made using an inkjet printing apparatus as an example.
- The CPU 313 executes various kinds of processing using the RAM 314 as a work area in accordance with a program held by the ROM 315. The RAM 314 is a volatile storage area and is used as a work memory or the like. The ROM 315 is a nonvolatile storage area and holds a program according to this embodiment, an OS (Operating System), and the like. The data transfer I/F 312 is an interface configured to transmit/receive data to/from an external apparatus. The image processing accelerator 311 is hardware capable of executing image processing at a speed higher than that of the CPU 313. The image processing accelerator 311 is activated when the CPU 313 writes the parameters and data necessary for image processing to a predetermined address of the RAM 314. After the parameters and the data are loaded, predetermined image processing is executed for the data. However, the image processing accelerator 311 is not an indispensable element, and the equivalent processing can also be executed by the CPU 313. The printing unit 316 executes a printing operation based on an instruction from the image processing apparatus 300.
- The connection method of the data transfer I/F 306 of the image processing apparatus 300 and the data transfer I/F 312 of the printing apparatus 310 is not particularly limited. For example, USB (Universal Serial Bus), IEEE 1394, or the like can be used. In addition, the connection may be wired or wireless.
printing apparatus 310. As described above, in this embodiment, the color reproduction range of an input image (for example, HDR image data) and that of theprinting apparatus 310 for performing printing are different, and the range of reproducible colors is wider in the input image. -
- Fig. 4 is a block diagram showing an example of a software configuration for performing image processing concerning contrast correction when printing HDR image data by the printing apparatus 310. In this embodiment, each module shown in Fig. 4 is implemented when the CPU 301 reads out a program stored in the HDD 303 and executes it. The image processing apparatus 300 includes an image input module 401, a D range conversion module 402, a gamut mapping module 403, an image output module 404, an input image characteristic obtaining module 405, an output image characteristic obtaining module 406, and a contrast correction module 407. Note that the modules shown here are those concerning contrast correction, and the image processing apparatus 300 may further include modules configured to perform other image processing.
- The image input module 401 obtains HDR image data. As for the obtaining method, image data held by the HDD 303 may be obtained, or image data may be obtained from an external apparatus via the data transfer I/F 306. In this embodiment, as the HDR image data, RGB data whose D range has a peak luminance of 1,000 nit and whose color space is BT.2020 will be described as an example.
- The D range conversion module 402 performs D range compression of the luminance of the image data input to it, to a predetermined luminance range, using a means such as a one-dimensional lookup table (to be referred to as a 1DLUT hereinafter). In this embodiment, the D range compression is performed using the graph shown in Fig. 1. In Fig. 1, the abscissa represents the luminance of an input to be subjected to D range compression, and the ordinate represents the luminance after compression. Based on the compression characteristic shown in Fig. 1, the HDR image data having a luminance range of 1,000 nit is compressed to a luminance range of 100 nit that the printing apparatus 310 can handle.
- For the image data input to the gamut mapping module 403, the gamut mapping module 403 performs gamut mapping to the color gamut of the printing apparatus 310 using a method such as a three-dimensional LUT (to be referred to as a 3DLUT hereinafter). Fig. 5 is a view for explaining gamut mapping according to this embodiment. In Fig. 5, the abscissa represents Cr of the YCbCr color space, and the ordinate represents a luminance Y. An input color gamut 501 of the image data input to the gamut mapping module 403 undergoes gamut mapping to an output color gamut 502 that is the color gamut of the printing apparatus 310. When the input colors are (Y, Cb, Cr), they are converted into (Y', Cb', Cr'). If the input color has a color space different from YCbCr, the color space is converted into the YCbCr color space, and gamut mapping is then performed. In the example shown in Fig. 5, the input color gamut 501 and the output color gamut 502 do not have similar shapes.
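The D range compression performed by the D range conversion module above is, mechanically, a 1DLUT lookup with interpolation between table entries. A minimal Python sketch follows; the actual tone curve of Fig. 1 is not reproduced in this text, so the knee position and the logarithmic shoulder below are placeholder assumptions, not the patent's curve.

```python
import numpy as np

def build_drange_lut(peak_in=1000.0, peak_out=100.0, knee=60.0, size=1024):
    """Build a 1DLUT compressing [0, peak_in] nit to [0, peak_out] nit.

    Luminances up to `knee` are kept linear; highlights above it are
    rolled off with a simple log-style shoulder (a placeholder for the
    compression characteristic of Fig. 1, which is not given here).
    """
    x = np.linspace(0.0, peak_in, size)
    lut = np.where(
        x <= knee,
        x,  # linear shadows and midtones
        knee + (peak_out - knee)
        * np.log1p((x - knee) / (peak_in - knee) * 9.0) / np.log(10.0),
    )
    return x, lut

def apply_lut(luminance, x, lut):
    """Apply the 1DLUT with linear interpolation between table entries."""
    return np.interp(luminance, x, lut)

x, lut = build_drange_lut()
compressed = apply_lut(np.array([0.0, 50.0, 1000.0]), x, lut)
```

The shoulder is constructed so that the peak input (1,000 nit) lands exactly on the peak output (100 nit), matching the compression described in the text.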
- Primary colors of the input color gamut 501 are mapped to the corresponding primary colors of the output color gamut 502, respectively.
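Gamut mapping by a 3DLUT likewise reduces to a table lookup, here with trilinear interpolation over an N × N × N grid of entries. This sketch assumes (Y, Cb, Cr) channels normalized to [0, 1]; the identity table used in the example is only a stand-in, since a real table would encode the mapping into the output color gamut 502.

```python
import numpy as np

def apply_3dlut(color, lut):
    """Map one normalized (Y, Cb, Cr) color through a 3DLUT of shape
    (N, N, N, 3) using trilinear interpolation over the lattice."""
    n = lut.shape[0]
    pos = np.clip(np.asarray(color, dtype=float), 0.0, 1.0) * (n - 1)
    i0 = np.floor(pos).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    f = pos - i0
    out = np.zeros(3)
    # Accumulate the 8 surrounding lattice points, weighted by distance.
    for corner in range(8):
        idx = [(i1 if (corner >> k) & 1 else i0)[k] for k in range(3)]
        w = np.prod([f[k] if (corner >> k) & 1 else 1.0 - f[k] for k in range(3)])
        out += w * lut[idx[0], idx[1], idx[2]]
    return out

# Identity 17-point LUT: output equals input (a stand-in table).
n = 17
grid = np.linspace(0.0, 1.0, n)
lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
mapped = apply_3dlut([0.5, 0.25, 0.75], lut)
```

A 17-point grid is a common table size; the interpolation is what lets a coarse table approximate a smooth, spatially non-uniform compression of out-of-gamut colors.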
area 507 outside the color gamut, which is represented by hatching inFig. 5 , is a color gamut that cannot be expressed by theprinting apparatus 310. Thearea 507 outside the color gamut is an area that is included in theinput color gamut 501 but not in theoutput color gamut 502. On the other hand, anarea 508 in the color gamut is an area included in both theinput color gamut 501 and theoutput color gamut 502. Thearea 507 outside the color gamut is compressed more largely than thearea 508 in the color gamut and mapped in theoutput color gamut 502. For example, in the input colors, acontrast 509 of two colors is mapped to acontrast 511, and acontrast 510 is mapped to the same contrast as the input even after the mapping. That is, in thecontrast 510, the change before and after the mapping is smaller than in thecontrast 511. In other words, the conversion characteristic is different between conversion in thearea 508 in the color gamut and conversion from thearea 507 outside the color gamut to thearea 508 in the color gamut. Since the colors outside the output color gamut are compressed more largely than the colors in the output color gamut and mapped, the contrast becomes lower in the colors outside the output color gamut. - The input image characteristic obtaining
- The input image characteristic obtaining module 405 generates (extracts) the value of the high frequency of the image data input to the image input module 401. First, the input image characteristic obtaining module 405 calculates the luminance of the input image data. If the input image data is RGB data (R: Red, G: Green, B: Blue), it can be converted into YCbCr by a method represented by equations (1) to (3). Note that the RGB-YCbCr conversion formulas are merely examples, and other conversion formulas may be used. In the following formulas, "·" represents multiplication.
- Furthermore, the input image characteristic obtaining module 405 generates the value of a high frequency from the calculated luminance (Y value). To generate the value of the high frequency, the value of a low frequency is calculated first. The value of the low frequency is generated by performing filtering processing on the luminance. The filtering processing will be described with reference to Fig. 6, using as an example a Gaussian filter that performs smoothing processing. In Fig. 6, the filter size is 5 × 5, and a coefficient 601 is set for each of the 25 pixels. Let x be the horizontal direction of the image and y be the vertical direction. The pixel value at coordinates (x, y) is p(x, y), and the filter coefficient is f(x, y). The filtering of equation (4) is performed to obtain each pixel p'(x, y) of interest; the calculation of equation (4) is performed every time the filter scans the image data with the pixel 602 of interest as the center. When scanning for all pixels is completed, the value of the low frequency is obtained.
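Equations (1) to (3) themselves are not reproduced in this text. A standard full-range BT.601 conversion is one example of the kind of formula described; the coefficients below are the common BT.601 ones, not necessarily those of the patent.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB-to-YCbCr conversion (one example formula;
    the patent notes that other conversion formulas may be used)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

With these coefficients, neutral gray (r = g = b) yields Cb = Cr = 0 and Y equal to the common value, which is the property the luminance extraction relies on.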
-
- Fig. 7 is a view showing a visual transfer function VTF for a spatial frequency. The visual transfer function VTF shown in Fig. 7 indicates that the visual sensitivity, represented by the ordinate, changes as the spatial frequency, represented by the abscissa, changes; the higher the visual sensitivity is, the higher the transfer characteristic is. As can be seen from the visual transfer function VTF, a high transfer characteristic of about 0.8 or more is obtained at a spatial frequency of 0.5 cycle/mm or more. Note that in the example shown in Fig. 7, when the spatial frequency is 2 cycles/mm or more, the visual sensitivity is lower than 0.8. The frequency that is the target of contrast correction is preferably a frequency with a high visual sensitivity. That is, the high frequency indicates 0.5 cycle/mm or more, a range that includes the peak sensitivity, and the low frequency indicates less than 0.5 cycle/mm. In this embodiment, a high-frequency component and a low-frequency component are obtained from the luminance based on this premise.
- The value of the illumination light can be generated by performing filtering processing, like the value of the low frequency. In addition, when an edge preservation type filter is used, the value of illumination light at an edge portion can more accurately be generated. The value Re of the reflected light and the value Li of the illumination light can be given by
- The value H of the high frequency is generated by dividing the input image by the value of the low frequency, as indicated by equation (5). However, the present invention is not limited to this. For example, the value H of the high frequency may be generated by subtracting the value of the low frequency from the input image, as indicated by equation (7). This also applies to a case in which the value of the reflected light and the value of the illumination light are used.
- The output image characteristic obtaining
module 406 generates the value of the high frequency of the color system to be output by the printing apparatus 310. That is, the output image characteristic obtaining module 406 obtains the value of the high frequency within the range of the color system that can be reproduced by the printing apparatus 310. The generation method will be described later with reference to the flowchart of Fig. 8. - The
contrast correction module 407 decides the contrast correction intensity based on the values of the high frequencies generated by the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406, and performs contrast correction processing for the value of the high frequency of the image data input to the contrast correction module 407. In this embodiment, the description will be made assuming that the contrast of the image is corrected by correcting the intensity of the value of the high frequency. The correction method will be described later with reference to the flowchart of Fig. 9. - The
image output module 404 performs image processing for output by the printing apparatus 310. The image data that has undergone the gamut mapping by the gamut mapping module 403 is separated into ink colors to be printed by the printing apparatus 310. The image output module 404 further performs desired image processing needed for the output by the printing apparatus 310, for example, quantization processing of converting the image data into binary data representing ink discharge/non-discharge using dither or error diffusion processing. - Details of the processing of generating the high frequency of the color system to be output by the
printing apparatus 310, which is performed by the output image characteristic obtaining module 406, will be described with reference to Fig. 8. - In step S101, the output image characteristic obtaining
module 406 causes the D range conversion module 402 to perform D range conversion for image data input to the image input module 401. - In step S102, the output image characteristic obtaining
module 406 causes the gamut mapping module 403 to perform gamut mapping for the image data that has undergone the D range compression in step S101. - In step S103, the output image characteristic obtaining
module 406 generates a value H' of a high frequency from the image data that has undergone the gamut mapping in step S102. To generate the value of the high frequency, the output image characteristic obtaining module 406 calculates the luminance, and further calculates the value of the low frequency of the calculated luminance, like the input image characteristic obtaining module 405. The output image characteristic obtaining module 406 calculates the value of the high frequency in accordance with equation (5) based on the value of the low frequency and the input luminance. The processing procedure is then ended. - The D range compression processing and the gamut mapping processing here have the same contents as the D range conversion and gamut mapping processing performed in the processing shown in
Fig. 10 to be described later, but are executed for a different purpose. Note that the D range compression processing and the gamut mapping processing will sometimes be referred to collectively as conversion processing in the following explanation. - Details of the contrast processing by the
contrast correction module 407 will be described with reference to Fig. 9. - In step S201, the
contrast correction module 407 converts the input image data into the YCbCr color space. If the input color space is the RGB color space, it is converted into the YCbCr color space in accordance with equations (1) to (3). - In step S202, the
contrast correction module 407 obtains the luminance value I from the data of the YCbCr color space generated in step S201, and calculates the value H of the high frequency and the value L of the low frequency based on the luminance value. Here, the calculation methods of the value H of the high frequency and the value L of the low frequency are similar to those of the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406 described above. That is, the contrast correction module 407 calculates the value L of the low frequency of the luminance, and calculates the value H of the high frequency in accordance with equation (5) based on the calculated value L of the low frequency and the input luminance value I. - In step S203, the
contrast correction module 407 generates the contrast correction intensity based on the values of the high frequencies generated by the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406. Here, the target value of the contrast intensity is set to the value of the high frequency of the input image. Let Hm be the correction intensity calculated as the correction coefficient used when performing contrast correction, Ht be the value of the high frequency generated by the input image characteristic obtaining module 405, and H' be the value of the high frequency generated by the output image characteristic obtaining module 406. At this time, the correction intensity calculation method can be represented by - The value obtained here is the reverse bias before and after the conversion. For this reason, in the example shown in
Fig. 5, the correction intensity in the area 507 outside the color gamut is set to be higher than the correction intensity in the area 508 in the color gamut. This is because the degree of change (degree of compression) in the conversion is different, as described using the contrast 510 and the contrasts - Note that when the value Ht of the high frequency and the value H' of the high frequency are generated using equation (7), the correction intensity Hm can be given by
- In step S204, the
contrast correction module 407 performs contrast correction by multiplying the value H of the high frequency generated in step S202 by the correction intensity Hm. That is, contrast correction is performed for the value of the high frequency of the input image data. Letting Hc be the value of the high frequency after the contrast correction, contrast correction can be represented by -
- As represented by equations (8) and (9), the contrast lowers from the input image to the output image, that is, the reverse bias amount when the intensity of the value of the high frequency lowers is set to the correction intensity Hm. When correction is performed by multiplication of the reverse bias amount by equation (10) or addition of the reverse bias amount by equation (11), the intensity of the value of the high frequency of the input image can be maintained in the output image, or a value close to the intensity of the value of the high frequency of the input image can be obtained in the output image.
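The relationships among equations (8) to (11), whose bodies are not reproduced in this text, can be restated in code. This is a hedged sketch inferred from the surrounding description: equation (8) is taken as Hm = Ht / H', equation (9) as Hm = Ht − H', and equations (10) and (11) as applying Hm by multiplication and addition, respectively.

```python
import numpy as np

def correction_intensity(Ht, Hd, mode="ratio"):
    """Reverse bias between the input high-frequency value Ht and the
    converted high-frequency value Hd (H' in the text)."""
    Ht, Hd = np.asarray(Ht, float), np.asarray(Hd, float)
    if mode == "ratio":
        return Ht / np.maximum(Hd, 1e-6)    # inferred form of equation (8)
    return Ht - Hd                          # inferred form of equation (9)

def apply_correction(H, Hm, mode="ratio"):
    """Contrast correction of the high-frequency value H (step S204)."""
    if mode == "ratio":
        return np.asarray(H, float) * Hm    # inferred form of equation (10)
    return np.asarray(H, float) + Hm        # inferred form of equation (11)
```

If the conversion lowers the high frequency from Ht to H', multiplying the input's H by Hm = Ht / H' pre-amplifies it by exactly the amount the conversion will later remove, which is the "reverse bias" idea of the text.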
- In step S205, the
contrast correction module 407 combines the value Hc of the high frequency after the contrast correction in step S204, the value L of the low frequency calculated in step S202, and the values Cb and Cr generated in step S201 to obtain the original RGB data. First, the contrast correction module 407 integrates the value Hc of the high frequency after the contrast correction and the value L of the low frequency by equation (12), thereby obtaining a luminance I' after the contrast correction by combining the values of the frequencies.
- The
contrast correction module 407 then plane-combines the luminance I' and the color difference values (Cb, Cr) to generate color image values (I', Cb, Cr). The image that has undergone the contrast correction according to this embodiment is thus obtained. The processing procedure is then ended. - The flowchart of the overall processing according to this embodiment will be described with reference to
Fig. 10. This processing procedure is implemented when, for example, the CPU 301 reads out and executes a program stored in the HDD 303 and thus functions as each processing unit shown in Fig. 4. - In step S301, the
image input module 401 obtains HDR image data. As for the obtaining method, image data held by the HDD 303 may be obtained, or image data may be obtained from an external apparatus via the data transfer I/F 306. In addition, the HDR image data to be obtained may be decided based on a selection or instruction by the user. - In step S302, the
contrast correction module 407 generates the contrast correction intensity Hm by the method described above with reference to Fig. 9 using the values of the high frequencies generated by the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406. - In step S303, the
contrast correction module 407 performs, for the value of the high frequency of the image data input in step S301, contrast correction by the method described above with reference to Fig. 9 using the contrast correction intensity Hm generated in step S302. That is, steps S302 and S303 of this processing procedure correspond to the processing shown in Fig. 9. - In step S304, the D
range conversion module 402 performs D range conversion (dynamic range compression processing) by the method described above with reference to Fig. 1 and the like for the image data that has undergone the contrast correction in step S303. In this embodiment, the D range conversion module 402 converts the D range from 1,000 nit of the input image to 100 nit, which is the D range for gamut mapping. - In step S305, the
gamut mapping module 403 performs gamut mapping processing by the method described above with reference to Fig. 5 and the like for the image data that has undergone the D range conversion in step S304. - In step S306, the
image output module 404 executes output processing for output by the printing apparatus 310 by the above-described method for the image data that has undergone the gamut mapping in step S305. The processing procedure is then ended. - In this embodiment, using the value of the high frequency of the input image and the value of the high frequency of the output image after gamut mapping, contrast correction is performed by setting the reverse bias amount corresponding to the decrease amount of the value of the high frequency to the correction intensity. Accordingly, even in a case in which the value of the high frequency lowers due to the D range conversion in step S304 and the gamut mapping in step S305, which are performed after the correction intensity is set, the decrease amount is corrected in advance by the contrast correction. As a result, even after the gamut mapping, the contrast of the input image can be maintained, or the contrast can be made close to it.
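The order of steps S302 to S305 of this embodiment — measure the reverse bias through the conversion, correct first, then actually convert — can be sketched end to end. This is illustrative Python only, not the patent's implementation: `convert` stands in for the D range conversion plus gamut mapping, and the division forms of equations (5), (8), (10), and (12) are inferred from the description.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def precorrect_then_convert(I_in, convert, sigma=5.0):
    """First-embodiment order (Fig. 10): derive the correction intensity
    from the input and the converted image, pre-correct the input's high
    frequency, then apply the conversion to the corrected image."""
    lo = lambda x: np.maximum(gaussian_filter(x, sigma), 1e-6)
    Ht = I_in / lo(I_in)                    # high frequency of the input
    I_conv = convert(I_in)                  # S101/S102 of Fig. 8
    Hd = I_conv / lo(I_conv)                # H': high frequency after conversion
    Hm = Ht / np.maximum(Hd, 1e-6)          # correction intensity (S302)
    I_corr = (Ht * Hm) * lo(I_in)           # corrected luminance (S303), recombined
    return convert(I_corr)                  # D range conversion + gamut mapping (S304/S305)
```

With an identity conversion the correction intensity is 1 everywhere and the image passes through unchanged; with a compressive conversion, Hm rises where the conversion flattens detail.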
- In addition, when the value of the high frequency of the output image after the gamut mapping is used at the time of generation of the contrast correction intensity, the correction intensity can be decided in a state in which the decrease amount of the contrast due to compression of gamut mapping is included. Hence, as the ratio of compression by gamut mapping rises, the contrast correction intensity can be set high. Additionally, the value of the high frequency that has undergone the contrast correction is close to the value of the high frequency of the input image, and the value of the low frequency that has not undergone the contrast correction is close to the value of the low frequency after the gamut mapping.
- As is apparent from the above description, according to this embodiment, it is possible to suppress lowering of the contrast caused by the difference in the color reproduction range between the input and the output.
- Note that in this embodiment, an example in which the YCbCr color space is used to obtain the luminance has been described. However, an xyz color space representing a luminance and chromaticity may be used instead.
- The second embodiment of the present invention will be described with reference to the flowchart of
Fig. 11. A description of portions that overlap the first embodiment will be omitted, and only differences will be described. In this embodiment, contrast correction is performed after D range conversion, unlike Fig. 10 described in the first embodiment. That is, the order of processing steps is different from the first embodiment. - In step S401, an
image input module 401 obtains HDR image data. As for the obtaining method, image data held by an HDD 303 may be obtained, or image data may be obtained from an external apparatus via a data transfer I/F 306. In addition, the HDR image data to be obtained may be decided based on a selection or instruction by the user. - In step S402, a D
range conversion module 402 performs D range conversion by the method described above with reference to Fig. 1 and the like for the image data input in step S401. In this embodiment, the D range conversion module 402 converts the D range from 1,000 nit of the input image to 100 nit, which is the D range for gamut mapping. - In step S403, a
contrast correction module 407 generates a contrast correction intensity Hm by the method described above with reference to Fig. 9 using the values of high frequencies generated by an input image characteristic obtaining module 405 and an output image characteristic obtaining module 406. - In step S404, the
contrast correction module 407 performs, for the value of the high frequency of the image data that has undergone the D range conversion in step S402, contrast correction by the method described above with reference to Fig. 9 using the contrast correction intensity Hm generated in step S403. That is, steps S403 and S404 of this processing procedure correspond to the processing shown in Fig. 9 described in the first embodiment. - In step S405, a
gamut mapping module 403 performs gamut mapping by the method described above with reference to Fig. 5 and the like for the image data that has undergone the contrast correction in step S404. - In step S406, an
image output module 404 executes output processing for output by a printing apparatus 310 by the above-described method for the image data that has undergone the gamut mapping in step S405. The processing procedure is then ended. - In this embodiment, using the value of the high frequency of the input image and the value of the high frequency of the output image after gamut mapping, contrast correction is performed by setting the reverse bias amount corresponding to the decrease amount of the value of the high frequency to the correction intensity. Hence, even in a case in which the value of the high frequency lowers due to the D range conversion in step S402 and the gamut mapping in step S405, the decrease amount is corrected by the contrast correction. As a result, even after the gamut mapping, the contrast of the input image can be maintained, or the contrast can be made close to it.
- In addition, when the value of the high frequency of the output image after the gamut mapping is used at the time of generation of the contrast correction intensity, the correction intensity can be decided in a state in which the decrease amount of the contrast due to compression of gamut mapping is included. Hence, as the ratio of compression by gamut mapping rises, the contrast correction intensity can be set high. Additionally, the value of the high frequency that has undergone the contrast correction is close to the value of the high frequency of the input image, and the value of the low frequency that has not undergone the contrast correction is close to the value of the low frequency after the gamut mapping.
- Furthermore, since the contrast correction is performed after the D range conversion, the D range to be handled is smaller, so the memory used for the processing can be made smaller than in a case in which the correction is performed before the D range conversion.
- The third embodiment of the present invention will be described with reference to the flowchart of
Fig. 12. A description of portions that overlap the first embodiment will be omitted, and only differences will be described. In this embodiment, contrast correction is performed after D range conversion and gamut mapping, unlike Fig. 10 described in the first embodiment. That is, the order of processing steps is different from the first embodiment. - In step S501, an
image input module 401 obtains HDR image data. As for the obtaining method, image data held by an HDD 303 may be obtained, or image data may be obtained from an external apparatus via a data transfer I/F 306. In addition, the HDR image data to be obtained may be decided based on a selection or instruction by the user. - In step S502, a D
range conversion module 402 performs D range conversion by the method described above with reference to Fig. 1 and the like for the image data input in step S501. In this embodiment, the D range conversion module 402 converts the D range from 1,000 nit of the input image to 100 nit, which is the D range for gamut mapping. - In step S503, a
gamut mapping module 403 performs gamut mapping by the method described above with reference to Fig. 5 and the like for the image data that has undergone the D range conversion in step S502. - In step S504, a
contrast correction module 407 generates a contrast correction intensity Hm by the method described above with reference to Fig. 9 using the values of high frequencies generated by an input image characteristic obtaining module 405 and an output image characteristic obtaining module 406. - In step S505, the
contrast correction module 407 performs, for the value of the high frequency of the image data that has undergone the gamut mapping in step S503, contrast correction by the method described above with reference to Fig. 9 using the contrast correction intensity Hm generated in step S504. That is, steps S504 and S505 of this processing procedure correspond to the processing shown in Fig. 9 described in the first embodiment. - In step S506, an
image output module 404 executes output processing for output by a printing apparatus 310 by the above-described method for the image data that has undergone the contrast correction in step S505. The processing procedure is then ended. - In this embodiment, using the value of the high frequency of the input image and the value of the high frequency of the output image after gamut mapping, contrast correction is performed by setting the reverse bias amount corresponding to the decrease amount of the value of the high frequency to the correction intensity. Hence, even in a case in which the value of the high frequency lowers due to the D range conversion in step S502 and the gamut mapping in step S503, the decrease amount is corrected by the contrast correction. As a result, even after the gamut mapping, the contrast of the input image can be maintained, or the contrast can be made close to it.
- In addition, when the value of the high frequency of the output image after the gamut mapping is used at the time of generation of the contrast correction intensity, the correction intensity can be decided in a state in which the decrease amount of the contrast due to compression of gamut mapping is included. Hence, as the ratio of compression by gamut mapping rises, the contrast correction intensity can be set high. Additionally, the value of the high frequency that has undergone the contrast correction is close to the value of the high frequency of the input image, and the value of the low frequency that has not undergone the contrast correction is close to the value of the low frequency after the gamut mapping.
- Furthermore, since the contrast correction is performed after the gamut mapping, the D range to be handled is smaller, so the memory used for the processing can be made smaller than in a case in which the correction is performed before the D range conversion.
- The fourth embodiment of the present invention will be described with reference to the flowchart of
Fig. 13. A description of portions that overlap the first embodiment will be omitted, and only differences will be described. In this embodiment, D range conversion is performed twice, unlike Fig. 10 described in the first embodiment. - In step S601, an
image input module 401 obtains HDR image data. As for the obtaining method, image data held by an HDD 303 may be obtained, or image data may be obtained from an external apparatus via a data transfer I/F 306. In addition, the HDR image data to be obtained may be decided based on a selection or instruction by the user. - In step S602, a D
range conversion module 402 performs D range conversion by the method described above with reference to Fig. 1 and the like for the image data input in step S601. In this embodiment, the D range conversion module 402 converts the D range from 1,000 nit of the input image to the D range of a color space used as a standard. For example, in a case of AdobeRGB, the D range of the input image is converted to 120 nit. - In step S603, a
contrast correction module 407 generates a contrast correction intensity Hm by the method described above with reference to Fig. 9 using the values of high frequencies generated by an input image characteristic obtaining module 405 and an output image characteristic obtaining module 406. - In step S604, the
contrast correction module 407 performs, for the value of the high frequency of the image data converted into the D range of the standard color space in step S602, contrast correction by the method described above with reference to Fig. 9 using the contrast correction intensity Hm generated in step S603. That is, steps S603 and S604 of this processing procedure correspond to the processing shown in Fig. 9 described in the first embodiment. - In step S605, the D
range conversion module 402 performs D range conversion by the method described above with reference to Fig. 1 and the like for the image data that has undergone the contrast correction in step S604. In this embodiment, the D range of the image is converted from 120 nit of the standard color space converted in step S602 to 100 nit, which is the D range for gamut mapping. - In step S606, a
gamut mapping module 403 performs gamut mapping by the method described above with reference to Fig. 5 and the like for the image data that has undergone the D range conversion in step S605. - In step S607, an
image output module 404 executes output processing for output by a printing apparatus 310 by the above-described method for the image data that has undergone the gamut mapping in step S606. The processing procedure is then ended. - In this embodiment, using the value of the high frequency of the input image and the value of the high frequency of the output image after gamut mapping, contrast correction is performed by setting the reverse bias amount corresponding to the decrease amount of the value of the high frequency to the correction intensity. Hence, even in a case in which the value of the high frequency lowers due to the D range conversion to the standard color space in step S602, the D range conversion in step S605, and the gamut mapping in step S606, the decrease amount is corrected by the contrast correction. As a result, even after the gamut mapping, the contrast of the input image can be maintained, or the contrast can be made close to it.
- In addition, when the value of the high frequency of the output image after the gamut mapping is used at the time of generation of the contrast correction intensity, the correction intensity can be decided in a state in which the decrease amount of the contrast due to compression of gamut mapping is included. Hence, as the ratio of compression by gamut mapping rises, the contrast correction intensity can be set high. Additionally, the value of the high frequency that has undergone the contrast correction is close to the value of the high frequency of the input image, and the value of the low frequency that has not undergone the contrast correction is close to the value of the low frequency after the gamut mapping.
- Furthermore, since the D range is temporarily converted into the D range of the standard color space, an editing operation such as retouching can be performed while confirming the image in an environment independent of the printing apparatus, for example, on an HDR monitor.
- In the above embodiments, the description has been made using the example in which the contrast correction intensity is generated from the value of the high frequency of the input image and the value of the high frequency of the output image. In this embodiment, an example in which correction intensity information is generated by a 3D LUT method will be described.
Fig. 14 is a view for explaining generation of correction intensity information according to this embodiment. - In this embodiment, correction intensity information sets the decrease amount of the contrast between the input image and the output image to the reverse bias. The output image is assumed to be in a state in which the input image has undergone D range compression and also gamut mapping. In
Fig. 14 , the reference color (224, 0, 0) and the contrast target color (232, 8, 8) of the input change to (220, 8, 8) and (216, 12, 12), respectively, by the D range compression and the gamut mapping. Difference values ΔRGB representing the contrast between the reference color and the contrast target color in the input and the output are 13.9 and 6.9, and the reverse bias of the contrast ratio is calculated by equation (14). In addition, the reverse bias of the contrast difference can be calculated by equation (15). - By the above method, the correction intensity for the input color is generated. This is calculated for each grid value of a 3D LUT, thereby generating a 3D LUT representing a correction intensity Hm of the output for the input (R, G, B). In this way, it is possible to generate correction intensity information having a characteristic that makes the correction intensity Hm larger for a color outside the color gamut compressed largely by gamut mapping than for a color in the color gamut for which the compression is small.
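The worked example of Fig. 14 can be verified numerically. This is an illustrative Python sketch: equations (14) and (15) are not reproduced in this text, so the ratio form and the difference form below are inferred from the surrounding description.

```python
import numpy as np

def delta_rgb(c1, c2):
    """Euclidean distance between two RGB triplets (the ΔRGB of Fig. 14)."""
    a, b = np.asarray(c1, float), np.asarray(c2, float)
    return float(np.linalg.norm(a - b))

# Values from the worked example in the text:
d_in = delta_rgb((224, 0, 0), (232, 8, 8))     # input contrast, ≈ 13.9
d_out = delta_rgb((220, 8, 8), (216, 12, 12))  # output contrast, ≈ 6.9
Hm_ratio = d_in / d_out   # reverse bias of the contrast ratio (inferred equation (14))
Hm_diff = d_in - d_out    # reverse bias of the contrast difference (inferred equation (15))
```

Evaluating this for each grid point of a 3D LUT, with the conversion applied to each grid color, yields the correction intensity table described in the text.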
- A method of performing contrast correction using the correction intensity information will be described. A
contrast correction module 407 looks up the 3D LUT of the correction intensity information using the RGB values of input image data, thereby generating the correction intensity Hm for the input color. Furthermore, the contrast correction module 407 performs contrast correction using the generated correction intensity Hm. - In this embodiment, using the input image and the output image after gamut mapping, the contrast correction is performed using the correction intensity information of the 3D LUT that sets the reverse bias amount corresponding to the decrease amount of the contrast to the correction intensity. Hence, even if the contrast lowers due to the D range conversion and the gamut mapping, the decrease amount is corrected. For this reason, even after the gamut mapping, the contrast of the input image can be maintained, or the contrast can be made close to it. In addition, since the correction intensity Hm is generated by the 3D LUT method, the value of the high frequency of the input image and the value of the high frequency of the output image need not be calculated, and the contrast correction can be performed while using only a small amount of memory.
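A per-color LUT lookup of this kind is typically done by trilinear interpolation between grid points. The sketch below is illustrative only: the 17³ grid size and the Hm values are toy assumptions, not taken from the patent.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical 17x17x17 grid of correction intensities Hm over 8-bit RGB.
grid = np.linspace(0, 255, 17)
hm_lut = np.ones((17, 17, 17))     # Hm = 1 (no correction) everywhere...
hm_lut[12:, :5, :5] = 2.0          # ...except a toy "saturated red" region,
                                   # standing in for colors outside the gamut

lookup = RegularGridInterpolator((grid, grid, grid), hm_lut)

def correction_intensity_for(rgb):
    """Trilinear lookup of Hm for one input color, as the contrast
    correction module would do per pixel."""
    return lookup(np.asarray(rgb, float)).item()
```

Colors inside the gamut map to Hm ≈ 1 (little correction), while colors in the heavily compressed region receive a larger intensity, matching the characteristic described in the text.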
- As the sixth embodiment of the present invention, a form that holds the effect of contrast correction considering an observation condition will be described. Note that a description of components that overlap the above embodiments will appropriately be omitted, and a description will be made with focus placed on differences.
- As described above, the contrast intensity lowers at the time of printing by a printing apparatus due to compression by gamut mapping. Additionally, since the contrast sensitivity characteristic changes depending on the observation condition, it is difficult to maintain the effect of contrast correction. This embodiment aims at solving this problem.
-
Fig. 15 shows a UI configuration screen 1301 provided by a contrast correction application according to this embodiment, which is displayed on a display 307. The user can set a contrast correction condition to be described later via the UI configuration screen 1301, which is a display screen. The user designates, in a path box 1302 of the UI configuration screen 1301, the storage location (path) of an image to be subjected to contrast correction. The image designated by the path box 1302 is displayed in an input image display portion 1303. In an output apparatus setting box 1304, an apparatus that outputs the image designated by the path box 1302 is selected from a pull-down menu and set. In an output paper size setting box 1305, a paper size to be output is selected from a pull-down menu and set. Note that in addition to predetermined sizes, the user may input an arbitrary size from an operation unit 308 and set it. In an observation distance setting box 1306, a distance at which to observe an output printed product is input from the operation unit 308 and set. An appropriate observation distance may automatically be calculated and set based on the output paper size set in the output paper size setting box 1305. Conversely, an appropriate output paper size may automatically be calculated and set based on the observation distance set in the observation distance setting box 1306. In an illumination light setting box 1307, the luminance value of illumination light made to strike the output printed product is selected from a pull-down menu and set. The luminance value may also be input from the operation unit 308. -
Fig. 16 is a block diagram showing an example of a software configuration according to this embodiment. The software configuration further includes a contrast appearance characteristic obtaining module 408, unlike the configuration shown in Fig. 4 described in the first embodiment. An image input module 401 according to this embodiment further obtains the output apparatus (a printer in this embodiment) designated in the output apparatus setting box 1304 of the UI configuration screen 1301 and the output paper size designated in the output paper size setting box 1305. The image input module 401 obtains the observation distance designated in the observation distance setting box 1306 and the luminance value of illumination light set in the illumination light setting box 1307. The image input module 401 also obtains HDR image data designated in the path box 1302 of the UI configuration screen 1301. - In the first embodiment, filtering processing has been described with reference to
Fig. 6. In this embodiment, a filter to be used in the above-described filtering processing can be set in the following way by a contrast appearance characteristic obtaining module 408 shown in Fig. 16 in consideration of the observation condition.
-
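The PDppd computation described above follows from simple viewing geometry. The patent's own equations are not reproduced in this text, so the sketch below is a standard reconstruction under stated assumptions: a 1° visual angle subtends 2·d·tan(0.5°) millimetres at viewing distance d, and the print resolution converts that span to pixels.

```python
import math

def pixels_per_degree(print_dpi, viewing_distance_mm):
    """Number of printed pixels spanning a 1 degree visual angle (PDppd),
    reconstructed from viewing geometry (not the patent's own formula)."""
    span_mm = 2.0 * viewing_distance_mm * math.tan(math.radians(0.5))
    return span_mm * print_dpi / 25.4   # 25.4 mm per inch

# Example: a 300 dpi print viewed from 300 mm gives about 61.8 pixels/degree.
pd_ppd = pixels_per_degree(300, 300)
```

Halving the viewing distance halves PDppd, which is why the filter size must track the observation condition.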
- In the first embodiment, contrast processing by a
contrast correction module 407 has been described with reference to Fig. 9. At this time, the above-described correction intensity Hm may also be calculated based on the observation condition. Using the luminance value of the illumination light obtained by the image input module 401, the contrast appearance characteristic obtaining module 408 calculates, in the following way, a ratio Sr to the contrast sensitivity value at the luminance value of the illumination light serving as a reference. Then, the correction intensity Hm is obtained using the calculated ratio Sr. Here, the luminance value of the illumination light serving as a reference means the reference luminance value that the apparent effect of contrast correction should match. The luminance value of the illumination light serving as a reference may be set by the user as a set value (not shown) in the UI configuration screen 1301 via the image input module 401. Alternatively, the luminance value may be held internally as a predetermined value. The contrast sensitivity ratio Sr is calculated using a contrast sensitivity value S(ur, Ls) at a luminance value Ls of illumination light in the observation environment and a contrast sensitivity value S(ur, Lr) at the luminance value of the illumination light serving as a reference. Note that ur is the high sensitivity frequency at the luminance value of the illumination light serving as a reference. - As the calculation method of ur, a Barten model is used. According to the Barten model, the contrast sensitivity can be calculated by equation (19).
- Here, assume that k = 3.3, T = 0.1, η = 0.025, h = 357 × 3600, a contrast variation Φext(u) corresponding to external noise = 0, and a contrast variation Φ0 corresponding to neural noise = 3 × 10⁻⁸ [sec·deg²]. In addition, XE = 12 [deg] and NE = 15 [cycles] (at orientations of 0 and 90 [deg]; for frequencies of 2 [c/deg] or more at 45 [deg], NE = 7.5 [cycles]). Assume that σ0 = 0.0133 [deg] and Csph = 0.0001 [deg/mm³].
-
- In equations (19) to (24), when the target luminance value is set to L, and the spatial frequency is set to u, the contrast sensitivity of the spatial frequency u at the target luminance L can be calculated.
Fig. 17 is a graph that plots the contrast sensitivity calculated for each luminance by the Barten model. As can be seen, as the luminance becomes higher, the frequency of high contrast sensitivity shifts to the high frequency side; conversely, as the luminance becomes lower, it shifts to the low frequency side. Contrast sensitivities for a plurality of spatial frequencies may be calculated in advance in correspondence with a plurality of luminance values using equations (19) to (24), and a luminance-high sensitivity frequency conversion table in which the spatial frequencies of the maximum values are associated with the luminance values may be held. Fig. 18 shows an example of the luminance-high sensitivity frequency conversion table. In a case in which a luminance value that is not described in the table is set, a high sensitivity frequency can be calculated by defining an approximate function that connects the high sensitivity frequencies on a luminance basis, as shown in Fig. 17. -
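Building such a luminance-high sensitivity frequency table can be sketched as below. Equations (19) to (24) are not reproduced in this text, so the code implements only a simplified Barten-type sensitivity using the constants listed above; the pupil-diameter approximation, the photon conversion factor p, and the omission of the Stiles-Crawford and eccentricity terms are all assumptions made for illustration, not the patent's exact formulation.

```python
import math

def barten_csf(u, L):
    """Simplified Barten contrast sensitivity at spatial frequency u [c/deg]
    and adaptation luminance L [cd/m^2]. Several terms are simplified (see text)."""
    k, T, eta, Phi0 = 3.3, 0.1, 0.025, 3e-8
    XE, NE, u0 = 12.0, 15.0, 7.0          # field size [deg], cycles, neural cutoff [c/deg]
    sigma0, Csph = 0.0133, 0.0001
    p = 1.24e6                            # photon conversion factor (assumed value)
    d = 5.0 - 3.0 * math.tanh(0.4 * math.log10(L))      # pupil diameter [mm] (approx.)
    E = (math.pi * d * d / 4.0) * L                     # retinal illuminance [Td]
    sigma = math.sqrt(sigma0 ** 2 + (Csph * d ** 3) ** 2)
    M_opt = math.exp(-2.0 * math.pi ** 2 * sigma ** 2 * u ** 2)  # optical MTF
    photon_noise = 1.0 / (eta * p * E)
    neural_noise = Phi0 / (1.0 - math.exp(-((u / u0) ** 2)))
    spatial = (2.0 / T) * (2.0 / XE ** 2 + (u ** 2) / NE ** 2)
    return (M_opt / k) / math.sqrt(spatial * (photon_noise + neural_noise))

def peak_frequency(L):
    """High sensitivity frequency: the u that maximizes the CSF at luminance L."""
    freqs = [0.5 * i for i in range(1, 61)]  # scan 0.5 .. 30 c/deg
    return max(freqs, key=lambda u: barten_csf(u, L))

# luminance -> high sensitivity frequency table in the spirit of Fig. 18
table = {L: peak_frequency(L) for L in (0.01, 0.1, 1.0, 10.0, 100.0, 1000.0)}
```

Consistent with the Fig. 17 description, the peak of this sketch shifts toward higher frequencies as the luminance rises.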
- The
contrast correction module 407 generates a contrast correction intensity. Using the contrast sensitivity ratio Sr calculated by a contrast sensitivity ratio calculation unit 1401, the value Hta of the target high-frequency component to be corrected, and the value H' of the output high-frequency component after gamut mapping, the contrast correction intensity Hm can be represented by -
- Next, as for contrast ratio calculation processing, the contrast sensitivity S(ur, Lr) at the luminance value of the illumination light serving as a reference is calculated, and the contrast sensitivity S(ur, Ls) at the luminance value of the illumination light in the observation environment is calculated. Then, the contrast sensitivity ratio Sr is calculated using the contrast sensitivity S(ur, Lr) of the illumination light serving as a reference and the contrast sensitivity S(ur, Ls) of the illumination light in the observation environment.
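The ratio computation itself is straightforward once a contrast sensitivity function is available. In the sketch below, a toy CSF stands in for the Barten model (its luminance exponent and bandpass shape are invented for illustration); `contrast_sensitivity_ratio` mirrors the definition above, Sr = S(ur, Ls) / S(ur, Lr), with ur the high sensitivity frequency at the reference luminance Lr.

```python
import math

def contrast_sensitivity_ratio(csf, Ls, Lr, freqs):
    """Sr = S(ur, Ls) / S(ur, Lr), where ur is the high sensitivity (peak)
    frequency at the reference illumination luminance Lr."""
    ur = max(freqs, key=lambda u: csf(u, Lr))
    return csf(ur, Ls) / csf(ur, Lr)

# toy CSF for demonstration only: bandpass in u, rising with luminance
toy_csf = lambda u, L: (L ** 0.3) * u * math.exp(-u / 4.0)

freqs = [0.5 * i for i in range(1, 41)]
Sr = contrast_sensitivity_ratio(toy_csf, Ls=80.0, Lr=10.0, freqs=freqs)
```

With this toy CSF, ur is 4 c/deg regardless of luminance, so Sr reduces to (Ls/Lr)^0.3; with a Barten-type CSF the peak frequency itself would depend on Lr, as the table above shows.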
- When contrast correction processing is performed using the above-described method, the effect of contrast correction considering the observation condition can be maintained. In the above-described embodiment, the contrast appearance characteristic obtaining module 408 sets the filter M in consideration of the observation condition, obtains a low-frequency component L using the filter M, and sets the contrast correction intensity using the contrast sensitivity value calculated based on the observation condition. However, either one of these alone may suffice. - In the above-described sixth embodiment, as in steps S101 to S103, the value H' of the high frequency is generated from the image data that has undergone the D range compression and gamut mapping, and the
contrast correction module 407 obtains the contrast correction intensity Hm using H' and the input image data Ht, and corrects the value of the high frequency using it. However, the following processing may be performed in place of correction of the high-frequency component H using the correction intensity Hm. That is, in step S202 of the sixth embodiment, the value L of the low frequency and the value H of the high frequency are obtained using the filter M generated based on the observation condition by the contrast appearance characteristic obtaining module 408, and D range compression is performed for the obtained value L of the low frequency to generate a value L' of the low frequency. Then, a luminance I' may be obtained by integrating the value H of the high frequency and the value L' of the low frequency. - Additionally, in the sixth embodiment, when performing contrast correction, contrast correction may be performed by setting the value Hm to the above-described ratio Sr to the contrast sensitivity value, that is, by setting Hm = Sr, instead of obtaining the correction intensity Hm from the input image data Ht and the image data that has undergone the D range compression and the gamut mapping. In this case, the value L of the low frequency and the value H of the high frequency may be obtained using the filter M generated based on the observation condition. However, not the filter M but a filter prepared without being based on the observation condition may be used.
- As the seventh embodiment of the present invention, a form that considers highlight detail loss or shadow detail loss at the time of dynamic range compression will be described. Note that a description of components that overlap the above embodiments will appropriately be omitted, and a description will be made with focus placed on differences.
- As image processing for correcting the contrast lowering caused when D range compression as described above is performed, Retinex processing is used. In the Retinex processing, first, an image is separated into an illumination light component and a reflected light component. When the illumination light component is D-range-compressed and the reflected light component is held, D range compression can be performed while maintaining the contrast of the original image.
- It can be said that the illumination light component is substantially a low-frequency component, and the reflected light component is substantially a high-frequency component. In this embodiment, the low-frequency component or the illumination light component will be referred to as a first component, and the high-frequency component or the reflected light component will be referred to as a second component hereinafter.
- At this time, in a case in which the shape of the color gamut of input image data and the shape of the color gamut of a printing apparatus are greatly different, even when contrast correction is performed using the conventional method, the contrast obtained at the time of printing may be different from the intended contrast due to compression by gamut mapping. Furthermore, if the pixel value of the second component is large on the high luminance side or on the low luminance side, the output image may exceed the D range of the output, and highlight detail loss or shadow detail loss may occur.
Figs. 2C and 2D show the principle of occurrence of highlight detail loss/shadow detail loss. In Figs. 2C and 2D, the ordinate represents the pixel value, and the abscissa represents the coordinate values of an image. Figs. 2C and 2D show the first component of the image and the pixel values obtained by adding the second component to the first component, before D range compression and after D range compression, respectively. After D range compression is performed for the first component, the second component maintains its value from before the compression. In this case, as indicated by the pixel values obtained by adding the second component, the values are clipped by the upper and lower limits (the dotted lines in Fig. 2D) of the D range on the high luminance side and on the low luminance side, and highlight detail loss or shadow detail loss occurs. That is, if the value of the low-frequency component is D-range-compressed toward the high luminance side or the low luminance side, highlight detail loss/shadow detail loss readily occurs.
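The clipping mechanism of Figs. 2C and 2D can be reproduced numerically. The numbers below are invented purely for illustration: a first component compressed from a 0-1000 input range into a 0-100 output range while the second component keeps its original amplitude.

```python
import numpy as np

# first (low-frequency) and second (high-frequency) components along one image row
L = np.array([10.0, 200.0, 600.0, 950.0, 1000.0])  # first component, input D range 0..1000
H = np.array([0.0, 30.0, -30.0, 40.0, 60.0])       # second component, kept unchanged

Lc = L * (100.0 / 1000.0)          # simple linear D range compression to 0..100
out = np.clip(Lc + H, 0.0, 100.0)  # add the unchanged detail, clip to the output range

# the two brightest pixels both clip to 100: their distinct detail values are lost
```

The two rightmost pixels carried different second-component values (40 and 60) but become identical after clipping, which is exactly the highlight detail loss described above; the same happens at the low end for shadow detail loss.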
- In the first embodiment, contrast processing by a
contrast correction module 407 has been described with reference to Fig. 9. In this embodiment, processing is performed in the following way in steps S204 and S205 of Fig. 9. The processes of steps S201 to S203 are the same as in the first embodiment. In step S204, in addition to the process of step S204 described in the first embodiment, a second component correction module (not shown) in the contrast correction module 407 corrects the second component so that the high-frequency component, that is, the second component corrected by the contrast correction module 407, does not exceed the D range of the input (the luminance range of the input) and cause highlight detail loss/shadow detail loss. Note that since the output from the contrast correction module 407 has the same D range as the input, the second component is thereby corrected not to exceed the luminance range of the output as well. Here, the second component is corrected in the following way based on the value of the first component L before D range conversion. Highlight detail loss/shadow detail loss readily occurs when L is on the high luminance side or on the low luminance side. Hence, the larger or smaller the value L becomes, the higher the degree of correction of the second component is.
-
- When Hc = 1, since highlight detail loss/shadow detail loss shadow is not caused by the addition of the second component, the second component is not corrected.
- The correction coefficients P and Q are calculated in the following way.
- In step S205, the
contrast correction module 407 combines the value Hcb of the high frequency after the contrast correction and the correction of the second component in step S204, the value L of the low frequency calculated in step S202, and the values Cb and Cr generated in step S201 to obtain the original RGB data. First, the contrast correction module 407 integrates the value Hc of the high frequency after the contrast correction and the value L of the low frequency by equation (32), thereby obtaining a luminance I' after the contrast correction by combining the values of the frequencies. - Note that when the value Hc of the high frequency and the value L of the low frequency are generated using equation (7) described in the first embodiment, the second component is corrected in the following way to prevent the second component corrected by the
contrast correction module 407 from exceeding the D range of the input and causing highlight detail loss/shadow detail loss. - When Hc > 0, highlight detail loss may occur on the high luminance side. For this reason, correction is performed such that the absolute value of the second component becomes small as the value of the first component L becomes large. Here, the second component is corrected using a correction coefficient W.
-
-
- When Hc = 0, since highlight detail loss/shadow detail loss is not caused by the addition of the value of the second component Hc, nothing is performed.
- Here, α, β, t1, and t2 are predetermined constants. If the first component is in the halftone range, the second component is hardly suppressed; the second component is suppressed only when the value of the first component is on the high luminance side or on the low luminance side.
- In addition, Lmax and Lmin are the maximum value and the minimum value of the D range of the input, respectively. Note that the correction coefficients W and S need not always be Sigmoid-type functions as described above. The function is not particularly limited as long as it makes the absolute value of the second component Hcb after the correction smaller than the absolute value of the second component Hc before the correction.
- In addition, equations (33) and (34) may be executed by obtaining W(L') and S(L') using an LUT calculated for each value L' in advance. When the LUT prepared in advance is used, the processing load needed for the operation can be reduced, and the processing speed can be improved.
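Since the correction equations and the constants α, β, t1, and t2 are not reproduced here, the following is a hedged sketch of one sigmoid-type choice of the correction coefficients W and S, together with the LUT shortcut mentioned above. All numeric values, and the exact sigmoid form, are assumptions for illustration; the only property the text requires is |Hcb| ≤ |Hc|, with stronger suppression near the ends of the input D range.

```python
import numpy as np

Lmax, Lmin = 1000.0, 0.0         # input D range (example values)
alpha, beta = 1.0, 0.05          # assumed constants controlling amplitude and slope
t1, t2 = 0.9 * Lmax, 0.1 * Lmax  # assumed thresholds near the ends of the range

def W(L):
    """Approaches 0 as L nears Lmax: suppresses positive detail (highlight side)."""
    return alpha / (1.0 + np.exp((L - t1) * beta))

def S(L):
    """Approaches 0 as L nears Lmin: suppresses negative detail (shadow side)."""
    return alpha / (1.0 + np.exp((t2 - L) * beta))

def correct_second_component(H, L):
    """Hcb = W(L)*H for H > 0 and S(L)*H for H < 0, so |Hcb| <= |H| everywhere."""
    return np.where(H > 0, W(L) * H, np.where(H < 0, S(L) * H, H))

# LUT variant: precompute W and S on a luminance grid to avoid per-pixel exp()
grid = np.linspace(Lmin, Lmax, 1024)
W_lut, S_lut = W(grid), S(grid)

def correct_with_lut(H, L):
    idx = np.clip(((L - Lmin) / (Lmax - Lmin) * 1023.0).astype(int), 0, 1023)
    return np.where(H > 0, W_lut[idx] * H, np.where(H < 0, S_lut[idx] * H, H))
```

Midtone pixels pass through almost unchanged, while detail riding on a first component near Lmax or Lmin is strongly attenuated; the 1024-entry LUT reproduces the direct computation to well under one luminance code.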
-
- The
contrast correction module 407 then plane-combines the luminance I' and the color difference values (Cb, Cr) to generate color image values (I', Cb, Cr). The image that has undergone the contrast correction according to this embodiment is thus obtained. - The procedure of processing is the same as that described with reference to
Fig. 9 in the first embodiment, and a description thereof will be omitted. - As described above, in this embodiment, the second component of the HDR image is corrected in advance in consideration of the contrast lowering caused by the D range conversion and the gamut mapping. In addition, after the second component correction, processing is performed to prevent highlight detail loss/shadow detail loss from occurring. When the contrast correction considering the contrast lowering caused by the gamut mapping is performed in advance for the HDR image, the contrast can easily be maintained even after the gamut mapping.
- The eighth embodiment will be described with reference to the flowchart of
Fig. 19 . - The flowchart of
Fig. 19 shows the procedure of processing of highlight detail loss/shadow detail loss determination. In this determination, it is determined, based on the value of a first component L' after D range compression and the value of a second component H before the D range compression, whether to perform highlight detail loss/shadow detail loss correction. When the highlight detail loss/shadow detail loss determination is performed based on the values of both the first component L' after D range compression and the second component H, a pixel that causes highlight detail loss/shadow detail loss can more correctly be specified. Furthermore, by correcting only the pixel that causes highlight detail loss/shadow detail loss, lowering of the contrast of a pixel that does not cause highlight detail loss/shadow detail loss can be prevented. The rest is the same as in the seventh embodiment. - In step S1001, a
contrast correction module 407 determines whether to correct highlight detail loss/shadow detail loss based on the second component H before D range compression and the first component L' after D range compression. - More specifically, in accordance with the result of addition of the first component L' after D range compression and the second component H, it is determined whether to perform highlight detail loss/shadow detail loss correction.
Fig. 20 shows the outline of the determination. The highlight detail loss/shadow detail loss correction determination D range in Fig. 20 is a D range determined in advance for highlight detail loss/shadow detail loss determination, and the buffer regions ΔW and ΔS represent the luminance intervals between the D range after compression and the highlight detail loss/shadow detail loss correction determination D range.
In this case, since highlight detail loss/shadow detail loss does not occur, correction is not performed. - (2) When L' + H falls outside the range of the highlight detail loss/shadow detail loss correction determination D range and also falls within the range of the D range after compression (pixel 21)
In this case, highlight detail loss/shadow detail loss does not occur. However, to prevent tone inversion caused as the result of highlight detail loss/shadow detail loss correction, the pixel is set to the target pixel of highlight detail loss/shadow detail loss correction. - (3) When L' + H falls outside the range of the D range after compression (pixel 23)
- In this case, highlight detail loss/shadow detail occurs. The pixel is set to the target pixel of highlight detail loss/shadow detail loss correction.
- The second component correction module (not shown) in the
contrast correction module 407 corrects the second component in accordance with equations below based on the result of the above-described highlight detail loss/shadow detail loss correction determination. In the equation, α is a predetermined constant, Thmax is the maximum value of a predetermined D range for highlight detail loss/shadow detail loss determination, and Thmin is the minimum value, which are determined in advance to prevent an adverse effect in the image after the second component is corrected. - In a case of highlight detail loss (L' + H > Thmax)
-
- In a case of shadow detail loss (L' + H < Thmin)
-
-
-
- Note that a constant Hmax much larger than H may be set to perform calculation as follows.
-
-
- In cases other than the above cases, the second component is not corrected, like equation (40).
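The three-way determination and the correction can be sketched per pixel as below. The patent's correction equations and the values of Thmax, Thmin, and α are not reproduced above, so this sketch substitutes an assumed soft-knee correction: the excess of L' + H beyond the determination range is scaled by a small α, which keeps corrected totals strictly ordered (addressing the tone-inversion concern of case (2)) and inside the compressed D range for moderate excesses. All constants are illustrative.

```python
def correct_H(Lp, H, Dmin=0.0, Dmax=100.0, dW=5.0, dS=5.0, alpha=0.3):
    """Return the corrected second component for one pixel.
    Lp: first component after D range compression; H: second component.
    The determination range is [Dmin + dS, Dmax - dW] (buffer widths dS, dW)."""
    Thmax, Thmin = Dmax - dW, Dmin + dS
    v = Lp + H
    if Thmin <= v <= Thmax:
        return H                                   # case (1): no correction needed
    if v > Thmax:                                  # cases (2)/(3), highlight side
        return (Thmax - Lp) + alpha * (v - Thmax)  # compress the excess into the buffer
    return (Thmin - Lp) + alpha * (v - Thmin)      # cases (2)/(3), shadow side
```

Because the corrected total Thmax + α(L' + H − Thmax) increases strictly with L' + H, two pixels that differed before correction still differ after it, so the buffer region preserves tonal order rather than flattening it the way hard clipping does.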
- As described above, in this embodiment, highlight detail loss/shadow detail loss correction determination is performed, and correction can be performed only for the second component that needs highlight detail loss/shadow detail loss correction. Accordingly, lowering of the contrast can be suppressed in a case in which the first component is on the high luminance side or on the low luminance side, but highlight detail loss/shadow detail loss hardly occurs depending on the value of the second component.
- The ninth embodiment will be described with reference to the flowchart of
Fig. 21. Fig. 21 shows details of highlight detail loss/shadow detail loss correction performed by a contrast correction module 407.
- When JND is taken into consideration at the time of highlight detail loss/shadow detail loss correction determination, the contrast after the second component correction can easily be perceived. For example, in
Fig. 20, if the widths of the buffer regions ΔW and ΔS are less than JND, the luminance difference between the pixels in those regions is difficult to perceive. - The
contrast correction module 407 holds the value of the just-noticeable difference (JND) for a luminance by a JND holding module (not shown). - Note that the value of JND may be calculated at the start of a program and held in a memory (a
RAM 302 or aRAM 314 shown inFig. 3 ) until the end of the program, or an LUT may be held in an external file and loaded as needed. Alternatively, the value of JND may be calculated each time. - JND is a threshold to allow a person to recognize a difference. A luminance difference less than JND is hardly perceived.
- JND is obtained from, for example, a Barten model as shown in
Fig. 22 . The Barten model is the physiological model of a visual system formed by a mathematical description. The abscissa inFig. 22 represents the luminance value, and the ordinate inFig. 22 represents the minimum contrast step perceivable by a human with respect to the luminance value. Here, letting Lj be a certain luminance value, and Lj + 1 be a luminance value obtained by adding JND to Lj, a minimum contrast step mt is defined by, for example, -
- This shows that when the luminance difference is equal to or more than JND, a human can perceive the luminance difference. As models representing the visual characteristic, various mathematical models such as a Weber model and a DeVries-Rose model have been proposed in addition to the Barten model. In addition, JND may be a numerical value found experimentally or empirically by sensory evaluation or the like.
- In the highlight detail loss/shadow detail loss determination, the widths of the buffer regions ΔW and ΔS in
Fig. 20 are decided to be equal to or more than JND, thereby reducing the loss of visual contrast after highlight detail loss/shadow detail loss correction. That is, if the width of the buffer region is less than JND, the luminance difference in the buffer region is difficult to perceive. When the width of the buffer region is equal to or more than JND, the contrast in the buffer region is easily perceived even after correction of the second component.
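The JND-based choice of buffer width can be sketched as follows. The defining equation for mt is not reproduced above, but the description (Lj, and Lj+1 = Lj + JND) matches a Michelson-style contrast step, which is assumed here; inverting that definition gives the JND width at a luminance, and the buffer width is then taken no smaller than it.

```python
def jnd_width(L, mt):
    """Solve mt = (L2 - L1) / (L2 + L1) with L2 = L1 + JND for the JND at L1 = L:
    JND = 2 * mt * L / (1 - mt). Assumes the Michelson-style step described above."""
    return 2.0 * mt * L / (1.0 - mt)

def buffer_width(L, mt, minimum=1e-6):
    """Buffer region width chosen >= JND so the corrected contrast stays perceivable."""
    return max(jnd_width(L, mt), minimum)
```

For example, at L = 100 with a minimum contrast step mt of 0.01 (a value taken from a model such as Barten's or from sensory evaluation), the JND is 2/0.99, so ΔW and ΔS should be at least about 2.02 at that luminance.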
Fig. 21 shows the procedure of highlight detail loss/shadow detail loss correction processing according to the ninth embodiment. Step S900 of deciding the highlight detail loss/shadow detail loss correction determination D range is added to the procedure shown in Fig. 19 according to the eighth embodiment, and determination is performed using the decided D range.
- Processing after highlight detail loss/shadow detail loss correction determination is the same as in the eighth embodiment other than that determination is performed using the decided luminance range D1.
- As described above, in the ninth embodiment, at the time of highlight detail loss/shadow detail loss correction determination, the width of the buffer region to correct the second component is decided in consideration of the visual characteristic JND. When the width of the buffer region is equal to or more than JND, loss of the contrast after second component correction can be reduced.
- In the seventh to ninth embodiments, highlight detail loss/shadow detail loss correction is performed after the
contrast correction module 407 performs contrast correction. In the 10th embodiment, the contrast correction module corrects the second component and corrects highlight detail loss or shadow detail loss without performing contrast correction. Note that in this embodiment, an image processing apparatus 300 may not include a contrast correction module 407, and may instead have the function of a correction module configured to correct the second component in the following way. The rest is the same as in the seventh embodiment.
Fig. 23 is a flowchart showing the procedure of image processing according to this embodiment. Unlike the procedure of Fig. 9 shown in the first embodiment, steps S203 and S204 in Fig. 9 are replaced with the second component correction of step S1201 in Fig. 10. - Steps S201 and S202 are the same as in
Fig. 9 described in the first embodiment. - Next, in step S1201, the second component is corrected in the following way.
- When H > 0, highlight detail loss may occur on the high luminance side. For this reason, correction is performed such that the absolute value of a second component H becomes small as the value of a first component L' after D range compression becomes large. Here, the second component H is corrected using a correction coefficient W below, thereby obtaining Hcb.
-
- When H = 0, since highlight detail loss/shadow detail loss shadow is not caused by the addition of the second component, the second component is not corrected.
-
- Here, α, β, t1, and t2 are predetermined constants. As the position of the first component is moved to the high luminance side or the low luminance side, the second component is suppressed.
- In addition, L'max and L'min are the maximum value and the minimum value of the luminance of the D range after compression, respectively.
- When a nonlinear function is applied to the correction coefficient in this way, the second component can be strongly suppressed as the position of the first component moves to the high luminance side or the low luminance side.
- Note that the correction coefficients W and S need not always be Sigmoid-type functions as described above. Any function can be used for the decision as long as it is a function for strongly suppressing the second component as the position of the first component moves to the high luminance side or the low luminance side.
- Note that equations (49) and (50) may be executed by obtaining W(L') and S(L') using an LUT calculated for each value L' in advance. When the LUT prepared in advance is used, the processing load needed for the operation can be reduced, and the processing speed can be improved.
- Note that the correction coefficients W and S may be calculated using the value of the first component L before D range compression. When the first component L before D range compression is used, the D range compression and the second component correction processing can be performed in parallel, and the calculation efficiency improves. Letting Lmax and Lmin be the maximum value and the minimum value of the luminance of the D range before compression, the correction of the second component in this case is performed in the following way.
-
-
- Nothing is performed.
- Step S205 is performed as in the first embodiment.
- The 11th embodiment of the present invention will be described. The procedure of processing is the same as in
Fig. 11 described in the second embodiment, and the description will refer to it. In addition, a description of portions that overlap the above embodiments will be omitted, and only differences will be described. The processes of steps S401 to S403 are the same as in the second embodiment. - In step S404, a
contrast correction module 407 performs, for the value of the high frequency of image data that has undergone D range conversion in step S402, contrast correction by the method described above with reference to Fig. 9 using the contrast correction intensity Hm generated in step S403. That is, steps S403 and S404 of this processing procedure correspond to the processing shown in Fig. 9 described in the first embodiment. Furthermore, highlight detail loss/shadow detail loss correction is performed in the following way as in the seventh embodiment. - Note that in a case in which the value Hc of a high frequency and the value L of a low frequency are generated using equation (7),
-
-
- When Hc = 1, since highlight detail loss/shadow detail loss is not caused by the addition of the second component, the second component is not corrected.
- The correction coefficients P and Q are calculated in the following way.
- In step S405, the
contrast correction module 407 combines the value Hcb of the high frequency after the contrast correction and second component correction in step S404, the value L of the low frequency calculated in step S202 of Fig. 9, and the values Cb and Cr generated in step S201 of Fig. 9 to obtain the original RGB data. First, the contrast correction module 407 integrates the value Hc of the high frequency after the contrast correction and the value L of the low frequency by equation (59), thereby obtaining a luminance I' after the contrast correction by combining the values of the frequencies. - Note that when the value Hc of the high frequency and the value L of the low frequency are generated using equation (7), the second component is corrected in the following way to prevent the second component corrected by the
contrast correction module 407 from exceeding the D range of the input and causing highlight detail loss/shadow detail loss. - When Hc > 0, highlight detail loss may occur on the high luminance side. For this reason, correction is performed such that the absolute value of the second component becomes small as the value of the first component L becomes large. Here, the second component is corrected using a correction coefficient W.
-
-
- In this case, since highlight detail loss/shadow detail loss shadow is not caused by the addition of the value of the second component Hc, nothing is performed.
- Here, α, β, t1, and t2 are predetermined constants. If the first component has a halftone, the second component is not so suppressed. The second component is suppressed only when the value of the first component is on the high luminance side or on the low luminance side.
- In addition, Lmax and Lmin are the maximum value and the minimum value of the D range of the input, respectively. Note that the correction coefficients W and S need not always be Sigmoid-type functions as described above. The function is not particularly limited as long as it makes the absolute value of the second component Hcb after the correction smaller than the absolute value of the second component Hc before the correction.
- In addition, equations (60) and (61) may be executed by obtaining W(L') and S(L') using an LUT calculated for each value L' in advance. When the LUT prepared in advance is used, the processing load needed for the operation can be reduced, and the processing speed can be improved.
-
- The
contrast correction module 407 then plane-combines the luminance I' and the color difference values (Cb, Cr) to generate color image values (I', Cb, Cr). The image that has undergone the contrast correction according to this embodiment is thus obtained. - Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
- While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
Claims (15)
- An image processing apparatus (300) characterized by comprising:
obtaining means (405) for obtaining a luminance of an input image having a color reproduction range wider than that of a printing apparatus (310);
conversion means (402, 403) for performing, for the input image, conversion processing of obtaining a value included in the color reproduction range of the printing apparatus and obtaining the luminance of the image after the conversion; and
correction means (407) for correcting the luminance of the input image,
wherein the correction means performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining means and the luminance obtained by the conversion means such that an intensity of the correction becomes higher for a color that is not included in the color reproduction range of the printing apparatus than for a color included in the color reproduction range of the printing apparatus.
- The apparatus according to claim 1, characterized by further comprising an extraction means (406) for extracting a high-frequency component from the luminance of an image,
wherein the extraction means extracts a first high-frequency component from the luminance obtained by the obtaining means, and extracts a second high-frequency component from the luminance obtained by the conversion means, and
the correction means corrects the luminance of the input image based on a conversion characteristic between the first high-frequency component and the second high-frequency component.
- The apparatus according to claim 2, characterized in that the extraction means
extracts the first high-frequency component by generating a first low-frequency component from the luminance of the input image by filtering means and subtracting the first low-frequency component from the luminance of the input image, and
extracts the second high-frequency component by generating, by the filtering means, a second low-frequency component from the luminance of the image obtained by the conversion means and subtracting the second low-frequency component from the luminance of the image obtained by the conversion means.
- The apparatus according to claim 3, characterized in that the correction means decides an intensity of the correction by subtracting the second high-frequency component from the first high-frequency component.
- The apparatus according to claim 2, characterized in that the extraction means
extracts the first high-frequency component by generating a first low-frequency component from the luminance of the input image by filtering means and dividing the luminance of the input image by the first low-frequency component, and
extracts the second high-frequency component by generating, by the filtering means, a second low-frequency component from the luminance of the image obtained by the conversion means and dividing the luminance of the image obtained by the conversion means by the second low-frequency component.
- The apparatus according to claim 5, characterized in that the correction means decides an intensity of the correction by dividing the first high-frequency component by the second high-frequency component.
- The apparatus according to any one of claims 2 to 6, characterized in that a reflected light component is used as a high-frequency component of the luminance, and an illumination light component is used as a low-frequency component of the luminance.
- The apparatus according to any one of claims 1 to 7, characterized in that the correction by the correction means is performed for the input image, and the same conversion processing as the conversion processing by the conversion means is applied to the image after the correction.
- The apparatus according to any one of claims 1 to 7, characterized in that the same conversion processing as the conversion processing by the conversion means is applied to the input image, and the correction by the correction means is performed for the image after the conversion.
- The apparatus according to any one of claims 1 to 9, characterized in that the conversion processing by the conversion means includes dynamic range compression processing and gamut mapping processing.
- The apparatus according to any one of claims 1 to 10, characterized by further comprising:
input means (408) for inputting information about an observation condition when observing an image printed on a sheet by the printing apparatus based on data representing the input image; and
decision means (407) for deciding a contrast characteristic concerning a degree of appearance of a contrast in the printed image based on the information about the observation condition input by the input means,
wherein the correction means performs correction of the luminance of the input image based on the conversion characteristic between the luminance obtained by the obtaining means and the luminance obtained by the conversion means and the contrast characteristic decided by the decision means such that the intensity of the correction becomes higher for the color that is not included in the color reproduction range of the printing apparatus than for the color included in the color reproduction range of the printing apparatus.
- The apparatus according to any one of claims 1 to 10, characterized in that the correction means corrects a high-frequency component of the image based on a luminance of a low-frequency component of the image such that the luminance of the high-frequency component of the image after the conversion processing is included in a luminance range of the image after the correction by the correction means.
- The apparatus according to any one of claims 1 to 3, characterized in that the correction means further comprises determination means for determining whether to perform the correction based on a luminance of a low-frequency component of the image after the conversion processing and a luminance of a high-frequency component of the image after the conversion processing.
- An image processing method characterized by comprising:
obtaining a luminance of an input image having a color reproduction range wider than that of a printing apparatus (310);
performing, for the input image, conversion processing of obtaining a value included in the color reproduction range of the printing apparatus (310) and obtaining the luminance of the image after the conversion; and
correcting the luminance of the input image,
wherein in the correcting, correction of the luminance of the input image is performed based on a conversion characteristic between the luminance obtained in the obtaining and the luminance obtained in the performing the conversion processing such that an intensity of the correction becomes higher for a color that is not included in the color reproduction range of the printing apparatus than for a color included in the color reproduction range of the printing apparatus.
- A program that causes a computer (300) to function as:
obtaining means (405) for obtaining a luminance of an input image having a color reproduction range wider than that of a printing apparatus (310);
conversion means (402, 403) for performing, for the input image, conversion processing of obtaining a value included in the color reproduction range of the printing apparatus and obtaining the luminance of the image after the conversion; and
correction means (407) for correcting the luminance of the input image,
wherein the correction means performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining means and the luminance obtained by the conversion means such that an intensity of the correction becomes higher for a color that is not included in the color reproduction range of the printing apparatus than for a color included in the color reproduction range of the printing apparatus.
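Editorial note: claim 1's core scheme — restoring luminance lost in conversion, more strongly for colors outside the printer's reproduction range — can be illustrated with a short sketch. The function name, the linear gain model, and the particular gain values are all hypothetical; the claim itself fixes only that out-of-gamut colors receive the stronger correction.

```python
import numpy as np

def correct_luminance(lum_in, lum_conv, in_gamut_mask,
                      base_gain=0.5, oog_gain=1.0):
    """Restore luminance lost in the gamut conversion, more strongly for
    out-of-gamut pixels. Gains are illustrative; only the inequality
    oog_gain > base_gain reflects the claim."""
    # Conversion characteristic: luminance lost per pixel.
    loss = lum_in - lum_conv
    # Stronger correction where the input color is outside the printer gamut.
    gain = np.where(in_gamut_mask, base_gain, oog_gain)
    return lum_conv + gain * loss
```

An out-of-gamut pixel thus gets its lost luminance restored in full, while an in-gamut pixel is only partially corrected.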
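Editorial note: the subtraction variant of claims 3 and 4 can be sketched as follows for a 1-D luminance signal, assuming a box filter as the unspecified "filtering means" (the claims leave the filter open):

```python
import numpy as np

def low_pass(lum, k=5):
    """Box filter standing in for the claims' unspecified 'filtering means'."""
    return np.convolve(lum, np.ones(k) / k, mode="same")

def correction_intensity_sub(lum_in, lum_conv, k=5):
    """High-frequency component = luminance minus its low-pass (claim 3);
    correction intensity = first HF minus second HF (claim 4)."""
    hf_in = lum_in - low_pass(lum_in, k)
    hf_conv = lum_conv - low_pass(lum_conv, k)
    return hf_in - hf_conv
```

When the converted luminance is identical to the input, the intensity is zero everywhere, so no correction is applied.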
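Editorial note: the division variant of claims 5 and 6 replaces differences with ratios (a Retinex-like decomposition into illumination and reflectance). Again the box filter and the small `eps` guard against division by zero are assumptions, not part of the claims:

```python
import numpy as np

def correction_intensity_div(lum_in, lum_conv, k=5, eps=1e-6):
    """High-frequency component = luminance divided by its low-pass (claim 5);
    correction intensity = first HF divided by second HF (claim 6)."""
    kernel = np.ones(k) / k
    hf_in = lum_in / (np.convolve(lum_in, kernel, mode="same") + eps)
    hf_conv = lum_conv / (np.convolve(lum_conv, kernel, mode="same") + eps)
    return hf_in / (hf_conv + eps)
```

Here an intensity of 1 (rather than 0) means "no correction", matching the multiplicative formulation.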
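Editorial note: claim 10 names two conversions — dynamic range compression and gamut mapping — without fixing either. A toy sketch of each, assuming a logarithmic tone curve for the compression and per-channel clipping as the simplest possible gamut mapping:

```python
import numpy as np

def dynamic_range_compress(lum, src_peak=4.0, dst_peak=1.0):
    """Compress an HDR luminance range into the printable range with a
    logarithmic tone curve (one common choice; the claim fixes no curve)."""
    return dst_peak * np.log1p(lum) / np.log1p(src_peak)

def gamut_map(rgb):
    """Simplest possible gamut mapping: clip each channel into range."""
    return np.clip(rgb, 0.0, 1.0)
```

Such a curve maps the source peak to the destination peak while preserving shadow detail; the clipping step is exactly the kind of conversion whose luminance loss the correction means of claim 1 compensates for.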
Applications Claiming Priority (3)
Application Number | Priority Date
---|---
JP2018125278 | 2018-06-29
JP2018125046 | 2018-06-29
JP2018207405 | 2018-11-02
Publications (2)
Publication Number | Publication Date |
---|---|
EP3588930A1 (en) | 2020-01-01 |
EP3588930B1 (en) | 2023-10-11 |
Family
ID=67003314
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19182062.0A Active EP3588930B1 (en) | 2018-06-29 | 2019-06-24 | Image processing apparatus, image processing method, and program |
Country Status (5)
Country | Link |
---|---|
US (1) | US11323576B2 (en) |
EP (1) | EP3588930B1 (en) |
JP (2) | JP7105737B2 (en) |
KR (1) | KR20200002683A (en) |
CN (1) | CN110661931B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011071468A1 (en) * | 2009-12-09 | 2011-06-16 | Thomson Licensing | Method for protecting satellite reception from strong terrestrial signals |
JP7329932B2 (en) * | 2019-02-27 | 2023-08-21 | キヤノン株式会社 | Image processing device, image processing method, and program |
JP7433918B2 (en) | 2020-01-09 | 2024-02-20 | キヤノン株式会社 | Image processing device and image processing method |
JP2022122194A (en) | 2021-02-09 | 2022-08-22 | キヤノン株式会社 | Image processing device, printer, and image processing method and program |
KR20230055361A (en) | 2021-10-18 | 2023-04-25 | 캐논 가부시끼가이샤 | Image processing apparatus, image processing method, and storage medium storing program |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2120449A1 (en) * | 2008-05-14 | 2009-11-18 | Thomson Licensing, Inc. | Method of processing of a compressed image into a gamut mapped image using spatial frequency analysis |
JP2011086976A (en) | 2009-10-13 | 2011-04-28 | Victor Co Of Japan Ltd | Image processor and image processing method |
US20170195526A1 (en) * | 2015-06-26 | 2017-07-06 | Shenzhen China Star Optoelectronics Technology Co., Ltd. | Gamut Mapping Method |
US20180035088A1 (en) * | 2016-08-01 | 2018-02-01 | Ricoh Company, Ltd. | Image processing apparatus, image projection apparatus, and image processing method |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6646762B1 (en) * | 1999-11-05 | 2003-11-11 | Xerox Corporation | Gamut mapping preserving local luminance differences |
JP4035278B2 (en) | 2000-07-14 | 2008-01-16 | キヤノン株式会社 | Image processing method, apparatus, and recording medium |
US7009734B2 (en) | 2000-08-22 | 2006-03-07 | Canon Kabushiki Kaisha | Method and apparatus for forming color transform lookup table, and image processing method |
JP2002262124A (en) | 2000-11-30 | 2002-09-13 | Canon Inc | Image processor and method, and recording control method and device and printer driver |
JP3762267B2 (en) | 2001-08-08 | 2006-04-05 | キヤノン株式会社 | Image processing apparatus and method |
JP4078132B2 (en) | 2002-06-28 | 2008-04-23 | キヤノン株式会社 | Image processing apparatus and method |
JP4639037B2 (en) | 2003-07-18 | 2011-02-23 | キヤノン株式会社 | Image processing method and apparatus |
JP4623630B2 (en) * | 2004-09-01 | 2011-02-02 | 株式会社リコー | Image processing apparatus, image processing method, program, image forming apparatus, and image forming system |
JP2006094161A (en) | 2004-09-24 | 2006-04-06 | Fuji Photo Film Co Ltd | Apparatus, method and program for image processing |
JP4736939B2 (en) * | 2005-08-16 | 2011-07-27 | コニカミノルタホールディングス株式会社 | Imaging apparatus and image processing method |
JP2007082181A (en) * | 2005-08-16 | 2007-03-29 | Konica Minolta Holdings Inc | Imaging apparatus and image processing method |
JP2008028920A (en) * | 2006-07-25 | 2008-02-07 | Canon Inc | Image copying apparatus |
JP4730837B2 (en) * | 2006-09-15 | 2011-07-20 | 株式会社リコー | Image processing method, image processing apparatus, program, and recording medium |
JP2008078737A (en) * | 2006-09-19 | 2008-04-03 | Ricoh Co Ltd | Image processor, image processing method, and recording medium |
JP4894595B2 (en) * | 2007-04-13 | 2012-03-14 | ソニー株式会社 | Image processing apparatus and method, and program |
JP2011130437A (en) * | 2009-12-21 | 2011-06-30 | Toshiba Corp | Image processing apparatus and image processing method |
JP5932392B2 (en) | 2012-02-28 | 2016-06-08 | キヤノン株式会社 | Image processing apparatus and image processing method |
EP3136375B1 (en) * | 2015-08-31 | 2020-07-08 | Lg Electronics Inc. | Image display apparatus |
JP2017146766A (en) | 2016-02-17 | 2017-08-24 | 株式会社Jvcケンウッド | Image processing device, image processing method, and image processing program |
JP6623832B2 (en) * | 2016-02-26 | 2019-12-25 | 富士通株式会社 | Image correction apparatus, image correction method, and computer program for image correction |
JP6805968B2 (en) | 2016-08-01 | 2020-12-23 | 株式会社リコー | Image processing device, image projection device, and image processing method |
2019
- 2019-06-11 JP JP2019109023A patent/JP7105737B2/en active Active
- 2019-06-18 US US16/444,223 patent/US11323576B2/en active Active
- 2019-06-24 EP EP19182062.0A patent/EP3588930B1/en active Active
- 2019-06-28 KR KR1020190077957A patent/KR20200002683A/en not_active Application Discontinuation
- 2019-06-28 CN CN201910575192.1A patent/CN110661931B/en active Active
2022
- 2022-07-07 JP JP2022109985A patent/JP7417674B2/en active Active
Non-Patent Citations (1)
Title |
---|
WU GUANGYUAN ET AL: "Cross-media color reproduction using the frequency-based spatial gamut mapping algorithm based on human color vision", PROCEEDINGS OF SPIE; [PROCEEDINGS OF SPIE ISSN 0277-786X VOLUME 10524], SPIE, US, vol. 10615, 10 April 2018 (2018-04-10), pages 106153Y - 106153Y, XP060101630, ISBN: 978-1-5106-1533-5, DOI: 10.1117/12.2302924 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3860105A1 (en) * | 2020-01-31 | 2021-08-04 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, program, and non-transitory computer-readable storage medium storing program |
US20210241056A1 (en) * | 2020-01-31 | 2021-08-05 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, non-transitory computer-readable storage medium storing program |
US11797806B2 (en) * | 2020-01-31 | 2023-10-24 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, non-transitory computer-readable storage medium storing program |
US12067304B2 (en) | 2020-01-31 | 2024-08-20 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium storing a program that performs dynamic range conversion based on obtained display information |
Also Published As
Publication number | Publication date |
---|---|
JP7105737B2 (en) | 2022-07-25 |
CN110661931B (en) | 2022-12-23 |
EP3588930B1 (en) | 2023-10-11 |
JP2020074501A (en) | 2020-05-14 |
JP2022125295A (en) | 2022-08-26 |
KR20200002683A (en) | 2020-01-08 |
JP7417674B2 (en) | 2024-01-18 |
US11323576B2 (en) | 2022-05-03 |
US20200007695A1 (en) | 2020-01-02 |
CN110661931A (en) | 2020-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3588930B1 (en) | Image processing apparatus, image processing method, and program | |
US10582087B2 (en) | Image processing apparatus, control method, and non-transitory computer-readable storage medium | |
JP5300595B2 (en) | Image processing apparatus and method, and computer program | |
US11146738B2 (en) | Image processing apparatus, control method, and non-transitory computer-readable storage medium | |
JP4498233B2 (en) | Image processing apparatus and image processing method | |
US10848644B2 (en) | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium | |
US20110216968A1 (en) | Smart image resizing with color-based entropy and gradient operators | |
JP2017126869A (en) | Image processing apparatus, image processing method and program | |
EP3860105A1 (en) | Image processing apparatus, image processing method, program, and non-transitory computer-readable storage medium storing program | |
EP3860104A1 (en) | Image processing apparatus, image processing method, program, and non-transitory computer-readable storage medium storing program | |
JP2008147937A (en) | Image processor and image processing method | |
US11514562B2 (en) | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium | |
JP4249034B2 (en) | Halo reduction in space-dependent color gamut mapping | |
US11070704B2 (en) | Image processing apparatus, control method for controlling image processing apparatus, and storage medium for removing a color from image data based on specified color | |
US11108999B2 (en) | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium | |
US11368606B1 (en) | Image processing apparatus and non-transitory computer readable medium | |
JP4375223B2 (en) | Image processing apparatus, image processing method, and image processing program | |
JP5632937B2 (en) | Image processing apparatus and method, and computer program | |
JP2005079950A (en) | Method, device and program for generating image structure reproduction characteristic | |
JP2011004187A (en) | Image processing apparatus and method therefor | |
JP2005341269A (en) | Image processing apparatus, image processing method and program | |
JP2009232267A (en) | Color processing apparatus, method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20200701 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20211021 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20230504 |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: SUWA, TETSUYA Inventor name: MURASAWA, KOUTA Inventor name: YAZAWA, MAYA Inventor name: OGAWA, SHUHEI Inventor name: KAGAWA, HIDETSUGU |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602019039008 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20231011 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1621394 Country of ref document: AT Kind code of ref document: T Effective date: 20231011 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240112 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240211 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240211 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240112 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240111 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240212 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240111 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20240521 Year of fee payment: 6 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240521 Year of fee payment: 6 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602019039008 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231011 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20240712 |