WO2005122552A1 - Image processing apparatus and method, and program - Google Patents
Image processing apparatus and method, and program
- Publication number
- WO2005122552A1 (PCT/JP2005/010350)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- component
- illumination component
- gain
- signal
- luminance signal
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
- H04N5/772—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/407—Control or modification of tonal gradation or of extreme levels, e.g. background level
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/20—Circuitry for controlling amplitude response
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/20—Circuitry for controlling amplitude response
- H04N5/205—Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic
- H04N5/208—Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic for compensating for attenuation of high frequency components, e.g. crispening, aperture distortion correction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/77—Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/84—Television signal recording using optical recording
- H04N5/85—Television signal recording using optical recording on discs or drums
Definitions
- the present invention relates to an image processing apparatus and method, and a program that can appropriately compress a captured digital image.
- representative examples of this contrast enhancement include a method called tone curve adjustment, which converts the pixel level of each pixel of an image by a function having a predetermined input/output relationship (hereinafter referred to as a level conversion function), and a method called histogram equalization, which adaptively changes the level conversion function according to the frequency distribution of pixel levels.
- Patent Document 1 JP 2001-298621 A
- the present invention has been made in view of such circumstances, and aims to improve contrast without impairing sharpness by appropriately compressing a captured digital image.
- a captured digital image can be appropriately compressed.
- FIG. 1 is a diagram showing a configuration example of a recording system of a digital video camera to which the present invention is applied.
- FIG. 2 is a block diagram showing an example of an internal configuration of a dynamic range compression unit.
- FIG. 3A is a diagram illustrating details of edge detection of an LPF with an edge detection function.
- FIG. 3B is a diagram illustrating details of edge detection of an LPF with an edge detection function.
- FIG. 4 is a diagram showing a level in an edge direction.
- FIG. 5A is a diagram showing an example of an offset table.
- FIG. 5B is a diagram showing an example of an offset table.
- FIG. 6A is a diagram showing another example of the offset table.
- FIG. 6B is a diagram showing another example of the offset table.
- FIG. 7A is a diagram showing an example of a reflectance gain coefficient table.
- FIG. 7B is a diagram showing an example of a reflectance gain coefficient table.
- FIG. 8A is a diagram showing an example of a chroma gain coefficient table.
- FIG. 8B is a diagram showing an example of a chroma gain coefficient table.
- FIG. 9 is a diagram showing an example of a determination area.
- FIG. 10 is a flowchart illustrating compression processing of a luminance signal.
- FIG. 11 is a flowchart illustrating a chroma signal compression process.
- FIG. 12A is a diagram showing a processing result of a luminance signal.
- FIG. 12B is a diagram showing a processing result of a luminance signal.
- FIG. 12C is a diagram showing a processing result of a luminance signal.
- FIG. 13A is a diagram showing a processing result of a chroma signal.
- FIG. 13B is a diagram showing a processing result of a chroma signal.
- FIG. 14 is a block diagram showing a configuration example of a computer.
- FIG. 1 is a diagram showing a configuration example of a recording system of a digital video camera 1 to which the present invention is applied.
- the solid-state imaging device 11 is composed of, for example, a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) sensor; it photoelectrically converts the incident light image of a subject to generate input image data S1, and outputs the generated input image data S1 to the camera signal processing unit 12.
- the camera signal processing unit 12 performs signal processing such as sampling processing and YC separation processing on the input image data S1 input from the solid-state imaging device 11, and outputs the resulting luminance signal Y1 and chroma signal C1 to the dynamic range compression unit 13.
- the dynamic range compression unit 13 compresses the luminance signal Y1 and the chroma signal C1 input from the camera signal processing unit 12 into a recording range so as to improve contrast without losing sharpness.
- the compressed luminance signal Y2 and chroma signal C2 are output to the recording format processing unit 14.
- the recording format processing unit 14 performs predetermined processing such as addition and modulation of an error correction code on the luminance signal Y2 and the chroma signal C2 input from the dynamic range compression unit 13, and records the signal S2 on the recording medium 15.
- the recording medium 15 is composed of, for example, a compact disc-read only memory (CD-ROM), a digital versatile disc (DVD), or a semiconductor memory.
- FIG. 2 is a block diagram showing an example of the internal configuration of the dynamic range compression unit 13.
- the example of FIG. 2 is roughly divided into a block for processing the luminance signal Y1 and a block for processing the chroma signal C1.
- the adders 25 to 34 form a block that processes the dark portion of the luminance signal Y1, while the adder 22, the aperture controller 23, the reflectance gain coefficient calculator 35, and the adder 36 form a block that processes the bright portion of the luminance signal Y1.
- the luminance signal Y1 output from the camera signal processing unit 12 is input to an LPF (lowpass filter) 21 with an edge detection function, the adder 22, and the aperture controller 23, and the chroma signal C1 is input to the multiplier 39.
- the LPF 21 with the edge detection function extracts an illumination component (an edge-preserved smoothed signal L) from the input luminance signal Y1, and supplies the extracted smoothed signal L to the adders 22, 25, 29, and 34, the reflectance gain coefficient calculation unit 35, the chroma gain coefficient calculation unit 38, and the chroma area determination unit 40.
- hereinafter, the edge-preserved smoothed signal L is abbreviated as signal L.
- the details of the edge detection of the LPF 21 with the edge detection function will be described with reference to FIGS. 3A and 3B.
- the uppermost left pixel is denoted pixel (1, 1), and the pixel m-th in the horizontal direction and n-th in the vertical direction is denoted pixel (m, n).
- the LPF 21 with the edge detection function processes the target pixel 51 (pixel (4, 4)) using the surrounding neighborhood of 7 vertical by 7 horizontal pixels.
- the LPF 21 with the edge detection function takes as median processing target pixels the pixels (4, 1), (4, 2), (4, 3), (4, 5), (4, 6), (4, 7), (1, 4), (2, 4), (3, 4), (5, 4), (6, 4), and (7, 4), and calculates their pixel values.
- for example, the weighted pixel value P over the pixels (1, 1) through (7, 1) is P = {pixel (1, 1) × 1/64} + {pixel (2, 1) × 6/64} + {pixel (3, 1) × 15/64} + {pixel (4, 1) × 20/64} + {pixel (5, 1) × 15/64} + {pixel (6, 1) × 6/64} + {pixel (7, 1) × 1/64}.
- the LPF 21 with the edge detection function calculates a median value based on the pixel of interest 51 and the three pixel groups 54 that are the left-side median processing target pixels, and averages the resulting values to obtain the left average luminance component 64.
- similarly, an upper average luminance component 61, a lower average luminance component 62, and a right average luminance component 63 are calculated, giving average luminance components in the four directions around the target pixel 51.
- the LPF 21 with the edge detection function calculates the difference Δv between the vertical average luminance components and the difference Δh between the horizontal average luminance components, and takes the direction with the larger difference, that is, the smaller correlation, as the edge direction. After the edge direction is determined, the target pixel 51 is compared against the levels in that direction.
- when the target pixel 51 lies within the level difference in the edge direction, that is, in range B (between the level L1 of the higher average luminance component and the level L2 of the lower average luminance component), the target pixel 51 is output as it is. In contrast, when the target pixel 51 is in range A (higher than the level L1 of the higher average luminance component) or in range C (lower than the level L2 of the lower average luminance component), the LPF 21 with the edge detection function replaces it with the smoothed signal L (for example, the average value of a 7 × 7 pixel low-pass filter) and outputs that.
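The edge-preserving decision described above can be sketched in Python. This is a simplified illustration rather than the patent's exact filter: the directional averages below apply the binomial weights 1, 6, 15, 20, 15, 6, 1 (sum 64) directly, omitting the median step over the pixel groups, and a plain 7 × 7 mean stands in for the smoothed signal L; all function names are illustrative.

```python
import numpy as np

# 7-tap binomial weights (1 + 6 + 15 + 20 + 15 + 6 + 1 = 64), as in the text
BINOMIAL_7 = np.array([1, 6, 15, 20, 15, 6, 1]) / 64.0

def directional_average(samples):
    """Weighted average of 7 samples taken along one direction."""
    return float(np.dot(samples, BINOMIAL_7))

def edge_preserving_lpf(block):
    """block: 7x7 neighbourhood; returns the output value for the centre pixel (4, 4)."""
    center = float(block[3, 3])
    up, down = directional_average(block[0, :]), directional_average(block[6, :])
    left, right = directional_average(block[:, 0]), directional_average(block[:, 6])
    # larger difference = smaller correlation = edge direction
    dv, dh = abs(up - down), abs(left - right)
    lo, hi = (min(up, down), max(up, down)) if dv >= dh else (min(left, right), max(left, right))
    if lo <= center <= hi:      # range B: between levels L1 and L2 -> keep the pixel
        return center
    return float(block.mean())  # ranges A/C: replace with the 7x7 smoothed value
```

A pixel sitting inside a genuine step edge (range B) is preserved, while an isolated outlier (range A or C) is replaced by the smoothed value, which is how smoothing proceeds without blurring edges.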
- the microcomputer 24 supplies to the adder 25 an input adjustment 1a representing an offset amount to be subtracted from the input luminance level of the illumination component offset table 27, and supplies to the multiplier 26 an input adjustment 1b indicating the gain amount by which the input luminance level of the illumination component offset table 27 is multiplied.
- the microcomputer 24 supplies to the adder 29 an input adjustment 2a representing an offset amount to be subtracted from the input luminance level of the illumination component offset table 31, and supplies to the multiplier 30 an input adjustment 2b representing the gain amount by which the input luminance level of the illumination component offset table 31 is multiplied.
- the microcomputer 24 supplies to the multiplier 28 a gain 1c representing the maximum gain amount by which the output luminance level of the illumination component offset table 27 is multiplied, and supplies to the multiplier 32 a gain 2c representing the maximum gain amount by which the output luminance level of the illumination component offset table 31 is multiplied.
- the microcomputer 24 also supplies to the reflectance gain coefficient calculator 35 an input adjustment add representing an offset amount subtracted from the input luminance level of the reflectance gain coefficient table, and an output adjustment offset representing the gain amount by which the output luminance level of the reflectance gain coefficient table is multiplied.
- the microcomputer 24 evaluates the histogram and adjusts the values of the input adjustments 1a, 1b, 2a, and 2b, the gains 1c and 2c, the input adjustment add, and the output adjustment offset, or adjusts these values based on the user's instruction. The input adjustments 1a, 1b, 2a, and 2b and the gains 1c and 2c may also be determined in advance in the manufacturing process.
- the adder 25 adds the input adjustment 1a supplied from the microcomputer 24 to the signal L supplied from the LPF 21 with the edge detection function, and supplies the result to the multiplier 26.
- the multiplier 26 multiplies the signal L supplied from the adder 25 by the input adjustment 1b supplied from the microcomputer 24, and supplies the result to the illumination component offset table 27.
- the illumination component offset table 27 adjusts, based on the input adjustments 1a and 1b applied via the adder 25 and the multiplier 26, the offset amount and the gain amount of the offset table that determines the boost amount of the luminance level in the ultra-low frequency range, and holds them. The illumination component offset table 27 then refers to the held offset table and supplies to the multiplier 28 the offset amount ofst1 corresponding to the luminance level of the signal L supplied via the adder 25 and the multiplier 26. The multiplier 28 multiplies the offset amount ofst1 supplied from the illumination component offset table 27 by the gain 1c supplied from the microcomputer 24, and supplies the result to the adder 33.
- FIG. 5A shows an example of an offset table held by the illumination component offset table 27.
- the horizontal axis represents the input luminance level
- the vertical axis represents the offset amount ofst1 (the same applies to FIG. 5B described later).
- in the offset table shown in FIG. 5A, assuming that the input luminance level (horizontal axis) normalized to 8 bits is x, the offset amount ofst1 (vertical axis) is represented by, for example, the following equation (1).
- FIG. 5B is a diagram for explaining the relationship between the offset table held by the illumination component offset table 27 and the adjustment parameters.
- the input adjustment 1a (arrow 1a in the figure) represents the offset amount subtracted from the input luminance level to the offset table. That is, when the input is fixed, the input adjustment 1a is the amount by which the offset table is shifted rightward.
- the input adjustment 1b (arrow 1b in the figure) indicates the gain amount by which the input luminance level to the offset table is multiplied. That is, when the input is fixed, the input adjustment 1b is the amount by which the area width of the offset table is increased or decreased, and corresponds to adjusting the luminance level range to be processed.
- the gain 1c (arrow 1c in the figure) represents the maximum gain amount by which the output luminance level from the offset table is multiplied. That is, the gain 1c is the amount by which the vertical axis of the offset table is scaled, and it directly affects the boost amount of the processing.
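As a rough sketch of how the adjustments 1a, 1b, and the gain 1c act on the table, the following uses a hypothetical triangular offset curve; the patent's actual equation (1) is not reproduced in this text, so the table shape and all numeric values below are purely illustrative.

```python
def example_table(x):
    """Placeholder offset curve: maximum boost 40 at black, falling to 0 at level 64."""
    return max(40.0 - 40.0 * x / 64.0, 0.0)

def offset_lookup(y, in_adj_1a=0.0, in_adj_1b=1.0, gain_1c=1.0):
    # 1a shifts the table rightward (subtracted from the input),
    # 1b scales the luminance range covered, 1c scales the output boost amount.
    return gain_1c * example_table((y - in_adj_1a) * in_adj_1b)
```

For example, doubling gain 1c doubles the boost at black, while a nonzero 1a moves the whole curve toward higher luminance levels, matching the arrows in FIG. 5B.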
- the adder 29 adds the input adjustment 2a supplied from the microcomputer 24 to the signal L supplied from the LPF 21 with the edge detection function, and supplies the result to the multiplier 30.
- the multiplier 30 multiplies the signal L supplied from the adder 29 by the input adjustment 2b supplied from the microcomputer 24, and supplies the signal L to the illumination component offset table 31.
- the illumination component offset table 31 adjusts, based on the input adjustments 2a and 2b applied via the adder 29 and the multiplier 30, the offset amount and the gain amount of the offset table that determines the boost amount of the low-frequency luminance level, and holds them. The illumination component offset table 31 then refers to the held offset table and supplies to the multiplier 32 the offset amount ofst2 corresponding to the luminance level of the signal L supplied via the adder 29 and the multiplier 30. The multiplier 32 multiplies the offset amount ofst2 supplied from the illumination component offset table 31 by the gain 2c supplied from the microcomputer 24, and supplies the result to the adder 33.
- FIG. 6A shows an example of an offset table held by the illumination component offset table 31.
- the horizontal axis represents the input luminance level
- the vertical axis represents the offset amount ofst2 (the same applies to FIG. 6B described later).
- in the offset table shown in FIG. 6A, assuming that the input luminance level (horizontal axis) normalized to 8 bits is x, the offset amount ofst2 (vertical axis) is represented by, for example, the following equation (2).
- FIG. 6B is a diagram for explaining the relationship between the offset table held by the illumination component offset table 31 and the adjustment parameters.
- the input adjustment 2a (arrow 2a in the figure) represents the offset amount subtracted from the input luminance level to the offset table. That is, when the input is fixed, the input adjustment 2a is the amount by which the offset table is shifted rightward.
- the input adjustment 2b (arrow 2b in the figure) indicates the gain amount by which the input luminance level to the offset table is multiplied. That is, when the input is fixed, the input adjustment 2b is the amount by which the area width of the offset table is increased or decreased, and corresponds to adjusting the luminance level range to be processed.
- the gain 2c (arrow 2c in the figure) represents the maximum gain amount by which the output luminance level from the offset table is multiplied. That is, the gain 2c is the amount by which the vertical axis of the offset table is scaled, and it directly affects the boost amount of the processing.
- the adder 33 adds the offset amount ofst1, supplied from the multiplier 28, which determines the boost amount of the luminance level in the ultra-low frequency range with its maximum gain amount adjusted, to the offset amount ofst2, supplied from the multiplier 32, which determines the boost amount of the low-frequency luminance level with its maximum gain amount adjusted, and supplies the resulting offset amount (the illumination component adjustment amount T(L)) to the adder 34.
- the adder 34 adds the illumination component adjustment amount T(L) supplied from the adder 33 to the signal L (the original illumination component) supplied from the LPF 21 with the edge detection function, and supplies the resulting gain-optimized illumination component (signal T(L)') to the adder 37.
- the adder 22 subtracts the signal L (illumination component) supplied from the LPF 21 with the edge detection function from the luminance signal Y1 (original signal) input from the camera signal processing unit 12, and supplies the resulting texture component (signal R) to the adder 36.
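The separation and later recombination around adders 22, 34, and 37 amount to the following identity-preserving sketch (function names are illustrative): with T(L) = 0 and no aperture correction, the pipeline reconstructs the original signal exactly.

```python
import numpy as np

def separate_texture(y1, illum):
    """Adder 22: texture (reflectance) component R = Y1 - L."""
    return y1 - illum

def recombine(illum, t_of_l, texture):
    """Adders 34 and 37: Y2 = (L + T(L)) + R'."""
    return (illum + t_of_l) + texture
```

This makes clear that only the boost T(L) and the aperture-corrected texture change the output; the decomposition itself is lossless.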
- the reflectance gain coefficient calculating unit 35 refers to the reflectance gain coefficient table, determines as the adaptation area the region of the boosted luminance signal outside the ultra-low-luminance and low-luminance boost areas, and supplies the determined area to the aperture controller 23. When determining the adaptation area, the reflectance gain coefficient calculation unit 35 adjusts the offset amount and the gain amount of the reflectance gain coefficient table based on the input adjustment add and the output adjustment offset supplied from the microcomputer 24.
- FIG. 7A shows an example of the reflectance gain coefficient table held by the reflectance gain coefficient calculator 35.
- the horizontal axis represents the input luminance level
- the vertical axis represents the reflectance gain amount (the same applies to FIG. 7B described later).
- FIG. 7B is a diagram for explaining the relationship between the reflectance gain coefficient table held by the reflectance gain coefficient calculator 35 and the adjustment parameter.
- the output adjustment offset (the arrow offset in the figure) represents the gain amount by which the output luminance level of the reflectance gain coefficient table is multiplied. That is, the output adjustment offset is the amount by which the reflectance gain coefficient table is raised along the vertical axis.
- the adjustment parameter A (arrow A in the figure) determines the maximum gain amount of the aperture controller 23.
- the input adjustment add (arrow add in the figure) represents the offset amount subtracted from the input luminance level to the reflectance gain coefficient table. That is, when the input is fixed, the input adjustment add is the amount by which the reflectance gain coefficient table is shifted rightward.
- the limit level indicates the maximum limit (maximum gain amount) set to avoid adding an excessive aperture signal in the aperture controller 23.
- the aperture control amount apgain (vertical axis) is represented by, for example, the following equation (3).
- A represents the maximum gain amount of the aperture controller 23
- offset represents the amount of shifting the reflectance gain coefficient table upward
- add represents the amount of shifting the reflectance gain coefficient table rightward.
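Equation (3) itself does not appear in this text, but the parameter roles just described can be illustrated with an assumed ramp-shaped table; the knee point and the curve shape below are invented for the example, and only the roles of A, offset, add, and the limit level follow the description.

```python
def aperture_gain(y, A=2.0, offset=0.0, add=0.0, limit=3.0, knee=64.0):
    """A: maximum gain; offset: upward shift; add: rightward shift; limit: clamp level."""
    # 0 inside the boosted dark range, rising to 1 above the assumed knee
    base = min(max((y - add) / knee, 0.0), 1.0)
    # the limit level keeps the controller from adding an excessive aperture signal
    return min(A * base + offset, limit)
```

Raising the offset lifts the whole curve, but the result is still clamped at the limit level, as FIG. 7B indicates.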
- the aperture controller 23 performs illumination-component-level-dependent aperture correction on the luminance signal Y1 input from the camera signal processing unit 12, based on the adaptation area determined by the reflectance gain coefficient calculation unit 35 so that the correction is applied outside the ultra-low-luminance and low-luminance boost areas, and supplies the result to the adder 36.
- the adder 36 adds the aperture-corrected luminance signal supplied from the aperture controller 23 to the signal R (the texture component obtained by subtracting the illumination component from the original signal) supplied from the adder 22, and supplies the result to the adder 37.
- the adder 37 adds the gain-optimized illumination component (signal T(L)') supplied from the adder 34 to the aperture-corrected texture component supplied from the adder 36, and outputs the resulting dynamic-range-compressed luminance signal Y2 to the recording format processing unit 14.
- the chroma gain coefficient calculating unit 38 refers to the chroma gain coefficient table, determines the gain amount by which a chroma signal of particularly low luminance level among the boosted luminance signals is to be multiplied, and supplies the gain amount to the multiplier 39.
- FIG. 8A shows an example of a chroma gain coefficient table held by the chroma gain coefficient calculating section 38.
- the horizontal axis represents the input luminance level
- the vertical axis represents the chroma gain amount; the value on this vertical axis carries an offset of 1 (the same applies to FIG. 8B described later).
- FIG. 8B is a diagram for explaining the relationship between the coefficient table held by the chroma gain coefficient calculation unit 38 and the adjustment parameters.
- the adjustment parameter B (arrow B in the figure) determines the maximum gain amount in the chroma gain coefficient table.
- the chroma gain amount cgain (vertical axis) is represented by, for example, the following equation (4).
- B represents the maximum gain in the chroma gain coefficient table.
- the multiplier 39 multiplies the input chroma signal C1 by the gain amount supplied from the chroma gain coefficient calculator 38, and supplies the result to an HPF (highpass filter) 41 and the adder 43.
- since the value on the vertical axis carries an offset of 1, when the adjustment parameter B is 0.0, for example, the chroma signal is output from the multiplier 39 unchanged from its input value.
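The offset-of-1 behaviour can be checked with a toy table. Equation (4) is not reproduced in this text, so the linear low-luminance ramp below is an assumption made only for illustration.

```python
def chroma_gain(y, B=0.5, knee=64.0):
    """1 + B * table(y); the constant 1 is the vertical-axis offset described above."""
    table = max(0.0, 1.0 - y / knee)  # assumed ramp: boosts only low-luminance chroma
    return 1.0 + B * table

def apply_chroma_gain(c1, y, B=0.5):
    """Multiplier 39: scale chroma C1 by the gain for luminance level y."""
    return chroma_gain(y, B) * c1
```

With B = 0.0 the gain is exactly 1 at every luminance level, so the chroma signal passes through unchanged, matching the passthrough behaviour described above.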
- the HPF 41 extracts a high-frequency component of the chroma signal supplied from the multiplier 39 and supplies it to the multiplier 42.
- the chroma area discriminating unit 40 selects, based on the illumination component of the boosted luminance signal, the area in which the LPF is applied to the chroma signal, and supplies the selected area to the multiplier 42.
- FIG. 9 shows an example of a discrimination area used by the chroma area discrimination unit 40 for selection.
- the horizontal axis represents the input luminance level
- the vertical axis represents the chroma area.
- between the boost area and the non-boost area, the chroma area value changes linearly; this adjusts the strength of the LPF.
- in the discrimination area shown in FIG. 9, if the input luminance level (horizontal axis) normalized to 8 bits is x, the chroma area carea (vertical axis) is expressed by, for example, the following equation (5).
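Equation (5) is not reproduced in this text, but a linear transition of the kind shown in FIG. 9 can be sketched as follows; the breakpoints x0 and x1 are illustrative assumptions, not values from the patent.

```python
def chroma_area(y, x0=32.0, x1=64.0):
    """carea: 1 inside the boosted low-luminance range, 0 outside, linear in between."""
    if y <= x0:
        return 1.0
    if y >= x1:
        return 0.0
    return (x1 - y) / (x1 - x0)  # linear ramp between the boost and non-boost areas
```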
- the multiplier 42 multiplies the high-frequency chroma signal supplied from the HPF 41 by the area value supplied from the chroma area discriminating unit 40, and supplies the result to the adder 43.
- the adder 43 subtracts the high-frequency chroma component supplied from the multiplier 42 from the chroma signal supplied from the multiplier 39, thereby reducing chroma noise (that is, LPF processing is effectively applied to the chroma signal in the low-luminance area), and outputs the resulting dynamic-range-compressed chroma signal C2 to the recording format processing unit 14.
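The HPF 41 / multiplier 42 / adder 43 path amounts to area-weighted low-pass filtering: where carea = 1 the full high-frequency component is subtracted, leaving only the low-pass part. A minimal one-dimensional sketch, with a 3-tap box filter standing in for the actual HPF:

```python
import numpy as np

def highpass(c):
    """Stand-in for HPF 41: signal minus a simple 3-tap moving average."""
    lp = np.convolve(c, np.ones(3) / 3.0, mode="same")
    return c - lp

def reduce_chroma_noise(c, carea):
    """Multiplier 42 and adder 43: subtract the area-weighted high-frequency part."""
    return c - carea * highpass(c)
```

Where carea is 0 the chroma signal is untouched; where it is 1 the result equals the low-pass-filtered signal, which is the noise-reduction behaviour described above.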
- the adder 25, the multiplier 26, the illumination component offset table 27, and the multiplier 28 form a block for determining the boost amount of the luminance level in the ultra-low frequency range.
- likewise, the adder 29, the multiplier 30, the illumination component offset table 31, and the multiplier 32 constitute a block that determines the boost amount of the low-frequency luminance level. This configuration is only an example, and the number of such blocks may be one, or two or more.
- in step S1, the LPF 21 with the edge detection function detects edges, at which the pixel value of the luminance signal Y1 changes sharply, in the image data input from the camera signal processing unit 12 (FIG. 3B), smoothes the luminance signal Y1 while preserving those edges, and extracts the illumination component (signal L).
- that is, the LPF 21 with the edge detection function judges whether or not to smooth the luminance signal Y1 depending on whether the target pixel 51 lies within the level difference in the edge direction (range B).
- in step S2, the adder 22 subtracts the illumination component extracted by the processing in step S1 from the luminance signal Y1 (original signal) input from the camera signal processing unit 12, and separates the texture component (signal R).
- in step S3, based on the adaptation area (FIG. 7B) determined by the reflectance gain coefficient calculation unit 35, the aperture controller 23 performs illumination-component-level-dependent aperture correction on the luminance signal Y1 input from the camera signal processing unit 12 so that the correction is applied outside the ultra-low-luminance and low-luminance boost areas.
- in step S4, the adder 33 adds the offset amount ofst1, which determines the boost amount of the luminance level in the ultra-low frequency range and is supplied from the illumination component offset table 27 via the multiplier 28, to the offset amount ofst2, which determines the boost amount of the low-frequency luminance level and is supplied from the illumination component offset table 31 via the multiplier 32, to obtain the illumination component adjustment amount T(L).
- in step S5, the adder 34 adds the illumination component adjustment amount T(L) calculated in step S4 to the illumination component extracted in step S1 to obtain the gain-optimized illumination component (signal T(L)').
- In step S6, the adder 37 adds the gain-optimized illumination component (signal T(L)') obtained in step S5 to the texture component aperture-corrected in step S3, obtaining the output luminance signal Y2 after dynamic range compression.
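The gain optimization of the illumination component (steps S4 to S6) can be sketched as below. The shape of the boost curve and the offset, gain, and maximum-gain values are hypothetical stand-ins for the contents of the illumination component offset tables, which this excerpt does not specify.

```python
def boost_amount(l, offset=16.0, gain=0.5, max_gain=48.0):
    """Illumination component adjustment amount T(L) (hypothetical curve).

    Dark illumination levels are raised the most; the raise falls off
    linearly as the level rises and is clamped to max_gain.
    """
    return max(0.0, min(max_gain, gain * (128.0 - l) + offset))

def compress_dynamic_range(illumination, texture):
    """Steps S5-S6: T(L)' = L + T(L), then Y2 = T(L)' + R."""
    return [(l + boost_amount(l)) + r for l, r in zip(illumination, texture)]
```

With these illustrative values, an illumination level of 10 is boosted by the clamped maximum of 48 while a level of 200 is passed through unchanged, so contrast is added only where the scene is dark.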
- The output luminance signal Y2 after dynamic range compression obtained by the above processing is output to the recording format processing unit 14.
- In step S21, the chroma gain coefficient calculator 38 calculates the amplification factor (gain) of the chroma signal C1 (FIG. 8B) from the illumination component of the luminance signal Y1, which was extracted by the processing of step S1 described above from the image data input from the camera signal processor 12.
- In step S22, the chroma area determination unit 40 selects the noise reduction area of the chroma signal C1 (that is, the area to which the LPF is applied) from the illumination component of the luminance signal Y1 extracted by the processing of step S1 described above (FIG. 9).
- In step S23, the adder 43 reduces the chroma noise of the gain-applied low-luminance chroma signal based on the noise reduction area selected in step S22, obtaining the chroma signal C2 after dynamic range compression.
- the output chroma signal C2 after the dynamic range compression obtained by the above processing is output to the recording format processing unit 14.
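The chroma path (steps S21 to S23) can be sketched in the same spirit. The gain curve tied to the luminance illumination level and the moving-average LPF below are illustrative assumptions; the patent's tables and filter are not given in this excerpt.

```python
def chroma_gain(l, boost_threshold=64.0, max_gain=1.5):
    """Gain for the chroma signal, tied to the luminance illumination level.

    Where the luminance boost raises dark regions, the chroma is amplified
    by a matching factor so colours keep their saturation (hypothetical
    curve).
    """
    if l >= boost_threshold:
        return 1.0
    return 1.0 + (max_gain - 1.0) * (boost_threshold - l) / boost_threshold

def reduce_chroma_noise(c, in_noise_area, radius=1):
    """Apply a simple moving-average LPF only inside the noise-reduction area."""
    out = []
    for i, v in enumerate(c):
        if in_noise_area[i]:
            lo, hi = max(0, i - radius), min(len(c), i + radius + 1)
            out.append(sum(c[lo:hi]) / (hi - lo))
        else:
            out.append(v)
    return out
```

Restricting the LPF to the selected area is the point of step S22: chroma noise amplified by the low-luminance gain is smoothed, while chroma detail outside the boost region is left untouched.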
- FIG. 12A shows a histogram of the luminance component when low-luminance boost processing is simply performed, without considering the influence of bit compression, on input image data in an 8-bit range that varies from 0 to 255.
- FIG. 12B shows an example of a histogram of the luminance component when input image data in the 8-bit range of 0 to 255 is compressed according to the present invention.
- FIG. 12C shows an example of the cumulative histograms of FIGS. 12A and 12B. In the figure, H1 indicates the cumulative histogram of FIG. 12A, and H2 indicates the cumulative histogram of FIG. 12B.
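A cumulative histogram of the kind compared in FIG. 12C is simply the running sum of the per-level counts; a minimal sketch (function names are illustrative):

```python
def histogram(values, levels=256):
    """Count occurrences of each integer level 0..levels-1."""
    h = [0] * levels
    for v in values:
        h[v] += 1
    return h

def cumulative(h):
    """Running sum of the histogram, as compared in FIG. 12C."""
    out, total = [], 0
    for count in h:
        total += count
        out.append(total)
    return out
```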
- FIG. 13A shows an example of a histogram of the chroma component when low-luminance boost is simply performed, without considering the influence of bit compression, on input image data in an 8-bit range that varies from -128 to 127.
- FIG. 13B shows an example of a histogram of the chroma component when input image data in the 8-bit range of -128 to 127 is compressed according to the present invention.
- FIGS. 13A and 13B show histograms over the -50 to 50 level range.
- In the histogram of FIG. 13A, the low-level chroma component (the center of the horizontal axis in the figure) does not transition smoothly; under the influence of noise, it forms a comb shape.
- In contrast, in FIG. 13B an appropriate gain is applied to the chroma component corresponding to the boost region of the luminance component and the noise is reduced, so the level transitions smoothly outward from the low-level area.
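The comb shape in FIG. 13A can be reproduced with a toy experiment: when already-quantized chroma levels are multiplied by a gain greater than 1 and re-quantized, the outputs land on a sparse subset of levels, leaving unpopulated bins in between. The function name and values below are illustrative only.

```python
def boosted_histogram(levels, gain):
    """Histogram of integer chroma levels after a plain multiplicative boost.

    Because the input is already quantized, a gain > 1 maps the inputs onto
    a sparse subset of output levels, leaving comb-like gaps in between.
    """
    counts = {}
    for v in levels:
        out = round(v * gain)
        counts[out] = counts.get(out, 0) + 1
    return counts

# Every level -10..10 is occupied once before the boost...
before = boosted_histogram(range(-10, 11), gain=1.0)
# ...but after a 2x boost only every other output level is populated.
after = boosted_histogram(range(-10, 11), gain=2.0)
```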
- the contrast can be improved without impairing the sharpness, and the captured digital image can be appropriately compressed.
- Even when the low-luminance part contains only latent gradations, applying the compression processing of the present invention makes it possible to obtain an image with smoother gradation in the boosted portion.
- That is, even for image data of low-luminance parts of a digital image captured by the solid-state imaging device 11, where sufficient contrast conventionally could not be obtained, the image data of parts other than edges can be amplified while the edges are preserved.
- The dynamic range compression unit 13 can also be realized by a computer 100 such as the one shown in the figure.
- a CPU (Central Processing Unit) 101 executes various processes according to a program stored in a ROM 102 or a program loaded from a storage unit 108 into a RAM (Random Access Memory) 103.
- the RAM 103 also appropriately stores data necessary for the CPU 101 to execute various processes.
- the CPU 101, the ROM 102, and the RAM 103 are mutually connected via a bus 104.
- An input / output interface 105 is also connected to the bus 104.
- the input / output interface 105 is connected to an input unit 106 including a keyboard and a mouse, an output unit 107 including a display, a storage unit 108, and a communication unit 109.
- the communication unit 109 performs communication processing via a network.
- A drive 110 is connected to the input/output interface 105 as necessary; a removable medium 111 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on it as appropriate, and a computer program read from the medium is installed in the storage unit 108 as necessary.
- The recording medium that stores the program to be installed in and executed by the computer is distributed separately from the apparatus main body in order to provide the program to the user. It consists not only of removable media 111 such as magnetic disks (including flexible disks), optical disks (including CD-ROMs (Compact Disc - Read Only Memory) and DVDs (Digital Versatile Discs)), magneto-optical disks (including MDs (Mini-Discs) (registered trademark)), and semiconductor memory, but also of the ROM 102 and the hard disk included in the storage unit 108, on which the program is recorded and which are provided to the user already built into the apparatus main body.
- In this specification, the steps describing the program stored in the recording medium include not only processing performed in time series in the described order, but also processing executed in parallel or individually rather than necessarily in time series.
- In this specification, the term "system" refers to an entire apparatus composed of a plurality of devices.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05751456A EP1755331A4 (en) | 2004-06-10 | 2005-06-06 | PROGRAM, METHOD AND IMAGE PROCESSING DEVICE |
US11/629,152 US20080284878A1 (en) | 2004-06-10 | 2005-06-06 | Image Processing Apparatus, Method, and Program |
JP2006514495A JP4497160B2 (ja) | 2004-06-10 | 2005-06-06 | 画像処理装置および方法、並びにプログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004172212 | 2004-06-10 | ||
JP2004-172212 | 2004-06-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005122552A1 true WO2005122552A1 (ja) | 2005-12-22 |
Family
ID=35503493
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/010350 WO2005122552A1 (ja) | 2004-06-10 | 2005-06-06 | 画像処理装置および方法、並びにプログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US20080284878A1 (ja) |
EP (1) | EP1755331A4 (ja) |
JP (1) | JP4497160B2 (ja) |
KR (1) | KR20070026571A (ja) |
CN (1) | CN1965570A (ja) |
WO (1) | WO2005122552A1 (ja) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4574457B2 (ja) * | 2005-06-08 | 2010-11-04 | キヤノン株式会社 | 画像処理装置およびその方法 |
JP4524711B2 (ja) * | 2008-08-04 | 2010-08-18 | ソニー株式会社 | ビデオ信号処理装置、ビデオ信号処理方法、プログラム |
JP5569042B2 (ja) * | 2010-03-02 | 2014-08-13 | 株式会社リコー | 画像処理装置、撮像装置及び画像処理方法 |
US8766999B2 (en) * | 2010-05-20 | 2014-07-01 | Aptina Imaging Corporation | Systems and methods for local tone mapping of high dynamic range images |
CN102095496B (zh) * | 2010-12-06 | 2012-09-05 | 宁波耀泰电器有限公司 | 一种测量动态照度分布的方法 |
JP5488621B2 (ja) | 2012-01-11 | 2014-05-14 | 株式会社デンソー | 画像処理装置、画像処理方法、及びプログラム |
US10368097B2 (en) * | 2014-01-07 | 2019-07-30 | Nokia Technologies Oy | Apparatus, a method and a computer program product for coding and decoding chroma components of texture pictures for sample prediction of depth pictures |
US20160111063A1 (en) * | 2014-10-20 | 2016-04-21 | Trusight, Inc. | System and method for optimizing dynamic range compression image processing color |
CN110278425A (zh) * | 2019-07-04 | 2019-09-24 | 潍坊学院 | 图像增强方法、装置、设备和存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1051661A (ja) * | 1996-05-21 | 1998-02-20 | Samsung Electron Co Ltd | 低域フィルタリングとヒストグラム等化を用いた画質改善方法及びその回路 |
JP2000032482A (ja) * | 1998-07-14 | 2000-01-28 | Canon Inc | 信号処理装置、信号処理方法及び記憶媒体 |
JP2001024907A (ja) * | 1999-07-06 | 2001-01-26 | Matsushita Electric Ind Co Ltd | 撮像装置 |
JP2001298621A (ja) * | 2000-02-07 | 2001-10-26 | Sony Corp | 画像処理装置及びその方法 |
JP2003101815A (ja) * | 2001-09-26 | 2003-04-04 | Fuji Photo Film Co Ltd | 信号処理装置及び信号処理方法 |
JP2004120224A (ja) * | 2002-09-25 | 2004-04-15 | Fuji Photo Film Co Ltd | 画像補正処理装置及びプログラム |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07231396A (ja) * | 1993-04-19 | 1995-08-29 | Mitsubishi Electric Corp | 画質補正回路 |
KR0170657B1 (ko) * | 1994-08-31 | 1999-03-30 | 김광호 | 색신호에 있는 윤곽을 보정하는 방법 및 이를 칼라 비디오기기에서 구현하기 위한 회로 |
JP3221291B2 (ja) * | 1995-07-26 | 2001-10-22 | ソニー株式会社 | 画像処理装置、画像処理方法、ノイズ除去装置及びノイズ除去方法 |
DE19637613C2 (de) * | 1996-09-16 | 2000-02-24 | Heidelberger Druckmasch Ag | Druckmaschine zum Erzeugen eines Bildes mittels Tonpartikeln |
US6694051B1 (en) * | 1998-06-24 | 2004-02-17 | Canon Kabushiki Kaisha | Image processing method, image processing apparatus and recording medium |
JP4131348B2 (ja) * | 1998-11-18 | 2008-08-13 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
EP1075140A1 (en) * | 1999-08-02 | 2001-02-07 | Koninklijke Philips Electronics N.V. | Video signal enhancement |
US6724943B2 (en) * | 2000-02-07 | 2004-04-20 | Sony Corporation | Device and method for image processing |
JP4556276B2 (ja) * | 2000-03-23 | 2010-10-06 | ソニー株式会社 | 画像処理回路及び画像処理方法 |
WO2002102086A2 (en) * | 2001-06-12 | 2002-12-19 | Miranda Technologies Inc. | Apparatus and method for adaptive spatial segmentation-based noise reducing for encoded image signal |
JP3750797B2 (ja) * | 2001-06-20 | 2006-03-01 | ソニー株式会社 | 画像処理方法および装置 |
US7139036B2 (en) * | 2003-01-31 | 2006-11-21 | Samsung Electronics Co., Ltd. | Method and apparatus for image detail enhancement using filter bank |
- 2005
- 2005-06-06 EP EP05751456A patent/EP1755331A4/en not_active Withdrawn
- 2005-06-06 KR KR1020067025938A patent/KR20070026571A/ko not_active Application Discontinuation
- 2005-06-06 US US11/629,152 patent/US20080284878A1/en not_active Abandoned
- 2005-06-06 CN CNA2005800190560A patent/CN1965570A/zh active Pending
- 2005-06-06 JP JP2006514495A patent/JP4497160B2/ja not_active Expired - Fee Related
- 2005-06-06 WO PCT/JP2005/010350 patent/WO2005122552A1/ja not_active Application Discontinuation
Non-Patent Citations (1)
Title |
---|
See also references of EP1755331A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP1755331A1 (en) | 2007-02-21 |
US20080284878A1 (en) | 2008-11-20 |
CN1965570A (zh) | 2007-05-16 |
EP1755331A4 (en) | 2009-04-29 |
KR20070026571A (ko) | 2007-03-08 |
JP4497160B2 (ja) | 2010-07-07 |
JPWO2005122552A1 (ja) | 2008-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4497160B2 (ja) | 画像処理装置および方法、並びにプログラム | |
JP4894595B2 (ja) | 画像処理装置および方法、並びに、プログラム | |
US8144985B2 (en) | Method of high dynamic range compression with detail preservation and noise constraints | |
EP2216988B1 (en) | Image processing device and method, program, and recording medium | |
KR100782845B1 (ko) | 비로그 도메인 조도 수정을 이용한 디지털 영상 개선방법과 시스템 | |
US8520134B2 (en) | Image processing apparatus and method, and program therefor | |
US20120301050A1 (en) | Image processing apparatus and method | |
JP2001275015A (ja) | 画像処理回路及び画像処理方法 | |
US20100074521A1 (en) | Image Processing Apparatus, Image Processing Method, and Program | |
US20070103570A1 (en) | Apparatus, method, recording medium and program for processing signal | |
US20090073278A1 (en) | Image processing device and digital camera | |
JP2007049540A (ja) | 画像処理装置および方法、記録媒体、並びに、プログラム | |
JP4320572B2 (ja) | 信号処理装置および方法、記録媒体、並びにプログラム | |
US20100128332A1 (en) | Image signal processing apparatus and method, and program | |
US20090002562A1 (en) | Image Processing Device, Image Processing Method, Program for Image Processing Method, and Recording Medium Having Program for Image Processing Method Recorded Thereon | |
JP3184309B2 (ja) | 階調補正回路及び撮像装置 | |
JP2000156797A (ja) | 画像処理装置及び画像処理方法 | |
JP2010130150A (ja) | 階調補正装置および撮像装置 | |
US8693799B2 (en) | Image processing apparatus for emphasizing details of an image and related apparatus and methods | |
US7894686B2 (en) | Adaptive video enhancement gain control | |
JP5295854B2 (ja) | 画像処理装置及び画像処理プログラム | |
JP4479600B2 (ja) | 画像処理装置および方法、並びにプログラム | |
US7773824B2 (en) | Signal processing device and method, recording medium, and program | |
JP4304609B2 (ja) | 信号処理装置および方法、記録媒体、並びにプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2005751456 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006514495 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11629152 Country of ref document: US Ref document number: 1020067025938 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580019056.0 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2005751456 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1020067025938 Country of ref document: KR |