WO2013161839A1 - Image processing method and image processing device - Google Patents
Image processing method and image processing device
- Publication number
- WO2013161839A1 (PCT/JP2013/061969; JP2013061969W)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/70—Denoising; Smoothing
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20172—Image enhancement details
- G06T2207/20182—Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
- G06T2207/20192—Edge enhancement; Edge preservation
Definitions
- the present invention relates to an image processing method and an image processing apparatus.
- A technique for reducing the random noise contained in an image is indispensable for reproducing a captured image more clearly. One such technique is disclosed in Patent Document 1.
- The device of Patent Document 1 includes a plurality of arithmetic circuits that calculate a moving-average pixel count n from a predetermined formula for an arbitrary pixel of interest i in the main scanning direction of a color digital signal output from an input image processing circuit; a plurality of bit selection circuits that selectively output the pixel of interest i and the n preceding and following reference pixels j; a plurality of determination circuits that compare the absolute value of the difference between the output level of the pixel of interest i and each output level of the reference pixels j with a threshold; and a plurality of arithmetic circuits that perform moving-average processing on the signals output from the determination circuits.
- A reference pixel j is added to the moving average only when the absolute value of the difference between its output level and that of the pixel of interest i is equal to or less than the threshold; portions where this difference changes steeply beyond the threshold are excluded from the moving average, so that noise components can be removed effectively.
- However, the technique of Patent Document 1 cannot remove low-frequency noise whose period is equal to or greater than the smoothing filter size, and therefore could not effectively remove noise in an image while preserving the edge components and texture components of the image.
- Accordingly, an object of the present invention is to provide an image processing method and an image processing apparatus capable of effectively removing noise in an image while preserving the edge components and texture components of the image.
- To this end, the present invention obtains a pixel statistical value and edge information for each region of a multi-layer hierarchy that contains the pixel of interest and whose range narrows layer by layer. The difference information between the pixel statistical value of a given layer's region and that of a wider layer's region is corrected using the edge information; the pixel statistical value of the layer's region is then corrected using the corrected difference information and the pixel statistical value of the region wider than that layer's region; and the edge information is used to re-correct the difference information between the pixel statistical value of the layer's region before correction and after correction.
- The pixel of interest is corrected by repeating this correction and re-correction of the pixel statistical value of the layer's region in each layer in turn, from the region of maximum range until the region of minimum range is reached.
- The present invention also provides an image processing apparatus comprising: a pixel statistical value calculation unit that calculates a pixel statistical value for each region of the multi-layer hierarchy containing the pixel of interest and whose range narrows layer by layer; an edge information calculation unit that calculates edge information for each such region; and a correction unit that corrects the pixel statistical value of each layer's region using the corrected difference information and the pixel statistical value of the wider region.
- The correction unit corrects the pixel of interest by repeating the correction and re-correction of the pixel statistical value of the layer's region in each layer in turn until the region of minimum range is reached.
- FIG. 1 is a diagram for explaining an image processing method according to an embodiment of the present invention.
- FIG. 2 is a diagram showing an example of the Func function.
- FIG. 3 is a diagram showing a setting example of the parameter a in a wide area.
- FIG. 4 is a diagram for explaining an example of the shape of the re-correction function in the present embodiment.
- FIG. 5 is a diagram for explaining an example of the shape of the re-correction function in the present embodiment.
- FIG. 6 is a diagram for explaining an example of the shape of the re-correction function in the present embodiment.
- FIG. 7 is a block diagram of the image processing apparatus according to the present embodiment.
- FIG. 8 is a block diagram of the image processing apparatus according to the present embodiment.
- FIG. 1 is a diagram for explaining an image processing method according to an embodiment of the present invention.
- Although FIG. 1 shows the flow of processing for three-layer multi-resolution processing, the present invention can also be applied to two layers and can easily be extended to four or more layers.
- First, a spatial average value L3(x, y), which is the pixel statistical value of a wide area centered on the pixel position (x, y) (the pixel of interest), and the edge information of that area are obtained, and the spatial average value L2(x, y) of the middle area is corrected using the edge amount E3(x, y).
- Next, a signal is extracted, based on E3(x, y), from the difference between L2(x, y) and its corrected value, and the extracted signal is combined with L2′(x, y) to re-correct it, yielding the re-corrected middle-area spatial average L2″(x, y).
- Then, using L2″(x, y) and the middle-area edge amount E2(x, y), the spatial average value L1(x, y) of the narrow area is corrected in the same way, and finally the input pixel value Pin(x, y) is corrected to obtain the output pixel value Pout(x, y) by performing these corrections sequentially.
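The coarse-to-fine flow described above can be sketched as follows. Everything here is illustrative rather than the patent's exact formulas: the border clamping, the shapes of `func` and `f_resid`, the window placement in `edge_amount`, and the radii are assumptions, and the wide-average pre-correction of equations (6) and (7) is omitted for brevity.

```python
def box_mean(img, x, y, k):
    # eq. (1)/(2): arithmetic mean over the (2k+1) x (2k+1) window centred
    # on (x, y); coordinates are clamped at the border (an assumption --
    # the patent does not specify border handling)
    h, w = len(img), len(img[0])
    s = 0.0
    for j in range(-k, k + 1):
        for i in range(-k, k + 1):
            s += img[min(max(y + j, 0), h - 1)][min(max(x + i, 0), w - 1)]
    return s / (2 * k + 1) ** 2

def edge_amount(img, x, y, k):
    # eqs. (3)-(5), assumed form: differences of the window means above/below
    # and left/right of (x, y), added together
    ev = abs(box_mean(img, x, y - k, k) - box_mean(img, x, y + k, k))
    eh = abs(box_mean(img, x - k, y, k) - box_mean(img, x + k, y, k))
    return ev + eh

def func(d, a=2.0, limit=4.0):
    # stand-in for the Func curve of Fig. 2 (shape assumed): small differences
    # are fully pulled toward the wider average, large ones (edges) are left
    # alone; the correction magnitude never exceeds `limit`
    m = abs(d)
    if m <= a:
        out = -m
    elif m <= 2 * a:
        out = -(2 * a - m)      # taper the correction back to zero
    else:
        out = 0.0
    out = max(-limit, out)
    return out if d >= 0 else -out

def f_resid(r, e, lo=1.0, hi=4.0, a1=0.2, a2=0.8):
    # eq. (14)-style re-correction: restore a fraction of the residual that
    # grows with the edge amount (a1 <= a2)
    t = min(max((e - lo) / (hi - lo), 0.0), 1.0)
    return (a1 + (a2 - a1) * t) * r

def denoise_pixel(img, x, y, radii=(4, 2, 1)):
    # coarse-to-fine loop: correct each layer against the re-corrected wider
    # layer, then correct the input pixel value itself
    wider = box_mean(img, x, y, radii[0])
    for k in radii[1:]:
        cur = box_mean(img, x, y, k)
        corrected = cur + func(cur - wider)            # eq. (8)
        wider = corrected + f_resid(cur - corrected,   # eq. (13)
                                    edge_amount(img, x, y, k))
    p = img[y][x]
    corrected = p + func(p - wider)
    return corrected + f_resid(p - corrected, edge_amount(img, x, y, 1))
```

On a flat patch the pixel is returned unchanged; an isolated small deviation is pulled toward the local average, while a deviation larger than the taper range of `func` is treated as an edge and mostly preserved.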
- In the present embodiment, the pixel statistical value is the spatial average value of the target region; the spatial average value is, for example, the arithmetic mean, geometric mean, or weighted mean of the pixels existing in that region.
- The edge amount (edge information) is defined by the difference of a pixel statistic (average value, median, etc.) between the regions above, below, left, and right of the pixel of interest (input pixel).
- Below, the case where the edge amount is the difference between spatial average values is described, matching the choice of pixel statistical value.
- the processing in each layer in FIG. 1 is the same in the flow of processing except only the parameters for determining the correction amount. Therefore, as an example, the spatial average value L2 (x, y) in the middle region is corrected using the spatial average value L3 (x, y) in the wide region and the edge amount E3 (x, y) in the region. Details of the processing will be described.
- The spatial average value L3(x, y) of the wide area (range: −k3 to k3) and the spatial average value L2(x, y) of the middle area (range: −k2 to k2) at the pixel position (x, y) are calculated as in equations (1) and (2).
- Here, the wide area and the middle area are each specified with the same number of pixels vertically and horizontally, by k3 and k2 respectively.
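Equations (1) and (2) are plain box averages over square windows. A direct sketch (border handling by clamping is an assumption; the patent does not specify it):

```python
def spatial_average(img, x, y, k):
    """L(x, y) for window radius k: the arithmetic mean of the
    (2k+1) x (2k+1) pixels centred on (x, y), as in eq. (1)/(2)."""
    h, w = len(img), len(img[0])
    total = 0.0
    for j in range(-k, k + 1):
        for i in range(-k, k + 1):
            # clamp coordinates at the image border (assumed policy)
            total += img[min(max(y + j, 0), h - 1)][min(max(x + i, 0), w - 1)]
    return total / (2 * k + 1) ** 2

# L3 uses the wide radius k3, L2 the middle radius k2, with k3 > k2.
```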
- Next, the edge amount E3(x, y) in the wide area is calculated.
- First, the vertical edge amount EV3(x, y) and the horizontal edge amount EH3(x, y) are calculated as in equations (3) and (4), and these are added as in equation (5) to give the wide-area edge amount E3(x, y).
- Although the range of the wide area is specified by k3 with the same number of pixels vertically and horizontally, the vertical and horizontal pixel counts need not be equal.
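Equations (3) to (5) can be sketched under the definition given earlier (the edge amount as the difference of pixel averages between the regions above/below and left/right of the pixel of interest). The exact window placement is an assumption:

```python
def mean_of(img, xs, ys):
    # arithmetic mean over the given coordinate ranges, clamped at borders
    h, w = len(img), len(img[0])
    vals = [img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]
            for y in ys for x in xs]
    return sum(vals) / len(vals)

def edge_amount(img, x, y, k):
    """E(x, y) = EV + EH, eqs. (3)-(5): vertical and horizontal
    differences of the averages of the windows on either side of (x, y)."""
    ev = abs(mean_of(img, range(x - k, x + k + 1), range(y - k, y))
             - mean_of(img, range(x - k, x + k + 1), range(y + 1, y + k + 1)))
    eh = abs(mean_of(img, range(x - k, x), range(y - k, y + k + 1))
             - mean_of(img, range(x + 1, x + k + 1), range(y - k, y + k + 1)))
    return ev + eh
```

On a horizontal step edge the vertical term EV dominates; on a flat patch both terms vanish.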
- Next, using the composite weight α3(x, y) calculated from the edge amount E3(x, y), the spatial average value L3(x, y) of the wide area is corrected as in equation (6), and the corrected wide-area spatial average L3′(x, y) is obtained.
- The composite weight α3(x, y) is calculated as in equation (7), using the preset threshold values hi3 and lo3.
- Finally, using the corrected value L3′(x, y), the spatial average value L2(x, y) of the middle area is corrected as in equation (8).
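Equations (6) and (7) blend the wide-area average toward the middle-area average where the edge amount is large. A sketch; the blend direction and the clamped linear ramp of α3 between lo3 and hi3 are assumptions consistent with the thresholded shapes described elsewhere in the text, not the patent's exact formulas:

```python
def composite_weight(e, lo, hi):
    """alpha3(e), eq. (7) sketch: 0 below lo, 1 above hi, linear between."""
    if hi <= lo:
        raise ValueError("hi must exceed lo")
    return min(max((e - lo) / (hi - lo), 0.0), 1.0)

def correct_wide_average(l3, l2, e3, lo3, hi3):
    """L3'(x, y), eq. (6) sketch: near a strong edge (large E3) trust the
    narrower average L2; in flat areas keep the wide average L3."""
    a = composite_weight(e3, lo3, hi3)
    return (1.0 - a) * l3 + a * l2
```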
- As the correction function Func, the one shown in FIG. 2 is used.
- For example, the spatial average value L2(x, y) of the middle area at pixel position (x, y) is corrected by setting the difference diffin to (L2(x, y) − L3′(x, y)) and adding the correction amount diffout obtained from the correction function of FIG. 2 to L2(x, y).
- The parameters a, b, and limit in the correction function of FIG. 2 are determined for each resolution to be processed and each color component to be corrected.
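FIG. 2 itself is not reproduced here, so the following odd-symmetric shape is an assumption consistent with the surrounding text: small differences (noise) are pulled fully toward the wider average, large differences (edges) are left alone, and the correction magnitude is capped. The roles assigned to `a` (knee), `b` (taper width), and `limit` (cap) are likewise assumptions:

```python
def func(diff_in, a, b, limit):
    """Correction amount diffout for input difference diffin (Fig. 2 shape
    assumed): full suppression up to |d| = a, linear taper to zero at
    |d| = a + b, output magnitude clipped to `limit`."""
    d = abs(diff_in)
    if d <= a:
        out = -d                     # small difference: treat as noise
    elif d <= a + b:
        out = -(a + b - d) * a / b   # taper the correction toward zero
    else:
        out = 0.0                    # strong edge: no correction
    out = max(-limit, min(limit, out))
    return out if diff_in >= 0 else -out

# usage per eq. (8): L2_corrected = L2 + func(L2 - L3_prime, a, b, limit)
```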
- Further, the edge amount E3(x, y) calculated by equation (5) in each layer can be reflected in the Func function (correction function) that suppresses the noise component, so that the Func function changes from layer to layer.
- Specifically, a coefficient β3(x, y) whose value changes according to the edge amount E3(x, y) as in equation (9) is defined.
- Here, hi3 and lo3, the threshold values for E3(x, y), are preset values.
- The coefficient β3(x, y) defined by equation (9) is a real number from 0 to 1.0.
- the parameter a in the Func function is set as in the following equation (10).
- FIG. 3 shows the relationship between the edge amount E3 (x, y) and the parameter a.
- Here, a_lo3 is the value used for the parameter a when the edge amount E3(x, y) is smaller than the threshold lo3, and a_hi3 is the value used for the parameter a when the edge amount is larger than the threshold hi3.
- When the edge amount E3(x, y) lies between the thresholds lo3 and hi3, the parameter a takes a value between a_hi3 and a_lo3; a_hi3 is a real number of 0 or more, and a_lo3 is a real number satisfying a_lo3 ≥ a_hi3.
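Equations (9) and (10) can be written out directly; since a_lo3 ≥ a_hi3 ≥ 0, the knee of the correction curve shrinks as the edge amount grows, weakening noise suppression near edges:

```python
def beta(e, lo, hi):
    """beta3(x, y), eq. (9): clamped linear ramp of the edge amount."""
    return min(max((e - lo) / (hi - lo), 0.0), 1.0)

def param_a(e, lo3, hi3, a_lo3, a_hi3):
    """Parameter a of the Func curve, eq. (10): a_lo3 in flat areas (weak
    edges), a_hi3 on strong edges, interpolated in between (Fig. 3)."""
    assert a_lo3 >= a_hi3 >= 0.0   # constraint stated in the text
    b = beta(e, lo3, hi3)
    return (1.0 - b) * a_lo3 + b * a_hi3
```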
- Using the Func function determined in this way, the spatial average value L2(x, y) of the middle area is corrected as in equation (11).
- Thus, by using equations (1) to (8), or equations (1) to (5) together with equations (9) to (11), the corrected middle-area spatial average L2′(x, y) is obtained.
- Next, the difference value R2(x, y) between the spatial average L2(x, y) before correction and the corrected value L2′(x, y) is calculated; a signal is extracted from R2(x, y) by the function Fresid, and the extracted signal is combined with L2′(x, y) to re-correct it, giving the re-corrected middle-area spatial average L2″(x, y) as in equation (13).
- By extracting the value from the difference R2(x, y) in proportion to the edge amount of the target region, noise is removed while a sense of resolution is maintained.
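Equation (14) extracts a fraction of the residual R2 that grows with the edge amount: near edges most of the residual is restored (preventing edge loss), in flat areas little is restored (keeping noise suppressed). A sketch, with the clamped linear ramp between α1 and α2 an assumption about the curve of FIG. 4:

```python
def f_resid(r, e, lo, hi, a1, a2):
    """Eq. (14)-style residual extraction: return the fraction of the
    difference value r to add back; a1 <= a2, both in [0, 1]."""
    assert 0.0 <= a1 <= a2 <= 1.0
    if e <= lo:
        frac = a1          # flat area: restore little, keep noise out
    elif e >= hi:
        frac = a2          # edge: restore most of the residual
    else:               # texture: proportional to the edge amount
        frac = a1 + (a2 - a1) * (e - lo) / (hi - lo)
    return frac * r

# re-correction, eq. (13): L2_recorrected = L2_corrected + f_resid(R2, E3, ...)
```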
- The function Fresid used in equation (13) may or may not be changed for each layer.
- The function Fresid is not limited to equation (14); for example, configurations like equations (15) and (16) are also possible.
- Alternatively, the function Fresid can be expressed as equation (18) or equation (19).
- The sign function in equation (18) outputs the sign of its input, and λ′ in equations (18) and (19) is given by equation (20).
- Here, λ in equation (20) is a preset value.
- The threshold values hi and lo and the parameters α1 and α2 may or may not be changed in each layer, as with equations (14), (15), and (16).
- λ′ is not limited to equation (20); it can be replaced in the same manner that equations (15) and (16) replace equation (14).
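Equations (18) to (20) replace the proportional extraction with a soft threshold: the residual is shrunk toward zero by an edge-dependent amount λ′. Since equation (20) is not reproduced here, the ramp in `lam_prime` (shrinking less near edges) is an assumption:

```python
def soft_threshold_resid(r, lam_prime_val):
    """Eq. (18)-style extraction: sign(r) * max(|r| - lam', 0)."""
    s = 1.0 if r >= 0 else -1.0
    return s * max(abs(r) - lam_prime_val, 0.0)

def lam_prime(e, lam, lo, hi):
    """Eq. (20) sketch (assumed form): lam' falls from the preset lam in
    flat areas to 0 on strong edges, so edges keep their full residual."""
    t = min(max((e - lo) / (hi - lo), 0.0), 1.0)
    return lam * (1.0 - t)
```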
- the recorrection amount in noise suppression is controlled based on the edge information.
- FIG. 7 is a block diagram of the image processing apparatus according to the embodiment of the present invention.
- As described above, there are two ways to obtain the correction value: using equations (1) to (8), or using equations (1) to (5) together with equations (9) to (11).
- FIG. 7 shows an image processing apparatus for the case where correction values are obtained using equations (1) to (8).
- The image processing apparatus of this embodiment shown in FIG. 7 includes an area pixel value extraction unit 1, a spatial average value calculation unit 2, a correction unit 3, a re-correction unit 4, an output image control unit 5, and an edge information calculation unit 6.
- The area pixel value extraction unit 1 extracts, at each timing, the pixel values of the pixels in the wide area centered on pixel position (x, y) (the pixel of interest), the pixel values of the pixels in the middle area, the pixel values of the pixels in the narrow area, and the input pixel value Pin(x, y) (the pixel of interest), and outputs them to the spatial average value calculation unit 2 and the edge information calculation unit 6.
- the spatial average value calculation unit 2 receives the pixel value of each region from the region pixel value extraction unit 1, and calculates the spatial average value of the region.
- the calculated spatial average values of the wide area, the medium area, and the narrow area are output to the correction unit 3 and the re-correction unit 4.
- The edge information calculation unit 6 calculates the edge amount E3(x, y) in the wide area from the pixel values supplied by the area pixel value extraction unit 1. The vertical edge amount EV3(x, y) and the horizontal edge amount EH3(x, y) are calculated as in equations (3) and (4) and added as in equation (5) to give E3(x, y). The edge amount E2(x, y) of the middle area and the edge amount E1(x, y) of the narrow area are calculated similarly. Although only horizontal and vertical edge amounts are mentioned here, edge amounts in oblique directions may also be calculated and used.
- the correction unit 3 uses the combined weight ⁇ 3 (x, y) obtained from the edge amount E3 (x, y) calculated by the edge information calculation unit 6 to obtain a spatial average value in a wide area as shown in Equation (6).
- L3 (x, y) is corrected, and the corrected spatial average value L3 ′′ (x, y) of the wide area is calculated.
- the synthesis weight ⁇ 3 (x, y) is a preset threshold value hi3 and Using lo3, it is calculated as in equation (7).
- Then, using the corrected value L3′(x, y), the spatial average value L2(x, y) of the middle area is corrected as in equation (8), and the corrected spatial average L2′(x, y) is calculated. Similar corrections are applied to the spatial average value L1(x, y) and the input pixel value Pin(x, y).
- The re-correction unit 4 calculates the difference value between the spatial average value L2(x, y) before correction and the corrected spatial average value L2′(x, y), and, based on the edge amount E3(x, y) calculated by the edge information calculation unit 6, re-corrects L2′(x, y) as in equation (17) to obtain the re-corrected spatial average L2″(x, y). The same processing is applied to the corrected spatial average L1′(x, y) and the corrected input pixel value Pin′(x, y).
- The output image control unit 5 instructs the area pixel value extraction unit 1 to extract the pixel values of the next layer's area each time a corrected spatial average value is input, and feeds each re-corrected spatial average value back to the correction unit 3. When the re-corrected value for the pixel of interest itself is input, it outputs Pout(x, y) as the output pixel value.
- FIG. 8 shows an image processing apparatus according to an embodiment of the present invention in the case where correction values are obtained using Expressions (1) to (5) and Expressions (9) to (11).
- This image processing apparatus includes an area pixel value extraction unit 1, a spatial average value calculation unit 2, a correction unit 3, a re-correction unit 4, an output image control unit 5, an edge information calculation unit 6, and a correction function determination unit 7.
- The area pixel value extraction unit 1, spatial average value calculation unit 2, re-correction unit 4, output image control unit 5, and edge information calculation unit 6 operate in the same way as in the image processing apparatus shown in FIG. 7.
- The correction function determination unit 7 calculates the parameter a of the Func function (correction function) as in equations (9) and (10), and thereby determines the Func function for each layer.
- the correction unit 3 corrects the spatial average value of each layer using the Func function (correction function) of each layer determined by the correction function determination unit 7.
- As a result, loss of edges is prevented in edge regions, noise removal performance is maintained in flat regions, and the sense of resolution is improved in texture regions.
- Each unit described above can be configured by hardware, but can also be realized by a computer program; in that case, functions and operations equivalent to those of the embodiments above are realized by a processor operating according to a program stored in a program memory.
- Appendix 9: The image processing method according to any one of appendices 1 to 8, wherein the pixel statistical value is a spatial average value of pixels.
- Appendix 10: The image processing method according to appendix 9, wherein the spatial average value is any one of an arithmetic mean, a geometric mean, and a weighted mean of pixels.
- Appendix 11: An image processing apparatus comprising: pixel statistical value calculation means for calculating, for each region of a multi-layer hierarchy that contains the pixel of interest and whose range narrows layer by layer, a pixel statistical value of the pixels in that region; edge information calculation means for calculating edge information for each such region; and correction means for correcting the pixel statistical value of the layer's region using the corrected difference information and the pixel statistical value of a region wider than the layer's region, re-correcting, using the edge information, the difference information between the pixel statistical value of the layer's region before correction and after correction, and correcting the pixel of interest by repeating the correction and re-correction of the pixel statistical value of the layer's region in each layer in turn, from the region of maximum range to the region of minimum range.
- Appendix 12: The image processing apparatus according to appendix 11, wherein the correction means reduces the correction amount when the pixel of interest is included in an edge, increases the correction amount when the pixel of interest is included in a flat region, and uses a correction amount according to the edge information when the pixel of interest is included in a texture region.
- Appendix 13: The image processing apparatus according to appendix 11 or appendix 12, wherein the correction means calculates the difference information using the pixel statistical value of the layer's region, the pixel statistical value of the region of a layer wider than the layer's region, and the edge information in the region of the wider layer.
- Appendix 14: The image processing apparatus according to appendix 13, wherein the correction means does not correct the pixel statistical value of the layer when the edge information in the region of the wider layer exceeds a predetermined threshold.
- Appendix 15: The image processing apparatus according to any one of appendices 11 to 14, wherein the correction means changes the strength of correction for each layer when correcting the pixel statistical value of the layer's region using the corrected difference information and the pixel statistical value of the region wider than the layer's region.
- Appendix 16: The image processing apparatus according to appendix 15, wherein the correction means changes the strength of correction according to the amount of pixel-value variation caused by noise in the layer's region.
- Appendix 17: The image processing apparatus according to appendix 15, wherein the correction means changes the strength of correction according to the edge amount of the region wider than the layer's region.
- Appendix 18: The image processing apparatus according to any one of appendices 11 to 17, wherein the correction means outputs a value closer to zero as the difference information approaches zero, and weakens the correction as the difference information becomes larger.
- Appendix 19: The image processing apparatus according to any one of appendices 11 to 18, wherein the pixel statistical value is a spatial average value of pixels.
- Appendix 20: The image processing apparatus according to appendix 19, wherein the spatial average value is any one of an arithmetic mean, a geometric mean, and a weighted mean of pixels.
- Appendix 21: A program causing a computer to execute: pixel statistical value calculation processing that calculates, for each region of a multi-layer hierarchy that contains the pixel of interest and whose range narrows layer by layer, a pixel statistical value of the pixels in that region; edge information calculation processing that calculates edge information for each such region; and correction processing that corrects the pixel statistical value of the layer's region using the corrected difference information and the pixel statistical value of a region wider than the layer's region, re-corrects, using the edge information, the difference information between the pixel statistical value of the layer's region before correction and after correction, and corrects the pixel of interest by repeating the correction and re-correction of the pixel statistical value of the layer's region in each layer in turn, from the region of maximum range to the region of minimum range.
- Appendix 22: The program according to appendix 21, wherein the correction processing reduces the correction amount when the pixel of interest is included in an edge, increases the correction amount when the pixel of interest is included in a flat region, and uses a correction amount according to the edge information when the pixel of interest is included in a texture region.
- Appendix 23: The program according to appendix 21 or appendix 22, wherein the correction processing calculates the difference information using the pixel statistical value of the layer's region, the pixel statistical value of the region of a layer wider than the layer's region, and the edge information in the region of the wider layer.
- Appendix 24: The program according to appendix 23, wherein the correction processing does not correct the pixel statistical value of the layer when the edge information in the region of the wider layer exceeds a predetermined threshold.
- Appendix 25: The program according to any one of appendices 21 to 24, wherein the correction processing changes the strength of correction for each layer when correcting the pixel statistical value of the layer's region using the corrected difference information and the pixel statistical value of the region wider than the layer's region.
- Appendix 26: The program according to appendix 25, wherein the correction processing changes the strength of correction according to the amount of pixel-value variation caused by noise in the layer's region.
- Appendix 27: The program according to appendix 25, wherein the correction processing changes the strength of correction according to the edge amount of the region wider than the layer's region.
- Appendix 28: The program according to any one of appendices 21 to 27, wherein the correction processing outputs a value closer to zero as the difference information approaches zero, and weakens the correction as the difference information becomes larger.
- Appendix 29: The program according to any one of appendices 21 to 28, wherein the pixel statistical value is a spatial average value of pixels.
- Appendix 30: The program according to appendix 29, wherein the spatial average value is any one of an arithmetic mean, a geometric mean, and a weighted mean of pixels.
Abstract
Description
Next, the edge amount E3(x, y) in the wide area is calculated. First, the vertical edge amount EV3(x, y) and the horizontal edge amount EH3(x, y) are calculated as in equations (3) and (4), and these are added as in equation (5) to give the wide-area edge amount E3(x, y). Although the range of the wide area is specified by k3 with the same number of pixels vertically and horizontally, the vertical and horizontal pixel counts need not be equal.
Subsequently, using the composite weight α3(x, y) calculated from the edge amount E3(x, y), the spatial average value L3(x, y) of the wide area is corrected as in equation (6), and the corrected wide-area spatial average L3′(x, y) is obtained. The composite weight α3(x, y) is calculated as in equation (7) using the preset threshold values hi3 and lo3.
Finally, using the calculated spatial average L3′, the spatial average value L2(x, y) of the middle area is corrected as in equation (8).
As an example of the correction function Func, the one shown in FIG. 2 is used. For example, the middle-area spatial average L2(x, y) at pixel position (x, y) is corrected by setting diffin to (L2(x, y) − L3′(x, y)) and adding the correction amount diffout obtained from the correction function of FIG. 2 to L2(x, y). The parameters a, b, and limit in the correction function of FIG. 2 are determined for each resolution to be processed and each color component to be corrected.
The coefficient β3(x, y) defined by equation (9) is a real number from 0 to 1.0. Using the coefficient β3(x, y), the parameter a in the Func function is set as in equation (10). FIG. 3 shows the relationship between the edge amount E3(x, y) and the parameter a.
Here, a_lo3 is the value used for the parameter a when the edge amount E3(x, y) is smaller than the threshold lo3, and a_hi3 is the value used for the parameter a when the edge amount is larger than the threshold hi3. When E3(x, y) lies between lo3 and hi3, the parameter a takes a value between a_hi3 and a_lo3. Here, a_hi3 is a real number of 0 or more, and a_lo3 is a real number satisfying a_lo3 ≥ a_hi3.
In this way, using equations (1) to (8), or equations (1) to (5) together with equations (9) to (11), the corrected middle-area spatial average L2′(x, y) is obtained.
An example of the function Fresid in equation (13) is shown in equation (14).
Here α1 ≤ α2, and basically 0.0 ≤ α1 ≤ α2 ≤ 1.0. That is, with equation (14), when the pixel of interest is included in an edge (E3(x, y) > hi3), a high proportion of the value is extracted from the difference value R2(x, y), making the final correction amount small and preventing loss of the edge. When the pixel of interest is included in a flat region (E3(x, y) < lo3), a low proportion is extracted from R2(x, y), keeping the final correction amount large and maintaining noise removal performance in flat regions. Furthermore, when the pixel of interest is included in a texture region, a value proportional to the edge amount of the target region is extracted from R2(x, y), removing noise while maintaining a sense of resolution. FIG. 4 shows the relationship between the edge amount e and the output of Fresid for input r = 1 in equation (14).
FIGS. 5 and 6 show the relationship between the edge amount e and the output of Fresid for input r = 1 in equations (15) and (16).
Here, the function Fresid becomes equation (18) or equation (19). The sign function in equation (18) outputs the sign of its input, and λ′ in equations (18) and (19) is given by equation (20).
Here, λ in equation (20) is a preset value. The threshold values hi and lo and the parameters α1 and α2 may or may not be changed in each layer, as with equations (14), (15), and (16).
Appendix 1: An image processing method comprising: obtaining a pixel statistical value and edge information of pixels for each region of a multi-layer hierarchy that contains the pixel of interest and whose range narrows layer by layer; correcting, using the edge information, the difference information between the pixel statistical value of the layer's region and the pixel statistical value of the region of a wider layer; correcting the pixel statistical value of the layer's region using the corrected difference information and the pixel statistical value of the region wider than the layer's region; re-correcting, using the edge information, the difference information between the pixel statistical value of the layer's region before correction and the pixel statistical value of the layer's region after correction; and correcting the pixel of interest by repeating the correction and re-correction of the pixel statistical value of the layer's region in each layer in turn, from the region of maximum range to the region of minimum range.
Appendix 2: The image processing method according to appendix 1, wherein the re-correction reduces the correction amount when the pixel of interest is included in an edge, increases the correction amount when the pixel of interest is included in a flat region, and uses a correction amount according to the edge information when the pixel of interest is included in a texture region.
Appendix 3: The image processing method according to appendix 1 or appendix 2, wherein the difference information is calculated using the pixel statistical value of the layer's region, the pixel statistical value of the region of a layer wider than the layer's region, and the edge information in the region of the wider layer.
Appendix 4: The image processing method according to appendix 3, wherein the pixel statistical value of the layer is not corrected when the edge information in the region of the wider layer exceeds a predetermined threshold.
Appendix 5: The image processing method according to any one of appendices 1 to 4, wherein the strength of correction is changed for each layer when the pixel statistical value of the layer's region is corrected using the corrected difference information and the pixel statistical value of the region wider than the layer's region.
Appendix 6: The image processing method according to appendix 5, wherein the strength of correction is changed according to the amount of pixel-value variation caused by noise in the layer's region.
Appendix 7: The image processing method according to appendix 5, wherein the strength of correction is changed according to the edge amount of the region wider than the layer's region.
Appendix 8: The image processing method according to any one of appendices 1 to 7, wherein the correction outputs a value closer to zero as the difference information approaches zero, and is weakened as the difference information becomes larger.
Appendix 9: The image processing method according to any one of appendices 1 to 8, wherein the pixel statistical value is a spatial average value of pixels.
Appendix 10: The image processing method according to appendix 9, wherein the spatial average value is any one of an arithmetic mean, a geometric mean, and a weighted mean of pixels.
Appendix 11: An image processing apparatus comprising: pixel statistical value calculation means for calculating, for each region of a multi-layer hierarchy that contains the pixel of interest and whose range narrows layer by layer, a pixel statistical value of the pixels in that region; edge information calculation means for calculating edge information for each such region; and correction means for correcting the pixel statistical value of the layer's region using the corrected difference information and the pixel statistical value of a region wider than the layer's region, re-correcting, using the edge information, the difference information between the pixel statistical value of the layer's region before correction and after correction, and correcting the pixel of interest by repeating the correction and re-correction of the pixel statistical value of the layer's region in each layer in turn, from the region of maximum range to the region of minimum range.
Appendix 12: The image processing apparatus according to appendix 11, wherein the correction means reduces the correction amount when the pixel of interest is included in an edge, increases the correction amount when the pixel of interest is included in a flat region, and uses a correction amount according to the edge information when the pixel of interest is included in a texture region.
Appendix 13: The image processing apparatus according to appendix 11 or appendix 12, wherein the correction means calculates the difference information using the pixel statistical value of the layer's region, the pixel statistical value of the region of a layer wider than the layer's region, and the edge information in the region of the wider layer.
Appendix 14: The image processing apparatus according to appendix 13, wherein the correction means does not correct the pixel statistical value of the layer when the edge information in the region of the wider layer exceeds a predetermined threshold.
Appendix 15: The image processing apparatus according to any one of appendices 11 to 14, wherein the correction means changes the strength of correction for each layer when correcting the pixel statistical value of the layer's region using the corrected difference information and the pixel statistical value of the region wider than the layer's region.
Appendix 16: The image processing apparatus according to appendix 15, wherein the correction means changes the strength of correction according to the amount of pixel-value variation caused by noise in the layer's region.
Appendix 17: The image processing apparatus according to appendix 15, wherein the correction means changes the strength of correction according to the edge amount of the region wider than the layer's region.
Appendix 18: The image processing apparatus according to any one of appendices 11 to 17, wherein the correction means outputs a value closer to zero as the difference information approaches zero, and weakens the correction as the difference information becomes larger.
Appendix 19: The image processing apparatus according to any one of appendices 11 to 18, wherein the pixel statistical value is a spatial average value of pixels.
Appendix 20: The image processing apparatus according to appendix 19, wherein the spatial average value is any one of an arithmetic mean, a geometric mean, and a weighted mean of pixels.
Appendix 21: A program causing a computer to execute: pixel statistical value calculation processing that calculates, for each region of a multi-layer hierarchy that contains the pixel of interest and whose range narrows layer by layer, a pixel statistical value of the pixels in that region; edge information calculation processing that calculates edge information for each such region; and correction processing that corrects the pixel statistical value of the layer's region using the corrected difference information and the pixel statistical value of a region wider than the layer's region, re-corrects, using the edge information, the difference information between the pixel statistical value of the layer's region before correction and after correction, and corrects the pixel of interest by repeating the correction and re-correction of the pixel statistical value of the layer's region in each layer in turn, from the region of maximum range to the region of minimum range.
Appendix 22: The program according to appendix 21, wherein the correction processing reduces the correction amount when the pixel of interest is included in an edge, increases the correction amount when the pixel of interest is included in a flat region, and uses a correction amount according to the edge information when the pixel of interest is included in a texture region.
Appendix 23: The program according to appendix 21 or appendix 22, wherein the correction processing calculates the difference information using the pixel statistical value of the layer's region, the pixel statistical value of the region of a layer wider than the layer's region, and the edge information in the region of the wider layer.
Appendix 24: The program according to appendix 23, wherein the correction processing does not correct the pixel statistical value of the layer when the edge information in the region of the wider layer exceeds a predetermined threshold.
Appendix 25: The program according to any one of appendices 21 to 24, wherein the correction processing changes the strength of correction for each layer when correcting the pixel statistical value of the layer's region using the corrected difference information and the pixel statistical value of the region wider than the layer's region.
Appendix 26: The program according to appendix 25, wherein the correction processing changes the strength of correction according to the amount of pixel-value variation caused by noise in the layer's region.
Appendix 27: The program according to appendix 25, wherein the correction processing changes the strength of correction according to the edge amount of the region wider than the layer's region.
Appendix 28: The program according to any one of appendices 21 to 27, wherein the correction processing outputs a value closer to zero as the difference information approaches zero, and weakens the correction as the difference information becomes larger.
Appendix 29: The program according to any one of appendices 21 to 28, wherein the pixel statistical value is a spatial average value of pixels.
Appendix 30: The program according to appendix 29, wherein the spatial average value is any one of an arithmetic mean, a geometric mean, and a weighted mean of pixels.
2 Spatial average value calculation unit
3 Correction unit
4 Re-correction unit
5 Output image control unit
6 Edge information calculation unit
7 Correction function determination unit
Claims (10)
- An image processing method comprising:
obtaining, for each of multi-hierarchy areas that include a pixel of interest and sequentially narrow in range, a pixel statistical value of the pixels and edge information of the area;
correcting, using the edge information, difference information between the pixel statistical value of the area of the current hierarchy and the pixel statistical value of an area of a hierarchy wider than the area of the current hierarchy;
correcting the pixel statistical value of the area of the current hierarchy using the corrected difference information and the pixel statistical value of the area wider than the area of the current hierarchy;
re-correcting, using the edge information, difference information between the pixel statistical value of the area of the current hierarchy before correction and the pixel statistical value of the area of the current hierarchy after correction; and
correcting the pixel of interest by sequentially repeating, at each hierarchy from the maximum-range area down to the minimum-range area, the correction and re-correction of the pixel statistical value of the area of the current hierarchy.
- The image processing method according to claim 1, wherein the re-correction is performed with a smaller correction amount when the pixel of interest is included in an edge, with a larger correction amount when the pixel of interest is included in a flat area, and with a correction amount according to the edge information when the pixel of interest is included in a texture area.
- The image processing method according to claim 1 or claim 2, wherein the difference information is calculated using the pixel statistical value of the area of the current hierarchy, the pixel statistical value of the area of the hierarchy wider than the area of the current hierarchy, and the edge information in the area of the hierarchy wider than the area of the current hierarchy.
- The image processing method according to claim 3, wherein the pixel statistical value of the current hierarchy is not corrected when the edge information in the area of the hierarchy wider than the area of the current hierarchy exceeds a predetermined threshold.
- The image processing method according to any one of claims 1 to 4, wherein, when the pixel statistical value of the area of the current hierarchy is corrected using the corrected difference information and the pixel statistical value of the area wider than the area of the current hierarchy, the strength of the correction is changed for each hierarchy.
- The image processing method according to claim 5, wherein, when the pixel statistical value of the area of the current hierarchy is corrected using the corrected difference information and the pixel statistical value of the area wider than the area of the current hierarchy, the strength of the correction is changed according to the amount of pixel-value fluctuation caused by noise in the area of the current hierarchy.
- The image processing method according to claim 5, wherein, when the pixel statistical value of the area of the current hierarchy is corrected using the corrected difference information and the pixel statistical value of the area wider than the area of the current hierarchy, the strength of the correction is changed according to the edge amount of the area wider than the area of the current hierarchy.
- The image processing method according to any one of claims 1 to 7, wherein, when the pixel statistical value of the area of the current hierarchy is corrected using the corrected difference information and the pixel statistical value of the area wider than the area of the current hierarchy, a correction is performed that outputs a value closer to zero as the difference information approaches zero and that weakens the correction as the difference information becomes larger.
- An image processing device comprising:
pixel statistical value calculation means for calculating, for each of multi-hierarchy areas that include a pixel of interest and sequentially narrow in range, a pixel statistical value of the pixels in that area;
edge information calculation means for calculating edge information for each of the multi-hierarchy areas that include the pixel of interest and sequentially narrow in range; and
correction means for correcting the pixel of interest by correcting the pixel statistical value of the area of the current hierarchy using the corrected difference information and the pixel statistical value of the area wider than the area of the current hierarchy, re-correcting, using the edge information, the difference information between the pixel statistical value of the area of the current hierarchy before correction and the pixel statistical value of the area of the current hierarchy after correction, and sequentially repeating the correction and re-correction of the pixel statistical value of the area of the current hierarchy at each hierarchy from the maximum-range area down to the minimum-range area.
- The image processing device according to claim 9, wherein the correction means performs the correction with a smaller correction amount when the pixel of interest is included in an edge, with a larger correction amount when the pixel of interest is included in a flat area, and with a correction amount according to the edge information when the pixel of interest is included in a texture area.
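Reading claims 1 and 9 together, the coarse-to-fine loop can be sketched for a one-dimensional signal as follows. The window radii, the use of a window mean as the pixel statistical value, the window range (max − min) as the edge information, and the soft-shrink correction are all assumed concretizations of the claim language, not the embodiment disclosed in the specification.

```python
def denoise_pixel(signal, idx, radii=(8, 4, 2, 1), lam=4.0, edge_th=30.0):
    """Correct the pixel of interest by iterating from the widest to the
    narrowest hierarchical area around it (claims 1 and 9, as a sketch)."""
    def window(r):
        lo, hi = max(0, idx - r), min(len(signal), idx + r + 1)
        return signal[lo:hi]

    def mean(vals):
        return sum(vals) / len(vals)

    def shrink(d, lam):
        # soft correction: zero for small differences, weakened for large
        return max(abs(d) - lam, 0.0) * (1.0 if d >= 0 else -1.0)

    out = mean(window(radii[0]))            # statistic of the widest area
    for r in radii[1:]:
        stat = mean(window(r))              # statistic of the current hierarchy
        edge = max(window(r)) - min(window(r))
        diff = stat - out                   # difference to the wider result
        if edge > edge_th:                  # edge area: keep the local detail
            out = stat
        else:                               # flat/texture: suppress the difference
            out = out + shrink(diff, lam)
    return out
```

On a flat area every hierarchy agrees and the pixel converges to the local mean; near a strong edge the loop falls back to the narrow-window statistic, so the transition is not smeared by the wide windows.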
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014512625A JP6249173B2 (ja) | 2012-04-26 | 2013-04-24 | Image processing method and image processing device |
CN201380021821.7A CN104254872B (zh) | 2012-04-26 | 2013-04-24 | Image processing method and image processing device |
US14/397,037 US9501711B2 (en) | 2012-04-26 | 2013-04-24 | Image processing method and image processing device with correction of pixel statistical values to reduce random noise |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012100910 | 2012-04-26 | ||
JP2012-100910 | 2012-04-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013161839A1 true WO2013161839A1 (ja) | 2013-10-31 |
Family
ID=49483147
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/061969 WO2013161839A1 (ja) | Image processing method and image processing device | 2012-04-26 | 2013-04-24 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9501711B2 (ja) |
JP (1) | JP6249173B2 (ja) |
CN (1) | CN104254872B (ja) |
WO (1) | WO2013161839A1 (ja) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6256704B2 (ja) * | 2012-04-26 | 2018-01-10 | 日本電気株式会社 | Image processing method and image processing device |
JP6249173B2 (ja) * | 2012-04-26 | 2017-12-20 | 日本電気株式会社 | Image processing method and image processing device |
US9542617B2 (en) * | 2013-02-28 | 2017-01-10 | Nec Corporation | Image processing device and image processing method for correcting a pixel using a corrected pixel statistical value |
WO2016051716A1 (ja) * | 2014-09-29 | 2016-04-07 | 日本電気株式会社 | Image processing method, image processing device, and recording medium storing image processing program |
WO2017166301A1 (zh) * | 2016-04-01 | 2017-10-05 | 华为技术有限公司 | Image processing method, electronic device, and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002183727A (ja) * | 2000-12-19 | 2002-06-28 | Konica Corp | Image processing device |
JP2007018379A (ja) * | 2005-07-08 | 2007-01-25 | Konica Minolta Medical & Graphic Inc | Image processing method and image processing device |
JP2011041183A (ja) * | 2009-08-18 | 2011-02-24 | Sony Corp | Image processing device and method, and program |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3664231B2 (ja) | 2000-08-09 | 2005-06-22 | 日本電気株式会社 | Color image processing device |
EP1217826A3 (en) | 2000-12-19 | 2005-05-25 | Konica Corporation | Image processing apparatus |
KR100403601B1 (ko) | 2001-12-21 | 2003-10-30 | 삼성전자주식회사 | Apparatus and method for correcting edges of an image |
JP4535125B2 (ja) * | 2005-03-31 | 2010-09-01 | 株式会社ニコン | Image processing method |
US8351735B2 (en) * | 2006-10-18 | 2013-01-08 | Robert Bosch Gmbh | Image processing system, method and computer program for contrast enhancement of images |
WO2013027723A1 (ja) * | 2011-08-22 | 2013-02-28 | 日本電気株式会社 | Noise removal device, noise removal method, and program |
JP6256703B2 (ja) * | 2012-04-26 | 2018-01-10 | 日本電気株式会社 | Image processing method and image processing device |
JP6249173B2 (ja) * | 2012-04-26 | 2017-12-20 | 日本電気株式会社 | Image processing method and image processing device |
US20140056536A1 (en) * | 2012-08-27 | 2014-02-27 | Toshiba Medical Systems Corporation | Method and system for substantially removing dot noise |
US9542617B2 (en) * | 2013-02-28 | 2017-01-10 | Nec Corporation | Image processing device and image processing method for correcting a pixel using a corrected pixel statistical value |
2013
- 2013-04-24 JP JP2014512625A patent/JP6249173B2/ja active Active
- 2013-04-24 US US14/397,037 patent/US9501711B2/en active Active
- 2013-04-24 WO PCT/JP2013/061969 patent/WO2013161839A1/ja active Application Filing
- 2013-04-24 CN CN201380021821.7A patent/CN104254872B/zh active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002183727A (ja) * | 2000-12-19 | 2002-06-28 | Konica Corp | Image processing device |
JP2007018379A (ja) * | 2005-07-08 | 2007-01-25 | Konica Minolta Medical & Graphic Inc | Image processing method and image processing device |
JP2011041183A (ja) * | 2009-08-18 | 2011-02-24 | Sony Corp | Image processing device and method, and program |
Also Published As
Publication number | Publication date |
---|---|
CN104254872A (zh) | 2014-12-31 |
US20150098656A1 (en) | 2015-04-09 |
JPWO2013161839A1 (ja) | 2015-12-24 |
JP6249173B2 (ja) | 2017-12-20 |
CN104254872B (zh) | 2017-09-22 |
US9501711B2 (en) | 2016-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6213558B2 (ja) | Image processing method and image processing device | |
JP4681608B2 (ja) | Method and apparatus for image processing | |
JP6146574B2 (ja) | Noise removal device, noise removal method, and program | |
JP6249173B2 (ja) | Image processing method and image processing device | |
JP6369150B2 (ja) | Filtering method and filtering device for recovering anti-aliased edges | |
JP2009093323A (ja) | Image processing device and program | |
JP2012208553A (ja) | Image processing device, image processing method, and program | |
JPWO2009107197A1 (ja) | Image processing device, image processing method, and image processing program | |
JP6256703B2 (ja) | Image processing method and image processing device | |
JP6256704B2 (ja) | Image processing method and image processing device | |
JP7265316B2 (ja) | Image processing device and image processing method | |
JP4381240B2 (ja) | Image processing device, image display device using the same, image processing method, and program for causing a computer to execute the method | |
JP6256680B2 (ja) | Image processing method, image processing device, and image processing program | |
JP6079962B2 (ja) | Image processing method and image processing device | |
Okado et al. | Fast and high-quality regional histogram equalization | |
JP4913246B1 (ja) | Edge enhancement method or edge enhancement degree calculation method | |
JP5350497B2 (ja) | Motion detection device, control program, and integrated circuit | |
JP4992438B2 (ja) | Image processing device and image processing program | |
US20130114888A1 (en) | Image processing apparatus, computer program product, and image processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13781314 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014512625 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14397037 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13781314 Country of ref document: EP Kind code of ref document: A1 |