WO2014087909A1 - Image processing device - Google Patents

Image processing device

Info

Publication number
WO2014087909A1
Authority
WO
WIPO (PCT)
Prior art keywords
value
pixel
image processing
unit
signal
Prior art date
Application number
PCT/JP2013/081988
Other languages
French (fr)
Japanese (ja)
Inventor
善光 村橋
Original Assignee
シャープ株式会社
Priority date
Filing date
Publication date
Application filed by シャープ株式会社
Publication of WO2014087909A1 publication Critical patent/WO2014087909A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46 Colour picture communication systems
    • H04N1/56 Processing of colour picture signals
    • H04N1/60 Colour correction or control
    • H04N1/62 Retouching, i.e. modification of isolated colours only or in isolated picture areas only
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • H04N9/646 Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/77 Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase

Definitions

  • the present invention relates to an image processing apparatus, and more particularly to a technique for performing image quality improvement processing such as noise reduction processing and sharpening processing according to an image.
  • Japanese Patent Application Laid-Open No. 2010-152518 discloses a technique for sharpening an image, such as a landscape that contains a natural image of a human face, by weakening the sharpening processing for a specific color area such as a face image area relative to other areas. In this technique, the sharpness processing parameter for the specific color region is set to a level lower than the level set for regions other than the specific color region.
  • Japanese Patent Laid-Open No. 2010-152518 switches sharpness processing between a specific color region and a region other than the specific color region. For this reason, when the boundary between the specific color area and the area other than the specific color area is ambiguous, sharpness processing is finely switched at the boundary portion, which may cause artifacts.
  • When image quality improvement processing such as noise reduction processing or sharpening processing is performed uniformly on an entire image, artifacts may occur depending on the objects included in the image. For example, in an image that randomly contains fine linear textures, such as turf or green trees, the linear texture may be misrecognized as noise such as jaggies around contours and smoothed, which may result in an unnatural image. Furthermore, in such an image area the effect of the sharpening processing is perceived as weak.
  • An object of the present invention is to provide a technique for reducing artifacts generated when image processing such as noise reduction or sharpening is performed on an image.
  • An image processing apparatus includes: an image processing unit that performs image processing on the signal value of each pixel in an image signal using an input image processing parameter; a setting unit that sets, from the signal value of each pixel in the image signal, the likelihood of the pixel with respect to a predetermined object; and a parameter output unit that mixes a predetermined first parameter for use in the image processing for the object and a predetermined second parameter for use in the image processing for another object at a mixture ratio based on the likelihood set by the setting unit, and outputs the resulting image processing parameter to the image processing unit.
  • The setting unit converts the signal value of each pixel into a feature amount representing the magnitude of each component in a feature space composed of a plurality of components representing the features of the object, obtains a scalar value based on the feature amount, and sets the likelihood based on the scalar value.
  • the feature space is a color space including a first component and a second component constituting a color.
  • the feature space has a frequency space including a predetermined frequency component included in the object.
  • The setting unit converts the signal value of each pixel into the feature amount in the color space and, using a vector representing the range of the object in the feature space, obtains the scalar values mainly representing the first component and the second component.
  • The setting unit converts the signal value of each pixel into the feature amount in the color space, and obtains, as the scalar value, the distance between the feature amount and a reference point determined for each of a plurality of regions representing the range of the object in the color space.
  • the setting unit obtains the frequency component of the pixel in the frequency space from the signal value of each pixel, and sets the obtained frequency component as the scalar value of the pixel.
  • The image processing unit performs, as the image processing, at least one of first image processing that performs smoothing processing on the signal value in the image signal and second image processing that performs sharpening processing on the signal value in the image signal.
  • FIG. 1 is a block diagram illustrating a schematic configuration of the image processing apparatus according to the first embodiment.
  • FIG. 2 is a circuit diagram of the feature space conversion unit in the first embodiment.
  • FIG. 3A is a diagram illustrating the color of the detection target object on the UV plane.
  • FIG. 3B is a diagram illustrating a range that the scalar value S1 can take on the UV plane.
  • FIG. 3C is a diagram illustrating a possible range of the scalar value S2 on the UV plane.
  • FIG. 4 is a diagram illustrating a region on the UV plane corresponding to the scalar value.
  • FIG. 5 is a schematic circuit diagram of the parameter output unit in the first embodiment.
  • FIG. 6 is a diagram illustrating a schematic configuration of a noise reduction processing unit in the first embodiment.
  • FIG. 7 is a block diagram illustrating a configuration of the difference image generation unit in the first embodiment.
  • FIG. 8 is a block diagram illustrating a schematic configuration of the image processing apparatus according to the second embodiment.
  • FIG. 9A is a diagram illustrating a region on the UV plane representing the color of the detection target object in the second embodiment.
  • FIG. 9B is a diagram illustrating a region on the UV plane representing the color of the detection target object in the second embodiment.
  • FIG. 9C is a diagram illustrating a region on the UV plane representing the color of the detection target object in the second embodiment.
  • FIG. 10 is a diagram illustrating a region having a high likelihood on the UV plane in the second embodiment.
  • FIG. 11 is a block diagram illustrating a schematic configuration of the image processing apparatus according to the modification (1).
  • FIG. 12 is a block diagram illustrating a schematic configuration of the sharpening processing unit in the modified example (1).
  • FIG. 13 is a block diagram illustrating a configuration example of the high-frequency component output unit in Modification Example (1).
  • FIG. 14 is a block diagram illustrating a configuration example of the nonlinear filter in the modification example (1).
  • FIG. 15 is a block diagram illustrating a configuration example of the two-dimensional high-pass filter unit in the modification (1).
  • FIG. 16 is a block diagram illustrating a configuration example of the nonlinear arithmetic unit in the modification example (1).
  • An image processing apparatus includes: an image processing unit that performs image processing on the signal value of each pixel in an image signal using an input image processing parameter; a setting unit that sets, using the signal value of each pixel in the image signal, the likelihood of the pixel for a predetermined object; and a parameter output unit that outputs to the image processing unit the image processing parameter obtained by mixing a predetermined first parameter for use in the image processing for the object and a predetermined second parameter for use in the image processing for another object at a mixture ratio based on the likelihood set by the setting unit (first configuration).
  • With this configuration, the likelihood indicating the object-likeness is set for each pixel in the image signal, and image processing is performed on each pixel using a parameter corresponding to the likelihood.
  • In the first configuration, the setting unit may convert the signal value of each pixel into a feature amount representing the magnitude of each component in a feature space composed of a plurality of components representing the features of the object, obtain a scalar value based on the feature amount, and set the likelihood based on the scalar value.
  • the likelihood for an object is set for each pixel in a feature space that represents a feature of the object that may cause an artifact during image processing. Therefore, it is possible to perform appropriate image processing for the pixels having the object characteristics.
  • the feature space may be a color space including a first component and a second component constituting a color in the second configuration.
  • the present invention can be applied to an object having a color feature.
  • the feature space may include a frequency space including a predetermined frequency component included in the object.
  • the likelihood can be set using the frequency component included in the object.
  • The setting unit may convert the signal value of each pixel into the feature amount in the color space and, using a vector representing the range of the object in the feature space, obtain the scalar values mainly representing the first component and the second component.
  • The setting unit may convert the signal value of each pixel into the feature amount in the color space, and obtain, as the scalar value, the distance between the feature amount and a reference point determined for each of a plurality of regions representing the range of the object in the color space.
  • The setting unit may obtain the frequency component of the pixel in the frequency space from the signal value of each pixel, and use the obtained frequency component as the scalar value of the pixel.
  • The image processing unit may perform, as the image processing, at least one of first image processing that performs smoothing processing on the signal value in the image signal and second image processing that performs sharpening processing on the signal value in the image signal.
  • the likelihood for the detection target object is set according to the degree to which the characteristics of a predetermined object (hereinafter, detection target object) are included. Then, image quality improvement processing (image processing) according to the likelihood is performed.
  • the likelihood indicates the likelihood of the detection target object of each pixel value in the image signal.
  • In the following, an image signal containing a specific color area (here, a green area such as grass or trees) will be described as an example.
  • FIG. 1 is a block diagram illustrating a schematic configuration of an image processing apparatus according to the present embodiment.
  • the image processing apparatus 1 includes a feature space conversion unit 10, an RGB-YUV conversion unit 15, a setting unit 20A, a parameter output unit 30, an image processing unit 40, and a YUV-RGB conversion unit 50.
  • the feature space conversion unit 10 and the setting unit 20A are examples of a setting unit.
  • each part will be described.
  • the RGB-YUV conversion unit 15 converts the input RGB signal into a YUV signal according to the following equation (1) and outputs the YUV signal to the image processing unit 40.
  • the feature space conversion unit 10 converts an image signal in the RGB color space (hereinafter, RGB signal) into an image signal in the YUV color space (hereinafter, YUV signal), and outputs the YUV signal to the setting unit 20A and the image processing unit 40.
  • the detection target object is an image of grass, trees, and the like, and has a green feature.
  • the feature of the detection target object can be obtained more accurately by converting the input signal into a signal value in a color space composed of components representing the color.
  • In this embodiment, in order to obtain a UV signal that does not depend on luminance, the input signal is converted into a YUV signal in the YUV color space; however, the input signal may instead be converted into a signal in another color space such as xyY, L*a*b*, or L*u*v*.
  • the YUV color space is an example of a feature space
  • the YUV signal is an example of a feature amount that represents the size of a component in the feature space. That is, the input signal may be converted into a feature amount in a feature space made up of components representing the features of the detection target object in accordance with the features of the detection target object.
  • FIG. 2 is a circuit diagram for converting an input RGB signal into a YUV signal in the feature space conversion unit 10.
  • The R, G, and B signals input to each multiplier shown in FIG. 2 are multiplied by the coefficients (a11 to a33) corresponding to the multipliers, and the resulting products are added together. That is, the RGB signal is converted into a YUV signal by the conversion equation shown in the following equation (1-1).
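  • As a concrete illustration of the multiply-and-add structure of FIG. 2, the sketch below applies a 3 x 3 coefficient matrix to every pixel. The values of a11 to a33 are not given in this excerpt, so ITU-R BT.601 coefficients are used purely as an assumed example.

```python
import numpy as np

# Assumed coefficients (ITU-R BT.601); the patent's a11..a33 are not specified here.
RGB_TO_YUV = np.array([
    [ 0.299,    0.587,    0.114  ],   # Y = a11*R + a12*G + a13*B
    [-0.14713, -0.28886,  0.436  ],   # U = a21*R + a22*G + a23*B
    [ 0.615,   -0.51499, -0.10001],   # V = a31*R + a32*G + a33*B
])

def rgb_to_yuv(rgb):
    """Convert an (H, W, 3) RGB image to YUV with a per-pixel matrix product."""
    return rgb @ RGB_TO_YUV.T
```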
  • the setting unit 20A includes a vector calculation processing unit 21, a minimum value calculation processing unit 22, and a numerical value limiting processing unit 23.
  • the scalar value S1 represents the color difference from the detection target object mainly as the U component (an example of the first component), and the scalar value S2 represents the color difference from the detection target object as V This mainly represents the component (an example of the second component).
  • the minimum value calculation processing unit 22 selects the scalar value of the minimum value (hereinafter referred to as the minimum value S) from the scalar values S1 and S2 calculated by the vector calculation processing unit 21 and outputs the selected value to the numerical value limiting processing unit 23.
  • The numerical value limiting processing unit 23 converts the minimum value S output from the minimum value calculation processing unit 22 into the likelihood P.
  • the likelihood P is represented by a numerical value from the minimum value 0 to the maximum value 16.
  • FIG. 4 is a diagram illustrating a region on the UV plane corresponding to the scalar value (minimum value S). Curves L31 and L32 indicated by broken lines in the UV plane of FIG. 4 are obtained by connecting a line segment parallel to the line segment L1 shown in FIG. 3B and a line segment parallel to the line segment L2 shown in FIG. 3C, and they represent the boundary of the color (green) of the detection target object.
  • the region P1 is a region showing green and includes a color having a scalar value S of 16 or more.
  • the region P2 is a region including a green component, and includes a color having a scalar value S greater than 0 and less than 16.
  • The region P3 is not in the green region and includes colors for which the scalar value S is 0 or less. Therefore, colors included in the region P1 have the highest likelihood, and colors included in the region P3 have the lowest likelihood. Colors included in the region P2 take a likelihood between the maximum value and the minimum value.
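  • The flow from the scalar values S1 and S2 to the likelihood P can be sketched as follows. How the vector calculation processing unit 21 derives S1 and S2 is not reproduced in this excerpt, so they are simply taken as inputs here; the minimum value selection and the limiting to the range 0 to 16 follow the description above.

```python
def likelihood_from_scalars(s1, s2, p_max=16):
    """Minimum value selection (unit 22) followed by numerical value limiting (unit 23).

    s1, s2: scalar values from the vector calculation processing unit 21,
            larger values meaning the pixel color is closer to the detection target.
    Returns the likelihood P clipped to the range 0..p_max.
    """
    s = min(s1, s2)                  # minimum value S
    return max(0, min(p_max, s))     # likelihood P in [0, 16]
```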
  • the parameter output unit 30 calculates a parameter to be output to the image processing unit 40 based on the likelihood P output from the numerical value limiting processing unit 23 of the setting unit 20A.
  • a first parameter K1 is arbitrarily set as a parameter suitable for the detection target object
  • a second parameter K2 is arbitrarily set as a parameter suitable for another object.
  • the parameter output unit 30 calculates a parameter K (an example of an image processing parameter) obtained by mixing the first parameter K1 and the second parameter K2 at a ratio corresponding to the likelihood P.
  • FIG. 5 shows a circuit diagram for specifying the parameter K in the parameter output unit 30.
  • the parameter K is expressed by the following formula (4).
  • the parameter output unit 30 calculates the value of the parameter K by substituting the calculated mixing ratio ⁇ into the above equation (4).
  • For example, when the mixing ratio α is 0.5, the parameter K is (K1 + K2) / 2.
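  • A minimal sketch of the parameter output unit 30, assuming that the mixing ratio α is the likelihood P normalized by its maximum value (16) and that equation (4) is the usual linear blend; neither assumption is stated explicitly in this excerpt. With this form, P = 8 gives α = 0.5 and K = (K1 + K2) / 2, matching the example above.

```python
def mix_parameters(p, k1, k2, p_max=16):
    """Blend the object parameter K1 and the other-object parameter K2."""
    alpha = p / p_max                        # assumed mixing ratio derived from the likelihood P
    return alpha * k1 + (1.0 - alpha) * k2   # assumed linear form of equation (4)
```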
  • the image processing unit 40 includes a noise reduction processing unit 41.
  • Using the parameter K output from the parameter output unit 30, the image processing unit 40 causes the noise reduction processing unit 41 to perform noise reduction processing on the Y signal output from the RGB-YUV conversion unit 15 for each pixel.
  • the image processing unit 40 outputs the YUV signal for each pixel subjected to the noise reduction processing to the YUV-RGB conversion unit 50.
  • the YUV-RGB conversion unit 50 converts the YUV signal output from the image processing unit 40 into an RGB signal by the following equation (1-2) and outputs the RGB signal.
  • FIG. 6 is a block diagram illustrating a schematic configuration of the noise reduction processing unit 41.
  • the noise reduction processing unit 41 includes a difference image generation unit 410, a parameter adjustment unit 416, and a synthesis calculation unit 417.
  • the difference image generation unit 410 generates, for each pixel, a first difference signal ( ⁇ Y) obtained by performing a smoothing process on the input Y signal.
  • the difference image generation unit 410 only needs to perform a smoothing process on the Y signal for each pixel and output the difference signal ⁇ Y after the smoothing process.
  • With reference to FIG. 7, an example of the difference image generation unit 410 will be described.
  • the difference image generation unit 410 includes a contour direction estimation unit 411, a reference region load processing unit 412, a preprocessing unit 413, a direction evaluation unit 414, and a product-sum operation unit 415.
  • the contour direction estimation unit 411 estimates the contour direction for each pixel based on the signal value (luminance value) for each pixel represented by the Y signal output from the RGB-YUV conversion unit 15.
  • the contour direction is a direction orthogonal to the normal line of the contour line, that is, a tangential direction of the contour line.
  • the contour line refers to a line indicating a space where the signal value is substantially constant, and may be a curve or a straight line. Therefore, the contour is not limited to a region where the signal value changes rapidly according to a change in position.
  • the relationship between the contour line and the signal value corresponds to the relationship between the contour line and the altitude.
  • Since the position of each pixel is given discretely, and since the surroundings of a contour are affected by the noise to be improved in this embodiment, such as jaggies, dot disturbance, and mosquito noise, the contour direction cannot be determined simply by using a line passing through pixels of equal signal value as the line forming the contour. Therefore, it is assumed that the signal value is differentiable (that is, continuous) in the space representing the coordinates of each pixel.
  • the contour direction estimation unit 411 calculates the contour direction ⁇ based on, for example, Expression (5) based on the difference value of the signal value in the horizontal direction or the vertical direction for each pixel.
  • the contour direction ⁇ is a counterclockwise angle with respect to the horizontal direction (x direction).
  • x and y are horizontal and vertical coordinates, respectively.
  • Y (x, y) is a signal value at coordinates (x, y). That is, the contour direction ⁇ is calculated as an angle that gives a tangent value obtained by dividing the partial differentiation of the signal value Y (x, y) in the x direction by the partial differentiation of the signal value Y (x, y) in the y direction.
  • the Expression (5) can be derived from the relationship that the signal value Y (x, y) is constant even if the coordinates (x, y) are different.
  • Gx (x, y) and Gy (x, y) represent partial differentiation in the x direction and partial differentiation in the y direction of the signal value Y (x, y), respectively.
  • Gx (x, y) and Gy (x, y) may be referred to as x-direction partial differentiation and y-direction partial differentiation, respectively.
  • the position (coordinates) of the pixel (i, j) indicates the barycentric point of the pixel.
  • the variable a at the position of the pixel is represented as a (i, j) or the like.
  • The contour direction estimation unit 411 calculates the x-direction partial differential Gx(i, j) and the y-direction partial differential Gy(i, j) of the signal value Y(i, j) at each pixel (i, j) using, for example, Expressions (6) and (7), respectively.
  • i and j are integer values indicating indexes of the pixel of interest in the x direction and y direction, respectively.
  • a pixel of interest is a pixel that attracts attention as a direct processing target.
  • Wx (u ′, v ′) and Wy (u ′, v ′) indicate filter coefficients of the difference filter in the x direction and the y direction, respectively.
  • u and v are integer values indicating indices of the reference pixel in the x and y directions, respectively.
  • the reference pixel is a pixel that is within a range determined by a predetermined rule with the target pixel as a reference, and is a pixel that is referred to when processing the target pixel.
  • the reference pixel includes a target pixel.
  • The difference filter described above may include a filter coefficient Wx(u′, v′) for each of the reference pixels, 2n + 1 in the x direction and 2n + 1 in the y direction ((2n + 1) × (2n + 1) in total).
  • n is an integer value greater than 1 (for example, 2).
  • n is determined to be, for example, an integer value equal to the enlargement rate, an integer value obtained by rounding up the fractional part of the enlargement rate, or a value larger than either of these integer values, and is set to a value smaller than a predetermined maximum value.
  • The contour direction estimation unit 411 quantizes the contour direction θ(i, j) calculated from the computed x-direction partial differential Gx(i, j) and y-direction partial differential Gy(i, j), and calculates a quantized contour direction D(i, j) representing the quantized direction.
  • the contour direction estimation unit 411 uses, for example, Expression (8) when calculating the quantized contour direction D (i, j).
  • N d is a constant representing the number of quantized contour directions (number of quantized contour directions).
  • The number of quantized contour directions Nd is, for example, any value from 8 to 32. That is, the quantized contour direction D(i, j) is represented by an integer from 0 to Nd − 1 obtained by rounding the value of the contour direction θ divided by the quantization interval π/Nd.
  • the degree of freedom in the contour direction ⁇ is restricted, and the processing load described later is reduced.
  • When the absolute value of the x-direction partial differential Gx(i, j) is smaller than a predetermined minute real value (for example, 10^-6), tan⁻¹ is set to π/2. Alternatively, a tangent function taking the two arguments Gx and Gy may be prepared and used to obtain tan⁻¹, thereby avoiding the error due to division and the division by zero described above.
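  • The gradient, contour direction, and quantization steps (Expressions (5) to (8)) can be sketched as follows. The 3 x 3 Sobel kernels stand in for the unspecified difference filters Wx and Wy, atan2 is used as the two-argument arctangent mentioned above, and Nd = 16 is an arbitrary choice from the 8 to 32 range.

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed difference filters (Sobel); the patent's Wx, Wy coefficients are not given here.
WX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
WY = WX.T

def quantized_contour_direction(y, n_d=16):
    """Estimate the contour direction theta per pixel and quantize it to 0..n_d-1."""
    gx = convolve(y, WX, mode="nearest")      # x-direction partial differential Gx
    gy = convolve(y, WY, mode="nearest")      # y-direction partial differential Gy
    theta = np.arctan2(gx, gy)                # tan(theta) = Gx / Gy (Expression (5))
    # Divide by the quantization interval pi/n_d and round (Expression (8)); because a
    # shift of pi corresponds to exactly n_d steps, the modulo folds opposite directions.
    return np.rint(theta / (np.pi / n_d)).astype(int) % n_d
```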
  • the contour direction estimation unit 411 outputs quantized contour direction information representing the calculated quantized contour direction D (i, j) to the reference region load processing unit 412.
  • The reference region load processing unit 412 defines reference region load information for each pixel of interest (i, j) based on the quantized contour direction D(i, j) for each pixel represented by the quantized contour direction information input from the contour direction estimation unit 411.
  • The reference region load information is a set of weighting factors R(D(i, j), u′, v′) for the reference pixels (u′, v′) belonging to a reference region centered on a certain target pixel (i, j). This weighting factor may be referred to as a reference region load.
  • the size of the reference area in the reference area load processing unit 412 is determined in advance so as to be equal to the size of the reference area in the direction evaluation unit 414.
  • Based on the quantized contour direction D(i, j) for each pixel represented by the quantized contour direction information input from the contour direction estimation unit 411, the reference region load processing unit 412 determines the weighting factors R(D(i, j), u′, v′).
  • The reference region load processing unit 412 selects, from among the weighting factors R(D(i, j), u′, v′), the weighting factor R(D(i, j), u′, v′) of each reference pixel (u′, v′) for which it takes a value other than zero.
  • Such a reference pixel is called a contour direction reference pixel because it is located in a contour direction or a direction approximate to the contour direction from the target pixel (i, j).
  • The reference region load processing unit 412 generates reference region load information representing the weighting factor R(D(i, j), u′, v′) for each contour direction reference pixel, and outputs the generated reference region load information to the product-sum operation unit 415.
  • the reference region load processing unit 412 extracts quantized contour direction information representing the quantized contour direction D (u, v) related to each contour direction reference pixel from the input quantized contour direction information.
  • the reference area load processing unit 412 outputs the extracted quantized contour direction information to the direction evaluation unit 414.
  • the reference area load processing unit 412 extracts a luminance signal representing the signal value Y (u, v) related to each contour direction reference pixel from the Y signal output from the RGB-YUV conversion unit 15.
  • the reference area load processing unit 412 outputs the extracted luminance signal to the preprocessing unit 413.
  • For each pixel of interest (i, j), the preprocessing unit 413 extracts, from the luminance signal input from the reference region load processing unit 412, a luminance signal representing the signal value Y(u, v) of each reference pixel (u, v) belonging to the reference region centered on the pixel of interest (i, j). The preprocessing unit 413 subtracts the signal value Y(i, j) of the target pixel from the signal value Y(u, v) of each reference pixel represented by the extracted luminance signal to calculate the difference signal value Y(u, v) − Y(i, j). The preprocessing unit 413 generates a difference signal representing the calculated difference signal values, and outputs the generated difference signal to the product-sum operation unit 415.
  • the direction evaluation unit 414 calculates the direction evaluation value of each reference pixel belonging to the reference region centered on the target pixel for each target pixel based on the quantized contour direction D (i, j) for each pixel.
  • The direction evaluation unit 414 determines the direction evaluation value of each reference pixel so that the smaller the difference between the quantized contour direction D(i, j) of the target pixel (i, j) and the quantized contour direction D(u, v) of the reference pixel (u, v), the larger the direction evaluation value.
  • For example, the difference ΔD = D(u, v) − D(i, j) is calculated. When the absolute value |ΔD| is 0, the direction evaluation value F(|ΔD|) is determined as the maximum value 1, and when the two quantized contour directions are not approximate, F(|ΔD|) is determined as the minimum value 0. That is, the direction evaluation unit 414 may determine the direction evaluation value F(|ΔD|) so that it increases as the quantized contour direction D(i, j) of the target pixel (i, j) and the quantized contour direction D(u, v) of the reference pixel (u, v) become closer, that is, as the absolute value |ΔD| of the difference decreases.
  • Because the quantized contour direction is cyclic, the direction evaluation unit 414 also calculates a correction value by adding Nd to the other quantized contour direction, calculates the absolute value of the difference between the calculated correction value and the one quantized contour direction, and determines the intended direction evaluation value by using the absolute value calculated in this way as |ΔD|. With the direction evaluation value F(|ΔD|), the influence of a reference pixel (u, v) whose contour direction differs from the contour direction of the pixel of interest (i, j) can be ignored or reduced.
  • the size of the reference region to which the reference pixel (u, v) belongs may be 2n + 1 or larger.
  • the size of the reference region in the direction evaluation unit 414 may be different from the size of the reference region in the contour direction estimation unit 411. For example, the number of pixels in the horizontal direction and the vertical direction of the reference area in the direction evaluation unit 414 is 7 respectively, whereas the number of pixels in the horizontal direction and the vertical direction of the reference area in the contour direction estimation unit 411 is 5 respectively. There may be.
  • the direction evaluation unit 414 outputs direction evaluation value information representing the direction evaluation value F ( ⁇ D) of each reference pixel (u, v) to the product-sum operation unit 415 for each pixel of interest (i, j).
  • For each pixel of interest (i, j), the product-sum operation unit 415 receives the direction evaluation value information from the direction evaluation unit 414, the reference region load information from the reference region load processing unit 412, and the difference signal from the preprocessing unit 413. The product-sum operation unit 415 then calculates the smoothed difference value ΔY(i, j) from the direction evaluation value F(|ΔD|), the reference region load R(D(i, j), u′, v′), and the difference signal values using, for example, Expression (9).
  • Rs (D (i, j)) represents a function (region selection function) for selecting a contour direction reference pixel among the reference pixels related to the target pixel (i, j). That is, u ′, v′ ⁇ Rs (D (i, j)) indicates a contour direction reference pixel.
  • Expression (9) calculates, for each contour direction reference pixel, the product of the direction evaluation value F(|ΔD|), the reference region load R(D(i, j), u′, v′), and the difference signal value Y(u, v) − Y(i, j), takes the sum of these products over the reference region, and represents that the smoothed difference value ΔY(i, j) is calculated by dividing the calculated sum by the reference area N(i, j) (the number of reference pixels belonging to the reference region).
  • The product-sum operation unit 415 generates a smoothed difference signal representing the calculated smoothed difference value ΔY(i, j), and outputs the generated smoothed difference signal (ΔY(i, j)) as the first difference signal to the synthesis calculation unit 417.
  • In this manner, the difference image generation unit 410 calculates the contour direction of each pixel and smooths the target pixel using the signal values of reference pixels that lie in the contour direction of the target pixel or a direction approximating it and whose own contour direction is equal to or approximates that of the target pixel.
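  • A direct, unoptimized sketch of Expression (9) as summarized above: for each target pixel, the difference to each reference pixel is weighted by the direction evaluation value F(|ΔD|) and the reference region load R, then normalized by the reference area N. The simple forms assumed here for F (1 when the quantized directions match, 0 otherwise) and for R (1 for reference pixels lying along the contour direction) are illustrative only.

```python
import numpy as np

def smoothed_difference(y, d, n=2, n_d=16):
    """Smoothed difference dY(i, j) of Expression (9) for every interior pixel."""
    h, w = y.shape
    dy = np.zeros_like(y, dtype=float)
    for i in range(n, h - n):
        for j in range(n, w - n):
            theta = np.pi * d[i, j] / n_d
            acc = area = 0.0
            for di in range(-n, n + 1):
                for dj in range(-n, n + 1):
                    # Assumed reference region load R: 1 if the offset lies near the line
                    # through the target pixel in the contour direction, else 0.
                    r = 1.0 if abs(dj * np.sin(theta) - di * np.cos(theta)) <= 0.5 else 0.0
                    # Assumed direction evaluation value F(|dD|): 1 if equal, else 0.
                    f = 1.0 if d[i + di, j + dj] == d[i, j] else 0.0
                    acc += f * r * (y[i + di, j + dj] - y[i, j])
                    area += r
            dy[i, j] = acc / area if area > 0 else 0.0
    return dy
```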
  • The parameter adjustment unit 416 multiplies the first difference signal (ΔY(i, j)) for each pixel output from the product-sum operation unit 415 of the difference image generation unit 410 by the parameter K output from the parameter output unit 30, and outputs the resulting second difference signal (K · ΔY(i, j)) to the synthesis calculation unit 417.
  • the composition calculation unit 417 outputs, for each pixel, a Y ′ signal obtained by adding the second difference signal output from the parameter adjustment unit 416 and the Y signal output from the RGB-YUV conversion unit 15.
  • When the parameter K is 0, a Y signal equivalent to the luminance signal of the original image is output, and the effect of the noise reduction processing is minimized. When the parameter K satisfies 0 < K < 1, a second difference signal corresponding to the likelihood for the detection target object is added to the Y signal, so that artifacts caused by abrupt switching of the parameter K between 1 and 0 are reduced.
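  • The parameter adjustment unit 416 and the synthesis calculation unit 417 therefore reduce to a single scaled addition, sketched below: K = 0 leaves the original luminance untouched and K = 1 applies the full smoothing effect.

```python
def noise_reduce_pixel(y, delta_y, k):
    """Y' = Y + K * dY: the parameter K scales how much of the smoothed difference is applied."""
    return y + k * delta_y
```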
  • As described above, in this embodiment a scalar value is obtained for the signal value of each pixel in the image signal based on a plurality of components constituting the color of the detection target object in the color space (UV plane), in accordance with the characteristics of the detection target object. Then, a likelihood corresponding to the magnitude of the scalar value is set for each pixel, and image processing is performed using a parameter corresponding to the likelihood of the pixel. With this configuration, even for a pixel for which it is ambiguous whether it substantially matches the detection target object, image processing is performed using a parameter according to the likelihood of the pixel, so that image processing suited to the pixel is performed. As a result, artifacts due to image processing can be reduced compared with the case where image processing is performed using uniform parameters for the entire image.
  • In the first embodiment, the signal value of a pixel having the color (feature) of the detection target object is expressed as a scalar value on the UV plane, and the likelihood for the detection target object is obtained based on that scalar value. In the present embodiment as well, the likelihood for the detection target object is obtained based on a scalar value.
  • FIG. 8 is a block diagram showing a schematic configuration of the image processing apparatus according to the present embodiment.
  • The image processing device 1a differs from the first embodiment in that, instead of the setting unit 20A, it includes a setting unit 20B having a distance calculation processing unit 24 and a maximum value calculation processing unit 25.
  • the distance calculation processing unit 24 calculates the distance R between the reference point in the plurality of regions on the UV plane corresponding to the color (feature) of the detection target object and the position indicated by the pixel signal value.
  • the distance calculation processing unit 24 converts the value of the distance R so that the value becomes larger as the distance R to the reference point is smaller.
  • the distance R is an example of a scalar value for the color of the detection target object.
  • substantially elliptical regions f1, f2, and f3 on the UV plane are set as a color range (here, green) corresponding to the detection target object.
  • the coordinates (U 1 , V 1 ) of the reference point q1 on the UV plane are (16, 32).
  • the distance calculation processing unit 24 uses the following equation (10) to calculate the distance between the signal value (U, V) of each pixel in the YUV signal output from the feature space conversion unit 10 and the reference points q1, q2, q3. calculate.
  • the distance calculation processing unit 24 converts the distance Rn between the signal value (U, V) of each pixel and the reference points q1, q2, q3 by the following formula (11) or formula (12).
  • Expressions (11) and (12) are functions that increase as the distance Rn decreases.
  • the maximum value calculation processing unit 25 selects the maximum value distance Rn ′ from the distances R1 ′, R2 ′, R3 ′ calculated by the distance calculation processing unit 24, and outputs it to the numerical value limiting processing unit 23.
  • By selecting the maximum distance Rn′, the logical sum (union) of the regions f1, f2, and f3 is obtained as a result.
  • the region f obtained by superimposing the regions f1, f2, and f3 is a region having a higher likelihood for the detection target object than the region other than the region f.
  • The likelihood with respect to the detection target object is thus represented by a multi-level value based on the distance between the pixel's color and the reference points.
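  • A sketch of the setting unit 20B: the distance from the pixel's (U, V) value to each reference point (Expression (10)), conversion by a decreasing function, maximum value selection, and limiting to the range 0 to 16. The decreasing functions of Expressions (11) and (12) are not reproduced in this excerpt, so a simple linear falloff is assumed, as are the coordinates of the reference points other than q1 = (16, 32).

```python
import math

# Reference points on the UV plane; q1 = (16, 32) is given above, the other two are placeholders.
REFERENCE_POINTS = [(16.0, 32.0), (8.0, 24.0), (24.0, 40.0)]

def likelihood_from_distance(u, v, p_max=16, falloff=2.0):
    """Distance calculation (unit 24), conversion, and maximum value selection (unit 25)."""
    converted = []
    for un, vn in REFERENCE_POINTS:
        rn = math.hypot(u - un, v - vn)                    # Expression (10)
        converted.append(max(0.0, p_max - rn / falloff))   # assumed decreasing function
    return min(float(p_max), max(converted))               # union of f1..f3, limited to 0..p_max
```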
  • FIG. 11 is a block diagram showing a configuration of an image processing apparatus according to this modification. As shown in FIG. 11, the image processing apparatus 1b includes a parameter output unit 30A and an image processing unit 40A instead of the parameter output unit 30 and the image processing unit 40 in the first embodiment.
  • For example, when the mixing ratio is 0.5, the parameter K′ is (K1′ + K2′) / 2.
  • the image processing unit 40A includes a sharpening processing unit 42.
  • FIG. 12 is a diagram illustrating a schematic configuration of the sharpening processing unit 42.
  • the sharpening processing unit 42 includes a high frequency component output unit 420, a parameter adjustment unit 428, and a synthesis calculation unit 429.
  • the high frequency component output unit 420 performs a sharpening process on the Y signal output from the RGB-YUV conversion unit 15 and outputs a high frequency component value NL 2D of the Y signal.
  • FIG. 13 is a block diagram showing an example of the high-frequency component output unit 420.
  • the high frequency component output unit 420 includes a contour direction estimation unit 411, a low-pass filter unit 420 a, and a high frequency extension unit 427.
  • the low-pass filter unit 420 a includes a preprocessing unit 413, a direction evaluation unit 414, a reference region load processing unit 412, a product-sum operation unit 425, and a synthesis operation unit 426.
  • the same reference numerals as those in the first embodiment are assigned to the same configurations as those in the first embodiment.
  • description of the same configuration as in the first embodiment will be omitted, and the product-sum operation unit 425, the synthesis operation unit 426, and the high-frequency extension unit 427 will be described.
  • For each pixel of interest (i, j), the product-sum operation unit 425 receives the direction evaluation value information from the direction evaluation unit 414, the reference region load information from the reference region load processing unit 412, and the luminance signal from the preprocessing unit 413.
  • Based on the direction evaluation value F(|ΔD|), the reference region load R(D(i, j), u′, v′), and the signal value Y(u, v) represented by the luminance signal, the product-sum operation unit 425 calculates the product-sum value S(i, j) using, for example, Expression (13).
  • Expression (13) calculates, for each reference pixel, the product of the direction evaluation value F(|ΔD|), the reference region load R(D(i, j), u′, v′), and the signal value Y(u, v), and takes the sum of the calculated products over the reference pixels belonging to the reference region as the product-sum value S(i, j). The product of the direction evaluation value F(|ΔD|) and the reference region load R(D(i, j), u′, v′) may be referred to as a direction evaluation region load.
  • Based on the direction evaluation value F(|ΔD|) and the reference region load R(D(i, j), u′, v′), the product-sum operation unit 425 also calculates the load area C(i, j) using Expression (14).
  • Expression (14) calculates the product of the direction evaluation value F(|ΔD|) and the reference region load R(D(i, j), u′, v′) for each reference pixel; that is, it indicates that the load area C(i, j) is calculated by taking the sum of the above-described direction evaluation region loads within the reference region.
  • the product-sum operation unit 425 calculates the sum between reference pixels belonging to the reference region of the reference region load R (D (i, j), u ′, v ′) represented by the reference region load information as a reference area N (i, j).
  • the reference area N (i, j) represents the number of reference pixels that are referred to for nominal purposes in the product-sum operation of Equation (13).
  • The product-sum operation unit 425 outputs, to the synthesis calculation unit 426, product-sum value information representing the product-sum value S(i, j) calculated for each pixel of interest (i, j), load area information representing the load area C(i, j), and reference area information representing the reference area N(i, j).
  • The synthesis calculation unit 426 receives the product-sum value information, the load area information, and the reference area information from the product-sum operation unit 425.
  • The synthesis calculation unit 426 calculates the direction smoothing value Y′(i, j) by dividing the product-sum value S(i, j) represented by the product-sum value information by the load area C(i, j) represented by the load area information. That is, the calculated direction smoothing value Y′(i, j) represents a signal value smoothed over the reference pixels that lie in the quantized contour direction of the pixel of interest (i, j) or a direction approximating it and whose contour direction is equal to or approximates the contour direction of the pixel of interest.
  • the composition calculation unit 426 calculates the mixture ratio w (i, j) by dividing the load area C (i, j) by the reference area N (i, j) represented by the reference area information.
  • The mixture ratio w(i, j) represents the proportion, among the reference pixels lying in the quantized contour direction of the pixel of interest (i, j) or a direction approximating it, of reference pixels whose contour direction is equal to or approximates the contour direction of the pixel of interest.
  • The synthesis calculation unit 426 weights the direction smoothing value Y′(i, j) and the signal value Y(i, j) represented by the input luminance signal (Y signal) by the mixing ratios w(i, j) and (1 − w(i, j)), respectively, and adds them to calculate the low-pass signal value Y″(i, j), that is, Y″(i, j) = w(i, j) · Y′(i, j) + (1 − w(i, j)) · Y(i, j). This weighted addition is expressed by Expression (15).
  • the synthesis calculation unit 426 outputs the luminance signal Y ′′ representing the calculated low-pass signal value Y ′′ (i, j) to the high frequency extension unit 427 and the synthesis calculation unit 429.
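  • The low-pass path of the sharpening unit can be sketched per pixel as follows, using the same illustrative F and R as in the earlier sketch. S, C, and N follow Expressions (13) and (14) and the reference-area sum; then Y′ = S / C, w = C / N, and Y″ = w · Y′ + (1 − w) · Y (Expression (15)).

```python
import numpy as np

def lowpass_value(y, d, i, j, n=2, n_d=16):
    """Directional low-pass value Y''(i, j) built from Expressions (13)-(15)."""
    theta = np.pi * d[i, j] / n_d
    s = c = n_ref = 0.0
    for di in range(-n, n + 1):
        for dj in range(-n, n + 1):
            r = 1.0 if abs(dj * np.sin(theta) - di * np.cos(theta)) <= 0.5 else 0.0  # assumed R
            f = 1.0 if d[i + di, j + dj] == d[i, j] else 0.0                          # assumed F(|dD|)
            s += f * r * y[i + di, j + dj]   # Expression (13): product-sum value S
            c += f * r                       # Expression (14): load area C
            n_ref += r                       # reference area N
    y_dir = s / c if c > 0 else float(y[i, j])     # direction smoothing value Y'
    w = c / n_ref if n_ref > 0 else 0.0            # mixture ratio w
    return w * y_dir + (1.0 - w) * float(y[i, j])  # Expression (15): low-pass value Y''
```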
  • FIG. 14 is a block diagram illustrating a configuration of the high-frequency extension unit 427. As illustrated in FIG. 14, the high-frequency extension unit 427 includes a two-dimensional high-pass filter unit 381 and a nonlinear calculation unit 382.
  • the two-dimensional high-pass filter unit 381 receives the luminance signal Y ′′ from the synthesis calculation unit 426 and the quantized contour direction information from the contour direction estimation unit 411.
  • The two-dimensional high-pass filter unit 381 applies a high-pass filter to the low-pass signal value Y″(i, j) represented by the luminance signal Y″ in the quantized contour direction D(i, j) represented by the quantized contour direction information, and thereby calculates the contour direction component signal W2D.
  • the two-dimensional high-pass filter unit 381 outputs the calculated contour direction component signal W2D to the non-linear operation unit 382.
  • FIG. 15 is a schematic diagram illustrating the configuration of the two-dimensional high-pass filter unit 381.
  • the two-dimensional high-pass filter unit 381 includes a delay memory 3811, a direction selection unit 3812, a multiplication unit 3813, a filter coefficient memory 3814, and a synthesis calculation unit 3815.
  • The delay memory 3811 includes 2n + 1 delay elements 3811-1 to 3811-(2n + 1), each of which delays its input signal by Wx samples. The delay elements respectively output, to the direction selection unit 3812, delay signals containing signal values delayed by Wx, 2·Wx, ..., (2n + 1)·Wx samples.
  • the delay elements 3811-1 to 3811-2n + 1 are connected in series.
  • the low-pass signal value Y ′′ (i, j) represented by the luminance signal Y ′′ is input to one end of the delay element 3811-1, and the other end of the delay element 3811-1 is delayed by W x samples.
  • the signal is output to one end of the delay element 3811-2.
  • One end of each of the delay elements 3811-2 to 3811-2n + 1 receives a delay signal delayed by W x to 2n ⁇ W x samples from the other end of each of the delay elements 3811-1 to 3811-2n.
  • the other end of the delay element 3811-2n + 1 outputs a delayed signal delayed by (2n + 1) ⁇ W x samples to the direction selection unit 3812.
  • the signal value representing the luminance signal Y ′′ and the signal values of (2n + 1) ⁇ (2n + 1) pixels adjacent to each other in the horizontal direction and the vertical direction around the target pixel are output to the direction selection unit 3812. Pixels corresponding to these signal values are reference pixels belonging to a reference region centered on the target pixel.
  • Based on the quantized contour direction D(i, j) for each pixel represented by the quantized contour direction information input from the contour direction estimation unit 411, the direction selection unit 3812 selects reference pixels (u′, v′) that lie, viewed from the target pixel (i, j), in the quantized contour direction D or a direction approximating that direction.
  • The selected reference pixels are, for example, reference pixels that satisfy the following conditions: (1) a line segment extending in the quantized contour direction from the center of the target pixel (i, j) passes through the region of the reference pixel (u′, v′); (2) the horizontal or vertical length over which the line segment passes through that region is 0.5 pixels or more; and (3) one reference pixel is selected for each of the 2n + 1 coordinates in at least one of the horizontal direction and the vertical direction.
  • the selected reference pixel may be referred to as a selected reference pixel.
  • the direction selection unit 3812 determines whether the direction in which one reference pixel is selected for each of the 2n + 1 coordinates (hereinafter sometimes referred to as a selected coordinate direction) is the horizontal direction, the vertical direction, or both. For example, the direction selection unit 3812 outputs the signal values related to 2n + 1 selected reference pixels to the multiplication unit 3813 in descending order of the index in the selected coordinate direction.
  • The direction selection unit 3812 may include a storage unit in which selected reference pixel information representing the selected reference pixels for each quantized contour direction is stored in advance, and may select the selected reference pixel information corresponding to the quantized contour direction D(i, j) of the pixel of interest (i, j). In that case, the direction selection unit 3812 outputs the signal values of the selected reference pixels represented by the selected information to the multiplication unit 3813.
  • the multiplication unit 3813 includes 2n + 1 multipliers 3813-1 to 3813-2n + 1.
  • Multipliers 3813-1 to 3813-2 n + 1 multiply each signal value input from direction selection unit 3812 by each filter coefficient read from filter coefficient memory 3814, and output the multiplied value to synthesis operation unit 3815.
  • the signal values are inputted so that the order of the multipliers 3813-1 to 3813-2n + 1 matches the order of the signal values inputted to each of them (descending order of the index in the selected coordinate direction).
  • The filter coefficient memory 3814 stores in advance the 2n + 1 filter coefficients aD−n, aD−n+1, ..., aD+n used in the multipliers 3813-1 to 3813-(2n + 1). The filter coefficients aD−n, aD−n+1, ..., aD+n are high-pass filter coefficients that realize a high-pass filter by a product-sum operation with the signal values.
  • The synthesis calculation unit 3815 adds the 2n + 1 multiplication values input from the multiplication unit 3813 and generates a contour direction component signal W2D whose signal value is their sum.
  • the composition calculation unit 3815 outputs the generated contour direction component signal W2D to the nonlinear calculation unit 382.
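  • A sketch of the two-dimensional high-pass filter unit 381 for n = 1: 2n + 1 = 3 reference pixels are picked along the quantized contour direction and combined with high-pass coefficients. The coefficient values below are an illustrative assumption (a simple [-0.25, 0.5, -0.25] high-pass), and the pixel selection just rounds positions along the contour direction rather than reproducing conditions (1) to (3) exactly.

```python
import numpy as np

HIGH_PASS = np.array([-0.25, 0.5, -0.25])   # assumed coefficients a_{D-1}, a_D, a_{D+1}

def contour_direction_component(y_lp, d, i, j, n_d=16):
    """W2D(i, j): high-pass filtering along the quantized contour direction (n = 1).

    Assumes (i, j) is an interior pixel so the selected neighbours stay in bounds.
    """
    theta = np.pi * d[i, j] / n_d
    dx, dy = np.cos(theta), np.sin(theta)
    samples = []
    for step in (-1, 0, 1):                   # the 2n + 1 selected reference pixels
        u = i + int(round(step * dy))         # row offset along the contour direction
        v = j + int(round(step * dx))         # column offset along the contour direction
        samples.append(float(y_lp[u, v]))
    return float(np.dot(HIGH_PASS, samples))
```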
  • The nonlinear calculation unit 382 performs a nonlinear operation on the signal value represented by the contour direction component signal W2D input from the two-dimensional high-pass filter unit 381.
  • the nonlinear calculation unit 382 outputs the high frequency component value NL 2D represented by the calculated nonlinear output value to the parameter adjustment unit 428.
  • FIG. 16 is a block diagram illustrating a configuration of the nonlinear arithmetic unit 382.
  • the nonlinear calculation unit 382 includes an absolute value calculation unit 2821-A, a power calculation unit 2822-A, a filter coefficient memory 2823-A, a multiplication unit 2824-A, a synthesis calculation unit 2825-A, a code detection unit 2826-A, and a multiplication unit. 2827-A.
  • The nonlinear operation unit 382 generates, for the input signal value W (= W2D), an odd function of order l (el), where l is an integer greater than 1, of the form sgn(W) · (c1 · |W| + c2 · |W|^2 + ... + cl · |W|^l). Here c1, c2, ..., cl are the coefficients of the 1st-, 2nd-, ..., l-th-order terms.
  • The absolute value calculation unit 2821-A calculates the absolute value |W| of the input signal value W. The power calculation unit 2822-A includes l − 1 multipliers 2822-A-2 to 2822-A-l and calculates the powers of the absolute value |W|. The multiplier 2822-A-2 calculates the absolute square value |W|^2 by multiplying the absolute value |W| by itself, and outputs the calculated absolute square value |W|^2 to the multiplier 2822-A-3 and to the multiplication unit 2824-A. The multipliers 2822-A-3 to 2822-A-(l − 1) each multiply the power of the absolute value (|W|^2 to |W|^(l − 2)) input from the preceding multiplier by the absolute value |W| input from the absolute value calculation unit 2821-A, and output the calculated powers (|W|^3 to |W|^(l − 1)) to the next multiplier and to the multiplication unit 2824-A. The multiplier 2822-A-l multiplies the absolute (l − 1)-th power value |W|^(l − 1) input from the multiplier 2822-A-(l − 1) by the absolute value |W|, and outputs the calculated absolute l-th power value |W|^l to the multiplication unit 2824-A.
  • the filter coefficient memory 2823-A includes l storage elements 2823-A-1 to 2823-A-l.
  • the multiplication unit 2824-A includes l multipliers 2824-A-1 to 2824-A-l.
  • The multipliers 2824-A-1 to 2824-A-l multiply the absolute value powers |W| to |W|^l respectively input to them by the coefficients read from the storage elements 2823-A-1 to 2823-A-l of the filter coefficient memory 2823-A.
  • Multipliers 2824-A-1 to 2824-A-l output the calculated multiplication values to the synthesis operation unit 2825-A, respectively.
  • the synthesis operation unit 2825-A adds the multiplication values respectively input from the multipliers 2824-A-1 to 2824-A-1 to calculate a synthesis value.
  • the combination calculation unit 2825-A outputs the calculated combination value to the multiplication unit 2827-A.
  • The sign detection unit 2826-A detects the sign (positive or negative) of the signal value W indicated by the contour direction component signal input from the two-dimensional high-pass filter unit 381. When the signal value is smaller than 0, the sign detection unit 2826-A outputs −1 as the sign value to the multiplication unit 2827-A; when the signal value is 0 or larger, it outputs 1 as the sign value. The multiplication unit 2827-A multiplies the combined value input from the synthesis operation unit 2825-A by the sign value input from the sign detection unit 2826-A to calculate the high-frequency component value NL2D.
  • the multiplier 2827-A outputs the calculated high frequency component value to the parameter adjuster 428 (see FIG. 12).
  • the non-linear operation unit 382 having the above-described configuration has a relatively large circuit scale, but can adjust the output high-frequency component value using a small number of coefficients. If the coefficient values c 1 to c l ⁇ 1 other than the highest order l are 0, the configuration related to the product-sum operation of these orders may be omitted.
  • The configurations that can be omitted are the corresponding storage elements among 2823-A-1 to 2823-A-(l − 1) and the corresponding multipliers among 2824-A-1 to 2824-A-(l − 1) in the nonlinear arithmetic unit 382.
  • the storage element 2823-A-1 and the multiplier 2824-A-1 may be omitted.
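  • The whole nonlinear operation unit 382 reduces to evaluating the odd polynomial described above; a sketch with arbitrary example coefficients (the actual c1 to cl are design parameters stored in the filter coefficient memory 2823-A):

```python
def nonlinear_output(w, coeffs=(0.0, 0.0, 0.4)):
    """NL2D = sgn(W) * (c1*|W| + c2*|W|^2 + ... + cl*|W|^l), here with l = 3."""
    sign = -1.0 if w < 0 else 1.0                                    # sign detection unit 2826-A
    mag = abs(w)                                                     # absolute value calculation unit 2821-A
    total = sum(c * mag ** (k + 1) for k, c in enumerate(coeffs))    # powers and coefficients
    return sign * total                                              # multiplication unit 2827-A
```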
  • The parameter adjustment unit 428 multiplies the high-frequency component value NL2D (first high-frequency component value) output from the high-frequency component output unit 420 by the parameter K′ output from the parameter output unit 30A, and outputs the result (K′ · NL2D, hereinafter referred to as the second high-frequency component value) to the synthesis calculation unit 429.
  • The synthesis calculation unit 429 adds (combines) the second high-frequency component value output from the parameter adjustment unit 428 and the low-pass signal value Y″(i, j) output from the synthesis calculation unit 426 of the high-frequency component output unit 420 to calculate the high-frequency extension signal value Z(i, j). The synthesis calculation unit 429 generates a luminance signal Z representing the calculated high-frequency extension signal value Z(i, j), replaces the luminance signal Y input from the feature space conversion unit 10 with the luminance signal Z, and outputs an image signal including the luminance signal Z and the color difference signals U and V.
  • When the parameter K′ is 1, a luminance signal Z in which the high-frequency component is combined with the low-pass signal is output. When the parameter K′ is 0, a luminance signal Z in which the high-frequency component is not combined with the low-pass signal is output. When 0 < K′ < 1, a luminance signal Z in which a high-frequency component value corresponding to the likelihood for the detection target object is combined with the low-pass signal is output, so artifacts due to the rapid switching of K′ between 1 and 0 are reduced.
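  • As with the noise reduction path, the final sharpening step is a single scaled addition; a sketch:

```python
def sharpen_pixel(y_lowpass, nl_2d, k):
    """Z = Y'' + K' * NL2D: K' = 0 outputs the low-pass value, K' = 1 adds the full high-frequency emphasis."""
    return y_lowpass + k * nl_2d
```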
  • In the first embodiment, noise reduction processing is performed by the noise reduction processing unit 41 in the image processing unit 40, and in the modification (1), sharpening processing is performed by the sharpening processing unit 42 in the image processing unit 40A; however, the image processing unit may be configured to perform both noise reduction processing and sharpening processing.
  • In that case, for example, the image processing unit 40 of the first embodiment is configured to further include the high-frequency extension unit 427 of the sharpening processing unit 42. The high-frequency extension unit 427 then calculates the high-frequency component value using the luminance signal Y′(i, j) output from the noise reduction processing unit 41 and the quantized contour direction D(i, j) output from the contour direction estimation unit 411.
  • In the above embodiments, the likelihood is set based on a scalar value on the UV plane obtained from the signal value of the pixel, or based on the distance between the color of the pixel on the UV plane and a color reference point of the detection target object.
  • the adjustment may be performed using the number of taps of the filter used for the noise reduction processing or the sharpening processing, the coefficient, the overshoot limit value, or the like. Good.
  • the pixel having the detection target object is specified in the image signal, and the pixel signal value is converted into the scalar value of the pixel on the UV plane and the distance to the detection target object. You may make it perform the process which sets likelihood using a fuzzy inference apparatus.
  • the color space has been described as an example of the feature space of the detection target object.
  • a frequency space composed of frequency components (low-frequency components and high-frequency components) may be used as the feature space.
  • the likelihood for each pixel may be set based on the frequency component of the signal value of each pixel in the input signal and the specific frequency component.
  • the absolute value Gy of the difference between the luminance value Y(i, j+1) and the luminance value Y(i, j) of vertically adjacent pixels is obtained, and the sum of the absolute values Gx and Gy (hereinafter, absolute value G) may be used as the frequency component (an example of a scalar value) of the pixel to be processed.
  • the frequency component of the pixel is a feature value in the feature space and represents a scalar value.
  • the setting unit may set the likelihood for the detection target object according to the absolute value G.
  • the likelihood may be set for each pixel so that the likelihood increases as the difference between the absolute value G and the threshold value increases.
  • an FIR filter may be used as a difference circuit that extracts a difference between luminance values of pixels.
  • the setting unit obtains the degree of coincidence (an example of a scalar value) between the frequency pattern of the detection target object and the frequency pattern of the signal value of each pixel using a pattern matching circuit.
  • the likelihood corresponding to the degree may be set.
  • a color space is described as an example of the feature space of the detection target object.
  • a frequency space is used as the feature space of the detection target object.
  • the feature space is not limited to this as long as it is a multidimensional scalar field composed of components representing the features of the detection target object.
  • the setting unit obtains a scalar value for the color of the detection target object for each pixel in the same manner as in the first and second embodiments described above, and obtains the frequency component of each pixel as a scalar value in the same manner as in the modification (5) described above.
  • the feature space is composed of color and frequency components representing the features of the detection target object.
  • the setting unit sets the likelihood based on the scalar value for the color for each pixel in the same manner as in the first or second embodiment.
  • the set likelihood may be adjusted according to the difference from the threshold value. Even when the determination of whether the pixel to be processed belongs to the detection target object is ambiguous based only on the color of the detection target object, this configuration makes it possible to determine appropriately whether it is the detection target object.
  • the frequency component of the pixel is a feature value and a scalar value.
  • a feature value (UV value, frequency component) in a feature space including the YUV components and a frequency (F) component may be obtained as a scalar value.
  • each scalar value (S1 ', S2', S3 ') is obtained from the feature quantity of each pixel by the following equations (16), (17), and (18).
  • weights (vectors) for converting feature values into scalar values may be set so that each feature value of a pixel becomes a scalar value, or as in the first and second embodiments.
  • the weight (vector) may be set based on the feature amount of the detection target object in the feature space.
  • the present invention can be industrially used as an image processing apparatus mounted on a display device such as a television or a PC.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

Provided is a technology whereby artifacts, which arise when carrying out noise reduction, sharpening, or other image enhancement processes on an image, are reduced. An image processing device comprises: a setting unit which, using signal values of each pixel in an inputted image signal, sets a likelihood with respect to a predetermined object; a parameter output unit which outputs a parameter for image processing, wherein a first parameter which is employed in image processing with respect to the object and a second parameter which is employed in image processing with respect to another object, are mixed at a mixing ratio based on the likelihood; and an image processing unit which, using the parameter for image processing which is outputted from the parameter output unit, carries out an image process on the signal values of each pixel.

Description

Image processing device
 The present invention relates to an image processing apparatus, and more particularly to a technique for performing image quality improvement processing such as noise reduction processing and sharpening processing according to an image.
 Japanese Patent Application Laid-Open No. 2010-152518 discloses a technique in which a sharpening process for a specific color area such as a face image area is weakened relative to other areas, so that an image such as a landscape is sharpened while a person's face and the like remain natural. In this technique, the sharpness processing parameter for the specific color region is set lower than the level set for regions other than the specific color region.
 Japanese Patent Laid-Open No. 2010-152518 switches the sharpness processing between the specific color region and the other regions. Therefore, when the boundary between the specific color region and the other regions is ambiguous, the sharpness processing switches finely at the boundary portion, which may cause artifacts. On the other hand, when image quality improvement processing such as noise reduction or sharpening is performed uniformly on the entire image, artifacts may occur depending on the objects included in the image. For example, in an image that randomly contains fine linear textures, such as turf and green trees, the linear texture may be misrecognized as noise such as jaggies around contours and be smoothed, resulting in an unnatural image. Furthermore, in such image regions the effect of the sharpening process is perceived as weak.
 An object of the present invention is to provide a technique for reducing artifacts generated when image processing such as noise reduction or sharpening is performed on an image.
 An image processing apparatus according to a first aspect of the present invention includes: an image processing unit that performs image processing on the signal value of each pixel in an image signal using an input image processing parameter; a setting unit that sets, using the signal value of each pixel in the image signal, a likelihood of the pixel with respect to a predetermined object; and a parameter output unit that outputs, to the image processing unit, the image processing parameter obtained by mixing a predetermined first parameter for use in the image processing for the object and a predetermined second parameter for use in the image processing for other objects at a mixing ratio based on the likelihood set by the setting unit.
 In a second aspect based on the first aspect, the setting unit converts the signal value of each pixel into a feature amount representing the magnitude of each component in a feature space composed of a plurality of components representing the features of the object, obtains a scalar value based on the feature amount, and sets the likelihood based on the scalar value.
 In a third aspect based on the second aspect, the feature space is a color space including a first component and a second component constituting a color.
 In a fourth aspect based on the second or third aspect, the feature space has a frequency space including a predetermined frequency component included in the object.
 In a fifth aspect based on the third aspect, the setting unit converts the signal value of each pixel into the feature amount in the color space, and obtains, using a vector representing the range of the object in the feature space, the scalar value mainly representing the first component and the second component.
 In a sixth aspect based on the third aspect, the setting unit converts the signal value of each pixel into the feature amount in the color space, and obtains, as the scalar value, the distance between the feature amount and each reference point defined in a plurality of regions representing the range of the object in the color space.
 In a seventh aspect based on the fourth aspect, the setting unit obtains the frequency component of the pixel in the frequency space from the signal value of each pixel, and uses the obtained frequency component as the scalar value of the pixel.
 In an eighth aspect based on any one of the first to seventh aspects, the image processing unit performs, as the image processing, at least one of first image processing that smooths the signal values in the image signal and second image processing that sharpens the signal values in the image signal.
 According to the configuration of the present invention, it is possible to reduce artifacts generated when image processing such as noise reduction and sharpening is performed on an image.
FIG. 1 is a block diagram illustrating a schematic configuration of the image processing apparatus according to the first embodiment.
FIG. 2 is a circuit diagram of the feature space conversion unit in the first embodiment.
FIG. 3A is a diagram illustrating the color of the detection target object on the UV plane.
FIG. 3B is a diagram illustrating the range that the scalar value S1 can take on the UV plane.
FIG. 3C is a diagram illustrating the range that the scalar value S2 can take on the UV plane.
FIG. 4 is a diagram illustrating a region on the UV plane corresponding to the scalar value.
FIG. 5 is a schematic circuit diagram of the parameter output unit in the first embodiment.
FIG. 6 is a diagram illustrating a schematic configuration of the noise reduction processing unit in the first embodiment.
FIG. 7 is a block diagram illustrating the configuration of the difference image generation unit in the first embodiment.
FIG. 8 is a block diagram illustrating a schematic configuration of the image processing apparatus according to the second embodiment.
FIG. 9A is a diagram illustrating a region on the UV plane representing the color of the detection target object in the second embodiment.
FIG. 9B is a diagram illustrating a region on the UV plane representing the color of the detection target object in the second embodiment.
FIG. 9C is a diagram illustrating a region on the UV plane representing the color of the detection target object in the second embodiment.
FIG. 10 is a diagram illustrating a region having a high likelihood on the UV plane in the second embodiment.
FIG. 11 is a block diagram illustrating a schematic configuration of the image processing apparatus according to the modification (1).
FIG. 12 is a block diagram illustrating a schematic configuration of the sharpening processing unit in the modification (1).
FIG. 13 is a block diagram illustrating a configuration example of the high-frequency component output unit in the modification (1).
FIG. 14 is a block diagram illustrating a configuration example of the nonlinear filter in the modification (1).
FIG. 15 is a block diagram illustrating a configuration example of the two-dimensional high-pass filter unit in the modification (1).
FIG. 16 is a block diagram illustrating a configuration example of the nonlinear arithmetic unit in the modification (1).
 An image processing apparatus according to an embodiment of the present invention includes: an image processing unit that performs image processing on the signal value of each pixel in an image signal using an input image processing parameter; a setting unit that sets, using the signal value of each pixel in the image signal, a likelihood of the pixel with respect to a predetermined object; and a parameter output unit that outputs, to the image processing unit, the image processing parameter obtained by mixing a predetermined first parameter for use in the image processing for the object and a predetermined second parameter for use in the image processing for other objects at a mixing ratio based on the likelihood set by the setting unit (first configuration). According to the first configuration, when an object that may cause artifacts during image processing is included in the image signal, a likelihood indicating how much each pixel resembles that object is set for each pixel in the image signal, and image processing is performed on each pixel using a parameter corresponding to the likelihood. As a result, image processing suitable for the object can be performed, so artifacts due to image processing can be reduced compared to the case where the entire image is processed with uniform parameters.
 In a second configuration, in the first configuration, the setting unit may convert the signal value of each pixel into a feature amount representing the magnitude of each component in a feature space composed of a plurality of components representing the features of the object, obtain a scalar value based on the feature amount, and set the likelihood based on the scalar value. According to the second configuration, the likelihood for the object is set for each pixel in a feature space that represents the features of an object that may cause artifacts during image processing. Therefore, appropriate image processing can be performed on pixels having the features of the object.
 In a third configuration, in the second configuration, the feature space may be a color space including a first component and a second component constituting a color. According to the third configuration, the invention can be applied to an object characterized by its color.
 In a fourth configuration, in the second or third configuration, the feature space may have a frequency space including a predetermined frequency component included in the object. According to the fourth configuration, the likelihood can be set using the frequency component included in the object.
 In a fifth configuration, in the third configuration, the setting unit may convert the signal value of each pixel into the feature amount in the color space, and obtain, using a vector representing the range of the object in the feature space, the scalar value mainly representing the first component and the second component.
 In a sixth configuration, in the third configuration, the setting unit may convert the signal value of each pixel into the feature amount in the color space, and obtain, as the scalar value, the distance between the feature amount and each reference point defined in a plurality of regions representing the range of the object in the color space.
 In a seventh configuration, in the fourth configuration, the setting unit may obtain the frequency component of the pixel in the frequency space from the signal value of each pixel, and use the obtained frequency component as the scalar value of the pixel.
 In an eighth configuration, in any one of the first to seventh configurations, the image processing unit may perform, as the image processing, at least one of first image processing that smooths the signal values in the image signal and second image processing that sharpens the signal values in the image signal. According to the eighth configuration, at least one of smoothing and sharpening can be applied to the signal value of each pixel in the image signal using a parameter corresponding to the likelihood for the object. For example, when the object is trees or grass having a fine texture, the image processing parameter can be adjusted so that the strength of the smoothing process is weakened for pixels containing that object. As a result, artifacts due to the smoothing process can be reduced compared to the case where the smoothing process is performed uniformly on the entire image. In addition, the image processing parameter can be adjusted so that the strength of the sharpening process is increased for pixels containing that object, so the sharpening effect can be made stronger for those pixels than for other pixels.
 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the drawings, the same or corresponding parts are denoted by the same reference numerals, and description thereof will not be repeated.
<First Embodiment>
 (Overview)
 In this embodiment, for each pixel value of the image signal input to the image processing apparatus, a likelihood with respect to a predetermined object (hereinafter, detection target object) is set according to the degree to which the features of the detection target object are included, and image quality improvement processing (image processing) according to that likelihood is performed. The likelihood indicates how closely each pixel value in the image signal resembles the detection target object. In the present embodiment, a region of a specific color in the image signal (here, a green region such as grass or trees) is used as an example of the detection target object.
 (Configuration)
 FIG. 1 is a block diagram illustrating a schematic configuration of the image processing apparatus according to the present embodiment. As shown in FIG. 1, the image processing apparatus 1 includes a feature space conversion unit 10, an RGB-YUV conversion unit 15, a setting unit 20A, a parameter output unit 30, an image processing unit 40, and a YUV-RGB conversion unit 50. In the present embodiment, the feature space conversion unit 10 and the setting unit 20A are an example of a setting unit. Each part will be described below.
 The RGB-YUV conversion unit 15 converts the input RGB signal into a YUV signal according to the following equation (1) and outputs the YUV signal to the image processing unit 40.
 [Equation (1)]
 The feature space conversion unit 10 converts an image signal in the RGB color space (hereinafter, RGB signal) into an image signal in the YUV color space (hereinafter, YUV signal), and outputs the YUV signal to the setting unit 20A and the image processing unit 40. As described above, the detection target object is an image of grass, trees, or the like, and is characterized by its green color. When the detection target object is characterized by color, the features of the detection target object can be obtained more accurately by converting the input signal into signal values in a color space composed of components representing color. In the present embodiment, the input signal is converted into a YUV signal in the YUV color space in order to obtain a UV signal that does not depend on luminance, but depending on the features of the detection target object, the input signal may be converted into a signal in a color space such as xyY, L*a*b*, or L*u*v*. In the present embodiment, the YUV color space is an example of a feature space, and the YUV signal is an example of a feature amount representing the magnitude of the components in the feature space. In other words, the input signal may be converted, in accordance with the features of the detection target object, into a feature amount in a feature space composed of components representing those features.
 FIG. 2 is a circuit diagram of the feature space conversion unit 10 for converting the input RGB signal into a YUV signal. The R, G, and B signals input to each multiplier shown in FIG. 2 are multiplied by the coefficients (a11 to a33) corresponding to the multipliers, and the products are added for each of the R, G, and B signals. That is, the RGB signal is converted into a YUV signal by the conversion equation shown in the following equation (1-1).
 [Equation (1-1)]
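 As a reference, the following is a minimal Python sketch of this color conversion. Since the equation images (1) and (1-1) are not reproduced in this text, ITU-R BT.601-style coefficients are assumed for the matrix a11 to a33, and the function names rgb_to_yuv / yuv_to_rgb are illustrative only.

    import numpy as np

    # Hypothetical coefficient matrix (a11 .. a33); BT.601-style values are assumed
    # here because equations (1) and (1-1) themselves are not reproduced in this text.
    A = np.array([[ 0.299,  0.587,  0.114],   # Y row
                  [-0.169, -0.331,  0.500],   # U row
                  [ 0.500, -0.419, -0.081]])  # V row

    def rgb_to_yuv(rgb):
        # rgb: array of shape (H, W, 3); returns YUV of the same shape.
        return rgb @ A.T

    def yuv_to_rgb(yuv):
        # Inverse conversion, corresponding to equation (1-2) under the same assumption.
        return yuv @ np.linalg.inv(A).T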
 Returning to FIG. 1, the description will be continued. The setting unit 20A includes a vector calculation processing unit 21, a minimum value calculation processing unit 22, and a numerical value limiting processing unit 23. The vector calculation processing unit 21 converts the signal value (Yi, Ui, Vi) of each pixel of the YUV signal output from the feature space conversion unit 10 into a plurality of scalar values on the UV plane with Y = 1. On the UV plane shown in FIG. 3A, the closer a point is to U = 0 and V = 0, the closer it is to the color (green) of the detection target object. In the present embodiment, the vector calculation processing unit 21 converts the signal value of each pixel of the YUV signal into scalar values S1 and S2 on the UV plane using the following equations (2) and (3).
 [Equations (2) and (3)]
 On the UV plane shown in FIG. 3B, the closer a point is to the V axis relative to the broken line L1, the closer it is to green, and on the UV plane shown in FIG. 3C, the closer a point is to the U axis relative to the broken line L2, the closer it is to green. The vector (w11, w12, w13) in equation (2) is determined according to the broken line L1 in FIG. 3B, and the vector (w21, w22, w23) in equation (3) is determined according to the broken line L2 in FIG. 3C. These vectors represent the range of the detection target object in the color space. In this embodiment, for example, (w11, w12, w13) = (-1.00, -0.25, 155)/4 and (w21, w22, w23) = (0.25, -1.00, 128)/4 are set.
 That is, the scalar value S1 mainly represents the color difference from the detection target object in terms of the U component (an example of the first component), and the scalar value S2 mainly represents the color difference from the detection target object in terms of the V component (an example of the second component). The scalar values S1 and S2 are set so as to increase monotonically as the point approaches U = 0 and V = 0 on the UV plane.
 Returning to FIG. 1, the description will be continued. The minimum value calculation processing unit 22 selects the minimum of the scalar values S1 and S2 calculated by the vector calculation processing unit 21 (hereinafter, the minimum value S) and outputs it to the numerical value limiting processing unit 23.
 The numerical value limiting processing unit 23 converts the minimum value S output from the minimum value calculation processing unit 22 into a likelihood P. In the present embodiment, the likelihood P is represented by a numerical value from a minimum of 0 to a maximum of 16. When the minimum value S is 16 or more, the numerical value limiting processing unit 23 sets the maximum likelihood P = 16. When the minimum value S is 0 or less, it sets the minimum likelihood P = 0. When the minimum value S is greater than 0 and less than 16, it sets the likelihood P = S.
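 The following is a minimal Python sketch of the processing of the vector calculation processing unit 21, the minimum value calculation processing unit 22, and the numerical value limiting processing unit 23 described above. Because equations (2) and (3) are not reproduced in this text, the affine form S = w1·U + w2·V + w3 with the example weight vectors given above is assumed; the function name likelihood is illustrative only.

    import numpy as np

    # Example weight vectors from the text: (w11, w12, w13) = (-1.00, -0.25, 155)/4,
    # (w21, w22, w23) = (0.25, -1.00, 128)/4.  The affine form below is an assumption.
    W1 = np.array([-1.00, -0.25, 155.0]) / 4.0
    W2 = np.array([ 0.25, -1.00, 128.0]) / 4.0

    def likelihood(u, v):
        s1 = W1[0] * u + W1[1] * v + W1[2]   # scalar value S1 (equation (2), assumed form)
        s2 = W2[0] * u + W2[1] * v + W2[2]   # scalar value S2 (equation (3), assumed form)
        s = np.minimum(s1, s2)               # minimum value calculation processing unit 22
        return np.clip(s, 0.0, 16.0)         # numerical value limiting processing unit 23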
 FIG. 4 is a diagram illustrating regions on the UV plane corresponding to the scalar value (minimum value S). The curves L31 and L32 indicated by broken lines on the UV plane in FIG. 4 are formed by connecting line segments parallel to the line segment L1 shown in FIG. 3B and line segments parallel to the line segment L2 shown in FIG. 3C, and represent the boundary of the color (green) of the detection target object. That is, the region P1 is a region representing green and contains colors for which the scalar value S is 16 or more. The region P2 is a region containing a green component and contains colors for which the scalar value S is greater than 0 and less than 16. The region P3 is a region that is not green and contains colors for which the scalar value S is 0 or less. Therefore, colors included in the region P1 have the highest likelihood, and colors included in the region P3 have the lowest likelihood. Colors included in the region P2 take a likelihood between the maximum and minimum values.
 Returning to FIG. 1, the description will be continued. The parameter output unit 30 calculates the parameter to be output to the image processing unit 40 based on the likelihood P output from the numerical value limiting processing unit 23 of the setting unit 20A. In the present embodiment, a first parameter K1 suitable for the detection target object and a second parameter K2 suitable for other objects are arbitrarily set in the parameter output unit 30. The parameter output unit 30 calculates a parameter K (an example of an image processing parameter) obtained by mixing the first parameter K1 and the second parameter K2 at a ratio corresponding to the likelihood P. In the present embodiment, K1 = 0 and K2 = 1 are set in advance.
 FIG. 5 shows a circuit diagram for determining the parameter K in the parameter output unit 30. As shown in FIG. 5, the parameter output unit 30 uses the output likelihood P to calculate the mixing ratio α (= (P − Pmin) / (Pmax − Pmin)) of the first parameter K1 and the second parameter K2. The parameter K is expressed by the following equation (4).
 [Equation (4)]
 The parameter output unit 30 calculates the value of the parameter K by substituting the calculated mixing ratio α into the above equation (4). When the likelihood P = Pmax = 16, α = 1 and the parameter K = K1. In this case, the likelihood P is the maximum value, and there is a high possibility that the pixel to be processed substantially matches the color of the detection target object. Therefore, the parameter K suitable for the detection target object (K = K1 = 0) is output for such a pixel.
 When the likelihood P = Pmin = 0, α = 0 and the parameter K = K2. In this case, the likelihood P is the minimum value, and there is a high possibility that the pixel to be processed does not match the color of the detection target object. Therefore, the parameter K suitable for other objects (K = K2 = 1) is output for such a pixel.
 Further, for example, when the likelihood P is the intermediate value between the maximum and minimum values (= (Pmax + Pmin) / 2), α = 0.5 and the parameter K = (K1 + K2) / 2. In this case, the likelihood P is an intermediate value, and it is ambiguous whether the pixel to be processed matches the color of the detection target object. Therefore, a parameter K equal to the intermediate value of the first parameter K1 and the second parameter K2 (K = 0.5) is output for such a pixel.
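 The mixing performed by the parameter output unit 30 can be sketched as follows. The linear form K = α·K1 + (1 − α)·K2 is inferred from the three worked cases above (P = Pmax gives K = K1, P = Pmin gives K = K2, the midpoint gives (K1 + K2)/2); equation (4) itself is not reproduced here, and the function name mix_parameter is illustrative only.

    def mix_parameter(p, k1=0.0, k2=1.0, p_min=0.0, p_max=16.0):
        # Mixing ratio alpha = (P - Pmin) / (Pmax - Pmin), then K = alpha*K1 + (1 - alpha)*K2.
        alpha = (p - p_min) / (p_max - p_min)
        return alpha * k1 + (1.0 - alpha) * k2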
 Returning to FIG. 1, the description will be continued. The image processing unit 40 includes a noise reduction processing unit 41. Using the parameter K output from the parameter output unit 30, the image processing unit 40 performs noise reduction processing with the noise reduction processing unit 41, pixel by pixel, on the Y signal output from the RGB-YUV conversion unit 15. The image processing unit 40 outputs the YUV signal of each pixel subjected to the noise reduction processing to the YUV-RGB conversion unit 50. The YUV-RGB conversion unit 50 converts the YUV signal output from the image processing unit 40 into an RGB signal by the following equation (1-2) and outputs it.
 [Equation (1-2)]
 Here, a configuration example of the noise reduction processing unit 41 in the image processing unit 40 will be described. FIG. 6 is a block diagram illustrating a schematic configuration of the noise reduction processing unit 41. As shown in FIG. 6, the noise reduction processing unit 41 includes a difference image generation unit 410, a parameter adjustment unit 416, and a synthesis calculation unit 417. The difference image generation unit 410 generates, for each pixel, a first difference signal (ΔY) obtained by performing smoothing processing on the input Y signal. Any difference image generation unit 410 that performs smoothing processing on the Y signal pixel by pixel and outputs the smoothed difference signal ΔY may be used. An example of the difference image generation unit 410 will now be described. FIG. 7 is a block diagram illustrating the configuration of the difference image generation unit 410. As shown in FIG. 7, the difference image generation unit 410 includes a contour direction estimation unit 411, a reference region load processing unit 412, a preprocessing unit 413, a direction evaluation unit 414, and a product-sum operation unit 415.
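 Although the text above does not spell out the operations of the parameter adjustment unit 416 and the synthesis calculation unit 417, one plausible reading, given that K = 0 is used for the detection target object and K = 1 for other objects, is that the smoothed difference ΔY is scaled by K and added back to the input luminance. The sketch below is an assumption along those lines, not a statement of the actual circuit.

    def noise_reduce_pixel(y, delta_y, k):
        # Assumed behaviour of units 416/417: scale the smoothed difference by K
        # and add it back; K = 0 preserves fine texture (e.g. grass), K = 1 smooths fully.
        return y + k * delta_y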
(Contour direction estimation unit 411)
 The contour direction estimation unit 411 estimates the contour direction for each pixel based on the signal value (luminance value) of each pixel represented by the Y signal output from the RGB-YUV conversion unit 15. The contour direction is the direction orthogonal to the normal of the contour line, that is, the tangential direction of the contour line. A contour line refers to a line indicating a region where the signal value is substantially constant, and may be a curve or a straight line. Therefore, a contour is not limited to a region where the signal value changes abruptly with position. The relationship between contour lines and signal values corresponds to the relationship between contour lines on a map and altitude. Because the position of each pixel is given discretely and is affected by the noise around contours to be improved in this embodiment, such as jaggies, dot disturbance, and mosquito noise, it is not always possible to determine the contour direction by taking a line passing through pixels of a constant signal value as the contour line. Here, it is assumed that the signal value is differentiable (that is, continuous) in the space representing the coordinates of each pixel. The contour direction estimation unit 411 calculates the contour direction θ for each pixel based on the horizontal or vertical difference values of the signal values, for example based on equation (5).
 [Equation (5)]
 In equation (5), the contour direction θ is a counterclockwise angle with respect to the horizontal direction (x direction). x and y are the horizontal and vertical coordinates, respectively. Y(x, y) is the signal value at the coordinates (x, y). That is, the contour direction θ is calculated as the angle whose tangent is the partial derivative of the signal value Y(x, y) in the x direction divided by the partial derivative of Y(x, y) in the y direction. Equation (5) can be derived from the relationship that the signal value Y(x, y) is constant even if the coordinates (x, y) differ. Here, Gx(x, y) and Gy(x, y) represent the partial derivatives of the signal value Y(x, y) in the x direction and the y direction, respectively. In the following description, Gx(x, y) and Gy(x, y) may be referred to as the x-direction partial derivative and the y-direction partial derivative, respectively. Unless otherwise noted, the position (coordinates) of a pixel (i, j) refers to the barycentric point of that pixel, and a variable a at the position of that pixel is written as a(i, j).
 The contour direction estimation unit 411 calculates the x-direction partial derivative Gx(i, j) and the y-direction partial derivative Gy(i, j) of the signal value Y(i, j) at each pixel (i, j) using, for example, equations (6) and (7), respectively.
 [Equations (6) and (7)]
 In equations (6) and (7), i and j are integer values indicating the indices of the pixel of interest in the x direction and y direction, respectively. The pixel of interest is the pixel that is currently the direct target of processing. Wx(u′, v′) and Wy(u′, v′) are the filter coefficients of the difference filters in the x direction and the y direction, respectively. u and v are integer values indicating the indices of a reference pixel in the x and y directions, respectively. A reference pixel is a pixel within a range determined by a predetermined rule with the pixel of interest as a reference, and is referred to when processing the pixel of interest. The reference pixels include the pixel of interest. u′ and v′ are integer values indicating the indices of the reference pixel in the x and y directions when the pixel of interest is taken as the origin. Therefore, u = i + u′ and v = j + v′.
 The above difference filters have, for example, filter coefficients Wx(u′, v′) and Wy(u′, v′) for each of 2n+1 reference pixels in the x direction and 2n+1 reference pixels in the y direction (a total of (2n+1)·(2n+1) reference pixels (u′, v′)). In the following description, the region to which the reference pixels given these filter coefficients belong may be called the reference region. n is an integer value larger than 1 (for example, 2). The filter coefficients Wx(u′, v′) and Wy(u′, v′) are 1 for reference pixels on the positive side of the pixel of interest, 0 for reference pixels having the same coordinate value as the pixel of interest in the differencing direction (the x direction for Wx), and -1 for reference pixels on the negative side of the pixel of interest. That is, the filter coefficient Wx(u′, v′) of the x-direction difference filter is 1 (0 < u′ ≤ n), 0 (u′ = 0), and -1 (0 > u′ ≥ -n). The filter coefficient Wy(u′, v′) of the y-direction difference filter is 1 (0 < v′ ≤ n), 0 (v′ = 0), and -1 (0 > v′ ≥ -n). Further, n is an integer value equal to or larger than the enlargement ratio of the image.
 As a result, the signal values are smoothed in both the positive and negative directions with respect to the pixel of interest, so that the estimation of the contour direction is less affected by noise around contours such as jaggies, mosquito noise, and dot disturbance. However, if n is made large and reference pixels far from the pixel of interest are taken into account, the partial derivative values, which are inherently local values, may not be calculated correctly. Therefore, n is set to a value smaller than a predetermined maximum value, for example, an integer value equal to the enlargement ratio, an integer value obtained by rounding up the fractional part of the enlargement ratio, or a value larger than either of these integer values by a predetermined amount.
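 The difference filters of equations (6) and (7) can be sketched as follows, under the assumption that the filter simply sums the reference pixels with the +1/0/-1 weights described above and that image borders are handled by edge padding (border handling is not specified in this text); the function name partial_derivatives is illustrative only.

    import numpy as np

    def partial_derivatives(img, n=2):
        # Gx, Gy per equations (6)/(7): +1 weights on the positive side of the pixel
        # of interest, -1 on the negative side, 0 on the centre column (for Wx) or
        # row (for Wy), over a (2n+1) x (2n+1) reference region.
        pad = np.pad(img.astype(np.float64), n, mode="edge")
        h, w = img.shape
        gx = np.zeros((h, w))
        gy = np.zeros((h, w))
        for dv in range(-n, n + 1):          # v' (y offset)
            for du in range(-n, n + 1):      # u' (x offset)
                shifted = pad[n + dv:n + dv + h, n + du:n + du + w]
                gx += np.sign(du) * shifted  # Wx(u', v') = sign(u')
                gy += np.sign(dv) * shifted  # Wy(u', v') = sign(v')
        return gx, gy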
 The contour direction estimation unit 411 quantizes the contour direction θ(i, j) calculated based on the calculated x-direction partial derivative Gx(i, j) and y-direction partial derivative Gy(i, j), and calculates a quantized contour direction D(i, j) representing the quantized contour direction. The contour direction estimation unit 411 uses, for example, equation (8) when calculating the quantized contour direction D(i, j).
 [Equation (8)]
 In equation (8), round(...) is a rounding function that gives an integer value obtained by rounding off the fractional part of a real number. Nd is a constant representing the number of quantized contour directions. The number of quantized contour directions Nd is, for example, any value between 8 and 32. That is, the quantized contour direction D(i, j) is obtained by rounding the value of the contour direction θ divided by the quantization interval π/Nd, and is represented by an integer from 0 to Nd - 1. This restricts the degrees of freedom of the contour direction θ and reduces the load of the processing described later.
 In order to avoid division by zero, when the absolute value |Gx(i, j)| of the x-direction partial derivative Gx(i, j) is smaller than a predetermined minute real value (for example, 10^-6), tan^-1 is set to π/2. Depending on the arithmetic processing system, a tangent function taking the two arguments Gx and Gy may be available to avoid the error due to the above division and the division by zero, and tan^-1 may be obtained by using it.
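 Equations (5) and (8) together can be sketched as follows. Since the equation images are not reproduced in this text, arctan2 is used as an assumed implementation of the angle whose tangent is Gx/Gy (which also covers the division-by-zero case mentioned above), and the exact rounding and wrap-around of equation (8) are likewise assumed; the function name quantized_contour_direction is illustrative only.

    import numpy as np

    def quantized_contour_direction(gx, gy, n_d=16):
        theta = np.arctan2(gx, gy)                      # angle whose tangent is Gx / Gy (eq. (5), assumed)
        d = np.rint(theta / (np.pi / n_d)).astype(int)  # divide by quantization interval pi/Nd and round
        return np.mod(d, n_d)                           # wrap into 0 .. Nd-1 (eq. (8), assumed)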
 The contour direction estimation unit 411 outputs quantized contour direction information representing the calculated quantized contour direction D(i, j) to the reference region load processing unit 412.
(Reference region load processing unit 412)
 The reference region load processing unit 412 determines reference region load information for each pixel of interest (i, j) based on the quantized contour direction D(i, j) of each pixel represented by the quantized contour direction information input from the contour direction estimation unit 411. The reference region load information is information representing a weighting factor R(D(i, j), u′, v′) for each reference pixel (u′, v′) belonging to the reference region centered on a certain pixel of interest (i, j). This weighting factor may be called the reference region load. The size of the reference region in the reference region load processing unit 412 is determined in advance so as to be equal to the size of the reference region in the direction evaluation unit 414.
 The reference region load processing unit 412 determines the weighting factors R(D(i, j), u′, v′) based on the quantized contour direction D(i, j) of each pixel represented by the quantized contour direction information input from the contour direction estimation unit 411. The reference region load processing unit 412 selects, from among the weighting factors R(D(i, j), u′, v′), the weighting factor for each reference pixel (u′, v′) for which R(D(i, j), u′, v′) takes a value other than zero. Such a reference pixel is located in the contour direction, or in a direction approximating the contour direction, from the pixel of interest (i, j), and is therefore called a contour direction reference pixel. The reference region load processing unit 412 generates reference region load information representing the weighting factor R(D(i, j), u′, v′) of each contour direction reference pixel, and outputs the generated reference region load information to the product-sum operation unit 415.
 The reference region load processing unit 412 extracts, from the input quantized contour direction information, quantized contour direction information representing the quantized contour direction D(u, v) of each contour direction reference pixel, and outputs the extracted quantized contour direction information to the direction evaluation unit 414. The reference region load processing unit 412 also extracts, from the Y signal output from the RGB-YUV conversion unit 15, a luminance signal representing the signal value Y(u, v) of each contour direction reference pixel, and outputs the extracted luminance signal to the preprocessing unit 413.
(Preprocessing unit 413)
 The preprocessing unit 413 extracts, for each pixel of interest (i, j), from the luminance signal input from the reference region load processing unit 412, a luminance signal representing the signal value Y(u, v) of each reference pixel (u, v) belonging to the reference region centered on that pixel of interest (i, j). The preprocessing unit 413 subtracts the signal value Y(i, j) of the pixel of interest from each signal value Y(u, v) of the extracted luminance signal to calculate the difference signal values Y(u, v) - Y(i, j). The preprocessing unit 413 generates a difference signal representing the calculated difference signal values and outputs the generated difference signal to the product-sum operation unit 415.
(Direction evaluation unit 414)
 The direction evaluation unit 414 calculates, for each pixel of interest, a direction evaluation value for each reference pixel belonging to the reference region centered on that pixel of interest, based on the quantized contour direction D(i, j) of each pixel. Here, the direction evaluation unit 414 determines the direction evaluation value of a reference pixel so that the smaller the difference between the quantized contour direction D(i, j) of the pixel of interest (i, j) and the quantized contour direction D(u, v) of the reference pixel (u, v), the larger the direction evaluation value. For example, the direction evaluation unit 414 calculates the difference value ΔD = D(u, v) - D(i, j) between the quantized contour direction D(i, j) for the pixel of interest (i, j) and the quantized contour direction D(u, v) for the reference pixel (u, v). When the difference value ΔD is 0, that is, when D(u, v) and D(i, j) are equal, the direction evaluation value F(|ΔD|) is set to the maximum value 1. When the difference value ΔD is not 0, that is, when D(u, v) and D(i, j) are not equal, the direction evaluation value F(|ΔD|) is set to the minimum value 0.
 The direction evaluation unit 414 may also determine the direction evaluation value F(ΔD) so that it becomes larger as the quantized contour direction D(i, j) of the pixel of interest (i, j) approximates the quantized contour direction D(u, v) of the reference pixel (u, v), that is, as the absolute value |ΔD| of the difference value ΔD becomes smaller. For example, the direction evaluation unit 414 sets F(0) = 1, F(1) = 0.75, F(2) = 0.5, F(3) = 0.25, and F(|ΔD|) = 0 (|ΔD| > 3).
 However, when one of the quantized contour direction D(i, j) and the quantized contour direction D(u, v) is larger than Nd/2 and the other is smaller than Nd/2, the absolute value |ΔD| becomes large even though the contour directions they indicate are close to each other, so an incorrect direction evaluation value F(ΔD) may be calculated. For example, when D(i, j) = 7 and D(u, v) = 0, |ΔD| = 7. However, the difference in the quantized contour direction is π/8, and this should originally be treated as |ΔD| = 1. Therefore, when one of the quantized contour direction D(i, j) and the quantized contour direction D(u, v) is larger than Nd/2, the direction evaluation unit 414 calculates a correction value by adding Nd to the value of the other quantized contour direction, and calculates the absolute value of the difference between the calculated correction value and the one quantized contour direction. By using the absolute value calculated in this way as |ΔD| described above, the intended direction evaluation value is determined. By using the direction evaluation value F(|ΔD|) in the product-sum operation unit 415 described later, the influence of reference pixels (u, v) whose contour direction differs from the contour direction at the pixel of interest (i, j) can be ignored or given little weight.
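 The direction evaluation value with the wrap-around correction described above can be sketched as follows; the table F(0) = 1, F(1) = 0.75, F(2) = 0.5, F(3) = 0.25, F(>3) = 0 is the example given in the text, and the function name direction_evaluation is illustrative only.

    def direction_evaluation(d_target, d_ref, n_d=8):
        # Wrap-around correction: if one direction exceeds Nd/2 and the other does not,
        # add Nd to the other one before taking the difference (e.g. D=7, D=0, Nd=8 -> |dD|=1).
        if d_target > n_d / 2 and d_ref < n_d / 2:
            d_ref += n_d
        elif d_ref > n_d / 2 and d_target < n_d / 2:
            d_target += n_d
        diff = abs(d_ref - d_target)
        table = {0: 1.0, 1: 0.75, 2: 0.5, 3: 0.25}
        return table.get(diff, 0.0)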
In the direction evaluation unit 414 as well, the size of the reference region to which the reference pixels (u, v) belong, that is, the number of pixels in the horizontal or vertical direction, need only be 2n + 1 or more. The size of the reference region in the direction evaluation unit 414 may differ from the size of the reference region in the contour direction estimation unit 411; for example, the reference region in the direction evaluation unit 414 may be 7 pixels in each of the horizontal and vertical directions while the reference region in the contour direction estimation unit 411 is 5 pixels in each direction. The direction evaluation unit 414 outputs, for each target pixel (i, j), direction evaluation value information representing the direction evaluation value F(|ΔD|) of each reference pixel (u, v) to the product-sum operation unit 415.
(Product-sum operation unit 415)
For each target pixel (i, j), the product-sum operation unit 415 receives the direction evaluation value information from the direction evaluation unit 414, the reference region load information from the reference region load processing unit 412, and the difference signal from the preprocessing unit 413. The product-sum operation unit 415 performs a product-sum operation on the difference signal values Y(u, v) − Y(i, j) represented by the difference signal, the direction evaluation values F(|ΔD|) represented by the direction evaluation value information, and the reference region loads R(D(i, j), u′, v′) represented by the reference region load information, to calculate a smoothed difference value ΔY(i, j). The product-sum operation unit 415 uses, for example, Equation (9) to calculate the smoothed difference value ΔY(i, j).
ΔY(i, j) = (1 / N(i, j)) · Σ_{(u′, v′) ∈ Rs(D(i, j))} F(|ΔD|) · R(D(i, j), u′, v′) · (Y(u, v) − Y(i, j))    …(9)
In Equation (9), Rs(D(i, j)) represents a function (region selection function) that selects the contour-direction reference pixels from among the reference pixels of the target pixel (i, j); that is, (u′, v′) ∈ Rs(D(i, j)) denotes a contour-direction reference pixel. Equation (9) computes, for each reference pixel, the product of the direction evaluation value F(|ΔD|), the reference region load R(D(i, j), u′, v′), and the difference signal value Y(u, v) − Y(i, j), sums these products over the reference pixels belonging to the reference region, and divides the sum by the reference area N(i, j) (the number of reference pixels belonging to the reference region) to obtain the smoothed difference value ΔY(i, j). The product-sum operation unit 415 generates a smoothed difference signal representing the calculated smoothed difference value ΔY(i, j), and outputs the generated smoothed difference signal (ΔY(i, j)) to the composition operation unit 417 as the first difference signal.
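A minimal sketch of Equation (9) for a single target pixel, assuming the evaluation values, loads, and selected contour-direction reference pixels are already available; the function and argument names are illustrative, not taken from the patent.

```python
def smoothed_difference(Y, F, R, contour_refs, n_ref, i, j):
    """Equation (9) for one target pixel (i, j).

    Y            : 2-D luminance array (e.g. a NumPy array), indexed Y[u, v]
    F            : mapping (u, v) -> direction evaluation value F(|dD|)
    R            : mapping (u, v) -> reference region load R(D(i, j), u', v')
    contour_refs : reference pixel coordinates selected by Rs(D(i, j))
    n_ref        : reference area N(i, j), the number of pixels in the reference region
    """
    total = sum(F[(u, v)] * R[(u, v)] * (Y[u, v] - Y[i, j]) for (u, v) in contour_refs)
    return total / n_ref  # smoothed difference value dY(i, j)
```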
In this way, the difference image generation unit 410 of the present embodiment calculates the contour direction at each pixel, and smooths the target pixel using the signal values of those reference pixels that lie in the contour direction of the target pixel or in a direction approximating it and whose own contour direction is the same as or close to that of the target pixel.
Returning to FIG. 6, the description continues. The parameter adjustment unit 416 multiplies the per-pixel first difference signal (ΔY(i, j)) output from the product-sum operation unit 415 of the difference image generation unit 410 by the parameter K output from the parameter output unit 30, and outputs the resulting second difference signal (K · ΔY(i, j)) to the composition operation unit 417.
The composition operation unit 417 outputs, for each pixel, a Y′ signal obtained by adding the second difference signal output from the parameter adjustment unit 416 to the Y signal output from the RGB-YUV conversion unit 15. When the parameter K = 0, a Y signal equivalent to the luminance signal of the original image is output, and the effect of the noise reduction processing is smallest. When the parameter K = 1, a Y signal obtained by adding the smoothed difference signal to the luminance signal of the original image is output, and the effect of the noise reduction processing is largest. In other words, noise reduction is suppressed for fine-texture portions of the input image signal such as trees, while noise is reduced for the portions showing other objects. When the parameter K satisfies 0 < K < 1, a second difference signal corresponding to the likelihood for the detection target object is added to the Y signal, so artifacts caused by the parameter K switching abruptly between 1 and 0 are reduced.
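Per pixel, this composition reduces to the following one-liner (a sketch; K is the per-pixel parameter from the parameter output unit 30):

```python
def compose(Y, dY, K):
    """Y'(i, j) = Y(i, j) + K(i, j) * dY(i, j).

    K = 0 leaves the original luminance untouched (no noise reduction),
    K = 1 applies the full smoothed difference, and intermediate values
    blend the two, avoiding abrupt switching artifacts.
    """
    return Y + K * dY
```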
In the first embodiment described above, a scalar value of the signal value of each pixel in the image signal is obtained with reference to the plurality of components constituting the color of the detection target object in the color space (UV plane) corresponding to the characteristics of the detection target object. A likelihood corresponding to the magnitude of the scalar value is then set for each pixel, and image processing is performed using a parameter corresponding to the likelihood of the pixel. With this configuration, even for a pixel for which it is ambiguous whether it substantially matches the detection target object, image processing is performed using a parameter corresponding to the likelihood of that pixel, so image processing suited to the pixel is applied. As a result, artifacts caused by the image processing can be reduced compared with the case where image processing is performed using a uniform parameter for the entire image.
<Second Embodiment>
In the first embodiment, the signal value of a pixel having the color (feature) of the detection target object is expressed as a scalar value on the UV plane, and the likelihood for the detection target object is obtained based on that scalar value. Another method of obtaining the likelihood for the detection target object is described below.
FIG. 8 is a block diagram showing a schematic configuration of the image processing apparatus according to this embodiment. The image processing apparatus 1a differs from the first embodiment in that its setting unit 20B includes a distance calculation processing unit 24 and a maximum value calculation processing unit 25, unlike the setting unit 20A. Each part of the setting unit 20B is described below.
The distance calculation processing unit 24 calculates the distance R between each of the reference points of a plurality of regions on the UV plane corresponding to the color (feature) of the detection target object and the position indicated by the signal value of the pixel. The distance calculation processing unit 24 converts the value of the distance R so that the converted value becomes larger as the distance R to the reference point becomes smaller. In this embodiment, the distance R is an example of a scalar value for the color of the detection target object.
In this embodiment, as shown for example in FIGS. 9A to 9C, substantially elliptical regions f1, f2, and f3 on the UV plane are set as the range of the color (here, green) corresponding to the detection target object. In this example, the diameters of the region f1 in the U-axis and V-axis directions are (S_U1, S_V1) = (20, 40), and the center point q1 is the reference point of the region f1; the coordinates (U_1, V_1) of the reference point q1 on the UV plane are (16, 32). The diameters of the region f2 are (S_U2, S_V2) = (10, 20), and the center point q2 is the reference point of the region f2; the coordinates of the reference point q2 on the UV plane are (U_2, V_2) = (32, 64). The diameters of the region f3 are (S_U3, S_V3) = (5, 10), and the center point q3 is the reference point of the region f3; the coordinates of the reference point q3 on the UV plane are (U_3, V_3) = (16, 32). The larger the diameters S_Un and S_Vn of the regions f1 to f3, the larger each region becomes, and the wider the range that is judged to be close to the reference point.
The distance calculation processing unit 24 calculates the distance between the signal value (U, V) of each pixel in the YUV signal output from the feature space conversion unit 10 and the reference points q1, q2, and q3 using Equation (10) below.
[Equation (10): distance Rn between the pixel signal value (U, V) and the reference point qn]
The distance calculation processing unit 24 converts the distance Rn between the signal value (U, V) of each pixel and the reference points q1, q2, and q3 using Equation (11) or Equation (12) below. Equations (11) and (12) are functions whose value increases as the distance Rn decreases.
[Equations (11) and (12): conversions of the distance Rn whose value increases as Rn decreases]
The maximum value calculation processing unit 25 selects the largest distance Rn′ from the distances R1′, R2′, and R3′ calculated by the distance calculation processing unit 24 and outputs it to the numerical value limiting processing unit 23. Selecting the maximum distance Rn′ effectively takes the logical sum of the regions f1, f2, and f3. As shown in FIG. 10, the region f obtained by superimposing the regions f1, f2, and f3 becomes a region with a higher likelihood for the detection target object than the region outside f.
As in the first embodiment, the numerical value limiting processing unit 23 converts the distance Rn′ output from the maximum value calculation processing unit 25 into a likelihood P that falls within the range between a predetermined maximum value Pmax and minimum value Pmin of the likelihood. That is, when the distance Rn′ is equal to or larger than the predetermined maximum value, it is converted as Rn′ = Pmax = P; when the distance Rn′ is equal to or smaller than the predetermined minimum value, it is converted as Rn′ = Pmin = P; and when the distance Rn′ is larger than the minimum value and smaller than the maximum value, it is converted as Rn′ = P.
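The per-pixel flow of this embodiment can be sketched as follows: distance to each reference point, conversion so that a smaller distance gives a larger value, maximum over the regions, then clamping. The elliptical-distance form and the reciprocal conversion are assumptions here, since Equations (10) to (12) are given only as images; the reference points and diameters follow the example above, and Pmax = 16 follows the first embodiment.

```python
import numpy as np

# Reference points and diameters from the example regions f1, f2, f3: (U_n, V_n, S_Un, S_Vn)
REGIONS = [
    (16, 32, 20, 40),
    (32, 64, 10, 20),
    (16, 32, 5, 10),
]
P_MIN, P_MAX = 0.0, 16.0

def likelihood(u, v):
    scores = []
    for (un, vn, su, sv) in REGIONS:
        # Assumed analogue of Eq. (10): diameters scale the axes so that larger
        # regions treat pixels as "near" over a wider range of (U, V).
        r = np.hypot((u - un) / su, (v - vn) / sv)
        # Assumed analogue of Eq. (11)/(12): grows as the distance shrinks.
        scores.append(1.0 / (1.0 + r))
    r_max = max(scores)                                   # maximum value calculation unit 25
    return float(np.clip(r_max * P_MAX, P_MIN, P_MAX))   # numerical value limiting unit 23
```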
In the second embodiment described above, the likelihood for the detection target object is expressed as a multi-valued quantity based on the distances between reference points, which are set in a plurality of regions indicating the predetermined range of the color (feature) of the detection target object in the color space (UV plane), and the position indicated by the signal value of the pixel in the color space. With this configuration, even when the signal value of a pixel has the feature of the detection target object but it is ambiguous whether the pixel matches the detection target object, image processing can be performed using a parameter corresponding to the likelihood. As a result, image processing appropriate to the object is applied, and artifacts caused by the image processing can be reduced compared with the case where image processing is performed using a uniform parameter for the entire image.
Although embodiments of the present invention have been described above, the embodiments described above are merely examples for carrying out the present invention. Accordingly, the present invention is not limited to the embodiments described above, and the embodiments may be modified as appropriate without departing from the spirit of the invention. Modifications of the present invention are described below.
<Modifications>
(1) In the first embodiment described above, an example was described in which the image processing unit 40 performs smoothing processing on the signal value of each pixel using the parameter K in the noise reduction processing unit 41; however, sharpening processing may instead be performed for each pixel using a parameter corresponding to the likelihood. FIG. 11 is a block diagram showing the configuration of the image processing apparatus according to this modification. As shown in FIG. 11, the image processing apparatus 1b includes a parameter output unit 30A and an image processing unit 40A in place of the parameter output unit 30 and the image processing unit 40 of the first embodiment.
As in the first embodiment, the parameter output unit 30A obtains the mixing ratio α corresponding to the likelihood P output from the setting unit 20A, calculates a parameter K′ (an example of an image processing parameter) in the same manner as Equation (4) above, and outputs it to the image processing unit 40A. In this modification, sharpening processing is applied as the image processing, so the sharpening effect is strengthened for pixels of the detection target object, which contains fine textures such as trees. For this purpose, the parameter output unit 30A is preset with K1′ = 1 as the first parameter K1′ used for the sharpening processing of the detection target object and K2′ = 0 as the second parameter K2′ used for the sharpening processing of other objects.
When the likelihood P = Pmax = 16, α = 1 and the parameter K′ = K1′. In this case the likelihood P is at its maximum and the pixel being processed is highly likely to substantially match the color of the detection target object, so a parameter K′ (K1′ = 1) suited to sharpening the detection target object is output for such a pixel. When the likelihood P = Pmin = 0, α = 0 and the parameter K′ = K2′. In this case the likelihood P is at its minimum and the pixel being processed is highly likely not to match the color of the detection target object, so a parameter K′ (K2′ = 0) suited to other objects is output for such a pixel. When, for example, the likelihood P is midway between the maximum and minimum values (= (Pmax + Pmin)/2), α = 0.5 and the parameter K′ = (K1′ + K2′)/2. In this case the likelihood P is an intermediate value and it is ambiguous whether the pixel being processed matches the color of the detection target object, so a parameter K′ (= 0.5) that averages the first parameter K1′ and the second parameter K2′ is output for such a pixel.
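The blending just described can be sketched as follows, assuming the mixing ratio α is linear in the likelihood P, which is consistent with the three examples above although the exact form of Equation (4) is not reproduced here.

```python
def sharpening_parameter(P, P_min=0.0, P_max=16.0, K1=1.0, K2=0.0):
    """Blend the object parameter K1' and the non-object parameter K2'
    using the mixing ratio alpha derived from the likelihood P."""
    alpha = (P - P_min) / (P_max - P_min)   # 0 at P = Pmin, 1 at P = Pmax
    return alpha * K1 + (1.0 - alpha) * K2

# At the midpoint P = 8 this returns (K1' + K2') / 2 = 0.5, matching the example.
```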
The image processing unit 40A includes a sharpening processing unit 42. FIG. 12 is a diagram showing a schematic configuration of the sharpening processing unit 42. As shown in FIG. 12, the sharpening processing unit 42 includes a high frequency component output unit 420, a parameter adjustment unit 428, and a composition operation unit 429. The high frequency component output unit 420 performs sharpening processing on the Y signal output from the RGB-YUV conversion unit 15 and outputs a high-frequency component value NL_2D of the Y signal.
FIG. 13 is a block diagram showing an example of the high frequency component output unit 420. As shown in FIG. 13, the high frequency component output unit 420 includes a contour direction estimation unit 411, a low-pass filter unit 420a, and a high frequency extension unit 427. The low-pass filter unit 420a includes a preprocessing unit 413, a direction evaluation unit 414, a reference region load processing unit 412, a product-sum operation unit 425, and a composition operation unit 426. In FIG. 13, components identical to those of the first embodiment are given the same reference numerals as in the first embodiment. Description of those identical components is omitted below, and the product-sum operation unit 425, the composition operation unit 426, and the high frequency extension unit 427 are described.
(Product-sum operation unit 425)
For each target pixel (i, j), the product-sum operation unit 425 receives the direction evaluation value information from the direction evaluation unit 414, the reference region load information from the reference region load processing unit 412, and the luminance signal from the preprocessing unit 413. Based on the direction evaluation values F(|ΔD|) represented by the direction evaluation value information, the reference region loads R(D(i, j), u′, v′) represented by the reference region load information, and the signal values Y(u, v) represented by the luminance signal, the product-sum operation unit 425 calculates the product-sum value S(i, j) using, for example, Equation (13).
S(i, j) = Σ_{(u′, v′)} F(|ΔD|) · R(D(i, j), u′, v′) · Y(u, v)    …(13)
Equation (13) computes, for each reference pixel, the product of the direction evaluation value F(|ΔD|), the reference region load R(D(i, j), u′, v′), and the signal value Y(u, v) represented by the luminance signal, and sums these products over the reference pixels belonging to the reference region to obtain the product-sum value S(i, j). In other words, Equation (13) can also be viewed as calculating the product-sum value S(i, j) by weighted addition of the signal values Y(u, v), using the product of the direction evaluation value F(|ΔD|) and the reference region load R(D(i, j), u′, v′) as the weighting coefficient. The product of the direction evaluation value F(|ΔD|) and the reference region load R(D(i, j), u′, v′) is sometimes called the direction evaluation region load. The product-sum operation unit 425 also calculates the load area C(i, j), based on the direction evaluation values F(|ΔD|) represented by the direction evaluation value information and the reference region loads R(D(i, j), u′, v′) represented by the reference region load information, using, for example, Equation (14).
C(i, j) = Σ_{(u′, v′)} F(|ΔD|) · R(D(i, j), u′, v′)    …(14)
Equation (14) computes the product of the direction evaluation value F(|ΔD|) and the reference region load R(D(i, j), u′, v′) for each reference pixel and sums these products over the reference pixels belonging to the reference region to obtain the load area C(i, j). In other words, the load area C(i, j) is the reference region load R(D(i, j), u′, v′) weighted, for each reference pixel, by the direction evaluation value F(|ΔD|), and thus represents the number of reference pixels that are substantially referenced in the product-sum operation of Equation (13); Equation (14) calculates the load area C(i, j) as the sum of the direction evaluation region loads over the reference region. The product-sum operation unit 425 also calculates, as the reference area N(i, j), the sum over the reference pixels belonging to the reference region of the reference region loads R(D(i, j), u′, v′) represented by the reference region load information. The reference area N(i, j) represents the number of reference pixels that are nominally referenced in the product-sum operation of Equation (13). The product-sum operation unit 425 outputs, to the composition operation unit 426, product-sum value information representing the product-sum value S(i, j) calculated for each target pixel (i, j), load area information representing the load area C(i, j), and reference area information representing the reference area N(i, j).
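A sketch of Equations (13) and (14) plus the reference area N(i, j) for one target pixel, assuming the per-reference-pixel values are available as arrays; the names are illustrative.

```python
import numpy as np

def product_sum_terms(Y_ref, F_ref, R_ref):
    """Per-target-pixel quantities of the product-sum operation unit 425.

    Y_ref : signal values Y(u, v) of the reference pixels (1-D array)
    F_ref : direction evaluation values F(|dD|) for the same pixels
    R_ref : reference region loads R(D(i, j), u', v') for the same pixels
    """
    weights = F_ref * R_ref                 # direction evaluation region load
    S = float(np.sum(weights * Y_ref))      # Eq. (13): product-sum value S(i, j)
    C = float(np.sum(weights))              # Eq. (14): load area C(i, j)
    N = float(np.sum(R_ref))                # reference area N(i, j)
    return S, C, N
```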
(Composition operation unit 426)
The composition operation unit 426 receives the product-sum value information, the load area information, and the reference area information from the product-sum operation unit 425. The composition operation unit 426 divides the product-sum value S(i, j) represented by the product-sum value information by the load area C(i, j) represented by the load area information to calculate the direction-smoothed value Y′(i, j). The calculated direction-smoothed value Y′(i, j) thus represents a signal value smoothed over those reference pixels that lie in the quantized contour direction of the target pixel (i, j) or in a direction approximating it and whose contour direction is equal or close to the contour direction of the target pixel.
The composition operation unit 426 divides the load area C(i, j) by the reference area N(i, j) represented by the reference area information to calculate the mixing ratio w(i, j). The mixing ratio w(i, j) represents the proportion, among the reference pixels lying in the quantized contour direction of the target pixel (i, j) or in a direction approximating it, of reference pixels whose contour direction is equal or close to the contour direction of the target pixel.
The composition operation unit 426 performs weighted addition (composition) of the direction-smoothed value Y′(i, j) and the signal value Y(i, j) represented by the input luminance signal (Y signal), with the weights w(i, j) and (1 − w(i, j)) respectively, to calculate the low-pass signal value Y″(i, j). This weighted addition is expressed by Equation (15).
Y″(i, j) = w(i, j) · Y′(i, j) + (1 − w(i, j)) · Y(i, j)    …(15)
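Continuing the earlier sketch, the composition operation unit 426 combines these quantities as in Equation (15):

```python
def low_pass_value(S, C, N, Y_center):
    """Low-pass signal value Y''(i, j) from Equation (15)."""
    Y_dir = S / C          # direction-smoothed value Y'(i, j)
    w = C / N              # mixing ratio w(i, j)
    return w * Y_dir + (1.0 - w) * Y_center
```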
The composition operation unit 426 outputs the luminance signal Y″ representing the calculated low-pass signal value Y″(i, j) to the high frequency extension unit 427 and the composition operation unit 429. FIG. 14 is a block diagram showing the configuration of the high frequency extension unit 427. As shown in FIG. 14, the high frequency extension unit 427 includes a two-dimensional high-pass filter unit 381 and a nonlinear operation unit 382.
(Two-dimensional high-pass filter unit 381)
The two-dimensional high-pass filter unit 381 receives the luminance signal Y″ from the composition operation unit 426 and the quantized contour direction information from the contour direction estimation unit 411. From the low-pass signal value Y″(i, j) represented by the luminance signal Y″, the two-dimensional high-pass filter unit 381 calculates a contour direction component signal W_2D representing the high-frequency component along the quantized contour direction D(i, j) represented by the quantized contour direction information. The two-dimensional high-pass filter unit 381 outputs the calculated contour direction component signal W_2D to the nonlinear operation unit 382.
FIG. 15 is a schematic diagram showing the configuration of the two-dimensional high-pass filter unit 381. The two-dimensional high-pass filter unit 381 includes a delay memory 3811, a direction selection unit 3812, a multiplication unit 3813, a filter coefficient memory 3814, and a composition operation unit 3815. The delay memory 3811 includes 2n + 1 delay elements 3811-1 to 3811-(2n+1), each of which delays the input signal by W_x samples. The delay elements 3811-1 to 3811-(2n+1) output, to the direction selection unit 3812, delayed signals each containing the signal values of 2n + 1 samples, obtained by delaying the input signal by W_x − 2n, W_x − 2n + 1, …, W_x samples, respectively.
The delay elements 3811-1 to 3811-(2n+1) are connected in series. The low-pass signal value Y″(i, j) represented by the luminance signal Y″ is input to one end of the delay element 3811-1, and the other end of the delay element 3811-1 outputs the signal delayed by W_x samples to one end of the delay element 3811-2. One end of each of the delay elements 3811-2 to 3811-(2n+1) receives the signal delayed by W_x to 2n · W_x samples from the other end of the delay elements 3811-1 to 3811-2n, respectively. The other end of the delay element 3811-(2n+1) outputs the signal delayed by (2n + 1) · W_x samples to the direction selection unit 3812. Accordingly, signal values of the luminance signal Y″ for (2n + 1) · (2n + 1) pixels that are adjacent to one another in the horizontal and vertical directions around the target pixel are output to the direction selection unit 3812. The pixels corresponding to these signal values are the reference pixels belonging to the reference region centered on the target pixel.
Based on the quantized contour direction D(i, j) of each pixel represented by the quantized contour direction information input from the contour direction estimation unit 411, the direction selection unit 3812 selects, from the target pixel (i, j), reference pixels (u′, v′) that lie in the quantized contour direction D or in a direction approximating it. The selected reference pixels are, for example, reference pixels satisfying the following conditions: (1) the line segment extending from the center of the target pixel (i, j) in the quantized contour direction passes through the area of the reference pixel (u′, v′); (2) the horizontal or vertical length over which that line segment passes through the pixel is 0.5 pixel or longer; and (3) one reference pixel is selected for each of the 2n + 1 coordinates in at least one of the horizontal and vertical directions. A selected reference pixel is hereinafter sometimes called a selection reference pixel.
The direction selection unit 3812 determines whether the direction in which one reference pixel has been selected for each of the 2n + 1 coordinates (hereinafter sometimes called the selected coordinate direction) is the horizontal direction, the vertical direction, or both. The direction selection unit 3812 outputs the signal values of the 2n + 1 selection reference pixels to the multiplication unit 3813, for example in descending order of the index in the selected coordinate direction. Alternatively, the direction selection unit 3812 may include a storage unit in which selection reference pixel information representing the selection reference pixels for each quantized contour direction is stored in advance, and may select the selection reference pixel information corresponding to the quantized contour direction D(i, j) of the target pixel (i, j); the direction selection unit 3812 then outputs to the multiplication unit 3813 the signal values of the selection reference pixels represented by the selected selection reference pixel information.
The multiplication unit 3813 includes 2n + 1 multipliers 3813-1 to 3813-(2n+1). The multipliers 3813-1 to 3813-(2n+1) multiply the signal values input from the direction selection unit 3812 by the filter coefficients read from the filter coefficient memory 3814, and output the products to the composition operation unit 3815. The signal values are input so that the order of the multipliers 3813-1 to 3813-(2n+1) matches the order of the signal values input to them (descending order of the index in the selected coordinate direction). The filter coefficient memory 3814 stores in advance the 2n + 1 filter coefficients a_{D−n}, a_{D−n+1}, …, a_{D+n} used by the multipliers 3813-1 to 3813-(2n+1). The filter coefficients a_{D−n}, a_{D−n+1}, …, a_{D+n} are high-pass filter coefficients that realize a high-pass filter through a product-sum operation with the signal values; these high-pass filter coefficients may be values that have a high-pass characteristic similar to the filter coefficients a_{D−n}, a_{D−n+1}, …, a_{D+n} described above and that block the direct-current component.
The composition operation unit 3815 adds the 2n + 1 products input from the multiplication unit 3813 and generates a contour direction component signal W_2D whose signal value is their sum. The composition operation unit 3815 outputs the generated contour direction component signal W_2D to the nonlinear operation unit 382.
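What the two-dimensional high-pass filter unit computes per target pixel can be sketched as a one-dimensional convolution along the quantized contour direction; the offsets and the zero-sum coefficients below are placeholders, not the patent's values.

```python
import numpy as np

def contour_highpass(Y, i, j, direction_offsets, coeffs):
    """Contour direction component W_2D for the target pixel (i, j).

    direction_offsets : list of 2n+1 (di, dj) offsets tracing the quantized
                        contour direction D(i, j) through the reference region
    coeffs            : 2n+1 high-pass filter coefficients a_{D-n} ... a_{D+n}
    """
    samples = np.array([Y[i + di, j + dj] for (di, dj) in direction_offsets])
    return float(np.dot(coeffs, samples))

# Placeholder coefficients summing to zero, so the direct-current component is blocked.
coeffs = np.array([-0.25, -0.25, 1.0, -0.25, -0.25])
```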
The nonlinear operation unit 382 performs a nonlinear operation on the signal value represented by the contour direction component signal W_2D input from the two-dimensional high-pass filter unit 381, and outputs the high-frequency component value NL_2D represented by the calculated nonlinear output value to the parameter adjustment unit 428.
(Nonlinear operation unit 382)
FIG. 16 is a block diagram showing the configuration of the nonlinear operation unit 382. The nonlinear operation unit 382 includes an absolute value calculation unit 2821-A, a power operation unit 2822-A, a filter coefficient memory 2823-A, a multiplication unit 2824-A, a composition operation unit 2825-A, a sign detection unit 2826-A, and a multiplication unit 2827-A.
The nonlinear operation unit 382 outputs, as the nonlinear output value NLA, an odd function of order l (where l is an integer larger than 1) of the input signal value W (= W_2D): sgn(W) · (c_1 · |W| + c_2 · |W|² + … + c_l · |W|^l), where c_1, c_2, …, c_l are the first-, second-, …, l-th-order coefficients.
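The odd-polynomial nonlinearity itself is simple to state in code (a sketch; the coefficients are design choices, and W = 0 is treated as positive, matching the sign detection unit described below):

```python
def nonlinear_output(W, coeffs):
    """NLA = sgn(W) * (c1*|W| + c2*|W|**2 + ... + cl*|W|**l)."""
    a = abs(W)
    poly = sum(c * a ** (k + 1) for k, c in enumerate(coeffs))
    sgn = -1.0 if W < 0 else 1.0   # W = 0 is treated as positive
    return sgn * poly

# Example: f(W) = sgn(W) * |W|**2  (all coefficients except the highest are zero)
print(nonlinear_output(-3.0, [0.0, 1.0]))   # -> -9.0
```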
The absolute value calculation unit 2821-A calculates the absolute value |W| of the signal value W indicated by the input direction component signal and outputs the calculated absolute value |W| to the power operation unit 2822-A. The power operation unit 2822-A includes l − 1 multipliers 2822-A-2 to 2822-A-l and outputs the absolute value |W| input from the absolute value calculation unit 2821-A to the multiplication unit 2824-A.
The multiplier 2822-A-2 multiplies together the absolute values |W| input from the absolute value calculation unit 2821-A to calculate the absolute square value |W|², and outputs the calculated absolute square value |W|² to the multiplier 2822-A-3 and the multiplication unit 2824-A.
The multipliers 2822-A-3 to 2822-A-(l−1) multiply the absolute square value |W|² to the absolute (l−2)-th power value |W|^(l−2) input from the multipliers 2822-A-2 to 2822-A-(l−2) by the absolute value |W| input from the absolute value calculation unit 2821-A, to calculate the absolute cube value |W|³ to the absolute (l−1)-th power value |W|^(l−1), respectively, and output them to the multipliers 2822-A-4 to 2822-A-l and to the multiplication unit 2824-A. The multiplier 2822-A-l multiplies the absolute (l−1)-th power value |W|^(l−1) input from the multiplier 2822-A-(l−1) by the absolute value |W| input from the absolute value calculation unit 2821-A to calculate the absolute l-th power value |W|^l, and outputs the calculated absolute l-th power value |W|^l to the multiplication unit 2824-A.
The filter coefficient memory 2823-A includes l storage elements 2823-A-1 to 2823-A-l, which store the first- to l-th-order coefficients c_1 to c_l, respectively.
The multiplication unit 2824-A includes l multipliers 2824-A-1 to 2824-A-l. The multipliers 2824-A-1 to 2824-A-l multiply the absolute value |W| to the absolute l-th power value |W|^l input from the power operation unit 2822-A by the first- to l-th-order coefficients c_1 to c_l stored in the storage elements 2823-A-1 to 2823-A-l, respectively, to calculate the products.
The multipliers 2824-A-1 to 2824-A-l output the calculated products to the composition operation unit 2825-A. The composition operation unit 2825-A adds the products input from the multipliers 2824-A-1 to 2824-A-l to calculate a composite value, and outputs the calculated composite value to the multiplication unit 2827-A.
The sign detection unit 2826-A detects the sign, that is, whether positive or negative, of the signal value W indicated by the direction component signal input from the two-dimensional high-pass filter unit 381. When the signal value is smaller than 0, the sign detection unit 2826-A outputs −1 to the multiplication unit 2827-A as the sign value; when the signal value is 0 or larger, it outputs 1 to the multiplication unit 2827-A as the sign value.
The multiplication unit 2827-A multiplies the composite value input from the composition operation unit 2825-A by the sign value input from the sign detection unit 2826-A to calculate the high-frequency component value NL_2D, and outputs the calculated high-frequency component value to the parameter adjustment unit 428 (see FIG. 12). The nonlinear operation unit 382 having the configuration described above has a comparatively large circuit scale, but it can adjust the output high-frequency component value using a small number of coefficients. When the coefficient values c_1 to c_(l−1) other than the highest order l are 0, the components related to the product-sum operations of those orders may be omitted; the components that can be omitted are the storage elements 2823-A-1 to 2823-A-(l−1) and the multipliers 2824-A-1 to 2824-A-(l−1) of the nonlinear operation unit 382. For example, when f(W) = sgn(W)|W|², the storage element 2823-A-1 and the multiplier 2824-A-1 may be omitted.
Returning to FIG. 12, the description continues. The parameter adjustment unit 428 multiplies the high-frequency component value NL_2D (first high-frequency component value) output from the high frequency component output unit 420 by the parameter K′ output from the parameter output unit 30A, and outputs the resulting high-frequency component value (K′ × NL_2D) (hereinafter, the second high-frequency component value) to the composition operation unit 429.
(Composition operation unit 429)
The composition operation unit 429 adds (composes) the second high-frequency component value output from the parameter adjustment unit 428 and the low-pass signal value Y″(i, j) output from the composition operation unit 426 of the high frequency component output unit 420 to calculate the high-frequency extension signal value Z(i, j). The composition operation unit 429 generates a luminance signal Z representing the calculated high-frequency extension signal value Z(i, j), replaces the luminance signal Y input from the feature space conversion unit 10 with the luminance signal Z, and outputs an image signal containing the luminance signal Z and the color difference signals U and V.
When the parameter K′ = 1, that is, for pixels that substantially match the detection target object, a luminance signal Z in which the high-frequency component is composed with the low-pass signal is output. When the parameter K′ = 0, that is, for pixels that do not match the detection target object, a luminance signal Z in which the high-frequency component is not composed with the low-pass signal is output. As a result, fine-texture portions of the input image signal such as trees are sharpened more strongly than other portions, the fine lines of the trees are emphasized, and dense foliage is rendered more appropriately. When the parameter K′ satisfies 0 < K′ < 1, a luminance signal Z in which a high-frequency component value corresponding to the likelihood for the detection target object is composed with the low-pass signal is output, so artifacts caused by the parameter K′ switching abruptly between 1 and 0 are reduced.
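Per pixel, the sharpening path therefore amounts to the following (a sketch):

```python
def sharpen_pixel(Y_lowpass, NL_2D, K_prime):
    """Z(i, j) = Y''(i, j) + K'(i, j) * NL_2D(i, j).

    K' = 1 adds the full contour-direction high-frequency component,
    K' = 0 leaves the low-pass signal unchanged, and intermediate values
    scale the emphasis with the likelihood for the detection target object.
    """
    return Y_lowpass + K_prime * NL_2D
```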
(2) In the first embodiment described above, the noise reduction processing unit 41 of the image processing unit 40 performs noise reduction processing, and in modification (1) above the sharpening processing unit 42 of the image processing unit 40A performs sharpening processing; however, the image processing unit may be configured to perform both noise reduction processing and sharpening processing. For example, the configuration of the image processing unit 40 of the first embodiment may additionally include the high frequency extension unit 427 of the sharpening processing unit 42. In this case, the high frequency extension unit 427 calculates the high-frequency component value using the luminance signal Y′(i, j) output from the noise reduction processing unit 41 and the quantized contour direction D(i, j) output from the contour direction estimation unit 411.
(3) In the first and second embodiments described above, examples were described in which the image processing parameter is adjusted according to a likelihood based on a scalar value on the UV plane derived from the pixel signal value or on the distance between the color of the pixel on the UV plane and a reference point of the color of the detection target object; however, the adjustment may also be made using the number of taps of the filter used for the noise reduction processing or sharpening processing, its coefficients, an overshoot limit value, or the like.
(4) In the first and second embodiments described above, the processing of identifying the pixels having the detection target object in the image signal and of converting the signal value of a pixel into a scalar value on the UV plane or into the distance to the detection target object in order to set the likelihood may be performed using a fuzzy inference device.
(5) In the first and second embodiments described above, a color space was described as an example of the feature space of the detection target object, but the feature space may be a frequency space composed of frequency components (low-frequency components and high-frequency components). In this case, when the detection target object has a specific frequency component, the likelihood of each pixel may be set based on the frequency component of the signal value of each pixel in the input signal and on that specific frequency component. For example, the setting unit may obtain the absolute value Gx of the difference between the luminance value Y(i, j) of the pixel to be processed and the luminance value Y(i + 1, j) of the horizontally adjacent pixel, and the absolute value Gy of the difference between the luminance value Y(i, j + 1) of the vertically adjacent pixel and the luminance value Y(i, j), and may use the sum of the absolute values Gx and Gy (hereinafter referred to as the absolute value G) as the frequency component (an example of a scalar value) of the pixel to be processed. In this case, the frequency component of the pixel is a feature quantity in the feature space and also represents a scalar value. The setting unit may set the likelihood for the detection target object according to the absolute value G; for example, the likelihood may be set for each pixel so that the likelihood increases as the difference between the absolute value G and a threshold increases. An FIR filter may be used as the difference circuit that extracts the differences between the luminance values of the pixels.
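A sketch of this gradient-based frequency feature follows; the mapping from G to a likelihood via the threshold is an assumption, since the text only requires the likelihood to grow with G minus the threshold.

```python
import numpy as np

def frequency_likelihood(Y, threshold, P_max=16.0):
    """Per-pixel scalar G = |Gx| + |Gy| mapped to a likelihood."""
    Gx = np.abs(np.diff(Y, axis=1, append=Y[:, -1:]))   # difference with the horizontally adjacent pixel
    Gy = np.abs(np.diff(Y, axis=0, append=Y[-1:, :]))   # difference with the vertically adjacent pixel
    G = Gx + Gy
    return np.clip(G - threshold, 0.0, P_max)            # larger G above the threshold -> larger likelihood
```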
As another example, the setting unit may use a pattern matching circuit to obtain the degree of coincidence (an example of a scalar value) between the frequency pattern of the detection target object and the frequency pattern of the signal values of each pixel, and may set the likelihood according to the degree of coincidence.
(6) In the first and second embodiments described above, a color space was described as an example of the feature space of the detection target object, and in modification (5) above a frequency space was described as an example; however, the feature space is not limited to these as long as it is a multidimensional scalar field composed of components representing the features of the detection target object. For example, the setting unit may obtain, for each pixel, a scalar value for the color of the detection target object in the same manner as in the first and second embodiments, and may also obtain the frequency component of each pixel as a scalar value in the same manner as in modification (5). In this example, the feature space is composed of the color and frequency components representing the features of the detection target object. For each pixel, the setting unit may set the likelihood based on the scalar value for the color in the same manner as in the first or second embodiment, and when the frequency component, which is a scalar value, is equal to or larger than a threshold, the setting unit may adjust that likelihood according to the difference from the threshold. Even in cases where the color of the detection target object alone makes it ambiguous whether the pixel to be processed belongs to the detection target object, this configuration makes it possible to judge appropriately whether it does.
In the above example, the frequency component of a pixel is both a feature quantity and a scalar value; however, for example, the feature quantities (UV values and frequency component) in a feature space composed of YUV components and a frequency (F) component may instead be obtained as scalar values. In this case, the scalar values (S1′, S2′, S3′) are obtained from the feature quantities of each pixel by Equations (16), (17), and (18) below. In this way, the weights (vectors) for converting the feature quantities into scalar values may be set so that each feature quantity of a pixel becomes a scalar value, or the weights (vectors) may be set based on the feature quantities of the detection target object in the feature space as in the first and second embodiments.
[Equations (16) to (18): conversion of the feature quantities of each pixel into the scalar values S1′, S2′, and S3′]
The present invention is industrially applicable as an image processing apparatus mounted in a display device such as a television or a PC.

Claims (9)

1. An image processing apparatus comprising:
an image processing unit that performs image processing on the signal value of each pixel in an image signal using an input image processing parameter;
a setting unit that sets, using the signal value of each pixel in the image signal, a likelihood of the pixel with respect to a predetermined object; and
a parameter output unit that calculates and outputs the image processing parameter based on the likelihood set by the setting unit.
2. The image processing apparatus according to claim 1, wherein the parameter output unit calculates the image processing parameter by mixing a predetermined first parameter, for use in the image processing for the object, and a predetermined second parameter, for use in the image processing for other objects, at a mixing ratio based on the likelihood.
3. The image processing apparatus according to claim 1 or 2, wherein the setting unit converts the signal value of each pixel into feature quantities representing the magnitudes of a plurality of components that represent features of the object, in a feature space composed of those components, obtains a scalar value based on the feature quantities, and sets the likelihood based on the scalar value.
  4.  The image processing apparatus according to claim 3, wherein the feature space is a color space including a first component and a second component constituting a color.
  5.  The image processing apparatus according to claim 3 or 4, wherein the feature space has a frequency space including a predetermined frequency component included in the object.
  6.  The image processing apparatus according to claim 4, wherein the setting unit converts the signal value of each pixel into the feature quantities in the color space, and obtains, using a vector representing a range of the object in the feature space, the scalar value in which the first component and the second component are mainly represented.
  7.  The image processing apparatus according to claim 4, wherein the setting unit converts the signal value of each pixel into the feature quantities in the color space, and obtains, as the scalar value, a distance between the feature quantities and each reference point defined in a plurality of regions representing a range of the object in the color space.
  8.  The image processing apparatus according to claim 5, wherein the setting unit obtains, from the signal value of each pixel, a frequency component of the pixel in the frequency space, and uses the obtained frequency component as the scalar value of the pixel.
  9.  The image processing apparatus according to any one of claims 1 to 8, wherein the image processing unit performs, as the image processing, at least one of first image processing that performs smoothing processing on the signal values in the image signal and second image processing that performs sharpening processing on the signal values in the image signal.
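As an illustration of the parameter mixing recited in claim 2, the following is a minimal Python sketch (assuming NumPy) that blends a predetermined first parameter for the object and a predetermined second parameter for other objects at a per-pixel mixing ratio derived from the likelihood. The linear blend and the example values are assumptions; the claim only requires that the mixing ratio be based on the likelihood.

```python
import numpy as np

def mix_parameters(likelihood, param_object, param_other):
    """likelihood: per-pixel array in [0, 1]; params: scalars or arrays."""
    ratio = np.clip(likelihood, 0.0, 1.0)          # mixing ratio from the likelihood
    return ratio * param_object + (1.0 - ratio) * param_other

# Example: weaker sharpening gain where the pixel likely belongs to the object.
likelihood = np.array([[0.0, 0.5, 1.0]])
gain_map = mix_parameters(likelihood, param_object=0.2, param_other=1.0)
# gain_map -> [[1.0, 0.6, 0.2]]
```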
PCT/JP2013/081988 2012-12-04 2013-11-28 Image processing device WO2014087909A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012265454 2012-12-04
JP2012-265454 2012-12-04

Publications (1)

Publication Number Publication Date
WO2014087909A1 true WO2014087909A1 (en) 2014-06-12

Family

ID=50883328

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/081988 WO2014087909A1 (en) 2012-12-04 2013-11-28 Image processing device

Country Status (1)

Country Link
WO (1) WO2014087909A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003290170A (en) * 2002-03-29 2003-10-14 Konica Corp Image processing apparatus, method for image processing, program, and recording medium
JP2005141477A (en) * 2003-11-06 2005-06-02 Noritsu Koki Co Ltd Image sharpening process and image processor implementing this process
JP2011087087A (en) * 2009-10-14 2011-04-28 Olympus Corp Apparatus, program and method for processing image signal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 13860323; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 13860323; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: JP