WO2013125198A1 - Image processor, imaging device, and image processing program - Google Patents

Image processor, imaging device, and image processing program

Info

Publication number
WO2013125198A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
color
target image
correction
Prior art date
Application number
PCT/JP2013/000860
Other languages
French (fr)
Japanese (ja)
Inventor
Shinya Ebihara (海老原 慎哉)
Original Assignee
Nikon Corporation (株式会社ニコン)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corporation
Publication of WO2013125198A1 publication Critical patent/WO2013125198A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof
    • H04N 23/84: Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/843: Demosaicing, e.g. interpolating colour pixel values
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60: Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 25/61: Noise processing originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N 25/611: Correction of chromatic aberration

Definitions

  • the present invention relates to an image processing device, an imaging device, and an image processing program.
  • An image of a subject formed by an optical system is affected by the chromatic aberration of that system, in particular by axial chromatic aberration.
  • A known technique corrects axial chromatic aberration by smoothing the color plane of a reference color component so that the mismatch in MTF characteristics between the color components is adjusted and the color difference from the color planes of the other color components is minimized (see Patent Document 1, etc.).
  • An object of the present invention is to provide a technique capable of correcting axial chromatic aberration with high accuracy while taking the color structure of the image into account.
  • One aspect of an image processing apparatus exemplifying the present invention includes: image smoothing means that smoothes a target image having pixel values of a plurality of color components to generate a plurality of smoothed images with different degrees of smoothing;
  • calculation means that computes, for each pixel of the target image, the color difference, which is the difference between a predetermined color component of the target image and a different color component of a smoothed image, and the variance of that color difference;
  • adjustment means that determines the color component with the highest sharpness at each pixel from the variance of the color difference, and calculates an adjustment amount for adjusting the sharpness between the color components based on the determined component; and
  • correction means that determines the image structure at each pixel of the target image based on the color difference and the adjustment amount, and corrects the sharpness between the color components of the pixel with the adjustment amount according to the determination result.
  • the adjustment means may determine the color component having the highest sharpness in the pixel of the target image based on the minimum value of the color difference variance.
  • The calculation means may calculate a texture amount indicating the magnitude of the image structure in the target image by taking the difference between the target image and one of the smoothed images for each identical color component at each pixel of the target image; the correction means may then determine the image structure at each pixel based on the color difference, the adjustment amount, and the texture amount.
  • the adjustment means may extract a blurred image area where the texture amount is equal to or less than a predetermined value, and smooth the minimum value of the color difference variance in the pixels of the blurred image area.
  • The correction means may determine whether the image structure at a pixel of the target image is a color boundary based on the rate of change of the color difference at that pixel and the size of the adjustment amount, and may exclude pixels determined to be a color boundary from the correction target.
  • The correction means may compare the adjustment amount of a pixel of the target image with the adjustment amounts of adjacent pixels to determine whether the image structure at that pixel is an isolated structure, and may exclude pixels determined to be isolated structures from the correction target.
  • The correction means may determine, based on the texture amount and the correction amount of a pixel of the target image, whether the image structure at that pixel is a pseudo axial chromatic aberration region that merely resembles axial chromatic aberration, and may exclude pixels determined to belong to such a region from the correction target.
  • The correction means may determine, based on the distribution of each color component, whether the image structure of a pixel determined to be a color boundary is in fact color blur caused by a density difference around a saturation region, and may include pixels determined to be such color blur in the correction target.
  • Color difference correction means may be provided that corrects the pixel values of a pixel whose sharpness has been adjusted so that, in the color difference space, their direction matches the direction of the color difference of the pixel values before the sharpness adjustment.
  • The image smoothing means may smooth the target image at at least two different smoothing levels to generate a set of smoothed images, and may mix the images of the set at predetermined ratios to generate the plurality of smoothed images.
  • Another aspect of an image processing apparatus exemplifying the present invention includes: determination means that determines the image structure at a pixel of a target image having pixel values of a plurality of color components; and correction means that determines the color component with the highest sharpness at that pixel, calculates an adjustment amount for adjusting the sharpness of the color components based on the determined component, and adjusts the sharpness of the color components at the pixel with the adjustment amount according to the determination result.
  • An aspect of an imaging apparatus that exemplifies the present invention includes imaging means for capturing a subject and generating a target image having pixel values of a plurality of color components, and the image processing apparatus of the present invention.
  • One aspect of an image processing program exemplifying the present invention comprises: an input procedure that reads a target image having pixel values of a plurality of color components; an image smoothing procedure that smoothes the target image to generate a plurality of smoothed images with different degrees of smoothing; and a calculation procedure that computes, for each pixel of the target image, the color difference, which is the difference between a predetermined color component of the target image and a different color component of a smoothed image, and the variance of that color difference, the subsequent processing being based on the variance of the color difference.
  • Another aspect of an image processing program exemplifying the present invention comprises: a determination procedure that determines the image structure at a pixel of a target image having pixel values of a plurality of color components; and a procedure that determines the color component with the highest sharpness at that pixel, calculates an adjustment amount for adjusting the sharpness of the color components based on the determined component, and adjusts the sharpness of the color components at the pixel with the adjustment amount according to the determination result.
  • FIG. 1 is a block diagram showing the configuration of a computer that operates as an image processing apparatus according to an embodiment of the present invention.
  • Diagram explaining the color structure.
  • Flowchart showing the image processing operations performed by the computer according to the embodiment.
  • FIG. 1 is a block diagram showing a configuration of a computer 10 that operates as an image processing apparatus according to an embodiment of the present invention.
  • a computer 10 shown in FIG. 1A includes a CPU 1, a storage unit 2, an input / output interface (input / output I / F) 3, and a bus 4.
  • the CPU 1, the storage unit 2, and the input / output I / F 3 are connected via the bus 4 so that information can be transmitted.
  • the computer 10 is connected to an output device 30 for displaying the progress of image processing and processing results, and an input device 40 for receiving input from the user via the input / output I / F 3.
  • a general liquid crystal monitor, a printer, or the like can be used for the output device 30, and a keyboard, a mouse, or the like can be appropriately selected and used for the input device 40.
  • The target image processed by the computer 10 has pixel values of the red (R), green (G), and blue (B) color components at each pixel. That is, the target image of the present embodiment is assumed to be an image captured by a three-plate color digital camera, or an image captured by a single-plate color digital camera and then subjected to color interpolation processing. The target image is assumed to be affected by axial chromatic aberration of the imaging lens when captured by a digital camera or the like, so that the sharpness differs between the color components.
  • the CPU 1 is a processor that comprehensively controls each unit of the computer 10. For example, the CPU 1 reads an image processing program stored in the storage unit 2 based on an instruction from the user received by the input device 40.
  • The CPU 1 operates as an image smoothing unit 20, a calculation unit 21, an adjustment unit 22, a correction unit 23, and a color difference correction unit 24 by executing the image processing program (FIG. 1(b)), and performs axial chromatic aberration correction processing on the target image.
  • the CPU 1 displays the result of image processing for the image on the output device 30.
  • the image smoothing unit 20 uses, for example, N known Gaussian filters with different degrees of smoothing (blur index), and smoothes the target image according to the blur index of each Gaussian filter to generate N smooth images.
  • N is a natural number of 2 or more.
  • the blur index in the present embodiment refers to the size of the blur radius, for example.
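As a concrete sketch of this smoothing step, the following code generates N smoothed versions of one color plane with a separable Gaussian. The mapping from the blur index k to a Gaussian sigma of 0.5·k, and the use of plain `numpy.convolve`, are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel with a radius of about 3 * sigma."""
    radius = max(1, int(3.0 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def smooth_plane(plane, sigma):
    """Separable Gaussian blur of one color plane (rows, then columns)."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, plane)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def smoothed_set(plane, n):
    """Return the N smoothed images for blur indexes k = 1..N."""
    return [smooth_plane(plane, 0.5 * k) for k in range(1, n + 1)]
```

A larger blur index gives a stronger smoothing, so the variance of the smoothed plane decreases as k grows.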
  • The calculation unit 21 uses the target image and the N smoothed images to calculate, at each pixel position, the value of the color difference plane (color difference) corresponding to each blur index, from the absolute value of the difference between a color component of the target image and a different color component of a smoothed image, together with its standard deviation (variance). In addition, the calculation unit 21 of the present embodiment takes the difference between the target image and the smoothed image with the smallest blur index (that is, the lowest degree of smoothing) for each identical color component, and calculates a texture amount for each color component indicating the magnitude of the image structure of the target image.
  • the adjustment unit 22 applies a known interpolation method to the standard deviation distribution with respect to the blur index at the pixel position of each pixel of the target image, and obtains the blur index that gives the minimum standard deviation at the pixel position.
  • the adjustment unit 22 determines a color component having the highest sharpness based on a blurring index (hereinafter referred to as a minimum blurring index) that gives the minimum standard deviation of each color difference plane obtained in each pixel.
  • The adjustment unit 22 calculates a correction value (adjustment amount) for adjusting the sharpness between the color components at the pixel position of each pixel of the target image based on the determined color component with the highest sharpness.
  • The correction unit 23 determines whether the image structure at the pixel position of each pixel of the target image is one for which axial chromatic aberration can be corrected. For a correctable image structure, the correction unit 23 corrects the sharpness between the color components at each pixel using the correction value calculated by the adjustment unit 22.
  • FIG. 2 shows a typical color structure considered in this embodiment.
  • FIG. 2A shows, as an example, a target image obtained by imaging a subject in which a single black line is placed on a white background, as the distributions of the pixel values of the R component (dotted line), G component (solid line), and B component (broken line) in the scanning direction orthogonal to the black line.
  • FIG. 2B shows, as an example, a target image obtained by imaging a subject having a color boundary between red and white, as the distributions of the pixel values of the R component (dotted line), G component (solid line), and B component (broken line) in the scanning direction orthogonal to the color boundary.
  • FIG. 2C shows, as an example, a target image of a subject with a bright light source such as a streetlight, affected by so-called purple fringing, as the distributions of the pixel values of the R component (dotted line), G component (solid line), and B component (broken line) in the scanning direction passing through the center of the light source.
  • Purple fringing is purple color bleeding that occurs around high-luminance areas (saturation regions) in which the pixel values of each color component are saturated by a large amount of light, such as around a light source like a streetlight or a reflection on water.
  • the imaging lens of the camera that captured each target image in FIG. 2 has axial chromatic aberration and is focused on the G component.
  • the G component reproduces the color structure well, whereas the R and B components are blurred in color structure due to axial chromatic aberration.
  • For this reason, the black line portion is blurred in green or magenta.
  • The sharpness adjustment, that is, the correction of axial chromatic aberration, is based on the premise that the distributions of the respective color components of the original subject match one another.
  • In such a region, the texture amount calculated by the calculation unit 21 shows a distribution that differs greatly between the color components, whereas the minimum blur indexes of the color differences calculated by the adjustment unit 22 are similar to one another and show a gentle, uniform distribution.
  • the correction unit 23 of the present embodiment determines whether the color structure is a color boundary as shown in FIG. 2B based on the distribution of the minimum blurring index of each color difference and the size of the correction value.
  • When the correction unit 23 determines that the color structure is a color boundary, the axial chromatic aberration correction is not performed for the pixel at the color boundary in this embodiment. This suppresses the discoloration that would arise from applying axial chromatic aberration correction to a color boundary.
  • In FIG. 2C the saturation region differs for each color component: the G component falls off first with distance from the light source, and the R component is distributed over the widest range. This distribution produces the purple coloring.
  • The black line portion shown in FIG. 2A resembles purple fringing in that it blurs in green or magenta due to axial chromatic aberration. Therefore, in the present embodiment, the same correction processing as for axial chromatic aberration is applied to a purple fringe image area. For this purpose, the correction unit 23 obtains the distribution of the pixel values of each color component in a peripheral area centered on the pixel of the target image, or over the entire target image, and from that distribution obtains the saturation region of the pixel values of each color component.
  • The correction unit 23 extracts, as the purple fringe region, the saturation region of the most widely distributed color component (the R component in FIG. 2C) together with the area widened by Δ from the end of that saturation region.
  • the correcting unit 23 determines whether the color structure is a purple fringe based on whether the pixel position is included in the extracted purple fringe region.
  • the correction unit 23 performs axial chromatic aberration correction processing based on the correction value calculated by the adjustment unit 22 for each color component of the pixel determined to be purple fringe.
  • Also considered are an isolated structure, in which the pixel value of a single pixel shows an abnormal correction value due to shot noise or the like, and regions that show a color structure similar to an axial chromatic aberration image area but would be over-corrected, degrading the image quality of the target image, if axial chromatic aberration correction were applied (hereinafter referred to as pseudo axial chromatic aberration regions). In this embodiment, axial chromatic aberration correction processing is not performed on these image structures. These image structure determination processes are described later.
  • The color difference correction unit 24 corrects the pixel value of each color component of a pixel whose sharpness has been adjusted so that its direction in the color difference space matches the direction of the color difference before the adjustment, suppressing the color change caused by the axial chromatic aberration correction processing.
  • the storage unit 2 records an image processing program and the like for correcting axial chromatic aberration in the target image along with the target image. Images, programs, and the like stored in the storage unit 2 can be appropriately referred to from the CPU 1 via the bus 4.
  • a storage device such as a general hard disk device or a magneto-optical disk device can be selected and used for the storage unit 2.
  • Although the storage unit 2 is incorporated in the computer 10 in this embodiment, it may be an external storage device. In that case, the storage unit 2 is connected to the computer 10 via the input / output I / F 3.
  • the user uses the input device 40 to input an image processing program command or double-click the icon of the program displayed on the output device 30 to instruct the CPU 1 to start the image processing program.
  • the CPU 1 receives the instruction via the input / output I / F 3 and reads and executes the image processing program stored in the storage unit 2.
  • The CPU 1 starts the process from step S10 of the flowchart.
  • Step S10: The CPU 1 reads the target image designated by the user via the input device 40 as the target of axial chromatic aberration correction.
  • Step S11 The image smoothing unit 20 of the CPU 1 smoothes the read target image according to the blur index of each Gaussian filter, and generates N smooth images.
  • The target image itself is also treated as one of the smoothed images, so the total number of smoothed images in this embodiment is (N + 1).
  • Step S12: The calculation unit 21 calculates the color difference plane Cr of the R and G components, the color difference plane Cb of the B and G components, and the color difference plane Crb of the R and B components, using the target image and each smoothed image.
  • Specifically, the calculation unit 21 takes the absolute value of the difference between the pixel value G0(i, j) of the G component, a predetermined color component of the target image, and the pixel value Rk(i, j) of the R component, a different color component of the k-th smoothed image, at each pixel position, and calculates the color difference plane of equation (1):
  • Cr[−k](i, j) = |G0(i, j) − Rk(i, j)| … (1)
  • (i, j) indicates the coordinates of the pixel position of each pixel of the target image.
  • the negative blur index k indicates that the color difference surface Cr is obtained by sequentially blurring the R surface on the negative side.
  • Similarly, the calculation unit 21 takes the absolute value of the difference between the pixel value R0(i, j) of the R component of the target image and the pixel value Gk(i, j) of the G component of the k-th smoothed image at each pixel (i, j), and calculates the color difference plane of equation (2):
  • Cr[k](i, j) = |R0(i, j) − Gk(i, j)| … (2)
  • A positive blur index k indicates that the color difference plane Cr is obtained by successively blurring the G plane (the positive side).
  • The calculation unit 21 similarly calculates the color difference plane Cb of the B and G components and the color difference plane Crb of the R and B components for each pixel (i, j).
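Under the notation of equations (1) and (2), the signed indexing of the Cr planes can be sketched as follows. The dict-of-planes layout is an illustrative assumption, not the patent's data structure.

```python
import numpy as np

def cr_planes(R0, G0, blurred_R, blurred_G):
    """Cr[-k] = |G0 - Rk| (R blurred k times, eq. (1)),
    Cr[0]  = |G0 - R0| (no blurring),
    Cr[+k] = |R0 - Gk| (G blurred k times, eq. (2))."""
    planes = {0: np.abs(G0 - R0)}
    for k, Rk in enumerate(blurred_R, start=1):   # negative side: blur R
        planes[-k] = np.abs(G0 - Rk)
    for k, Gk in enumerate(blurred_G, start=1):   # positive side: blur G
        planes[k] = np.abs(R0 - Gk)
    return planes
```

The Cb and Crb planes would be built the same way with the corresponding component pairs.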
  • Step S13 The calculation unit 21 calculates the standard deviations DEVr, DEVb, and DEVrb of each color difference plane in the pixel (i, j) for each blur index using the color difference planes Cr, Cb, and Crb calculated in step S12.
  • The calculation unit 21 calculates the standard deviation using the color difference planes Cr, Cb, and Crb of the pixels in a reference area AR1 of 15 pixels × 15 pixels (indicated by diagonal lines) centered on the calculation target pixel (i, j).
  • In this embodiment the size of the reference area AR1 is 15 pixels × 15 pixels, but it should be determined appropriately according to the processing capability of the CPU 1 and the required accuracy of the axial chromatic aberration correction; it is preferably set in a range of up to about 30 pixels.
  • the calculation unit 21 calculates standard deviations DEVr, DEVb, and DEVrb of each color difference plane using the following equations (7) to (9).
  • Here, k′ is an integer blur index from −N to N.
  • (l, m) and (x, y) represent pixel positions in the reference area AR1, respectively.
  • TB(i, j) = B0(i, j) − B1(i, j) … (10)
  • The calculation unit 21 sums, for each color component, the differences obtained by equation (10) over the pixels in a 5 pixel × 5 pixel range centered on the pixel (i, j), and calculates the value normalized per pixel as the texture amount of the pixel (i, j).
  • Alternatively, the calculation unit 21 may use each difference of equation (10) directly as the texture amount of each color component.
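The 5 × 5 per-pixel normalisation above can be sketched as follows. Taking the absolute value of the equation (10) differences is an assumption made here so that opposite-signed differences do not cancel; the text only says "adding the differences".

```python
import numpy as np

def texture_amount(plane0, plane1, i, j, half=2):
    """Texture amount of pixel (i, j): the per-pixel mean of the
    differences T = plane0 - plane1 (equation (10), absolute value
    assumed) over the 5 x 5 area centered on (i, j)."""
    t = np.abs(plane0 - plane1)          # magnitude of eq. (10) per pixel
    win = t[max(i - half, 0):i + half + 1,
            max(j - half, 0):j + half + 1]
    return float(win.mean())             # normalized per pixel
```

Here `plane0` is a color plane of the target image and `plane1` the same plane of the least-smoothed image.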
  • Step S16 The adjusting unit 22 of the CPU 1 extracts a blurred image region (hereinafter referred to as a background region) other than the focused region from the target image based on the texture amount obtained in Step S14. For this purpose, the adjustment unit 22 determines whether the texture amount of the pixel (i, j) is equal to or less than a predetermined value d. The adjustment unit 22 extracts the pixel (i, j) as the background area when the texture amount is equal to or less than the predetermined value d. The adjustment unit 22 performs such extraction processing for all the pixels of the target image.
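The background extraction of step S16 is a simple thresholding of the texture amount; combined with the later smoothing of the minimum blur index inside the background, it might be sketched as below. The plain 5 × 5 mean used for the smoothing is an assumption; the text only says the distribution is smoothed.

```python
import numpy as np

def background_mask(texture, d):
    """Pixels whose texture amount is at most d form the blurred
    background region."""
    return texture <= d

def smooth_in_background(s, mask, half=2):
    """Replace the minimum-blur-index value s of each background pixel
    by the mean of s over the (2*half+1)^2 neighborhood."""
    out = s.copy()
    h, w = s.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                win = s[max(i - half, 0):i + half + 1,
                        max(j - half, 0):j + half + 1]
                out[i, j] = win.mean()
    return out
```

Pixels outside the mask keep their original interpolated minimum blur index.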
  • Step S17: The adjustment unit 22 determines the color component with the highest sharpness at the pixel (i, j) of the target image based on the minimum blur indexes αr, αb, and αrb obtained in steps S15 and S16, and calculates a correction value for adjusting the sharpness between the color components based on the determined component.
  • When the minimum blur index αr is positive, the adjustment unit 22 determines that the G component has the higher sharpness at the pixel (i, j).
  • When the minimum blur index αr is negative, the adjustment unit 22 determines that the R component has the higher sharpness at the pixel (i, j).
  • The adjustment unit 22 likewise determines the color component with the higher sharpness from the signs of the minimum blur indexes αb and αrb.
  • Next, the adjustment unit 22 determines whether a single color component with the highest sharpness can be identified for each pixel. When the same color component is chosen by two of the three color difference plane results, the adjustment unit 22 takes that component as the one with the highest sharpness at the pixel. On the other hand, when, for example, the R, G, and B components are each chosen once based on the blur indexes αr, αb, and αrb of the respective color difference planes, the color component with the highest sharpness at the pixel (i, j) cannot be determined uniquely.
  • In that case, the adjustment unit 22 treats the pixel as indefinite and does not perform axial chromatic aberration correction for it.
  • Alternatively, the color component with the highest sharpness may be determined by comparing the sharpness of the color components determined for each color difference plane.
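The majority vote over the three color difference planes can be sketched as follows. Only the Cr signs (positive means G sharper, negative means R sharper) are stated in the text; the sign conventions for Cb and Crb below are assumptions by analogy.

```python
def sharpest_component(alpha_r, alpha_b, alpha_rb):
    """Majority vote over the three color-difference planes.
    Assumed conventions: alpha_r > 0 -> G else R; alpha_b > 0 -> G else B;
    alpha_rb > 0 -> B else R. Returns None when the three verdicts name
    three different components (the 'indefinite' case, excluded from
    correction)."""
    votes = ['G' if alpha_r > 0 else 'R',
             'G' if alpha_b > 0 else 'B',
             'B' if alpha_rb > 0 else 'R']
    for c in 'RGB':
        if votes.count(c) >= 2:   # agreed on by two of the three planes
            return c
    return None
```

For example, if both αr and αb are positive, two planes vote for G and the G component is chosen regardless of αrb.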
  • The adjustment unit 22 then calculates a correction value for adjusting the sharpness between the color components based on the determined component with the highest sharpness. For example, the adjustment unit 22 obtains the blur index s that gives the true minimum standard deviation at the pixel (i, j) from the distribution of the standard deviation DEVr of the color difference plane Cr. That is, the minimum blur index αr obtained by the calculation unit 21 in step S15 is not necessarily the blur index that truly minimizes the standard deviation DEVr, as indicated by the dotted line.
  • Therefore, the adjustment unit 22 applies interpolation to the three points at the minimum blur index αr and its neighbors αr − 1 and αr + 1 to obtain a more accurate minimum blur index (interpolation point) s.
  • The coefficient a is the slope, a = (DEVr[αr − 1](i, j) − DEVr[αr](i, j)) / ((αr − 1) − αr).
  • The minimum blur index s is then expressed by equation (12):
  • s = ((αr − 1) + αr) / 2 + (DEVr[αr − 1](i, j) − DEVr[αr](i, j)) / (2a) … (12)
  • When the slope is taken on the opposite side, a = (DEVr[αr + 1](i, j) − DEVr[αr](i, j)) / ((αr + 1) − αr).
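A literal transcription of the interpolation formulas might look like the following. It reproduces the slope and equation (12) exactly as printed; given the OCR damage around these equations, whether the sign conventions match the original patent is uncertain, so treat this as a sketch of the printed formulas only.

```python
def refined_min_index(dev, alpha):
    """Interpolated minimum blur index s per the printed formulas.
    dev: maps integer blur index -> standard deviation DEVr at (i, j).
    alpha: integer blur index at which dev is smallest."""
    # slope a between alpha-1 and alpha (the text also allows the
    # alpha..alpha+1 side in the opposite case)
    a = (dev[alpha - 1] - dev[alpha]) / ((alpha - 1) - alpha)
    # equation (12)
    s = ((alpha - 1) + alpha) / 2 + (dev[alpha - 1] - dev[alpha]) / (2 * a)
    return s
```
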
  • Furthermore, the adjustment unit 22 smoothes the distribution of the interpolation points s, the minimum blur indexes of the color difference planes, within the background region extracted in step S16. Since the background region is a blurred image area, the minimum blur index of each color difference plane should be distributed gently and uniformly there. If a pixel whose minimum blur index changes greatly exists in the background region, the axial chromatic aberration correction cannot be applied appropriately to it, and it degrades the image quality of the target image as noise. To avoid this, the adjustment unit 22 of the present embodiment smoothes the interpolation point s of each pixel (i, j) in the background region using the interpolation points in the 5 pixel × 5 pixel area centered on the pixel (i, j).
  • The adjustment unit 22 then performs a known weighted addition of Gαr(i, j) and G(αr + 1)(i, j), the G planes for the blur indexes αr and αr + 1, using the interpolation point s, and calculates the correction value G′(i, j).
  • Step S18: The correction unit 23 determines whether the color structure of the pixel (i, j) is a color boundary based on the minimum blur indexes αr, αb, αrb and the correction value. First, the correction unit 23 obtains the rate of change of each of the minimum blur indexes αr, αb, αrb at the pixel (i, j). For this purpose, it applies, for example, a known differential coefficient formula to the minimum blur indexes of adjacent pixels over a width of about 5 pixels centered on the pixel (i, j), and takes the resulting slope as the rate of change of the minimum blur index of each color difference at the pixel (i, j).
  • The correction unit 23 then determines whether those rates of change are equal to or greater than a threshold ε1 and whether the correction value is equal to or greater than a threshold ε2.
  • The values of the thresholds ε1 and ε2 are preferably determined according to the accuracy required of the axial chromatic aberration correction processing, the processing capability of the CPU 1, and the like.
  • When the determination result is true, the correction unit 23 determines that the color structure of the pixel (i, j) is a color boundary and proceeds to step S19 (YES side). When the determination result is false, the correction unit 23 proceeds to step S20 (NO side).
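The two-threshold test of step S18 reduces to a short predicate. Requiring all three change rates to reach ε1 is an assumption; the garbled text only says "their rate of change".

```python
def is_color_boundary(change_rates, correction_value, eps1, eps2):
    """Step S18 predicate: the pixel is treated as a color boundary when
    the change rates of the minimum blur indexes (alpha_r, alpha_b,
    alpha_rb) all reach eps1 and the correction value reaches eps2."""
    return (all(r >= eps1 for r in change_rates)
            and correction_value >= eps2)
```

A true result routes the pixel to the purple-fringe check of step S19; a false result skips to step S20.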
  • Step S19: The correction unit 23 determines whether the color structure of the pixel (i, j) determined to be a color boundary in step S18 is purple fringing. For this purpose, the correction unit 23 uses the pixel values of each color component of the pixel (i, j) and its surrounding pixels, or of the entire target image, and obtains for each color component the saturation region in which the pixel value is saturated (for an image with 256 gradations, the region where the pixel value is 255). The correction unit 23 takes the widest of the saturation regions of the respective color components, here the saturation region of the R component, combines it with the area widened by Δ from the end of that saturation region, and treats the combined area as the purple fringe region.
  • The value of the width Δ in this embodiment is, for example, about 10 pixels.
  • The size of the width Δ is preferably determined according to the processing capability of the CPU 1, the accuracy of the axial chromatic aberration correction processing, and the degree of fall-off from the saturated state in each color component.
  • the correction unit 23 determines whether or not the pixel (i, j) is within the purple fringe region based on the obtained purple fringe region. When the pixel (i, j) is in the purple fringe region, the correction unit 23 proceeds to step S20 (YES side). On the other hand, when the pixel (i, j) is not in the purple fringe region, the correcting unit 23 proceeds to step S24 (NO side).
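A minimal one-dimensional sketch of the step S19 region test follows. Treating a single channel as the widest saturated component is a simplification for illustration; Δ = 10 follows the embodiment's example value.

```python
import numpy as np

def purple_fringe_mask(channel, delta=10, sat=255):
    # Saturated pixels of the (assumed widest) saturated color component,
    # widened by delta pixels on each side, form the purple fringe region.
    saturated = channel >= sat
    mask = saturated.copy()
    for k in np.flatnonzero(saturated):
        mask[max(k - delta, 0):k + delta + 1] = True
    return mask

r = np.zeros(60, dtype=int)
r[28:32] = 255                         # a small blown-out highlight
mask = purple_fringe_mask(r)           # delta = 10, as in the embodiment's example
print(bool(mask[20]), bool(mask[45]))  # True False
```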
  • Step S20 The correction unit 23 determines whether or not the difference between the correction value of the pixel (i, j) calculated in step S17, which is equal to or greater than the threshold ε2, and the correction value of an adjacent pixel is equal to or greater than the threshold ε3. That is, the correction unit 23 determines whether the correction value of the pixel (i, j) being equal to or greater than the threshold ε2 is caused by an isolated structure due to shot noise or the like.
  • The value of the threshold ε3 is preferably determined and set according to the accuracy required for the axial chromatic aberration correction processing, the processing capability of the CPU 1, and the like.
  • When the determination result is true, the correction unit 23 determines that the color structure of the pixel (i, j) is an isolated structure, and proceeds to step S24 (YES side). On the other hand, when the determination result is false, the correction unit 23 proceeds to step S21 (NO side).
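The step S20 isolated-structure test can be sketched as below. Comparing against all four neighbors, and the value ε3 = 1.5, are illustrative assumptions for the sketch.

```python
def is_isolated_structure(corr, i, j, eps3=1.5):
    # Isolated structure: the pixel's correction value differs from that of
    # every 4-neighbor by at least eps3 (checking all neighbors is a
    # simplification of the adjacent-pixel comparison in the text).
    neighbors = [corr[i - 1][j], corr[i + 1][j], corr[i][j - 1], corr[i][j + 1]]
    return all(abs(corr[i][j] - v) >= eps3 for v in neighbors)

spike = [[0.1, 0.2, 0.1],
         [0.2, 5.0, 0.1],   # the center value spikes far above its neighbors
         [0.1, 0.1, 0.2]]
print(is_isolated_structure(spike, 1, 1))   # True: likely shot noise
smooth = [[0.1, 0.2, 0.1],
          [0.2, 0.3, 0.1],
          [0.1, 0.1, 0.2]]
print(is_isolated_structure(smooth, 1, 1))  # False
```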
  • Step S21 The correction unit 23 determines whether or not the color structure of the pixel (i, j) is a pseudo-axial chromatic aberration region based on the texture amount and the correction value in each pixel of the target image.
  • The pseudo-axial chromatic aberration region denotes a color structure that appears to be affected by axial chromatic aberration but to which the axial chromatic aberration correction processing cannot be applied.
  • Whereas the texture amount in the axial chromatic aberration image region shown in FIG. 2(a) shows a distribution that varies greatly between color components, a feature of the pseudo-axial chromatic aberration region is that the texture amounts of the respective color components all take large values and show similar distributions.
  • Therefore, the correction unit 23 of the present embodiment determines whether or not the texture amounts of the respective color components of the pixel (i, j) are all equal to or greater than the threshold ε4 and the correction value of the pixel (i, j) is equal to or less than the threshold ε5.
  • The values of the threshold ε4 and the threshold ε5 are preferably determined and set according to the accuracy required for the axial chromatic aberration correction processing, the processing capability of the CPU 1, and the like.
  • When the determination result is true, the correction unit 23 determines that the color structure of the pixel (i, j) is a pseudo-axial chromatic aberration region, and proceeds to step S24 (YES side). On the other hand, when the determination result is false, the correction unit 23 proceeds to step S22 (NO side).
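The step S21 condition is a direct conjunction and can be sketched in a few lines; the threshold values ε4 = 3.0 and ε5 = 0.5 are illustrative assumptions.

```python
def is_pseudo_axial_region(textures, corr_value, eps4=3.0, eps5=0.5):
    # Pseudo-axial chromatic aberration: large, similar texture amounts in
    # every color component, yet only a small correction value.
    return all(t >= eps4 for t in textures) and corr_value <= eps5

print(is_pseudo_axial_region((4.1, 3.8, 4.4), 0.2))  # True
print(is_pseudo_axial_region((4.1, 0.3, 4.4), 0.2))  # False: G texture is small
print(is_pseudo_axial_region((4.1, 3.8, 4.4), 2.0))  # False: correction too large
```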
  • Step S22 The correction unit 23 corrects the axial chromatic aberration of the pixel (i, j) using the correction value calculated in step S17.
  • For example, when the color component with the highest sharpness at the pixel (i, j) is the G component, the correction unit 23 adjusts the sharpness of the R component of the pixel (i, j) based on the following equation (13).
  • R′(i, j) = R0(i, j) + (G0(i, j) − G′(i, j)) …(13)
  • Similarly, the correction unit 23 calculates a correction value G″(i, j) for the B component based on the distribution of the standard deviation DEVb of the color difference plane Cb between the B component and the G component, and adjusts the sharpness of the B component at the pixel (i, j) based on the following equation (14) to correct the axial chromatic aberration.
  • B′(i, j) = B0(i, j) + (G0(i, j) − G″(i, j)) …(14)
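A short sketch of the step S22 adjustment of equations (13) and (14): G_blur_r and G_blur_b stand for the G plane blurred to the blur indexes of the R and B components (G′ and G″ in the text). The form of equation (14) is inferred by analogy with equation (19).

```python
import numpy as np

def correct_sharpness(R0, B0, G0, G_blur_r, G_blur_b):
    # Add to R (resp. B) the detail that blurring removed from G.
    R_corr = R0 + (G0 - G_blur_r)   # equation (13): R' = R0 + (G0 - G')
    B_corr = B0 + (G0 - G_blur_b)   # equation (14): B' = B0 + (G0 - G'')
    return R_corr, B_corr

# When the blurred R profile matches the equally blurred G profile, the
# correction restores the sharp G-like profile in the R component.
G0 = np.array([0.0, 10.0, 0.0])
G_blur = np.array([2.0, 6.0, 2.0])   # G blurred to the R component's blur index
R0 = np.array([2.0, 6.0, 2.0])       # R blurred by axial chromatic aberration
R_corr, _ = correct_sharpness(R0, R0, G0, G_blur, G_blur)
print(R_corr)  # [ 0. 10.  0.]
```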
  • Step S23 The color difference correction unit 24 of the CPU 1 performs color difference correction on the pixel value of each color component of the pixel (i, j) that has been subjected to the axial chromatic aberration correction processing.
  • In step S22, the pixel values of the color components of a pixel on which the axial chromatic aberration correction processing has been performed can change greatly, particularly in the direction of the color difference component in the luminance-color-difference color space, when compared with the pixel values before correction, and this can cause discoloration. Therefore, in the present embodiment, in order to suppress such discoloration, the color difference correction unit 24 corrects the corrected color difference component at the pixel (i, j) so that its direction in the luminance-color-difference space is the same as that of the color difference component before correction.
  • First, the color difference correction unit 24 applies a known conversion process to the pixel values of the color components before and after the correction of the pixel (i, j), converting the RGB pixel values (R′, G, B′) into YCrCb luminance and color difference components (Y′, Cr′, Cb′).
  • The luminance component and color difference components before correction are denoted (Y0, Cr0, Cb0).
  • The color difference correction unit 24 corrects the direction of the color difference component of the pixel (i, j) to the direction before correction by the following equation (15).
  • The luminance component Y′ is not corrected.
  • The color difference correction unit 24 then applies the above-described known conversion process again, converting the luminance and color difference components (Y′, Cr″, Cb″) after the color difference correction into RGB pixel values (R1, G1, B1).
  • The color difference correction unit 24 sets the pixel values (R1, G1, B1) as the pixel values of the pixel (i, j).
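One plausible form of the step S23 direction correction is sketched below: keep the corrected chroma magnitude but restore the pre-correction direction in the Cr-Cb plane. The scaling formula is an assumption for illustration; the exact form of equation (15) is not reproduced in this text.

```python
import math

def restore_chroma_direction(cr_corr, cb_corr, cr0, cb0):
    # Keep the corrected chroma magnitude but force its direction in the
    # Cr-Cb plane back to the pre-correction direction (Cr0, Cb0).
    l_corr = math.hypot(cr_corr, cb_corr)
    l0 = math.hypot(cr0, cb0)
    if l0 == 0.0:
        return 0.0, 0.0          # no pre-correction chroma: stay neutral
    return l_corr * cr0 / l0, l_corr * cb0 / l0

cr2, cb2 = restore_chroma_direction(3.0, 4.0, 1.0, 0.0)
print(cr2, cb2)  # 5.0 0.0 -- magnitude 5 kept, direction of (1, 0) restored
```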
  • Step S24 The CPU 1 determines whether or not the processing has been completed for all the pixels of the target image. If the processing has not been completed for all the pixels, the CPU 1 proceeds to step S18 (NO side), and performs the processing from step S18 to step S23 for the next pixel. On the other hand, when the processing is completed for all the pixels, the CPU 1 records an image in which the axial chromatic aberration is corrected in the storage unit 2 or displays it on the output device 30. Then, the CPU 1 ends a series of processes.
  • As described above, in the present embodiment, axial chromatic aberration can be corrected with high accuracy by determining the color structure at each pixel based on the minimum blur index of each color difference plane, the texture amount of each color component, and the size of the correction value.
  • The present invention can also be applied to a digital camera as shown in FIG. 6 that carries the image processing program of the present invention.
  • In this case, the image sensor 102 and the DFE 103, a digital front-end circuit that performs signal processing such as A/D conversion of the image signal input from the image sensor 102 and color interpolation processing, preferably constitute the imaging unit.
  • The CPU 104 realizes each process of the image smoothing unit 20, the calculation unit 21, the adjustment unit 22, the correction unit 23, and the color difference correction unit 24 by software. Alternatively, these processes may be realized in hardware using an ASIC.
  • In the above embodiment, the image smoothing unit 20 uses N Gaussian filters to generate N smooth images from the target image, but the present invention is not limited to this.
  • For example, the image smoothing unit 20 may generate the smooth images using a PSF instead of a Gaussian filter.
  • Alternatively, the image smoothing unit 20 may first generate a set of smooth images using, for example, at least two Gaussian filters, and mix the set of smooth images at predetermined ratios to generate N smooth images with mutually different blur indexes. This makes it possible to simplify and speed up the image processing. Further, by adjusting the mixing ratio of the smooth images within a range where the maximum radius of the Gaussian filter does not exceed the blur amount, a smooth image with an arbitrary blur index can easily be generated. That is, even for an image region that requires a very large blur amount for correcting axial chromatic aberration, a smooth image within the desired blur amount range can easily be generated.
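The mixing variation above can be sketched in one dimension: blur once with a small and once with a large Gaussian, then mix the pair at predetermined ratios. The linear mix is an approximation of intermediate blur indexes; the helper blur function and the concrete sigmas are assumptions for the sketch.

```python
import numpy as np

def gaussian_blur_1d(x, sigma):
    # Minimal normalized Gaussian blur, used only for this sketch.
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    k /= k.sum()
    return np.convolve(x, k, mode='same')

def mixed_smooth_images(image, sigma_small, sigma_large, ratios):
    # One filtered pair, mixed at predetermined ratios to approximate
    # N smooth images of intermediate blur index.
    a = gaussian_blur_1d(image, sigma_small)
    b = gaussian_blur_1d(image, sigma_large)
    return [(1.0 - r) * a + r * b for r in ratios]

x = np.zeros(33)
x[16] = 1.0                          # an impulse to visualize the blur
smooths = mixed_smooth_images(x, 1.0, 4.0, ratios=[0.0, 0.25, 0.5, 0.75, 1.0])
print(len(smooths))                  # 5 smooth images from a single filtered pair
```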
  • In the above embodiment, the axial chromatic aberration of the target image is corrected based on the color difference plane Cr between the R component and the G component, the color difference plane Cb between the B component and the G component, and the color difference plane Crb between the R component and the B component,
  • but the present invention is not limited to this.
  • For example, the axial chromatic aberration of the target image may be corrected based on two of the three color difference planes. As a result, the correction processing can be sped up.
  • In the above embodiment, the target image has pixel values of the R, G, and B components at each pixel, but the present invention is not limited to this.
  • For example, each pixel of the target image may have two, or four or more, color components.
  • In the case where R, G, and B color filters are arranged according to a known Bayer array over the imaging elements on the light-receiving surface of the image sensor 102 of the digital camera shown in FIG. 6,
  • the present invention can also be applied to the RAW image captured by the image sensor 102.
  • In the image region around a pixel determined to be a color boundary, the correction unit 23 preferably detects regions in which the rate of change of the minimum blur index is smaller than the threshold ε1 and the correction value is equal to or greater than the threshold ε2, and excludes them from the axial chromatic aberration correction target. This is because it has been found that correcting the axial chromatic aberration of such regions around a color boundary according to the present invention degrades the image quality of the target image.
  • In the above embodiment, in step S21, the correction unit 23 determines whether or not the color structure of the pixel is a pseudo-axial chromatic aberration region, but the present invention is not limited to this, and the process of step S21 may be omitted.
  • In the above embodiment, the color difference correction unit 24 performs color difference correction on all the pixels that have undergone the axial chromatic aberration correction processing,
  • but the present invention is not limited to this. For example, when the corrected color difference component size L′ of such a pixel is smaller than the color difference component size L before correction, the color difference correction may be skipped.
  • Further, when the corrected color difference component size L′ is larger than the uncorrected color difference component size L (a predetermined size), the color difference correction unit 24 may reduce the output rate of the corrected color difference component, defined by FIG. 7 and the following equation (16),
  • based on the following equation (17) obtained by modifying equation (15).
  • Here, the function clip(V, U1, U2) clips the value of the parameter V to the lower limit U1 or the upper limit U2 when V is outside the range between the lower limit U1 and the upper limit U2.
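The clip function described above can be written directly:

```python
def clip(v, u1, u2):
    # Values of V outside [U1, U2] are clipped to the lower or upper limit.
    return max(u1, min(v, u2))

print(clip(300, 0, 255), clip(-5, 0, 255), clip(128, 0, 255))  # 255 0 128
```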
  • For example, in the case of an image with 255 gradations, the coefficient WV is set to a value of 5 to 10.
  • The value of the coefficient WV is preferably set as appropriate according to the required degree of suppression of discoloration.
  • In the above embodiment, the correction unit 23 does not perform the axial chromatic aberration correction processing when the color structure of the pixel is a color boundary or a pseudo-axial chromatic aberration region, but the present invention is not limited to this.
  • For example, the correction unit 23 may perform the processes of step S22 and step S23 on the pixels in a color boundary or a pseudo-axial chromatic aberration region.
  • In that case, the correction unit 23 applies the following equations (18) and (19) instead of equations (13) and (14) in step S22.
  • R′(i, j) = R0(i, j) + α × (G0(i, j) − G′(i, j)) …(18)
  • B′(i, j) = B0(i, j) + α × (G0(i, j) − G″(i, j)) …(19)
  • Here, the coefficient α is preferably set to a small value according to the blur index, the correction value, the texture amount of each color component, or the like.
  • For example, it is preferable to set the coefficient α to a value smaller than 1.
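Equations (18) and (19) scale the step S22 correction by α; a minimal sketch (with α = 0.5 as an illustrative value, and g_blur_r, g_blur_b standing for G′ and G″):

```python
def weakened_correction(r0, b0, g0, g_blur_r, g_blur_b, alpha=0.5):
    # The same sharpness correction as equations (13) and (14),
    # scaled down by the coefficient alpha.
    r_corr = r0 + alpha * (g0 - g_blur_r)   # equation (18)
    b_corr = b0 + alpha * (g0 - g_blur_b)   # equation (19)
    return r_corr, b_corr

print(weakened_correction(6.0, 6.0, 10.0, 6.0, 6.0, alpha=0.5))  # (8.0, 8.0)
```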

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

An image smoothing means generates a plurality of smooth images, which are the target image smoothed to different degrees. A calculation means calculates, for each pixel, the color difference between a specified color component of the target image and a color component of the smoothed image that differs from the specified color component, and calculates the variance of the color difference. An adjusting means determines the color component having the highest sharpness at each pixel of the target image based on the variance of the color difference, and calculates an adjustment amount for adjusting the sharpness between the color components at each pixel of the target image. A correction means determines the image structure at each pixel of the target image based on the color difference and the adjustment amount, and corrects the sharpness between the color components with the adjustment amount.

Description

Image processing apparatus, imaging apparatus, and image processing program
The present invention relates to an image processing device, an imaging device, and an image processing program.
Conventionally, an image obtained by imaging a subject through an optical system such as an imaging lens is affected by chromatic aberration of that optical system, in particular by axial chromatic aberration.
To solve this, for example, a technique has been developed that corrects axial chromatic aberration by smoothing the color plane of a reference color component and adjusting the mismatch of the MTF characteristics between the color components so that the color difference from the color planes of the other color components is minimized (see Patent Document 1, etc.).
JP 2007-28041 A
However, in the prior art, when the color plane of the reference color component is smoothed so that the color difference from the color planes of the other color components is minimized, the color difference may behave unpredictably as the degree of smoothing increases.
In addition, there is a problem that in color-structure portions of the image not caused by axial chromatic aberration, the saturation decreases and color is lost.
In view of the above problems of the prior art, an object of the present invention is to provide a technique capable of correcting axial chromatic aberration with high accuracy while taking the color structure of the image into consideration.
One aspect of an image processing apparatus exemplifying the present invention includes: an image smoothing means that smoothes a target image having pixel values of a plurality of color components to generate a plurality of smooth images with different degrees of smoothing; a calculation means that, for each pixel of the target image, calculates a color difference, which is the difference between a predetermined color component of the target image and a color component of the smooth image different from the predetermined color component, and the variance of the color difference; an adjustment means that determines the color component with the highest sharpness at each pixel of the target image based on the variance of the color difference, and calculates an adjustment amount for adjusting the sharpness between the color components at each pixel of the target image based on the determined color component with the highest sharpness; and a correction means that determines the image structure at each pixel of the target image based on the color difference and the adjustment amount, and corrects the sharpness between the color components at each pixel of the target image with the adjustment amount according to the determination result.
Further, the adjustment means may determine the color component with the highest sharpness at a pixel of the target image based on the minimum value of the variance of the color difference.
Further, the calculation means may, for each pixel of the target image, take the difference between the target image and one smooth image for each same color component to calculate a texture amount indicating the amount of image structure in the target image, and the correction means may determine the image structure at each pixel of the target image based on the color difference, the adjustment amount, and the texture amount.
Further, the adjustment means may extract a blurred image region where the texture amount is equal to or less than a predetermined value, and smooth the minimum value of the variance of the color difference at the pixels in the blurred image region.
Further, the correction means may determine whether the image structure at a pixel of the target image is a color boundary based on the rate of change of the color difference at the pixel and the size of the adjustment amount, and may exclude the pixel from the correction target when it is determined to be a color boundary.
Further, the correction means may compare the adjustment amount at a pixel of the target image with the adjustment amounts of adjacent pixels to determine whether the image structure at the pixel is an isolated structure, and may exclude the pixel from the correction target when it is determined to be an isolated structure.
Further, the correction means may determine whether the image structure at a pixel of the target image is a pseudo-axial chromatic aberration region resembling axial chromatic aberration based on the texture amount and the size of the correction amount at the pixel, and may exclude the pixel from the correction target when it is determined to be a pseudo-axial chromatic aberration region.
Further, the correction means may determine, based on the distribution of each color component, whether the image structure of a pixel determined to be a color boundary is color bleeding due to a density difference around a saturated region, and may include the pixel in the correction target when it is determined to be color bleeding.
Further, a color difference correction means may be provided that corrects the pixel values of a pixel of the target image whose sharpness has been adjusted so that, in the color difference space, the direction of the color difference is the same as that of the pixel values before the sharpness adjustment.
Further, the image smoothing means may smooth the target image with at least two different degrees of smoothing to generate a set of smooth images, and mix the set of smooth images at predetermined ratios to generate the plurality of smooth images.
Another aspect of an image processing apparatus exemplifying the present invention includes: a determination means that determines the image structure at a pixel of a target image having pixel values of a plurality of color components; and a correction means that determines the color component with the highest sharpness at the pixel of the target image, calculates an adjustment amount for adjusting the sharpness of a color component at the pixel based on the determined color component with the highest sharpness, and adjusts the sharpness of the color component at the pixel with the adjustment amount according to the determination result.
One aspect of an imaging apparatus exemplifying the present invention includes an imaging means that images a subject to generate a target image having pixel values of a plurality of color components, and the image processing apparatus of the present invention.
One aspect of an image processing program exemplifying the present invention causes a computer to execute: an input procedure for reading a target image having pixel values of a plurality of color components; an image smoothing procedure for smoothing the target image to generate a plurality of smooth images with different degrees of smoothing; a calculation procedure for calculating, for each pixel of the target image, a color difference, which is the difference between a predetermined color component of the target image and a color component of the smooth image different from the predetermined color component, and the variance of the color difference; an adjustment procedure for determining the color component with the highest sharpness at each pixel of the target image based on the variance of the color difference, and calculating an adjustment amount for adjusting the sharpness between the color components of each pixel based on the determined color component with the highest sharpness; and a correction procedure for determining the image structure at each pixel of the target image based on the color difference and the adjustment amount, and correcting the sharpness between the color components at each pixel with the adjustment amount according to the determination result.
Another aspect of an image processing program exemplifying the present invention causes a computer to execute: a determination procedure for determining the image structure at a pixel of a target image having pixel values of a plurality of color components; and a correction procedure for determining the color component with the highest sharpness at the pixel of the target image, calculating an adjustment amount for adjusting the sharpness of a color component at the pixel based on the determined color component with the highest sharpness, and adjusting the sharpness of the color component at the pixel with the adjustment amount according to the determination result.
According to the present invention, axial chromatic aberration can be corrected with high accuracy while taking the color structure of the image into consideration.
FIG. 1 is a block diagram showing the configuration of a computer that operates as an image processing apparatus according to an embodiment of the present invention. FIG. 2 is a diagram explaining color structures. FIG. 3 is a flowchart showing image processing operations by the computer according to the embodiment. FIG. 4 is a diagram showing the relationship between a pixel (i, j) and a reference region. FIG. 5 is a diagram showing the distribution of the standard deviation DEVr[k′] at a pixel (i, j). FIG. 6 is a diagram showing an example of the configuration of a digital camera according to the present invention. FIG. 7 is a diagram showing the correction output rate according to the size of the corrected color difference component.
FIG. 1 is a block diagram showing the configuration of a computer 10 that operates as an image processing apparatus according to an embodiment of the present invention.
The computer 10 shown in FIG. 1(a) includes a CPU 1, a storage unit 2, an input/output interface (input/output I/F) 3, and a bus 4. The CPU 1, the storage unit 2, and the input/output I/F 3 are connected via the bus 4 so that information can be transmitted among them. The computer 10 is also connected, via the input/output I/F 3, to an output device 30 that displays the progress and results of image processing, and to an input device 40 that receives input from the user. A general liquid crystal monitor, printer, or the like can be used as the output device 30, and a keyboard, mouse, or the like can be selected as appropriate for the input device 40.
Note that the target image processed by the computer 10 is assumed to have pixel values for each of the red (R), green (G), and blue (B) color components at each pixel. That is, the target image of the present embodiment is assumed to be an image captured by a three-sensor color digital camera, or an image captured by a single-sensor color digital camera and subjected to color interpolation processing. The target image is also assumed to be affected by axial chromatic aberration of the imaging lens at the time of capture, so that the sharpness differs between color components.
The CPU 1 is a processor that comprehensively controls each unit of the computer 10. For example, the CPU 1 reads an image processing program stored in the storage unit 2 based on an instruction from the user received by the input device 40. By executing the image processing program, the CPU 1 operates as an image smoothing unit 20, a calculation unit 21, an adjustment unit 22, a correction unit 23, and a color difference correction unit 24 (FIG. 1(b)), and applies axial chromatic aberration correction processing to the target image. The CPU 1 displays the result of the image processing on the output device 30.
The image smoothing unit 20 uses, for example, N known Gaussian filters with different degrees of smoothing (blur indexes), and smoothes the target image according to the blur index of each Gaussian filter to generate N smooth images (N is a natural number of 2 or more). The blur index in the present embodiment refers, for example, to the size of the blur radius.
As described later, the calculation unit 21 uses the target image and the N smooth images to calculate, at each pixel position, a color difference plane (color difference) according to the blur index, from the absolute value of the difference between a color component of the target image and a different color component of the smooth image, together with the value of its standard deviation (variance). The calculation unit 21 of the present embodiment also takes the difference between the target image and the smooth image with the smallest blur index (that is, the smallest degree of smoothing) for each identical color component, and calculates the texture amount of each color component, which indicates the amount of image structure in the target image.
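A one-dimensional sketch of the calculation unit 21 for one blur index follows: the color difference plane is the absolute difference between the target's R component and a differently colored (G) smooth image, and its standard deviation is taken over a small window around each pixel. The window size win = 5 is an assumption for the sketch.

```python
import numpy as np

def colordiff_std(target_r, smooth_g, win=5):
    # Color difference plane: |R(target) - G(smooth image)| at each pixel.
    cr = np.abs(target_r - smooth_g)
    # Local standard deviation of the color difference plane over a small
    # window (win = 5 is an assumed window size).
    half = win // 2
    dev = np.empty(len(cr))
    for j in range(len(cr)):
        lo, hi = max(j - half, 0), min(j + half + 1, len(cr))
        dev[j] = cr[lo:hi].std()
    return cr, dev

r = np.array([0.0, 0.0, 5.0, 0.0, 0.0])   # a thin bright detail in R
g = np.zeros(5)                           # a heavily smoothed G plane
cr, dev = colordiff_std(r, g)
print(cr[2], dev[2])  # 5.0 2.0
```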
The adjustment unit 22 applies a known interpolation method to the distribution of the standard deviation with respect to the blur index at each pixel position of the target image, and obtains the blur index that gives the minimum standard deviation at that pixel position. The adjustment unit 22 determines the color component with the highest sharpness based on the blur index that gives the minimum standard deviation of each color difference plane obtained at each pixel (hereinafter referred to as the minimum blur index). Based on the determined color component with the highest sharpness, the adjustment unit 22 calculates a correction value (adjustment amount) for adjusting the sharpness between the color components at each pixel position of the target image.
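The interpolation performed by the adjustment unit 22 can be sketched with a simple parabolic fit through the discrete minimum of the standard deviation and its two neighbors; the parabolic form and uniform index spacing are assumptions for illustration, standing in for the "known interpolation method".

```python
def min_blur_index(blur_idx, devs):
    # Discrete minimum of the standard deviation, then a parabola through
    # the minimum and its two neighbors estimates the blur index that
    # gives the true (sub-step) minimum.
    k = min(range(len(devs)), key=devs.__getitem__)
    if k == 0 or k == len(devs) - 1:
        return float(blur_idx[k])
    d0, d1, d2 = devs[k - 1], devs[k], devs[k + 1]
    denom = d0 - 2.0 * d1 + d2
    if denom == 0.0:
        return float(blur_idx[k])
    offset = 0.5 * (d0 - d2) / denom        # sub-step offset in [-0.5, 0.5]
    step = blur_idx[1] - blur_idx[0]        # assumes uniformly spaced indexes
    return blur_idx[k] + offset * step

print(min_blur_index([0, 1, 2, 3], [5.0, 2.0, 1.0, 2.0]))  # 2.0 (symmetric minimum)
```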
Based on the color difference and the texture amount of each color component calculated by the calculation unit 21, and the correction value calculated by the adjustment unit 22, the correction unit 23 determines whether the image structure at each pixel position of the target image is one for which axial chromatic aberration can be corrected. In the case of a correctable image structure, the correction unit 23 corrects the sharpness between the color components at each pixel using the correction value calculated by the adjustment unit 22.
Here, FIG. 2 shows representative color structures considered in the present embodiment. FIG. 2(a) shows, for a target image of a subject in which a single black line is placed on a white background, the pixel value distributions of the R component (dotted line), G component (solid line), and B component (broken line) in the scanning direction orthogonal to the black line. FIG. 2(b) shows, as an example, for a target image of a subject having a color boundary between red and white, the pixel value distributions of the R component (dotted line), G component (solid line), and B component (broken line) in the scanning direction orthogonal to the color boundary. FIG. 2(c) shows, for a target image of a subject containing a bright light source such as a streetlight and affected by so-called purple fringing, the pixel value distributions of the R component (dotted line), G component (solid line), and B component (broken line) in a scanning direction passing through the center of the light source. Here, purple fringing refers to purple color bleeding that occurs around high-luminance regions (saturated regions) where the pixel values of the color components are saturated because of a large amount of light, such as around a light source like a streetlight or a reflection on a water surface. Note that the imaging lens of the camera that captured each target image in FIG. 2 is assumed to have axial chromatic aberration and to be focused for the G component.
 As shown in FIG. 2(a), the G component reproduces the color structure well, whereas axial chromatic aberration blurs the color structure of the R and B components. As a result, the black line bleeds into green or magenta. However, since the distributions of the color components should inherently coincide, the sharpness adjustment of the R and B components (that is, the correction of axial chromatic aberration) by the calculation unit 21, the adjustment unit 22, and the correction unit 23, described later, can bring the per-component distributions of FIG. 2(a) back into agreement. Put another way, the axial chromatic aberration correction of this embodiment presupposes that the original distributions of the color components of the target pixel coincide with one another. Note that, near such a color structure, the texture amounts calculated by the calculation unit 21 differ greatly between color components, whereas the minimum blur indexes of the color differences calculated by the adjustment unit 22 show mutually similar, gentle, and uniform distributions.
 Next, at the color boundary shown in FIG. 2(b), the color structures of the color components differ greatly. That is, near the color boundary, the distribution of the texture amount differs greatly between color components, and the distribution of the minimum blur index of each color difference also changes abruptly. Consequently, at a color boundary such as that of FIG. 2(b), the premise described above does not hold, the correction value calculated by the adjustment unit 22 becomes large, and axial chromatic aberration cannot be corrected properly. The correction unit 23 of this embodiment therefore determines, from the distribution of the minimum blur indexes of the color differences and the magnitude of the correction value, whether the color structure is a color boundary as in FIG. 2(b). When the correction unit 23 determines that the color structure is a color boundary, this embodiment does not apply the axial chromatic aberration correction to the pixels of that boundary. This suppresses the discoloration that would result from applying the correction to a color boundary.
 On the other hand, in the purple fringe of FIG. 2(c), the saturated region differs for each color component: the G component falls off first with distance from the light source, and the R component extends over the widest range. This distribution appears as a purple color bleed. The distribution of the color components here also differs from the color structure of FIG. 2(a). However, as described above, the way the black line of FIG. 2(a) bleeds into green or magenta under axial chromatic aberration resembles purple fringing. In this embodiment, therefore, the purple fringe image region is given the same correction as axial chromatic aberration. To that end, as shown in FIG. 2(c), the correction unit 23 obtains the distribution of pixel values of each color component in a peripheral region centered on the pixel of the target image, or over the entire target image, and from that distribution obtains the saturated region of each color component's pixel values. The correction unit 23 extracts, as the purple fringe region, the saturated region of the most widely distributed color component (the R component in FIG. 2(c)) together with the band widened by a width β from the edge of that saturated region. The correction unit 23 then determines whether the color structure is a purple fringe according to whether the pixel position falls within the extracted purple fringe region, and applies the axial chromatic aberration correction, based on the correction value calculated by the adjustment unit 22, to each color component of pixels determined to be purple fringe.
 Note that near a purple fringe, as at the color boundary of FIG. 2(b), the distribution of the texture amount differs greatly between color components, and the distribution of the minimum blur index of each color difference also changes abruptly.
 Besides the color structures of FIG. 2, this embodiment also considers isolated structures in which the pixel value of a single pixel yields an abnormal correction value owing to shot noise or the like, and regions that resemble axial chromatic aberration in color structure yet would be over-corrected, degrading the image quality of the target image, if the axial chromatic aberration correction were applied (hereinafter, pseudo axial chromatic aberration regions). In this embodiment, the axial chromatic aberration correction is not applied to these image structures. The processes for identifying these image structures are described later.
 The color difference correction unit 24 corrects the pixel values of the color components of a sharpness-adjusted pixel so that, in the color difference space, the direction of the color difference remains the same as before the adjustment, thereby suppressing the discoloration caused by the axial chromatic aberration correction.
 The storage unit 2 stores, along with the target image, an image processing program for correcting axial chromatic aberration in the target image, and the like. The images, programs, and other data stored in the storage unit 2 can be referenced from the CPU 1 via the bus 4 as needed. A general storage device such as a hard disk device or a magneto-optical disk device can be selected and used as the storage unit 2. Although the storage unit 2 is described as built into the computer 10, it may instead be an external storage device, in which case the storage unit 2 is connected to the computer 10 via the input/output I/F 3.
 Next, the image processing operation by which the computer 10 of this embodiment corrects axial chromatic aberration is described with reference to the flowchart shown in FIG. 3.
 Using the input device 40, the user instructs the CPU 1 to start the image processing program, either by entering the program's command or by double-clicking the program's icon displayed on the output device 30. The CPU 1 receives the instruction via the input/output I/F 3, reads the image processing program stored in the storage unit 2, and executes it, starting from step S10 of FIG. 3.
 Step S10: The CPU 1 reads, via the input device 40, the target image designated by the user for axial chromatic aberration correction.
 Step S11: The image smoothing unit 20 of the CPU 1 smooths the read target image according to the blur index of each Gaussian filter and generates N smoothed images. In this embodiment, the target image itself is also counted as one of the smoothed images, so the total number of smoothed images is (N+1).
 Step S12: The calculation unit 21 of the CPU 1 calculates the color difference plane Cr between the R and G components, the color difference plane Cb between the B and G components, and the color difference plane Crb between the R and B components, using the target image and each smoothed image.
 For example, the calculation unit 21 obtains, at each pixel position, the absolute value of the difference between the pixel value G0(i,j) of the G component, a predetermined color component of the target image, and the pixel value Rk(i,j) of the R component, a color component different from the predetermined one, of a smoothed image, and calculates the color difference plane Cr[-k](i,j) of the following equation (1).
Cr[-k](i,j) = |Rk(i,j) - G0(i,j)|  …(1)
Here, (i,j) denotes the coordinates of the pixel position of each pixel of the target image, and k is the blur index of a smoothed image, an integer with 0 ≤ k ≤ N. In equation (1), the negative blur index k indicates a color difference plane Cr obtained by successively blurring the R plane on the negative side. The blur index k = 0 denotes the target image itself, that is, the unsmoothed image.
 Similarly, the calculation unit 21 obtains, at each pixel (i,j), the absolute value of the difference between the pixel value R0(i,j) of the R component of the target image and the pixel value Gk(i,j) of the G component of a smoothed image, and calculates the color difference plane Cr[k](i,j) of the following equation (2).
Cr[k](i,j) = |R0(i,j) - Gk(i,j)|  …(2)
In equation (2), the positive blur index k indicates a color difference plane Cr obtained by successively blurring the G plane on the positive side.
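As an illustrative sketch of equations (1) and (2) (not code from the patent; the list-of-planes representation and function name are our own convention), the signed family of Cr planes can be computed as follows, where index 0 of each list is the unsmoothed target image:

```python
import numpy as np

def color_difference_planes_cr(r_planes, g_planes):
    """Compute Cr[-k] = |Rk - G0| and Cr[k] = |R0 - Gk| per equations (1)-(2).

    r_planes, g_planes: lists of 2-D arrays indexed by blur index k,
    where index 0 is the unsmoothed target image. Returns a dict mapping
    the signed blur index to the corresponding color-difference plane.
    """
    cr = {}
    for k in range(len(r_planes)):
        cr[-k] = np.abs(r_planes[k] - g_planes[0])  # equation (1): blurred R vs sharp G
        cr[k] = np.abs(r_planes[0] - g_planes[k])   # equation (2): sharp R vs blurred G
    return cr
```

The Cb and Crb families of equations (3) to (6) follow the same pattern with the B plane substituted appropriately.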
 Similarly, the calculation unit 21 calculates, at each pixel (i,j), the color difference plane Cb between the B and G components and the color difference plane Crb between the R and B components, based on equations (3) to (6).
Cb[-k](i,j) = |Bk(i,j) - G0(i,j)|  …(3)
Cb[k](i,j) = |B0(i,j) - Gk(i,j)|  …(4)
Crb[-k](i,j) = |Rk(i,j) - B0(i,j)|  …(5)
Crb[k](i,j) = |R0(i,j) - Bk(i,j)|  …(6)
 Step S13: Using the color difference planes Cr, Cb, and Crb calculated in step S12, the calculation unit 21 calculates, for each blur index, the standard deviations DEVr, DEVb, and DEVrb of each color difference plane at the pixel (i,j). That is, as shown in FIG. 4, the calculation unit 21 calculates the standard deviation using the values of the color difference planes Cr, Cb, and Crb of the pixels in a 15 × 15 pixel reference area AR1 centered on the hatched target pixel (i,j). In this embodiment the reference area AR1 is 15 × 15 pixels, but its size is preferably chosen according to the processing capability of the CPU 1 and the required accuracy of the axial chromatic aberration correction, for example with one side in the range of 10 to 30 pixels.
 The calculation unit 21 calculates the standard deviations DEVr, DEVb, and DEVrb of each color difference plane using the following equations (7) to (9):
DEVr[k'](i,j) = sqrt( (1/r^2)・Σ_(l,m) { Cr[k'](l,m) - (1/r^2)・Σ_(x,y) Cr[k'](x,y) }^2 )  …(7)
with the sums taken over the pixels of the reference area AR1; DEVb[k'](i,j) and DEVrb[k'](i,j) are defined in the same way from Cb[k'] and Crb[k'].  …(8), (9)
 Here, k' is an integer blur index from -N to N, r is the number of pixels on one side of the reference area AR1 (r = 15 pixels in this embodiment), and (l,m) and (x,y) each denote pixel positions within the reference area AR1.
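A minimal sketch of the local standard deviation of equations (7) to (9) follows. The border handling (clipping the window at the image edge) is an assumption of ours; the patent does not specify it:

```python
import numpy as np

def local_std(plane, i, j, r=15):
    """Standard deviation of one color-difference plane over the r x r
    reference area AR1 centered on pixel (i, j), per equations (7)-(9).
    The window is clipped at image borders (an assumption; unspecified
    in the source text).
    """
    h = r // 2
    region = plane[max(0, i - h):i + h + 1, max(0, j - h):j + h + 1]
    # Population standard deviation: sqrt of the mean squared deviation.
    return float(np.sqrt(np.mean((region - region.mean()) ** 2)))
```

In practice this would be evaluated for every blur index k' to obtain the DEVr[k'], DEVb[k'], and DEVrb[k'] profiles used in step S15.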
 Step S14: To calculate the texture amount at the pixel (i,j), the calculation unit 21 takes, for each color component, the difference between the target image and the smoothed image with blur index k = 1 according to the following equation (10).
TR(i,j) = R0(i,j) - R1(i,j)
TG(i,j) = G0(i,j) - G1(i,j)  …(10)
TB(i,j) = B0(i,j) - B1(i,j)
 The calculation unit 21 then calculates, as the texture amount of the pixel (i,j), the value obtained by adding, for each color component, the differences of equation (10) over a range such as 5 × 5 pixels centered on the pixel (i,j), or a value normalized per pixel for each color component. Alternatively, the calculation unit 21 may use each difference of equation (10) directly as the texture amount of the corresponding color component.
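The texture amount of step S14 can be sketched as below. Accumulating absolute differences is an assumption on our part (the text says only that the per-pixel differences of equation (10) are added over the window); a signed sum would follow the same structure:

```python
import numpy as np

def texture_amount(plane0, plane1, i, j, w=5):
    """Texture amount of one color component at pixel (i, j): the
    difference between the target-image plane (plane0) and the k = 1
    smoothed plane (plane1) per equation (10), accumulated over a
    w x w neighborhood. Using absolute differences is an assumption.
    """
    diff = plane0 - plane1                      # equation (10)
    h = w // 2
    window = diff[max(0, i - h):i + h + 1, max(0, j - h):j + h + 1]
    return float(np.abs(window).sum())
```

A flat region gives a texture amount near zero, which is how step S16 later separates the blurred background from in-focus detail.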
 Step S15: Using the standard deviations DEVr, DEVb, and DEVrb of the color difference planes calculated in step S13, the calculation unit 21 obtains, for each color difference plane, the minimum blur index k' that gives the smallest standard deviation at each pixel (i,j). For example, as shown in FIG. 5, the calculation unit 21 obtains the minimum blur index k' = αr at the pixel (i,j) from the distribution of the standard deviation DEVr[k'] of the color difference plane Cr over the blur indexes.
 The calculation unit 21 performs the same processing for the color difference planes Cb and Crb, obtaining the minimum blur indexes k' = αb and k' = αrb that give the minimum standard deviations DEVb and DEVrb, for every pixel (i,j) of the target image.
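Step S15 reduces to an argmin over the signed blur indexes; a sketch (the dict representation is our own):

```python
def minimum_blur_index(dev_by_k):
    """Step S15: return the signed blur index k' whose standard
    deviation is smallest. `dev_by_k` maps each integer blur index in
    [-N, N] to the standard deviation DEV[k'](i, j) at the pixel.
    """
    return min(dev_by_k, key=dev_by_k.get)
```

The same call is made per pixel for each of the three profiles DEVr, DEVb, and DEVrb to obtain αr, αb, and αrb.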
 Step S16: Based on the texture amounts obtained in step S14, the adjustment unit 22 of the CPU 1 extracts the blurred image regions of the target image other than the in-focus region (hereinafter, the background region). To do so, the adjustment unit 22 determines whether the texture amount of the pixel (i,j) is at most a predetermined value d, and extracts the pixel (i,j) as background region if it is. The adjustment unit 22 performs this extraction for every pixel of the target image.
 Step S17: Based on the minimum blur indexes αr, αb, and αrb obtained in steps S15 and S16, the adjustment unit 22 determines the color component with the highest sharpness at the pixel (i,j) of the target image and, based on that component, calculates a correction value for adjusting the sharpness between the color components.
 For example, when the minimum blur index αr is positive, the adjustment unit 22 determines that the G component has the higher sharpness at the pixel (i,j); when αr is negative, it determines that the R component does. The adjustment unit 22 likewise determines the sharper color component for αb and αrb from their respective signs.
 Based on these results, the adjustment unit 22 determines whether a single sharpest color component can be decided for each pixel. That is, when two of the three color difference planes yield the same color component, the adjustment unit 22 decides that component as the one with the highest sharpness at the pixel. On the other hand, when the blur indexes αr, αb, and αrb of the color difference planes yield the R, G, and B components respectively, the adjustment unit 22 cannot decide on a single sharpest component at the pixel (i,j). In such a case, it is preferable that the adjustment unit 22 judge the pixel indeterminate and not apply the axial chromatic aberration correction to it. Alternatively, the sharpest color component may be decided by comparing the sharpness of the color components determined for each color difference plane.
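The two-of-three vote can be sketched as follows. The sign conventions for αb and αrb are our reading by analogy with equations (1) to (6); only the αr convention is stated explicitly in the text:

```python
def sharpest_component(alpha_r, alpha_b, alpha_rb):
    """Step S17 vote: each signed minimum blur index names the sharper
    component of its pair; a component chosen by two of the three
    planes wins, otherwise the pixel is left undetermined (None).
    Sign conventions for alpha_b and alpha_rb are assumptions inferred
    from the pattern of equations (1)-(6).
    """
    votes = [
        'G' if alpha_r > 0 else 'R',    # Cr plane: R vs G
        'G' if alpha_b > 0 else 'B',    # Cb plane: B vs G
        'B' if alpha_rb > 0 else 'R',   # Crb plane: R vs B
    ]
    for c in ('R', 'G', 'B'):
        if votes.count(c) >= 2:
            return c
    return None  # indeterminate: skip correction for this pixel
```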
 Next, the adjustment unit 22 calculates, based on the decided sharpest color component, a correction value for adjusting the sharpness between the color components. For example, the adjustment unit 22 obtains, from the distribution of the standard deviation DEVr of the color difference plane Cr shown in FIG. 5, the blur index s that gives the true minimum standard deviation at the pixel (i,j). That is, the minimum blur index αr obtained by the calculation unit 21 in step S15 is not necessarily the blur index that truly minimizes the standard deviation DEVr, as the dotted line in FIG. 5 indicates. The adjustment unit 22 therefore applies interpolation to the three points consisting of the minimum blur index αr and the adjacent indexes αr-1 and αr+1 to obtain a more accurate minimum blur index (interpolation point) s. An example for the case where the sharpest color component at the pixel (i,j) is the G component is shown below.
 Here, in the distribution of the standard deviation DEVr[k'](i,j), when DEVr[αr-1](i,j) > DEVr[αr+1](i,j), the minimum blur index s is expressed by the following equation (11).
s = ((αr+1) + αr)/2 + (DEVr[αr+1](i,j) - DEVr[αr](i,j))/2/a  …(11)
Here, the coefficient a is the slope, (DEVr[αr-1](i,j) - DEVr[αr](i,j))/((αr-1) - αr).
 On the other hand, when DEVr[αr-1](i,j) < DEVr[αr+1](i,j), the minimum blur index s is expressed by the following equation (12).
s = ((αr-1) + αr)/2 + (DEVr[αr-1](i,j) - DEVr[αr](i,j))/2/a  …(12)
In this case the slope a is (DEVr[αr+1](i,j) - DEVr[αr](i,j))/((αr+1) - αr).
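Equations (11) and (12) can be transcribed directly; a sketch (dict-based DEV lookup is our own representation):

```python
def interpolated_minimum(dev, alpha):
    """Refine the integer minimum blur index `alpha` to the sub-index
    interpolation point s from the three points alpha-1, alpha,
    alpha+1, per equations (11) and (12). `dev` maps each blur index
    to the standard deviation DEV at the pixel of interest.
    """
    if dev[alpha - 1] > dev[alpha + 1]:
        # Equation (11); slope a from the (alpha-1, alpha) pair.
        a = (dev[alpha - 1] - dev[alpha]) / ((alpha - 1) - alpha)
        return ((alpha + 1) + alpha) / 2 + (dev[alpha + 1] - dev[alpha]) / 2 / a
    else:
        # Equation (12); slope a from the (alpha+1, alpha) pair.
        a = (dev[alpha + 1] - dev[alpha]) / ((alpha + 1) - alpha)
        return ((alpha - 1) + alpha) / 2 + (dev[alpha - 1] - dev[alpha]) / 2 / a
```

A symmetric V-shaped profile yields s = alpha exactly, while asymmetric neighbors pull s toward the lower side, as expected.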
 The adjustment unit 22 then smooths the distribution of the interpolation points s, the minimum blur indexes of the color difference planes, within the background region extracted in step S16. Because the background region is a blurred image region, the minimum blur indexes of the color difference planes should be distributed gently and uniformly there. If, however, the background region contains a pixel whose minimum blur index changes sharply, the axial chromatic aberration correction cannot be applied properly to that pixel, and the pixel degrades the image quality of the target image as noise. To avoid this, the adjustment unit 22 of this embodiment averages the interpolation point s of a background pixel (i,j) with the interpolation points s of neighboring pixels, for example over a 5 × 5 pixel range centered on the pixel (i,j), and overwrites the result as the new interpolation point s of the pixel (i,j). The same processing is applied to the interpolation points s of the other color difference planes. This makes the minimum blur indexes of the color difference planes distribute gently and uniformly in the background region.
 The adjustment unit 22 then calculates the correction value G'(i,j) by known weighted addition using the interpolation point s together with Gαr(i,j) and G(αr+1)(i,j), the G planes at the blur indexes αr and αr+1.
 Step S18: Based on the minimum blur indexes αr, αb, and αrb and the correction value, the correction unit 23 of the CPU 1 determines whether the color structure of the pixel (i,j) is a color boundary. First, the correction unit 23 obtains the change rate of each of the minimum blur indexes αr, αb, and αrb at the pixel (i,j). For example, the correction unit 23 applies the minimum blur indexes αr, αb, and αrb of the pixels at both ends of a span of about 5 pixels centered on the pixel (i,j) to the known formula for the derivative to obtain a gradient, and takes that gradient as the change rate of the minimum blur index of each color difference at the pixel (i,j). The correction unit 23 then determines whether these change rates are at least a threshold ε1 and the correction value is at least a threshold ε2. In this embodiment, for example, ε1 = 4 and ε2 = 40 (for a target image with 255 gradations). The values of ε1 and ε2 are preferably determined according to the accuracy required of the axial chromatic aberration correction, the processing capability of the CPU 1, and the like.
 When the determination is true, the correction unit 23 judges the color structure of the pixel (i,j) to be a color boundary and proceeds to step S19 (YES side); when it is false, the correction unit 23 proceeds to step S20 (NO side).
 Step S19: The correction unit 23 determines whether the color structure of the pixel (i,j) judged a color boundary in step S18 is a purple fringe. To do so, the correction unit 23 uses the pixel values of the color components of the pixel (i,j) and its surrounding pixels, or of the entire target image, to find, from the per-component pixel value distributions as in FIG. 2(c), the saturated region in which the pixel value is saturated (pixel value 255 for a 255-gradation image). Among the saturated regions of the color components, the correction unit 23 takes the widest one, for example the saturated region of the R component, together with the band widened by a width β from the edge of that region, as the purple fringe region. In this embodiment the width β is, for example, about 10 pixels, though its size is preferably determined according to the processing capability of the CPU 1, the accuracy of the axial chromatic aberration correction, and the rate at which each color component falls off from saturation.
 Based on the obtained purple fringe region, the correction unit 23 determines whether the pixel (i,j) lies within that region. If the pixel (i,j) is in the purple fringe region, the correction unit 23 proceeds to step S20 (YES side); otherwise the correction unit 23 proceeds to step S24 (NO side).
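A simplified one-dimensional sketch of the purple fringe region of step S19 follows. Selecting the "most widely distributed" component by its count of saturated pixels, and widening by β on both sides of the saturated run, are our reading of the text; real code would operate on 2-D regions:

```python
import numpy as np

def purple_fringe_mask(r, g, b, beta=10, sat=255):
    """1-D sketch of step S19: take the saturated pixels of the most
    widely saturated component (here, the component with the most
    saturated samples, an assumption) and widen the region by beta
    pixels on each side to form the purple fringe region.
    """
    planes = {'R': np.asarray(r), 'G': np.asarray(g), 'B': np.asarray(b)}
    widest = max(planes, key=lambda c: int((planes[c] >= sat).sum()))
    mask = planes[widest] >= sat
    out = mask.copy()
    for i in np.flatnonzero(mask):
        out[max(0, i - beta):i + beta + 1] = True  # widen by beta
    return out
```

A pixel is then judged purple fringe simply by testing membership in the returned mask.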
 Step S20: The correction unit 23 determines whether the correction value of the pixel (i,j), calculated in step S17 and at least the threshold ε2, differs from the correction values of the adjacent pixels by at least a threshold ε3. In other words, the correction unit 23 determines whether the correction value of the pixel (i,j) is at least ε2 because of an isolated structure caused by shot noise or the like. The value of ε3 is preferably determined and set according to the accuracy required of the axial chromatic aberration correction, the processing capability of the CPU 1, and the like.
 When the determination is true, the correction unit 23 judges the color structure of the pixel (i,j) to be an isolated structure and proceeds to step S24 (YES side); when it is false, the correction unit 23 proceeds to step S21 (NO side).
 Step S21: Based on the texture amount and the correction value at each pixel of the target image, the correction unit 23 determines whether the color structure of the pixel (i,j) is a pseudo axial chromatic aberration region.
 Here, as described above, a pseudo axial chromatic aberration region shows a color structure that looks affected by axial chromatic aberration but to which the axial chromatic aberration correction cannot be applied. Whereas the texture amounts in the axial chromatic aberration image region of FIG. 2(a) differ greatly between color components, a pseudo axial chromatic aberration region is characterized by texture amounts that are large and similar for all color components. The correction unit 23 of this embodiment therefore determines whether the texture amounts of all the color components of the pixel (i,j) are at least a threshold ε4 and the correction value of the pixel (i,j) is at most a threshold ε5. The values of ε4 and ε5 are preferably determined and set according to the accuracy required of the axial chromatic aberration correction, the processing capability of the CPU 1, and the like.
 そして、補正部23は、その判定結果が真の場合、画素(i,j)の色構造が擬似軸上色収差領域であると判定し、ステップS24(YES側)へ移行する。一方、補正部23は、判定結果が偽の場合、ステップS22(NO側)へ移行する。 Then, when the determination result is true, the correction unit 23 determines that the color structure of the pixel (i, j) is a pseudo-axial chromatic aberration region, and proceeds to step S24 (YES side). On the other hand, when the determination result is false, the correction unit 23 proceeds to step S22 (NO side).
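The two-part test of step S21 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the texture amounts, the correction value, and the thresholds eps4/eps5 are placeholder inputs.

```python
def is_pseudo_axial_region(textures, correction_value, eps4, eps5):
    """Step S21 sketch: a pixel is treated as a pseudo-axial chromatic
    aberration region when the texture amount is large for EVERY color
    component (all >= eps4) AND the correction value is small (<= eps5)."""
    return all(t >= eps4 for t in textures) and correction_value <= eps5

# High texture in all of R, G, B but almost no correction needed:
# looks like axial chromatic aberration, yet is not correctable.
print(is_pseudo_axial_region([0.8, 0.9, 0.85], 0.02, eps4=0.5, eps5=0.05))  # True
# Texture differs strongly between components: genuine aberration candidate.
print(is_pseudo_axial_region([0.9, 0.1, 0.2], 0.02, eps4=0.5, eps5=0.05))  # False
```

Pixels for which this returns true are routed to step S24 and skipped by the correction, matching the YES branch described above.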
 ステップS22:補正部23は、ステップS17において算出された補正値を用いて画素(i,j)の軸上色収差を補正する。 Step S22: The correction unit 23 corrects the axial chromatic aberration of the pixel (i, j) using the correction value calculated in step S17.
 補正部23は、例えば、画素(i,j)で最も鮮鋭度が高いのがG面とした場合、次式(13)に基づいて画素(i,j)のR成分の鮮鋭度を調整し軸上色収差を補正する。
R’(i,j)=R0(i,j)+(G0(i,j)-G’(i,j)) …(13)
 補正部23は、同様にB成分についても、B成分とG成分との色差面Cbの標準偏差DEVbの分布に基づいて、補正値G”(i,j)を算出し、次式(14)に基づいて、画素(i,j)におけるB成分の鮮鋭度を調整し軸上色収差を補正する。
B’(i,j)=B0(i,j)+(G0(i,j)-G”(i,j)) …(14)
 ステップS23:CPU1の色差補正部24は、軸上色収差の補正処理が施された画素(i,j)の各色成分の画素値に対して色差補正を行う。
For example, when the G plane has the highest sharpness at the pixel (i, j), the correction unit 23 adjusts the sharpness of the R component of the pixel (i, j) based on the following equation (13) to correct the axial chromatic aberration.
R'(i,j) = R0(i,j) + (G0(i,j) − G'(i,j)) …(13)
Similarly, for the B component, the correction unit 23 calculates a correction value G″(i,j) based on the distribution of the standard deviation DEVb of the color difference plane Cb between the B component and the G component, and adjusts the sharpness of the B component of the pixel (i, j) based on the following equation (14) to correct the axial chromatic aberration.
B'(i,j) = B0(i,j) + (G0(i,j) − G″(i,j)) …(14)
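Equations (13) and (14) transfer the high-frequency detail of the sharpest plane (here G) to the softer R and B planes by adding the difference between the original G values and their blurred counterparts. A NumPy sketch under the assumption that the correction values G' and G″ have already been obtained from the standard-deviation analysis of the color difference planes; the array values are illustrative:

```python
import numpy as np

def correct_axial_ca(R0, B0, G0, G_for_r, G_for_b):
    """Eqs. (13)/(14): add the detail of the sharp G plane (G0 minus its
    blurred correction value) to the softer R and B planes."""
    R1 = R0 + (G0 - G_for_r)   # eq. (13)
    B1 = B0 + (G0 - G_for_b)   # eq. (14)
    return R1, B1

G0 = np.array([10., 50., 10.])      # sharp edge in G
G_blur = np.array([20., 30., 20.])  # correction value: G blurred to R/B softness
R0 = np.array([20., 30., 20.])      # R lost the edge to axial aberration
R1, _ = correct_axial_ca(R0, R0, G0, G_blur, G_blur)
print(R1)  # [10. 50. 10.] -- the G edge is restored in R
```

Note that G itself is left untouched; only the other planes are sharpened toward it.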
Step S23: The color difference correction unit 24 of the CPU 1 performs color difference correction on the pixel value of each color component of the pixel (i, j) that has been subjected to the axial chromatic aberration correction processing.
 すなわち、ステップS22において、軸上色収差の補正処理が施された画素の各色成分の画素値は、補正前の画素値と比較した場合、特に、輝度色差の色空間における色差成分の向きが大きく変わる場合があり、それにより変色が発生する。そこで、本実施形態では、その変色の発生を抑制するために、色差補正部24は、画素(i,j)における補正後の色差成分が、輝度色差の空間において、補正前の色差成分の向きと同じになるように補正する。 That is, when the pixel values of the color components of a pixel subjected to the axial chromatic aberration correction in step S22 are compared with the pixel values before correction, the direction of the color difference component in the luminance/color-difference color space may in particular change greatly, causing discoloration. Therefore, in the present embodiment, to suppress such discoloration, the color difference correction unit 24 corrects the color difference component of the pixel (i, j) after correction so that its direction in the luminance/color-difference space is the same as that of the color difference component before correction.
 具体的には、色差補正部24は、画素(i,j)の補正前後の各色成分の画素値を、公知の変換処理を適用して、RGBの画素値(R’,G,B’)をYCrCbの輝度成分と色差成分(Y’,Cr’,Cb’)に変換する。ここで、補正前の輝度成分および色差成分を(Y0,Cr0,Cb0)とする。そして、色差補正部24は、次式(15)により、画素(i,j)の色差成分の向きを補正前の向きに補正する。なお、本実施形態において、輝度成分Y’は補正しない。 Specifically, the color difference correction unit 24 applies a known conversion process to the pixel values of the color components of the pixel (i, j) before and after correction, converting the RGB pixel values (R', G, B') into a YCrCb luminance component and color difference components (Y', Cr', Cb'). Here, the luminance component and color difference components before correction are denoted (Y0, Cr0, Cb0). The color difference correction unit 24 then corrects the direction of the color difference component of the pixel (i, j) back to the direction before correction by the following equation (15). In the present embodiment, the luminance component Y' is not corrected.
Figure JPOXMLDOC01-appb-M000002
 色差補正部24は、再度、上述した公知の変換処理を適用して、画素の色差補正後の輝度成分および色差成分(Y’,Cr”,Cb”)をRGBの画素値(R1,G1,B1)に変換する。色差補正部24は、画素値(R1,G1,B1)を画素(i,j)の画素値とする。 The color difference correction unit 24 applies the above-described known conversion process again to convert the luminance component and color difference components (Y', Cr″, Cb″) after the color difference correction of the pixel into RGB pixel values (R1, G1, B1), and sets the pixel values (R1, G1, B1) as the pixel values of the pixel (i, j).
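The intent of the direction correction can be sketched without reproducing equation (15) itself: keep the magnitude of the corrected chroma vector (Cr', Cb') but force it back onto the direction of the pre-correction vector (Cr0, Cb0), leaving Y' untouched. The function below is an illustrative assumption about that geometry, not the patent's exact formula:

```python
import math

def redirect_chroma(cr0, cb0, cr1, cb1):
    """Keep the magnitude of the corrected chroma (cr1, cb1) but restore
    the direction of the pre-correction chroma (cr0, cb0)."""
    mag0 = math.hypot(cr0, cb0)
    if mag0 == 0.0:          # no pre-correction chroma: nothing to align to
        return cr1, cb1
    scale = math.hypot(cr1, cb1) / mag0
    return cr0 * scale, cb0 * scale

# The correction rotated the chroma vector; pull it back to the old direction
# (old direction (3,4)/5, new magnitude 5) while keeping the new magnitude.
cr2, cb2 = redirect_chroma(3.0, 4.0, -5.0, 0.0)
print(cr2, cb2)  # 3.0 4.0
```

Because only the direction is restored, genuine desaturation or saturation changes produced by the aberration correction are preserved, while hue rotation (the visible "discoloration") is undone.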
 ステップS24:CPU1は、対象画像の全ての画素について処理が終了したか否かを判定する。CPU1は、全ての画素について処理が終了していない場合、ステップS18(NO側)へ移行し、次の画素について、ステップS18からステップS23の処理を行う。一方、CPU1は、全ての画素について処理が終了した場合、軸上色収差が補正された画像を、記憶部2に記録したり出力装置30に表示したりする。そして、CPU1は、一連の処理を終了する。 Step S24: The CPU 1 determines whether or not the processing has been completed for all the pixels of the target image. If the processing has not been completed for all the pixels, the CPU 1 proceeds to step S18 (NO side), and performs the processing from step S18 to step S23 for the next pixel. On the other hand, when the processing is completed for all the pixels, the CPU 1 records an image in which the axial chromatic aberration is corrected in the storage unit 2 or displays it on the output device 30. Then, the CPU 1 ends a series of processes.
 このように、本実施形態では、各色差面の最小ぼかし指標、各色成分のテクスチャ量および補正値の値に基づいて各画素における色構造を判定することにより、軸上色収差の補正を確度高く補正することができる。 As described above, in the present embodiment, axial chromatic aberration can be corrected with high accuracy by judging the color structure at each pixel based on the minimum blur index of each color difference plane, the texture amount of each color component, and the value of the correction value.
 また、色境界や擬似軸上色収差領域等と判定された画素に対する軸上色収差の補正処理を行わないことにより、変色や色抜け等の発生を回避することができる。 Further, by not performing the axial chromatic aberration correction processing on the pixels determined to be color boundaries, pseudo-axial chromatic aberration regions, or the like, it is possible to avoid occurrence of discoloration or color loss.
 さらに、補正後の画素値に対して、色差空間において補正前の画素値が有した色差成分の向きに補正することにより、変色や色抜け等を抑制しつつ軸上色収差の補正をより精度よく行うことができる。
《実施形態の補足事項》
 (1)本発明の画像処理装置は、画像処理プログラムをコンピュータ10に実行させることで実現させたが、本発明はこれに限定されない。本発明に係る画像処理装置における処理をコンピュータ10で実現するためのプログラムおよびそれを記録した媒体に対しても適用可能である。
Furthermore, by correcting the corrected pixel values so that, in the color difference space, the color difference component points in the same direction as before correction, axial chromatic aberration can be corrected with higher accuracy while suppressing discoloration, color loss, and the like.
<< Additional items of embodiment >>
(1) Although the image processing apparatus of the present invention is realized by causing the computer 10 to execute an image processing program, the present invention is not limited to this. The present invention is also applicable to a program for realizing the processing in the image processing apparatus according to the present invention by the computer 10 and a medium on which the program is recorded.
 また、本発明の画像処理プログラムを有した図6に示すようなデジタルカメラに対しても適用可能である。なお、図6に示すデジタルカメラにおいて、撮像素子102と、撮像素子102から入力される画像信号のA/D変換や、色補間処理などの信号処理を行うデジタルフロントエンド回路のDFE103とが、撮像部を構成することが好ましい。 The present invention is also applicable to a digital camera, as shown in FIG. 6, carrying the image processing program of the present invention. In the digital camera shown in FIG. 6, the image sensor 102 and the DFE 103, a digital front-end circuit that performs signal processing such as A/D conversion of the image signal input from the image sensor 102 and color interpolation processing, preferably constitute an imaging unit.
 また、デジタルカメラを本発明の画像処理装置として動作させる場合、CPU104は、画像平滑部20、演算部21、調整部22、補正部23、色差補正部24の各処理をソフトウエア的に実現してもよいし、ASICを用いてこれらの各処理をハードウエア的に実現してもよい。 When the digital camera is operated as the image processing apparatus of the present invention, the CPU 104 realizes each process of the image smoothing unit 20, the calculation unit 21, the adjustment unit 22, the correction unit 23, and the color difference correction unit 24 by software. Alternatively, these processes may be realized by hardware using an ASIC.
 (2)上記実施形態では、画像平滑部20が、N個のガウシアンフィルタを用いて、対象画像からN個の平滑画像を生成したが、本発明はこれに限定されない。例えば、図6に示すようなデジタルカメラの撮像レンズ101などの光学系の点広がり関数(PSF)が得られる場合、画像平滑部20は、ガウシアンフィルタを用いる代わりに、PSFを用いて平滑画像を生成してもよい。 (2) In the above embodiment, the image smoothing unit 20 generates N smooth images from the target image using N Gaussian filters, but the present invention is not limited to this. For example, when the point spread function (PSF) of an optical system such as the imaging lens 101 of the digital camera shown in FIG. 6 is available, the image smoothing unit 20 may generate the smooth images using the PSF instead of Gaussian filters.
 また、画像平滑化部20は、例えば、少なくとも2つのガウシアンフィルタを用いて一組の平滑画像を最初に生成し、その一組の平滑画像を所定の割合で混合することにより、ぼかし指標が互いに異なるN個の平滑画像を生成してもよい。これにより、画像処理の単純化および高速化を図ることができる。また、ガウシアンフィルタの最大半径がぼかし量を超えない範囲で、平滑画像の混合の割合を調整することにより、任意のぼかし指標を有する平滑画像を容易に生成することができる。つまり、軸上色収差の補正において、非常に大きなぼかし量が必要となる画像領域に対しても、所望のぼかし量の範囲で平滑画像を容易に生成できる。 Alternatively, the image smoothing unit 20 may first generate a pair of smooth images using, for example, at least two Gaussian filters, and generate the N smooth images with mutually different blur indices by mixing the pair at predetermined ratios. This simplifies and speeds up the image processing. Moreover, by adjusting the mixing ratio of the smooth images within a range where the maximum radius of the Gaussian filter does not exceed the blur amount, a smooth image with an arbitrary blur index can be generated easily; that is, even for an image region that requires a very large blur amount in correcting axial chromatic aberration, a smooth image can be generated easily within the desired blur-amount range.
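The mixing variant described above can be sketched as follows. The blur values and mixing weights are illustrative assumptions, and a linear mix of two blurred planes only approximates a true intermediate Gaussian blur, which is why it is presented as a speed/simplicity trade-off:

```python
import numpy as np

def intermediate_blur(img_small_blur, img_large_blur, alpha):
    """Approximate an intermediate smoothing level by mixing two
    pre-computed smooth images; alpha=0 -> small blur, alpha=1 -> large."""
    return (1.0 - alpha) * img_small_blur + alpha * img_large_blur

a = np.full((2, 2), 10.0)   # lightly blurred plane (stand-in values)
b = np.full((2, 2), 30.0)   # heavily blurred plane
# Three intermediate "blur indices" from a single pair of filter outputs:
stack = [intermediate_blur(a, b, t) for t in (0.25, 0.5, 0.75)]
print([float(s[0, 0]) for s in stack])  # [15.0, 20.0, 25.0]
```

Only two filter passes over the image are needed regardless of how many blur levels N are requested, which is the source of the claimed speed-up.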
 (3)上記実施形態では、演算部21は、対象画像とぼかし指標が最も小さいk=1の平滑画像とを同一の色成分ごとに差分してテクスチャ量を算出したが、本発明はこれに限定されない。演算部21は、どの程度の空間スケールの画像構造まで考慮するかに応じて、いずれかのぼかし指標の平滑画像を用いテクスチャ量を算出するのが好ましい。 (3) In the above embodiment, the calculation unit 21 calculates the texture amount by taking the difference, for each identical color component, between the target image and the k = 1 smooth image having the smallest blur index, but the present invention is not limited to this. The calculation unit 21 preferably calculates the texture amount using the smooth image of whichever blur index matches the spatial scale of image structure to be taken into account.
 (4)上記実施形態では、R成分とG成分との色差面Cr、B成分とG成分との色差面Cb、R成分とB成分との色差面Crbに基づいて、対象画像の軸上色収差の補正を行ったが、本発明はこれに限定されない。例えば、3つの色差面のうち2つの色差面に基づいて、対象画像の軸上色収差の補正を行ってもよい。これにより、補正処理の高速化を図ることができる。 (4) In the above embodiment, the axial chromatic aberration of the target image is corrected based on the color difference plane Cr between the R and G components, the color difference plane Cb between the B and G components, and the color difference plane Crb between the R and B components, but the present invention is not limited to this. For example, the axial chromatic aberration of the target image may be corrected based on two of the three color difference planes, which speeds up the correction process.
 (5)上記実施形態では、対象画像は、各画素においてR成分、G成分、B成分の画素値を有するとしたが、本発明はこれに限定されない。例えば、対象画像の各画素において、2つまたは4つ以上の色成分を有してもよい。 (5) In the above embodiment, the target image has pixel values of R component, G component, and B component in each pixel, but the present invention is not limited to this. For example, each pixel of the target image may have two or four or more color components.
 また、図6に示すデジタルカメラの撮像素子102の受光面の各画素子に、R、G、Bのカラーフィルタが公知のベイヤ配列に従って配置されている場合、その撮像素子102によって撮像されたRAW画像に対しても、本発明は適用可能である。 The present invention is also applicable to a RAW image captured by the image sensor 102 of the digital camera shown in FIG. 6 when R, G, and B color filters are arranged on the pixels of its light receiving surface in accordance with the known Bayer array.
 (6)上記実施形態では、ステップS18において、補正部23が、画素(i,j)における各色差の最小ぼかし指標の変化率が閾値ε1以上且つ補正値が閾値ε2以上の場合、画素(i,j)の色構造は色境界であると判定したが、本発明ではこれに限定されない。例えば、補正部23は、色境界と判定された画素周辺の画像領域において、最小ぼかし指標の変化率が閾値ε1より小さく、且つ補正値が閾値ε2以上となる領域を検出し、軸上色収差の補正対象から除外することが好ましい。これは、色境界の周辺のそのような領域は、本発明により、軸上色収差の補正が施された場合、対象画像の画質を低下させてしまうことが判明したからである。 (6) In the above embodiment, in step S18, the correction unit 23 determines that the color structure of the pixel (i, j) is a color boundary when the change rate of the minimum blur index of each color difference at the pixel (i, j) is equal to or greater than the threshold ε1 and the correction value is equal to or greater than the threshold ε2; however, the present invention is not limited to this. For example, the correction unit 23 preferably detects, in the image region around a pixel determined to be a color boundary, a region in which the change rate of the minimum blur index is smaller than the threshold ε1 and the correction value is equal to or greater than the threshold ε2, and excludes that region from the axial chromatic aberration correction targets. This is because it has been found that correcting the axial chromatic aberration of such a region around a color boundary according to the present invention degrades the image quality of the target image.
 (7)本実施形態では、ステップS21において、補正部23は、画素の色構造が擬似軸上色収差領域か否かを判定したが、本発明はこれに限定されず、ステップS21の処理を省略してもよい。 (7) In the present embodiment, in step S21, the correction unit 23 determines whether or not the color structure of the pixel is a pseudo-axial chromatic aberration region; however, the present invention is not limited to this, and the process of step S21 may be omitted.
 (8)上記実施形態では、色差補正部24は、軸上色収差の補正処理が施された画素全てに対して色差補正を行ったが、本発明はこれに限定されず、それらの画素の補正後の色差成分の大きさL’の値が、補正前の色差成分の大きさLの値より小さい場合、色差補正を行わないようにしてもよい。 (8) In the above embodiment, the color difference correction unit 24 performs the color difference correction on all pixels subjected to the axial chromatic aberration correction. However, the present invention is not limited to this: when the magnitude L' of the color difference component after correction of such a pixel is smaller than the magnitude L of the color difference component before correction, the color difference correction may be omitted.
 また、色差補正部24は、補正後の色差成分の大きさL’が、補正前の色差成分の大きさL(所定の大きさ)より大きい場合、図7および次式(16)で定義される補正出力率φ(L’)を用いて、式(15)を変形した次式(17)に基づき補正後の色差成分の出力率を小さくしてもよい。 When the magnitude L' of the corrected color difference component is larger than the magnitude L of the color difference component before correction (a predetermined magnitude), the color difference correction unit 24 may reduce the output rate of the corrected color difference component based on the following equation (17), obtained by modifying equation (15), using the correction output rate φ(L') defined in FIG. 7 and the following equation (16).
Figure JPOXMLDOC01-appb-M000003
 これにより、変色の発生をより正確に抑制することが可能となる。なお、関数clip(V,U1,U2)は、パラメータVの値が下限値U1と上限値U2との範囲外の値の場合、下限値U1または上限値U2にクリップする。なお、係数WVは、補正出力率φ(L’)を上限値U2(=1)から下限値U1(=0)に変化させる幅を示し、本実施形態では、255階調の画像の場合、例えば、係数WV=5~10の値とする。ただし、係数WVの値は、要求される変色の抑制の度合い等に応じて適宜設定されることが好ましい。 This makes it possible to suppress the occurrence of discoloration more accurately. The function clip(V, U1, U2) clips the value of the parameter V to the lower limit U1 or the upper limit U2 when V falls outside the range between them. The coefficient WV indicates the width over which the correction output rate φ(L') falls from the upper limit U2 (= 1) to the lower limit U1 (= 0); in the present embodiment, for a 255-gradation image, the coefficient WV is set to a value of, for example, 5 to 10. The value of the coefficient WV is, however, preferably set as appropriate according to the required degree of suppression of discoloration.
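The clip function and the output-rate ramp described above can be sketched as follows. Equation (16) is not reproduced in the text, so the exact shape of φ(L') here — 1 while L' ≤ L, then a linear fall to 0 over the width WV — is an assumption consistent with the description of U1, U2, and WV:

```python
def clip(v, lo, hi):
    """clip(V, U1, U2): limit v to the range [lo, hi]."""
    return max(lo, min(hi, v))

def phi(l_corrected, l_before, wv=8.0):
    """Illustrative correction output rate: full output (1) while the
    corrected chroma magnitude L' does not exceed L, then a linear ramp
    down to 0 over a width WV (assumed form of eq. (16))."""
    return clip(1.0 - (l_corrected - l_before) / wv, 0.0, 1.0)

print(clip(300, 0, 255))   # 255
print(phi(10.0, 10.0))     # 1.0  (no chroma growth: full output)
print(phi(14.0, 10.0))     # 0.5  (halfway through the WV ramp)
print(phi(18.0, 10.0))     # 0.0  (growth >= WV: suppress entirely)
```

The WV default of 8 sits inside the 5–10 range the text suggests for a 255-gradation image; in practice it would be tuned to the required degree of discoloration suppression.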
 (9)上記実施形態では、補正部23は、画素の色構造が色境界や擬似軸上色収差領域の場合、軸上色収差の補正処理を行わないとしたが、本発明はこれに限定されない。例えば、補正部23は、色境界や擬似軸上色収差領域の画素に対して、ステップS22およびステップS23の処理を行ってもよい。ただし、補正部23は、ステップS22において、式(13)および(14)の代わりに次式(18)および(19)を用いて行うのが好ましい。
R’(i,j)=R0(i,j)+γ×(G0(i,j)-G’(i,j))…(18)
B’(i,j)=B0(i,j)+γ×(G0(i,j)-G”(i,j))…(19)
ここで、係数γは、0~1の間で色構造に応じて設定される値である。すなわち、色構造が色境界や擬似軸上色収差領域などの場合、それらによる影響があまり目立たないようにするために、係数γ=0または0.1~0.2以下等、各色差面の最小ぼかし指標、補正値または各色成分のテクスチャ量等に応じて、係数γを小さな値にするのが好ましい。一方、色構造が軸上色収差やパープルフリンジ領域と判定された場合、係数γ=1等の値にするのが好ましい。
(9) In the above embodiment, the correction unit 23 does not perform the axial chromatic aberration correction when the color structure of the pixel is a color boundary or a pseudo-axial chromatic aberration region, but the present invention is not limited to this. For example, the correction unit 23 may perform the processes of steps S22 and S23 on pixels in color boundaries and pseudo-axial chromatic aberration regions. In that case, however, the correction unit 23 preferably uses the following equations (18) and (19) in step S22 instead of equations (13) and (14).
R'(i,j) = R0(i,j) + γ × (G0(i,j) − G'(i,j)) …(18)
B'(i,j) = B0(i,j) + γ × (G0(i,j) − G″(i,j)) …(19)
Here, the coefficient γ is a value set between 0 and 1 according to the color structure. That is, when the color structure is a color boundary, a pseudo-axial chromatic aberration region, or the like, the coefficient γ is preferably set to a small value — e.g. γ = 0 or about 0.1 to 0.2 or less — according to the minimum blur index of each color difference plane, the correction value, the texture amount of each color component, and the like, so that the influence of the correction on those structures is not noticeable. On the other hand, when the color structure is determined to be axial chromatic aberration or a purple fringe region, the coefficient is preferably set to a value such as γ = 1.
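Equations (18) and (19) generalize (13) and (14) with the blending coefficient γ. A sketch of this per-structure blending; the mapping from judged structure to γ mirrors the values suggested in the text, but the dictionary keys and exact values are illustrative:

```python
def blended_correction(v0, g0, g_corr, gamma):
    """Eqs. (18)/(19): apply only a fraction gamma of the sharpness
    transfer; gamma=0 leaves the pixel untouched, gamma=1 reproduces
    the full correction of eqs. (13)/(14)."""
    return v0 + gamma * (g0 - g_corr)

GAMMA = {                      # illustrative mapping from judged structure
    "color_boundary": 0.1,     # keep the correction nearly invisible
    "pseudo_axial": 0.0,       # effectively skip the correction
    "axial_aberration": 1.0,   # full correction
    "purple_fringe": 1.0,
}

r0, g0, g_corr = 20.0, 50.0, 30.0
print(blended_correction(r0, g0, g_corr, GAMMA["axial_aberration"]))  # 40.0
print(blended_correction(r0, g0, g_corr, GAMMA["pseudo_axial"]))      # 20.0
```

Driving γ continuously from the minimum blur index, correction value, or texture amounts, rather than from a discrete label, would give the same behavior without hard switches between regions.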
 以上の詳細な説明により、実施形態の特徴点および利点は明らかになるであろう。これは、特許請求の範囲が、その精神および権利範囲を逸脱しない範囲で前述のような実施形態の特徴点および利点にまで及ぶことを意図する。また、当該技術分野において通常の知識を有する者であれば、あらゆる改良および変更に容易に想到できるはずであり、発明性を有する実施形態の範囲を前述したものに限定する意図はなく、実施形態に開示された範囲に含まれる適当な改良物および均等物によることも可能である。 From the above detailed description, the features and advantages of the embodiment will become apparent. The claims are intended to cover the features and advantages of the embodiment described above to the extent that they do not depart from its spirit and scope. Furthermore, since a person of ordinary skill in the art could readily conceive of all manner of improvements and modifications, there is no intention to limit the scope of the inventive embodiment to that described above, and appropriate improvements and equivalents falling within the scope disclosed in the embodiment are also possible.
1…CPU;2…記憶部;3…入出力I/F;4…バス;10…コンピュータ;20…画像平滑部;21…演算部;22…調整部;23…補正部;24…色差補正部;30…出力装置;40…入力装置 DESCRIPTION OF SYMBOLS: 1...CPU; 2...storage unit; 3...input/output I/F; 4...bus; 10...computer; 20...image smoothing unit; 21...calculation unit; 22...adjustment unit; 23...correction unit; 24...color difference correction unit; 30...output device; 40...input device

Claims (14)

  1.  複数の色成分の画素値を有する対象画像を平滑化して平滑化の度合いが異なる複数の平滑画像を生成する画像平滑手段と、
     前記対象画像の画素それぞれにおいて、前記対象画像の所定の色成分と前記平滑画像の前記所定の色成分とは異なる色成分との差分である色差および前記色差の分散を算出する演算手段と、
     前記色差の分散に基づき前記対象画像の画素において鮮鋭度が最も高い色成分を決定し、決定した前記鮮鋭度が最も高い色成分に基づき前記対象画像の画素における色成分間の鮮鋭度を調整する調整量を算出する調整手段と、
     前記色差および前記調整量に基づいて前記対象画像の画素における画像構造を判定し、前記判定結果に応じて前記対象画像の画素における色成分間の鮮鋭度を前記調整量で補正する補正手段と、
     を備えることを特徴とする画像処理装置。
    Image smoothing means for smoothing a target image having pixel values of a plurality of color components and generating a plurality of smooth images having different degrees of smoothing;
    A calculation means for calculating a color difference that is a difference between a predetermined color component of the target image and a color component different from the predetermined color component of the smoothed image and a variance of the color difference in each pixel of the target image;
    adjustment means for determining, based on the variance of the color difference, the color component having the highest sharpness in a pixel of the target image, and calculating an adjustment amount for adjusting sharpness between color components in the pixel of the target image based on the determined color component having the highest sharpness;
    Correction means for determining an image structure in a pixel of the target image based on the color difference and the adjustment amount, and correcting sharpness between color components in the pixel of the target image with the adjustment amount according to the determination result;
    An image processing apparatus comprising:
  2.  請求項1に記載の画像処理装置において、
     前記調整手段は、前記対象画像の画素における前記鮮鋭度が最も高い色成分を、前記色差の分散の最小値に基づいて決定することを特徴とする画像処理装置。
    The image processing apparatus according to claim 1.
    The image processing apparatus according to claim 1, wherein the adjustment unit determines a color component having the highest sharpness in a pixel of the target image based on a minimum value of the variance of the color difference.
  3.  請求項1または請求項2に記載の画像処理装置において、
     前記演算手段は、前記対象画像の画素それぞれにおいて、前記対象画像と一の前記平滑画像とを同じ色成分ごとに差分して、前記対象画像における画像構造の多さを示すテクスチャ量を算出し、
     前記補正手段は、前記色差、前記調整量および前記テクスチャ量に基づいて前記対象画像の画素における画像構造を判定する
     ことを特徴とする画像処理装置。
    The image processing apparatus according to claim 1 or 2,
    wherein the calculation means calculates, for each pixel of the target image, a texture amount indicating the abundance of image structure in the target image by taking a difference between the target image and one of the smooth images for each identical color component, and
    The image processing apparatus, wherein the correction unit determines an image structure in a pixel of the target image based on the color difference, the adjustment amount, and the texture amount.
  4.  請求項3に記載の画像処理装置において、
     前記調整手段は、前記テクスチャ量が所定の値以下となるぼけた画像領域を抽出し、前記ぼけた画像領域の画素における前記色差の分散の最小値を平滑化することを特徴とする画像処理装置。
    The image processing apparatus according to claim 3.
    The image processing apparatus, wherein the adjustment means extracts a blurred image region in which the texture amount is equal to or less than a predetermined value, and smoothes the minimum value of the variance of the color difference at the pixels in the blurred image region.
  5.  請求項1ないし請求項4のいずれか1項に記載の画像処理装置において、
     前記補正手段は、前記対象画像の画素における前記色差の変化率および前記調整量の大きさに基づいて、前記対象画像の画素における画像構造が色境界か否かを判定し、前記色境界と判定した場合、前記対象画像の画素を補正対象から除外することを特徴とする画像処理装置。
    The image processing apparatus according to any one of claims 1 to 4,
    wherein the correction means determines whether or not the image structure at a pixel of the target image is a color boundary based on the change rate of the color difference at the pixel of the target image and the magnitude of the adjustment amount, and, when the pixel is determined to be a color boundary, excludes the pixel of the target image from the correction targets.
  6.  請求項1ないし請求項5のいずれか1項に記載の画像処理装置において、
     前記補正手段は、前記対象画像の画素における前記調整量と隣接する画素の調整量との大きさを比較して前記対象画像の画素における画像構造が孤立した構造か否かを判定し、前記孤立した構造と判定した場合、前記対象画像の画素を補正対象から除外することを特徴とする画像処理装置。
    The image processing apparatus according to any one of claims 1 to 5,
    wherein the correction means compares the magnitude of the adjustment amount at a pixel of the target image with the adjustment amounts of adjacent pixels to determine whether or not the image structure at the pixel of the target image is an isolated structure, and, when the pixel is determined to be an isolated structure, excludes the pixel of the target image from the correction targets.
  7.  請求項3または請求項4に記載の画像処理装置において、
     前記補正手段は、前記対象画像の画素における前記テクスチャ量および前記補正量の大きさに基づいて、前記対象画像の画素における画像構造が軸上色収差に類似した擬似軸上色収差領域か否かを判定し、前記擬似軸上色収差領域と判定した場合、前記対象画像の画素を補正対象から除外することを特徴とする画像処理装置。
    The image processing apparatus according to claim 3 or 4,
    wherein the correction means determines, based on the texture amount and the magnitude of the correction amount at a pixel of the target image, whether or not the image structure at the pixel of the target image is a pseudo-axial chromatic aberration region resembling axial chromatic aberration, and, when the pixel is determined to be in a pseudo-axial chromatic aberration region, excludes the pixel of the target image from the correction targets.
  8.  請求項5に記載の画像処理装置において、
     前記補正手段は、前記各色成分の分布に基づいて、前記色境界と判定された前記対象画像の画素の画像構造が飽和領域周辺の濃度差による色にじみか否かを判定し、前記色にじみと判定した場合、前記対象画像の画素を前記補正対象とすることを特徴とする画像処理装置。
    The image processing apparatus according to claim 5.
    wherein the correction means determines, based on the distribution of each color component, whether or not the image structure at a pixel of the target image determined to be a color boundary is color bleeding caused by a density difference around a saturated region, and, when the pixel is determined to exhibit the color bleeding, treats the pixel of the target image as a correction target.
  9.  請求項1ないし請求項8のいずれか1項に記載の画像処理装置において、
     前記鮮鋭度が調整された前記対象画像の画素の画素値を、色差空間において、前記鮮鋭度の調整前の画素値の色差の向きと同じになるように補正する色差補正手段を備えることを特徴とする画像処理装置。
    The image processing apparatus according to any one of claims 1 to 8,
    further comprising color difference correction means for correcting the pixel values of a pixel of the target image whose sharpness has been adjusted so that, in a color difference space, the direction of the color difference is the same as that of the pixel values before the sharpness adjustment.
  10.  請求項1ないし請求項9のいずれか1項に記載の画像処理装置において、
     前記画像平滑手段は、少なくとも2つの異なる平滑化の度合いで前記対象画像を平滑化して一組の平滑画像を生成し、前記一組の平滑画像を所定の割合で混合して前記複数の平滑画像を生成することを特徴とする画像処理装置。
    The image processing apparatus according to any one of claims 1 to 9,
    wherein the image smoothing means smoothes the target image with at least two different degrees of smoothing to generate a set of smooth images, and generates the plurality of smooth images by mixing the set of smooth images at predetermined ratios.
  11.  複数の色成分の画素値を有する対象画像の画素における画像構造を判定する判定手段と、
     前記対象画像の画素において鮮鋭度が最も高い色成分を決定し、決定した前記鮮鋭度が最も高い色成分に基づき前記対象画像の画素における色成分の鮮鋭度を調整する調整量を算出し、前記判定結果に応じて前記対象画像の画素における色成分の鮮鋭度を前記調整量で調整する補正手段と、
     を備えること特徴とする画像処理装置。
    Determining means for determining an image structure in a pixel of a target image having pixel values of a plurality of color components;
    correction means for determining the color component having the highest sharpness in a pixel of the target image, calculating an adjustment amount for adjusting the sharpness of a color component in the pixel of the target image based on the determined color component having the highest sharpness, and adjusting the sharpness of the color component in the pixel of the target image by the adjustment amount according to the determination result; and
    An image processing apparatus comprising:
  12.  被写体を撮像して、複数の色成分の画素値を有する対象画像を生成する撮像手段と、
     請求項1ないし請求項11のいずれか1項に記載の画像処理装置と、
     を備えることを特徴とする撮像装置。
    Imaging means for imaging a subject and generating a target image having pixel values of a plurality of color components;
    An image processing apparatus according to any one of claims 1 to 11,
    An imaging apparatus comprising:
  13.  複数の色成分の画素値を有する対象画像を読み込む入力手順、
     前記対象画像を平滑化して平滑化の度合いが異なる複数の平滑画像を生成する画像平滑手順、
     前記対象画像の画素それぞれにおいて、前記対象画像の所定の色成分と前記平滑画像の前記所定の色成分とは異なる色成分との差分である色差および前記色差の分散を算出する演算手順、
     前記色差の分散に基づき前記対象画像の画素において鮮鋭度が最も高い色成分を決定し、決定した前記鮮鋭度が最も高い色成分に基づき前記対象画像の画素の色成分間の鮮鋭度を調整する調整量を算出する調整手順、
     前記色差および前記調整量に基づいて前記対象画像の画素における画像構造を判定し、前記判定結果に応じて前記対象画像の画素における色成分間の鮮鋭度を前記調整量で補正する補正手順、
     をコンピュータに実行させることを特徴とする画像処理プログラム。
    An input procedure for reading a target image having pixel values of a plurality of color components;
    An image smoothing procedure for smoothing the target image and generating a plurality of smoothed images having different degrees of smoothing;
    A calculation procedure for calculating a color difference that is a difference between a predetermined color component of the target image and a color component different from the predetermined color component of the smoothed image and a variance of the color difference in each pixel of the target image;
    an adjustment procedure of determining, based on the variance of the color difference, the color component having the highest sharpness in a pixel of the target image, and calculating an adjustment amount for adjusting sharpness between color components of the pixel of the target image based on the determined color component having the highest sharpness;
    A correction procedure for determining an image structure in a pixel of the target image based on the color difference and the adjustment amount, and correcting sharpness between color components in the pixel of the target image with the adjustment amount according to the determination result;
    An image processing program for causing a computer to execute.
  14.  複数の色成分の画素値を有する対象画像の画素における画像構造を判定する判定手順、
     前記対象画像の画素において鮮鋭度が最も高い色成分を決定し、決定した前記鮮鋭度が最も高い色成分に基づき前記対象画像の画素における色成分の鮮鋭度を調整する調整量を算出し、前記判定結果に応じて前記対象画像の画素における色成分の鮮鋭度を前記調整量で調整する補正手順、
     をコンピュータに実行させることを特徴とする画像処理プログラム。
    A determination procedure for determining an image structure in a pixel of a target image having pixel values of a plurality of color components;
    a correction procedure of determining the color component having the highest sharpness in a pixel of the target image, calculating an adjustment amount for adjusting the sharpness of a color component in the pixel of the target image based on the determined color component having the highest sharpness, and adjusting the sharpness of the color component in the pixel of the target image by the adjustment amount according to the determination result,
    An image processing program for causing a computer to execute.
PCT/JP2013/000860 2012-02-22 2013-02-18 Image processor, imaging device, and image processing program WO2013125198A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012036464A JP2013172402A (en) 2012-02-22 2012-02-22 Image processing device, image pickup device, and image processing program
JP2012-036464 2012-02-22

Publications (1)

Publication Number Publication Date
WO2013125198A1 true WO2013125198A1 (en) 2013-08-29

Family

ID=49005395

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/000860 WO2013125198A1 (en) 2012-02-22 2013-02-18 Image processor, imaging device, and image processing program

Country Status (2)

Country Link
JP (1) JP2013172402A (en)
WO (1) WO2013125198A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10659738B2 (en) 2015-05-12 2020-05-19 Olympus Corporation Image processing apparatus, image processing method, and image processing program product
CN115731829A (en) * 2021-08-30 2023-03-03 广州视源电子科技股份有限公司 Image quality adjusting method, storage medium and display device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016084223A1 (en) 2014-11-28 2016-06-02 オリンパス株式会社 Image processing apparatus, image processing method, and image processing program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010252231A (en) * 2009-04-20 2010-11-04 Canon Inc Image processing apparatus, method of controlling the same, and program
JP2011160168A (en) * 2010-02-01 2011-08-18 Hitachi Consumer Electronics Co Ltd Image processing apparatus
JP2012015951A (en) * 2010-07-05 2012-01-19 Nikon Corp Image processing device, imaging device and image processing program



Also Published As

Publication number Publication date
JP2013172402A (en) 2013-09-02

Similar Documents

Publication Publication Date Title
US20190087941A1 (en) System for image correction processing
US8514304B2 (en) Image processing device and image pickup device using the same
US8941762B2 (en) Image processing apparatus and image pickup apparatus using the same
JP5441652B2 (en) Image processing method, image processing apparatus, imaging apparatus, and image processing program
EP1931130B1 (en) Image processing apparatus, image processing method, and program
KR101536162B1 (en) Image processing apparatus and method
JP2011124692A5 (en)
JP5541205B2 (en) Image processing apparatus, imaging apparatus, image processing program, and image processing method
JP5917048B2 (en) Image processing apparatus, image processing method, and program
JP5479187B2 (en) Image processing apparatus and imaging apparatus using the same
JPWO2002071761A1 (en) Image processing apparatus and image processing program
WO2013125198A1 (en) Image processor, imaging device, and image processing program
JP2012156715A (en) Image processing device, imaging device, image processing method, and program
JP6415108B2 (en) Image processing method, image processing apparatus, imaging apparatus, image processing program, and storage medium
JP5811635B2 (en) Image processing apparatus, imaging apparatus, and image processing program
JP5630105B2 (en) Image processing apparatus, imaging apparatus, and image processing program
WO2012004973A1 (en) Image processing device, imaging device, and image processing program
JP6238673B2 (en) Image processing apparatus, imaging apparatus, imaging system, image processing method, image processing program, and storage medium
JP6843510B2 (en) Image processing equipment, image processing methods and programs
US9055232B2 (en) Image processing apparatus capable of adding soft focus effects, image processing method, and storage medium
JP2012100215A (en) Image processing device, imaging device, and image processing program
JP4385890B2 (en) Image processing method, frequency component compensation unit, image processing apparatus including the frequency component compensation unit, and image processing program
JP6604737B2 (en) Image processing apparatus, imaging apparatus, image processing method, image processing program, and storage medium
JP2017224906A (en) Image processing method, imaging apparatus using the same, image processing system, and image processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13752088

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13752088

Country of ref document: EP

Kind code of ref document: A1