WO2005101854A1 - Image processing apparatus having color shift correction function, image processing program, and electronic camera - Google Patents
- Publication number
- WO2005101854A1 (PCT/JP2005/006951)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- color
- image processing
- correction
- processing apparatus
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
- H04N25/611—Correction of chromatic aberration
Definitions
- Image processing apparatus having color shift correction function, image processing program, and electronic camera
- the present invention relates to an image processing device, an image processing program, and an electronic camera.
- the conventional device of Patent Document 1 first detects a color shift based on an edge portion of an image, and performs image processing based on the color shift to correct lateral chromatic aberration.
- the conventional device of Patent Document 2 performs magnification adjustment for each color component of an image, and searches for a minimum point of a difference between the color components, thereby correcting chromatic aberration of magnification.
- Patent Document 3 discloses a technique for locally detecting a color shift as a technique for correcting a color shift that occurs when a moving image is shot.
- Patent Document 1 JP-A-2000-299874
- Patent Document 2 JP-A-2002-344978 (FIGS. 1 and 3)
- Patent Document 3 Patent No. 2528826
- RGB Red, Green, Blue
- RAW data in which one type of color component is arranged for each pixel is generated.
- the electronic camera generates image data in which all color components are present at each pixel by generating the missing color components of the RAW data through color interpolation processing. This color interpolation process generates false color noise in the image data. In order to reduce this false color noise, the electronic camera applies a spatial-frequency low-pass filter to the color difference components of the image data (hereinafter, the "color difference LPF").
- the low-frequency color difference change caused by chromatic aberration of magnification at gentle edges is not removed by the above-described color difference LPF, but passes through as it is. That is, in image regions where the spatial frequency is low, the color shift of the chromatic aberration of magnification remains even after the color difference LPF processing.
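The color difference LPF described above can be sketched as follows. The patent does not specify the camera's actual kernel, so the separable box filter and its radius are illustrative assumptions; only the overall scheme — smooth the color differences R−G and B−G while leaving G untouched — follows the text.

```python
import numpy as np

def color_difference_lpf(rgb, radius=2):
    """Suppress false-color noise by low-pass filtering the color
    differences (R-G, B-G) while leaving G intact. The box filter and
    its radius are illustrative assumptions, not the patent's kernel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cr, cb = r - g, b - g                  # color-difference planes

    def box_blur(p, k):
        size = 2 * k + 1
        kernel = np.ones(size) / size
        # separable box filter on an edge-padded plane
        p = np.pad(p, k, mode='edge')
        p = np.apply_along_axis(lambda m: np.convolve(m, kernel, 'valid'), 0, p)
        p = np.apply_along_axis(lambda m: np.convolve(m, kernel, 'valid'), 1, p)
        return p

    cr_s, cb_s = box_blur(cr, radius), box_blur(cb, radius)
    # rebuild RGB from G and the smoothed color differences
    return np.stack([g + cr_s, g, g + cb_s], axis=-1)
```

On a uniform image the color differences are constant, so the filter leaves it unchanged; at sharp false-color transitions the smoothed color differences suppress the high-frequency chroma noise.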
- Patent Documents 1 and 2 are based on the premise that a color shift due to chromatic aberration of magnification occurs point-symmetrically with respect to the center of the screen (the center of the optical axis).
- when the color difference LPF is applied, however, the chromatic aberration of magnification becomes uneven, and this point symmetry easily collapses. If the aberration amount is detected in such a state, an error occurs in the detection result.
- an object of the present invention is to provide a technique for optimizing color shift correction for an image having a plurality of color component surfaces.
- Another object of the present invention is to propose a new idea of monitoring miscorrection of color misregistration when correcting color misregistration of an image.
- Another object of the present invention is to propose a technique for correctly detecting a color shift of chromatic aberration of magnification by taking into account the influence of false colors and the like.
- An image processing apparatus is an image processing apparatus that performs image processing on an image represented by a plurality of color component planes, and includes a correction unit and an erroneous correction detection unit. This correcting means corrects the color shift of the image.
- the erroneous correction detection means compares the images before and after the correction by the correction means and determines erroneous correction by the correction means.
- the erroneous correction detection means sets erroneous correction when the color difference of the processing target portion of the corrected image is not within an allowable range set based on the image before correction.
- the erroneous correction detection means suppresses the erroneous correction by limiting the color difference of the corrected image within an allowable range.
- the erroneous correction detecting means collects the color differences in a local area including the processing target portion of the image before correction, adds a predetermined low chroma color difference to this group, and determines the range from the minimum color difference to the maximum color difference of the group as the allowable range.
- the correction unit is characterized in that, when the erroneous correction detection unit determines erroneous correction, the erroneous correction is suppressed by suppressing a shift width of the color misregistration correction.
- the color misregistration is one generated in the image by the chromatic aberration of magnification of the optical system, and the correcting unit corrects it by adjusting the magnification of a color component plane (resizing) in the direction that cancels the chromatic aberration of magnification.
- another image processing apparatus of the present invention is an image processing apparatus for estimating the chromatic aberration of magnification of a photographic optical system from an input image, and includes a color misregistration detecting unit and an estimating unit.
- the color misregistration detection unit detects a color misregistration width in a radial direction at a plurality of detection locations of an image.
- the estimating unit obtains the statistical distribution of the chromatic aberration of magnification over the screen based on the color misregistration widths at the plurality of locations, and uses a value that is larger than the average value of the statistical distribution yet within the spread range of the statistical distribution as the estimated value of the chromatic aberration of magnification.
- the estimation unit includes a normalization unit and a coefficient processing unit.
- the normalization unit calculates the magnification difference of the color components indicated by the color shift by dividing the color shift width by the image height (radial distance) of the detected position.
- the coefficient processing unit obtains a histogram distribution of the magnification differences obtained at the detection locations, selects a value that lies on the side toward which the tail of the histogram distribution is biased (owing to the asymmetry of the tail) and that falls within the spread of that tail, and uses it as the estimated value of the chromatic aberration of magnification.
- the estimating unit obtains an estimated value of chromatic aberration of magnification for each divided region of the image.
- the estimation unit detects a color shift in the radial direction for each divided region of the image.
- chromatic aberration of magnification asymmetric with respect to the center of the screen is detected.
- another image processing apparatus of the present invention is an image processing apparatus for performing image processing on an image represented by a plurality of color component planes, comprising edge detecting means, direction detecting means, and color shift detecting means.
- the edge detecting means detects an edge region in the image.
- the direction detecting means detects, for each edge region, a gradient direction of the luminance of the edge.
- the color misregistration detecting means detects, for each edge region, a color misregistration between an image signal of an arbitrary color component surface in an image of the edge region and an image signal of a color component surface different from the arbitrary color component surface.
- another image processing apparatus of the present invention is an image processing apparatus that performs image processing on an image represented by a plurality of color component planes, comprising edge area detecting means and color misregistration detecting means.
- the edge area detecting means detects an edge area in the image.
- the color misregistration detecting means detects, for each edge region, a color shift between an image signal of an arbitrary color component surface in the image of the edge region and an image signal of a color component surface different from the arbitrary color component surface. The detection is performed in the radial direction around the portion corresponding to the optical axis of the optical system used when the image was generated.
- the image processing apparatus further includes a correction unit for correcting, for each edge region, the color shift detected by the color shift detection unit in the edge region.
- the image processing apparatus further includes a saturation determination unit that detects, for each edge region, the saturation of an image in the edge region and determines whether the saturation is higher than a predetermined value.
- the color shift detecting means detects the color shift when the saturation determining means determines that the saturation is higher than the predetermined value.
- the color misregistration detecting means provides, for each edge region, a window whose size is determined based on the width of the edge in that edge region, and detects the color misregistration within the image of the window.
- the color misregistration detecting means detects the color shift by calculating a correlation between the image signal of an arbitrary color component plane in the image of the window and the image signal of a color component plane different from the arbitrary color component plane.
- the apparatus further includes an evaluation means for reviewing the color shift detected by the color shift detecting means from the color shift detection results at a plurality of detection points.
- the correction unit corrects the color misregistration based on the result of the review by the evaluation unit.
- the correcting means corrects the color shift after performing a smoothing process on the color shift detected by the color shift detecting means.
- the above-mentioned correction means smoothes the color difference for each edge area after correcting the color shift.
- An image processing program according to the present invention causes a computer to function as the image processing device according to any one of the above [1] to [19].
- An electronic camera includes the image processing device according to any one of the above [1] to [19], and an imaging unit that captures a subject image to generate an image.
- the image generated by the imaging unit is processed by the image processing device.
- the image processing apparatus not only detects and corrects the color shift, but also compares the image before and after the correction and determines, based on the comparison result, whether the color misregistration has been erroneously corrected.
- if the number of erroneously corrected portions is equal to or more than a predetermined number (or covers a predetermined area), the color misregistration correction can be redone with a reduced correction amount.
- the image processing apparatus detects an erroneous correction according to a color difference change before and after the color misregistration correction.
- when the correction is proper, the abnormal color difference at the color misregistration portion changes in the direction in which it is canceled.
- when the correction is erroneous, the color difference at the color misregistration portion often changes to the opposite sign. Therefore, by monitoring the color difference change before and after the color misregistration correction, erroneous correction can be determined more accurately.
- the image processing apparatus is intended to obtain the chromatic aberration of magnification with high accuracy from the statistical distribution of the chromatic aberration of magnification on the screen.
- a portion where the chromatic aberration of magnification is reduced and a portion where the chromatic aberration is not reduced are mixed. If the statistical distribution of the chromatic aberration of magnification is obtained for such an image, the average value of the statistical distribution is smaller in absolute value than the actual chromatic aberration of magnification. At this time, the actual chromatic aberration of magnification does not protrude from the spread range of the statistical distribution.
- a value larger than the average value and within the spread range of the statistical distribution is obtained from the statistical distribution of the chromatic aberration of magnification, and is used as an estimated value of the chromatic aberration of magnification.
- the image processing device detects an edge region from within the image, and detects a gradient direction of the edge luminance. The color shift in these edge regions is detected in the luminance gradient direction.
- the color shift generated in the edge area is easily noticeable.
- the color shift that occurs in the gradient direction of the edge region is more noticeable.
- elsewhere, the color shift is hardly noticeable.
- the image processing apparatus detects edge regions from within the image, and detects the color shift of each edge region in the radial direction centered on the portion corresponding to the optical axis of the optical system used when the image was generated.
- the color shift of chromatic aberration of magnification occurs in the radial direction with respect to the center of the optical axis of the photographing optical system.
- other false colors occur irrespective of the radial direction. Therefore, by detecting color misregistration only in the radial direction, erroneous color misregistration detection due to false colors can be suppressed. As a result, it becomes possible to preferentially detect the color shift due to chromatic aberration of magnification from within the image.
- FIG. 1 is a flowchart illustrating a program according to the first embodiment.
- FIG. 2 is a diagram showing a setting example of a divided area and a radial direction.
- FIG. 3 is a diagram illustrating detection of a level change portion.
- FIG. 4 is a diagram showing a histogram distribution of a magnification difference.
- FIG. 5 is a view for explaining the effect of reducing color shift in the opposite direction.
- FIG. 6 is a diagram illustrating an effect of an allowable range.
- FIG. 7 is a flowchart illustrating region division in a second embodiment.
- FIG. 8 is a diagram illustrating calculation of a displacement vector.
- FIG. 9 is a block diagram showing a configuration of a computer 10.
- FIG. 10 is a flowchart showing the operation of the third embodiment.
- FIG. 11 is a diagram illustrating a color misregistration detection direction.
- FIG. 12 is a diagram illustrating a filter.
- FIG. 13 is a diagram illustrating detection of an edge width.
- FIG. 14 is a view for explaining a radial direction.
- FIG. 15 is a diagram illustrating detection of an edge width.
- FIG. 16 is a flowchart showing an operation of the fifth embodiment.
- FIG. 17 is a diagram showing an embodiment of an electronic camera.
- FIG. 1 is a flowchart illustrating a computer operation by an image processing program. This image processing program allows a computer to function as an image processing device.
- the computer captures image data (hereinafter referred to as “input image”) captured by the electronic camera via a recording medium or a communication interface.
- This input image is already subjected to color interpolation and color difference LPF in the electronic camera.
- the computer sets the optical axis center of the photographing optical system in the screen of the input image. Normally, the center of the screen of the input image is set as the default of this optical axis center. Note that the computer may acquire the position of the optical axis center from the accompanying information (such as Exif information) of the input image. In particular, it is preferable to obtain the position of the center of the optical axis for a crop image (a trimmed or other partial image).
- the computer divides the input image in the circumferential direction with the optical axis center as the origin, and sets a plurality of divided areas and the radial direction of each divided area.
- FIG. 2 is a diagram showing an example of setting the divided areas and the radial direction.
- the input image is divided into eight divided regions N, NE, E, SE, S, SW, W, and NW, and the arrows shown in each divided region are set to the radial direction.
- the computer easily calculates the luminance Y from the RGB components of the input image using the following equation or the like.
- the computer calculates the level change in the radial direction of the luminance Y in the screen area where the chromatic aberration of magnification is visible (such as an area at least 50% of the maximum image height from the optical axis center).
- because the radial direction is fixed within each divided region, this gradient can be obtained by a simple difference calculation for each region.
- the computer searches for pixel positions (x, y) at which the absolute value of the obtained gradY(x, y) is greater than or equal to a predetermined value Th (for example, about 10 for 256-gradation data).
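The luminance and radial-gradient steps above can be sketched as follows. The luminance equation itself is omitted from this text, so the Rec.601 weights are an assumption, and a one-pixel finite difference stands in for the patent's per-region "simple calculation" of gradY.

```python
import numpy as np

# Rec.601 luma weights: an assumption, since the patent's equation
# is omitted from this text.
def luminance(rgb):
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def radial_gradient(Y, direction):
    """One-pixel finite difference of Y along one of the eight radial
    directions, e.g. (1, 0) for region E or (1, -1) for region NE.
    Borders are clamped; a simplification of the per-region gradY."""
    dx, dy = direction
    h, w = Y.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    src_y = np.clip(ys + dy, 0, h - 1)   # neighbor one step along the radius
    src_x = np.clip(xs + dx, 0, w - 1)
    return Y[src_y, src_x] - Y
```

Pixels where `abs(radial_gradient(...))` is at or above the threshold Th are the level-change candidates of step S2.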
- at each such location, the computer obtains the starting point a at which the brightness starts to change and the ending point b at which the change ends, as shown in FIG. 3.
- the computer stores the midpoint c of a and b as a level change portion, and the distance between a and b as the thickness of the edge.
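The search for the start point a, end point b, midpoint c, and edge thickness can be sketched on a 1-D profile taken along the radial direction. The rule used here for ending a run (the gradient falling below Th or changing sign) is an assumption about a detail the text leaves open.

```python
def find_level_changes(profile, th=10):
    """Scan a 1-D luminance profile along the radial direction and
    return (midpoint c, edge thickness) for each run where the gradient
    keeps one sign and its magnitude stays >= th -- a sketch of the
    start-point a / end-point b search in the text."""
    changes = []
    n = len(profile)
    i = 0
    while i < n - 1:
        grad = profile[i + 1] - profile[i]
        if abs(grad) >= th:
            a = i                              # start point of the change
            sign = 1 if grad > 0 else -1
            while i < n - 1:
                g = profile[i + 1] - profile[i]
                if abs(g) < th or (1 if g > 0 else -1) != sign:
                    break
                i += 1
            b = i                              # end point of the change
            changes.append(((a + b) / 2.0, b - a))   # midpoint c, thickness
        else:
            i += 1
    return changes
```

In practice this scan would run discretely at the sampling interval mentioned in the text, once per radial line.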
- the detection of the level change portion may be performed discretely at a predetermined sampling interval on the screen.
- next, the computer sets a local window centered on the level change portion and acquires the G array in this window.
- the computer also obtains the R array from the position reached by displacing this window in the radial direction.
- the computer adjusts each signal level of the R array so that the average value of the G array and the average value of the R array match each other, and then calculates the difference between the element units of the G array and the R array.
- the superposition error is obtained by cumulatively adding the absolute values of the differences.
- the computer searches for the displacement width that minimizes (or locally minimizes) the overlay error while changing the displacement width of the R array relative to the G array, and stores the displacement width found in this way as the color shift width between the R and G components.
- the color shift width can be obtained with sub-pixel accuracy by interpolating the values of the overlay error.
- it is preferable to set this window wider as the edge thickness obtained in step S2 becomes larger.
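The window-matching search above can be sketched in one dimension along the radial direction. The window half-size `half` and search range `max_shift` are hypothetical parameters (the text ties the window size to the edge thickness); the overlay error is the sum of absolute differences after level matching, as described.

```python
import numpy as np

def color_shift_width(g_line, r_line, pos, half=4, max_shift=3):
    """Find the displacement of the R array against the G window that
    minimizes the overlay error: mean levels are matched first, then
    absolute differences are accumulated. 1-D sketch of the search."""
    g_win = g_line[pos - half:pos + half + 1].astype(float)
    best = None
    for d in range(-max_shift, max_shift + 1):
        r_win = r_line[pos - half + d:pos + half + 1 + d].astype(float)
        r_adj = r_win + (g_win.mean() - r_win.mean())   # level matching
        err = np.abs(g_win - r_adj).sum()               # overlay error
        if best is None or err < best[0]:
            best = (err, d)
    return best[1]    # displacement width = color shift width
```

Interpolating the error values around the minimum (e.g. a parabolic fit over three neighboring errors) would give the sub-pixel accuracy the text mentions.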
- the computer divides the color shift width determined for each level change portion by the image height (the radius from the optical axis center) to calculate the magnification difference between the R and G planes.
- the computer sums up the frequency of the magnification difference obtained for each level change location and creates a histogram distribution.
- FIG. 4 is a diagram illustrating an example of this histogram distribution.
- the solid line shown in FIG. 4 is a histogram distribution obtained for the input image after the above-described color difference LPF.
- the most frequent value in this case is almost zero. This indicates that the color difference LPF has reduced most of the chromatic aberration of magnification.
- the tail spreading from the most frequent value, however, is asymmetric; FIG. 4 shows a tail biased to the negative side.
- this bias of the tail indicates the remaining chromatic aberration of magnification. That is, the magnification difference due to the remaining chromatic aberration of magnification takes a negative value almost uniformly. Therefore, the true magnification difference of the photographing optical system exists on the side toward which the tail is biased.
- the dotted line shown in FIG. 4 shows the empirical histogram distribution of the magnification difference for the RAW data (the data before the color interpolation and the color difference LPF).
- for the RAW data, the tail is almost symmetric about the most frequent value, and the distribution can be regarded as approximately normal. In this case, the most frequent value can be considered to indicate the magnification difference of the imaging optical system to be obtained.
- the magnification difference at the most frequent value obtained from the RAW data is approximately equal to the center value, on the biased-tail side, of the solid-line histogram obtained from the input image after the color difference LPF.
- the computer therefore estimates the magnification difference from the solid-line histogram after the color difference LPF according to any one of the estimation methods (1), (2), and (3) above, and stores it as the magnification difference coefficient k of the R plane.
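The histogram-based estimation can be sketched as follows. The estimation methods (1)–(3) referred to above are not reproduced in this text, so taking the midpoint between the mode and the extreme of the longer (biased) tail is one plausible reading of "a value on the biased side, within the spread of the tail" — not the patent's exact rule.

```python
import numpy as np

def estimate_magnification_coefficient(mag_diffs, bins=64):
    """Estimate the magnification-difference coefficient k from the
    per-edge magnification differences. After the color difference LPF
    the mode sits near zero; the true value lies on the side toward
    which the histogram tail is biased. The midpoint rule below is an
    illustrative assumption."""
    hist, edges = np.histogram(mag_diffs, bins=bins)
    i = np.argmax(hist)
    mode = 0.5 * (edges[i] + edges[i + 1])          # center of the modal bin
    lo, hi = np.min(mag_diffs), np.max(mag_diffs)
    # the biased tail is the side with the larger spread from the mode
    tail_end = lo if (mode - lo) > (hi - mode) else hi
    return 0.5 * (mode + tail_end)
```

For a distribution with a mode at zero and a tail running toward negative magnification differences, the estimate comes out negative, on the biased side, as the text requires.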
- the computer uses the magnification difference coefficient k of the R plane obtained in this way to correct the color shift between the G plane and the R plane.
- the computer determines an allowable range of the color difference after the color shift correction in order to suppress the color shift in the reverse direction.
- the computer obtains color difference data Cr for each pixel position of the input image.
- This color difference data Cr is calculated for each pixel position (x, y) of the input image.
- the computer uses the magnification difference coefficient k of the R plane to calculate, by the following equation, the corresponding color shift of the R plane at each pixel position (x, y) of the G plane as a displacement vector (dx, dy).
- (xo, yo) is the center of the optical axis.
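The displacement-vector equation itself does not survive in this text. For a pure magnification difference k about the optical axis (xo, yo), the natural radial form is dx = k·(x − xo), dy = k·(y − yo); the sketch below assumes that form.

```python
def displacement_vector(x, y, xo, yo, k):
    """Displacement of the R plane due to chromatic aberration of
    magnification at G-pixel (x, y), for magnification-difference
    coefficient k about the optical axis (xo, yo). The radial form is
    an assumption, since the patent's equation is omitted here."""
    dx = k * (x - xo)
    dy = k * (y - yo)
    return dx, dy
```

The reference position of the displaced R pixel is then (x − dx, y − dy), sampled by interpolation as described in the following steps.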
- from the pixel position (x, y) on the G plane and the displacement vector (dx, dy), the computer determines the position (x-dx, y-dy) of the R pixel displaced by the chromatic aberration of magnification, and uses it as the reference position.
- the computer calculates the pixel values R', G' at the reference position (x-dx, y-dy) by interpolating the input image. Subsequently, the computer calculates the color difference Cr' at the reference position by the following equation, as the second candidate for the above-described allowable range.
- the computer selects a low chroma color difference (here, zero color difference) as the third candidate of the above-mentioned allowable range.
- a color shift in the opposite direction is likely to produce a color difference of the opposite sign to the color differences Cr and Cr'; including the low chroma color difference in the range therefore limits the corrected color difference to low saturation in such cases.
- the low chroma color difference is not limited to zero color difference, and may be any color difference indicating low chroma.
- the computer determines the upper and lower limits of the allowable range for each G pixel position as follows.
- Color difference upper limit = max(0, Cr, Cr')
- Color difference lower limit = min(0, Cr, Cr')
- alternatively, a color difference group consisting of the color differences in a local area including the pixel position (x, y), collected from the image before correction, plus a low chroma color difference may be formed, and the minimum and maximum color differences of this group may be used as the allowable range. This variant has the advantage that the allowable range can be determined easily from the image before correction alone.
- next, the computer tentatively calculates the color difference (R'-G) that would appear after the color misregistration correction.
- the computer then limits this color difference (R'-G) by the color difference upper and lower limits determined in step S10. A portion where the color difference protrudes from the allowable range is a position at which erroneous correction is detected.
- if the color difference (R'-G) is within the allowable range, the pixel value R' is used as the corrected R component Rout. If color difference lower limit > color difference (R'-G), the lower-limit pixel value (G + color difference lower limit) is set as the R component Rout. If color difference (R'-G) > color difference upper limit, the upper-limit pixel value (G + color difference upper limit) is set as the R component Rout.
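The per-pixel allowable range and clamp described above can be sketched as follows; a minimal sketch of steps S10–S11 for one pixel, using the three candidates Cr, Cr', and zero (low chroma).

```python
def correct_r_pixel(g, r_ref, cr_before, cr_ref):
    """Clamp the corrected R value so the resulting color difference
    stays inside the allowable range [min(0, Cr, Cr'), max(0, Cr, Cr')]
    built from the pre-correction color difference Cr, the color
    difference Cr' at the reference position, and zero (low chroma)."""
    upper = max(0.0, cr_before, cr_ref)
    lower = min(0.0, cr_before, cr_ref)
    cd = r_ref - g                  # tentative corrected color difference (R'-G)
    if cd > upper:
        return g + upper            # limited to the color-difference upper bound
    if cd < lower:
        return g + lower            # limited to the color-difference lower bound
    return r_ref                    # within the allowable range: keep R'
```

A pixel whose tentative color difference protrudes from the range is exactly a detected erroneous-correction position; counting such pixels gives the "predetermined number / area" test used to decide whether to redo the correction.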
- alternatively, the correction may be performed again at that point with a reduced color misregistration correction width. Also, if portions that do not fall within the allowable range occur over a certain amount (a certain area), the color misregistration correction may be redone for the entire R plane with a reduced magnification difference value.
- the computer performs the same processing for the B component of the input image, and obtains the B component Bout after color shift correction for each pixel position (x, y).
- the computer outputs an output image in which the original G component of the input image, the R component Rout obtained by color shift correction, and the B component Bout obtained by similar color shift correction are color components.
- FIG. 5 is a diagram illustrating the operation and effect of the present embodiment.
- FIG. 5[A] shows a gentle edge (R, G) and a steep edge (R", G"). If the image heights of these two edges are equal, a color shift ΔC due to the chromatic aberration of magnification occurs equally at both edges.
- FIG. 5B shows the color difference between both edges.
- due to the color shift ΔC, a low-frequency color difference change (R-G) appears at the gentle edge, while a high-frequency color difference change (R"-G") appears at the steep edge.
- FIG. 5C shows the result of applying a color difference LPF to both color difference changes.
- at the gentle edge, the low-frequency color difference change is preserved and passes through as it is.
- at the steep edge, the high-frequency color difference change is suppressed by the color difference LPF.
- FIG. 5D shows the RG components (R1 and G in the figure) after the color difference LPF.
- at the gentle edge, the R components R and R1 before and after the color difference LPF do not change much, and the color shift ΔC of the chromatic aberration of magnification remains as it is.
- at the steep edge, the color difference LPF suppresses the high-frequency color difference change, so the phase difference of the RG components (R1" and G" in the figure) after the color difference LPF is reduced; as a result, the color shift due to the chromatic aberration of magnification is improved.
- in step S6 of the present embodiment, by paying attention to this asymmetric bias, the variation distribution of the magnification difference, which should originally be symmetric, is estimated, and the true magnification difference (the magnification difference coefficient) is successfully determined from the estimation result.
- FIG. 5[E] shows a state in which the color shift ΔC of the R component is returned to the original pixel position using the magnification difference coefficient determined in this way.
- the displacement to R2 in the figure succeeds in removing the chromatic aberration of magnification.
- the steep edge, however, has been affected by the color difference LPF, so the color shift correction acts excessively: the steep edge is displaced to R2" in the figure, and a color shift in the reverse direction occurs.
- FIG. 5[F] shows the color differences (R2-G) and (R2"-G") after the color misregistration correction.
- the gentle edge is favorably corrected, so the color difference (R2-G) does not change significantly before and after the color shift correction; the steep edge, on the other hand, suffers a color shift in the opposite direction, so the color difference (R2"-G") changes greatly before and after the correction.
- an allowable range is determined so that the color difference does not significantly change before and after the color misregistration correction.
- FIG. 5[G] shows the allowable range (the vertically striped area in the figure) determined in this way.
- FIG. 5H shows image data after color misregistration correction in which color difference restriction is performed.
- at the gentle edge (R2 and G in the figure), the color difference limitation is hardly applied, and the good color shift correction is maintained as it is.
- at the steep edge (R3" and G" in the figure), the color difference is limited and the color shift correction is weakened; as a result, the color shift in the opposite direction is suppressed.
- the color difference limitation described here is effective for suppressing erroneous correction (especially overcorrection) when correcting the color misregistration of general images, not only of images that have been subjected to the color difference LPF processing.
- FIG. 6 is a diagram for explaining the reason why the upper limit and the lower limit of the color difference are set to max (0, Cr, Cr ′) and min (0, Cr, Cr ′), respectively.
- FIG. 6A shows an example in which the image structure disappears when the upper limit of the allowable color difference is set to max (0, Cr).
- the G component does not change.
- the R component R1 fluctuates greatly locally.
- the color difference also increases at the peak of the R component R1.
- if this R component R1 is shifted to the right in the drawing as shown in FIG. 6[B], a new peak appears in the corrected color difference.
- the reason why the color difference after the correction has a large peak is derived from the structure of the image and is not caused by a false color or a secondary color shift.
- if the upper limit of the color difference were max(0, Cr), the allowable range would be the shaded area shown in FIG. 6[C], and the peak of the corrected color difference would disappear, as in the RG components shown in FIG. 6[D].
- with Cr' included, the allowable range becomes the shaded range in FIG. 6[E] or FIG. 6[F].
- in that case, the peak of the corrected color difference is maintained, and the image structure is correctly preserved, as shown by R2 and G in FIG. 6[G].
- FIG. 6H shows an example in which the image structure is destroyed when the lower limit of the color difference in the allowable range is min (0, Cr).
- FIG. 6 [I] is an example in which the image structure is destroyed when the upper limit of the allowable color difference is set to max (0, Cr ′).
- FIG. 6[J] shows an example in which the image structure is destroyed when the lower limit of the allowable color difference is set to min(0, Cr'). In any of these cases, it can be verified by the same procedure as in FIGS. 6[A] to [G] that the corrected image structure is correctly maintained.
<< Second Embodiment >>
- FIG. 7 and FIG. 8 are diagrams illustrating the second embodiment.
- the center of the optical axis of the photographing optical system is substantially located at the center of the screen of the image data. Therefore, normally, chromatic aberration of magnification occurs point-symmetrically with respect to the center of the screen.
- the center of the optical axis of the photographing optical system does not always coincide with the center of the screen of the image data.
- For example, when trimming (cropping) is performed, the center of the optical axis of the shooting optical system and the center of the screen of the image data do not match.
- the chromatic aberration of magnification may not be point-symmetric with respect to the center of the screen.
- Since the color shift occurring in the image also depends on the spectral distribution of the subject, a different color shift may occur in each region. In such a case, if the color shift correction is performed about the center of the screen, the correction effect is somewhat reduced.
- In the present embodiment, an effective color shift correction is disclosed even in the case where the center of the optical axis is shifted from the center of the screen.
- First, the computer executes the operations of steps S2 to S4 or steps S21 to S24 described above, and calculates the magnification difference of the color shift for each of the eight divided regions N, NE, E, SE, S, SW, W, and NW.
- Next, the computer groups these divided areas three at a time into four groups: an upper group (NW, N, NE), a right group (NE, E, SE), a lower group (SE, S, SW), and a left group (SW, W, NW).
- The computer performs a histogram analysis of the magnification difference for each of these four groups, and obtains four magnification difference coefficients Kn, Ke, Ks, and Kw, respectively.
- Next, the computer divides the screen into upper-right, lower-right, lower-left, and upper-left quadrants as shown in the figure, and calculates the displacement vector (dx, dy) of the color shift in each quadrant by vector-combining the magnification difference coefficients of the adjacent groups according to the following equation.
- In this way, the magnification difference coefficients in a plurality of directions are vector-combined and used. As a result, the case where the intersection of the displacement vectors, that is, the center of the optical axis, is shifted from the screen center (xo, yo) can be handled flexibly by the above calculation, and a more appropriate and general-purpose displacement vector can be calculated.
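As a sketch of the per-quadrant combination: assuming each quadrant simply pairs its horizontal coefficient (Ke or Kw) with its vertical one (Kn or Ks) about the screen center (xo, yo). The exact combining equation is not reproduced in the text above, so this pairing and the y-axis convention are assumptions.

```python
def displacement_vector(x, y, xo, yo, kn, ke, ks, kw):
    """Per-quadrant color-shift displacement (dx, dy), assuming each
    quadrant combines the coefficients of its two adjacent direction
    groups (e.g. the upper-right quadrant uses Ke horizontally and Kn
    vertically). Image y is assumed to grow downward, so 'upper' means
    smaller y."""
    kx = ke if x >= xo else kw
    ky = kn if y < yo else ks
    dx = kx * (x - xo)
    dy = ky * (y - yo)
    return dx, dy
```

Because the four coefficients are estimated independently, the computed displacement field need not be point-symmetric about the screen center, which is exactly what handles a shifted optical axis.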
- the computer 10 includes a control unit 1, an external interface unit 2, and an operation unit 3.
- the control unit 1 stores a program for performing the image processing of the present invention in advance.
- the external interface unit 2 interfaces with an external device such as an electronic camera via a predetermined cable or a wireless transmission path.
- the operation unit 3 includes a keyboard and a mouse (not shown).
- the external interface unit 2 can capture an image from an external device such as an electronic camera (not shown) according to an instruction from the control unit 1. Further, the operation unit 3 receives various user operations. The state of the operation unit 3 is detected by the control unit 1.
- FIG. 10 is a diagram showing a general flow of the color misregistration correction. Hereinafter, description will be given in accordance with step numbers.
- The computer 10 reads an image from an external device such as an electronic camera (not shown) via the external interface unit 2. Alternatively, an image recorded in a recording unit (not shown) in the computer 10 may be read.
- The image is the image to be processed and has R, G, and B color component planes. The number of pixels of the read image in the x-axis direction is Nx, the number of pixels in the y-axis direction is Ny, and the origin is one of the four corners of the image.
- A predetermined value (for example, about 10 in 256 gradations) is used as the threshold here. The control unit 1 performs the above processing on the entire image and detects edges.
- Next, color misregistration detection points are provided at intervals of several pixels. At each point, the luminance differential is obtained in eight directions, and the direction in which the luminance differential is largest is taken as the gradient direction of the luminance of the edge. This gradient direction is then set as the color shift detection direction described later.
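Picking the gradient direction from the eight neighbour differentials can be sketched as below; the sampling of detection points every few pixels is omitted, and the function interface is illustrative.

```python
import numpy as np

# Eight neighbour directions (dx, dy): E, NE, N, NW, W, SW, S, SE
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def gradient_direction(lum, x, y):
    """Return the direction (dx, dy) among the eight neighbour
    directions in which the luminance differential at (x, y) is
    largest."""
    diffs = [abs(float(lum[y + dy, x + dx]) - float(lum[y, x]))
             for dx, dy in DIRS]
    return DIRS[int(np.argmax(diffs))]
```

A detection point on a vertical step edge, for instance, yields a horizontal gradient direction, which then becomes the direction along which the color shift is searched.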
- FIG. 13A is an overall view of an image to be processed
- FIG. 13B is a partially enlarged view.
- Pixel A in FIG. 13 shows an example of the above-described color shift detection position, in a case where the luminance gradient direction is the direction of arrow d.
- the shaded area in FIG. 13 indicates the subject area.
- First, the derivative between pixel A and the pixel adjacent to pixel A along arrow d is obtained. The derivative between successive adjacent pixels is obtained in turn; if a certain pixel P(p) is an edge and the next pixel P(p+1) is not an edge, the pixel P(p) is set as pixel B (see FIG. 13). The same processing is then performed in the opposite direction: if a certain pixel P(q) is an edge and the next pixel P(q+1) is not an edge, the pixel P(q) is set as pixel C (see FIG. 13).
- The pixels B and C obtained in this way correspond to the pixels at the substantial ends of the edge, as shown in FIG. 13.
- The control unit 1 sets the distance between pixel B and pixel C as the width of the edge, and decides the size of the window based on this width. Specifically, the size of the window is about four times the width of the edge.
- If the absolute value of Cr at pixel A is greater than the absolute values of Cr at pixels B and C, or if the absolute value of Cb at pixel A is greater than the absolute values of Cb at pixels B and C, pixel A is set as a target for detecting the color shift, and the area between pixel B and pixel C is set as the area for correcting the color shift.
- Here, the absolute value of the color difference at pixels B and C corresponds to the "predetermined value" in [14] above. That is, this processing observes the saturation of the edge portion, so that detection and correction of the color misregistration are performed only on portions where chromatic aberration of magnification occurs at edges of the image.
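The gating on the color-difference peak at the edge can be written as a small predicate. This is a sketch; the Cr and Cb values at pixels A, B, and C are assumed to be precomputed.

```python
def is_color_shift_target(cr_a, cr_b, cr_c, cb_a, cb_b, cb_c):
    """Pixel A is a detection target if its color-difference magnitude
    exceeds that at both edge-end pixels B and C, in either the Cr or
    the Cb channel."""
    cr_peak = abs(cr_a) > abs(cr_b) and abs(cr_a) > abs(cr_c)
    cb_peak = abs(cb_a) > abs(cb_b) and abs(cb_a) > abs(cb_c)
    return cr_peak or cb_peak
```

An edge inside a uniformly saturated subject region fails this test (the color difference is just as large at B and C), so such edges are excluded from correction.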
- Next, the computer 10 detects the color shift of the R component image with respect to the G component image, and likewise detects the color shift of the B component image with respect to the G component image. Here, the G component image corresponds to "an image signal of an arbitrary color component surface among a plurality of color component surfaces", and the R component image and the B component image correspond to image signals of color component surfaces different from the arbitrary color component surface.
- the control unit 1 detects a color shift of the R component image with respect to the G component image for each color shift detection position determined in step S102.
- the control unit 1 first provides a window centering on the color misregistration detection point.
- the window is a square area, and the length of one side is about four times the width of the edge obtained in step S102.
- the window image is extracted from the G component image among the processing target images.
- The window image extracted from the G component image is called a "G component window".
- Next, the image of the R component is extracted from the position shifted by the shift amount (dx, dy) in the x direction and the y direction with respect to the G component window.
- the extracted R component image is an image at a position shifted by a shift amount (dx, dy) from the G component window, and is hereinafter referred to as an “R component window”.
- Here, d is the maximum value of the color misregistration to be detected; limiting the search range in this way improves efficiency.
- When the edge region is located at the edge of the image or near it, the image is expanded by folding it back and copying it around the image, and the same detection is performed.
- the control unit 1 performs the above-described processing for each color misregistration detection position detected in step S102.
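The window comparison described above (G component window versus R component window shifted by (dx, dy) within ±d) can be sketched as an exhaustive block-matching search. The sum-of-absolute-differences criterion and the function interface are assumptions for illustration; the text only states that the windows are compared.

```python
import numpy as np

def detect_color_shift(g, r, cx, cy, half, d_max):
    """Find the (dx, dy) shift of the R plane that best matches the
    G window centred at (cx, cy), by exhaustive search within
    +/- d_max pixels."""
    gw = g[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    best, best_shift = None, (0, 0)
    for dy in range(-d_max, d_max + 1):
        for dx in range(-d_max, d_max + 1):
            rw = r[cy + dy - half:cy + dy + half + 1,
                   cx + dx - half:cx + dx + half + 1].astype(float)
            sad = np.abs(gw - rw).sum()  # sum of absolute differences
            if best is None or sad < best:
                best, best_shift = sad, (dx, dy)
    return best_shift
```

The detected (dx, dy) is the chromatic aberration of magnification detection value for that edge region; image-border padding as described above is omitted here for brevity.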
- the computer 10 corrects the color shift of the R component (B component) image with respect to the G component image.
- Specifically, the control unit 1 corrects the color shift of the R component image with respect to the G component image for each edge region, using the chromatic aberration of magnification detection value d obtained in advance for that edge area.
- control unit 1 performs the above-described processing for each edge region detected in step S102.
- Finally, the computer 10 combines the component images: the control unit 1 combines the G component image read in step S101 (the original G component image), the R component image corrected in step S104, and the B component image corrected in step S106, and records the combined image in a recording unit (not shown) in the computer 10.
- Since the G component image, which is the image of the color component responsible for luminance, is left as it is and only the R component image and the B component image are corrected, the chromatic aberration of magnification can be corrected while maintaining the image quality (sharpness), without deterioration of the high-frequency components.
- In addition, the detected color misregistration may be evaluated, and the correction may be performed based on the evaluation result.
- For example, the control unit 1 calculates the chromatic aberration of magnification detection value d for each edge area; if a detection value is judged unreliable, the correction may be performed by invalidating it and substituting the detection value d of the neighboring pixels.
- Alternatively, instead of invalidating the detection value d, a low-pass filter may be applied to the detection values d obtained for each pixel in the edge, and the correction may be performed after this filtering.
- For example, by performing low-pass filter processing using a 3 × 3 averaging filter, it is possible to prevent the correction amount from becoming discontinuous where correction is performed on only certain parts of the image, and thus to prevent the corrected image from deteriorating.
- An erroneous correction may also be detected by comparing the image before correction with the image after correction.
- Specifically, the control unit 1 obtains the difference between the signal values of the G component image and the R component image before correction, and the difference between the signal values of the G component image and the corrected R component image, and compares the two differences. If the difference between the G component image and the corrected R component image is smaller and the two differences have the same sign, the control unit 1 determines that the correction has been performed correctly. On the other hand, if the difference between the G component image and the corrected R component image is larger, the control unit 1 determines that an erroneous correction has been performed. In that case, the control unit 1 makes the degree of correction smaller than the correction based on the chromatic aberration of magnification detection value d, and corrects the color shift again.
- The re-correction of the color shift may be performed on the original image, or on the image after the color shift correction has been performed once; alternatively, re-correction may be performed in the reverse direction. If the degree of correction cannot be reduced, the image may be returned to the state before correction, or the processing may be redone from the detection of the color misregistration. By detecting erroneous corrections in this way, a more appropriate correction can be performed automatically.
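The before/after comparison reduces, at a single pixel, to a small predicate. This is a sketch with scalar signal values; the function name is an assumption.

```python
def correction_ok(g, r_before, r_after):
    """Judge whether a color-shift correction at one pixel was valid:
    the G-R difference must shrink in magnitude and keep its sign."""
    d_before = g - r_before
    d_after = g - r_after
    same_sign = d_before * d_after >= 0
    return abs(d_after) < abs(d_before) and same_sign
```

When this predicate fails over an edge region, the correction there is treated as erroneous and redone with a smaller degree of correction.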
- As described above, in the present embodiment, edge regions are detected from an image represented by a plurality of color component (R, G, and B) planes, and for each edge region the gradient direction of the luminance of the edge is detected. Then, for each edge region, the color shift between the image signal of an arbitrary color component surface in the image of the edge region and the image signal of a color component surface different from the arbitrary color component surface is detected in the luminance gradient direction, and the detected color shift is corrected for each edge region.
- The amount of color misregistration is often uneven, differing for each edge region. By selecting image edge regions and detecting and correcting the color shift for each edge region as in the present embodiment, it is possible to cope appropriately with color shift unevenness such as that remaining after color interpolation.
- In addition, chromatic aberration of magnification generally occurs at edge portions and hardly occurs in flat portions. Therefore, by detecting edge areas and detecting the color shift within them, the detection can be performed efficiently and the processing time for detecting and correcting the color shift can be shortened.
- chromatic aberration of magnification is generally conspicuous when it occurs in a luminance gradient direction.
- this method can be used even when the image to be processed is a trimmed image.
- Since the region in which chromatic aberration of magnification is detected coincides with the region to be corrected, locally occurring chromatic aberration of magnification can be corrected appropriately.
- irregular chromatic aberration of magnification that occurs in an image to be processed can be corrected with high accuracy and high speed.
- In the present embodiment, the saturation of the image in the edge region is detected, and it is determined whether the saturation is higher than a predetermined value; a color shift is detected only when it is determined to be higher. Therefore, detection and correction of the color misregistration target only portions where chromatic aberration of magnification occurs at edges of the image, which makes the processing more efficient. Furthermore, since edges of regions where the saturation of the subject itself is high are excluded from correction, deterioration of the image due to erroneous detection of the color shift can be suppressed.
- In the present embodiment, the size of the edge region is changed based on the width of the edge. When the width of the edge is narrow, the size of the edge region suitable for detecting the color shift may be small; when the width of the edge is wide, the size of the edge area necessary for detecting the color misregistration must be somewhat large. Therefore, the size of the edge region is reduced when the width of the edge is narrow, and increased when it is wide. By changing the size of the edge area in this way, the color misregistration can be detected efficiently.
- Hereinafter, step S102, which differs from the third embodiment, will be described.
- In step S102, after detecting the edge area, the control unit 1 performs the following processing instead of detecting the luminance gradient direction. The control unit 1 first divides the image to be processed into eight regions centered on the center of the image, as shown in FIG. 14. When an edge region is present in one of these regions, the representative direction of that region (arrows a to h) is defined as the color misregistration detection direction.
- Chromatic aberration of magnification appears in the radial direction centered on the portion corresponding to the optical axis of the optical system used when the image was generated (corresponding to the center of the image to be processed; the directions of arrows a to h in FIG. 14). Therefore, by detecting the color misregistration in the radial direction, the processing time required for the detection can be reduced without lowering the detection accuracy.
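Quantising the radial direction from the image centre into eight directions can be sketched as below; the mapping of the arrows a to h onto unit steps is an assumption for illustration.

```python
import math

def radial_direction(x, y, cx, cy):
    """Quantise the radial direction from the image centre (cx, cy)
    toward the edge position (x, y) into one of eight directions,
    returned as a unit step (dx, dy). Image y is assumed to grow
    downward."""
    ang = math.atan2(y - cy, x - cx)
    k = round(ang / (math.pi / 4)) % 8
    steps = [(1, 0), (1, 1), (0, 1), (-1, 1),
             (-1, 0), (-1, -1), (0, -1), (1, -1)]
    return steps[k]
```

An edge region then searches for the color shift only along this radial step, rather than along the locally measured luminance gradient.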
- FIG. 15A is an overall view of an image to be processed
- FIG. 15B is a partially enlarged view.
- FIG. 15 shows an example in which the derivative of the luminance is obtained in the direction of arrow d in FIG. 14 (that is, the edge region is in region d of FIG. 14).
- the shaded area in FIG. 15 indicates the subject area.
- Specifically, the derivative between two pixels adjacent along arrow d is calculated; when a certain pixel P(p) is not an edge and the next pixel P(p+1) is an edge, the pixel P(p) is set as pixel A (see FIG. 15). Differentiation is continued; if a certain pixel P(q) is an edge and the next pixel P(q+1) is not an edge, the pixel P(q) is set as pixel B (see FIG. 15). Finally, the pixel corresponding to the midpoint between pixel A and pixel B is defined as pixel C (see FIG. 15). As shown in FIG. 15, pixels A and B obtained in this way are substantially the pixels at the edge ends, and pixel C is substantially the pixel at the edge boundary.
- Next, the distance between pixel A and pixel B is defined as the width of the edge, and the size of the edge area is determined based on this width. Specifically, the size of the edge area is set to about four times the width of the edge.
- As described above, in the present embodiment, edge areas are detected from an image represented by a plurality of color component (R, G, and B) planes, and for each edge area, the color shift between the image signal of an arbitrary color component plane in the image of the area and the image signal of a color component plane different from the arbitrary color component plane is detected in the radial direction centered on the portion corresponding to the optical axis of the optical system used when the image was generated; the detected color shift is then corrected for each edge region.
- the fifth embodiment is an embodiment that corrects color fringing in addition to the correction of the chromatic aberration of magnification described in the third embodiment and the fourth embodiment.
- chromatic aberration of magnification causes color bleeding.
- The range of wavelengths received by the R image sensor pixels corresponding to the R color filter has a width, and different wavelengths λ1 and λ2 are both received by the R pixels. Therefore, if the photographing lens has chromatic aberration of magnification, light of λ1 and λ2 emitted from the same point on the subject will form images at different positions on the image sensor, and the R component image will be blurred, producing color blur. In the present embodiment, this color blur is corrected.
- FIG. 16 is a flowchart illustrating this correction operation.
- Specifically, the control unit 1 calculates the difference between the signal values of the G component image and the R component image before correction, and the difference between the signal values of the G component image and the corrected R component image, and compares the two differences. When the difference between the G component image and the corrected R component image is smaller and the two differences have the same sign, the control unit 1 determines that the correction has been performed correctly. On the other hand, if the difference between the G component image and the corrected R component image is larger, the control unit 1 determines that an erroneous correction has been performed. In that case, the control unit 1 makes the degree of correction smaller than the correction based on the obtained chromatic aberration of magnification detection value d, and corrects the color shift again.
- The re-correction of the color shift may be performed on the original image, or it may be performed in the reverse direction on the image after the color shift correction has once been performed. If the degree of correction cannot be reduced, the image may be returned to the state before correction, or the processing may be redone from the detection of the color misregistration.
- Next, the control unit 1 calculates the color difference Cr of the R component for the entire image to be processed; Cr is given, for example, by (R − G). A low-pass filter is then applied to the edge regions of the Cr image. The low-pass filter replaces, for example, the color difference Cr at a point with a weighted average over a square area centered at that point, and the width of the square area is determined according to the chromatic aberration of magnification detection value d obtained in step S113. Specifically, the length of one side of the square area is set to 2 to 4 times the color shift detection value (unit: pixels).
- The above processing is performed for the entire image. Then, if the processed color difference is Cr', the corrected R component is given by (Cr' + G).
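The colour-difference smoothing can be sketched as below. Two simplifications are assumed: the filter is applied to a whole patch rather than only inside edge regions, and an unweighted box average stands in for the weighted average; the kernel side is tied to the detected colour-shift amount d as the text specifies (2 to 4 times).

```python
import numpy as np

def remove_color_blur(r, g, d, factor=3):
    """Colour-blur removal sketch: smooth the colour difference
    Cr = R - G with a box average whose side is `factor` times the
    detected colour-shift amount d, then rebuild R as Cr' + G."""
    side = max(1, int(factor * d))
    if side % 2 == 0:
        side += 1                      # keep the kernel centred
    cr = r.astype(float) - g.astype(float)
    pad = side // 2
    crp = np.pad(cr, pad, mode='edge')  # replicate borders
    out = np.empty_like(cr)
    h, w = cr.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = crp[y:y + side, x:x + side].mean()
    return out + g.astype(float)
```

Because the kernel grows with the detected shift, stronger chromatic aberration of magnification (and hence stronger blur) receives stronger smoothing of the colour difference.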
- As described above, in the present embodiment, the color difference is smoothed for each edge area after the color shift is corrected. The degree of color blur increases as the chromatic aberration of magnification increases; therefore, by performing such processing, color blur can be removed in addition to the chromatic aberration of magnification, and the image quality can be further improved.
- color fringing that is not caused by chromatic aberration of magnification is also corrected.
- In the above description, an image having RGB color components was used as the image to be processed; however, another color system such as cyan, magenta, and yellow may be used. Alternatively, an image of another color system may be generated based on the image to be processed, and the same detection and correction may be performed on it.
- the image processing apparatus that detects and corrects a color shift has been described. However, only the color shift may be detected. An image processing device that detects such a color shift can be used in an inspection process in a lens manufacturing process.
- the sixth embodiment is an embodiment of an electronic camera.
- FIG. 17 is a block diagram showing the configuration of the present embodiment.
- A photographing lens 12 is attached to the electronic camera 11, and the light-receiving surface of the image sensor 13 is arranged in its image space.
- the operation of the image sensor 13 is controlled by the output pulse of the timing generator 22b.
- The image generated by the image sensor 13 is temporarily stored in a buffer memory 17 via an A/D converter 15 and a signal processor 16.
- This buffer memory 17 is connected to a bus 18.
- An image processing unit 19, a card interface 20, a microprocessor 22, a compression / decompression unit 23, and an image display unit 24 are connected to the bus 18.
- the card interface 20 reads and writes data from and to the removable memory card 21.
- a user operation signal is input to the microprocessor 22 from the switch group 22a of the electronic camera 11.
- the image display unit 24 displays an image on a monitor screen 25 provided on the back of the electronic camera 11.
- the microprocessor 22 and the image processing unit 19 execute the image processing of the first to fifth embodiments.
- the present invention is a technology that can be used for an image processing device, an image processing program, an electronic camera, and the like.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05728731.0A EP1746846B1 (en) | 2004-04-12 | 2005-04-08 | Image processing device having color shift-correcting function, image processing program, and electronic camera |
JP2006512318A JP4706635B2 (ja) | 2004-04-12 | 2005-04-08 | 色ずれ補正機能を有する画像処理装置、画像処理プログラム、および電子カメラ |
US11/545,559 US7916937B2 (en) | 2004-04-12 | 2006-10-11 | Image processing device having color shift correcting function, image processing program and electronic camera |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004116584 | 2004-04-12 | ||
JP2004-116584 | 2004-04-12 | ||
JP2004-161474 | 2004-05-31 | ||
JP2004161474 | 2004-05-31 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/545,559 Continuation US7916937B2 (en) | 2004-04-12 | 2006-10-11 | Image processing device having color shift correcting function, image processing program and electronic camera |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005101854A1 (ja) | 2005-10-27 |
Family
ID=35150362
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/006951 WO2005101854A1 (ja) | 2004-04-12 | 2005-04-08 | 色ずれ補正機能を有する画像処理装置、画像処理プログラム、および電子カメラ |
Country Status (4)
Country | Link |
---|---|
US (1) | US7916937B2 (ja) |
EP (1) | EP1746846B1 (ja) |
JP (1) | JP4706635B2 (ja) |
WO (1) | WO2005101854A1 (ja) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008068884A1 (ja) * | 2006-11-30 | 2008-06-12 | Nikon Corporation | 画像の色を補正する画像処理装置、および画像処理プログラム |
JP2010045588A (ja) * | 2008-08-12 | 2010-02-25 | Sony Corp | 画像処理装置及び画像処理方法 |
JP2010086138A (ja) * | 2008-09-30 | 2010-04-15 | Canon Inc | 画像処理方法、画像処理装置及び撮像装置 |
JP2011013857A (ja) * | 2009-06-30 | 2011-01-20 | Toshiba Corp | 画像処理装置 |
JP2011101129A (ja) * | 2009-11-04 | 2011-05-19 | Canon Inc | 画像処理装置及びその制御方法 |
JP2011151597A (ja) * | 2010-01-21 | 2011-08-04 | Nikon Corp | 画像処理装置および画像処理プログラム並びに電子カメラ |
WO2011118071A1 (ja) | 2010-03-25 | 2011-09-29 | 富士フイルム株式会社 | 画像処理方法および装置,ならびに画像処理プログラムおよびこのプログラムを記録した媒体 |
WO2013031367A1 (ja) * | 2011-08-31 | 2013-03-07 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
KR101517407B1 (ko) | 2008-11-06 | 2015-05-06 | 삼성전자주식회사 | 색수차 제거 방법 및 장치 |
KR101532605B1 (ko) * | 2008-11-06 | 2015-07-01 | 삼성전자주식회사 | 색수차 제거 방법 및 장치 |
JP2015170897A (ja) * | 2014-03-05 | 2015-09-28 | キヤノン株式会社 | 画像処理装置および画像処理方法 |
US9160998B2 (en) | 2008-11-06 | 2015-10-13 | Samsung Electronics Co., Ltd. | Method and apparatus for canceling chromatic aberration |
JP2018082238A (ja) * | 2016-11-14 | 2018-05-24 | キヤノン株式会社 | 画像符号化装置、画像符号化方法、及びプログラム |
US10247933B2 (en) | 2014-08-29 | 2019-04-02 | Carl Zeiss Microscopy Gmbh | Image capturing device and method for image capturing |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4415318B2 (ja) * | 2004-04-27 | 2010-02-17 | 富士フイルム株式会社 | カラー画像の色ずれ補正方法及びカラー画像撮像装置 |
JP4469324B2 (ja) * | 2005-11-01 | 2010-05-26 | イーストマン コダック カンパニー | 色収差抑圧回路及び色収差抑圧プログラム |
JP4816725B2 (ja) * | 2006-03-01 | 2011-11-16 | 株式会社ニコン | 倍率色収差を画像解析する画像処理装置、画像処理プログラム、電子カメラ、および画像処理方法 |
US8144984B2 (en) * | 2006-12-08 | 2012-03-27 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and program for color fringing estimation and compensation |
KR101340518B1 (ko) | 2007-08-23 | 2013-12-11 | 삼성전기주식회사 | 영상의 색수차 보정 방법 및 장치 |
JP5017597B2 (ja) * | 2007-11-27 | 2012-09-05 | 株式会社メガチップス | 画素補間方法 |
KR101257942B1 (ko) * | 2008-04-23 | 2013-04-23 | 고려대학교 산학협력단 | 광역 역광보정 영상처리에서의 전처리 방법 및 장치 |
KR101460610B1 (ko) * | 2008-07-30 | 2014-11-13 | 삼성전자주식회사 | 색수차 제거 방법 및 장치 |
US8477206B2 (en) * | 2008-09-30 | 2013-07-02 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method for performing image restoration using restoration filter |
JP5562242B2 (ja) * | 2008-12-22 | 2014-07-30 | パナソニック株式会社 | 画像拡大装置、方法、集積回路及びプログラム |
JP5486273B2 (ja) * | 2008-12-26 | 2014-05-07 | キヤノン株式会社 | 画像処理装置及び画像処理方法 |
TWI389576B (zh) * | 2009-07-02 | 2013-03-11 | Mstar Semiconductor Inc | 影像處理裝置以及影像處理方法 |
US20110018993A1 (en) * | 2009-07-24 | 2011-01-27 | Sen Wang | Ranging apparatus using split complementary color filters |
US8520125B2 (en) | 2009-10-27 | 2013-08-27 | Panasonic Corporation | Imaging device and distance-measuring device using same |
US8958009B2 (en) * | 2010-01-12 | 2015-02-17 | Nikon Corporation | Image-capturing device |
JP5757099B2 (ja) | 2010-02-15 | 2015-07-29 | 株式会社ニコン | 焦点調節装置、及び焦点調節プログラム |
JP5284537B2 (ja) * | 2010-03-31 | 2013-09-11 | キヤノン株式会社 | 画像処理装置、画像処理方法、画像処理プログラム、およびそれを用いた撮像装置 |
JP5505135B2 (ja) * | 2010-06-30 | 2014-05-28 | ソニー株式会社 | 画像処理装置、画像処理方法、および、プログラム |
DE112010005743B4 (de) * | 2010-07-16 | 2021-09-02 | Robert Bosch Gmbh | Verfahren für die Detektion und Korrektur von lateraler chromatischer Aberration |
DE112010005744B4 (de) * | 2010-07-16 | 2021-09-09 | Robert Bosch Gmbh | Verfahren für die Detektion und Korrektur einer lateralen chromatischen Aberration |
JP5665451B2 (ja) * | 2010-09-21 | 2015-02-04 | キヤノン株式会社 | 画像処理装置及びその倍率色収差補正方法、撮像装置、倍率色収差補正プログラム、並びに記録媒体 |
JP5264968B2 (ja) * | 2011-08-08 | 2013-08-14 | キヤノン株式会社 | 画像処理装置、画像処理方法、撮像装置、および、画像処理プログラム |
JP5414752B2 (ja) | 2011-08-08 | 2014-02-12 | キヤノン株式会社 | 画像処理方法、画像処理装置、撮像装置、および、画像処理プログラム |
CN103765276B (zh) | 2011-09-02 | 2017-01-18 | 株式会社尼康 | 对焦评价装置、摄像装置及程序 |
JP6066866B2 (ja) * | 2013-08-22 | 2017-01-25 | キヤノン株式会社 | 画像処理装置、その制御方法、および制御プログラム |
KR102211592B1 (ko) * | 2014-03-19 | 2021-02-04 | 삼성전자주식회사 | 전자 장치 및 이의 영상 처리 방법 |
EP3481050B1 (en) | 2016-06-29 | 2021-03-10 | Sony Corporation | Imaging device, control method, and program |
EP3493522B1 (en) | 2016-08-01 | 2020-10-28 | Sony Corporation | Image processing apparatus, image processing method, and program |
JP2018142778A (ja) | 2017-02-27 | 2018-09-13 | オリンパス株式会社 | 画像処理装置、画像処理方法、画像処理プログラム |
CN111447426B (zh) * | 2020-05-13 | 2021-12-31 | 中测新图(北京)遥感技术有限责任公司 | 一种影像色彩校正方法以及装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1251687A2 (en) | 2001-04-17 | 2002-10-23 | Xerox Corporation | Sampling of customer images as color data for process control |
JP2003255424A (ja) * | 2002-03-05 | 2003-09-10 | Sony Corp | 画像撮影装置及び色収差補正方法 |
JP2004064710A (ja) * | 2002-07-31 | 2004-02-26 | Fuji Photo Film Co Ltd | 撮像装置及びディストーション補正方法 |
JP2004248077A (ja) * | 2003-02-14 | 2004-09-02 | Konica Minolta Holdings Inc | 画像処理装置及び画像処理方法並びに画像処理プログラム |
JP2005167485A (ja) * | 2003-12-01 | 2005-06-23 | Canon Inc | 画像処理装置及び画像処理方法 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2528826B2 (ja) | 1985-12-20 | 1996-08-28 | 池上通信機株式会社 | 画像の位置ずれ補正装置 |
JPH03263993A (ja) | 1990-03-14 | 1991-11-25 | Hitachi Denshi Ltd | レジストレーション検出装置 |
JP3080127B2 (ja) * | 1993-12-30 | 2000-08-21 | 日本ビクター株式会社 | 映像信号改善装置 |
EP0878970A3 (en) * | 1997-05-16 | 1999-08-18 | Matsushita Electric Industrial Co., Ltd. | Imager registration error and chromatic aberration measurement system for a video camera |
US6853400B1 (en) * | 1998-06-16 | 2005-02-08 | Fuji Photo Film Co., Ltd. | System and method for correcting aberration of lenses through which images are projected |
JP3758377B2 (ja) * | 1998-09-14 | 2006-03-22 | コニカミノルタビジネステクノロジーズ株式会社 | 画像読取装置および色収差補正方法 |
JP2000299874A (ja) | 1999-04-12 | 2000-10-24 | Sony Corp | 信号処理装置及び方法並びに撮像装置及び方法 |
US6483941B1 (en) * | 1999-09-10 | 2002-11-19 | Xerox Corporation | Crominance channel overshoot control in image enhancement |
JP2001356173A (ja) * | 2000-06-13 | 2001-12-26 | Konica Corp | 放射線画像撮像装置及び放射線画像撮像方法 |
JP2002344978A (ja) | 2001-05-17 | 2002-11-29 | Ichikawa Soft Laboratory:Kk | 画像処理装置 |
JP2004062651A (ja) * | 2002-07-30 | 2004-02-26 | Canon Inc | 画像処理装置、画像処理方法、その記録媒体およびそのプログラム |
JP2004153323A (ja) * | 2002-10-28 | 2004-05-27 | Nec Micro Systems Ltd | 色収差補正画像処理システム |
US7454081B2 (en) * | 2004-01-30 | 2008-11-18 | Broadcom Corporation | Method and system for video edge enhancement |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1251687A2 (en) | 2001-04-17 | 2002-10-23 | Xerox Corporation | Sampling of customer images as color data for process control |
JP2003255424A (ja) * | 2002-03-05 | 2003-09-10 | Sony Corp | Image capturing device and chromatic aberration correction method |
JP2004064710A (ja) * | 2002-07-31 | 2004-02-26 | Fuji Photo Film Co Ltd | Imaging device and distortion correction method |
JP2004248077A (ja) * | 2003-02-14 | 2004-09-02 | Konica Minolta Holdings Inc | Image processing apparatus, image processing method, and image processing program |
JP2005167485A (ja) * | 2003-12-01 | 2005-06-23 | Canon Inc | Image processing apparatus and image processing method |
Non-Patent Citations (1)
Title |
---|
See also references of EP1746846A4 |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8155439B2 (en) | 2006-11-30 | 2012-04-10 | Nikon Corporation | Image processing apparatus for correcting image color and image processing program |
JP2008141323A (ja) * | 2006-11-30 | 2008-06-19 | Nikon Corp | Image processing device for correcting image color, and image processing program |
WO2008068884A1 (ja) * | 2006-11-30 | 2008-06-12 | Nikon Corporation | Image processing device for correcting image color, and image processing program |
JP2010045588A (ja) * | 2008-08-12 | 2010-02-25 | Sony Corp | Image processing apparatus and image processing method |
JP2010086138A (ja) * | 2008-09-30 | 2010-04-15 | Canon Inc | Image processing method, image processing apparatus, and imaging apparatus |
US8605163B2 (en) | 2008-09-30 | 2013-12-10 | Canon Kabushiki Kaisha | Image processing method, image processing apparatus, image pickup apparatus, and storage medium capable of suppressing generation of false color caused by image restoration |
KR101517407B1 (ko) | 2008-11-06 | 2015-05-06 | 삼성전자주식회사 | Method and apparatus for removing chromatic aberration |
KR101532605B1 (ko) * | 2008-11-06 | 2015-07-01 | 삼성전자주식회사 | Method and apparatus for removing chromatic aberration |
US9160998B2 (en) | 2008-11-06 | 2015-10-13 | Samsung Electronics Co., Ltd. | Method and apparatus for canceling chromatic aberration |
JP2011013857A (ja) * | 2009-06-30 | 2011-01-20 | Toshiba Corp | Image processing device |
JP2011101129A (ja) * | 2009-11-04 | 2011-05-19 | Canon Inc | Image processing apparatus and control method therefor |
JP2011151597A (ja) * | 2010-01-21 | 2011-08-04 | Nikon Corp | Image processing device, image processing program, and electronic camera |
WO2011118071A1 (ja) | 2010-03-25 | 2011-09-29 | 富士フイルム株式会社 | Image processing method and apparatus, image processing program, and medium storing this program |
US8229217B2 (en) | 2010-03-25 | 2012-07-24 | Fujifilm Corporation | Image processing method and apparatus, image processing program and medium storing this program |
WO2013031367A1 (ja) * | 2011-08-31 | 2013-03-07 | ソニー株式会社 | Image processing device, image processing method, and program |
US9179113B2 (en) | 2011-08-31 | 2015-11-03 | Sony Corporation | Image processing device, and image processing method, and program |
JP2015170897A (ja) * | 2014-03-05 | 2015-09-28 | キヤノン株式会社 | Image processing apparatus and image processing method |
US10247933B2 (en) | 2014-08-29 | 2019-04-02 | Carl Zeiss Microscopy Gmbh | Image capturing device and method for image capturing |
JP2018082238A (ja) * | 2016-11-14 | 2018-05-24 | キヤノン株式会社 | Image encoding device, image encoding method, and program |
Also Published As
Publication number | Publication date |
---|---|
JP4706635B2 (ja) | 2011-06-22 |
EP1746846A4 (en) | 2008-12-24 |
EP1746846B1 (en) | 2019-03-20 |
JPWO2005101854A1 (ja) | 2008-03-06 |
US7916937B2 (en) | 2011-03-29 |
US20070116375A1 (en) | 2007-05-24 |
EP1746846A1 (en) | 2007-01-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005101854A1 (ja) | Image processing device having color misregistration correction function, image processing program, and electronic camera |
JP4815807B2 (ja) | Image processing device that detects lateral chromatic aberration from RAW data, image processing program, and electronic camera |
US8233710B2 (en) | Image processing device and image processing method | |
JP4054184B2 (ja) | Defective pixel correction device |
US7016549B1 (en) | Image processing method for direction dependent low pass filtering | |
EP1855486B1 (en) | Image processor correcting color misregistration, image processing program, image processing method, and electronic camera | |
JP3706789B2 (ja) | Signal processing device and signal processing method |
KR100780242B1 (ko) | Method and apparatus for removing noise in dark regions of an image |
US8145014B2 (en) | Apparatus and method of removing color noise of digital image | |
EP2056607B1 (en) | Image processing apparatus and image processing program | |
US7623705B2 (en) | Image processing method, image processing apparatus, and semiconductor device using one-dimensional filters | |
JP5324508B2 (ja) | Image processing apparatus and method, and image processing program |
JP4945942B2 (ja) | Image processing device |
JP6825617B2 (ja) | Image processing device, imaging device, image processing method, and program |
JP4945943B2 (ja) | Image processing device |
KR101600312B1 (ko) | Image processing apparatus and image processing method |
JP4934839B2 (ja) | Image processing apparatus, method, and program |
KR100627615B1 (ko) | Noise removal device using an adjustable threshold |
JP2003123063A (ja) | Image processing device |
JP4797478B2 (ja) | Image processing device |
KR101327790B1 (ko) | Image interpolation method and apparatus |
TWI389571B (zh) | Image processing method and image processing device |
WO2012007059A1 (en) | Method for lateral chromatic aberration detection and correction | |
WO2007026655A1 (ja) | Image processing device, program, imaging device, and method for processing image color misregistration |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
| AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| WWE | Wipo information: entry into national phase | Ref document number: 2006512318. Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 11545559. Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| WWW | Wipo information: withdrawn in national office | Country of ref document: DE |
| WWE | Wipo information: entry into national phase | Ref document number: 2005728731. Country of ref document: EP |
| WWP | Wipo information: published in national office | Ref document number: 2005728731. Country of ref document: EP |
| WWP | Wipo information: published in national office | Ref document number: 11545559. Country of ref document: US |