WO2010086037A1 - Method and system for lens aberration detection - Google Patents


Info

Publication number
WO2010086037A1
Authority
WO
WIPO (PCT)
Prior art keywords
matrix
image
pixels
vector
block
Prior art date
Application number
PCT/EP2009/062780
Other languages
French (fr)
Inventor
Frank Hassenpflug
Wolfgang Endress
Martin Boehning
Original Assignee
Thomson Licensing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of WO2010086037A1 publication Critical patent/WO2010086037A1/en

Classifications

    • G06T5/80
    • G06T3/047

Abstract

The present invention relates to a system and method of processing image data to generate simulated lens coefficients. A method (1200) according to the present invention comprises dividing image data into a plurality of input images in separate color planes (1210). The method (1200) further comprises identifying a radial vector in each of the color planes, such that the radial vector is in a corresponding location in each color plane (1220). Further, in each color plane, a block of pixels located along the radial vector is identified (1230). The pixel values in the block of pixels in each color plane are correlated for generating a shift vector (1240), from which a shift vector matrix is produced (1250). Finally, the method generates lens coefficients from the shift vector matrix, such that the lens coefficients define the chromatic aberration in the input image (1260).

Description

Method and System for Lens Aberration Detection
The present invention relates to the field of optical lens systems. In particular, exemplary embodiments of the present invention relate to a method and system for detecting chromatic aberrations.
The optical elements used by cameras and other optical devices to collect images from the environment often introduce errors into the images. Such errors may include various aberrations that distort the color or perspective of the images. Such errors may be perceptible to a viewer and, thus, may decrease the accuracy or aesthetic value of the images.
Two common types of error introduced into images by optical systems are chromatic distortions and curvilinear distortions. Chromatic distortions are caused by the wavelength dependency of the refractive index of the materials used in the optical elements. The different refractive indices lead to different focal points for the differing wavelengths. As discussed in further detail below, this may lead to blurring of the colors in images. Curvilinear distortions may be caused by optical elements that differ from ideal designs, which can lead to different focal points for light entering the optical elements at different points. This type of distortion may cause curvature in lines that should be straight in images and, thus, cause distortions in perspective.
Various systems have been implemented to attempt to solve the problem of detecting these distortions in image collection systems, so that such distortions could be corrected by appropriate processing. Generally, image aberration detection may take place prior to the collection of the images, by modification of the design of the optical elements, or after image collection, through processing of stored images in a computer system. While significant advancement has been made in improving the design of complex optical elements to minimize image distortions, the availability of increasingly powerful microprocessors of ever decreasing size has made the use of post-collection processing to detect image distortions practical within image collection systems.
U.S. Patent Application Publication No. 2008/0062409 to Utsugi purports to disclose an image processing device for detecting chromatic aberrations. Accordingly, the system has an input section and a detecting section. The input section receives raw data made up of color components arranged in each pixel in a predetermined pattern. The detecting section detects a color shift amount by calculating a correlation between two color components included in the raw data. The detecting section further determines the chromatic aberration of magnification of the optical system used for capturing the image from the color shift amount.
U.S. Patent No. 6,323,934 to Enomoto, which claims priority to Japanese Patent No. JP 9-333943, purports to disclose an image processing method for correcting at least one of lateral chromatic aberration, distortion, brightness, and blurring caused by an image collection lens. The method is generally used to correct low quality images on photographic film, but may also be used to correct images collected using a digital camera. The images are scanned from the film into an electronic device at a resolution sufficient to minimize distortions from the scanning process. The aberration to be corrected is selected, and lens data specific to the aberration is used to perform the correction calculations. The corrections are generally performed in two steps. In a first step, a lateral chromatic aberration is corrected and, in a second step, curvilinear distortions are corrected.
U.S. Patent No. 7,123,685, to Okada et al., which claims priority to JP P2003- 124930, purports to disclose an image processing device for correcting aberrations caused by a lens in an image collection device. The technique is claimed to simultaneously correct both chromatic aberrations and distortions. A correction vector is calculated for each pixel in a color plane of an image based on the lens characteristics. The image is separated into the individual color planes, and then the correction vector is applied. The corrected color planes are then recombined to form the image. The correction vector is also purported to
correct for camera shake, i.e., the failure of an operator to hold the camera steady.
An improved method and system for detecting lens aberrations is desirable.
A method of processing image data according to the present invention is set forth in claim 1. The method relates to generating simulated lens coefficients used for detecting lens chromatic aberrations (LCA). According to the method, image data is divided into a plurality of input images in separate color planes. The method comprises identifying a radial vector in each of the color planes, such that the radial vector is in a corresponding location in each color plane. Further, in each color plane located along the radial vector, a block of pixels is identified. The pixel values in the block of pixels in each of the color planes are numerically correlated for generating a shift vector, from which a shift vector matrix is generated. Finally, the method generates lens coefficients from the shift vector matrix, such that the lens coefficients define the chromatic aberration in the input image.
In one exemplary embodiment, the method utilizes a polar coordinate system and/or a Cartesian coordinate system for representing the radial vectors disposed on the color planes. Accordingly, when using the Cartesian coordinate system, the method utilizes a first quadrant of the Cartesian coordinate system for defining the radial vectors along the image. Once the block of pixels is defined for each color plane, the method correlates the pixels in each block using, for example, a Bravais-Pearson correlation formula. The method further obtains a largest correlation between the pixels in the block of each color plane to ultimately obtain the shift vector matrix. The shift vector matrix is reduced to an identity matrix augmented by an extra column, whose entries represent the lens coefficients.
Another exemplary embodiment of the present invention provides an image processing system. The system has a first component configured to identify a radial vector in each of a plurality of separate color planes having a plurality of input image data. The radial vector is disposed in a corresponding location in
each color plane. The system further has a second component configured to correlate pixel values disposed in a block of pixels in each color plane to generate a shift vector. The system also comprises a third component adapted to generate lens coefficients from a shift vector matrix derived from the shift vector, such that the lens coefficients define the chromatic aberration in the input image.
In one exemplary embodiment, the first component of the image processing system is configured to divide the image data into the plurality of separate color planes. In addition, the first and/or second components are adapted to identify the block of pixels in each color plane located along radial vector. The image processing system may further include a fourth component configured to provide normalization data to the second component. The normalization data normalizes pixel values corresponding to images of varying size.
A preferred embodiment of the present invention is described with reference to the accompanying drawings. The preferred embodiment merely exemplifies the invention; numerous possible modifications will be apparent to the skilled person. The gist and scope of the present invention are defined in the appended claims of the present application.
Fig. 1 is a diagram that is useful in explaining chromatic aberrations.
Fig. 2 is a diagram that is useful in explaining lateral chromatic aberrations.
Fig. 3 is a diagram that is useful in explaining pincushion distortions.
Fig. 4 is a diagram that is useful in explaining barrel distortions.
Fig. 5 is a diagram showing a polar coordinate system superimposed over a distorted image on a Cartesian coordinate system, which may be used to detect chromatic aberrations, in accordance with an exemplary embodiment of the present invention.
Fig. 6 is a block diagram of a system for detecting lens chromatic aberrations, in accordance with an exemplary embodiment of the present invention.
Fig. 7 is a graphical representation for evaluating parameters used by a system for detecting chromatic aberrations, in accordance with an exemplary embodiment of the present invention.
Fig. 8 is an illustration of color components of pixels disposed on different color planes, in accordance with an exemplary embodiment of the present invention.
Fig. 9 is another illustration of color components of pixels disposed on different color planes, in accordance with an exemplary embodiment of the present invention.
Fig. 10 is yet another illustration of color components of pixels disposed on different color planes, in which correlations are found therebetween, in accordance with an exemplary embodiment of the present invention.
Fig. 11 is a graphical representation of a method used to detect lens chromatic aberrations, in accordance with an exemplary embodiment of the present invention.
Fig. 12 is a process flow diagram showing a detailed method of processing image data to generate simulated lens coefficients.
In accordance with an exemplary embodiment of the present invention, an image processing system may be embedded in an image acquisition device, such as a digital camera, a digital video camera, and the like, to detect lateral chromatic aberrations and curvilinear distortions in real time as the images are collected. Moreover, technical effects provided by the invention include the correction of images for spherical and chromatic aberrations, lowering the need
for costly, high accuracy lens systems and allowing correction of aberrations as the images are collected.
In the process, images may be collected in real time and analyzed by various algorithms for detecting lateral chromatic aberrations. Such algorithms may use various mathematical schemes to manipulate image data, so as to generate lens coefficients characterizing imperfections inherent to the optical elements of the image acquisition system. In so doing, the lens chromatic aberrations can be detected indirectly, that is, without directly accessing the optical elements themselves. Furthermore, information gathered from the aberration detection, as derived from the systems and methods described herein, may further facilitate systems and methods for correcting such chromatic aberrations.
As will be described further below, to detect lens chromatic aberrations (LCA) the disclosed systems and methods employ a variety of algorithms and mathematical schemes adapted for analyzing image data. As is well known, images are generally made up of multiple pixels, where each pixel represents a combination of three colors, namely, red (R), green (G) and blue (B). For example, by assigning a certain intensity to each of the RGB components for each pixel, it is possible to "trick" a human eye into perceiving the pixel as having almost any color of the visible spectrum, provided the right combination of RGB is chosen for the desired color. In addition, it is no less important to ensure that the three RGB color components for any particular pixel are spatially aligned relative to one another, so that the pixels render the images uniformly, both in color and shape, across the image plane. Because spatial alignment between color components is an important concept in that it impacts the manner in which pixels (and images) are ultimately perceived, it is useful to conceptualize a pixel as an element having three separate color planes defined for each color, for example, one for R, one for G and one for B. Because the color planes span the entire image plane, it follows that an image can be thought of as being made up of three separate images, as defined by the three RGB color planes.
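By way of illustration, the separation of an image into its three color-plane images may be sketched as follows; this is a minimal sketch assuming an interleaved H x W x 3 RGB array, and the function name split_planes and the use of NumPy are illustrative choices rather than part of the disclosed system.

```python
import numpy as np

def split_planes(image: np.ndarray):
    """Split an interleaved (H, W, 3) RGB image into three separate
    color-plane images, one per channel, as described above."""
    r_plane = image[:, :, 0]
    g_plane = image[:, :, 1]
    b_plane = image[:, :, 2]
    return r_plane, g_plane, b_plane
```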
To the extent the performance of optical elements of an image acquisition system depends on the wavelength of light used for acquiring the image, chromatic aberrations may be manifested as image distortions due to misalignment and/or shifting between the color planes of the pixels in the acquired image. Accordingly, the disclosed systems and methods are adapted to analyze image data by analyzing the color planes of certain chosen pixels throughout the acquired image. The system correlates between different color planes to quantify the shifting existing between the various RGB planes. Because such shifting can be characterized as having a magnitude and direction, quantities termed "shift vectors" can be defined between the color planes of the pixels. The disclosed systems and methods utilize such shift vectors together with graphical and algebraic algorithms for deriving a matrix representation, whose coefficients provide the desired lens coefficients that define the chromatic aberration in the acquired images.
Those skilled in the art will appreciate that the system and the algorithms used therewith for LCA detection, as described below, may be implemented through a combination of hardware and/or software components. Hence, such systems and methods may reduce the complexity of lens systems required for collecting images and, thus, reduce the cost or weight of an image collection system.
Further, effective detection of distorted images may contribute to the manner by which such images are ultimately corrected.
Fig. 1 is a diagram that is useful in explaining chromatic aberrations. In Fig. 1, a light beam 102 is aligned along an axis 104 and impinges on a lens 106. The lens 106 focuses the light beam 102 toward a desired image plane 108. However, the material of the lens 106 will generally show chromatic dispersion, wherein the refractive index of the lens 106 depends on the wavelength of the light impinging on the lens 106. Accordingly, while one wavelength of light, for example, yellow light 110, may be focused at the desired image plane 108, the refractive index for blue light 112 will be higher, leading to a higher angle of refraction from the lens 106. Thus, the focal point 114 of the blue light 112 may land in front of the desired image plane 108. Similarly, a red light 116 may have
a lower index of refraction in the lens 106 than the yellow light 110, leading to less refraction by the lens 106, providing a focal point 118 that is beyond the desired image plane 108. The failure of the different wavelengths 110, 112, and 116 to focus at the same point, e.g., on the desired image plane 108, leads to a circle-of-uncertainty 120, over which a single point of an image may be spread out, depending on the colors in the point. This may cause color fringes to appear around features in an image. Similar issues cause lateral chromatic aberrations, as discussed with respect to Fig. 2.
Fig. 2 is a diagram that is useful in explaining lateral chromatic aberrations. In Fig. 2, a light beam 202 is aligned along an axis 204 that is not aligned with an axis 206 of a lens 208 and desired image plane 210. The light beam 202 impinges on the lens 208 and is focused toward the desired image plane 210. However, for the reasons discussed with respect to Fig. 1, different wavelengths of light are refracted at different angles by the lens 208. Accordingly, while a yellow light 212 may have a focal point 214 that lands at a correct position on the desired image plane 210, a blue light 216 may have a focal point 218 that is offset to one side of the yellow light 212. Similarly, a red light 220 may have a focal point 222 that is offset on the opposite side of the yellow light 212 from the blue light 216. This blurring of the colors may cause offset color fringes, e.g., magenta or green fringes, to appear on one side of an object. Chromatic aberrations are not the only distortions that may be caused by optical elements, such as lenses. Curvilinear distortions, as discussed with respect to Figs. 3 and 4, may also be an issue.
Curvilinear distortions are distortions in which straight lines in a subject appear to be curved in an image. Various types of curvilinear distortions exist, including pincushion and barrel distortions as discussed with respect to Figs. 3 and 4. In the illustration 300 of a pincushion distortion shown in Fig. 3, a subject 302 is focused along an axis 304 through a lens 306 to form an image 308 at an image plane 310. The desired mapping of points from the subject 302 to the image 308 is illustrated by the rays 312. However, due to distortions caused, for example, by the placement of an aperture or stop 314 between the lens 306 and the image
308, the rays may not land where they are expected, as indicated by ray 316. This may cause the sides of the subject 302 to appear to curve inwards in the image 308.
Similarly, as shown in the illustration 400 in Fig. 4, the placement of an aperture or stop 402 between the subject 302 and the lens 306 may make rays 404 land in different places than expected, as indicated by rays 406. This distortion may make the sides of the subject 302 appear to curve outwards in the image 308.
To detect distortions of the type discussed above in Figs. 1-4, an image acquiring and processing system may employ a real time algorithm adapted for analyzing acquired images, so as to quantify the magnitude of the distortions across the image. Generally, because an image is made up of a two dimensional grid of pixels, such analysis is typically done while mapping the pixels along one or more two-dimensional coordinate systems. In this manner, it is possible to graphically and/or algebraically analyze each color plane, as well as their positions relative to one another throughout the input image. As will be described further below, the use of such techniques is instrumental in determining the lens coefficients of a fourth order polynomial (given below as Equation 2), as sought by the present technique to define the LCA in the input image.
Accordingly, Fig. 5 is a diagram showing a polar coordinate system superimposed over a distorted image on a Cartesian coordinate system, in accordance with an exemplary embodiment of the present invention. An image 502 shown in this illustration 500 has a pincushion distortion. A Cartesian coordinate system is imposed over the image 502, wherein the vertical axis 504 is labeled "y" and, similarly, the horizontal axis 506 is labeled "x". The polar coordinates are represented by the vector 508, illustrating the angle of a point from the center, and the circle 510, representing the distance of the point from the center. In the illustration 500, the vector 508 and circle 510 represent a radial pixel coordinate, for example, coordinate 512 (also referred to below as radin), of an input image. Generally, a radial pixel coordinate of a desired image, for example, coordinate 514 (also referred to below as radout), may be expected to lie along the vector 508 as well. However, if the input image shows a pincushion distortion, as in Fig. 5, coordinate 514 would be at a farther distance from the center than coordinate 512. Hence, the variation between coordinates 512 and 514 may stem from the relative shifting between the RGB color planes of the image. As described below, such variations may also be described via a functional relationship existing between radin and radout.
Fig. 6 is a block diagram of a system used for detecting LCA, in accordance with an exemplary embodiment of the present invention. Block diagram 600 illustrates a system having multiple components adapted to execute an LCA detection algorithm for defining the LCA in an input image. Accordingly, the system 600 receives an input image 602 which, through the image acquisition process, has acquired one or more of the above-discussed chromatic aberrations. The image 602 is provided to block 604, adapted to ascertain pixel values for each RGB component (plane) along certain radial lines defined within the image. The radial lines along which the above values are taken may be similar to those lines discussed above with reference to Fig. 5, as well as to those discussed in more detail below.
More specifically, the block 604 separates each pixel into its color planes, i.e., the RGB planes, such that in each plane the pixel acquires a particular color value. The RGB pixel values for each plane may be denoted, for example, as RadLineR, RadLineG and RadLineB, as shown in Fig. 6. In one exemplary embodiment, and as discussed further below, the system 600 obtains the aforementioned values along certain lines, such as those exhibiting pronounced distortions and/or notable artifacts throughout the input image.
Further, once the block 604 obtains the RadLines for each of the RGB colors, the RadLines are then provided to block 606. By use of various mathematical algorithms, the block 606 correlates between the different color planes, so as to determine the relative shifting therebetween. It should be borne in mind that, to ensure smooth and sequential processing of the aforementioned data, the system 600 may employ three separate buffers for storing each of the three RadLines while they are processed between the blocks 604 and 606.
In accordance with the present technique, the block 606 is further adapted to evaluate which pixels among the different color planes possess the highest correlation along the radial lines defined in the image. As discussed below, such analysis provides the basis for constructing the shift vectors, which quantify in part the chromatic aberrations in the image. More specifically, the block 606 may assemble the shift vector data in the form of tables, labeled PosMatchRG, PosMatchBG, etc. These parameters further define the relative shift that exists between the different color planes, as well as the relationship existing between the input image and an output image, i.e., one in which there are no aberrations.
As further illustrated by Fig. 6, the block 606 is adapted to receive input normalization data from block 608. The normalization data provided by the block 608 is adapted to treat the input images on an equal footing. In other words, with the normalization implementation, input images of various scales are processed consistently, such that the PosMatch values generated by the block 606 are applied uniformly to images having different sizes.
In accordance with an exemplary embodiment of the present invention, the block 608 defines four types of normalization factors, given by the following Equations 1a-1d:

Image width: Norm = 2 / (InWidth - 1)   (Equation 1a)

Image height: Norm = 2 / (InHeight - 1)   (Equation 1b)

Image diagonal: Norm = 2 / √((InWidth - 1)² + (InHeight - 1)²)   (Equation 1c)

User value: Norm = 2 / (UserValue - 1)   (Equation 1d)
Hence, the different "Norms", as labeled above, define normalization factors for the processed image in terms of the input image data 602. The term "InWidth" is defined as the width of the input image 602, "InHeight" is defined as the height of the input image 602, and "UserValue" is a user defined normalization basis.
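A minimal sketch of these normalization factors follows, under the reconstruction above; in particular, the factor 2 in the user-value case of Equation 1d is an assumption that follows the pattern of Equations 1a-1c, and the function name and mode strings are illustrative.

```python
import math

def norm_factor(in_width: int, in_height: int,
                mode: str = "diagonal", user_value: float = 0.0) -> float:
    """Normalization factor per Equations 1a-1d. Scaling pixel positions
    by this factor makes images of different sizes comparable."""
    if mode == "width":        # Equation 1a
        return 2.0 / (in_width - 1)
    if mode == "height":       # Equation 1b
        return 2.0 / (in_height - 1)
    if mode == "diagonal":     # Equation 1c
        return 2.0 / math.sqrt((in_width - 1) ** 2 + (in_height - 1) ** 2)
    if mode == "user":         # Equation 1d (factor 2 assumed by analogy)
        return 2.0 / (user_value - 1)
    raise ValueError(f"unknown mode: {mode}")

# For the 1920x1080 example used below this yields approximately 0.000908.
print(norm_factor(1920, 1080, "diagonal"))
```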
The PosMatch values generated by the block 606 along with the corresponding normalization values provided by the block 608, are further provided to block 610 for obtaining lens coefficients of a 4th order polynomial defining the LCA in the input image. This polynomial is given by the following Equation 2.
radin(radout) = a·radout⁴ + b·radout³ + c·radout² + d·radout   (Equation 2)
In Equation 2, radout stands for a radial distance from the origin of the coordinate system of an output image (e.g., a desired output image). Similarly, radin stands for a radial distance from the origin of the coordinate system of the input image (see Fig. 7 below). The coefficients a, b, c, and d stand for lens specific coefficients, which numerically characterize the lens chromatic aberration of the input image.
As exemplified by Equation 2, the radial pixel coordinate of the input image, for example, radin, is given as a function of the radial pixel coordinate of the output image, for example, radout. The functional relationship between the two aforementioned parameters is primarily defined through the coefficients a, b, c, and d. Hence, with the input provided by the block 606, it is the main purpose of the block 610 to obtain the lens coefficients using certain mathematical algorithms. While in one exemplary embodiment the block 610 utilizes a Gaussian linear regression algorithm (as shown below) to calculate the coefficients a, b, c, and d, other mathematical schemes, such as approximate curve fitting, finite element schemes, etc., may be used to obtain the above coefficients.
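For illustration, Equation 2 and one possible realization of the regression performed by block 610 may be sketched as follows; the least-squares call stands in for the Gaussian elimination worked through below and is an assumption, not the patent's prescribed implementation.

```python
import numpy as np

def radin_from_radout(radout, a, b, c, d):
    """Equation 2: radial input coordinate as a 4th-order polynomial
    (with no constant term) of the radial output coordinate."""
    return a * radout**4 + b * radout**3 + c * radout**2 + d * radout

def fit_lens_coefficients(radout, radin):
    """Fit a, b, c, d from matched (radout, radin) samples by linear
    least squares -- one way to realize block 610."""
    radout = np.asarray(radout, dtype=float)
    basis = np.column_stack([radout**4, radout**3, radout**2, radout])
    coeffs, *_ = np.linalg.lstsq(basis, np.asarray(radin, dtype=float),
                                 rcond=None)
    return coeffs  # a, b, c, d
```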
Fig. 7 is a graphical representation 700 for evaluating parameters used by a system for detecting chromatic aberrations, in accordance with an exemplary embodiment of the present invention. The graphical representation 700 is a depiction of an input image shown to have acquired one or more of the above discussed chromatic aberrations. While the representation 700 may seem to depict a barrel-type aberration, it should be borne in mind that the ensuing discussion of the present technique may be applicable to other types of aberrations in general and to chromatic aberration in particular.
As illustrated, the image 700 has a center 702 from which multiple radial lines emanate. Accordingly, radial lines 704, 706, 708 and 710 originate from the center 702, such that each of the radial lines may have a different length and radial direction throughout the image 700. The lines 704-710 may mark paths along the image replete with distortions and/or artifacts, such as those resulting from chromatic aberrations.
In accordance with the present technique, one or more of the lines 704-710 may be chosen to obtain pixel values along different RGB planes, as performed by the block 604 of the system 600 discussed above with reference to Fig. 6. For example, in one embodiment, pixel values may be captured along the radial lines 704 and 708, while in other embodiments the pixel values may be captured along the lines 706 to 710. Still in other embodiments, the pixel values may be captured on all the lines 704-710 and/or additional radial lines not shown herein. The choice of radial lines for obtaining pixel values may be left to the discretion of the user, or it may be predetermined by an algorithm, or it may even be made dynamically by a combination of user input and/or algorithms, depending on the image type, scale, desired quality and so forth.
Figs. 8 and 9 are illustrations of color components of pixels disposed on different color planes, in accordance with an exemplary embodiment of the present invention. Fig. 8 depicts two areas, namely, areas 800 and 802, of two separate color planes, chosen along one of the radial lines 704-710 of Fig. 7. In one exemplary embodiment, the area 800 may represent an area of a G plane along a particular radial line, while the area 802 may represent an area of an R plane along the same radial line. While the two areas may be chosen to encompass an identical number of pixels, for example, 13 pixels in total as shown in the areas 800 and 802, other embodiments may employ areas having a different number of pixels. As further illustrated by Fig. 8, a pixel 804 may be chosen as a center pixel, around which the G and R color components are analyzed.
Fig. 9 is another illustration of color components of pixels disposed on different color planes, in accordance with an exemplary embodiment of the present invention. Accordingly, Fig. 9 depicts subareas/blocks 900 and 902, chosen to contain a certain number of pixels from the pixel areas 800 and 802, respectively. The blocks 900 and 902 may both have the same area and/or may possess the same number of pixels. The blocks 900 and 902 may define the pixels that are analyzed, such as by the block 606 of the system 600 (Fig. 6), to ascertain possible correlations between the respective color planes on which the blocks 900 and 902 are disposed. In one embodiment, the blocks 900 and 902 may be mathematically analyzed using a Bravais-Pearson formula for finding the correlation parameters between two color planes, for example, R and G. The Bravais-Pearson formula is given by the following Equation 3.
r = Σ(xᵢ - x̄)(yᵢ - ȳ) / √( Σ(xᵢ - x̄)² · Σ(yᵢ - ȳ)² ), with sums over i = 1, ..., n   (Equation 3)

In Equation 3, "r" designates a correlation coefficient having values between 0 and 1, i.e., 0 ≤ |r| ≤ 1. In other words, the greater the value of "r", the greater the correlation between the color pixels disposed on the different planes (e.g., the G and R pixels shown in Figs. 8 and 9). The parameter "n" designates the number of pixels within the blocks 900 and 902, xᵢ designates a pixel value of one of the pixels in the block 900, and yᵢ designates the pixel value of one of the pixels in the block 902. Further, x̄ designates the arithmetic mean of the pixel values of the block 900, as given by the following Equation 4.

x̄ = (1/n) · Σ xᵢ, with the sum over i = 1, ..., n   (Equation 4)

Still further, ȳ designates the arithmetic mean of the pixel values of the block 902, as given by the following Equation 5.

ȳ = (1/n) · Σ yᵢ, with the sum over i = 1, ..., n   (Equation 5)
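Equations 3-5 may be sketched in a few lines as follows; the guard for a zero denominator (a constant block) is an added assumption, since the patent does not address that degenerate case.

```python
import numpy as np

def bravais_pearson(x, y) -> float:
    """Bravais-Pearson correlation coefficient r (Equation 3) between the
    pixel values of two equally sized blocks; the .mean() calls compute
    the arithmetic means of Equations 4 and 5."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    xd, yd = x - x.mean(), y - y.mean()
    denom = np.sqrt((xd ** 2).sum() * (yd ** 2).sum())
    return float((xd * yd).sum() / denom) if denom else 0.0
```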
Fig. 10 is yet another illustration of color components of pixels disposed on different color planes, in which correlations are obtained therebetween, in accordance with an exemplary embodiment of the present invention. Accordingly, Fig. 10 illustrates the manner by which pixels contained in the block 902 of the area 802 of the R color plane are correlated in relation to pixels contained in the block 900 of the area 800 disposed on the G plane. Those of ordinary skill will appreciate that Equations 3-5 are applicable to finding correlations between color planes other than those discussed herein. Furthermore, Fig. 10, compared to Fig. 9, illustrates the movement of the block 902 within the area 802 for ascertaining the highest correlation between the block 900 and the block 902.
Further, the LCA detection algorithm, as executed by the system 600, obtains numerical values for the above defined correlation parameters, so as to determine, first, which correlation parameters exceed a certain threshold, if indeed such correlations between the color planes exist. Second, if the correlation parameters of more than one respective pixel are found to exceed the threshold value, the algorithm determines which of those values is the maximum. For example, in Fig. 10, the block having the greatest correlation to the block 900 occurs when the block 902 is shifted to the left of the block 900. Accordingly, a shift vector 1000 pointing to the right represents the direction in which the pixels in the R plane are shifted relative to the G plane, thereby defining a chromatic aberration in the image. By further example, if the pixel 804 has a position value of 10 in the block 900 and the shift vector yields a value of 3, the corresponding pixel in the block 902 has a position value of 10 - 3 = 7. Thus, pixel 10 and pixel 7 have the largest correlation between the RG planes having the areas 800 and 802, respectively. In case none of the correlation coefficients calculated for the blocks 900 and 902 reaches a predetermined threshold value, the position values of the pixels in both blocks are set to a value of 0.
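The search for the best-correlated block position, and hence the shift vector, may then be sketched as below, using the bravais_pearson function given above; the default threshold of 0.5 is an assumed value, the patent speaking only of a predetermined threshold.

```python
def shift_between_planes(ref_block, search_line, ref_pos, threshold=0.5):
    """Slide a window of len(ref_block) pixels along `search_line` (the
    pixel values of the second color plane along the radial line) and
    return ref_pos minus the best-matching start position, i.e. the
    shift vector magnitude; returns 0 when no correlation reaches the
    threshold, mirroring the fallback described above."""
    n = len(ref_block)
    best_r, best_start = -1.0, ref_pos
    for start in range(len(search_line) - n + 1):
        r = bravais_pearson(ref_block, search_line[start:start + n])
        if r > best_r:
            best_r, best_start = r, start
    return ref_pos - best_start if best_r >= threshold else 0
```

With the example above, a green reference block taken at position 10 whose best match in the red plane is found at position 7 yields a shift of 3.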
Fig. 11 is a graphical representation of a method used for detecting lens chromatic aberrations, in accordance with an exemplary embodiment of the present invention. As discussed above, the LCA correction algorithm obtains shift vectors along radial lines of the images, as shown above with reference to Fig. 7. While such vectors may have a polar coordinate representation, it may be desirable for further calculation purposes to transform such coordinates to a different coordinate system, i.e., Cartesian coordinates. Hence, Fig. 11 depicts an input image 1100 disposed on a first quadrant of a Cartesian coordinate system, where the bottom left corner of the image coincides with the origin of the coordinate system. As further illustrated, radial lines 1110 and 1112 emanating from the center of the image 1100 are disposed along a diagonal of the Cartesian coordinate system. Thus, the radial lines 1110 and 1112 provide a path along the image 1100 from which the LCA detection algorithm captures the pixel values to obtain correlations between RGB color planes, as described above.
Accordingly, position of pixels disposed on the diagonal formed by the lines 1110 and 1112 may be mathematically represented by the following Equation 6.
. . InHeight - 1 ._ ,. _ y(χ) = x Equation 6
InWidth - 1
The terms in Equation 6 are further defined by Equations 1a and 1b. Hence, the LCA correction algorithm utilizes Equation 6, as well as the methods discussed
with reference to Figs. 8-11 for generating shift vectors for the RGB color planes along the diagonals 1110 and 1112. The values of such shift vectors is given by the following Equation 7.
π , ^ r r ( iniagediag PixPos imagediagλ ^ τ ._ . ■ -, PosMatch = abs\ • Norm Equation 7,
I. InWidth - 1 2 J
Where the term "imagediag" is given by the following Equation 8:
iniagediag = ^(inWidth - if + (inHeight - if Equation 8.
In Equation 7, "imagediag" is designated as length of the image diagonal (made of radial lines 1110 and 1112), and PixPos is designated as the pixel position within the image diagonal line.
For example, for an image with dimensions of 1920x1080 pixels, Equations 7 and 8 yield the following.

imagediag = √((InWidth - 1)² + (InHeight - 1)²) = √((1920 - 1)² + (1080 - 1)²) = 2201.55
Thus, for the 10th green pixel (as described with reference to Figs. 9-10) disposed on the diagonals 1110 and 1112, the shift vector is given numerically by the expressions below.

PosMatchG = abs( (imagediag · PixPos) / (InWidth - 1) - imagediag / 2 ) · Norm

PosMatchG = abs( (2201.55 · 10) / (1920 - 1) - 2201.55 / 2 ) · 0.000908 = 0.989087
For the 7th red pixel (as described with reference to Figs. 9-10) disposed on the diagonals 1110 and 1112, the shift vector is given numerically by the following expressions.

PosMatchR = abs( (imagediag · PixPos) / (InWidth - 1) - imagediag / 2 ) · Norm

PosMatchR = abs( (2201.55 · 7) / (1920 - 1) - 2201.55 / 2 ) · 0.000908 = 0.992212
The above numerical results define a PosMatchRG entry, namely, [0.992212, 0.989087], which is ultimately used to obtain the coefficients of Equation 2.
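The worked example may be reproduced with the short script below; the rounded constants 2201.55 and 0.000908 are taken directly from the text so that the printed values match it exactly.

```python
in_width, in_height = 1920, 1080
imagediag = 2201.55   # Equation 8, rounded as in the text
norm = 0.000908       # 2 / imagediag (Equation 1c), rounded as in the text

def pos_match(pix_pos: float) -> float:
    """Equation 7: normalized distance of a diagonal pixel from the image
    center (Equation 6 places the diagonal at
    y(x) = x * (in_height - 1) / (in_width - 1))."""
    return abs(imagediag * pix_pos / (in_width - 1) - imagediag / 2) * norm

print(f"{pos_match(10):.6f}")  # 0.989087 -> PosMatchG, 10th green pixel
print(f"{pos_match(7):.6f}")   # 0.992212 -> PosMatchR, 7th red pixel
```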
The LCA correction algorithm utilizes similar calculations, such as those yielding values for PosMatchBG, for obtaining shift vector data for other pixels in other color planes. It should be borne in mind that while the above numerical results specifically apply to the direction of the diagonals 1110 and 1112, as shown in the image 1100 of Fig. 11, other diagonals having different directions along the image 1100 may be chosen, and similar calculations apply.
In one exemplary embodiment, the above results can be used to obtain a relationship between the radout[green] and radin[red] parameters for obtaining the coefficients a, b, c, and d of Equation 2. The following Table 1 summarizes these relationships.
Table 1

radout [green]:  0.000000   0.250000   0.500000   0.750000   1.000000
radin [red]:     0.000000   0.237227   0.480625   0.733477   1.000000
The above Table 1 lists the radout position of the ith green pixel and the corresponding radin position of the ith red pixel, taking into account the correlations between the R and G pixels and the PosMatchRG parameter described above.
Next, with the values of Table 1 at hand, as well as similar values of radin and radout for additional pixels in the image 1100, it is possible to obtain the lens coefficients a, b, c and d of Equation 2. In so doing, the present technique utilizes a 4 by 5 matrix (labeled Matrix 1) having the following entries:
Matrix 1

[f₁(radout)·f₁(radout)]  [f₁(radout)·f₂(radout)]  [f₁(radout)·f₃(radout)]  [f₁(radout)·f₄(radout)]  [radin·f₁(radout)]
[f₂(radout)·f₁(radout)]  [f₂(radout)·f₂(radout)]  [f₂(radout)·f₃(radout)]  [f₂(radout)·f₄(radout)]  [radin·f₂(radout)]
[f₃(radout)·f₁(radout)]  [f₃(radout)·f₂(radout)]  [f₃(radout)·f₃(radout)]  [f₃(radout)·f₄(radout)]  [radin·f₃(radout)]
[f₄(radout)·f₁(radout)]  [f₄(radout)·f₂(radout)]  [f₄(radout)·f₃(radout)]  [f₄(radout)·f₄(radout)]  [radin·f₄(radout)]
In Matrix 1, the following relationships apply:

f₁(radout) := radout⁴   (Equation 9a)

f₂(radout) := radout³   (Equation 9b)

f₃(radout) := radout²   (Equation 9c)

f₄(radout) := radout   (Equation 9d)
Further, each of the square brackets appearing in Matrix 1 represents a Gaussian sum over all samples i, as defined by the following Equation 10.

[radin·f₁(radout)] = Σᵢ radinᵢ · f₁(radoutᵢ)   (Equation 10)
To obtain the coefficients a, b, c and d of the 4th order polynomial used in the LCA correction algorithm, i.e., Equation 2, a suitable transformation is applied to Matrix 1 , so as to transform it to a matrix having a form displayed by the following Matrix 2.
Matrix 2

1 0 0 0 a
0 1 0 0 b
0 0 1 0 c
0 0 0 1 d
As appreciated by one of ordinary skill, the transformation of Matrix 1 into Matrix 2 produces a four by four (4×4) identity matrix augmented by a fifth column, whose entries are the desired lens coefficients of Equation 2.
To proceed forward and obtain numerical values for the coefficients a, b, c and d for the particular case of Table 1, Equations 9a-9d are first substituted into Matrix 1 to obtain the following Matrix 3.
Matrix 3

[radout⁸]  [radout⁷]  [radout⁶]  [radout⁵]  [radin·radout⁴]
[radout⁷]  [radout⁶]  [radout⁵]  [radout⁴]  [radin·radout³]
[radout⁶]  [radout⁵]  [radout⁴]  [radout³]  [radin·radout²]
[radout⁵]  [radout⁴]  [radout³]  [radout²]  [radin·radout]
Next, Gaussian terms, as given by Equation 10, are calculated for each of the entries of the Matrix 3. These are given by the following numerical results.
[radout⁸] = 0⁸ + 0.25⁸ + 0.5⁸ + 0.75⁸ + 1⁸ = 1.104034
[radout⁷] = 0⁷ + 0.25⁷ + 0.5⁷ + 0.75⁷ + 1⁷ = 1.141357
[radout⁶] = 0⁶ + 0.25⁶ + 0.5⁶ + 0.75⁶ + 1⁶ = 1.193848
[radout⁵] = 0⁵ + 0.25⁵ + 0.5⁵ + 0.75⁵ + 1⁵ = 1.269531
[radout⁴] = 0⁴ + 0.25⁴ + 0.5⁴ + 0.75⁴ + 1⁴ = 1.382813
[radout³] = 0³ + 0.25³ + 0.5³ + 0.75³ + 1³ = 1.562500
[radout²] = 0² + 0.25² + 0.5² + 0.75² + 1² = 1.875000
[radin·radout⁴] = 0·0⁴ + 0.237227·0.25⁴ + 0.480625·0.5⁴ + 0.733477·0.75⁴ + 1·1⁴ = 1.263042
[radin·radout³] = 0·0³ + 0.237227·0.25³ + 0.480625·0.5³ + 0.733477·0.75³ + 1·1³ = 1.373220
[radin·radout²] = 0·0² + 0.237227·0.25² + 0.480625·0.5² + 0.733477·0.75² + 1·1² = 1.547564
[radin·radout] = 0·0 + 0.237227·0.25 + 0.480625·0.5 + 0.733477·0.75 + 1·1 = 1.849727
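These sums can be checked with a few lines of Python from the (radout, radin) samples of Table 1 as reconstructed above; the printed values agree with the figures listed here up to rounding in the last digit.

```python
radout = [0.0, 0.25, 0.5, 0.75, 1.0]               # Table 1, green (radout)
radin  = [0.0, 0.237227, 0.480625, 0.733477, 1.0]  # Table 1, red (radin)

for p in range(8, 1, -1):        # [radout^8] ... [radout^2]
    print(f"[radout^{p}] = {sum(r ** p for r in radout):.6f}")
for p in range(4, 0, -1):        # [radin*radout^4] ... [radin*radout]
    s = sum(i * r ** p for i, r in zip(radin, radout))
    print(f"[radin*radout^{p}] = {s:.6f}")
```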
Hence, given the above calculation, Matrix 3 is numerically given by the following Matrix 4.

Matrix 4

1.104034  1.141357  1.193848  1.269531  1.263042
1.141357  1.193848  1.269531  1.382813  1.373220
1.193848  1.269531  1.382813  1.562500  1.547564
1.269531  1.382813  1.562500  1.875000  1.849727
In accordance with the present technique, the Gaussian Elimination Method is applied to the first diagonal element of Matrix 4, so that it attains a value of 1. This is done by dividing all elements of the first row of Matrix 4 by the first element of row 1, i.e., 1.104034. Hence, this results in the following Matrix 5.
Matrix 5

1.000000  1.033806  1.081350  1.149902  1.144025
1.141357  1.193848  1.269531  1.382813  1.373220
1.193848  1.269531  1.382813  1.562500  1.547564
1.269531  1.382813  1.562500  1.875000  1.849727
Thereafter, the first entries of the second, third and fourth rows of Matrix 5 are made to have a value of zero. This is done by, first, multiplying the first row of Matrix 5 by 1.141357 and subtracting the result from the second row; second, multiplying the first row by 1.193848 and subtracting the result from the third row; and third, multiplying the first row by 1.269531 and subtracting the result from the fourth row. The aforementioned manipulations yield the following Matrix 6.
Matrix 6

1.000000  1.033806  1.081350  1.149902  1.144025
0.000000  0.013905  0.035324  0.070363  0.067480
0.000000  0.035324  0.091845  0.189692  0.181773
0.000000  0.070363  0.189692  0.415164  0.397352
The Gaussian Elimination Method can now be applied to the second diagonal element of Matrix 6, such that it, too, attains a value of unity, by dividing all elements of the second row by 0.013905. The resulting Matrix 7 is given by the following.
Matrix 7

1.000000  1.033806  1.081350  1.149902  1.144025
0.000000  1.000000  2.540317  5.060119  4.852725
0.000000  0.035324  0.091845  0.189692  0.181773
0.000000  0.070363  0.189692  0.415164  0.397352
Thereafter, Matrix 7 is manipulated in a manner similar to that described above to generate a matrix in which the first, third and fourth entries of the second column are made to attain a value of zero. This is achieved by, first, multiplying the second row by 1.033806 and subtracting the result from the first row; second, multiplying the second row by 0.035324 and subtracting the result from the third row; and third, multiplying the second row by 0.070363 and subtracting the result from the fourth row. The aforementioned manipulations yield the following Matrix 8.
Matrix 8

1.000000  0.000000  -1.544845  -4.081280  -3.872752
0.000000  1.000000   2.540317   5.060119   4.852725
0.000000  0.000000   0.002110   0.010947   0.010353
0.000000  0.000000   0.010947   0.059116   0.055898
Those skilled in the art will appreciate that iteratively operating on Matrix 8 using computational steps similar to those described above ultimately yields a matrix identical in form to that given by Matrix 2. Accordingly, such steps lead to a matrix whose last, i.e., fifth, column gives the lens coefficients, namely, a, b, c and d of Equation 2. Thus, in the above example, Matrix 8 is further manipulated to give the final Matrix 9.
Matrix 9

1.000000  0.000000  0.000000  0.000000  0.009963
0.000000  1.000000  0.000000  0.000000  0.020075
0.000000  0.000000  1.000000  0.000000  0.029953
0.000000  0.000000  0.000000  1.000000  0.940009
Hence, the last column of Matrix 9 gives the desired coefficients of the 4th order correction polynomial, as given by the following Equation 11.

radin(radout) = 0.009963·radout⁴ + 0.020075·radout³ + 0.029953·radout² + 0.940009·radout   (Equation 11)
In this form, Equation 11 defines and, thus, characterizes the LCA in the input image.
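The complete elimination may be reproduced with the following sketch, which builds Matrix 3 from the Table 1 samples and applies a Gauss-Jordan reduction without pivoting, mirroring the walkthrough above; plain Python is used, and the printed coefficients agree with Matrix 9 up to rounding.

```python
def gauss_jordan(m):
    """Reduce an augmented n x (n+1) matrix to the form of Matrix 2
    (identity plus solution column) and return the last column."""
    n = len(m)
    for col in range(n):
        pivot = m[col][col]                    # no pivoting, as above
        m[col] = [v / pivot for v in m[col]]   # make the diagonal entry 1
        for row in range(n):
            if row != col:                     # clear the rest of the column
                factor = m[row][col]
                m[row] = [a - factor * b for a, b in zip(m[row], m[col])]
    return [row[n] for row in m]

radout = [0.0, 0.25, 0.5, 0.75, 1.0]
radin  = [0.0, 0.237227, 0.480625, 0.733477, 1.0]
f = [lambda r: r**4, lambda r: r**3, lambda r: r**2, lambda r: r]

# Matrix 3: Gaussian sums [fi*fj] augmented with [radin*fi] (Equation 10)
matrix3 = [[sum(fi(r) * fj(r) for r in radout) for fj in f]
           + [sum(i * fi(r) for i, r in zip(radin, radout))] for fi in f]

a, b, c, d = gauss_jordan(matrix3)
print(a, b, c, d)   # approx. 0.009963, 0.020075, 0.029953, 0.940009
```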
Fig. 12 is a process flow diagram showing a detailed method 1200 of processing image data to generate simulated lens coefficients. The method 1200 generally describes the manner by which the LCA detection algorithm captures and correlates pixels on RGB color planes for ultimately obtaining the above described lens coefficients a, b, c and d of Equation 2. Accordingly, the method begins at block 1210, where image data is acquired and divided into a plurality of input images in separate color planes. Thereafter, the method 1200 proceeds to block 1220, where the method identifies a radial vector in each of the color planes, such that the radial vector is in a corresponding location in each color plane. From block 1220, the method proceeds to block 1230, in which the method 1200 identifies a block of pixels in each color plane located along the radial vector. While in a preferred embodiment the block of pixels in each plane may be chosen to contain the same number of pixels, the two blocks may alternatively be chosen to have different numbers of pixels.
From block 1230, the method 1200 proceeds to block 1240, where the method correlates the pixel values in the block of pixels in each color plane to generate a shift vector. Thereafter, the method 1200 proceeds to block 1250, where the method generates a shift vector matrix from the shift vector(s). From the shift vector matrix, at block 1260, the method generates the lens coefficients defining the chromatic aberration in the input image.
One of ordinary skill will appreciate that combining any of the above-recited features of the present invention together may be desirable.

Claims

1. A method (1200) of processing image data to generate simulated lens coefficients, comprising:
dividing image data into a plurality of input images in separate color planes (1210);
identifying a radial vector in each of the color planes, wherein the radial vector is in a corresponding location in each color plane (1220);
identifying a block of pixels in each color plane that is located along the radial vector (1230);
correlating the pixel values in the block of pixels in each color plane to generate a shift vector (1240);
generating a shift vector matrix from the shift vector (1250); and
generating lens coefficients from the shift vector matrix, wherein the lens coefficients define the chromatic aberration in the input image (1260).
2. Method (1200) of processing image data according to claim 1, wherein the color planes are represented using a polar coordinate system.

3. Method (1200) of processing image data according to claim 1, wherein the color planes are represented using a Cartesian coordinate system.

4. Method (1200) of processing image data according to claim 3, wherein the color planes are represented on a quadrant of the Cartesian coordinate system.
5. Method (1200) of processing image data according to any of claims 1-4, wherein correlating the pixel values comprises using a Bravais-Pearson correlation formula.
6. Method (1200) of processing image data according to any of claims 1-5, comprising reducing the shift vector matrix to a four by five matrix, wherein the four by five matrix is comprised of a four by four identity matrix augmented by a fifth column, wherein entries of the fifth column represent the lens coefficients.
7. Method (1200) of processing image data according to any of claims 1-6, wherein the shift vector is determined by a largest correlation value existing between the block of pixels in each color plane.
8. An image processing system (600), comprising:
a first component (602) configured to identify a radial vector in each of a plurality of separate color planes comprising a plurality of input image data, wherein the radial vector is in a corresponding location in each color plane;
a second component (606) configured to correlate pixel values disposed in a block of pixels in each color plane to generate a shift vector; and
a third component (610) adapted to generate lens coefficients from a shift vector matrix derived from the shift vector, wherein the lens coefficients define the chromatic aberration in the input image.
9. The image processing system (600) according to claim 8, wherein the first component is further configured to divide the image data into the plurality of separate color planes.
10. The image processing system (600) according to any of claims 8-9, wherein the first and/or second components are further adapted to identify the block of pixels in each color plane that is located along the radial vector.
11. The image processing system (600) according to any of claims 8-10, wherein the second and/or third components are further adapted to generate a shift vector matrix from the shift vector.
12. The image processing system (600) according to any of claims 8-11, comprising a fourth component configured to provide normalization data to the second component, wherein the normalization data normalizes pixel values corresponding to images of varying size.
13. The image processing system (600) according to any of claims 8-12, wherein the third component is configured to reduce the shift vector matrix to a four by five matrix, wherein the four by five matrix is comprised of a four by four identity matrix augmented by a fifth column, wherein entries of the fifth column represent the lens coefficients.
PCT/EP2009/062780 2009-01-30 2009-10-01 Method and system for lens aberration detection WO2010086037A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP09305084.7 2009-01-30
EP09305084 2009-01-30

Publications (1)

Publication Number Publication Date
WO2010086037A1 true WO2010086037A1 (en) 2010-08-05

Family

ID=41317929

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2009/062780 WO2010086037A1 (en) 2009-01-30 2009-10-01 Method and system for lens aberration detection

Country Status (1)

Country Link
WO (1) WO2010086037A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5369450A (en) * 1993-06-01 1994-11-29 The Walt Disney Company Electronic and computational correction of chromatic aberration associated with an optical system used to view a color video display
US5818527A (en) * 1994-12-21 1998-10-06 Olympus Optical Co., Ltd. Image processor for correcting distortion of central portion of image and preventing marginal portion of the image from protruding
US6323934B1 (en) * 1997-12-04 2001-11-27 Fuji Photo Film Co., Ltd. Image processing method and apparatus
US6747702B1 (en) * 1998-12-23 2004-06-08 Eastman Kodak Company Apparatus and method for producing images without distortion and lateral color aberration
US20040218813A1 (en) * 2003-04-30 2004-11-04 Miyuki Okada Image processing device, image processing method, and image capturing device
US20080062409A1 (en) * 2004-05-31 2008-03-13 Nikon Corporation Image Processing Device for Detecting Chromatic Difference of Magnification from Raw Data, Image Processing Program, and Electronic Camera
WO2009112309A2 (en) * 2008-03-12 2009-09-17 Thomson Licensing Method and system for lens aberration correction

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BASU A ET AL: "Modeling fish-eye lenses", INTELLIGENT ROBOTS AND SYSTEMS '93, IROS '93. PROCEEDINGS OF THE 1993 IEEE/RSJ INTERNATIONAL CONFERENCE ON YOKOHAMA, JAPAN 26-30 JULY 1993, NEW YORK, NY, USA, IEEE, US, vol. 3, 26 July 1993 (1993-07-26), pages 1822 - 1828, XP010219209, ISBN: 978-0-7803-0823-7 *
BOULT T E ET AL: "Correcting chromatic aberrations using image warping", PROCEEDINGS OF THE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION. CHAMPAIGN, IL, JUNE 15 - 18, 1992; [PROCEEDINGS OF THE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION], NEW YORK, IEEE, US, vol. -, 15 June 1992 (1992-06-15), pages 684 - 687, XP010029284, ISBN: 978-0-8186-2855-9 *
J.H. GOODNIGHT: "A tutorial on the SWEEP Operator", THE AMERICAN STATISTICIAN, vol. 33, no. 3, 1979, pages 149 - 158, XP002557168 *
KYEONGTAE HWANG ET AL: "Correction of lens distortion using point correspondence", TENCON 99. PROCEEDINGS OF THE IEEE REGION 10 CONFERENCE CHEJU ISLAND, SOUTH KOREA 15-17 SEPT. 1999, PISCATAWAY, NJ, USA,IEEE, US, vol. 1, 15 September 1999 (1999-09-15), pages 690 - 693, XP010368263, ISBN: 978-0-7803-5739-6 *
LI, H. AND HARTLEY, R.: "A non-iterative method for correcting lens distortion from nine point correspondences", PROC. OF THE OMNIVISION ICCV WORKSHOP, 2005, pages 1 - 4, XP002557167 *
LIU HONG ET AL: "Lens distortion in optically coupled digital x-ray imaging", MEDICAL PHYSICS, AIP, MELVILLE, NY, US, vol. 27, no. 5, 1 May 2000 (2000-05-01), pages 906 - 912, XP012011162, ISSN: 0094-2405 *

Similar Documents

Publication Publication Date Title
US10282822B2 (en) Digital correction of optical system aberrations
US9142582B2 (en) Imaging device and imaging system
KR101633946B1 (en) Image processing device, image processing method, and recording medium
JP5358039B1 (en) Imaging device
US8482659B2 (en) Image processing apparatus and image processing method
Kang Automatic removal of chromatic aberration from a single image
US20110193997A1 (en) Image processing method, image processing apparatus, and image pickup apparatus
EP3261328A2 (en) Image processing apparatus, image capturing apparatus, image processing method, and computer-readable storage medium
JP6786225B2 (en) Image processing equipment, imaging equipment and image processing programs
CN112070845A (en) Calibration method and device of binocular camera and terminal equipment
WO2012137437A1 (en) Image processing apparatus and image processing method
US8937662B2 (en) Image processing device, image processing method, and program
JP7234057B2 (en) Image processing method, image processing device, imaging device, lens device, program, storage medium, and image processing system
WO2012086362A1 (en) Image processing device, program thereof, and image processing method
CN110520768B (en) Hyperspectral light field imaging method and system
JP6578960B2 (en) IMAGING DEVICE, IMAGING METHOD, IMAGING PROGRAM, AND RECORDING MEDIUM CONTAINING THE IMAGING PROGRAM
WO2010086037A1 (en) Method and system for lens aberration detection
EP2306397A1 (en) Method and system for optimizing lens aberration detection
KR100835058B1 (en) Image processing method for extending depth of field
KR100843433B1 (en) Method for measuring amount of blurring of micro camera module
JP6331339B2 (en) Imaging apparatus, imaging system including the imaging apparatus, and false color removal method
TWI450594B (en) Cross-color image processing systems and methods for sharpness enhancement
JP7009219B2 (en) Image processing method, image processing device, image pickup device, image processing program, and storage medium
Lluis-Gomez et al. Chromatic aberration correction in RAW domain for image quality enhancement in image sensor processors
van Zwanenberg et al. Camera system performance derived from natural scenes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09783655

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/12/2012)

122 Ep: pct application non-entry in european phase

Ref document number: 09783655

Country of ref document: EP

Kind code of ref document: A1