WO2009141403A1 - Correction of optical lateral chromatic aberration in digital imaging systems - Google Patents

Correction of optical lateral chromatic aberration in digital imaging systems

Info

Publication number
WO2009141403A1
Authority
WO
Grant status
Application
Patent type
Prior art keywords
lens
image
method according
correction
colour
Prior art date
Application number
PCT/EP2009/056185
Other languages
French (fr)
Inventor
Julia Dietlmeier
John Mallon
Paul Francis Whelan
Original Assignee
Dublin City University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration, e.g. from bit-mapped to bit-mapped creating a similar image
    • G06T5/006 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré, halo, even if the automatic gain control is involved
    • H04N5/217 Circuitry for suppressing or minimising disturbance, e.g. moiré, halo, even if the automatic gain control is involved in picture signal generation in cameras comprising an electronic image sensor, e.g. digital cameras, TV cameras, video cameras, camcorders, webcams, to be embedded in other devices, e.g. in mobile phones, computers or vehicles
    • H04N5/2173 Circuitry for suppressing or minimising disturbance, e.g. moiré, halo, even if the automatic gain control is involved in picture signal generation in cameras comprising an electronic image sensor, e.g. digital cameras, TV cameras, video cameras, camcorders, webcams, to be embedded in other devices, e.g. in mobile phones, computers or vehicles in solid-state picture signal generation
    • H04N5/2176 Correction or equalization of amplitude response, e.g. dark current, blemishes, non-uniformity
    • H04N5/2178 Correction or equalization of amplitude response, e.g. dark current, blemishes, non-uniformity by initial calibration, e.g. with memory means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/30 Transforming light or analogous information into electric information
    • H04N5/335 Transforming light or analogous information into electric information using solid-state image sensors [SSIS]
    • H04N5/357 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N5/3572 Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image

Abstract

A technique for removing the effects of a particular optical aberration, Lateral Chromatic Aberration (LCA), in digital imaging systems is presented. The process is targeted at single, multi-spectral sensors, arranged in a predefined pattern with spectral filters. The removal process is based on a pre-mosaicing or pre-interpolation processing of the raw sensor data. This enacts a non-linear displacement of pixels with an inverse profile to the LCA produced by the optical assembly. The pre-mosaicing process is controlled by a multivariate mathematical model of the lens's lateral chromatic characteristics. In this way the lateral chromatic aberration content is precisely removed throughout the image region and lens range. The present invention also relates to an image processing program that realises the image processing device on a computer. Furthermore, the present invention relates to an electronic device that incorporates the image processing device. The method may be employed with predefined test images or images taken generally by a user.

Description

Title

Correction of Optical Lateral Chromatic Aberration in Digital Imaging Systems

Field of the Application The present application relates to the field of optics and in particular to methods of correcting optical aberrations.

Background Of The Application

In an optical system or lens, departures from ideal behaviour are known as aberrations. During the lens design process designers strive to minimise a multitude of aberrations. Notwithstanding, residual aberrations appear in resulting images or projections. The present application is directed at Lateral Chromatic Aberration (LCA), which refers to the slight variation in magnification with wavelength or colour. This phenomenon is commonly observed in digital images where often a coloured fringe appears in high contrast areas, especially near the image borders. The distribution of LCA throughout the image pick-up apparatus, and consequently the image, is commonly assumed to be a change in magnification about the optical axis for a colour channel with respect to another spectral channel.

Thus, for example, some attempts have been made where the magnification is estimated and reversed for individual colours. For example, JP2000299874 describes a method where the chromatic aberration is calculated at one location in the image and this correction is applied throughout the image to reduce its influence. JP2002344978 adjusts the magnification of each colour component of the image to ensure a minimum difference between colour channels.

In practice the LCA distribution is highly non-linear as there are many factors that contribute to this profile, which results in poor performance of the above described methods in certain instances. One area that has been traditionally concentrated on is lens design, as it should be appreciated that, whilst the present application is directed to digital imaging techniques, LCA was present before the development of digital imaging techniques. Accordingly, particular lens designs are employed to correct for chromatic aberration. Achromatised lenses cancel chromatic aberration for two selected wavelengths. Lenses that correct for three wavelengths are known as apochromatic, while superachromatic lenses correct for four wavelengths. Whilst these corrected lenses have an improved performance with respect to uncorrected lenses, they are typically designed to correct the image in the center portion and LCA still occurs at the edges. Moreover, images acquired with these types of lenses still present a secondary spectrum, which is referred to as residual chromatic aberration. This residual chromatic aberration exhibits a varying intensity across the surface of the sensor and accordingly in the captured image.

Errors or inaccuracies in lens manufacture and assembly lead to a more complex chromatic wavefront, which, among other effects, displaces the apparent centre of chromatic aberration away from the optical axis. Slight misalignments of lens elements due, for example, to zoom barrel displacements or otherwise also contribute to displacing the apparent centre of aberration and, among other effects, contribute to a tangential as well as radial LCA profile. The combination of these effects and others leads to a highly non-linear lateral chromatic aberration profile on a sensor such as a CCD device. Furthermore, for variable focal length, focus or aperture lenses, the LCA content is further modified in a non-linear fashion with focal length, focus and/or aperture.

US20070097267 describes the suppression of chromatic aberration within the gamma correction process. This method is fundamentally ineffective in the removal of chromatic aberration as the correlations introduced through the demosaicing process introduce additional false colour artefacts in the image.

US2004150732 describes a method that applies correction employing interpolation based on stored blocks of correction data. US2004240726 describes a method where zonal correction is applied during the interpolation process. These methods have the drawback that they cannot be used with general interpolation techniques.

US20070242897 describes a technique where the correction is applied at zonal sections throughout the image, centered on the optical axis. More fundamentally, all of these methods, being region based, do not completely remove the aberration at every pixel. Correction levels are chosen iteratively, reducing processing efficiency, and correction levels based on scene content can potentially result in erroneous correction.

JP2006135805 describes a method where correction is applied based on a storage map. Similarly, JP2001186533 appears to describe a correction system that applies a correction based on a pre-stored map for every pixel in the image. This correction map is generated by viewing a pattern whose aberration-free image is stored in advance for the lens which is fixed to the camera. It does not take into account settings other than those at which the pattern was photographed. Both methods require large storage space for the correction map.

US6697522 outlines a method for use in a colour scanner where correction is applied based on a stored memory coefficient. Whilst approaches such as this provide correction, they are limited to the settings at which the correction data was obtained. Thus, the chromatic aberration content at different lens settings will not be removed effectively.

US20050179788 and US2005168614 describe similar systems to the above with the additional consideration of variations in the optical axis or center of aberration due to camera shake. While the latter two techniques consider the shift in the optical axis due to shake, no consideration is given to the numerous additional factors which might perturb the apparent center of the aberration. These include camera lens misalignment, errors in lens manufacture and barrel tilt. Aside from this issue, the described re-magnification removal process cannot accommodate the non-linear correction that is required to completely remove the effects of the aberration. These methods are again based on a re-magnification implementation of the correction. The non-linear elements are thus ignored and remain. The method employs the lens setting as input and generates a single control parameter to drive the magnification or reduction process. However, the method is limited in that it assumes that LCA is linear whereas it is not.

In addition a variety of lenses are known in which the optical axis is deliberately shifted. These lenses would include tilt shift and swing lenses or combinations thereof which are used for example in architectural photography. None of the above described methods are suitable for use with such lenses.

The present invention seeks to provide an improved method of correcting LCA in Digital Imaging Systems.

Summary

The present application addresses the deficiencies in the prior art by allowing for the removal of Lateral Chromatic Aberration throughout the range of operation of a photographic lens and across the entire sensor.

To achieve this, the present application provides a method for creating a model for correcting LCA in an imaging system, the method comprises: a) setting the lens of the imaging system to an initial setting, b) using the imaging system to acquire a RAW image, comprising at least two colour components, of a test pattern, where the test pattern has a plurality of test points defined therein, c) determining the LCA shift that has occurred for at least one colour at each test point for the particular lens setting, d) repeating steps b) and c) for different lens settings, e) using the determined LCA shifts in a fitting method to determine appropriate constants for a model comprising a plurality of equations, where the model may be employed to estimate an LCA correction value for any pixel in an image, where the inputs to the model comprise the pixel's location and the current lens settings.

The application also provides a correction system for an imaging system comprising a lens having one or more settings and an image sensor arrangement, the image sensor providing a RAW image having at least two colour components, the correction system applying to each pixel of at least one colour a correction equation to correct for LCA, where the constants of the equation have been previously determined and the inputs to the equation comprise the individual pixel location and the at least one lens setting.

Accordingly, a first embodiment provides a method of correcting for Lateral Chromatic Aberration in a lens comprising the steps of: acquiring a RAW image comprising at least three colour components and at least one lens setting corresponding to those of the RAW image, applying a first mathematical correction to a first colour component of the RAW image to provide a corrected first colour component, applying a second mathematical correction to a second colour component of the RAW image to provide a corrected second colour component, wherein the first and second mathematical corrections are performed individually on pixels of the individual colour components by application of a series of non-linear equations, wherein the coefficients for the series of non-linear equations have been determined from previous measurements of chromatic aberration at a series of points in a test image for a plurality of different lens settings and the inputs to the equations comprise the position of the pixel and the at least one lens setting, combining the corrected first colour component and the corrected second colour component with the third colour component to provide a corrected raw image.

In contrast to the prior art this method allows for accurate correction of chromatic aberration at each pixel and for different lens settings in a form which allows for fast computation and minimal storage of correction information in contrast to the use of maps which result in a correction map of the same size as the image at least dimensionally. In addition, in contrast to zonal or regional based methods, a correction may be determined for each individual pixel.

Key to achieving the benefits of the present application is the compact representation of the lens chromatic aberration characteristics through a multivariate mathematical model. This requires minimal storage in a device, yet provides correction for every conceivable pixel. Implementation of the removal process may use a process such as bi-linear or cubic interpolation. The removal process displaces each pixel by the inverse amount of lateral chromatic aberration at that location.

The removal process is made efficient by requiring only one quarter of the time required to process a full colour channel. This is achieved by processing the RAW image data, which is also essential for the complete and correct removal of the aberration.
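By way of illustration only, the following Python/NumPy sketch shows how a single-sensor RAW mosaic separates into its colour sub-planes; an RGGB layout is assumed here (the patent does not fix a particular CFA order), and the quarter-sized red and blue sub-planes are why pre-mosaic correction touches far fewer pixels than correcting a fully interpolated channel.

```python
import numpy as np

def split_bayer_planes(raw):
    """Split a single-sensor RAW mosaic into its colour sub-planes.

    Assumes an RGGB layout (an illustrative assumption); other CFA orders
    simply permute the row/column offsets. The red and blue sub-planes each
    hold one quarter of the sensor pixels, the two green sub-planes one
    quarter each.
    """
    r = raw[0::2, 0::2]    # red sites
    g1 = raw[0::2, 1::2]   # green sites on the red rows
    g2 = raw[1::2, 0::2]   # green sites on the blue rows
    b = raw[1::2, 1::2]    # blue sites
    return r, (g1, g2), b
```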

A further embodiment provides a method for creating a model for correcting LCA in an imaging system having a lens with multiple settings, the method comprises: a) retrieving a RAW image acquired by the imaging system having a first lens setting, the RAW image comprising a reference colour component and at least one other colour component, b) determining test points from high regions of change in each component, c) determining the LCA shift that has occurred for the at least one colour at each test point with respect to the reference colour component for the particular lens setting, d) repeating steps a), b) and c) for images acquired with different lens settings, e) using the determined LCA shifts in a fitting method to determine the constants of a model comprising a plurality of equations, where the model may be employed to estimate an LCA correction value for any pixel in an image, where the inputs to the model comprise the pixel's location and the current lens settings. It will be appreciated that the high regions of change are with respect to pixel intensity. In this embodiment, the method may be applied for two colour components and constants are determined separately for each colour component. Suitably, the imaging system employs RGB format. The Green component may be selected as the reference colour component and the model parameters determined to correct the Red and/or Blue components.

For any of these embodiments, the lens setting may comprise one or more of the following: a) the aperture setting, b) where the lens is a zoom lens, the selected focal length, c) the focus distance, d) where the lens is a shift lens, the degree of lens shift, e) where the lens is a tilt lens, the degree of lens tilt, f) where the lens is a swing lens, the degree of swing.

Brief Description Of The Drawings

The present invention will now be described with reference to the accompanying drawings in which:

Figure 1 is an exemplary process flow for a calibration process according to the present application,

Figure 2 is a test set-up for the calibration process of Figure 1,

Figure 3 is a correction process employing the calibration data obtained from a process and set-up for example as shown in Figures 1 and 2 respectively,

Figure 4 is a further exemplary process according to the present application,

Figure 5 illustrates experimental data to explain part of the process of Figure 4, and

Figure 6 illustrates exemplary measurement data obtained using the process of Figure 4.

Detailed Description Of The Drawings The different elements of the process will now be explained with reference to some exemplary embodiments. In a first embodiment, a first process acquires test images as part of an initial calibration step whereas a second process uses an existing library of images acquired by an imaging system for the initial calibration process. In both cases, the data obtained from the calibration step is subsequently employed in a correction process to correct LCA in images.

The first calibration process comprises a series of steps, as illustrated in the process flow of Figure 1. The initial step comprises the setting up of the imaging apparatus, such as an SLR camera, in a test configuration. The camera 30 is set up on a tripod 36 or other support and directed at a reference image 38. As will be explained below, the reference object is suitably selected to simplify the subsequent processing of the data. The reference object is positioned a distance away from the camera. This distance is loosely based upon the focal length of the lens. For different focal lengths of a lens, the reference object may be placed at different distances to ensure that the reference object presented to the image sensor fills the image, i.e. there is no border. To assist the process, optionally a track or similar arrangement may be provided by which the distance between the camera and reference object may be adjusted. Suitable markings may be provided on the track to identify the appropriate positions depending on the selected focal length of the lens. It is not important that the exact same image or view position is presented to the sensor for each acquired image, since the LCA movements will be detected by reference to differences between the different colours in the one image at the positions of the identified corners. For the numerical minimisation techniques described below it is not necessary that the LCA measurement sites be the same from image to image; it is sufficient that they are known.

As a measure of aberration is being obtained, it is important to capture an image of a specific known target so that the captured image and target may be compared. Thus the reference object (target) is suitably selected to facilitate the subsequent image analysis aspect of the calibration process. An example of a suitable reference object is a planar surface with a checkerboard pattern defined thereon. A checkerboard pattern is suitable since the positions of the corners of the individual squares may readily be identified using conventional image derivative processing such as corner detection techniques. Suitably, the number of squares in the pattern would ensure that each square represents a pixel area in the image of x by x pixels, where x is suitably a value in the range of 40 to 200 pixels depending on the sensor size and the degree of improvement required.

Once the camera, lens 32 and target have been appropriately set up, an image is acquired by the camera. Image analysis is then performed on the acquired image to identify the positions in a reference plane of the corners of the squares in the checkerboard pattern. The identification of the corner positions may readily be implemented using conventional corner detection techniques which would be familiar to those skilled in the art. Suitably in an RGB sensor camera, the Green plane is used as the reference as it is the one generally most densely sampled by the colour filters of the CFA (Colour filter array).

The process of identifying the corner positions is repeated for the first of the remaining colour planes, i.e. Red or Blue in an RGB sensor. Once the corner positions have been determined, they are compared to the corner positions of the reference plane to determine the displacement of each corner with respect to its corresponding corner in the reference plane. These displacements are stored along with the lens settings. The process is then repeated for any remaining colour planes.
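A minimal sketch of this measurement step is given below, using OpenCV's checkerboard corner detector as one possible implementation (the patent does not prescribe a particular detector); the board size `pattern_size` and the sub-pixel refinement settings are illustrative assumptions.

```python
import cv2
import numpy as np

def plane_corners(plane, pattern_size=(9, 6)):
    """Locate checkerboard inner corners, to sub-pixel accuracy, in one colour plane.

    `plane` is a single colour plane (Green reference, Red or Blue) taken
    from the RAW data; `pattern_size` is the number of inner corners of the
    test target and is an illustrative assumption.
    """
    img = cv2.normalize(plane, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    found, corners = cv2.findChessboardCorners(img, pattern_size)
    if not found:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    corners = cv2.cornerSubPix(img, corners, (5, 5), (-1, -1), criteria)
    return corners.reshape(-1, 2)

# Per-corner displacement of the Red plane relative to the Green reference:
# shifts = plane_corners(red_plane) - plane_corners(green_plane)
```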

Accordingly, the Red and Blue plane displacements with respect to the Green plane are recovered. In contrast to some of the prior art, the present application applies this process to the RAW data obtained from the sensor to recover un-correlated, independent estimates of the LCA throughout the sensor area. Compared with full colour LCA estimates these are significantly more accurate. It will be appreciated that corrections for LCA are the inverse values of the LCA estimates, since the purpose of the method is to negate displacements in the image introduced by the lens.

These steps are then repeated a number of times for different lens settings, e.g. changes of focal length, aperture and focus settings. This data collection measures the LCA content at different pixels throughout the range of the lens.

The next stage in the process compacts the acquired LCA measurements into a compact numerical form which returns the correction values for every pixel for each of the non-reference planes for any given lens settings.

In particular, the correction process is embedded in a multivariate model of the lens. This model represents the highly non-linear nature of the aberration throughout the lens. The variables of this model are estimated using the collection of data from the measurement process through a non-linear least squares procedure. This compact model representation can be interrogated for every pixel in the image to recover the appropriate correction. Uniquely, the removal process can be conducted using standard methods such as bi-linear or cubic interpolation. The removal process may be conducted efficiently on the RAW data channels, where only active pixels are considered. This reduces the processing time required, and is essential for proper subsequent interpolation of the corrected RAW data. The pre-interpolation removal enacts the discrete inverse of the multivariate LCA function described above. This data is then passed directly to any interpolation technique, where the resulting image will exhibit a significant reduction in and even elimination of lateral chromatic aberration.

The mathematical representation of the Lateral Chromatic Aberration correction quantities for a specific focal, aperture and focus setting is given in x and y pixel coordinates by the equations:

δ_x = x(c_1 r² + c_2) + c_3(r² + 2x²) + 2 c_4 x y

δ_y = y(c_1 r² + c_2) + 2 c_3 x y + c_4(r² + 2y²)     (1)

where r² = x² + y².

This models a particular colour frequency or plane of interest (g), and a reference plane (f). Here x = pix_x − c_x and y = pix_y − c_y represent the coordinates of the image with the origin at the aberration centre (c_x, c_y), where (pix_x, pix_y) is the image pixel location with reference to the predetermined origin (generally the top left hand corner). The vector c = (c_1, c_2, c_3, c_4)^T is a correction parameter vector. This model is based on a fourth order approximation to the wave aberration equation, as described in a previous paper by the present inventor (Calibration and removal of lateral chromatic aberration in images, Pattern Recognition Letters 28(1):125-135, 2007), the entire contents of which are incorporated herein by reference.
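The displacement model of equation (1) can be evaluated directly at any pixel. The following Python/NumPy sketch is an illustration only; the function and argument names are not from the patent, and it assumes the equation (1) form given above.

```python
import numpy as np

def lca_shift(pix_x, pix_y, c, centre):
    """Displacement of the colour plane of interest relative to the reference.

    `pix_x`, `pix_y` are pixel coordinates (scalars or arrays) relative to
    the image origin, `centre` is the aberration centre (c_x, c_y) and
    `c` the parameter vector (c_1, c_2, c_3, c_4).
    """
    x = np.asarray(pix_x, dtype=float) - centre[0]
    y = np.asarray(pix_y, dtype=float) - centre[1]
    r2 = x * x + y * y
    dx = x * (c[0] * r2 + c[1]) + c[2] * (r2 + 2 * x * x) + 2 * c[3] * x * y
    dy = y * (c[0] * r2 + c[1]) + 2 * c[2] * x * y + c[3] * (r2 + 2 * y * y)
    return dx, dy
```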

The basic model is only valid for fixed lens settings. In a lens with any or all of the following variable settings: focal length, aperture and focusing, the chromatic aberration content in the image varies. These variations are non-linear and have differing levels of impact on the amount of LCA. Experiments show that focal length has the greatest influence, and focusing has the least impact on the chromatic aberration content. The lens settings are incorporated into the above model by varying the parameter vector c based on the lens settings. These variations are modelled according to:

[Equation image not reproduced: each correction parameter c_n is expressed as a combination of a focal-length term Ψ(z)_n, an aperture term Ψ(s)_n and a focus term Ψ(f)_n.]

where the index n represents the parameter id, i.e. 1 to 4, Ψ(z)_n models the variations caused by the focal length changes, Ψ(s)_n represents the aperture setting changes and Ψ(f)_n represents the focus changes. Specifically, for each parameter in the above model this becomes:

[Equation images not legible in this text: each of the parameters c_1 to c_4 is expanded in terms of the lens settings (focal length, aperture and, where applicable, focus) with a set of coefficients k_1, k_2, ...; these coefficients are the constants estimated during calibration.]

By applying a suitable numerical minimisation technique these parameters are determined from the measured data. Once determined, they may be applied to a raw image to correct each pixel, provided the lens settings are known.
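As a rough illustration of this minimisation step for a single, fixed lens setting, the four parameters and the aberration centre can be estimated with an off-the-shelf non-linear least-squares routine. The sketch below reuses the `lca_shift` function given earlier; the choice of SciPy's solver is an assumption rather than a routine prescribed by the patent, and for a variable lens the parameter vector would itself be expanded in the lens settings before fitting.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_lca_model(pix, measured, centre0):
    """Fit (c_1..c_4) and the aberration centre to measured LCA shifts.

    `pix` is an (N, 2) array of test-point positions, `measured` an (N, 2)
    array of displacements of one colour plane relative to the reference,
    and `centre0` an initial guess for the aberration centre.
    """
    def residuals(p):
        dx, dy = lca_shift(pix[:, 0], pix[:, 1], p[:4], (p[4], p[5]))
        return np.concatenate([dx - measured[:, 0], dy - measured[:, 1]])

    p0 = np.concatenate([np.zeros(4), np.asarray(centre0, dtype=float)])
    return least_squares(residuals, p0).x
```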

Although the above method has been described with respect to focus, aperture and focal length changes, it will be appreciated that the method may also be applied to tilt, shift and/or swing lenses or combinations thereof, in which LCA displacement measurements as described above are taken for different degrees of tilt, shift and/or swing and the numerical minimisation technique is applied with the degree of tilt, shift and/or swing provided as one or more lens setting inputs in place of or in addition to focus, aperture and focal length settings. Moreover, it will be appreciated that generally these specialised lenses are of fixed focal length and accordingly the focal length parameter may be replaced with the degree of tilt, shift or swing as appropriate.

The techniques of the present application may be applied in a variety of different ways in both cameras and projection devices. In the case of a camera, the method is applied after the image has been sensed at the camera sensor, whereas in a projection device it is applied before the image is presented to the projecting elements.

An exemplary process for correcting a subsequently acquired image using the previously determined parameters is shown in Figure 3. It will be appreciated from the discussions above that the present correction process is applied to the raw image data in advance of any demosaicing process. Once the raw image is acquired 50, the model is then applied 52 using the lens settings at which the raw image was acquired in order to estimate the correction required at each pixel in a first colour plane, e.g. Red in the case of an RGB image where Green was taken as the reference plane. These corrections are applied 54 to produce a corrected Red plane. The process is then repeated 56 for the second colour plane, e.g. Blue, to produce a corrected Blue plane. The corrected Blue and Red planes are then combined 58 with the reference Green plane to produce a corrected RGB RAW image. Depending on the configuration of the camera, the corrected image may then be stored in RAW format or demosaiced 60 and stored in a compressed form such as JPEG.
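A sketch of the per-plane correction steps 52 and 54 is given below, using bilinear resampling as suggested above. The handling of the Bayer sub-plane offsets and the direction of the resampling are illustrative assumptions, and `lca_shift` is the model sketch given earlier, not code from the patent.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def correct_plane(plane, c, centre, offset=(0, 0), step=2):
    """Resample one RAW colour sub-plane so that its LCA displacement is undone.

    `plane` is e.g. the Red Bayer sub-plane, `offset` its (row, col)
    position within the mosaic and `step` the CFA period. Each output pixel
    is fetched from the position where the lens actually imaged it, i.e.
    the modelled displacement is added back before bilinear interpolation.
    """
    rows, cols = np.mgrid[0:plane.shape[0], 0:plane.shape[1]]
    pix_y = rows * step + offset[0]          # full-sensor coordinates
    pix_x = cols * step + offset[1]
    dx, dy = lca_shift(pix_x, pix_y, c, centre)
    src = np.array([rows + dy / step, cols + dx / step])
    return map_coordinates(plane, src, order=1, mode="nearest")
```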

It will be appreciated that, whilst the important factor is that the model is based on measurements taken of RAW image data and that the method is applied to RAW image data, the compact nature of the model is such that, while the model may be applied within the camera, the method need not be applied directly within the camera and may instead be applied to the captured raw images using external image processing software. The external software may be specifically for the purpose of correcting for LCA or it may be provided as a plug-in for more general image processing software, such as for example Adobe Photoshop™.

In one arrangement, a kit is provided to a user including a reference image (checkerboard) and the software to produce correction information for their individual lenses. This information may then be stored and applied as required to images taken using a particular lens to correct for LCA. Conventionally, digital cameras include camera/lens setting information with the RAW image data. Typically, this information would include the focal length setting of the lens and the aperture selected. Accordingly, it would be advantageous if the focus information was also provided by the camera with the raw image. It will be appreciated that this may require modification to some designs of cameras/lenses to include a sensor for this information. As an alternative, the software may require the user to input an estimate of the focus distance, typically the distance from the object being photographed. As it has been determined by the experiments of the present inventor that focus least affects the LCA changes, the software may simply ask the user to identify whether the focus was near, far or midway.

Whilst such an arrangement is ideally suited to the experienced amateur/professional, it is unlikely to be favoured by the less experienced user. Moreover, as only higher end models of cameras provide images in RAW format to the user, most others using the JPEG format, it would be preferable to include the correction process within the camera. In addition, if the process is included in the camera, less expensive lenses may be employed to achieve comparable results to more expensive lenses where the correction process is not employed. Thus, for example, the process may readily be employed in compact digital cameras and mobile phones to improve image quality without increasing the optics. To achieve this, the cameras may be calibrated during or at the end of the manufacturing process to determine the correction parameters, which may subsequently be employed to correct acquired images before the images are demosaiced and stored as JPEGs or in another compressed format.

In more expensive cameras, such as digital SLRs where the lenses may be changed, the firmware of the camera may be configured to include the above described correction process. In this arrangement, the camera firmware may be configured initially or updated to determine the correction parameters. The correction parameters may be determined by running through a calibration routine performed by the firmware for each lens of the user. The determined calibration settings may then be associated with the individual lens, e.g. by reference to the identification of the lens when it connects to the camera, or alternatively by selection by the user from a menu. It will be appreciated that users may prefer not to have this complexity or they may prefer to share the information between several cameras. In this scenario, the parameters for the correction may be stored in a memory card or similar and loaded into the camera as required. This type of arrangement is known, where users' custom settings may be saved and retrieved from a memory card.

In another arrangement, the lenses are calibrated during their manufacturing process to determine appropriate correction parameters and these parameters are stored as meta data within the lens for loading by a camera subsequently when connected. The correction of an image taken with a lens could then be implemented onboard the device's hardware (similar to the white balance function) or on an external processor.

In addition to imaging systems for cameras, the technology may also be applied to industrial inspection systems and image projection systems. In the case of image projection systems it will be appreciated that the LCA occurs in reverse compared to a camera, i.e. the original image is perfect and it is the projection through the lens that introduces LCA. To address this, the corrections would be applied to the data sent to the projector (or indeed within the projector) such that, when the corrected image is transmitted through the lens, the viewer sees the original and thus uncorrected image without LCA; i.e. correction for the aberration is implemented in reverse through the deliberate introduction of chromatic aberration into the driving image. The lens aberrations will subsequently cancel these deliberately introduced aberrations to render an aberration-free image. The image processing path is modified slightly to accommodate this, as would be appreciated by the person skilled in the art.

The description is given with respect to an electronic camera that incorporates a single-plate type image pickup device having colour filters of RGB etc. arranged on an image pickup plane, and which generates RAW data of each colour component arranged at each pixel. As the aberration source is independent of the pick-up, this technology could equally be employed in other devices that utilise lenses, such as projectors. In cameras, full colour images are generated from this RAW data through an interpolation or demosaicing process. Numerous techniques for interpolation are in use, and they strive to minimise the introduction of colour artefacts into the full colour image. Although the application has been described with respect to RGB sensors, it will be appreciated that the application may be applied to any colour imaging sensor.

Whilst it will be understood that the use of a test pattern is advantageous, as its characteristics, and thus those of its test points, are known, it would be advantageous if a more general approach which was not reliant upon a predetermined test pattern could be employed.

Accordingly, a further method is now described which employs an alternative means of acquiring measurements of LCA in general images rather than the exemplary chessboard test pattern described above, where the corners are employed as test points.

In contrast to the previously described method which was employed in advance of use by a user of their camera, this alternative method would facilitate the detection of LCA using regular images, thus relaxing the test pattern constraint and data acquisition stage that was previously required.

Advantageously, the method may be applied to a library of images previously acquired by the camera, using the META data associated with the images (e.g. their EXIF tags) to identify different lens settings, to provide a measure of the LCA for these different lens settings and then proceed as before with the calibration or parameter estimation process.

A second advantage of this automatic LCA detection is that it allows for the single or blind correction of the aberration in any image. This involves running the automatic LCA detection, followed subsequently by a parameter estimation phase with constant focal length and aperture. The correction can then be applied as before, for example within photo editing software.

A description of this automatic detection process is now provided with reference to Figure 4 and to the following exemplary method steps, which begin 40 with a RAW image taken by the camera. In contrast to the above, the image taken need not be of a test pattern such as the chess board, but may be any image previously taken by a user of the camera at a particular desired lens setting to be measured for. This lens setting information is generally available from META data associated with the image, e.g. EXIF data.

The RAW image is then analysed using suitable image processing routines in software. The first stage of this analysis is to find high variation regions 42 of the image (i.e. to identify regions in the image from which suitable candidates to employ as test points may be identified) via a fast derivative procedure on the RAW image data to provide one or more derivative planes. This may equally be applied to interpolated or full colour images by someone familiar with the art and the following steps. The derivatives may be calculated in the horizontal and/or vertical directions on each colour plane. This is beneficial as the LCA model above (equation 1) can be resolved into horizontal or vertical directions, and thus can be calibrated with data in at least one of these directions. The uni-directional approach offers a speed increase by a factor of 2, and is thus advantageous. The following explanation continues with the uni-directional derivative, but may equally be applied to a bi-directional derivative by someone familiar with the art.

The derivative planes (one for the reference colour and at least one for the other colours) are then processed to find 44 suitable candidate edges (test points) for measurement. This involves setting a threshold based on a histogram of the derivative plane data, wherein only the top x% of the derivative data is considered. Typical values for x% would be between 5% and 20%. It will be appreciated that different values could be employed depending on the nature of the image; thus in the case of a test pattern such as the checkerboard a lower figure of 5% could be employed, whereas in an image of a natural (non-test) scene a higher figure of 40% might be used, but more usefully a figure of 20% may be used generally. Derivative pixels above this threshold are further reduced to include only those with low-valued supporting regions in the direction of the derivative (horizontal or vertical), i.e. to avoid confusion that may arise if there were multiple regions of high change close together. As stated above, this step is performed on the reference colour plane and at least one other chromatic plane. To ensure that a test point is available for measurement, test sites are only selected if both the reference and chromatic planes satisfy the threshold test. An example of a test site is shown in Figure 5, where the high intensity derivative in both the reference and chromatic plane are shown with low-valued supporting regions on either side.
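A minimal sketch of this thresholding step, assuming a horizontal derivative, NumPy, and reference and chromatic planes of equal size (e.g. Bayer sub-planes); `top_percent` corresponds to the x% figure discussed above, and the low-valued-support check is left as a separate pass.

```python
import numpy as np

def candidate_sites(ref_plane, chroma_plane, top_percent=20.0):
    """Candidate test sites where both planes show a strong horizontal edge.

    Returns (row, col) positions in the derivative planes; the subsequent
    low-valued-support test and the Gaussian fit further prune these.
    """
    d_ref = np.abs(np.diff(ref_plane.astype(float), axis=1))
    d_chr = np.abs(np.diff(chroma_plane.astype(float), axis=1))
    t_ref = np.percentile(d_ref, 100.0 - top_percent)
    t_chr = np.percentile(d_chr, 100.0 - top_percent)
    return np.argwhere((d_ref > t_ref) & (d_chr > t_chr))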

For each test site (test point) identified in the step above, a Gaussian function is now fitted to the selected derivative colour plane. The function for one selected data point on one plane is:

f(x; p) = p_0 · exp( −(x − p_1)² / (2 p_2²) )

A number of active pixels either side of a test site are used to find the parameters p. This is efficiently done using a Gauss-Newton iteration. In the equation, p_1 represents the peak position of the Gaussian (i.e. the mean), p_2 represents the spread of the derivative intensity profile either side of the peak, while p_0 represents the intensity or height of the derivative intensity profile. Comparisons of p_1 over the reference and chromatic planes give the chromatic displacement at that location in the selected derivative direction(s). The parameters p_2 and p_0 are used to perform ancillary checks.

A number of ancillary tests may be performed to ensure a reliable fit is achieved, including, for example:
a. excessive iterations, indicating a poor test site,
b. high residual error, indicating a poor test site,
c. a high value for parameter p_2, indicating a poor bell-type shape to the data,
d. excessive deviation of parameter p_1 from the site location, by more than a predetermined limit for a maximum LCA shift, for example 3 or 4 pixels, which again would indicate a poor test site.
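A sketch of the fit and two of these ancillary checks is given below, using SciPy's Levenberg-Marquardt curve fit in place of the Gauss-Newton iteration described above; the numeric limits for the checks are illustrative assumptions, not values taken from the patent.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, p0, p1, p2):
    """p0: height, p1: peak position (mean), p2: spread of the profile."""
    return p0 * np.exp(-0.5 * ((x - p1) / p2) ** 2)

def fit_edge_peak(profile, max_shift=4.0, max_spread=5.0):
    """Fit a Gaussian to a 1-D derivative profile around one test site.

    Returns the sub-pixel peak position p1, or None if an ancillary check
    (failure to converge, implausible spread, excessive deviation from the
    site centre) rejects the site.
    """
    x = np.arange(len(profile), dtype=float)
    centre = (len(profile) - 1) / 2.0
    try:
        # p0= here is curve_fit's initial-guess keyword, not the Gaussian height
        popt, _ = curve_fit(gaussian, x, np.asarray(profile, dtype=float),
                            p0=[max(profile), centre, 1.0], maxfev=200)
    except RuntimeError:            # too many iterations: poor test site
        return None
    height, peak, spread = popt
    if abs(spread) > max_spread or abs(peak - centre) > max_shift:
        return None
    return peak
```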

The fitting step is repeated for each test site (test point) selected. The returned parameters, in particular the respective p_1 parameters, give a reading of the misalignment of the two selected colour planes at each test point. A small correction should be applied to one of the planes to accommodate the underlying colour filter array pattern, if present, by shifting one of the returned measurements by 1 pixel. An example of the outcome is shown in Figure 6.

For each LCA measurement data set, a model is fitted. Since the LCA model can be resolved into horizontal and vertical directions, only the equations relating to the derivative direction are required. This results in a calibration model for LCA that fits the LCA throughout the image. Any incorrect measurements that may result from colours in the scene may be discarded within a RANSAC style fitting or other similar technique which may reduce erroneous samples.
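A rough sketch of such a RANSAC-style wrapper around the earlier least-squares fit is shown below (it reuses the `fit_lca_model` and `lca_shift` sketches above); the iteration count, minimal subset size and inlier tolerance are illustrative assumptions, and for the uni-directional case only the corresponding residual component would be used.

```python
import numpy as np

def ransac_fit(pix, measured, centre0, n_iter=200, min_pts=8, tol=0.3):
    """Fit the LCA model while discarding measurements corrupted by scene colour.

    Random minimal subsets are fitted and the parameter set with the
    largest consensus (inliers within `tol` pixels) is returned.
    """
    rng = np.random.default_rng(0)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(pix), size=min_pts, replace=False)
        params = fit_lca_model(pix[idx], measured[idx], centre0)
        dx, dy = lca_shift(pix[:, 0], pix[:, 1], params[:4], params[4:6])
        err = np.hypot(dx - measured[:, 0], dy - measured[:, 1])
        inliers = int(np.count_nonzero(err < tol))
        if inliers > best_inliers:
            best, best_inliers = params, inliers
    return best
```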

For lenses with variable focal length and aperture, more extensive modelling can be employed as described with reference to the earlier method employing a test pattern. More particularly, the above described method may be run on a library of images taken by a user where meta data is available to identify the lens settings. For multiple images with the same lens settings, the results can be reinforced by the effective repetition/determination of the model across multiple images. For different lens settings, the methods described above may be applied to develop a model 52 for correcting LCA across all lens settings. This model may then be installed in the camera/image processing software to automatically correct images taken by the camera. As explained above, the advantage of including the correction within firmware or similar in the camera is that the adjustment may be made before the image is converted from RAW to JPEG or other image format.

The words comprises/comprising when used in this specification specify the presence of stated features, integers, steps or components but do not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.

Claims

Claims
1. A method for creating a model for correcting LCA in an imaging system, the method comprises: a) obtaining a RAW image taken by an imaging system with an initial lens setting, the image comprising a reference colour component and at least one other colour component, b) examining the RAW image to identify test sites in the image, c) determining the LCA shift that has occurred for the at least one colour at each test site with respect to the reference colour component at that test site, d) repeating steps a), b) and c) for RAW images with different lens settings, e) using the determined LCA shifts in a fitting method to determine the constants of a model comprising a plurality of equations, where the model may be employed to estimate an LCA correction value for any pixel in an image, where the inputs to the model comprise the pixel's location and the current lens settings.
2. A method according to claim 1, wherein each test site comprises a test point.
3. A method according to any preceding claim, wherein the method is applied for two colour components and constants are determined separately for each colour component.
4. A method according to any preceding claim, wherein the imaging system employs RGB format.
5. A method according to claim 4, wherein the Green component is selected as the reference colour component and the model parameters are determined to correct the Red and/or Blue components.
6. A method according to claim 1, wherein the step of obtaining a RAW image comprises the steps of setting the lens of the imaging system to an initial setting, and using the imaging system to acquire a RAW image of a test pattern, where the test pattern has a plurality of test sites defined therein.
7. A method according to claim 6, wherein the test pattern is a checkerboard pattern and the test points are the corners of the squares.
8. A method according to any preceding claim, wherein the examination of the image for test sites comprises the step of identifying high regions of change in each component as potential test sites.
9. A method according to claim 8, wherein the step of identifying high regions of change comprises determining horizontal and\or vertical derivative values throughout the image.
10. A method according to claim 9, wherein a region is identified as a region of high change when its derivative is within the top 20% of derivative values within the image.
11. A method according to claim 9, wherein a region is identified as a region of high change when its derivative is within the top 5% of derivative values within the image.
12. A method according to any one of claims 8 to 11, wherein high regions of change are considered only if they exist in both the reference colour component and the at least one other colour component.
13. A method according to any one of claims 8 to 12, wherein high regions of change are considered only if they exist in both the reference colour component and the at least one other colour component.
14. A method according to any one of claims 8 to 13, wherein high regions of change are only considered if they have low valued regions of change on at least one side.
15. A method according to claim 14, wherein high regions of change are only considered if they have low valued regions of change on both sides.
16. A method according to any preceding claim, wherein a lens setting comprises the aperture setting.
17. A method according to any preceding claim, wherein the lens is a zoom lens and a lens setting comprises the selected focal length.
18. A method according to any preceding claim, wherein a lens setting comprises the focus distance.
19. A method according to any preceding claim, wherein a lens setting comprises the degree of lens shift.
20. A method according to any preceding claim, wherein a lens setting comprises the degree of lens tilt.
21. A method according to any preceding claim, wherein a lens setting comprises the degree of lens swing.
22. A computer program embodied on a computer medium which when executed by a processor performs the method steps of any preceding claim.
23. A correction system for an imaging system comprising a lens having one or more settings and an imaging element, the imaging system employing RAW image data having at least two colour components, the correction system applying to each pixel of at least one colour a correction equation to correct for LCA where the constants of the equations have been previously determined by the method of any one of claims 1 to 22 and the inputs to the equation comprise the individual pixel location and the at least one lens setting.
24. A correction system according to claim 23, wherein the method is applied to two colour components with previously determined constants for each colour component.
25. A correction system according to claim 23 or claim 24, wherein the imaging system employs RGB format.
26. A correction system according to claim 25, wherein the Red and/or Blue components are corrected.
27. A correction system according to any one of claims 23 to 26, wherein a lens setting comprises the aperture setting.
28. A correction system according to any one of claims 23 to 27, wherein the lens is a zoom lens and a lens setting comprises the selected focal length.
29. A correction system according to any one of claims 23 to 28, wherein a lens setting comprises the focus distance.
30. A correction system according to any one of claims 23 to 29, wherein a lens setting comprises the degree of lens shift.
31. A correction system according to any one of claims 23 to 30, wherein a lens setting comprises the degree of lens tilt.
32. A correction system according to any one of claims 23 to 31, wherein a lens setting comprises the degree of lens swing.
33. A camera comprising the correction system of any one of claims 23 to 32, wherein the imaging element is a sensor.
34. A camera according to claim 33, wherein the sensor is a CCD device.
35. A projector comprising the correction system of any one of claims 23 to 32.
36. A projector according to claim 35, wherein the imaging element is a LCD device.
37. A method of correcting for Lateral Chromatic Aberration in a lens comprising the steps of: acquiring a RAW image comprising at least three colour components and at least one lens setting corresponding to those of the RAW image, applying a first mathematical correction to a first colour component of the RAW image to provide a corrected first colour component, applying a second mathematical correction to a second colour component of the RAW image to provide a corrected second colour component, wherein the first and second mathematical corrections are performed individually on pixels of the individual colour components by application of a series of equations, wherein the coefficients for the series of equations have been determined from previous measurements of chromatic aberration at points in a plurality of images with a plurality of different lens settings and the inputs to the equations comprise the position of the pixel and the at least one lens setting, combining the corrected first colour component and the corrected second colour component with the third colour component to provide a corrected raw image.
38. A method according to claim 37, wherein the plurality of images are images taken of a test pattern.
PCT/EP2009/056185 2008-05-20 2009-05-20 Correction of optical lateral chromatic aberration in digital imaging systems WO2009141403A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0809154.8 2008-05-20
GB0809154A GB0809154D0 (en) 2008-05-20 2008-05-20 Correction of optical lateral chromatic aberration in digital imaging systems

Publications (1)

Publication Number Publication Date
WO2009141403A1 true true WO2009141403A1 (en) 2009-11-26

Family

ID=39596202

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2009/056185 WO2009141403A1 (en) 2008-05-20 2009-05-20 Correction of optical lateral chromatic aberration in digital imaging systems

Country Status (2)

Country Link
GB (1) GB0809154D0 (en)
WO (1) WO2009141403A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102215340A (en) * 2010-04-07 2011-10-12 索尼公司 Imaging apparatus and imaging signal correcting method
DE102016203275A1 (en) 2016-02-29 2017-08-31 Carl Zeiss Industrielle Messtechnik Gmbh Method and apparatus for determining a defocus and method and apparatus for image-based determination of a dimensional size

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5789542B2 (en) * 2012-02-29 2015-10-07 日立マクセル株式会社 Imaging device


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6816625B2 (en) * 2000-08-16 2004-11-09 Lewis Jr Clarence A Distortion free image capture system and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6323934B1 (en) * 1997-12-04 2001-11-27 Fuji Photo Film Co., Ltd. Image processing method and apparatus
US6536907B1 (en) * 2000-02-08 2003-03-25 Hewlett-Packard Development Company, L.P. Aberration compensation in image projection displays
US20040218071A1 (en) * 2001-07-12 2004-11-04 Benoit Chauville Method and system for correcting the chromatic aberrations of a color image produced by means of an optical system
US20080291447A1 (en) * 2007-05-25 2008-11-27 Dudi Vakrat Optical Chromatic Aberration Correction and Calibration in Digital Cameras

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CAO F; GUICHARD F; HORNUNG H; SIBADE C: "Characterization and measurement of color fringing", PROCEEDINGS OF THE SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING USA, vol. 6817, January 2008 (2008-01-01), pages 68170G - 1, XP002543103, ISSN: 0277-786X *
KOZUBEK M; MATULA P: "An efficient algorithm for measurement and correction of chromatic aberrations in fluorescence microscopy", JOURNAL OF MICROSCOPY BLACKWELL SCIENCE UK, vol. 200, December 2000 (2000-12-01), pages 206 - 217, XP002543102, ISSN: 0022-2720 *


Also Published As

Publication number Publication date Type
GB2460241A (en) 2009-11-25 application
GB0809154D0 (en) 2008-06-25 grant

Similar Documents

Publication Publication Date Title
US20090040364A1 (en) Adaptive Exposure Control
US6870564B1 (en) Image processing for improvement of color registration in digital images
US20060093234A1 (en) Reduction of blur in multi-channel images
US20140267243A1 (en) Systems and Methods for Synthesizing Images from Image Data Captured by an Array Camera Using Restricted Depth of Field Depth Maps in which Depth Estimation Precision Varies
US20040155970A1 (en) Vignetting compensation
US20100157127A1 (en) Image Display Apparatus and Image Sensing Apparatus
US7590305B2 (en) Digital camera with built-in lens calibration table
US8866912B2 (en) System and methods for calibration of an array camera using a single captured image
US7327390B2 (en) Method for determining image correction parameters
US8878950B2 (en) Systems and methods for synthesizing high resolution images using super-resolution processes
JP2006050494A (en) Image photographing apparatus
US20080291447A1 (en) Optical Chromatic Aberration Correction and Calibration in Digital Cameras
JP2003116060A (en) Correcting device for defective picture element
US20080062409A1 (en) Image Processing Device for Detecting Chromatic Difference of Magnification from Raw Data, Image Processing Program, and Electronic Camera
JP2006135805A (en) Chromatic difference of magnification correction device, method, and program
WO1999067743A1 (en) Image correcting method and image inputting device
JP2004222231A (en) Image processing apparatus and image processing program
JPH09181913A (en) Camera system
JP2002344978A (en) Image processing unit
JP2000299874A (en) Signal processor, signal processing method, image pickup device and image pickup method
US20080030603A1 (en) Color filter, image processing apparatus, image processing method, image-capture apparatus, image-capture method, program and recording medium
US7865031B2 (en) Method and system for automatic correction of chromatic aberration
US20110149103A1 (en) Image processing apparatus and image pickup apparatus using same
JP2011123589A (en) Image processing method and apparatus, image capturing apparatus, and image processing program
JP2006295626A (en) Fish-eye image processing apparatus, method thereof and fish-eye imaging apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09749893

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.03.2011)

122 Ep: pct application non-entry in european phase

Ref document number: 09749893

Country of ref document: EP

Kind code of ref document: A1