WO2010131210A1 - A system and method for correcting non-uniformity defects in captured digital images - Google Patents

A system and method for correcting non-uniformity defects in captured digital images Download PDF

Info

Publication number
WO2010131210A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
image
data
imaging system
channel
Prior art date
Application number
PCT/IB2010/052108
Other languages
French (fr)
Inventor
Arnaud Obin
Original Assignee
Lord Ingenierie
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lord Ingenierie filed Critical Lord Ingenierie
Publication of WO2010131210A1 publication Critical patent/WO2010131210A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40 Picture signal circuits
    • H04N1/401 Compensating positionally unequal response of the pick-up or reproducing head
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61 Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61 Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N25/611 Correction of chromatic aberration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/67 Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response
    • H04N25/671 Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction

Definitions

  • This invention relates to a system for correcting non-uniformity defects associated with the use of a pixel array imager and a method thereof.
  • Fixed pattern noise (FPN) is a particular noise pattern on digital imaging sensors, often noticeable during longer exposure shots, where particular pixels are susceptible to giving brighter intensities above the general background noise.
  • FPN identifies a temporally constant lateral non-uniformity (forming a constant pattern) in a pixelized imaging system. It is characterized by the same pattern of 'hot' (brighter) and 'cold' (darker) pixels occurring in images taken under the same illumination conditions in an imaging array.
  • The effect of FPN is that groups of pixels exhibit relatively different strengths in their responses to uniform input light.
  • This problem arises from small differences in the individual responsivity of the sensor array (including any local post-amplification stages) that might be caused by variations in pixel size, material, or interference with the local circuitry (i.e. mismatch of circuit structures due to integrated-circuit process variations). It might also be affected by changes in the environment such as different temperatures, exposure times, etc.
  • FPN usually refers to two parameters: the DSNU (dark signal non-uniformity), which is the offset from the average across the imaging array at a particular setting (temperature, integration time) without external illumination (i.e. the sensor has a non-zero output signal depending on the pixel's position in the array), and the PRNU (photo response non-uniformity), which describes the non-uniformity of the gain, or ratio, between the optical power on a pixel and the electrical signal output.
  • The PRNU can be described as the local, pixel-dependent photo response non-linearity (PRNL) and is often simplified to a single value measured near the saturation level, to permit a linear approximation of the non-linear pixel response.
  • FPN is commonly suppressed by flat-field correction (FFC), which uses the DSNU and PRNU to linearly interpolate and reduce the local photo response (non-uniform PRNL) to the array average.
  • Two exposures with equal illumination across the array are necessary (one without light and one close to saturation) to obtain these values.
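As a concrete illustration of flat-field correction, the dark and near-saturation exposures can be combined as follows (a minimal sketch, assuming numpy arrays for the frames; the function name and sample values are hypothetical):

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Flat-field correction: subtract the dark frame (DSNU) and
    rescale by a per-pixel gain derived from a near-saturation flat
    frame (PRNU), pulling each pixel to the array average."""
    gain = (flat - dark).mean() / (flat - dark)
    return (raw - dark) * gain

# A pixel twice as responsive as its neighbours is corrected back
# to a uniform response.
dark = np.zeros(4)
flat = np.array([100.0, 100.0, 200.0, 100.0])  # uniform illumination
raw  = np.array([50.0, 50.0, 100.0, 50.0])     # same scene at half power
corrected = flat_field_correct(raw, dark, flat)  # → all 62.5
```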
  • Another unintended and undesired optical effect caused by imaging systems is vignetting, which reduces an image's brightness or saturation at the periphery compared to the image center.
  • There are several causes of vignetting: mechanical vignetting, optical vignetting, natural vignetting, and pixel vignetting.
  • Mechanical vignetting occurs when light beams emanating from object points located off-axis are partially blocked by external objects such as thick or stacked filters, secondary lenses, and improper lens hoods.
  • the corner darkening can be gradual or abrupt, depending on the lens aperture. Complete blackening is possible with mechanical vignetting.
  • Optical vignetting is caused by the physical dimensions of a multiple-element lens. Rear elements are shaded by elements in front of them, which reduces the effective lens opening for off-axis incident light. The result is a gradual decrease in light intensity towards the image periphery. Optical vignetting is sensitive to the lens aperture and can be completely cured by a reduction in aperture of 2-3 stops (i.e. an increase in the F-number).
  • Natural vignetting (also known as natural illumination falloff) is not due to the blocking of light rays.
  • the falloff is approximated by the cos⁴ law of illumination falloff.
  • the light falloff is proportional to the fourth power of the cosine of the angle at which the light impinges on the film or sensor array.
  • Wide angle rangefinder designs and the lens designs used in compact cameras are particularly prone to natural vignetting.
  • a gradual grey filter or postprocessing techniques may be used to compensate for natural vignetting, as it cannot be cured by stopping down the lens.
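The cos⁴ law above can be illustrated numerically (a minimal sketch; the function name is hypothetical):

```python
import math

def cos4_falloff(angle_deg):
    """Relative illumination under the cos^4 law for light arriving
    angle_deg off the optical axis (1.0 on axis)."""
    return math.cos(math.radians(angle_deg)) ** 4

# At 30 degrees off-axis only 9/16 (~56%) of the axial illumination
# remains, which is why wide-angle designs are especially affected.
falloff = cos4_falloff(30.0)
```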
  • Some modern lenses are specifically designed so that the light strikes the imager parallel or nearly so, eliminating or greatly reducing vignetting.
  • Pixel vignetting only affects digital cameras and is caused by the angle-dependence of the digital sensors. Light incident on the sensor at a right angle produces a stronger signal than light hitting it at an oblique angle. This is due to the non-square dimensions of the individual photodetectors. Most digital cameras use built-in image processing to compensate for optical vignetting and pixel vignetting when converting raw sensor data to standard image formats.
  • microlenses over the image sensor can also reduce the effect of pixel vignetting.
  • the present invention discloses a technique and system for correcting a plurality of non-uniformity defects of different types (e.g. fixed pattern noise (FPN), vignetting, etc.) generated by a pixel array imager (e.g. a linear CCD sensor).
  • a digital imaging system comprises an integral device comprising an image sensor assembly (e.g. a pixel array imager or camera device) adapted for capturing images and generating digital image data indicative thereof; the device is preferably equipped with an integral control unit (image data processor) connected to the image sensor assembly for directly receiving the image data and processing it before transfer to a frame grabber.
  • the frame grabber receives digitally processed image data from the digital imaging system.
  • the acquired and processed image data may be in the form of static images or of a video stream processed continuously or intermittently. Therefore, the image sensor assembly is adapted for capturing static images and/or a plurality of still frames forming a video stream.
  • the control unit is configured and operable to apply direct real-time and continuous processing to the video stream, to continuously generate on the fly corrected output video stream.
  • the image sensor assembly comprises at least one of the following devices: a pixel array imager, a linear CCD sensor, a tri-linear CCD sensor, a line-scan camera device, a tri-CCD camera device.
  • the line-scan camera device comprises a line-scan image sensor chip, and a focusing mechanism.
  • the line-scan camera device uses a single array (e.g. line) of pixel sensors (i.e. linear or tri-linear sensors utilizing only a single line of sensors, or three lines for the three colors), instead of a matrix of them.
  • the digital image data generated by the line-scan camera device is processed by the integral control unit, which receives the one-dimensional line data and creates a two-dimensional image.
  • Each one-dimensional line typically has one or more of primary color channels: R, G, B for color devices, or one channel for monochrome devices.
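The assembly of one-dimensional lines into a two-dimensional image can be sketched as below (a simplified software model; the generator and dimensions are hypothetical, and in the actual device this runs in hardware):

```python
import numpy as np

def assemble_frame(line_source, n_lines):
    """Stack successive one-dimensional line acquisitions into a
    two-dimensional image of shape (n_lines, pixels_per_line, channels)."""
    return np.stack([next(line_source) for _ in range(n_lines)], axis=0)

# Hypothetical source yielding RGB lines of 8 pixels each.
lines = (np.full((8, 3), i, dtype=np.uint8) for i in range(4))
frame = assemble_frame(lines, n_lines=4)   # shape (4, 8, 3)
```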
  • Line-scan technology is capable of capturing data extremely fast, and at very high image resolutions.
  • the video stream is captured in digital form and then displayed, processed, analysed, stored or transmitted in raw or compressed digital form.
  • the frame grabber comprises a memory utility acquiring and storing digitized video stream.
  • the control unit comprises an image processor configured and operable to apply direct real-time processing to the image data.
  • Direct real-time processing comprises correcting the image data for defects induced by the image sensor assembly, by carrying out at least one of the following: (i) concurrently correcting multiple non-uniformity defects induced in the image data by the imaging sensor assembly, and generating substantially uniform processed image data; and (ii) processing the image data to compensate for at least one chromatic aberration, and generating substantially corrected processed image data.
  • control unit comprises a read-out circuit for generating the digital data indicative of acquired images and transmitting the digital data to the image processor.
  • the present invention enables fitting all the camera modules within the camera device (especially reference acquisition and filtering), resulting in no Central Processing Unit (CPU) overhead, no code to write on a workstation for calibration purposes, and no need for back-and-forth exchanges between the frame grabber and the camera.
  • the imaging system comprises a reference acquisition module adapted for defining reference data indicative of one or more reference images.
  • the reference acquisition module may have one of the following configurations: is a part of the image sensor assembly; is a part of the control unit, or is a distributed utility distributed between the image sensor assembly and the control unit.
  • the reference data defined by the reference acquisition module comprises data indicative of a first reference white image and a second reference black image.
  • control unit is configured and operable to apply a linear function to the image data from the image sensor assembly.
  • the linear function is a transformation between the reference data and white and black levels in the image data.
  • the present invention takes into account a plurality of non-uniformity defects of different types, such as fixed pattern noise (FPN), photoresponse non-uniformity (PRNU), vignetting, and lighting non-uniformity, generated by a pixel array imager.
  • the image processing includes the steps of defining reference images, and applying, to data indicative of the captured image, a linear function representative of a transformation between the "white" and "black" reference images (corresponding data) and the "white" and "black" levels in the incoming image.
  • Reference images from a stream of grabbed images are defined.
  • the reference images correspond to "white" and "black" image references.
  • Reference data indicative of the reference images is used for correcting an incoming image, by applying to said data a linear function representative of a transformation between the "white" and "black" reference images (corresponding data) and the "white" and "black" levels in the incoming image. It should be noted that the above correction is performed in the configuration of the pixel array imager itself and at the pixel speed, without the need for data exchange between the imager and a frame grabber.
  • the present invention provides the capability to quickly change the contrast and brightness of the image by changing the parameters of the reference data, such as the output "white" level "BrightTarget" and the output "black" level "DarkTarget".
  • the reference signal may be filtered by a low pass filter to reduce noise.
  • the noise in the reference image, i.e. the random variation of brightness or color information produced by the sensor and circuitry of a digital camera, is to be differentiated from the FPN.
  • the reference signal is preferably acquired by a plurality of linear pixel samples and then filtered to substantially eliminate random noise.
  • the image processor comprises a field-programmable gate array (FPGA) configured and operable to carry out the application of the linear function to the image data.
  • the correction can be implemented by an FPGA included in the array imager to achieve a real-time, on-the-fly correction.
  • the imaging system comprises a reference acquisition module adapted for defining reference data (provide reference calibration) indicative of one or more reference images.
  • the reference data comprises data indicative of a first reference white image and a second reference black image. Therefore, the imaging system is able to itself acquire new black or white reference images, or to continuously refine the existing reference images through continuous analysis of incoming image data, eliminating the need to transfer large amounts of data between the host frame grabber and the imaging system.
  • the continuous analysis of the incoming image data thus comprises continuously identifying the incoming image data as being possible/potential new black or white reference images.
  • connection between the host frame grabber and the imaging system can therefore be a simple low frequency link (e.g. serial link).
  • the low frequency link may be used conveniently to trigger the reference calibration, without impairing the real-time capability of the imaging system.
  • the command sent to the imaging system is thus reduced to a few characters.
  • the output processed data is not corrupted, and the frame grabber does not have to disregard the incoming images (e.g. video stream). Therefore, efficient triggering of the calibration via a serial link is provided, enabling on-the-fly triggering and allowing the calibration to be refined as needed. This is especially useful in web inspection.
  • the image sensor assembly may be a color imager.
  • the image data comprises at least one data piece corresponding to at least one color channel from primary color channels, R, G, B.
  • the system of the present invention may also be used with monochrome or color line-scan cameras (one implementation per color).
  • the system and method of the present invention may also be used for correction of chromatic aberration generated by such an array imager. This is implemented by real-time re-sampling of the image data of the pixel array imager (e.g. by an FPGA included in the imager).
  • the image processor is adapted to process the image data by obtaining a polynomial correction indicative of a corrected shift value defined such that transitions between bright and dark or vice versa occur substantially simultaneously for each channel; and applying the polynomial correction to at least one input image.
  • the correction of the chromatic aberration is performed by: identifying an image data level (e.g. a video signal level) for each pixel number of each RGB channel; applying a band pass filter, acting as a derivative function, to each RGB channel signal and generating a derivative signal; processing the derivative signal to identify at least one extremum in the derivative signal; using the extremum to threshold the derivative signal and obtain the location of a transition between bright and dark or vice versa; calculating a correlation of the derivative signals for each channel pair and, at each transition, obtaining a shift value from one channel (color) to another; and providing data indicative thereof.
  • a polynomial correction, giving the amount of shift required to make the transitions occur simultaneously for each color, is then calculated for each pixel to provide a corrected shift value, and applied to each image acquired from the array to provide an output corrected image.
  • This correction can be implemented by a FPGA included in the array imager to achieve a real-time on the fly correction even on a continuous video stream.
  • this correction may be performed by an external module generating the polynomial correction and transmitting such polynomial correction to the image processor.
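A software analogue of the per-pixel polynomial shift and re-sampling described above (in the patent this runs in an FPGA; here a linear-interpolation sketch with hypothetical names, using a constant shift for illustration):

```python
import numpy as np

def apply_shift_polynomial(channel, coeffs):
    """Re-sample a color channel: pixel n is read from position
    n + shift(n), where shift(n) is a polynomial in the pixel number
    (the per-pixel corrected shift value)."""
    n = np.arange(channel.size, dtype=float)
    shift = np.polyval(coeffs, n)            # highest-order coeff first
    return np.interp(n + shift, n, channel)  # linear re-sampling

# Undo a constant 2-pixel lag of one channel: coeffs = [2.0] means
# shift(n) = 2 for every pixel, so the edge moves 2 pixels earlier.
red = np.array([0.0, 0.0, 0.0, 10.0, 10.0, 10.0, 10.0, 10.0])
aligned = apply_shift_polynomial(red, coeffs=[2.0])
```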
  • the present invention can be used for example for web inspection with line-scan camera devices.
  • a digital camera comprising a light sensitive pixel array assembly for capturing images and generating digital output corresponding to image data indicative of the captured images, and a controller directly connected to the digital output of the light sensitive pixel array assembly for directly receiving and real-time processing the image data, to concurrently correct multiple non-uniformity defects induced in the image data by the light sensitive pixel array, and to generate substantially uniform processed image data.
  • an imaging method comprising: capturing images and generating corresponding digital image data indicative thereof; and directly processing the image data in real-time, concurrently correcting multiple non-uniformity defects in the image data, to thereby generate substantially uniform processed image data; this enables integration of an image processor with an image capture device, eliminating a frame grabber between them.
  • directly processing in real-time of the image data comprises defining at least two reference data corresponding to one reference white image and one reference black image respectively and applying to the image data a linear function representative of a transformation between the reference data and the white and black level in the captured image.
  • defining at least two reference data comprises controllably varying the reference data to control contrast and brightness of the image.
  • directly processing further comprises obtaining data indicative of at least one chromatic aberration and correcting the image data for the at least one chromatic aberration.
  • the processing comprises: obtaining a polynomial correction indicative of a corrected shift value defined such that transitions between bright and dark or vice versa occur substantially simultaneously for each channel; and applying the polynomial correction to at least one input image.
  • the processing comprises: providing an image data level as a function of pixel number for each of the R, G, B channels; applying a band pass filter to the image data level of each RGB channel to generate a derivative signal; identifying at least one extremum in the derivative signal; thresholding the derivative signal to obtain the location of a transition between bright and dark or vice versa; calculating a correlation of the derivative signals for each channel pair in the primary color signals and, at each transition, obtaining a shift value from one channel to another and providing data indicative thereof; and calculating a polynomial correction for each pixel to provide a corrected shift value defined such that the transitions occur substantially simultaneously for each channel.
  • the capturing of images comprises capturing a plurality of still frames forming a video stream.
  • Directly processing the image data in real-time and concurrently correcting defects in the image data comprises applying direct real-time and continuous processing to the video stream, to continuously generate on the fly corrected output video stream.
  • a method for correction of at least one chromatic aberration comprises: providing an image data level as a function of pixel number, the image data comprising data pieces corresponding to one or more of the primary color channels R, G, B; applying a band pass filter to the image data level of each RGB channel to generate a derivative signal; identifying at least one extremum in the derivative signal; thresholding the derivative signal to obtain the location of a transition between bright and dark or vice versa; calculating a correlation of the derivative signals for each channel pair in the primary color signals and, at each transition, obtaining a shift value from one channel to another and providing data indicative thereof; calculating a polynomial correction for each pixel to provide a corrected shift value defined such that the transitions occur substantially simultaneously for each channel, the polynomial correction being indicative of the correction of at least one chromatic aberration; and applying the polynomial correction to at least one input image, to thereby generate a corrected output image.
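The shift-estimation steps above (derivative, extremum, correlation) can be sketched as follows; a simple difference stands in for the band pass filter, and all names and signals are hypothetical:

```python
import numpy as np

def estimate_channel_shift(ch_a, ch_b):
    """Estimate the pixel shift between two color channels at a
    bright/dark transition: differentiate each signal, then take the
    lag at which the cross-correlation of the derivatives peaks."""
    da = np.diff(ch_a)   # derivative signal of channel A
    db = np.diff(ch_b)   # derivative signal of channel B
    corr = np.correlate(da, db, mode="full")
    return int(np.argmax(np.abs(corr))) - (db.size - 1)

# Channel B's falling edge occurs 2 pixels later than channel A's,
# so A's edge position equals B's edge position plus the shift (-2).
a = np.array([10.0, 10.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0])
b = np.array([10.0, 10.0, 10.0, 10.0, 10.0, 0.0, 0.0, 0.0])
shift = estimate_channel_shift(a, b)   # → -2
```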
  • Fig. 1A is a schematic block diagram illustrating the main components of the imaging system of the present invention.
  • Fig. 1B is a graph representing a "grey level" of a video signal vs. pixel number; the dark image reference, the bright image reference, the input raw video signal and the corrected output video signal are represented;
  • Fig. 1C is a schematic block diagram illustrating the differences in magnification between color channels of a tri-linear CCD sensor;
  • Figs. 2A-2C are images captured by conventional digital imaging sensors, in which chromatic aberrations clearly appear;
  • Fig. 3A is a graph representing a video signal level vs. pixel number, for three colors: red, green and blue;
  • Fig. 3B is a graph representing an enlarged view of a specific transition;
  • Fig. 3C is a graph representing a derivative signal of the first falling edge in the image for three colors: red, green and blue;
  • Fig. 3D is a graph representing the correlation result between red and blue channels for the first falling edge;
  • Fig. 3E is a graph representing a corrected profile according to one embodiment of the present invention.
  • Fig. 4A is an image captured by a conventional digital imaging system and Fig. 4B is the same image captured by the digital imaging system according to one embodiment of the present invention.
  • the digital imaging system 100 comprises an integral device comprising: an imaging sensor assembly 102 capturing images and generating image data indicative thereof, and a control unit 104 connected to the image sensor assembly for directly receiving the image data.
  • the control unit 104 comprises an image processor 106 configured and operable to directly process the image data in real-time, thereby correcting a plurality of non-uniformity defects generated by the digital imaging sensor assembly 102.
  • the control unit 104 comprises a read-out circuit for generating the digital data indicative of acquired images and transmitting the digital data to the image processor 106.
  • the digital imaging system 100 generates substantially uniform processed image data 108. In this connection, it should be understood that, typically, image non-uniformity can be corrected by a linear function:
  • Pix_out(n) = a(n) * Pix_in(n) + b(n), where a(n) and b(n) are respectively a pixel-dependent gain and offset.
  • a common way to implement this correction is to set up two tables in a memory utility, generally included in the frame grabber, containing respectively the a and b coefficients, and then apply the above linear function to the incoming pixels in a field-programmable gate array (FPGA), also included in the frame grabber.
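The conventional table-based scheme amounts to the following (a trivial sketch with hypothetical values; in practice the tables live in frame-grabber memory and the multiply-add runs in the FPGA):

```python
import numpy as np

def correct_line(pix_in, a, b):
    """Apply the per-pixel linear correction
    Pix_out(n) = a(n) * Pix_in(n) + b(n) to one video line."""
    return a * pix_in + b

line = np.array([100.0, 120.0, 90.0])   # incoming pixels
a = np.array([1.0, 0.8, 1.1])           # gain table
b = np.array([0.0, 4.0, -9.0])          # offset table
out = correct_line(line, a, b)          # → [100., 100., 90.]
```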
  • the computation of the gain and offset coefficients in a line-scan camera device is left to the user.
  • while the link from the camera device to the frame grabber is fast, the opposite link from the frame grabber to the camera device is generally slower.
  • the link from the frame grabber to the camera device is a serial link (115 kbits/s)
  • the video stream is continuously acquired/captured by the image sensor assembly and therefore, during this upload period, the reference/calibration frame has a constant delay with respect to the incoming images.
  • the beginning of the video line is processed taking into account the up-to-date reference/calibration data, while the end of the line is corrected with the previous calibration data.
  • This implies a loss of video data during a certain period of time (e.g. a few seconds for 20,000 video lines), which is not acceptable in a wide range of applications such as web inspection.
  • if the reference/calibration data were updated during the video line time period, the time of such updating would be substantially greater than the video line time period, which is also not acceptable in a wide range of applications.
  • the present invention solves the above problem by preventing back-and-forth exchanges between the camera device and the frame grabber and by minimizing overhead.
  • the integral control unit 104 directly processes the image data in real-time by defining at least two reference data sets, corresponding to one reference white image and one reference black image respectively, and applying to the image data a linear function representative of a transformation between the reference data and the white and black levels in the captured image.
  • the corrected signal can be expressed as:
  • PixOut(n) = BrightTarget * ( PixIn(n) - DarkRef(n) ) / ( BrightRef(n) - DarkRef(n) ) + DarkTarget
  • n is the pixel number across the array
  • PixIn is the incoming pixel data before correction
  • PixOut is the corrected processed pixel data
  • BrightRef is the bright reference image data
  • DarkRef is the dark reference image data
  • BrightTarget is the output "white" level
  • DarkTarget is the output "black" level.
  • the input grey levels situated between BrightRef and DarkRef are spread by a linear function, between BrightTarget and DarkTarget.
  • the grey level can take values between 0 and 2^n - 1, n being the width of the digital video bus.
  • the result of the calibration calculation is restricted to the range between 0 and 2^n - 1 to match practical values.
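The reference-based correction and range restriction just described can be sketched as follows (hypothetical names; bits plays the role of the digital video bus width):

```python
import numpy as np

def correct_pixels(pix_in, bright_ref, dark_ref,
                   bright_target, dark_target, bits=8):
    """Spread grey levels lying between DarkRef and BrightRef toward
    the [DarkTarget, BrightTarget] output levels, then restrict the
    result to the 0 .. 2**bits - 1 range of the digital video bus."""
    out = (bright_target * (pix_in - dark_ref)
           / (bright_ref - dark_ref) + dark_target)
    return np.clip(out, 0, 2 ** bits - 1)

# One pixel at its bright reference, one at its dark reference.
pix = np.array([200.0, 20.0])
out = correct_pixels(pix,
                     bright_ref=np.array([200.0, 200.0]),
                     dark_ref=np.array([20.0, 20.0]),
                     bright_target=255.0, dark_target=0.0)  # → [255., 0.]
```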
  • Fig. 1B represents the grey level of a video signal vs. pixel number.
  • the dark image reference is represented by curve 1
  • the bright image reference is represented by curve 2
  • the input raw video signal is represented by curve 3
  • the corrected output video signal is represented by curve 4.
  • the imaging system may comprise a reference acquisition module adapted for defining reference data indicative of one or more reference images.
  • the reference acquisition module can be implemented in an FPGA and saved in non-volatile memory for reuse.
  • a command via a serial link activates the white or dark reference acquisition, for a chosen number of video lines.
  • the imaging system of the present invention thus eliminates the need to upload a complete calibration table.
  • the imaging system merely has to initialize the loading of previously acquired data or to initiate a new calibration, internal to the imaging system.
  • the reference signal can be calculated as:
  • Ref(n,p) = α * PixIn(n,p) + (1 - α) * Ref(n-1,p)
  • Ref(n,p) is the filtered (bright or dark) reference for a pixel p, and for a current line n
  • Ref(n-1,p) is the filtered (bright or dark) reference for a pixel p, and previous line n-1
  • PixIn(n,p) is the incoming video data for a pixel p, and current line n
  • α is the coefficient of a low pass filter.
  • the reference signal can be fed with a raw input video.
  • a usable reference is then already available at the first video line.
  • the accumulation of further video lines reduces the noise of the reference signal.
  • the coefficient of the low pass filter, α, can be chosen by the user to achieve a compromise between noise and convergence speed.
  • the user can either choose to continue the accumulation to an existing reference signal or to re-initiate the profile.
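One step of the reference low-pass filter described above is an exponential moving average; a sketch (hypothetical noise levels, with α = 0.25):

```python
import numpy as np

def update_reference(ref_prev, pix_in, alpha=0.25):
    """Ref(n,p) = alpha * PixIn(n,p) + (1 - alpha) * Ref(n-1,p):
    accumulate one more video line into the filtered reference."""
    return alpha * pix_in + (1.0 - alpha) * ref_prev

# Feeding noisy flat-field lines converges the reference toward their
# mean level while suppressing the random (non-FPN) noise.
rng = np.random.default_rng(0)
ref = rng.normal(100.0, 5.0, size=16)          # first raw line
for _ in range(200):
    ref = update_reference(ref, rng.normal(100.0, 5.0, size=16))
```

A smaller α gives a quieter reference but slower convergence, matching the noise/speed compromise mentioned above.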
  • the system and method of the present invention may also be used for correction of chromatic aberrations.
  • lenses exhibit chromatic aberrations: the magnification ratio between object and image depends on the wavelength of the light. It occurs because lenses have a different refractive index for different wavelengths of light (the dispersion of the lens). The refractive index decreases with increasing wavelength.
  • Chromatic aberration manifests itself as "fringes" of color along boundaries (far from the optical axis of the image plane) that separate dark and bright parts of the image, because each color in the optical spectrum cannot be focused at a single common point on the optical axis. Since the focal length of a lens is dependent on the refractive index, different wavelengths of light will be focused on different positions.
  • Chromatic aberration can be both longitudinal, in that different wavelengths are focused at a different distance from the lens; and transverse or lateral, in that different wavelengths are focused at different positions in the focal plane (because the magnification of the lens also varies with wavelength).
  • the color fringes appear at dark to bright or bright to dark transitions in the boundaries of the image rather than in the center of the image.
  • chromatic aberration has revolution symmetry around the optical axis.
  • a typical lens exhibits magnification ratio variations with the wavelength of the light.
  • Color fringes are typically artifacts that can be seen as product defects in web inspection, or induce loss in a system resolution if the image processing discards them.
  • Fig. 1C illustrating the use of a tri-linear CCD sensor having three different channels for the three colors.
  • Each R, G, B channel reflects the red, green, and blue image components at different locations in the object plane.
  • when the distance between the object and the optical axis of the imaging system varies (e.g. the optical axis is not perpendicular to the object to be imaged, or the object has a certain curvature, as a rotating cylinder in this specific and non-limiting example), differences in magnification for red, green, and blue image components exist.
  • Such differences in magnification between color channels are displayed as color fringes on the dark to bright or bright to dark transitions and can be treated/corrected as chromatic aberrations.
  • Figs. 2B and 2C are enlargements of the first and the last squares of the full image of the linear CCD array shown in Fig. 2A.
  • the chromatic aberration correction comprises inter alia providing data indicative of at least one chromatic aberration of the image data (done by an external module of the imaging system); and directly processing the image data in real-time (i.e. real-time re-sampling) by using the digital imaging sensor (e.g. by an FPGA inside the camera device, on the fly) to correct the at least one chromatic aberration, thereby enabling the digital imaging sensor to generate a processed output image data in which chromatic aberrations have been substantially corrected.
  • Fig. 3A represents a video signal level vs. pixel number, for three colors: red (R), green (G) and blue (B).
  • Fig. 3B is an enlarged view of the same, showing that on the first falling transition (from bright to dark) the red signal falls before the green and blue ones. This is the reason why, in Fig. 2B, a cyan fringe 200 appears at the left of the first black band.
  • a band pass filter acting as a derivative function is applied to the video signal level, on each RGB channel signal to generate a derivative signal and identify extrema in the derivative signal.
  • the derivative signal of the first falling edge in Fig. 3A is represented in Fig. 3C.
  • the extrema are at the rising or falling edges of the transition region between bright and dark or vice versa.
  • the derivative signal is then thresholded to obtain the location of the transition region. Thresholding the derivative signal gives a rough estimate of the locations of the edges.
  • in the sample image illustrated, for example, in Fig. 3A, there are 20 edges: 10 falling and 10 rising.
  • the derivative signal is then correlated for each channel pair and at each edge, giving the amount of shift from one color to another.
  • Fig. 3D illustrates a correlation result between red and blue channels for the first falling edge. One can see that the red channel has to be shifted about three pixels to the right, in order that the red and blue channel falling edges occur simultaneously. The sub-pixel accuracy is given by calculating the centroid of the correlation peak.
  • a polynomial correction, indicative of the amount of shift required per pixel to make the edges occur simultaneously for each color, is then determined.
  • a tabulation comprising the location of each input pixel for each pixel of the output corrected image is then determined.
  • the polynomial correction can be a non-integer value, the fractional part defining an interpolation between two adjacent pixels of the input image.
  • the output image is processed by resampling the input image, using the computed shift needed for each pixel.
  • Fig. 4A and Fig. 4B represent, respectively, an image captured by a conventional digital imaging system and the same image captured by the digital imaging system according to one embodiment of the present invention.
  • the effect on the corrected image is clearly observable.
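The edge-detection, correlation and centroid steps listed above can be sketched in pure Python. This is a minimal illustration with hypothetical signal values, not the patent's FPGA implementation; the band pass filter acting as a derivative is approximated here by a simple finite difference:

```python
def derivative(signal):
    # Finite-difference approximation of the band-pass/derivative filter.
    return [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]

def shift_between(ch_a, ch_b, max_lag=5):
    """Estimate the sub-pixel shift of channel b relative to channel a
    by correlating their derivatives and taking the centroid of the
    correlation peak (as described for Fig. 3D)."""
    da, db = derivative(ch_a), derivative(ch_b)
    corr = []
    for lag in range(-max_lag, max_lag + 1):
        s = sum(da[i] * db[i + lag]
                for i in range(len(da))
                if 0 <= i + lag < len(db))
        corr.append(s)
    # Centroid of the absolute correlation values gives sub-pixel accuracy.
    total = sum(abs(c) for c in corr)
    return sum((idx - max_lag) * abs(c) for idx, c in enumerate(corr)) / total

# Hypothetical falling edge: the "red" channel falls three pixels before "blue".
red  = [200] * 10 + [20] * 20
blue = [200] * 13 + [20] * 17
print(round(shift_between(red, blue)))  # prints 3
```

In a full implementation this shift estimate would be fitted, over all detected edges, by the polynomial correction described above and used to resample the input line.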


Abstract

The present invention relates to a digital imaging system. The digital imaging system comprises an integral device comprising: an image sensor assembly adapted for capturing images and generating digital image data indicative thereof, and a control unit connected to the image sensor assembly for directly receiving said image data, the control unit comprising an image processor configured and operable to apply direct real-time processing to said image data, said direct real-time processing comprising correcting image data for defects induced by the image sensor assembly, by carrying out at least one of the following: (i) concurrently correcting multiple non-uniformity defects in the digital image data induced in said image data by the imaging sensor assembly, and generating a substantially uniform processed image data; and (ii) processing the image data to compensate for said at least one chromatic aberration, and generating a substantially corrected processed image data.

Description

A SYSTEM AND METHOD FOR CORRECTING NON-UNIFORMITY DEFECTS IN CAPTURED DIGITAL IMAGES
FIELD OF THE INVENTION
This invention relates to a system for correcting non-uniformity defects associated with the use of a pixel array imager and a method thereof.
BACKGROUND OF THE INVENTION
Fixed pattern noise (FPN) is a particular noise pattern on digital imaging sensors, often noticeable during longer exposure shots where particular pixels are susceptible to giving brighter intensities above the general background noise. FPN identifies a temporally constant lateral non-uniformity (forming a constant pattern) in a pixelized imaging system. It is characterized by the same pattern of 'hot' (brighter) and 'cold' (darker) pixels occurring in images taken under the same illumination conditions in an imaging array. The effect of FPN is that groups of pixels exhibit relatively different strengths in their responses to uniform input light.
This problem arises from small differences in the individual responsivity of the sensor array (including any local post-amplification stages) that might be caused by variations in the pixel size, material or interference with the local circuitry (i.e. mismatch of circuit structures due to integrated circuit process variations). It might be affected by changes in the environment, like different temperatures, exposure times, etc.
FPN usually refers to two parameters: the DSNU (dark signal non-uniformity), which is the offset from the average across the imaging array at a particular setting (temperature, integration time) without external illumination (i.e. the sensor has a non-zero output signal depending on the pixel position in the array), and the PRNU (photo response non-uniformity), which describes the non-uniformity of the gain, or ratio between the optical power on a pixel and the electrical signal output. The PRNU can be described as the local, pixel-dependent photo response non-linearity (PRNL) and is often simplified as a single value measured close to saturation level to permit a linear approximation of the non-linear pixel response. To remove the effect of FPN, a conventional calibration process involves measuring an output based on a known optical input and comparing it against an expected value. In image sensors, a white light of known intensity is typically shone onto the sensors and used as the input calibration signal. In principle, if there is no mismatch in the sensor devices, the voltage signal output from every pixel cell should be identical. In reality, significant differences in signal output values are read out across the array, even if the same input light stimulus is applied to the matrix. These differences can be calibrated and stored to be used in the normal FPN correction process. Typically, these difference data are stored in a separate, off-chip, non-volatile memory device, and during the FPN correction process, the deviation data is used to compensate each bit line output to produce a corrected pixel output value.
In practice, a long exposure (integration time) emphasizes the inherent differences in pixel response, so they may become a visible defect degrading the image. Although FPN does not change appreciably across a series of captures, it may vary with integration time, imager temperature, imager gain and incident illumination. It is not expressed in a random (uncorrelated or changing) spatial distribution, occurring only at certain, fixed pixel locations.
FPN is commonly suppressed by flat-field correction (FFC), which uses the DSNU and PRNU to linearly interpolate and reduce the local photo response (non-uniform PRNL) to the array average. Hence, two exposures with an equal illumination across the array are necessary (one without light and one close to saturation) to obtain the values.

Another unintended and undesired optical effect caused by imaging systems is the vignetting effect, reducing the image's brightness or saturation at the periphery as compared to the image center. There are several causes of vignetting: mechanical vignetting; optical vignetting; natural vignetting; pixel vignetting.
Mechanical vignetting occurs when light beams emanating from object points located off-axis are partially blocked by external objects such as thick or stacked filters, secondary lenses, and improper lens hoods. The corner darkening can be gradual or abrupt, depending on the lens aperture. Complete blackening is possible with mechanical vignetting.
Optical vignetting is caused by the physical dimensions of a multiple element lens. Rear elements are shaded by elements in front of them, which reduces the effective lens opening for off-axis incident light. The result is a gradual decrease in light intensity towards the image periphery. Optical vignetting is sensitive to the lens aperture and can be completely cured by a reduction in aperture of 2-3 stops (i.e. an increase in the F-number).
Natural vignetting (also known as natural illumination falloff) is not due to the blocking of light rays. The falloff is approximated by the cos⁴ law of illumination falloff. Here, the light falloff is proportional to the fourth power of the cosine of the angle at which the light impinges on the film or sensor array. Wide angle rangefinder designs and the lens designs used in compact cameras are particularly prone to natural vignetting. A gradual grey filter or post-processing techniques may be used to compensate for natural vignetting, as it cannot be cured by stopping down the lens. Some modern lenses are specifically designed so that the light strikes the imager parallel or nearly so, eliminating or greatly reducing vignetting.
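The cos⁴ law just stated lends itself to a quick worked example. The sketch below is an illustration of that law only (not code from the patent); it computes the relative illumination at a few incidence angles and the gain a post-processing step would need to compensate it:

```python
import math

def cos4_falloff(angle_deg):
    """Relative illumination under the cos^4 law (1.0 on the optical axis)."""
    return math.cos(math.radians(angle_deg)) ** 4

for angle in (0, 10, 20, 30):
    falloff = cos4_falloff(angle)
    gain = 1.0 / falloff  # compensating gain for this field angle
    print(f"{angle:2d} deg: illumination {falloff:.3f}, gain {gain:.3f}")
```

At 30 degrees off-axis, for example, illumination drops to cos⁴(30°) = 9/16 ≈ 0.56 of the on-axis value, so roughly a 1.8x gain is needed to flatten the field.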
Pixel vignetting only affects digital cameras and is caused by the angle-dependence of the digital sensors. Light incident on the sensor at a right angle produces a stronger signal than light hitting it at an oblique angle. This is due to the non-square dimensions of the individual photodetectors. Most digital cameras use built-in image processing to compensate for optical vignetting and pixel vignetting when converting raw sensor data to standard image formats such as JPEG or TIFF. The use of microlenses over the image sensor can also reduce the effect of pixel vignetting.
GENERAL DESCRIPTION

The present invention discloses a technique and system for correcting a plurality of non-uniformity defects of different types (e.g. Fixed Pattern Noise (FPN), vignetting, etc.) generated by a pixel array imager (e.g. such as a linear CCD sensor).
According to the invention, a digital imaging system comprises an integral device comprising: an image sensor assembly (e.g. pixel array imager or camera device) adapted for capturing images and generating digital image data indicative thereof, preferably equipped with an integral control unit (image data processor) connected to the image sensor assembly for directly receiving the image data and processing it before transfer to a frame grabber. The frame grabber receives, from the digital imaging system, a digital processed image data. The acquired and processed image data may be in the form of static images or of a video stream, processed continuously or intermittently. Therefore, the image sensor assembly is adapted for capturing static images and/or a plurality of still frames forming a video stream. In some embodiments, when the captured images are in the form of a video stream, the control unit is configured and operable to apply direct, real-time and continuous processing to the video stream, to continuously generate an on-the-fly corrected output video stream.
In some embodiments, the image sensor assembly comprises at least one of the following devices: a pixel array imager, a linear CCD sensor, a tri-linear CCD sensor, a line-scan camera device, a tri-CCD camera device. The line-scan camera device comprises a line-scan image sensor chip, and a focusing mechanism. Typically, the line-scan camera device uses a single array (e.g. line) of pixel sensors (i.e. linear or tri-linear sensors utilizing only a single line of sensors, or three lines for the three colors), instead of a matrix of them. The digital image data generated by the line-scan camera device is processed by the integral control unit, to receive the one-dimensional line data and to create a two-dimensional image. Each one-dimensional line typically has one or more primary color channels: R, G, B for color devices, or one channel for monochrome devices.
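As a rough sketch of the step just described, assembling successive one-dimensional line-scan readouts into a two-dimensional image amounts to stacking lines. The generator below stands in for the read-out circuit and is entirely hypothetical:

```python
def assemble_frame(line_source, n_lines):
    """Stack successive 1-D line readouts into a 2-D image (list of rows)."""
    return [next(line_source) for _ in range(n_lines)]

# Hypothetical generator standing in for the line-scan read-out circuit:
# each yielded line is one row of pixel values.
def fake_readout(width):
    row = 0
    while True:
        yield [row * 10 + col for col in range(width)]
        row += 1

frame = assemble_frame(fake_readout(4), n_lines=3)
print(frame)  # 3 rows of 4 pixels each
```

In a web-inspection setting, `n_lines` would be driven by the transport of the inspected material rather than fixed in advance.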
Line-scan technology is capable of capturing data extremely fast, and at very high image resolutions.
In some embodiments, the video stream is captured in digital form and then displayed, processed, analysed, stored or transmitted in raw or compressed digital form. Typically, the frame grabber comprises a memory utility acquiring and storing digitized video stream.
By providing an integral system including an image processor and a digital imaging system, real-time correction and image processing of non-uniformity defects generated by the imager is obtained. The control unit comprises an image processor configured and operable to apply direct real-time processing to the image data. Direct real-time processing comprises correcting the image data for defects induced by the image sensor assembly, by carrying out at least one of the following: (i) concurrently correcting multiple non-uniformity defects induced in the image data by the imaging sensor assembly, and generating a substantially uniform processed image data; and (ii) processing the image data to compensate for the at least one chromatic aberration, and generating a substantially corrected processed image data.
In some embodiments, the control unit comprises a read-out circuit for generating the digital data indicative of acquired images and transmitting the digital data to the image processor.
Therefore, the present invention makes it possible to fit all the camera modules within the camera device (especially reference acquisition and filtering), resulting in no Central Processing Unit (CPU) overhead, no code to write on a workstation for calibration purposes, and no need for back-and-forth transfers between the frame grabber and the camera.
In some embodiments, the imaging system comprises a reference acquisition module adapted for defining reference data indicative of one or more reference images. The reference acquisition module may have one of the following configurations: is a part of the image sensor assembly; is a part of the control unit, or is a distributed utility distributed between the image sensor assembly and the control unit. The reference data defined by the reference acquisition module comprises data indicative of a first reference white image and a second reference black image.
In some embodiments, the control unit is configured and operable to apply a linear function to the image data from the image sensor assembly. The linear function is a transformation between the reference data and white and black levels in the image data. The present invention takes into account a plurality of non-uniformity defects of different types, such as fixed pattern noise (FPN), photo response non-uniformity (PRNU), vignetting, and lighting non-uniformity generated by a pixel array imager.
The correction of non-uniformity defects is implemented as follows. Generally, the image processing includes the steps of defining reference images, and applying, to data indicative of the captured image, a linear function representative of a transformation between the "white" and "black" reference images (corresponding data) and the "white" and "black" levels in the incoming image. Reference images, corresponding to "white" and "black" image references, are defined from a stream of grabbed images. Reference data indicative of the reference images is used for correcting an incoming image, by applying this linear function to said data. It should be noted that the above correction is performed within the pixel array imager itself and at pixel speed, without the need for data exchange between the imager and a frame grabber.
Moreover, the present invention provides the capability to quickly change the contrast and brightness of the image by changing the parameters of the reference data, such as the output "white" level "BrightTarget" and the output "black" level "DarkTarget", on the fly. This is especially useful for color camera devices to achieve white balance on the fly.
In some embodiments, the reference signal may be filtered by a low pass filter to reduce noise. In this connection, it should be noted that if the reference signal is acquired from a single line of pixel sensors and is not filtered, the noise in the reference image (i.e. random variation of brightness or color information produced by the sensor and circuitry of a digital camera, to be differentiated from the FPN) would be introduced into the processed image and alter it like an FPN. Therefore, the reference signal is preferably acquired from a plurality of linear pixel samples and then filtered to substantially eliminate random noise.
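The reference filtering described here, with its user-selectable low pass coefficient trading noise against convergence speed, can be sketched as a per-pixel exponential moving average. This is an illustrative choice of low pass filter, not necessarily the one used in the patent, and all data values are hypothetical:

```python
def refine_reference(reference, new_line, alpha=0.1):
    """Blend a newly acquired line into the running reference profile.

    A small alpha suppresses noise but converges slowly; a large alpha
    converges quickly but lets more noise through.
    """
    return [(1 - alpha) * ref + alpha * new
            for ref, new in zip(reference, new_line)]

# Hypothetical noisy acquisitions of a flat white target (true level: 200).
lines = [[198, 203, 201], [202, 197, 199], [200, 201, 202]]
reference = lines[0]  # initialise (or re-initiate) the profile
for line in lines[1:]:
    reference = refine_reference(reference, line, alpha=0.5)
print([round(r, 1) for r in reference])  # [200.0, 200.5, 201.0]
```

Accumulating further lines with a smaller `alpha` would drive each pixel of the profile toward the true flat level while averaging out the random noise.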
In some embodiments, the image processor comprises a field-programmable gate array (FPGA) configured and operable to carry out the application of the linear function to the image data. In this case, the correction can be implemented by an FPGA included in the array imager to achieve a real-time, on-the-fly correction. In some embodiments, the imaging system comprises a reference acquisition module adapted for defining reference data (providing reference calibration) indicative of one or more reference images. The reference data comprises data indicative of a first reference white image and a second reference black image. Therefore, the imaging system is able to itself acquire new black or white reference images, or to continuously refine the existing reference images by continuous analysis of incoming image data, eliminating the need to transfer large amounts of data between the host frame grabber and the imaging system. The continuous analysis of the incoming image data thus comprises continuously identifying the incoming image data as being possible/potential new black or white reference images.
The connection between the host frame grabber and the imaging system can therefore be a simple low frequency link (e.g. serial link). The low frequency link may be used conveniently to trigger the reference calibration, without impairing the real-time capability of the imaging system. The command sent to the imaging system is thus reduced to a few characters. Moreover, while the calibration process is running, the output processed data is not corrupted, and the frame grabber does not have to disregard the incoming images (e.g. video stream). Therefore, an efficient triggering of the calibration via a serial link is provided, enabling on-the-fly triggering between products and allowing the calibration to be refined as needed. This is especially useful in web inspection. In some embodiments, the image sensor assembly may be a color imager. The image data comprises at least one data piece corresponding to at least one of the primary color channels R, G, B. In particular, the system of the present invention may also be used with monochrome or color line-scan cameras (one implementation per color). As described above, the system and method of the present invention may also be used for correction of chromatic aberration generated by such an array imager. This is implemented by real-time re-sampling of the image data of the pixel array imager (e.g. by an FPGA included in the imager). In some embodiments, the image processor is adapted to process the image data by obtaining a polynomial correction indicative of a corrected shift value defined such that transitions between bright and dark or vice versa occur substantially simultaneously for each channel; and applying the polynomial correction to at least one input image.
In some embodiments, the correction of the chromatic aberration is performed by identifying an image data level (e.g. a video signal level) for each pixel number of each RGB channel; applying a band pass filter, acting as a derivative function, on each RGB channel signal and generating a derivative signal; processing the derivative signal to identify at least one extremum in the derivative signal; using the extremum for thresholding the derivative signal to obtain the location of a transition between bright and dark or vice versa; calculating a correlation of the derivative signal for each channel pair and at each transition, obtaining a shift value from one channel (color) to another; and providing data indicative thereof. A polynomial correction, giving the amount of shift required to make the transitions occur simultaneously for each color, is then calculated for each pixel to provide a corrected shift value, and applied to each image acquired from the array to provide an output corrected image.
This correction can be implemented by an FPGA included in the array imager to achieve a real-time, on-the-fly correction even on a continuous video stream. Alternatively, this correction may be performed by an external module generating the polynomial correction and transmitting it to the image processor. The present invention can be used, for example, for web inspection with line-scan camera devices.

According to another broad aspect of the present invention, there is also provided a digital camera comprising a light sensitive pixel array assembly for capturing images and generating digital output corresponding to image data indicative of the captured images, and a controller directly connected to the digital output of the light sensitive pixel array assembly for directly receiving and real-time processing the image data to concurrently correct multiple non-uniformity defects induced in the image data by the light sensitive pixel array, and generate a substantially uniform processed image data.
According to another broad aspect of the present invention, there is also provided an imaging method comprising: capturing images and generating corresponding digital image data indicative thereof; directly processing the image data in real-time and concurrently correcting multiple non-uniformity defects in the image data, to thereby enable generation of a substantially uniform processed image data; thereby enabling integration of an image processor with an image capture device, eliminating a frame grabber between them. In some embodiments, directly processing the image data in real-time comprises defining at least two reference data corresponding to one reference white image and one reference black image respectively, and applying to the image data a linear function representative of a transformation between the reference data and the white and black levels in the captured image.
In some embodiments, defining at least two reference data comprises controllably varying the reference data to control contrast and brightness of the image.
In some embodiments, directly processing further comprises obtaining data indicative of at least one chromatic aberration and correcting the image data for the at least one chromatic aberration.
In some embodiments, the processing comprises: obtaining a polynomial correction indicative of a corrected shift value defined such that transitions between bright and dark or vice versa occur substantially simultaneously for each channel; and applying the polynomial correction to at least one input image.
In some embodiments, the processing comprises: providing an image data level as a function of pixel number for each of the R, G, B channels; applying a band pass filter to the image data level of each RGB channel to generate a derivative signal; identifying at least one extremum in the derivative signal; thresholding the derivative signal to obtain the location of a transition between bright and dark or vice versa; calculating a correlation of the derivative signal for each channel pair in the primary color signals and at each transition to obtain a shift value from one channel to another, and providing data indicative thereof; and calculating a polynomial correction for each pixel to provide a corrected shift value defined such that the transitions occur substantially simultaneously for each channel.
In some embodiments, the capturing of images comprises capturing a plurality of still frames forming a video stream. Directly processing the image data in real-time and concurrently correcting defects in the image data comprises applying direct, real-time and continuous processing to the video stream, to continuously generate an on-the-fly corrected output video stream.
According to another broad aspect of the present invention, there is also provided a method for correction of at least one chromatic aberration. The method comprises: providing an image data level as a function of pixel number, the image data comprising data pieces corresponding to one or more of the primary color channels R, G, B; applying a band pass filter to the image data level of each RGB channel to generate a derivative signal; identifying at least one extremum in the derivative signal; thresholding the derivative signal to obtain the location of a transition between bright and dark or vice versa; calculating a correlation of the derivative signal for each channel pair in the primary color signals and at each transition to obtain a shift value from one channel to another, and providing data indicative thereof; calculating a polynomial correction for each pixel to provide a corrected shift value defined such that the transitions occur substantially simultaneously for each channel, the polynomial correction being indicative of the correction of at least one chromatic aberration; and applying the polynomial correction to at least one input image to thereby enable generation of a corrected output image.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to understand the invention and to see how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
Fig. 1A is a schematic block diagram illustrating the main components of the imaging system of the present invention;
Fig. 1B is a graph representing a "grey level" of a video signal vs. pixel number; the dark image reference, the bright image reference, the input raw video signal and the corrected output video signal are represented;
Fig. 1C is a schematic block diagram illustrating the differences in magnification between color channels of a tri-linear CCD sensor;
Figs. 2A-2C are images captured by conventional digital imaging sensors, in which chromatic aberrations clearly appear;
Fig. 3A is a graph representing a video signal level vs. pixel number, for three colors: red, green and blue;
Fig. 3B is a graph representing an enlarged view of a specific transition (bright to dark) of Fig. 3A for three colors: red, green and blue;
Fig. 3C is a graph representing a derivative signal of the first falling edge in the image for three colors: red, green and blue;
Fig. 3D is a graph representing the correlation result between red and blue channels for the first falling edge;
Fig. 3E is a graph representing a corrected profile according to one embodiment of the present invention;
Fig. 4A is an image captured by a conventional digital imaging system and Fig. 4B is the same image captured by the digital imaging system according to one embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
Reference is made to Fig. 1A representing a general schematic diagram of the digital imaging system 100 of the present invention. The digital imaging system 100 comprises an integral device comprising: an imaging sensor assembly 102 capturing images and generating image data indicative thereof, and a control unit 104 connected to the image sensor assembly for directly receiving the image data. The control unit 104 comprises an image processor 106 configured and operable to directly process the image data in real-time, thereby correcting a plurality of non-uniformity defects generated by the digital imaging sensor assembly 102. The control unit 104 comprises a read-out circuit for generating the digital data indicative of acquired images and transmitting the digital data to the image processor 106. The digital imaging system 100 generates a substantially uniform processed image data 108. In this connection, it should be understood that, typically, image non-uniformity can be corrected by a linear function:
Pix_out(n) = a(n) * Pix_in(n) + b(n)

where a(n) and b(n) are respectively pixel-dependent gain and offset. A common way to implement this correction is to set up two tables in a memory utility, generally included in the frame grabber, containing respectively the a and b coefficients, and then apply the above linear function to the incoming pixels in a field-programmable gate array (FPGA), also included in the frame grabber. Typically, the computation of the gain and offset coefficients in a line-scan camera device is left to the user.
However, this type of non-uniformity correction has some drawbacks, such as the need to develop a software function to acquire bright and dark images in the frame grabber and compute the coefficients (implying an overhead), and the upload of the tables to the camera device.
Even if the link from the camera device to the frame grabber is fast, the opposite link from the frame grabber to the camera device is generally slower. For example, if the link from the frame grabber to the camera device is a serial link (115 kbits/s), for a given 8k pixel camera, it would take around three seconds to upload two 8k x 16 bit tables, without any protocol or control. These three seconds are to be compared to the 100-200 μs typical line period of a line-scan camera.
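The timing claim above can be checked with a quick back-of-the-envelope calculation, assuming the raw serial rate with no protocol or framing overhead (real serial framing and protocol would push it toward the stated three seconds):

```python
pixels = 8 * 1024        # 8k pixel camera
bits_per_entry = 16      # each table entry is 16 bits
tables = 2               # gain and offset tables
link_rate = 115_000      # serial link, bits/s (115 kbits/s as stated)

payload_bits = tables * pixels * bits_per_entry
upload_s = payload_bits / link_rate

line_period_s = 150e-6   # within the 100-200 us typical line period
print(f"upload time: {upload_s:.2f} s")                 # ~2.3 s raw payload
print(f"lines elapsed: {upload_s / line_period_s:.0f}")
```

So during a single table upload, on the order of fifteen thousand video lines go by, which is why the patent argues for keeping the correction inside the camera device.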
In this connection, it should be understood that, typically, the video stream is continuously acquired/captured by the image sensor assembly and therefore, during this upload period, the reference/calibration frame has a constant delay with respect to the incoming images. More specifically, in conventional line-scan applications, the beginning of the video line is processed taking into account the up-to-date reference/calibration data, while the end of the line is corrected with the previous calibration data. This implies a loss of video data during a certain period of time (e.g. a few seconds for 20,000 video lines), which is neither acceptable nor possible in a wide range of applications such as web inspection. Alternatively, if the reference/calibration data were updated during the video line time period, the time of such updating would be substantially greater than the video line time period, which is also neither acceptable nor possible in a wide range of applications.
Therefore, real-time correction of drifts induced by the system (such as light source aging or thermal drift) is difficult to achieve.
The present invention solves the above problem by avoiding back-and-forth communication between the camera device and the frame grabber and by minimizing overhead.
In some embodiments, the integral control unit 104 directly processes the image data in real time by defining at least two reference data corresponding to one reference white image and one reference black image respectively, and applying to the image data a linear function representative of a transformation between the reference data and the white and black levels in the captured image.
Assuming that white and black reference images are defined inside the camera device, the corrected signal can be expressed as:
PixOut(n) = BrightTarget * ( PixIn(n) - DarkRef(n) ) / ( BrightRef(n) - DarkRef(n) ) + DarkTarget where: n is the pixel index across the array; PixIn is the incoming pixel data before correction; PixOut is the corrected processed pixel data; BrightRef is the bright reference image data; DarkRef is the dark reference image data; BrightTarget is the output "white" level and DarkTarget is the output "black" level. The input grey levels situated between BrightRef and DarkRef are spread by a linear function between BrightTarget and DarkTarget. The grey level can take values between 0 and 2^N - 1, N being the width of the digital video bus. The result of the calibration calculation is restricted to the range between 0 and 2^N - 1 to match practical values.

Reference is made to Fig. 1B representing the grey level of a video signal vs. pixel number. In the figure, the dark image reference is represented by curve 1, the bright image reference is represented by curve 2, the input raw video signal is represented by curve 3 and the corrected output video signal is represented by curve 4.

In some embodiments, the imaging system may comprise a reference acquisition module adapted for defining reference data indicative of one or more reference images. The reference acquisition module can be implemented in an FPGA, and the acquired references saved in non-volatile memory for reuse.
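The two-point (bright/dark) correction formula above can be sketched as follows. This is a minimal Python illustration; the function name, the zero-denominator guard and the clamping behaviour are assumptions rather than details taken from the patent:

```python
def flat_field_correct(pix_in, bright_ref, dark_ref,
                       bright_target=255, dark_target=0, bit_depth=8):
    """Two-point correction of one video line.

    Implements PixOut = BrightTarget * (PixIn - DarkRef) /
    (BrightRef - DarkRef) + DarkTarget, clamping the result to
    [0, 2^N - 1] as the text requires.
    """
    max_val = 2 ** bit_depth - 1
    out = []
    for x, b, d in zip(pix_in, bright_ref, dark_ref):
        denom = (b - d) if b != d else 1  # guard against dead pixels (assumption)
        y = bright_target * (x - d) / denom + dark_target
        out.append(min(max(int(round(y)), 0), max_val))
    return out
```

A pixel halfway between its dark and bright references maps halfway between DarkTarget and BrightTarget, which is the "spreading" behaviour the text describes.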
As detailed above, a command via a serial link (just a few bytes, not a complete table) activates the white or dark reference acquisition, for a chosen number of video lines. The imaging system of the present invention thus eliminates the need to upload a complete calibration table. The imaging system just has to initialize the loading of previously acquired data or to initiate a new calibration, internal to the imaging system. The reference signal can be calculated as:
Ref(n,p) = ε * PixIn(n,p) + (1 - ε) * Ref(n-1,p) where: Ref(n,p) is the filtered (bright or dark) reference for a pixel p and a current line n; Ref(n-1,p) is the filtered (bright or dark) reference for a pixel p and the previous line n-1; PixIn(n,p) is the incoming video data for a pixel p and current line n; and ε is the coefficient of a low-pass filter.
To improve the convergence of the low-pass filter, the reference signal can be seeded with the raw input video, so that the reference is approximately correct from the first video line. The accumulation of further video lines then reduces the noise of the reference signal. The low-pass filter coefficient ε can be chosen by the user to achieve a compromise between noise and convergence speed.
The user can either choose to continue the accumulation to an existing reference signal or to re-initiate the profile.
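The recursive reference accumulation of the preceding paragraphs can be sketched as follows. This is a Python illustration of the low-pass filter formula, assuming the reference is seeded with the first raw line; the function name and default ε are hypothetical:

```python
def accumulate_reference(lines, eps=0.25):
    """Build a filtered (bright or dark) reference profile from video lines.

    Implements Ref(n,p) = eps * PixIn(n,p) + (1 - eps) * Ref(n-1,p).
    The reference is seeded with the first raw line so it is usable
    immediately; accumulating further lines reduces its noise.
    """
    ref = [float(p) for p in lines[0]]  # seed with raw video for fast convergence
    for line in lines[1:]:
        ref = [eps * x + (1.0 - eps) * r for x, r in zip(line, ref)]
    return ref
```

Calling the function again with more lines, starting from the stored `ref`, corresponds to the user's choice of continuing the accumulation rather than re-initiating the profile.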
The system and method of the present invention may also be used for correction of chromatic aberrations. Generally, lenses exhibit chromatic aberrations: the magnification ratio between object and image depends on the wavelength of the light. It occurs because lenses have a different refractive index for different wavelengths of light (the dispersion of the lens). The refractive index decreases with increasing wavelength. Chromatic aberration manifests itself as "fringes" of color along boundaries (far from the optical axis of the image plane) that separate dark and bright parts of the image, because each color in the optical spectrum cannot be focused at a single common point on the optical axis. Since the focal length of a lens is dependent on the refractive index, different wavelengths of light will be focused on different positions. Chromatic aberration can be both longitudinal, in that different wavelengths are focused at a different distance from the lens; and transverse or lateral, in that different wavelengths are focused at different positions in the focal plane (because the magnification of the lens also varies with wavelength).
The color fringes appear at dark to bright or bright to dark transitions in the boundaries of the image rather than in the center of the image.
Since the color fringe depends on the distance between the object point and the optical axis in the image plane, chromatic aberration has rotational symmetry around the optical axis. Moreover, a typical lens exhibits magnification ratio variations with the wavelength of the light. Color fringes are artifacts that can be mistaken for product defects in web inspection, or induce a loss in system resolution if the image processing discards them.
Reference is made to Fig. 1C illustrating the use of a tri-linear CCD sensor having three different channels for the three colors. Each R, G, B channel reflects the red, green and blue image components at different locations in the object plane. If the distance between the object and the optical axis of the imaging system varies (e.g. the optical axis is not perpendicular to the object to be imaged, or the object has a certain curvature, as a rotating cylinder in this specific and non-limiting example), differences in magnification for the red, green and blue image components arise. Such differences in magnification between color channels appear as color fringes on the dark-to-bright or bright-to-dark transitions and can be treated/corrected as chromatic aberrations.
More specifically, when the digital imaging sensor is a linear CCD array, the chromatic aberration becomes a one-dimensional concern, as illustrated in Figs. 2A-2C, in which the color fringes appear as vertical lines. Figs. 2B-2C are enlargements of the first and the last squares of the full image of the linear CCD array of Fig. 2A.
According to the technique of the present invention, the chromatic aberration correction comprises inter alia providing a data indicative of at least one chromatic aberration of the image data (done by an external module of the imaging system); and directly processing in real-time (i.e. real-time re-sampling) the image data by using the digital imaging sensor (e.g. done by an FPGA inside the camera device, on the fly) to correct at least one chromatic aberration to thereby enable to generate from the digital imaging sensor a processed output image data in which chromatic aberrations have been substantially corrected.
Reference is made to Fig. 3A representing a video signal level vs. pixel number for three colors: red (R), green (G) and blue (B). Fig. 3B is an enlarged view of the same, showing that on the first falling transition (from bright to dark) the red signal falls before the green and blue ones. This is the reason why, in Fig. 2B, a cyan fringe 200 appears at the left of the first black band.
Once the video signal level vs. pixel number is obtained for each RGB channel, a band-pass filter acting as a derivative function is applied to the video signal level of each RGB channel to generate a derivative signal and identify extrema in the derivative signal. The derivative signal of the first falling edge in Fig. 3A is represented in Fig. 3C.
The extrema are located at the rising or falling edges of the transition region between bright and dark or vice versa. The derivative signal is then thresholded to obtain the location of the transition region; thresholding the derivative signal gives a rough estimate of the locations of the edges. In the sample image illustrated in Fig. 3A, there are 20 edges, 10 falling and 10 rising. The derivative signal is then correlated for each channel pair and at each edge, giving the amount of shift from one color to another. Fig. 3D illustrates a correlation result between the red and blue channels for the first falling edge. One can see that the red channel has to be shifted about three pixels to the right in order that the red and blue channel falling edges occur simultaneously. Sub-pixel accuracy is obtained by calculating the centroid of the correlation peak.
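The derivative-correlate-centroid procedure described above can be sketched as follows. This is a hedged Python illustration, not the patent's implementation: a first difference stands in for the band-pass filter, and the function name, lag search range and three-point centroid are assumptions:

```python
def channel_shift(ref, other, max_lag=5):
    """Estimate how far `other`'s edge lies to the right of `ref`'s.

    Differentiates both signals (a first difference standing in for
    the band-pass filter), correlates the derivatives over candidate
    lags, and refines the best integer lag with a three-point centroid
    of the correlation peak for sub-pixel accuracy.
    """
    d_ref = [b - a for a, b in zip(ref, ref[1:])]
    d_oth = [b - a for a, b in zip(other, other[1:])]
    corr = []
    for lag in range(-max_lag, max_lag + 1):
        corr.append(sum(d_ref[i] * d_oth[i + lag]
                        for i in range(len(d_ref))
                        if 0 <= i + lag < len(d_oth)))
    k = max(range(len(corr)), key=corr.__getitem__)  # integer peak position
    shift = k - max_lag
    if 0 < k < len(corr) - 1:  # centroid of the peak and its two neighbours
        c0, c1, c2 = corr[k - 1], corr[k], corr[k + 1]
        total = c0 + c1 + c2
        if total:
            shift += (c2 - c0) / total
    return shift
```

A positive result means `other`'s features sit to the right of `ref`'s, so `other` would have to be shifted left (or `ref` right) for the edges to coincide.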
A polynomial correction indicative of the amount of shift required, for each color and each pixel, to make the edges occur simultaneously is then determined. A tabulation comprising the location of the input pixel corresponding to each pixel of the output corrected image is then determined.
The polynomial correction can yield a non-integer shift value, the decimal part defining an interpolation between two adjacent pixels of the input image.
The output image is produced by resampling the input image using the computed shift for each pixel. Reference is made to Fig. 3E illustrating the profile corrected by using the technique of the present invention: the edges of the three channels occur at the same location (simultaneously).
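The resampling step can be sketched as follows; a minimal Python illustration in which the function name and the clamping of source positions at the image borders are assumptions:

```python
import math

def resample_channel(channel, shifts):
    """Resample one colour channel given a per-pixel source shift.

    For each output pixel i the source position is i + shifts[i]; a
    non-integer position is linearly interpolated between the two
    adjacent input pixels, as the decimal part of the shift dictates.
    """
    last = len(channel) - 1
    out = []
    for i, s in enumerate(shifts):
        x = i + s
        x0 = min(max(int(math.floor(x)), 0), last)  # clamp at borders
        x1 = min(x0 + 1, last)
        frac = x - math.floor(x)
        out.append((1.0 - frac) * channel[x0] + frac * channel[x1])
    return out
```

With a per-pixel shift table derived from the polynomial correction, applying this to the red channel (for example) moves its edges onto those of the green and blue channels.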
Reference is made to Fig. 4A and Fig. 4B representing respectively an image captured by a conventional digital imaging system and the same image captured by the digital imaging system according to one embodiment of the present invention. The effect on the corrected image is clearly observable.

Claims

CLAIMS:
1. A digital imaging system comprising: an integral device comprising: an image sensor assembly adapted for capturing images and generating digital image data indicative thereof, and a control unit connected to the image sensor assembly for directly receiving said image data, the control unit comprising an image processor configured and operable to apply direct real-time processing to said image data, said direct real-time processing comprising correcting image data for defects induced by the image sensor assembly, by carrying out at least one of the following:
(i) concurrently correcting multiple non-uniformity defects in the digital image data induced in said image data by the imaging sensor assembly, and generating a substantially uniform processed image data; and (ii) processing the image data to compensate for said at least one chromatic aberration, and generating a substantially corrected processed image data.
2. The imaging system of claim 1, wherein said control unit comprises a read-out circuit for generating said digital data indicative of acquired images and transmitting said digital data to said image processor.
3. The imaging system of claim 1 or 2, wherein said non-uniformity defects comprise at least one of photoresponse non-uniformity, fixed pattern noise, vignetting and lighting non-uniformity.
4. The imaging system of any one of the preceding claims, wherein said image sensor assembly comprises at least one of a pixel array imager, a linear
CCD sensor, a tri-linear CCD sensor, a line- scan camera device, and a tri-CCD camera device.
5. The imaging system of any one of the preceding claims, wherein said image sensor assembly is adapted for capturing static images.
6. The imaging system of any one of the preceding claims, wherein said image sensor assembly is adapted for capturing a plurality of still frames forming a video stream.
7. The imaging system of claim 6, wherein said control unit is configured and operable to apply direct real-time and continuous processing to said video stream, to continuously generate on the fly a corrected output video stream.
8. The imaging system of any one of the preceding claims, comprising a reference acquisition module adapted for defining reference data indicative of one or more reference images.
9. The imaging system of Claim 8, wherein said reference acquisition module has one of the following configurations: is a part of the image sensor assembly; is a part of the control unit, or is a distributed utility distributed between the image sensor assembly and the control unit.
10. The imaging system of Claim 8 or 9, wherein said reference data defined by said reference acquisition module comprises data indicative of a first reference white image and a second reference black image.
11. The imaging system of claim 10, wherein said control unit is configured and operable to apply a linear function to said image data from the image sensor assembly, said linear function being a transformation between the reference data and white and black levels in the image data.
12. The imaging system of any one of the preceding claims, wherein said image processor comprises a field-programmable gate array.
13. The imaging system of claim 11, wherein said image processor comprises a field-programmable gate array configured and operable to carry out said application of the linear function to the image data.
14. The imaging system of any one of the preceding claims, wherein said image sensor assembly is a color imager, said image data comprising at least one data piece corresponding to at least one color channel from primary color channels, R, G, B.
15. The imaging system of Claim 14, wherein said image processor is adapted to process said image data by obtaining data indicative of at least one chromatic aberration and obtaining a polynomial correction indicative of a corrected shift value defined such that transitions between bright and dark or vice versa occur substantially simultaneously for each channel; and applying said polynomial correction to at least one input image.
16. The imaging system of Claim 15, wherein said image processor is adapted to process said image data by carrying out the following: identifying for each pixel number an image data level for each of the RGB channels, applying a band pass filtering to said image data level of each RGB channel and generating a derivative signal, processing said derivative signal for identifying at least one extrema in the derivative signal, and using said extrema for thresholding said derivative signal to obtain a location of a transition between bright and dark or vice versa; calculating a correlation of the derivative signal for each channel pair; and at each transition, obtaining a shift value from one channel to another and providing data indicative thereof; calculating a polynomial correction for each pixel to provide a corrected shift value defined such that the transitions occur substantially simultaneously for each channel.
17. A digital camera comprising: a light sensitive pixel array assembly for capturing images and generating digital output corresponding to image data indicative of the captured images, and a controller directly connected to the digital output of the light sensitive pixel array assembly for directly receiving and real-time processing to said image data to concurrently correct multiple non- uniformity defects in the digital image data induced in said image data by the light sensitive pixel array, and generate a substantially uniform processed image data.
18. An imaging method comprising: capturing images and generating a corresponding digital image data indicative thereof; directly processing said image data in real-time and concurrently correcting multiple non-uniformity defects in said image data, to thereby enable to generate a substantially uniform processed image data; thereby enabling integration of an image processor with an image capture device, eliminating a frame grabber between them.
19. The method of claim 18, wherein said non-uniformity defects comprise at least one of photoresponse non-uniformity, fixed pattern noise, vignetting and lighting non-uniformity.
20. The method of claim 18 or 19, wherein said directly processing in realtime of said image data comprises defining at least two reference data corresponding to one reference white image and one reference black image respectively and applying to said image data a linear function representative of a transformation between the reference data and the white and black level in the captured image.
21. The method of claim 20, wherein said defining at least two reference data comprises controllably varying said reference data to control contrast and brightness of the image.
22. The method of any one of Claims 18 to 21, wherein said directly processing further comprises obtaining data indicative of at least one chromatic aberration and correcting the image data for said at least one chromatic aberration.
23. The method of Claim 22, wherein the image data comprises at least one data piece corresponding to at least one color channel from primary color channels, R, G, B.
24. The method of claim 22 or 23, wherein said processing comprises: obtaining a polynomial correction indicative of a corrected shift value defined such that transitions between bright and dark or vice versa occur substantially simultaneously for each channel; and applying said polynomial correction to at least one input image.
25. The method of any one of Claims 22 to 24, wherein said processing comprises: providing an image data level as a function of pixel number for each of the
R, G, B channel; applying a band pass filter to the image data level of each RGB channel to generate a derivative signal, identifying at least one extrema in said derivative signal; thresholding said derivative signal to obtain location of a transition between bright and dark or vice versa; calculating a correlation of the derivative signal for each channel pair in the primary color signals and at each transition to obtain a shift value from one channel to another and providing data indicative thereof; calculating a polynomial correction for each pixel to provide a corrected shift value defined such that the transitions occur substantially simultaneously for each channel.
26. The method of any one of Claims 18 to 25, wherein said capturing of images comprises capturing a plurality of still frames forming a video stream; said directly processing said image data in real-time and concurrently correcting defects in said image data, comprises applying direct real-time and continuous processing to said video stream, to continuously generate on the fly corrected output video stream.
27. A method for correction of at least one chromatic aberration, the method comprises: providing an image data level as a function of pixel number; said image data comprising data pieces corresponding to one or more of primary color channels, R, G, B applying a band pass filter to the image data level of each RGB channel to generate a derivative signal, identifying at least one extrema in said derivative signal; thresholding said derivative signal to obtain location of a transition between bright and dark or vice versa; calculating a correlation of the derivative signal for each channel pair in the primary color signals and at each transition to obtain a shift value from one channel to another and providing data indicative thereof; calculating a polynomial correction for each pixel to provide a corrected shift value defined such that the transitions occur substantially simultaneously for each channel; said polynomial correction being indicative of the correction to at least one chromatic aberration; and applying said polynomial correction to at least one input image to thereby enable to generate a corrected output image.
PCT/IB2010/052108 2009-05-14 2010-05-12 A system and method for correcting non-uniformity defects in captured digital images WO2010131210A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17810209P 2009-05-14 2009-05-14
US61/178,102 2009-05-14

Publications (1)

Publication Number Publication Date
WO2010131210A1 true WO2010131210A1 (en) 2010-11-18

Family

ID=42475269

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2010/052108 WO2010131210A1 (en) 2009-05-14 2010-05-12 A system and method for correcting non-uniformity defects in captured digital images

Country Status (1)

Country Link
WO (1) WO2010131210A1 (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5038225A (en) * 1986-04-04 1991-08-06 Canon Kabushiki Kaisha Image reading apparatus with black-level and/or white level correction
US6587224B1 (en) * 1998-09-14 2003-07-01 Minolta Co., Ltd. Image reading apparatus that can correct chromatic aberration caused by optical system and chromatic aberration correction method
JP2000299874A (en) * 1999-04-12 2000-10-24 Sony Corp Signal processor, signal processing method, image pickup device and image pickup method
US20030072497A1 (en) * 2001-10-17 2003-04-17 Kenji Hiromatsu Image processing method, image processing apparatus and strage medium
US20080063292A1 (en) * 2003-01-30 2008-03-13 Sony Corporation Image processing method, image processing apparatus and image pickup apparatus and display apparatus suitable for the application of image processing method
US20070076101A1 (en) * 2005-09-30 2007-04-05 Baer Richard L Self-calibrating and/or self-testing camera module

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KOREN I ET AL: "Robust digitization and digital non-uniformity correction in a single-chip CMOS camera", SOLID-STATE CIRCUITS CONFERENCE, 1999. ESSCIRC '99. PROCEEDINGS OF THE 25TH EUROPEAN DUISBURG, GERMANY 21-23 SEPT. 1999, PISCATAWAY, NJ, USA,IEEE, 21 September 1999 (1999-09-21), pages 394 - 397, XP010823565, ISBN: 978-2-86332-246-8 *
MUEHLMANN U ET AL: "A new high speed cmos camera for real-time tracking applications", ROBOTICS AND AUTOMATION, 2004. PROCEEDINGS. ICRA '04. 2004 IEEE INTERN ATIONAL CONFERENCE ON NEW ORLEANS, LA, USA APRIL 26-MAY 1, 2004, PISCATAWAY, NJ, USA,IEEE, US LNKD- DOI:10.1109/ROBOT.2004.1302542, vol. 5, 26 April 2004 (2004-04-26), pages 5195 - 5200, XP010768217, ISBN: 978-0-7803-8232-9 *
YAMASHITA T ET AL: "A lateral chromatic aberration correction system for ultrahigh-definition color video camera", PROCEEDINGS OF THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING (SPIE), SPIE, USA, vol. 6068, 1 January 2006 (2006-01-01), pages 60680N - 1, XP002550035, ISSN: 0277-786X *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014160180A1 (en) * 2013-03-13 2014-10-02 Alcoa Inc. System and method for inspection of roll surface
US10556261B2 (en) 2013-03-13 2020-02-11 Arconic Inc. System and method for inspection of roll surface
CN110622211A (en) * 2017-03-15 2019-12-27 菲力尔系统公司 System and method for reducing low frequency non-uniformities in images
CN110622211B (en) * 2017-03-15 2023-11-14 特利丹菲力尔有限责任公司 System and method for reducing low frequency non-uniformities in images
US10753734B2 (en) 2018-06-08 2020-08-25 Dentsply Sirona Inc. Device, method and system for generating dynamic projection patterns in a confocal camera
CN112312125A (en) * 2020-10-21 2021-02-02 中国科学院光电技术研究所 Multi-tap EMCCD (Electron multiplying Charge coupled device) non-uniformity comprehensive correction method
CN113808046A (en) * 2021-09-18 2021-12-17 凌云光技术股份有限公司 Method and device for acquiring flat field correction parameters
CN113808046B (en) * 2021-09-18 2024-04-02 凌云光技术股份有限公司 Flat field correction parameter acquisition method and device
EP4270917A1 (en) 2022-04-29 2023-11-01 Sick Ag Camera and method for detecting an object

Similar Documents

Publication Publication Date Title
US10412314B2 (en) Systems and methods for photometric normalization in array cameras
US7151560B2 (en) Method and apparatus for producing calibration data for a digital camera
CA3016429C (en) Combined hdr/ldr video streaming
CN108600725B (en) White balance correction device and method based on RGB-IR image data
EP1080443B1 (en) Improved dark frame subtraction
US7876363B2 (en) Methods, systems and apparatuses for high-quality green imbalance compensation in images
CN101198965A (en) Defect pixel correction in an image sensor
US20030234872A1 (en) Method and apparatus for color non-uniformity correction in a digital camera
WO2010131210A1 (en) A system and method for correcting non-uniformity defects in captured digital images
KR20040073378A (en) Vignetting compensation
US8620102B2 (en) Methods, apparatuses and systems for piecewise generation of pixel correction values for image processing
WO2005122549A1 (en) Method, apparatus, imaging module and program for improving image quality in a digital imaging device
JP2007158628A (en) Imaging apparatus and image processing method
GB2460241A (en) Correction of optical lateral chromatic aberration
US8331722B2 (en) Methods, apparatuses and systems providing pixel value adjustment for images produced by a camera having multiple optical states
CN109379535A (en) Image pickup method and device, electronic equipment, computer readable storage medium
US8736722B2 (en) Enhanced image capture sharpening
US8508613B2 (en) Image capturing apparatus
CN113643388B (en) Black frame calibration and correction method and system for hyperspectral image
KR20040095249A (en) Imager and stripe noise removing method
KR102566873B1 (en) Infrared ray photography apparatus and method for manufacturing the same
JP2004200888A (en) Imaging device
KR20050034092A (en) Method for compensating vignetting effect of imaging system and imaging apparatus using the same
Cao et al. Characterization and measurement of color fringing
Konnik et al. Increasing linear dynamic range of commercial digital photocamera used in imaging systems with optical coding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10725281

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10725281

Country of ref document: EP

Kind code of ref document: A1