WO2009112309A2 - Method and system for lens aberration correction - Google Patents

Method and system for lens aberration correction

Info

Publication number
WO2009112309A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
image data
input
color plane
pixel
Prior art date
Application number
PCT/EP2009/051075
Other languages
English (en)
Other versions
WO2009112309A3 (fr)
Inventor
Frank Hassenpflug
Wolfgang Endress
Andreas Hille
Original Assignee
Thomson Licensing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of WO2009112309A2
Publication of WO2009112309A3


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing

Definitions

  • the present invention relates to the field of optical lens systems.
  • exemplary embodiments of the present invention relate to a method and system for correcting focal distortions in a stored image.
  • optical elements used by cameras and other optical devices to collect images from the environment often introduce errors into the images.
  • errors may include various aberrations that distort the color or perspective of the images.
  • Such errors may be perceptible to a viewer and, thus, may decrease the accuracy or aesthetic value of the images.
  • Two common types of error introduced into images by optical systems are chromatic distortions and curvilinear distortions.
  • Chromatic distortions are caused by the wavelength dependency of the refractive index of the materials used in the optical elements.
  • the different refractive indices lead to different focal points for the differing wavelengths. As discussed in further detail below, this may lead to blurring of the colors in images.
  • Curvilinear distortions may be caused by optical elements that differ from ideal designs, which can lead to different focal points for light entering the optical elements at different points. This type of distortion may cause curvature in lines that should be straight in images and, thus, cause distortions in perspective.
  • U.S. Patent No. 6,323,934 to Enomoto, which claims priority to Japanese Patent No. JP 9-333943, purports to disclose an image processing method for correcting at least one of lateral chromatic aberration, distortion, brightness, and blurring caused by an image collection lens.
  • the method is generally used to correct low quality images on photographic film, but may also be used to correct images collected using a digital camera.
  • the images are scanned from the film into an electronic device at a resolution sufficient to minimize distortions from the scanning process.
  • the aberration to be corrected is selected, and lens data specific to the aberration is used to perform the correction calculations.
  • the corrections are generally performed in two steps. In a first step, a lateral chromatic aberration is corrected and, in a second step, curvilinear distortions are corrected.
  • the image is separated into the individual color planes, and then the correction vector is applied.
  • the corrected color planes are then recombined to form the image.
  • the correction vector is also purported to correct for camera shake, i.e., the failure of an operator to hold the camera steady.
  • a method of processing image data according to the invention is set forth in claim 1.
  • the method comprises dividing the image data into a plurality of input images in separate color planes.
  • the input image in each color plane is normalized to a predetermined image size.
  • An origin of the input image in each color plane is determined and the input image is shifted to occupy all four quadrants of a Cartesian coordinate system.
  • the method then performs an inverse mapping operation for an output image in each color plane to generate a map of locations of pixels in the output image that correspond to locations of pixels in the input image.
  • the quadrant of a Cartesian coordinate system that corresponds to the inverse mapped image in each color plane is determined and a value for each pixel in the input image is copied to a pixel value for a mapped position in the output image.
  • the output images for each color plane are then combined to form a final output image that is corrected for lateral chromatic aberration and pincushion/barrel distortion.
  • the method performs an image rotation and panning function on the inverse mapped output image for each color plane prior to copying the value for each pixel from the input image to the output image.
  • the method is performed in an image collection device, where the image collection device may include a digital video camera, a digital still picture camera, or a digitizer for images. The method may also be performed on a separate image processing system.
  • the data from the input image may be filtered prior to being copied to the output image to decrease aliasing artifacts.
  • the image collection system comprises an optical system configured to focus an image on an imaging system.
  • the imaging system is configured to convert the image into image data, which is stored in a memory by a processor.
  • the processor is configured to perform calculations on the stored image data and is coupled to a second memory.
  • the second memory comprises machine readable instructions configured to direct the processor to divide the image data into a plurality of input images in separate color planes and normalize the input image in each color plane to a predetermined image size.
  • the instructions also direct the processor to shift an origin of the input image in each color plane such that the input image occupies all four quadrants of a Cartesian coordinate system. Further, the instructions direct the processor to perform an inverse mapping operation for an output image in each color plane to generate a map of locations of pixels in the output image that correspond to locations of pixels in the input image. The instructions then direct the processor to determine a quadrant of a Cartesian coordinate system that corresponds to the locations of pixels in the input image in each color plane and copy a value for each pixel in the input image to a pixel value in a corresponding position in the output image for each color plane. The instructions may then have the processor combine the output image for each color plane to form an image that is corrected for lateral chromatic aberration and pincushion/barrel distortion.
  • the image collection system comprises a filter configured to remove aliasing artifacts formed during the copying process.
  • the image processing system may have a network interface controller to transfer images to an external device.
  • the image collection system may have a digital image storage device to store images before or after processing.
  • the digital image storage device may be a disk drive, a recordable optical disk, a digital tape, or any combinations thereof.
  • Fig. 1 is a diagram that is useful in explaining chromatic aberrations.
  • Fig. 2 is a diagram that is useful in explaining lateral chromatic aberrations.
  • Fig. 3 is a diagram that is useful in explaining pincushion distortions.
  • Fig. 4 is a diagram that is useful in explaining barrel distortions.
  • Fig. 5 is a diagram that presents an overview of an inverse mapping function, in accordance with an exemplary embodiment of the present invention.
  • Fig. 6 is a diagram showing a polar coordinate system superimposed over a distorted image on a Cartesian coordinate system, which may be used to calculate a radial pixel shift function, in accordance with an exemplary embodiment of the present invention.
  • Fig. 7 is a process flow diagram showing a method of collecting and correcting an image, in accordance with an exemplary embodiment of the present invention.
  • Fig. 8 is a process flow diagram showing a detailed method of correcting an image in a single color plane to remove distortions, in accordance with an exemplary embodiment of the present invention.
  • Fig. 9 is a diagram that is useful in explaining the centering of an image on a Cartesian coordinate plane, in accordance with an exemplary embodiment of the present invention.
  • Fig. 10 is a process flow diagram showing a detailed method of identifying on which side of a vertical axis in a Cartesian coordinate plane a horizontal pixel is located, in accordance with an exemplary embodiment of the present invention.
  • Fig. 11 is a process flow diagram showing a detailed method of identifying on which side of a horizontal axis in a Cartesian coordinate plane a vertical pixel is located, in accordance with an exemplary embodiment of the present invention.
  • Fig. 12 is a drawing that illustrates an input image, in accordance with an exemplary embodiment of the present invention.
  • Fig. 13 is a drawing that illustrates an input image in which the quadrants of the input pixels have not been determined and, thus, all four quadrants have been forced to have the same image.
  • Fig. 14 is a block diagram of an image collection device, in accordance with an exemplary embodiment of the present invention.
  • an image processing system may be embedded in an image collection device, such as a digital camera, a digital video camera, and the like, to correct lateral chromatic aberrations and curvilinear distortions as the images are collected.
  • the correction of the collected images may be performed in a single step along with any desired color plane rotation, horizontal or vertical panning, and image scaling, wherein all corrections are made without the generation of intermediate images. As discussed in detail below, this may be performed by inverse mapping of the output image to a predicted input image in each color plane. The map of pixels in the output image to pixels in the input image may then be used to transfer the corresponding pixel in the actual input image to the correct pixel location in the output image.
  • the map is generated from lens characteristics and user inputs, such as the desired pan and rotation factors.
  • the elimination of intermediate images may reduce the number of artifacts generated by the image correction process as well as lower the amount of memory required for the processing. Further, the image correction process may reduce the complexity of lens systems required for collecting images and, thus, reduce the cost or weight of an image collection system. Further, effective correction of distorted images may provide higher quality images than may normally be obtained from image collection systems.
  • Fig. 1 is a diagram that is useful in explaining chromatic aberrations.
  • a light beam 102 is aligned along an axis 104 and impinges on a lens 106.
  • the lens 106 focuses the light beam 102 toward a desired image plane 108.
  • the material of the lens 106 will generally show chromatic dispersion, wherein the refractive index of the lens 106 depends on the wavelength of the light impinging on the lens 106. Accordingly, while one wavelength of light, for example, yellow light 110, may be focused at the desired image plane 108, the refractive index for blue light 112 will be higher, leading to a higher angle of refraction from the lens 106.
  • the focal point 114 of the blue light 112 may land in front of the desired image plane 108.
  • a red light 116 may have a lower index of refraction in the lens 106 than the yellow light 110, leading to less refraction by the lens 106, providing a focal point 118 that is beyond the desired image plane 108.
  • Fig. 2 is a diagram that is useful in explaining lateral chromatic aberrations.
  • a light beam 202 is aligned along an axis 204 that is not aligned with an axis 206 of a lens 208 and desired image plane 210.
  • the light beam 202 impinges on the lens 208 and is focused toward the desired image plane 210.
  • different wavelengths of light are refracted at different angles by the lens 208.
  • a yellow light 212 may have a focal point 214 that lands at a correct position on the desired image plane 210
  • a blue light 216 may have a focal point 218 that is offset to one side of the yellow light 212.
  • a red light 220 may have a focal point 222 that is offset on the opposite side of the yellow light 212 from the blue light 216.
  • This blurring of the colors may cause offset color fringes, e.g., magenta or green fringes, to appear on one side of an object.
  • Chromatic aberrations are not the only distortions that may be caused by optical elements, such as lenses. Curvilinear distortions, as discussed with respect to Figs. 3 and 4, may also be an issue.
  • Curvilinear distortions are distortions in which straight lines in a subject appear to be curved in an image.
  • Various types of curvilinear distortions exist, including pincushion and barrel distortions as discussed with respect to Figs. 3 and 4.
  • a subject 302 is focused along an axis 304 through a lens 306 to form an image 308 at an image plane 310.
  • the desired mapping of points from the subject 302 to the image 308 is illustrated by the rays 312.
  • the rays may not land where they are expected, as indicated by ray 316. This may cause the sides of the subject 302 to appear to curve inwards in the image 308.
  • the placement of an aperture or stop 402 between the subject 302 and the lens 306 may make rays 404 land in different places than expected, as indicated by rays 406. This distortion may make the sides of the subject 302 appear to curve outwards in the image 308.
  • Fig. 5 is a diagram of an inverse mapping function in accordance with an exemplary embodiment of the present invention. As illustrated in Fig. 5, an output image 502 is projected through an inverse mapping function 504 to an input image 506, wherein each output pixel 508 is mapped to a corresponding input pixel 510. The data at each mapped input pixel 510 may then be copied to the corresponding output pixel 508 in the output image 502, as indicated by line 512.
  • a filter core 514 may be included in the copying process to prevent the generation of artifacts from aliasing.
  • the distortions created by the optical elements used for the image collection are determined.
  • Lenses are generally "radial" with respect to distortion and lateral chromatic aberration, sometimes referred to as LCA, i.e., a lens will have a similar distortion profile at a certain distance from the center of the lens around the circumference of the lens. Accordingly, the use of a polar coordinate system is convenient for describing the distortions and calculating a pixel shift function to correct for the distortions.
  • Fig. 6 is a diagram showing a polar coordinate system superimposed over a distorted image on a Cartesian coordinate system, which may be used to calculate a radial pixel shift function in accordance with an exemplary embodiment of the present invention.
  • the image 602 shown in this illustration 600 has a pincushion distortion.
  • a Cartesian coordinate system is imposed over the image 602, wherein the vertical axis 604 is labeled v/y to indicate the input and output image axes, respectively.
  • the horizontal axis 606 is labeled u/x to indicate the input and output image axes.
  • the polar coordinates are represented by the vector 608 illustrating the angle of a point from the center, and the circle 610 representing the distance of the point from the center.
  • the vector 608 and circle 610 represent the radial pixel coordinate of the input image, e.g., rsrc 612.
  • the radial pixel coordinate of the output image e.g., rdst 614, may be expected to lie along the vector 608.
  • rdst 614 would be at a farther distance out from the center than rsrc 612.
  • the correction of the LCA in this algorithm may be performed by a 4th order polynomial, e.g., as shown in Equation 1.
  • in Equation 1, rsrc 612 represents the radial pixel coordinate of the input image and rdst 614 represents the radial pixel coordinate of the output image.
  • the coefficients a, b, c, and d of the polynomial are the lens parameters measured for the specific LCA correction.
  • rsrc(rdst) = a·rdst⁴ + b·rdst³ + c·rdst² + d·rdst (Equation 1)
  • Other correction functions may be used in place of the 4th order polynomial presented in Equation 1.
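  As an illustrative sketch (not the patent's implementation), the radial model of Equation 1 might be coded as follows; the coefficient values used in the comment are made up for demonstration, not measured lens parameters.

  ```python
  def r_src(r_dst, a, b, c, d):
      """Equation 1: map an output-image radius to the input-image
      radius it should be sampled from, via a 4th-order polynomial."""
      return a * r_dst**4 + b * r_dst**3 + c * r_dst**2 + d * r_dst

  # With a = b = c = 0 and d = 1 the mapping is the identity;
  # small nonzero higher-order terms model the radial distortion.
  ```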
  • the center of the polar coordinate system is equal to the center of the image.
  • the origin of the distortion and correction function may be offset from the center of the image.
  • An offset of the distortion from the center of the lens may be caused by deviations during the manufacturing of lenses.
  • accordingly, the origin of the LCA correction function, e.g., Equation 1, may be offset from the center of the image.
  • the polar coordinates may be geometrically mapped to the Cartesian coordinate system.
  • the radial distance to the origin of the output image, rdst, may be represented by Equation 2, in which x and y represent the pixel coordinates in the output image: rdst = √(x² + y²) (Equation 2).
  • the radial distance to the origin of the input image, rsrc, is represented by Equation 3, in which u and v represent the pixel coordinates in the input image: rsrc = √(u² + v²) (Equation 3).
  • Substituting rdst in Equation 1 with Equation 2 provides the radial distance from the Cartesian coordinate system origin of the input image as a function of the output image coordinates with LCA correction, as shown in Equation 4.
  • rsrc(x, y) = a·(√(x² + y²))⁴ + b·(√(x² + y²))³ + c·(√(x² + y²))² + d·√(x² + y²) (Equation 4)
  • Equation 4 does not describe the input pixel coordinates u and v as a function of the output pixel coordinates x and y.
  • Equation 3 may be solved for u and v, as shown in Equations 5 and 6: u = √(rsrc² − v²) (Equation 5) and v = √(rsrc² − u²) (Equation 6).
  • because the mapping is radial, the input and output coordinates lie along the same vector, so v/u = y/x (Equation 7). Equation 7 may then be solved for u and v, resulting in Equations 8 and 9: u = x·v/y (Equation 8) and v = y·u/x (Equation 9).
  • Substituting u and v of Equations 8 and 9 into Equations 5 and 6 results in Equations 10 and 11: u = √(rsrc² − (y·u/x)²) (Equation 10) and v = √(rsrc² − (x·v/y)²) (Equation 11).
  • Solving Equations 10 and 11 for u and v yields Equations 12 and 13: u = rsrc / √(1 + y²/x²) (Equation 12) and v = rsrc / √(1 + x²/y²) (Equation 13). Substituting rsrc from Equation 4 into Equations 12 and 13 provides the final pixel coordinates of the input image by inverse mapping using the lens specific coefficients a, b, c, d for LCA correction, as shown in Equations 14 and 15.
  • the origin of the coordinate system is located in the center of the input and output image 600.
  • the horizontal input pixel coordinate provided by inverse mapping, when x ≠ 0, is shown in Equation 14: u(x, y) = rsrc(x, y) / √(1 + y²/x²) (Equation 14).
  • the vertical input pixel coordinate provided by inverse mapping, when y ≠ 0, is shown in Equation 15: v(x, y) = rsrc(x, y) / √(1 + x²/y²) (Equation 15).
  • in Equations 14 and 15, x and y represent the pixel coordinate in the output image, u and v represent the pixel coordinate in the input image, and a, b, c, and d represent the lens specific coefficients for the LCA correction.
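  A minimal sketch of Equations 14 and 15, assuming coordinates already centered on the distortion origin. It returns coordinate magnitudes only, since the quadrant of the input pixel is restored in a later step of the method.

  ```python
  import math

  def inverse_map(x, y, a, b, c, d):
      """Equations 14/15: magnitudes of the input pixel coordinate
      (u, v) corresponding to the output pixel (x, y)."""
      r_dst = math.hypot(x, y)
      # Equation 1/4: polynomial radial correction.
      r_src = a * r_dst**4 + b * r_dst**3 + c * r_dst**2 + d * r_dst
      u = r_src / math.sqrt(1.0 + (y * y) / (x * x)) if x != 0 else 0.0
      v = r_src / math.sqrt(1.0 + (x * x) / (y * y)) if y != 0 else 0.0
      return u, v
  ```

  With d = 1 and the other coefficients zero the polynomial is the identity and the function simply returns (|x|, |y|).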
  • the equations provided above may be used in a general procedure to correct distortions in an image.
  • Fig. 7 is a process flow diagram showing a method 700 of correcting an image to remove distortions in accordance with an exemplary embodiment of the present invention.
  • the method 700 may be implemented in an image collection device, as discussed with respect to Fig. 14, or may be implemented on a standalone system for post collection image processing.
  • the method 700 begins with the collection of image data, as indicated in block 702. This may be performed using a digital still camera, a digital video camera, a digitized image, or any other image capture technique.
  • the method 700 may be used to correct a single image or to correct each image in a sequence of images, for example, in a video clip.
  • once the image data is collected, it is stored for processing, as indicated in block 704.
  • the image may be stored in a video collection device, for example, on a disk drive, in a hardware memory, or on magnetic tape, or may be transferred to an external unit, such as a computer or network, for storage.
  • the stored image is generally divided into separate color planes for correction of distortions, as indicated in block 706, wherein each color plane is then stored for individual processing.
  • an image may be divided into red, green, and blue or RGB color planes.
  • the present techniques are not limited to additive color models, such as the RGB color planes, but may also be used to correct images in other color models, such as a subtractive model using cyan, magenta, and yellow.
  • each color plane is corrected to remove distortions, as indicated in block 708.
  • the image correction generally uses Equations 14 and 15 in the procedures discussed in detail with respect to Fig. 8.
  • the individual color planes may be recombined to form the final corrected image, as indicated in block 710.
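  The per-plane flow of method 700 (blocks 706-710) can be sketched as below; `correct_plane` is a hypothetical placeholder for the per-plane correction of Fig. 8, and the list-of-rows image representation is an illustrative choice.

  ```python
  def correct_image(rgb_pixels):
      """Split an image into color planes, correct each plane
      independently, and recombine (blocks 706-710).
      rgb_pixels: list of rows of (r, g, b) tuples."""
      planes = split_planes(rgb_pixels)                 # block 706
      corrected = [correct_plane(p) for p in planes]    # block 708, per plane
      return recombine(corrected)                       # block 710

  def split_planes(rgb_pixels):
      return [[[px[ch] for px in row] for row in rgb_pixels] for ch in range(3)]

  def correct_plane(plane):
      # Hypothetical placeholder: a real implementation applies the
      # inverse-mapping correction of Fig. 8 with plane-specific coefficients.
      return plane

  def recombine(planes):
      r, g, b = planes
      return [[(r[j][i], g[j][i], b[j][i]) for i in range(len(r[j]))]
              for j in range(len(r))]
  ```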
  • Fig. 8 is a process flow diagram showing a method 800 of correcting a single color plane of an image to remove distortions, in accordance with an exemplary embodiment of the present invention.
  • Fig. 8 generally illustrates the main function blocks of the LCA correction algorithm and their relations.
  • the main function of the method 800 is the inverse mapping of an output image pixel position, HOutPixPos and VOutPixPos, as indicated by reference number 802, onto an input image pixel position, HInPixPos and VInPixPos, as indicated by reference number 804.
  • the mapping function is then used to copy the value of the pixel at the calculated input point into the value of the output image pixel at the mapped point.
  • a filter core 512 may be used to protect from aliasing artifacts that may result from the mapping.
  • a horizontal/vertical 2 tap linear filter or a horizontal/vertical 32 tap linear filter may be used as the filter core 512, depending on the filtering and processing overhead desired.
  • the filter core 512 may follow some basic rules to enhance the image. For example, if HInPixPos or VInPixPos is negative or greater than the input image width or height, the pixel is located on the background of the input image and the pixel value at that position may be loaded with the background color, e.g., black. This may be the case if the zoomed out image is smaller than the output image.
  • the fractional parts of HInPixPos and VInPixPos generally represent the sub phases of the pixel. The sub phases may be used for the calculation of the output image pixel values.
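  A sketch of a 2-tap (bilinear) filter core following the rules above: out-of-range positions return the background color, and the fractional parts (sub phases) weight the neighboring input pixels. The function and parameter names are illustrative, not from the patent.

  ```python
  import math

  def sample(plane, h_pos, v_pos, background=0.0):
      """Bilinear lookup at fractional position (h_pos, v_pos)."""
      height, width = len(plane), len(plane[0])
      if h_pos < 0 or v_pos < 0 or h_pos > width - 1 or v_pos > height - 1:
          return background                       # pixel lies on the background
      h0, v0 = int(math.floor(h_pos)), int(math.floor(v_pos))
      fh, fv = h_pos - h0, v_pos - v0             # sub phases of the pixel
      h1, v1 = min(h0 + 1, width - 1), min(v0 + 1, height - 1)
      top = plane[v0][h0] * (1 - fh) + plane[v0][h1] * fh
      bot = plane[v1][h0] * (1 - fh) + plane[v1][h1] * fh
      return top * (1 - fv) + bot * fv
  ```

  A 32-tap core, as mentioned above, would widen the weighting window in the same spirit at higher processing cost.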
  • the method 800 may calculate a normalization factor, Norm, from any combination of four possible factors: input image width, input image height, input image diagonal, or a user entered value.
  • the normalization factor based on the input image width, or InWidth, is generally calculated by the scaling equation shown in Equation 16.
  • the normalization factor based on the input image height, or InHeight, is generally calculated by the scaling equation shown in Equation 17.
  • the normalization factor based on the image diagonal is generally calculated by the scaling equation shown in Equation 18.
  • the normalization factor based on a user entered value is generally calculated by the scaling equation shown in Equation 19.
  • a center point for the input image may also be calculated in block 806 by Equations 20 and 21: InWidthHalf = InWidth / 2 (Equation 20) and InHeightHalf = InHeight / 2 (Equation 21).
  • Norm represents a normalization factor that is used for the inverse mapping function calculation. Further, InWidth represents the width of the input image, InHeight represents the height of the input image, UserValue represents a value entered by a user for the normalization, and InWidthHalf and InHeightHalf represent the center point of the input image.
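  The scaling constants of Equations 16-19 are not legible in this copy; the sketch below assumes each normalization factor is half the chosen dimension, so that the polynomial of Equation 1 sees radii of roughly unit magnitude. The half width/height center point follows Equations 20 and 21 directly.

  ```python
  import math

  def norm_and_center(in_width, in_height, mode="width", user_value=None):
      """Normalization factor and image center (block 806).
      Assumption: Norm maps half of the chosen dimension to 1.0."""
      if mode == "width":
          norm = in_width / 2.0               # Equation 16 (assumed form)
      elif mode == "height":
          norm = in_height / 2.0              # Equation 17 (assumed form)
      elif mode == "diagonal":
          norm = math.hypot(in_width, in_height) / 2.0  # Equation 18 (assumed)
      else:
          norm = float(user_value)            # Equation 19 (assumed form)
      in_width_half = in_width / 2.0          # Equation 20
      in_height_half = in_height / 2.0        # Equation 21
      return norm, (in_width_half, in_height_half)
  ```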
  • an image dimension is described by positive integer values.
  • the image is located in the first quadrant, as shown by image 902 in Fig. 9.
  • it is convenient to have an image centered within the Cartesian coordinate system, as shown by image 904 in Fig. 9.
  • horizontal and vertical offsets are calculated to reposition the image onto the desired place within the coordinate system, as indicated in block 808 of Fig. 8.
  • after the transformation, e.g., the inverse map, is performed, the image will be shifted back to a positive coordinate system.
  • the offset values used to shift the center of the image are calculated by Equations 22 and 23.
  • HCartOutPixPos = HOutPixPos − (HOriginDevIntern · Zoom) − OutWidthHalf (Equation 22)
  • VCartOutPixPos = VOutPixPos − (VOriginDevIntern · Zoom) − OutHeightHalf (Equation 23)
  • HOutPixPos and VOutPixPos, i.e., x' and y', represent positive position values of the "real" output image coordinate system.
  • HCartOutPixPos and VCartOutPixPos represent the shifted position values of the output image coordinate system, i.e., x and y.
  • Zoom is a scaling factor for the total image
  • OutWidthHalf and OutHeightHalf represent the position of the center of the output image.
  • OutWidthHalf and OutHeightHalf are calculated by the formulae in Equations 24 and 25: OutWidthHalf = OutWidth / 2 (Equation 24) and OutHeightHalf = OutHeight / 2 (Equation 25).
  • OutWidth is an entered value representing the width of the output image and OutHeight is an entered value representing the height of the output image.
  • HOriginDevIntern and VOriginDevIntern used in Equations 22 and 23 are the deviations of the coordinate origin from the image center and are calculated from the values entered for the origin deviation, HOriginDev and VOriginDev, of the input image, the rotation angle, RotationAngle, and the horizontal and vertical panning, HPan and VPan. The calculation is performed using a rotation matrix, as shown in the formulae in Equations 26-29.
  • HOriginDevIntern = ((HOriginDev + HPan) · CosRot) + ((VOriginDev + VPan) · SinRot) (Equation 26)
  • VOriginDevIntern = ((VOriginDev + VPan) · CosRot) − ((HOriginDev + HPan) · SinRot) (Equation 27)
  • CosRot and SinRot represent the cosine and sine, respectively, of the desired image rotation angle: CosRot = cos(RotationAngle) (Equation 28) and SinRot = sin(RotationAngle) (Equation 29). Further, the parameters HOriginDevIntern, VOriginDevIntern, SinRot, and CosRot, calculated in block 808, are used for additional image processing steps in the method 800.
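  Equations 26-29 amount to rotating the combined origin deviation and pan by the rotation angle; a sketch, with the angle taken in radians (the unit is not specified in the text):

  ```python
  import math

  def origin_dev_intern(h_origin_dev, v_origin_dev, h_pan, v_pan, rotation_angle):
      """Rotate the combined deviation/pan by RotationAngle (Equations 26-29)."""
      cos_rot = math.cos(rotation_angle)   # Equation 28
      sin_rot = math.sin(rotation_angle)   # Equation 29
      h = (h_origin_dev + h_pan) * cos_rot + (v_origin_dev + v_pan) * sin_rot  # Eq. 26
      v = (v_origin_dev + v_pan) * cos_rot - (h_origin_dev + h_pan) * sin_rot  # Eq. 27
      return h, v
  ```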
  • the inverse mapping function is calculated in block 810, generally using the formulae presented in Equations 14 and 15. However, two additional parameters are also used. The first of these additional factors is the scaling factor, Zoom, discussed with respect to block 808 above. As the inverse mapping function is generally radial in nature, the scaling factors for the horizontal and vertical direction are the same. Zoom scales the radial position of the output image pixel position, rdst, as discussed with respect to Fig. 6. The second additional factor used in block 810 to calculate the inverse mapping function is Norm, calculated in block 806. For convenience in implementation, the numerators and denominators of Equations 14 and 15 are separately calculated.
  • the horizontal and vertical inverse mapping functions are calculated as AbsHCartInPixPos = rsrc / UDenominator (Equation 30) and AbsVCartInPixPos = rsrc / VDenominator (Equation 31).
  • rsrc is a function of rdst, as shown in Equation 32: rsrc(rdst) = a·rdst⁴ + b·rdst³ + c·rdst² + d·rdst (Equation 32).
  • rdst may be calculated by the formula in Equation 33: rdst = (Zoom · √(HCartOutPixPos² + VCartOutPixPos²)) / Norm (Equation 33).
  • the denominators of Equations 30 and 31 may be calculated using the formulae shown in Equations 34 and 35: UDenominator = √(1 + VCartOutPixPos² / HCartOutPixPos²) (Equation 34) and VDenominator = √(1 + HCartOutPixPos² / VCartOutPixPos²) (Equation 35).
  • rsrc represents the radial pixel coordinate of the input image
  • rdst represents the radial pixel coordinate of the output image.
  • HCartOutPixPos and VCartOutPixPos represent the shifted position values of the output image, i.e., x and y.
  • the factors a, b, c, and d are the lens specific coefficients for the LCA correction, Norm is the normalization factor, and Zoom is the overall image scaling factor.
  • UDenominator and VDenominator represent the denominators of the - - horizontal and vertical inverse mapping functions
  • AbsHCartInPixPos and AbsVCartInPixPos represent the horizontal and vertical input coordinates without quadrant mapping.
  • Solving Equations 30 and 31 for AbsHCartInPixPos and AbsVCartInPixPos is done in the radial domain and, thus, no information is provided to determine in which quadrant of the Cartesian coordinate system u and v are located. Further, HCartOutPixPos² and VCartOutPixPos² always result in absolute values. Therefore, the origination quadrant must be determined to properly locate the coordinates of the pixel in the input image, HCartInPixPos and VCartInPixPos, as indicated in block 812.
  • the method 1000 starts with the value of HCartOutPixPos, as indicated in block 1002.
  • the method 1000 determines whether HCartOutPixPos is zero and, if so, sets HCartInPixPos to the value of the offset used to shift the axis of the Cartesian coordinate system, HOriginDevIntern, as indicated in block 1006. If not, the method determines if HCartOutPixPos is negative, as indicated in block 1008. If so, then HCartInPixPos is set to the value of the offset minus the absolute value of the input pixel, as indicated in block 1010.
  • otherwise, HCartInPixPos is set to the value of the offset plus the absolute value of the input pixel, as indicated in block 1012.
  • An analogous technique may be used to determine the quadrant for the vertical pixel of the input image, as shown by the method 1100 in Fig. 11.
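  The quadrant decision of methods 1000 and 1100 can be sketched for the horizontal coordinate as follows (the vertical case is analogous, using VCartOutPixPos and VOriginDevIntern):

  ```python
  def quadrant_map(cart_out_pix_pos, abs_cart_in_pix_pos, origin_dev_intern):
      """Restore the sign lost in the radial domain (method 1000): the input
      coordinate inherits the quadrant of the output coordinate."""
      if cart_out_pix_pos == 0:
          return origin_dev_intern                             # block 1006
      if cart_out_pix_pos < 0:
          return origin_dev_intern - abs(abs_cart_in_pix_pos)  # block 1010
      return origin_dev_intern + abs(abs_cart_in_pix_pos)      # block 1012
  ```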
  • HCartOutPixPos and VCartOutPixPos represent the zero-shifted Cartesian coordinates of a pixel in the output image, i.e., x and y
  • HCartlnPixPos and VCartlnPixPos represent the zero-shifted Cartesian coordinates of a pixel in the input image, i.e., u and v.
  • HOriginDevIntern and VOriginDevIntern represent the horizontal and vertical deviation of the output image origin as a function of desired origin deviation, rotation, and panning, as calculated using the formulae shown in Equations 26 and 27.
  • AbsHCartInPixPos and AbsVCartInPixPos represent the absolute values of the horizontal and vertical coordinates of a pixel in the input image without quadrant mapping and are calculated by the formulae in Equations 30 and 31.
  • the final step that may be performed in producing the inverse map is rotation and panning of the input image, as indicated in block 814.
  • the horizontal and vertical coordinates of the image can be rotated and translated around the center of the output image. This procedure may be used for effects or for imager registration correction.
  • the image axis is also returned to the lower left corner, resulting in all positive coordinates for the pixels.
  • HInPixPos = (HCartInPixPos · CosRot) − (VCartInPixPos · SinRot) + InWidthHalf + HPan (Equation 36)
  • VInPixPos = (VCartInPixPos · CosRot) + (HCartInPixPos · SinRot) + InHeightHalf + VPan (Equation 37)
  • HCartInPixPos and VCartInPixPos represent the coordinate of a pixel on the image after centering the axis.
  • CosRot and SinRot represent the cosine and sine of a desired rotation of the image and are calculated using the formulae in Equations 28 and 29.
  • InWidthHalf and InHeightHalf are the coordinates of the center of the input image, as discussed with respect to block 806.
  • HPan and VPan are user-entered values corresponding to the desired horizontal and vertical movement of the input image, as discussed with respect to block 808.
  • HInPixPos and VInPixPos represent the coordinates of a pixel in an input image prior to centering the horizontal and vertical axes. These values are used by the Filter Core 512 to copy the value of an input pixel, InPixVal, at a particular coordinate, HInPixPos, VInPixPos, to the value of the output pixel, OutPixVal, at the appropriate coordinate, HOutPixPos, VOutPixPos.
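The rotation-and-panning step of Equations 36 and 37 can be sketched as follows. This is a minimal illustration, assuming the conventional rotation-matrix signs (the extracted equations are partially garbled) and a rotation angle given in degrees; the function name and units are assumptions:

```python
import math

def rotate_and_pan(h_cart, v_cart, rot_deg, in_width_half, in_height_half, h_pan, v_pan):
    """Apply Equations 36 and 37: rotate the centered coordinates (h_cart, v_cart),
    return the axis to the lower-left corner via the image-center offsets, and
    apply the user-entered panning offsets."""
    cos_rot = math.cos(math.radians(rot_deg))  # CosRot, Equation 28
    sin_rot = math.sin(math.radians(rot_deg))  # SinRot, Equation 29
    h_in = (h_cart * cos_rot) - (v_cart * sin_rot) + in_width_half + h_pan   # Equation 36
    v_in = (v_cart * cos_rot) + (h_cart * sin_rot) + in_height_half + v_pan  # Equation 37
    return h_in, v_in
```

With a zero rotation angle the result reduces to a pure translation by the image-center offsets plus the panning values, which matches the description of blocks 806 and 808.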
  • Fig. 14 is a block diagram of an image collection device 1400 that may be used in exemplary embodiments of the present techniques.
  • light 1402 reflected from a scene is collected and focused by optical elements 1404.
  • the focused light 1406 is projected onto a detector 1408, which may be, for example, a charge coupled device or any other kind of multi-channel light conversion system.
  • the focused light 1406 is converted by the detector 1408 into an electrical signal, and is then transferred over signal lines 1410 to a detector controller 1412. In the detector controller 1412, the individual signals from the detector 1408 are converted into a digital image.
  • the digital image may then be transferred by a processor 1414 over a bus 1416 to a random access memory, or RAM, 1418 for further processing.
  • the RAM 1418 may be a DRAM, an SRAM, a flash memory module, or any other kind of memory unit capable of high speed access.
  • the optical elements 1404 may be tied to the bus 1416 to allow the optical elements 1404 to be controlled by the processor 1414.
  • the processor 1414 may adjust the focus, the stop, or other properties of the optical elements 1404 through the bus 1416.
  • the processor 1414 may be controlled by image collection and processing programs contained in a read only memory, or ROM, 1420 that is accessible from the bus 1416.
  • the programs do not have to be in a ROM 1420, but may be contained in any type of long term memory unit, such as a disk drive, a flash card, or an EEPROM, among others.
  • the programs in the ROM 1420 may include the image correction procedures discussed with respect to Figs. 7-13.
  • the digital image may be stored before or after processing in a separate digital image storage 1422, such as a digital video tape, a recordable optical disk, a hard drive, and the like.
  • the digital image storage 1422 may also be combined with the program storage.
  • a disk drive may be used to store both programs and digital images.
  • the images may be displayed during or after collection on a display unit 1424 that may be tied to the bus 1416.
  • Controls 1426 may also be connected to the bus 1416 to control the collection and processing of the image by the processor 1414.
  • Such controls 1426 may include keypads, selection knobs, and separate buttons for functions such as zooming, focusing, and starting the collection of images, among others.
  • Images may be transferred from the image collection device 1400 through a network interface controller, or NIC, 1428 that may be tied to the bus 1416.
  • the NIC 1428 may be connected to an external LAN 1430, which may be used to transfer the images, either before or after processing, to an external device 1432 located on the LAN 1430.
  • the NIC 1428 may be directly coupled to an area of the RAM 1418 to allow direct memory access, or DMA, transfers to occur directly to and from the RAM 1418 of the digital collection device. This may accelerate data transfers when a large amount of data is involved, such as in a high definition digital video camera.
  • the controls 1426 and display 1424 may be combined into a single unit.
  • the display 1424 may be directly tied to the detector controller 1412 to off-load the display function from the processor 1414.
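Taken together, the precomputed inverse map drives a simple copy step in the Filter Core 512: for each output pixel, the stored input coordinate is looked up and the input pixel value copied over, independently for each color plane. A minimal sketch follows; the nearest-neighbour rounding and the clamping at the image border are assumptions, since the text leaves the sampling scheme to the Filter Core:

```python
def apply_inverse_map(in_plane, inverse_map):
    """Copy input pixel values to output positions for one color plane, i.e.
    OutPixVal[y][x] = InPixVal[v][u] where (u, v) = inverse_map[y][x].

    in_plane    -- 2-D list of pixel values for one color plane
    inverse_map -- 2-D list of (u, v) input coordinates, one per output pixel
    """
    in_height = len(in_plane)
    in_width = len(in_plane[0])
    out_height = len(inverse_map)
    out_width = len(inverse_map[0])
    out_plane = [[0] * out_width for _ in range(out_height)]
    for y in range(out_height):
        for x in range(out_width):
            u, v = inverse_map[y][x]
            # Round to the nearest input pixel and clamp to the image bounds
            # (an assumed sampling policy, not specified by the patent).
            u = min(max(int(round(u)), 0), in_width - 1)
            v = min(max(int(round(v)), 0), in_height - 1)
            out_plane[y][x] = in_plane[v][u]
    return out_plane
```

Running this with a per-plane inverse map for each separated color, then recombining the planes, yields the corrected image described in the abstract: lateral chromatic aberration and pincushion/barrel distortion removed in a single copy pass.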

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A method (800) and system for processing image data. The method (800) involves using lens characteristics and user inputs for each of a plurality of separated colors to generate an inverse map of pixel locations in an output image from pixel locations in an input image (810). The image data is split into the plurality of color planes, and the inverse map is then used to copy the pixel value at each location in the input image to the corresponding location in the output image (800). The image planes are recombined, yielding an image free of lateral chromatic aberration as well as pincushion/barrel distortion. In addition, user-entered values allow panning and zooming (812) within the image. The correction is computed in a single step, avoiding the errors caused by multiple correction calculations.
PCT/EP2009/051075 2008-03-12 2009-01-30 Method and system for lens aberration correction WO2009112309A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP08102545.4 2008-03-12
EP08102545 2008-03-12

Publications (2)

Publication Number Publication Date
WO2009112309A2 true WO2009112309A2 (fr) 2009-09-17
WO2009112309A3 WO2009112309A3 (fr) 2009-12-10

Family

ID=41065594

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2009/051075 WO2009112309A2 (fr) 2009-01-30 Method and system for lens aberration correction

Country Status (1)

Country Link
WO (1) WO2009112309A2 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010086037A1 (fr) * 2009-01-30 2010-08-05 Thomson Licensing Method and system for lens aberration detection

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040218071A1 (en) * 2001-07-12 2004-11-04 Benoit Chauville Method and system for correcting the chromatic aberrations of a color image produced by means of an optical system
EP1650705A1 (fr) * 2003-07-28 2006-04-26 Olympus Corporation Apparatus for processing images, associated method, and method for correcting distortion

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ASARI V K ET AL: "A Pipelined Architecture for Real-Time Correction of Barrel Distortion in Wide-Angle Camera Images" IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 15, no. 3, 1 March 2005 (2005-03-01), pages 436-444, XP011127220 ISSN: 1051-8215 *
NG Y M ET AL: "Correcting the chromatic aberration in barrel distortion of endoscopic images" SCI 2003. 7TH WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS PROCEEDINGS IIIS ORLANDO, FL, USA, vol. 10, 2003, pages 55-60 Vol.10, XP002550033 ISBN: 980-6560-01-9 *
REBIAI M ET AL: "Image distortion from zoom lenses: modeling and digital correction" IBC 1992. INTERNATIONAL BROADCASTING CONVENTION. (CONF. PUBL. NO.358) IEE LONDON, UK, 1992, pages 438-441, XP006515214 ISBN: 0-85296-547-8 *
YAMASHITA T ET AL: "A lateral chromatic aberration correction system for ultrahigh-definition color video camera" PROCEEDINGS OF SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING - SENSORS, CAMERAS, AND SYSTEMS FOR SCIENTIFIC/INDUSTRIAL APPLICATIONS VII - PROCEEDINGS OF SPIE-IS AND T ELECTRONIC IMAGING 2006 SPIE US, vol. 6068, 2006, XP002550035 *


Also Published As

Publication number Publication date
WO2009112309A3 (fr) 2009-12-10

Similar Documents

Publication Publication Date Title
JP5284537B2 (ja) Image processing apparatus, image processing method, image processing program, and imaging apparatus using the same
JP5546229B2 (ja) Image processing method, image processing apparatus, imaging apparatus, and image processing program
US9041833B2 (en) Image processing method, image processing apparatus, and image pickup apparatus
JP6299124B2 (ja) Projection system, image processing apparatus, projection method, and program
JP5188651B2 (ja) Image processing apparatus and imaging apparatus using the same
US8659672B2 (en) Image processing apparatus and image pickup apparatus using same
US8885067B2 (en) Multocular image pickup apparatus and multocular image pickup method
CN106683071B (zh) Image stitching method and device
US11403739B2 (en) Methods and apparatus for retargeting and prioritized interpolation of lens profiles
JP5441652B2 (ja) Image processing method, image processing apparatus, imaging apparatus, and image processing program
CN102326380B (zh) Image sensor apparatus and method with line-buffer-efficient lens distortion correction
JP5709911B2 (ja) Image processing method, image processing apparatus, image processing program, and imaging apparatus
WO2018029950A1 (fr) Calibration device, calibration method, optical device, imaging device, and projection device
US20180158175A1 (en) Digital correction of optical system aberrations
US8699820B2 (en) Image processing apparatus, camera apparatus, image processing method, and program
US9652847B2 (en) Method for calibrating a digital optical imaging system having a zoom system, method for correcting aberrations in a digital optical imaging system having a zoom system, and digital optical imaging system
US20090002574A1 (en) Method and a system for optical design and an imaging device using an optical element with optical aberrations
US20100033584A1 (en) Image processing device, storage medium storing image processing program, and image pickup apparatus
US20100246994A1 (en) Image processing device, image processing method, and image processing program
TW201618531A (zh) Image capture device and digital zoom method thereof
US8610801B2 (en) Image processing apparatus including chromatic aberration correcting circuit and image processing method
CN102227746A (zh) Stereoscopic image processing device, method, recording medium, and stereoscopic imaging apparatus
JP2000196939A (ja) Image forming apparatus free of distortion and lateral chromatic aberration, and method therefor
WO2009095422A2 (fr) Methods and apparatuses for processing chromatic aberrations and purple fringing
JP5479187B2 (ja) Image processing apparatus and imaging apparatus using the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09719307

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09719307

Country of ref document: EP

Kind code of ref document: A2