US20090046944A1 - Restoration of Color Components in an Image Model - Google Patents

Restoration of Color Components in an Image Model Download PDF

Info

Publication number
US20090046944A1
US20090046944A1 (Application No. US 11/632,093)
Authority
US
United States
Prior art keywords
image
restoration
degradation
colour component
colour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/632,093
Inventor
Radu Ciprian Bilcu
Sakari Alenius
Mejdi Trimeche
Markku Vehvilainen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Oyj
Original Assignee
Nokia Oyj
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Oyj filed Critical Nokia Oyj
Priority to US11/632,093
Assigned to NOKIA CORPORATION reassignment NOKIA CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VEHVILAINEN, MARKKU, BILCU, RADU CIPRIAN, ALENIUS, SAKARI, TRIMECHE, MEJDI
Publication of US20090046944A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20004 Adaptive image processing
    • G06T 2207/20012 Locally adaptive
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2209/00 Details of colour television systems
    • H04N 2209/04 Picture signal generators
    • H04N 2209/041 Picture signal generators using solid-state devices
    • H04N 2209/042 Picture signal generators using solid-state devices having a single pick-up sensor
    • H04N 2209/045 Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
    • H04N 2209/046 Colour interpolation to calculate the missing colour values
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N 23/661 Transmitting camera control signals through networks, e.g. control via the Internet

Definitions

  • This invention relates to image processing and particularly to a restoration of colour components in a system for storage or acquisition of digital images.
  • Blurring or degradation of an image can be caused by various factors, e.g. out-of-focus optics, or any other aberrations that result from the use of a wide-angle lens, or the combination of inadequate aperture value, focal length and lens positioning.
  • the movement of the camera, or the imaged subject can result in motion blurring of the picture.
  • with a short exposure time the number of photons captured is reduced, which results in high noise levels, as well as poor contrast in the captured image.
  • A defect block in the image can be replaced with the average of some or all of the surrounding blocks.
  • One example is to use three blocks that are situated above the defect.
  • spatial error concealment techniques attempt to hide a defect by forming a good reconstruction of the missing or corrupted pixels.
  • One of the methods is to find a mean of the pixels in an area surrounding the defect and to replace the defect with the mean pixel value.
  • a requirement for the variance of the reconstruction can be added to equal the variance of the area around the defect.
  • Bilinear interpolation can be applied to pixels on four corners of the defect rectangle. This makes a linear, smooth transition of pixel values across the defect area. Bilinear interpolation is defined by the pixel value being reconstructed, pixels at corners of the reconstructed pixel and a horizontal and vertical distance from the reconstructed pixel to the corner pixels.
  • Another method is edge-sensitive nonlinear filtering, which interpolates missing samples in an image.
  • the purpose of image restoration is to remove those degradations so that the restored images look as close as possible to the original scene.
  • the restored image can be obtained as the inverse process of the degradation.
  • Several methods to solve for this inverse mathematical problem are known from the prior art. However, most of these techniques do not consider the image reconstruction process in the modelling of the problem, and assume simplistic linear models. Typically, the solutions in implementations are quite complicated and computationally demanding.
  • Image restoration generally involves two important steps, the deblurring and noise filtering steps.
  • Some approaches for deblurring are known from related art. These approaches can be categorized into non-iterative and iterative techniques.
  • the solution is obtained through a one pass processing algorithm, e.g. Laplacian high pass filtering, unsharp masking, or frequency domain inverse filtering.
  • the iterative methods the result is refined during several processing passes.
  • the de-blurring process is controlled by a cost function that sets the criteria for the refining process, e.g. Least Squares method or adaptive Landweber algorithm. Usually, after a few iterations, there is not much improvement between adjacent steps.
  • the methods from related art are typically applied in restoration of images in high-end applications such as astronomy and medical imaging. Their use in consumer products is limited, due to the difficulty of quantifying the image gathering process and the typical complexity and computational power needed to implement these algorithms. Some of the approaches have been used in devices that have limited computational and memory resources.
  • the methods from the related art are typically designed as a post-processing operation, which means that the restoration is applied to the image after it has been acquired and stored. Each colour component has a different point-spread function, which is an important criterion that can be used to evaluate the performance of imaging systems. If the restoration is applied as post-processing, the information about the different blurring in each colour component is not relevant anymore.
  • the aim of this invention is to provide an improved way to restore images. This can be achieved by a method, a model, use of a model, a de-blurring method, a device, a module, a system, program modules and computer program products.
  • the method for forming a model for improving image quality of a digital image captured with an imaging module, the imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, said image consisting of at least one colour component, wherein degradation information of each colour component is found, an image degradation function is obtained and said each colour component is restored by said degradation function.
  • model for improving image quality of a digital image is provided, said model being obtainable by a claimed method.
  • use of the model is also provided.
  • the method for improving image quality of a digital image captured with an imaging module comprising at least imaging optics and an image sensor where the image is formed through the imaging optics, said image consisting at least of one colour component, wherein degradation information of each colour component of the image is found, a degradation function is obtained according to the degradation information and said each colour component is restored by said degradation function.
  • a method for restoration of an image wherein the restoration is implemented by an iterative restoration function where at each iteration a de-blurring method with regularization is implemented.
  • a system for determining a model for improving image quality of a digital image with an imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, said image consisting of at least one colour component, wherein the system comprises first means for finding degradation information of each colour component of the image, second means for obtaining a degradation function according to the degradation information, and third means for restoring said each colour component by said degradation function.
  • the imaging module comprising imaging optics and an image sensor for forming an image through the imaging optics onto the light sensitive image sensor wherein a model for improving image quality is related to said imaging module.
  • a device comprising an imaging module is provided.
  • the program module for improving an image quality in a device comprising an imaging module, said program module comprising means for finding degradation information of each colour component of the image, obtaining a degradation function according to the degradation information, and restoring said each colour component by said degradation function.
  • other program module for a restoration of an image comprising means for implementing a de-blurring with regularization at each iteration of an iterative restoration.
  • the computer program product comprising instructions for finding degradation information of each colour component of the image, obtaining a degradation function according to the degradation information, and restoring said each colour component by said degradation function.
  • a computer program product for a restoration of an image comprising computer readable instruction for implementing a de-blurring with regularization at each iteration of an iterative restoration.
  • first image model corresponds to such an image, which is already captured with an image sensor, such as a CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor), but not processed in any way.
  • the first image model is raw image data.
  • the second image model is the one for which a degradation information has been determined. It will be appreciated that other sensor types, other than CMOS or CCD can be used with the invention.
  • the first image model is used for determining the blurring of the image, and the second image model is restored according to the invention.
  • the restoration can also be regulated according to the invention. After these steps have been done, other image reconstruction functions can be applied to it. If considering the whole image reconstruction chain, the idea of the invention is to apply the restoration as a pre-processing operation, whereby the following image reconstruction operations will benefit from the restoration. Applying the restoration as a pre-processing operation means that the restoration algorithm is targeted directly to the raw colour image data and in such a manner, that each colour component is handled separately.
  • the blurring caused by optics can be reduced significantly.
  • the procedure is particularly effective if fixed focal length optics is used.
  • the invention is also applicable to varying focal length systems, in which case the processing considers several deblurring functions from a look-up table depending on the focal position of the lenses.
  • the deblurring function can also be obtained through interpolation from look-up tables.
  • One possibility to define the deblurring function is to use continuous calculation, in which focal length is used as a parameter to deblurring function.
  • the resulting images are sharper and have better spatial resolution. It is worth mentioning that the proposed processing is different from traditional sharpening algorithms, which can also result in sharper images with amplified high frequencies.
  • this invention presents a method to revert the degradation process and to minimize blurring, which is caused e.g. by optics, whereas the sharpening algorithms use generic high-pass filters to add artefacts to an image in order to make it look sharper.
  • the model according to the invention is more viable for different types of sensors that can be applied in future products (because of better fidelity to the linear image formation model).
  • the following steps and algorithms of the image reconstruction chain benefit from the increased resolution and contrast of the solution.
  • Applying the image restoration as a pre-processing operation may minimize non-linearities that are accumulated in the image capturing process.
  • the invention also may prevent over-amplification of colour information.
  • the data restoration sharpens the image by iterative inverse filtering.
  • This inverse filtering can be controlled by a controlling method that is also provided by the invention. Due to the controlling method, the iteration is stopped when the image is sharp enough.
  • the controlling method provides a mechanism to process differently the pixels that are at different locations in the image. According to this, the overshooting in the restored image can be reduced, thus giving a better visual quality of the final image.
  • pixels that are located at edges in the observed image are restored differently than the pixels that are located on smooth areas.
  • the controlling method can address the problem of a spatially varying point spread function. For example, if the point spread function of the optical system is different for different pixel coordinates, restoring the image using independent processing of the pixels can solve this problem. Further, the controlling method can be implemented with several de-blurring algorithms in order to improve their performance.
  • the invention can also be applied for restoration of video.
  • FIG. 1 illustrates an example of the system according to the invention
  • FIG. 2 illustrates another example of the system according to the invention
  • FIG. 3 illustrates an example of a device according to the invention
  • FIG. 4 illustrates an example of an arrangement according to the invention
  • FIG. 5 illustrates an example of an iterative restoration method and a controlling method according to the invention.
  • This invention relates to a method for improving image quality of a digital image captured with an imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, the image consisting of at least one colour component.
  • the degradation information of each colour component of the image is found and is used for improving image quality.
  • the degradation information of each colour component is specified by a point-spread function.
  • Each colour component is restored by said degradation function.
  • the image can be unprocessed image data.
  • the invention also relates to several alternatives for implementing the restoration, and for controlling and regularizing the inverse process.
  • the description of the restoration of images according to the invention can be targeted to three main points, wherein at first the blur degradation function is determined, e.g. by measuring a point-spread function (PSF) for at least one raw colour component. Secondly, a restoration algorithm is designed for at least one raw colour component. Thirdly, a regularization mechanism can be integrated to moderate the effect of high pass filtering.
  • the optics in mobile devices are used as an example, because they may generally be limited to a wide focus range. It will, however, be apparent to the man skilled in the art, that the mobile devices are not the only suitable devices.
  • the invention can be utilized by digital cameras, web cameras or similar devices, as well as by high-end applications. The aim of this algorithm is to undo or attenuate a degradation process (blurring) resulting from the optics. Due to the algorithm, the resulting images become sharper and have an improved resolution.
  • colour component relates to various colour systems.
  • the example in this invention is RGB-system (red, green, blue), but a person skilled in the art will appreciate other systems such as HSV (Hue, Saturation, Value), YUV (Luminance, chrominance) or CMYK (Cyan, Magenta, Yellow, Black) etc.
  • the image model in the spatial domain can be described as:
  • g i is a measured colour component image
  • f i is an original colour component
  • h i is a corresponding linear blurring in the colour component
  • n i is an additive noise term.
  • g i , f i , n i are defined over an array of pixels (m, n) spanning the image area
  • h i is defined on the pixels (u, v) spanning blurring (point-spread function) support.
  • FIGS. 1 and 2 each illustrating a block diagram of the image restoration system according to the invention.
  • the procedure for estimating the degradation ( FIG. 1 , 110 ) in the image that has been captured by an optical element ( 100 ) is described next.
  • the degradation can be estimated by means of the point-spread function 210 corresponding to the blur in three colour channels (in this example R, G, B) (raw data).
  • the point-spread functions are used to show different characteristics for each colour channel.
  • the point-spread function is an important criterion that can be used to evaluate the performance of imaging systems.
  • the point-spread function changes as a function of the wavelength and the position in the camera field of view. Because of that, finding a good point-spread function may be difficult. In the description an out-of-focus close range imaging and a space invariant blurring are assumed.
  • the practical procedure for estimating the point-spread function (h i ) that is associated with each colour component, can also be used as stand-alone application to help in the evaluation process of camera systems.
  • the four outer corner points are located manually, and first a rough estimate of the corner positions is determined. The exact locations (at subpixel accuracy) are recalculated again by refining the search within a square window of e.g. 10 ⁇ 10 pixels. Using those corner points, an approximation for the original grid image f i can be reconstructed by averaging the central parts of each square and by asserting a constant luminance value to those squares.
  • the point-spread function is assumed to be space invariant, whereby the blur can be calculated through a pseudo-inverse filtering method (e.g. in Fourier domain). Since the pseudo-inverse technique is quite sensitive to noise, a frequency low-pass filter can be used to limit the noise and the procedure can be applied with several images to obtain an average estimate of the point-spread function. (The normalized cut-off frequency of the mentioned low pass filter is around 0.6, but at least any value from 0.4 to 0.9 may be applicable).
  • S psf describes the extent of the blurring.
  • the channels have different blurring patterns. For example when studying Mirage-1 camera, the obtained S psf values were:
  • the data concerning colour components is measured by a sensor 120 e.g. by Bayer sensor 220 (in FIG. 2 ), like a CMOS or CCD sensor.
  • the colour component can be red (R), green1(G1) blue (B) and green2 (G2) colour components as illustrated in FIG. 2 .
  • Each of these colour “images” is a quarter of the size of the final output image.
  • the second image model is provided to be restored ( 130 ; 250 ).
  • the images are arranged lexicographically into vectors, and the point-spread function h i is arranged into a block-Toeplitz circulant matrix H i .
  • the second image model is then expressed as:
  • H i the purpose of image restoration is to recover the best estimate f i from the degraded observation g i .
  • the blurring function H i is non-invertible (it is already defined on a limited support, so its inverse will have infinite support), so a direct inverse solution is not possible.
  • the classical direct approach to solving the problem considers minimizing the energy between input and simulated re-blurred image, this is given by the norm:
  • $J_{LS} = \| g_i - H_i \hat{f}_i \|^2$   (4)
  • ⁇ max is the largest eigenvalue of the matrix H T H. The iteration continues until the normalized change in energy becomes quite small.
  • the image sensor electronics such as CCD and CMOS sensors
  • the image sensor electronics may introduce non-linearities to the image, of which the saturation is one of the most serious. Due to non-linearities unaccounted for in the image formation model, the separate processing of the colour channels might result in serious false colouring around the edges.
  • the invention introduces an improved regularization mechanism ( FIG. 2 ; 240 ) to be applied to restoration.
  • the pixel areas being saturated or under-exposed are used to devise a smoothly varying coefficient that moderates the effect of high-pass filtering in the surrounding areas.
  • the formulation of the image acquisition process is invariably assumed to be a linear one (1). Due to the sensitivity difference of the three colour channels, and fuzzy exposure controls, pixel saturation can happen incoherently in each of the colour channels.
  • the separate channel restoration near those saturated areas results in over-amplification in that colour component alone, thus creating artificial colour mismatch and false colouring near those regions.
  • a regularization mechanism according to the invention is proposed.
  • the regularization mechanism is integrated in the iterative solution of equation (6).
  • the idea is to spatially adapt ⁇ in order to limit the restoration effect near saturated areas.
  • the adapted step size is given as follows:
  • μ is the global step-size as discussed earlier, and βsat is the local saturation control that modulates the step size.
  • ⁇ sat is obtained using the following algorithm:
  • ⁇ sat varies between 0 and 1 depending on the number of saturated pixels in any of the colour channels.
  • the previous data restoration sharpens the image by iterative inverse filtering.
  • This inverse filtering can be controlled by a controlling method whereby the iteration is stopped when the image is sharp enough.
  • a basic idea of this controlling method is illustrated in FIG. 5 as a block chart.
  • the image is initialized equal with the observed image, and the parameters of the de-blurring algorithm are set up ( 510 ).
  • the de-blurring algorithm is applied to the observed image.
  • This can be any of the existing one pass algorithms such as unsharp masking method, blur domain de-blurring, differential filtering, etc. ( 520 ).
  • the performance of the de-blurring is meaningful at every iteration, because if the de-blurring does not perform well, the overall performance of the system will not be good.
  • the restored image is updated. If a pixel location in the de-blurred image corresponds to an overshoot edge, it is not any further updated in the iterative process. Otherwise, the pixels from the restored image are normally updated. Also, the pixels that correspond to overshooting are marked such that in the next iterations the corresponding restored pixels are unchanged (for those pixels the restoration process is terminated at this point).
  • the intermediate output image is scanned and the pixels that still contain overshooting are detected. If persistent overshooting is detected ( 560 ) the global iterative process is stopped and the restored image is returned. Otherwise the parameters of the de-blurring algorithm are changed ( 570 ) and the next iteration is started with the de-blurring of the observed image.
  • the last procedure ( 560 - 570 ) makes the algorithm suitable for blind deconvolution.
  • the algorithm disclosed here prevents the restored image from overshooting that appears due to over-amplification of edges. This is done in two different ways. First, at each iteration, the pixels are updated separately such that the ones that are degraded are not updated into the restored image. Second, the whole de-blurring process is stopped if there is a pixel in the restored image that is too much degraded. Detailed description of implementation of the de-blurring method is discussed next.
  • the method steps of FIG. 5 are done for one of the colour components R, G, B.
  • the other two components are processed separately exactly in the same manner. If the YUV colour system is used, only component Y needs to be processed.
  • the image is initialized equal with the observed image, and the parameters of the deblurring algorithm are set up.
  • the input observed image is denoted here by I and the final restored image is denoted by Ir.
  • the parameters of the de-blurring method are also initialized. For instance, if the unsharp masking method is used for de-blurring the number of blurred images used and their parameters are chosen. If another algorithm is implemented, its parameters will be set up at this point.
  • a matrix equal in size to the image, with all elements set to one, is initialized. The matrix is denoted by mask.
  • the de-blurring algorithm is applied to the observed image and the de-blurred image Idb is obtained.
  • every pixel from the deblurred image is checked to detect the overshooting such as over-amplified edges.
  • the pixels from the de-blurred image Idb are scanned and the horizontal and vertical differences between adjacent pixels can be computed as follows:
  • the local shape of the de-blurred image is compared with the local shape of the observed image. This is done by comparing the signs of the corresponding differences from the two images in horizontal and also in vertical direction. When a difference in the shape of the two images is found (whether in horizontal or vertical direction), this means that the corresponding pixel from the de-blurred image might be too much emphasized. For those pixels an estimated value of the overshooting is compared with a threshold (th 1 ). If the amount of overshooting is larger than a threshold (th 1 ) the corresponding pixel is marked as distorted (the value of the mask is made equal to zero).
  • the threshold (th1) is defined as a percentage of the maximum value of the pixels in the observed image (the value MAX is the maximum value of I). By choosing this kind of threshold computation, we ensure that the value of the threshold (th1) is adapted to the image range.
  • the restored image is updated.
  • the pixels that form the restored image are simply updated with the pixels from the de-blurred image that were not marked as distorted. This step can be implemented as follows:
  • the intermediate output image is scanned and the pixels that still contain overshooting are detected.
  • the horizontal and vertical differences between adjacent pixels can be computed as follows:
  • the local shapes of the two images are compared.
  • the overshooting in the restored image is estimated by taking the minimum of the absolute value of the two adjacent differences. This is computed on both vertical and horizontal directions.
  • in step 560 the overshooting is checked. If the maximum overshooting is larger than a predefined threshold, the restoration procedure is stopped and the restored image Ir is returned at the output. If there is no pixel in the restored image that has overshooting larger than the threshold, the parameters of the de-blurring method are changed and the procedure continues from step 520 (a hedged code sketch of this controlling loop is given at the end of this list).
  • This step can be implemented as follows:
  • the threshold th2 for overshooting detection is defined as a percentage of the maximum pixel value of the original image I.
  • the regularization method ( 530 , 550 and 560 from FIG. 5 ) can also be combined with the above described iterative restoration algorithm from equation (6).
  • Other non-iterative restoration algorithms such as high pass filtering can be implemented in an iterative manner following the above method with local and global regularization.
  • the local and global regularizations defined above can be applied together or separately also to some other iterative restoration techniques.
  • restoration as the first operation in the reconstruction chain ensures the best fidelity to the assumed linear imaging model.
  • the following algorithms, especially the colour filter array interpolation and the noise reduction algorithms act as an additional regularization mechanism to prevent over amplification due to excessive restoration.
  • the system according to the invention can be arranged into a device such as a mobile terminal, a web cam, a digital camera or other digital device for imaging.
  • the system can be a part of digital signal processing in camera module to be installed into one of said devices.
  • One example of the device is an imaging mobile terminal as illustrated as a simplified block chart in FIG. 3 .
  • the device 300 comprises optics 310 for capturing images, or a similar device that can operatively communicate with the optics, or a digital camera for capturing images.
  • the device 300 can also comprise a communication means 320 having a transmitter 321 and a receiver 322 .
  • the first communicating means 320 can be adapted for telecommunication and the other communicating means 380 can be a kind of short-range communicating means, such as a Bluetooth™ system, a WLAN (Wireless Local Area Network) system or another system which suits local use and communicating with another device.
  • the device 300 according to the FIG. 3 also comprises a display 340 for displaying visual information.
  • the device 300 comprises a keypad 350 for inputting data, for controlling the image capturing process etc.
  • the device 300 can also comprise audio means 360 , such as an earphone 361 and a microphone 362 and optionally a codec for coding (and decoding, if needed) the audio information.
  • the device 300 also comprises a control unit 330 for controlling functions in the device 300 , such as the restoration algorithm according to the invention.
  • the control unit 330 may comprise one or more processors (CPU, DSP).
  • the device further comprises memory 370 for storing data, programs etc.
  • the imaging module comprises imaging optics and image sensor and means for finding degradation information of each colour component and using said degradation information for determining a degradation function, and further means for restoring said each colour component by said degradation function.
  • This imaging module can be arranged into the device being described previously.
  • the imaging module can be also arranged into a stand-alone device 410 , as illustrated in FIG. 4 , communicating with an imaging device 400 and with a displaying device, which displaying device can be also said imaging device 400 or some other device, like a personal computer.
  • Said stand-alone device 410 comprises a restoration module 411 and optionally other imaging module 412 and it can be used for image reconstruction independently.
  • the communication between the imaging device 400 and the stand-alone device 410 can be handled by a wired or wireless network. Examples of such networks are Internet, WLAN, Bluetooth, etc.
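  • The controlling method outlined in the items above (steps 510-570 of FIG. 5) can be sketched in code as follows. The unsharp-mask de-blurring step, the difference-based overshoot estimate and the threshold values below are assumptions made for illustration, since the exact formulas are not reproduced in this text; only the overall structure (cumulative per-pixel freezing mask, de-blurring of the observed image with changed parameters, global stop on persistent overshoot) follows the description.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def deblur_once(img, amount, sigma=1.5):
    """One-pass de-blurring step (unsharp masking is used here for illustration)."""
    return img + amount * (img - gaussian_filter(img, sigma))

def overshoot_mask(deblurred, observed, th):
    """Mark pixels whose local shape (sign of adjacent differences) disagrees with
    the observed image and whose estimated overshoot exceeds the threshold th."""
    bad = np.zeros(observed.shape, dtype=bool)
    for axis in (0, 1):                               # vertical and horizontal directions
        d_db = np.diff(deblurred, axis=axis)
        d_ob = np.diff(observed, axis=axis)
        flag = (np.sign(d_db) != np.sign(d_ob)) & (np.abs(d_db - d_ob) > th)
        if axis == 0:
            bad[:-1, :] |= flag                       # attribute the flag to the upper pixel
        else:
            bad[:, :-1] |= flag                       # attribute the flag to the left pixel
    return bad

def controlled_restore(observed, amount=0.5, th1_pct=0.05, th2_pct=0.20,
                       step=0.25, max_iter=10):
    """Iterative restoration with the controlling method: frozen-pixel mask
    plus a global stop when persistent overshooting appears."""
    observed = observed.astype(float)
    restored = observed.copy()                        # step 510: initialise with observed image
    mask = np.ones(observed.shape, dtype=bool)        # unity mask (pixels still allowed to update)
    th1 = th1_pct * observed.max()
    th2 = th2_pct * observed.max()
    for _ in range(max_iter):
        deblurred = deblur_once(observed, amount)                 # step 520
        mask &= ~overshoot_mask(deblurred, observed, th1)         # step 530: freeze overshooting pixels
        restored[mask] = deblurred[mask]                          # step 540: update remaining pixels
        if overshoot_mask(restored, observed, th2).any():         # steps 550-560: persistent overshoot?
            break                                                 # stop and return the restored image
        amount += step                                            # step 570: change de-blurring parameters
    return restored
```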

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)

Abstract

This invention relates to a method for improving image quality of a digital image captured with an imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, the image consisting of at least one colour component. In the method, the degradation information of each colour component of the image is found and is used for improving image quality. The degradation information of each colour component is specified by a point-spread function. Each colour component is restored by the degradation function. The image can be unprocessed image data. The invention also relates to several alternatives for implementing the restoration, and for controlling and regularizing the inverse process independently of the image degradation. The invention also relates to a device, to a module, to a system, to computer program products and to program modules.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is for entry into the U.S. national phase under §371 for International Application No. PCT/FI05/050001 having an international filing date of Jan. 4, 2005, and from which priority is claimed under all applicable sections of Title 35 of the United States Code including, but not limited to, Sections 120, 363 and 365(c). This application is also a continuation-in-part of U.S. patent application Ser. No. 10/888,534, filed on Jul. 9, 2004, from which domestic priority is claimed.
  • FIELD OF THE INVENTION
  • This invention relates to image processing and particularly to a restoration of colour components in a system for storage or acquisition of digital images.
  • BACKGROUND OF THE INVENTION
  • Blurring or degradation of an image can be caused by various factors, e.g. out-of-focus optics, or any other aberrations that result from the use of a wide-angle lens, or the combination of inadequate aperture value, focal length and lens positioning. During the image capture process, when long exposure times are used, the movement of the camera or the imaged subject can result in motion blurring of the picture. Also, when a short exposure time is used, the number of photons being captured is reduced, which results in high noise levels as well as poor contrast in the captured image.
  • Various methods for reconstructing images that contain defects are known from related art. A defect block in the image can be replaced with the average of some or all of the surrounding blocks. One example is to use three blocks that are situated above the defect. Further, there is a method called “best neighbours matching”, which restores images by taking a sliding block the same size as the defect region and moving it through the image. At each position, except for ones where the sliding block overlaps the defect, the pixels around the border of the sliding block are placed in a vector. The pixel values around the border of the defect are placed in another vector and the mean squared error between them is computed. The defect region is then replaced by the block that has the lowest border-pixel mean squared error.
  • For example spatial error concealment techniques attempt to hide a defect by forming a good reconstruction of the missing or corrupted pixels. One of the methods is to find a mean of the pixels in an area surrounding the defect and to replace the defect with the mean pixel value. A requirement for the variance of the reconstruction can be added to equal the variance of the area around the defect.
  • Different interpolation methods can also be used in the image reconstruction process. For example a bilinear interpolation can be applied to pixels on four corners of the defect rectangle. This makes a linear, smooth transition of pixel values across the defect area. Bilinear interpolation is defined by the pixel value being reconstructed, pixels at corners of the reconstructed pixel and a horizontal and vertical distance from the reconstructed pixel to the corner pixels. Another method is edge-sensitive nonlinear filtering, which interpolates missing samples in an image.
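  • As an illustration of the bilinear corner interpolation described above, the following sketch fills an axis-aligned rectangular defect from the four valid pixels just outside its corners. The function name, the rectangle parameters and the exact weighting scheme are illustrative assumptions, not taken from the text.

```python
import numpy as np

def fill_defect_bilinear(img, top, left, height, width):
    """Fill a rectangular defect by bilinearly blending the four valid
    pixels located just outside the defect corners (illustrative sketch)."""
    out = np.asarray(img, dtype=float).copy()
    tl = out[top - 1, left - 1]            # above-left of the defect
    tr = out[top - 1, left + width]        # above-right
    bl = out[top + height, left - 1]       # below-left
    br = out[top + height, left + width]   # below-right

    for r in range(height):
        for c in range(width):
            v = (r + 1) / (height + 1)     # vertical weight towards the bottom
            u = (c + 1) / (width + 1)      # horizontal weight towards the right
            out[top + r, left + c] = ((1 - v) * (1 - u) * tl + (1 - v) * u * tr
                                      + v * (1 - u) * bl + v * u * br)
    return out
```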
  • The purpose of image restoration is to remove those degradations so that the restored images look as close as possible to the original scene. In general, if the degradation process is known, the restored image can be obtained as the inverse process of the degradation. Several methods to solve this inverse mathematical problem are known from the prior art. However, most of these techniques do not consider the image reconstruction process in the modelling of the problem, and assume simplistic linear models. Typically, the solutions in implementations are quite complicated and computationally demanding.
  • Image restoration generally involves two important steps, the deblurring and noise filtering steps. Some approaches for deblurring are known from related art. These approaches can be categorized into non-iterative and iterative techniques. In the non-iterative methods, the solution is obtained through a one pass processing algorithm, e.g. Laplacian high pass filtering, unsharp masking, or frequency domain inverse filtering. In the iterative methods, the result is refined during several processing passes. The de-blurring process is controlled by a cost function that sets the criteria for the refining process, e.g. Least Squares method or adaptive Landweber algorithm. Usually, after a few iterations, there is not much improvement between adjacent steps. The continuation of the de-blurring algorithm beyond a certain point might introduce annoying artefacts into the restored image, such as e.g. overshooting of the edges due to over-emphasis of the details or even false colouring. Another approach to solve the de-blurring problem is to apply iteratively a one step de-blurring method with varying parameters and the best result is kept (blind deconvolution).
  • The methods from related art are typically applied in restoration of images in high-end applications such as astronomy and medical imaging. Their use in consumer products is limited, due to the difficulty of quantifying the image gathering process and the typical complexity and computational power needed to implement these algorithms. Some of the approaches have been used in devices that have limited computational and memory resources. The methods from the related art are typically designed as a post-processing operation, which means that the restoration is applied to the image after it has been acquired and stored. Each colour component has a different point-spread function, which is an important criterion that can be used to evaluate the performance of imaging systems. If the restoration is applied as post-processing, the information about the different blurring in each colour component is not relevant anymore. The exact modelling of the image acquisition process is more difficult and (in most cases) is not linear, so the “inverse” solution is less precise. Most often, the output of digital cameras is compressed to JPEG format. If the restoration is applied after the compression (which is typically lossy), the result can amplify unwanted blocking artefacts.
  • SUMMARY OF THE INVENTION
  • The aim of this invention is to provide an improved way to restore images. This can be achieved by a method, a model, use of a model, a de-blurring method, a device, a module, a system, program modules and computer program products.
  • According to the present invention, a method for forming a model for improving image quality of a digital image captured with an imaging module is provided, the imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, said image consisting of at least one colour component, wherein degradation information of each colour component is found, an image degradation function is obtained and said each colour component is restored by said degradation function.
  • According to present invention also the model for improving image quality of a digital image is provided, said model being obtainable by a claimed method. According to the present invention also use of the model is provided.
  • Further according to present invention the method for improving image quality of a digital image captured with an imaging module comprising at least imaging optics and an image sensor is provided, where the image is formed through the imaging optics, said image consisting at least of one colour component, wherein degradation information of each colour component of the image is found, a degradation function is obtained according to the degradation information and said each colour component is restored by said degradation function.
  • Further according to present invention a method for restoration of an image is provided, wherein the restoration is implemented by an iterative restoration function where at each iteration a de-blurring method with regularization is implemented.
  • Further according to present invention a system for determining a model for improving image quality of a digital image with an imaging module is provided, said module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, said image consisting of at least one colour component, wherein the system comprises first means for finding degradation information of each colour component of the image, second means for obtaining a degradation function according to the degradation information, and third means for restoring said each colour component by said degradation function.
  • Further according to present invention the imaging module is provided, comprising imaging optics and an image sensor for forming an image through the imaging optics onto the light sensitive image sensor wherein a model for improving image quality is related to said imaging module. Further according to present invention a device comprising an imaging module is provided.
  • In addition, according to present invention the program module for improving an image quality in a device is provided, comprising an imaging module, said program module comprising means for finding degradation information of each colour component of the image, obtaining a degradation function according to the degradation information, and restoring said each colour component by said degradation function. According to present invention also other program module for a restoration of an image is provided, comprising means for implementing a de-blurring with regularization at each iteration of an iterative restoration.
  • Further the computer program product is provided, comprising instructions for finding degradation information of each colour component of the image, obtaining a degradation function according to the degradation information, and restoring said each colour component by said degradation function. According to the present invention also a computer program product for a restoration of an image is provided, comprising computer readable instruction for implementing a de-blurring with regularization at each iteration of an iterative restoration.
  • Other features of the invention are described in appended dependent claims.
  • In the description, the term “first image model” corresponds to an image which has already been captured with an image sensor, such as a CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor, but has not been processed in any way. The first image model is raw image data. The second image model is the one for which degradation information has been determined. It will be appreciated that sensor types other than CMOS or CCD can be used with the invention.
  • The first image model is used for determining the blurring of the image, and the second image model is restored according to the invention. The restoration can also be regulated according to the invention. After these steps have been done, other image reconstruction functions can be applied to it. If considering the whole image reconstruction chain, the idea of the invention is to apply the restoration as a pre-processing operation, whereby the following image reconstruction operations will benefit from the restoration. Applying the restoration as a pre-processing operation means that the restoration algorithm is targeted directly to the raw colour image data and in such a manner, that each colour component is handled separately.
  • With the invention the blurring caused by optics can be reduced significantly. The procedure is particularly effective if fixed focal length optics is used. The invention is also applicable to varying focal length systems, in which case the processing considers several deblurring functions from a look-up table depending on the focal position of the lenses. The deblurring function can also be obtained through interpolation from look-up tables. One possibility to define the deblurring function is to use continuous calculation, in which the focal length is used as a parameter of the deblurring function. The resulting images are sharper and have better spatial resolution. It is worth mentioning that the proposed processing is different from traditional sharpening algorithms, which can also result in sharper images with amplified high frequencies. In fact, this invention presents a method to revert the degradation process and to minimize blurring, which is caused e.g. by optics, whereas the sharpening algorithms use generic high-pass filters to add artefacts to an image in order to make it look sharper.
  • The model according to the invention is more viable for different types of sensors that can be applied in future products (because of better fidelity to the linear image formation model). In the current approach, the following steps and algorithms of the image reconstruction chain benefit from the increased resolution and contrast of the solution.
  • Applying the image restoration as a pre-processing operation may minimize non-linearities that are accumulated in the image capturing process. The invention also may prevent over-amplification of colour information.
  • The data restoration sharpens the image by iterative inverse filtering. This inverse filtering can be controlled by a controlling method that is also provided by the invention. Due to the controlling method, the iteration is stopped when the image is sharp enough. The controlling method provides a mechanism to process differently the pixels that are at different locations in the image. According to this, the overshooting in the restored image can be reduced, thus giving a better visual quality of the final image. In addition, pixels that are located at edges in the observed image are restored differently than the pixels that are located on smooth areas. The controlling method can address the problem of a spatially varying point spread function. For example, if the point spread function of the optical system is different for different pixel coordinates, restoring the image using independent processing of the pixels can solve this problem. Further, the controlling method can be implemented with several de-blurring algorithms in order to improve their performance.
  • The invention can also be applied for restoration of video.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention is illustrated with reference to examples in accompanying drawings and following description.
  • FIG. 1 illustrates an example of the system according to the invention,
  • FIG. 2 illustrates another example of the system according to the invention,
  • FIG. 3 illustrates an example of a device according to the invention,
  • FIG. 4 illustrates an example of an arrangement according to the invention, and
  • FIG. 5 illustrates an example of an iterative restoration method and a controlling method according to the invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • This invention relates to a method for improving image quality of a digital image captured with an imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, the image consisting of at least one colour component. In the method, the degradation information of each colour component of the image is found and is used for improving image quality. The degradation information of each colour component is specified by a point-spread function. Each colour component is restored by said degradation function. The image can be unprocessed image data. The invention also relates to several alternatives for implementing the restoration, and for controlling and regularizing the inverse process.
  • The description of the restoration of images according to the invention can be targeted to three main points, wherein at first the blur degradation function is determined, e.g. by measuring a point-spread function (PSF) for at least one raw colour component. Secondly, a restoration algorithm is designed for at least one raw colour component. Thirdly, a regularization mechanism can be integrated to moderate the effect of high pass filtering. In the description the optics in mobile devices are used as an example, because they may generally be limited to a wide focus range. It will, however, be apparent to the man skilled in the art that mobile devices are not the only suitable devices. For example, the invention can be utilized by digital cameras, web cameras or similar devices, as well as by high-end applications. The aim of this algorithm is to undo or attenuate a degradation process (blurring) resulting from the optics. Due to the algorithm, the resulting images become sharper and have an improved resolution.
  • Wherever a term “colour component” is used, it relates to various colour systems. The example in this invention is RGB-system (red, green, blue), but a person skilled in the art will appreciate other systems such as HSV (Hue, Saturation, Value), YUV (Luminance, chrominance) or CMYK (Cyan, Magenta, Yellow, Black) etc.
  • The image model in the spatial domain can be described as:

  • $g_i(m,n) = h_i(u,v) * f_i(m,n) + n_i(m,n)$   (1)
  • where g_i is a measured colour component image, f_i is an original colour component, h_i is the corresponding linear blurring in that colour component and n_i is an additive noise term. g_i, f_i and n_i are defined over an array of pixels (m, n) spanning the image area, whereas h_i is defined on the pixels (u, v) spanning the blurring (point-spread function) support. The index i = {1, 2, 3, 4} denotes respectively the data concerning the colour components, such as the red, green1, blue and green2 colour components.
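  • The degradation model of equation (1) can be simulated per colour component as a convolution with the blur kernel plus additive noise. The sketch below is a minimal illustration of that model; the use of scipy's convolve2d, the symmetric boundary handling and the noise level are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import convolve2d

def degrade(f_channels, psfs, noise_sigma=2.0, seed=0):
    """Simulate g_i = h_i * f_i + n_i (eq. 1) for each colour component.
    f_channels and psfs are sequences indexed by colour component i."""
    rng = np.random.default_rng(seed)
    g_channels = []
    for f_i, h_i in zip(f_channels, psfs):
        h_i = h_i / h_i.sum()                                   # normalised blur kernel
        blurred = convolve2d(f_i, h_i, mode='same', boundary='symm')
        g_channels.append(blurred + rng.normal(0.0, noise_sigma, f_i.shape))
    return g_channels
```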
  • The invention is described in more detail by means of FIGS. 1 and 2 each illustrating a block diagram of the image restoration system according to the invention.
  • Blur Specification
  • The procedure for estimating the degradation (FIG. 1, 110) in the image that has been captured by an optical element (100) is described next. As can be seen in FIG. 2, the degradation can be estimated by means of the point-spread function 210 corresponding to the blur in three colour channels (in this example R, G, B) (raw data). The point-spread functions are used to show different characteristics for each colour channel. The point-spread function is an important criterion that can be used to evaluate the performance of imaging systems.
  • The point-spread function changes as a function of the wavelength and the position in the camera field of view. Because of that, finding a good point-spread function may be difficult. In the description, out-of-focus close-range imaging and space-invariant blurring are assumed. The practical procedure for estimating the point-spread function (h_i) that is associated with each colour component can also be used as a stand-alone application to help in the evaluation process of camera systems.
  • Given a blurred image corresponding to one colour component of a checker-board pattern, the four outer corner points are located manually, and first a rough estimate of the corner positions is determined. The exact locations (at subpixel accuracy) are recalculated again by refining the search within a square window of e.g. 10×10 pixels. Using those corner points, an approximation for the original grid image fi can be reconstructed by averaging the central parts of each square and by asserting a constant luminance value to those squares.
  • The point-spread function is assumed to be space invariant, whereby the blur can be calculated through a pseudo-inverse filtering method (e.g. in Fourier domain). Since the pseudo-inverse technique is quite sensitive to noise, a frequency low-pass filter can be used to limit the noise and the procedure can be applied with several images to obtain an average estimate of the point-spread function. (The normalized cut-off frequency of the mentioned low pass filter is around 0.6, but at least any value from 0.4 to 0.9 may be applicable).
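  • A rough sketch of the pseudo-inverse PSF estimation described above is given below, assuming the blurred image g_i and the reconstructed reference image f_i are already available and the blur is space invariant. The regularisation constant, the ideal low-pass filter shape and the cropping of the PSF support are illustrative choices, not the patent's exact procedure.

```python
import numpy as np

def estimate_psf(g_i, f_i, cutoff=0.6, eps=1e-3, psf_size=15):
    """Estimate a space-invariant PSF from a blurred image g_i and the
    reconstructed reference image f_i via pseudo-inverse (Fourier) filtering."""
    G = np.fft.fft2(g_i)
    F = np.fft.fft2(f_i)
    H = G * np.conj(F) / (np.abs(F) ** 2 + eps)       # regularised pseudo-inverse

    # Ideal circular low-pass to limit noise amplification
    # (normalised cut-off around 0.6, as suggested in the text).
    fy = np.fft.fftfreq(g_i.shape[0]) * 2.0
    fx = np.fft.fftfreq(g_i.shape[1]) * 2.0
    FX, FY = np.meshgrid(fx, fy)
    H[np.sqrt(FX ** 2 + FY ** 2) > cutoff] = 0.0

    h = np.fft.fftshift(np.real(np.fft.ifft2(H)))     # centre the PSF estimate
    cy, cx = h.shape[0] // 2, h.shape[1] // 2
    r = psf_size // 2
    h = np.clip(h[cy - r:cy + r + 1, cx - r:cx + r + 1], 0, None)
    return h / max(h.sum(), 1e-12)                    # normalise the cropped PSF
```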
  • In order to quantify the extent of blur that occurs with each colour channel, a simple statistic is defined, determined as the mean of the weighted distance from the centre of the function (in pixels), the weight corresponding to the value of the normalized point-spread function at that point:
  • $S_{psf}(h_i) = \dfrac{\sum_{m=0}^{M_1} \sum_{n=0}^{N_1} \sqrt{m^2 + n^2}\, h_i(m,n)}{\sum_{m,n} h_i(m,n)}$   (2)
  • wherein M1 and N1 define the support of the point-spread function filter. S_psf describes the extent of the blurring. Experiments confirm that the channels have different blurring patterns. For example, when studying a Mirage-1 camera, the obtained S_psf values were:
  • $S_{psf}(h_i) = \begin{cases} 5.42, & i = 1 \text{ (red)} \\ 5.01, & i = 2 \text{ (green)} \\ 4.46, & i = 3 \text{ (blue)} \end{cases}$
  • It can be seen from the results that the red component was the most blurred and noisy, whereas the least blurred was the blue component, which also had the least contrast.
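  • The blur-extent statistic of equation (2) can be computed directly from a PSF estimate. In the sketch below the distances are taken relative to the centre of the PSF support, following the textual description of a weighted mean distance from the centre; this indexing convention is an assumption.

```python
import numpy as np

def s_psf(h_i):
    """Blur-extent statistic of eq. (2): mean distance from the PSF centre,
    weighted by the normalised point-spread function."""
    h = np.asarray(h_i, dtype=float)
    h = h / h.sum()                                   # normalise the PSF
    rows, cols = np.indices(h.shape)
    cy, cx = (h.shape[0] - 1) / 2.0, (h.shape[1] - 1) / 2.0
    dist = np.sqrt((rows - cy) ** 2 + (cols - cx) ** 2)
    return float((dist * h).sum())
```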
  • Restoration Algorithm
  • The data concerning the colour components is measured by a sensor 120, e.g. by a Bayer sensor 220 (in FIG. 2), such as a CMOS or CCD sensor. The colour components can be red (R), green1 (G1), blue (B) and green2 (G2), as illustrated in FIG. 2. Each of these colour “images” is a quarter of the size of the final output image.
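  • For illustration, the four quarter-size colour planes can be separated from the raw Bayer mosaic by simple subsampling. The RGGB arrangement assumed below is only an example; the actual colour filter layout depends on the sensor.

```python
import numpy as np

def split_bayer(raw):
    """Split a raw Bayer mosaic into quarter-size colour planes.
    An RGGB layout is assumed here purely for illustration:
        R  G1
        G2 B
    """
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return r, g1, g2, b
```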
  • The second image model is provided to be restored (130; 250). The images are arranged lexicographically into vectors, and the point-spread function h_i is arranged into a block-Toeplitz circulant matrix H_i. The second image model is then expressed as:

  • $g_i = H_i f_i + \eta_i$   (3)
  • Having a reasonable approximation of H_i, the purpose of image restoration is to recover the best estimate of f_i from the degraded observation g_i. The blurring function H_i is non-invertible (it is defined on a limited support, so its inverse would have infinite support), so a direct inverse solution is not possible. The classical direct approach to solving the problem considers minimizing the energy between the input and the simulated re-blurred image, which is given by the norm:

  • JLS = ∥gi − Hi f̂i∥²   (4)
  • thus providing a least squares fit to the data. Minimizing this norm also gives the maximum-likelihood solution when the noise is known to be Gaussian, and it leads to the generalized inverse filter, which is given by:

  • (HᵀH) f̂i = Hᵀ gi   (5)
  • In order to solve this, it is common to use deterministic iterative techniques based on the method of successive approximations, which leads to the following iteration:
  • f̂i^(0) = μ Hᵀ gi,   f̂i^(k+1) = f̂i^(k) + μ Hᵀ (gi − H f̂i^(k))   (6)
  • This iteration converges if
  • 0 < μ < 2/λmax,
  • where λmax is the largest eigenvalue of the matrix HᵀH. The iteration continues until the normalized change in energy becomes sufficiently small.
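  • A minimal sketch of this iteration (assuming NumPy and SciPy; the blur is applied by convolution with the kernel hi instead of an explicit block-Toeplitz matrix, and the function name and stopping tolerance are illustrative assumptions):

    import numpy as np
    from scipy.signal import fftconvolve

    def restore_channel(g, h, mu, max_iter=50, tol=1e-4):
        # Iterative solution of equation (6); H^T corresponds to correlation
        # with h, i.e. convolution with the flipped kernel.
        h_flip = h[::-1, ::-1]
        f = mu * fftconvolve(g, h_flip, mode='same')          # f^(0) = mu * H^T g
        for _ in range(max_iter):
            residual = g - fftconvolve(f, h, mode='same')     # g - H f^(k)
            f_new = f + mu * fftconvolve(residual, h_flip, mode='same')
            # Stop when the normalized change in energy becomes small.
            if np.linalg.norm(f_new - f) / max(np.linalg.norm(f), 1e-12) < tol:
                f = f_new
                break
            f = f_new
        return f

  • Note that for a non-negative point-spread function normalized to unit sum, λmax of HᵀH equals 1, so any step size 0 < μ < 2 satisfies the convergence condition above.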
  • It can be seen from FIGS. 1 and 2 that the restoration (130; 250) is made separately for each of the colour components R, G, B.
  • The main advantages of iterative techniques are that there is no need to explicitly implement the inverse of the blurring operator and that the restoration process can be monitored as it progresses.
  • The least squares approach can be extended to the constrained least squares (CLS) technique. Theoretically, the problem of image restoration is ill-posed, i.e. a small perturbation in the output, for example noise, can result in an unbounded perturbation of the direct least squares solution presented above. For this reason, the constrained least squares method is usually considered in the literature. These algorithms minimize the term in equation (4) subject to a (smoothness) regularization term, which consists of a high-pass filtered version of the output. The regularization term permits the inclusion of prior information about the image.
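  • For reference, a common form of this constrained criterion (a standard textbook formulation given here for orientation, not quoted from the description) is JCLS = ∥gi − Hi f̂i∥² + α∥C f̂i∥², where C is a high-pass operator such as a discrete Laplacian and the regularization parameter α balances data fidelity against the smoothness prior.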
  • One Example of a Regularization Mechanism
  • In practice, the image sensor electronics, such as CCD and CMOS sensors, may introduce non-linearities into the image, of which saturation is one of the most serious. Due to non-linearities unaccounted for in the image formation model, the separate processing of the colour channels might result in serious false colouring around the edges. Hence the invention introduces an improved regularization mechanism (FIG. 2; 240) to be applied to the restoration. The pixel areas that are saturated or under-exposed are used to devise a smoothly varying coefficient that moderates the effect of high-pass filtering in the surrounding areas. The formulation of the image acquisition process is invariably assumed to be a linear one (1). Due to the sensitivity difference of the three colour channels, and fuzzy exposure controls, pixel saturation can happen incoherently in each of the colour channels. Separate channel restoration near those saturated areas results in over-amplification in that colour component alone, thus creating an artificial colour mismatch and false colouring near those regions. To avoid this, a regularization mechanism according to the invention is proposed. The regularization mechanism is integrated in the iterative solution of equation (6). The idea is to spatially adapt μ in order to limit the restoration effect near saturated areas. The adapted step size is given as follows:

  • μadap(m,n) = βsat(m,n)·μ   (9)
  • where μ is the global step-size as discussed earlier, and βsat is the local saturation control that modulates the step size. βsat is obtained using the following algorithm:
      • for each colour channel image gi, i={1 . . . 4},
      • consider the values of the window (w×w) surrounding the pixel location gi(m, n),
      • count the number of saturated pixels Si(m,n) in that window.
      • The saturation control is given by the following equation:
  • βsat(m,n) = max( 0, ( w² − Σi=1…4 Si(m,n) ) / w² ).
  • βsat varies between 0 and 1 depending on the number of saturated pixels in any of the colour channels.
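  • A compact sketch of this saturation control (assuming NumPy and SciPy; the window size w, the saturation level and the function name are illustrative assumptions) counts the saturated pixels of all four channels inside the local window and derives βsat and the adapted step size of equation (9):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def saturation_control(channels, w=7, sat_level=255):
        # channels: the four colour channel images g_i (equal size).
        # S_i(m,n): number of saturated pixels of channel i inside the
        # w-by-w window centred at (m,n); summed here over all channels.
        total = np.zeros_like(channels[0], dtype=float)
        for g in channels:
            sat = (g >= sat_level).astype(float)
            total += uniform_filter(sat, size=w) * (w * w)
        # beta_sat(m,n) = max(0, (w^2 - sum_i S_i(m,n)) / w^2), in [0, 1].
        return np.maximum(0.0, (w * w - total) / (w * w))

    # Spatially adapted step size of equation (9), with gR, gG1, gB, gG2 as
    # hypothetical channel images:
    # mu_adap = saturation_control([gR, gG1, gB, gG2]) * mu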
  • Another Example of an Iterative Restoration Method and a Regularization Mechanism
  • The previously described data restoration sharpens the image by iterative inverse filtering. This inverse filtering can be controlled by a controlling method whereby the iteration is stopped when the image is sharp enough. The basic idea of this controlling method is illustrated in FIG. 5 as a block chart. At the beginning of the method the image is initialized to be equal to the observed image, and the parameters of the de-blurring algorithm are set up (510). After this, the de-blurring algorithm is applied to the observed image. This can be any existing one-pass algorithm, such as the unsharp masking method, blur-domain de-blurring, differential filtering, etc. (520). The de-blurring matters at every iteration, because if the de-blurring step does not perform well, the overall performance of the system will suffer. In the next step (530), pixels from the de-blurred image are checked to detect overshooting such as over-amplified edges. In the following step (540) the restored image is updated. If a pixel location in the de-blurred image corresponds to an overshot edge, it is not updated further in the iterative process. Otherwise, the pixels of the restored image are updated normally. Also, the pixels that correspond to overshooting are marked so that in the next iterations the corresponding restored pixels remain unchanged (for those pixels the restoration process is terminated at this point). In the next step (550) the intermediate output image is scanned and the pixels that still contain overshooting are detected. If persistent overshooting is detected (560), the global iterative process is stopped and the restored image is returned. Otherwise the parameters of the de-blurring algorithm are changed (570) and the next iteration is started with the de-blurring of the observed image. The last procedure (560-570) makes the algorithm suitable for blind deconvolution. The algorithm disclosed here prevents the restored image from overshooting that appears due to over-amplification of edges. This is done in two different ways. First, at each iteration, the pixels are updated separately so that those that are degraded are not copied into the restored image. Second, the whole de-blurring process is stopped if there is a pixel in the restored image that is degraded too much. A detailed description of the implementation of the de-blurring method is given next.
  • The method steps of FIG. 5 are done for one of the colour components R, G, B. The other two components are processed separately exactly in the same manner. If the YUV colour system is used, only component Y needs to be processed.
  • At the step 510 the image is initialized to be equal to the observed image, and the parameters of the de-blurring algorithm are set up. The input observed image is denoted here by I and the final restored image by Ir. The restored image Ir is initialized with I (Ir=I) at the beginning. The parameters of the de-blurring method are also initialized. For instance, if the unsharp masking method is used for de-blurring, the number of blurred images used and their parameters are chosen. If another algorithm is implemented, its parameters are set up at this point. A matrix equal in size to the image, with all elements equal to one, is initialized; this matrix is denoted by mask.
  • At the step 520 the de-blurring algorithm is applied to the observed image and the de-blurred image Idb is obtained. At the step 530 every pixel of the de-blurred image is checked to detect overshooting such as over-amplified edges. The pixels of the de-blurred image Idb are scanned and the horizontal and vertical differences between adjacent pixels are computed as follows:

  • dh1(x,y)=Idb(x, y)−Idb(x, y−1)

  • dh2(x,y)=Idb(x, y)−Idb(x, y+1)

  • dv1(x,y)=Idb(x, y)−Idb(x−1, y)

  • dv2(x,y)=Idb(x, y)−Idb(x+1, y)
  • where x and y represent the vertical and horizontal pixel coordinates, respectively. The pixels of the observed image are also scanned and the horizontal and vertical differences between adjacent pixels are computed as follows:

  • dh3(x,y)=I(x, y)−I(x, y−1)

  • dh4(x,y)=I(x, y)−I(x, y+1)

  • dv3(x,y)=I(x, y)−I(x−1, y)

  • dv4(x,y)=I(x, y)−I(x+1, y)
  • For every pixel of the de-blurred image it is checked whether the signs of the corresponding differences dh1 and dh3, dh2 and dh4, dv1 and dv3, and dv2 and dv4 differ. If they differ, the pixel at coordinates x, y contains overshooting. This check can be carried out by the following algorithm:
  • mh=1; mv=1;
    if NOT[sign(dh1(x,y))=sign(dh3(x,y))] OR NOT[sign(dh2(x,y))=sign(dh4(x,y))]
        if [abs(dh1(x,y))>=th1*MAX] AND [abs(dh2(x,y))>=th1*MAX]
            mh=0;
        end
    end
    if NOT[sign(dv1(x,y))=sign(dv3(x,y))] OR NOT[sign(dv2(x,y))=sign(dv4(x,y))]
        if [abs(dv1(x,y))>=th1*MAX] AND [abs(dv2(x,y))>=th1*MAX]
            mv=0;
        end
    end
    if (mh=0) OR (mv=0)
        mask(x,y)=0;
    end
  • Basically, the idea is that for every pixel the local shape of the de-blurred image is compared with the local shape of the observed image. This is done by comparing the signs of the corresponding differences of the two images in the horizontal and vertical directions. When a difference in the shape of the two images is found (in either direction), the corresponding pixel of the de-blurred image might be over-emphasized. For those pixels an estimated value of the overshooting is compared with a threshold (th1). If the amount of overshooting is larger than the threshold (th1), the corresponding pixel is marked as distorted (the value of the mask is set to zero).
  • The threshold (th1) is defined as a percentage of the maximum pixel value of the observed image (the value MAX is the maximum value of I). This choice of threshold ensures that th1 is adapted to the image range.
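  • The per-pixel checks of step 530 can also be written in vectorized form (a sketch assuming NumPy; only interior pixels are handled, the helper name is illustrative, and th1 is the threshold discussed above):

    import numpy as np

    def overshoot_mask(I, Idb, th1):
        # Step 530: mark pixels of the de-blurred image Idb whose local shape
        # differs from that of the observed image I and whose over-amplification
        # exceeds th1*MAX. Returns a mask with 0 at distorted pixels, 1 elsewhere.
        I = I.astype(float)
        Idb = Idb.astype(float)
        MAX = I.max()
        mask = np.ones(I.shape, dtype=np.uint8)
        # Differences dh1, dh2, dv1, dv2 (de-blurred) and dh3, dh4, dv3, dv4 (observed),
        # evaluated for the interior pixels only.
        dh1 = Idb[1:-1, 1:-1] - Idb[1:-1, :-2];  dh2 = Idb[1:-1, 1:-1] - Idb[1:-1, 2:]
        dv1 = Idb[1:-1, 1:-1] - Idb[:-2, 1:-1];  dv2 = Idb[1:-1, 1:-1] - Idb[2:, 1:-1]
        dh3 = I[1:-1, 1:-1] - I[1:-1, :-2];      dh4 = I[1:-1, 1:-1] - I[1:-1, 2:]
        dv3 = I[1:-1, 1:-1] - I[:-2, 1:-1];      dv4 = I[1:-1, 1:-1] - I[2:, 1:-1]
        bad_h = ((np.sign(dh1) != np.sign(dh3)) | (np.sign(dh2) != np.sign(dh4))) \
                & (np.abs(dh1) >= th1 * MAX) & (np.abs(dh2) >= th1 * MAX)
        bad_v = ((np.sign(dv1) != np.sign(dv3)) | (np.sign(dv2) != np.sign(dv4))) \
                & (np.abs(dv1) >= th1 * MAX) & (np.abs(dv2) >= th1 * MAX)
        mask[1:-1, 1:-1][bad_h | bad_v] = 0
        return mask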
  • At the step 540 the restored image is updated. The pixels that form the restored image are simply updated with the pixels from the de-blurred image that were not marked as distorted. This step can be implemented as follows:
  • for every pixel from Idb(x,y)
        if mask(x,y)=1
            Ir(x,y)=Idb(x,y);
        end
    end
  • At the step 550, the intermediate output image is scanned and the pixels that still contain overshooting are detected. When the restored image is scanned the horizontal and vertical differences between adjacent pixels can be computed as follows:

  • dh5(x,y)=Ir(x, y)−Ir(x, y−1)

  • dh6(x,y)=Ir(x, y)−Ir(x, y+1)

  • dv5(x,y)=Ir(x, y)−Ir(x−1, y)

  • dv6(x,y)=Ir(x, y)−Ir(x+1, y)
  • The signs of the corresponding differences dh5 and dh3, dh6 and dh4, dv5 and dv3, and dv6 and dv4 are compared. If the signs differ, the amount of overshooting is computed as:
  • if NOT[sign(dh5(x,y))=sign(dh3(x,y))] OR NOT[sign(dh6(x,y))=sign(dh4(x,y))]
        H(x,y)=min(abs(dh5(x,y)),abs(dh6(x,y)));
    end
    if NOT[sign(dv5(x,y))=sign(dv3(x,y))] OR NOT[sign(dv6(x,y))=sign(dv4(x,y))]
        V(x,y)=min(abs(dv5(x,y)),abs(dv6(x,y)));
    end
  • By comparing the signs of the differences computed on the restored image and on the observed image, the local shapes of the two images are compared. For pixels where the local shapes differ, the overshooting in the restored image is estimated by taking the minimum of the absolute values of the two adjacent differences. This is computed in both the vertical and horizontal directions.
  • At the step 560 the overshooting is checked. If the maximum overshooting is larger than a predefined threshold, the restoration procedure is stopped and the restored image Ir is returned at the output. If there is no pixel in the restored image with overshooting larger than the threshold, the parameters of the de-blurring method are changed and the procedure continues from step 520. This step can be implemented as follows:
  • if max(max(H(x,y)),max(V(x,y)))>=th2*MAX
        return the image Ir and stop the restoration process
    else
        modify the parameters of the de-blurring method and go to step 520
    end
  • The threshold th2 for overshooting detection is defined as a percentage of the maximum pixel value of the original image I.
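  • Putting steps 510-570 together, the overall control loop of FIG. 5 might look as follows (a sketch assuming NumPy; deblur and update_params stand for an externally supplied one-pass de-blurring algorithm and its parameter update rule, overshoot_mask is the helper sketched above, and the default thresholds are illustrative assumptions):

    import numpy as np

    def estimate_overshoot(Ir, I):
        # Step 550: where the local shape of Ir differs from that of the observed
        # image I, estimate the overshoot as the minimum of the absolute values of
        # the two adjacent differences (horizontal and vertical, interior pixels).
        Ir = Ir.astype(float); I = I.astype(float)
        dh5 = Ir[1:-1, 1:-1] - Ir[1:-1, :-2];  dh6 = Ir[1:-1, 1:-1] - Ir[1:-1, 2:]
        dv5 = Ir[1:-1, 1:-1] - Ir[:-2, 1:-1];  dv6 = Ir[1:-1, 1:-1] - Ir[2:, 1:-1]
        dh3 = I[1:-1, 1:-1] - I[1:-1, :-2];    dh4 = I[1:-1, 1:-1] - I[1:-1, 2:]
        dv3 = I[1:-1, 1:-1] - I[:-2, 1:-1];    dv4 = I[1:-1, 1:-1] - I[2:, 1:-1]
        H = np.where((np.sign(dh5) != np.sign(dh3)) | (np.sign(dh6) != np.sign(dh4)),
                     np.minimum(np.abs(dh5), np.abs(dh6)), 0.0)
        V = np.where((np.sign(dv5) != np.sign(dv3)) | (np.sign(dv6) != np.sign(dv4)),
                     np.minimum(np.abs(dv5), np.abs(dv6)), 0.0)
        return np.maximum(H, V)

    def controlled_restoration(I, deblur, params, update_params,
                               th1=0.05, th2=0.10, max_iter=20):
        I = I.astype(float)
        Ir = I.copy()                                   # step 510: Ir initialized with I
        frozen = np.zeros(I.shape, dtype=bool)          # pixels whose restoration is terminated
        MAX = I.max()
        for _ in range(max_iter):
            Idb = deblur(I, params)                     # step 520: one-pass de-blurring of I
            mask = overshoot_mask(I, Idb, th1)          # step 530: detect over-amplified edges
            frozen |= (mask == 0)
            Ir[~frozen] = Idb[~frozen]                  # step 540: update non-frozen pixels only
            overshoot = estimate_overshoot(Ir, I)       # step 550: remaining overshoot of Ir
            if overshoot.max() >= th2 * MAX:            # step 560: stop on persistent overshoot
                break
            params = update_params(params)              # step 570: e.g. stronger sharpening
        return Ir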
  • The regularization method (530, 550 and 560 from FIG. 5) can also be combined with the above described iterative restoration algorithm from equation (6). Other non-iterative restoration algorithms such as high pass filtering can be implemented in an iterative manner following the above method with local and global regularization. The local and global regularizations defined above can be applied together or separately also to some other iterative restoration techniques.
  • Image Reconstruction Chain
  • The restoration of each colour component described above is applied as the first operation in the image reconstruction chain. The other operations (140, 260) follow, such as Automatic White Balance, Colour Filter Array Interpolation (CFAI), colour gamut conversion, geometrical distortion and shading correction, noise reduction, and sharpening. It will be appreciated that the final image quality (270) may depend on the effective and optimized use of all these operations in the reconstruction chain. Some of the most effective implementations of the image reconstruction algorithms are non-linear. In FIG. 1 the image processing continues e.g. with image compression (150) and/or a downsampling/dithering (160) process. The image can be viewed (180) on the camera viewfinder or display, or stored (170) in compressed form in the memory.
  • The use of restoration as the first operation in the reconstruction chain ensures the best fidelity to the assumed linear imaging model. The following algorithms, especially the colour filter array interpolation and the noise reduction algorithms, act as an additional regularization mechanism to prevent over-amplification due to excessive restoration.
  • Implementation
  • The system according to the invention can be arranged into a device such as a mobile terminal, a web cam, a digital camera or another digital imaging device. The system can be a part of the digital signal processing in a camera module to be installed into one of said devices. One example of the device is an imaging mobile terminal, illustrated as a simplified block chart in FIG. 3. The device 300 comprises optics 310 or a similar means for capturing images, or it can operatively communicate with external optics or a digital camera for capturing images. The device 300 can also comprise communication means 320 having a transmitter 321 and a receiver 322. There can also be other communication means 380 having a transmitter 381 and a receiver 382. The first communication means 320 can be adapted for telecommunication and the other communication means 380 can be a short-range communication means, such as a Bluetooth™ system, a WLAN system (Wireless Local Area Network) or another system which suits local use and communication with another device. The device 300 according to FIG. 3 also comprises a display 340 for displaying visual information. In addition the device 300 comprises a keypad 350 for inputting data, for controlling the image capturing process, etc. The device 300 can also comprise audio means 360, such as an earphone 361 and a microphone 362, and optionally a codec for coding (and decoding, if needed) the audio information. The device 300 also comprises a control unit 330 for controlling functions in the device 300, such as the restoration algorithm according to the invention. The control unit 330 may comprise one or more processors (CPU, DSP). The device further comprises memory 370 for storing data, programs, etc.
  • The imaging module according to the invention comprises imaging optics and an image sensor, as well as means for finding degradation information of each colour component, means for using said degradation information to determine a degradation function, and further means for restoring said each colour component by said degradation function. This imaging module can be arranged into the device described previously. The imaging module can also be arranged into a stand-alone device 410, as illustrated in FIG. 4, communicating with an imaging device 400 and with a displaying device, which displaying device can also be said imaging device 400 or some other device, such as a personal computer. Said stand-alone device 410 comprises a restoration module 411 and optionally another imaging module 412, and it can be used for image reconstruction independently. The communication between the imaging device 400 and the stand-alone device 410 can be handled by a wired or wireless network. Examples of such networks are the Internet, WLAN, Bluetooth, etc.
  • The foregoing detailed description is provided for clearness of understanding only, and no unnecessary limitations should be read therefrom into the claims herein.
  • While there have been shown and described and pointed out fundamental novel features of the invention as applied to preferred embodiments thereof, it will be understood that various omissions and substitutions and changes in the form and details of the devices and methods described may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or embodiment of the invention may be incorporated in any other disclosed or described or suggested form or embodiment as a general matter of design choice. It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto. Furthermore, in the claims means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures. Thus although a nail and a screw may not be structural equivalents in that a nail employs a cylindrical surface to secure wooden parts together, whereas a screw employs a helical surface, in the environment of fastening wooden parts, a nail and a screw may be equivalent structures.

Claims (43)

1. A method for developing a model for improving image quality of a digital image comprising:
finding degradation information of each of at least one colour component of said image,
obtaining a degradation function according to the degradation information, and
restoring said each colour component by said degradation function.
2. The method according to claim 1, wherein a regularization control is applied to the restored colour components.
3. (canceled)
4. (canceled)
5. (canceled)
6. (canceled)
7. (canceled)
8. (canceled)
9. (canceled)
10. A model for improving image quality of a digital image, said model being obtainable by a method as claimed in claim 1.
11. A use of a model according to claim 10 for improving the image quality of a digital image.
12. A method for improving image quality of a digital image comprising:
finding degradation information of each of at least one colour component of the image,
obtaining a degradation function according to the degradation information, and
restoring said each colour component by said degradation function.
13. The method according to claim 12, wherein a regularization control is applied to the restored colour components.
14. The method according to claim 12, wherein said degradation information of each colour component is found by a point-spread function.
15. The method according to claim 14, wherein the restoration is implemented by an iterative restoration function being determined from the point-spread function of each colour component.
16. The method according to claim 12, wherein the restoration is implemented by an iterative restoration function where at each iteration a one step de-blurring method with regularization is implemented.
17. The method according to claim 12, wherein said image is unprocessed image data, wherein said restored colour components are further processed by other image reconstruction algorithms.
18. The method according to claim 12, wherein one of the following colour systems is used: red, green, blue; hue, saturation, value; cyan, magenta, yellow, blue; luminance, chrominance.
19. The method according to claim 13, wherein the regularization control is implemented into the de-blurring method for obtaining a de-blurred image.
20. The method according to claim 19, wherein overshooting pixels are detected by a first and a second threshold value.
21. A method for restoration of an image, wherein the restoration is implemented by an iterative restoration function where at each iteration a de-blurring method with regularization is implemented.
22. The method according to claim 21, wherein a regularization control is applied to the restored colour components.
23. The method according to claim 21, wherein the regularization control is implemented into the de-blurring method for obtaining a de-blurred image.
24. The method according to claim 21, wherein overshooting pixels are detected by a first and a second threshold value.
25. An apparatus for determining a model for improving image quality of a digital image comprising:
a control unit configured for finding degradation information of each of at least one colour component of the image,
said control unit configured for obtaining a degradation function according to the degradation information, and
said control unit further configured for restoring said each colour component by said degradation function.
26. The apparatus according to claim 25, wherein the control unit is further configured for applying regularization control during the restoration.
27. The apparatus according to claim 25, wherein the control unit is further configured for further processing said image by other image reconstruction algorithms.
28. The apparatus according to claim 25 being capable of utilizing one of the following colour systems: red, green, blue; hue, saturation, value; cyan, magenta, yellow, blue; luminance, chrominance.
29. The apparatus according to claim 26, wherein for the regularization control, said system control unit is further configured for de-blurring the restored image.
30. An imaging module comprising imaging optics and an image sensor for forming an image through the imaging optics onto the light sensitive image sensor wherein a model for improving image quality as claimed in claim 10 is related to said imaging module.
31. The imaging module according to claim 30, wherein a control unit is further configured for applying regularization control during the restoration.
32. A device comprising an imaging module as claimed in claim 30.
33. The device according to claim 32 being a mobile device equipped with communication capabilities.
34. A program module for improving image quality in a device comprising an imaging module, said program module comprising a control unit configured for:
finding degradation information of each colour component of the image,
obtaining a degradation function according to the degradation information, and
restoring said each colour component by said degradation function.
35. The program module according to claim 34, wherein the control unit further comprises instructions for applying regularization control during the restoration.
36. (canceled)
37. (canceled)
38. A computer program product for improving image quality comprising computer implemented instructions stored on a readable medium, said instructions when executed by a processor for
finding degradation information of each colour component of the image,
obtaining a degradation function according to the degradation information, and
restoring said each colour component by said degradation function.
39. The computer program product according to claim 38, further comprising instructions for applying regularization control during the restoration.
40. A computer program product for a restoration of an image, comprising computer readable instructions for implementing a de-blurring with regularization at each iteration of an iterative restoration.
41. The computer program product according to claim 40, further comprising instructions for detecting overshooting pixels by a first and a second threshold value.
42. An apparatus for determining a model for improving image quality of a digital image comprising:
means for finding degradation information of each of at least one colour component of the image,
means for obtaining a degradation function according to the degradation information, and
means for restoring said each colour component by said degradation function.
43. The apparatus according to claim 42, further comprising:
means for applying regularization control during the restoration.
US11/632,093 2004-07-09 2005-01-04 Restoration of Color Components in an Image Model Abandoned US20090046944A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/632,093 US20090046944A1 (en) 2004-07-09 2005-01-04 Restoration of Color Components in an Image Model

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US10/888,534 US7728844B2 (en) 2004-07-09 2004-07-09 Restoration of color components in an image model
US10/888534 2004-07-09
PCT/FI2005/050001 WO2006005798A1 (en) 2004-07-09 2005-01-04 Methods, system, program modules and computer program product for restoration of color components in an image model
US11/632,093 US20090046944A1 (en) 2004-07-09 2005-01-04 Restoration of Color Components in an Image Model

Publications (1)

Publication Number Publication Date
US20090046944A1 true US20090046944A1 (en) 2009-02-19

Family

ID=35599485

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/888,534 Expired - Fee Related US7728844B2 (en) 2004-07-09 2004-07-09 Restoration of color components in an image model
US11/632,093 Abandoned US20090046944A1 (en) 2004-07-09 2005-01-04 Restoration of Color Components in an Image Model

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/888,534 Expired - Fee Related US7728844B2 (en) 2004-07-09 2004-07-09 Restoration of color components in an image model

Country Status (6)

Country Link
US (2) US7728844B2 (en)
EP (1) EP1766569A1 (en)
JP (1) JP4571670B2 (en)
KR (1) KR100911890B1 (en)
CN (1) CN1985274A (en)
WO (1) WO2006005798A1 (en)


Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101819325B (en) * 2003-01-16 2015-11-25 帝欧希数字光学科技国际有限公司 The method of optical system and the described optical system of generation
JP4250583B2 (en) * 2004-09-30 2009-04-08 三菱電機株式会社 Image capturing apparatus and image restoration method
JP2007060457A (en) * 2005-08-26 2007-03-08 Hitachi Ltd Image signal processor and processing method
CN101490694B (en) * 2005-11-10 2012-06-13 德萨拉国际有限公司 Image enhancement in the mosaic domain
WO2007072477A2 (en) * 2005-12-21 2007-06-28 D-Blur Technologies Ltd. Image enhancement using hardware-based deconvolution
US20070165961A1 (en) * 2006-01-13 2007-07-19 Juwei Lu Method And Apparatus For Reducing Motion Blur In An Image
JP2007304525A (en) * 2006-05-15 2007-11-22 Ricoh Co Ltd Image input device, electronic equipment, and image input method
TWI419079B (en) * 2006-11-07 2013-12-11 Digitaloptics Corp Internat Image enhancement in the mosaic domain
US7830428B2 (en) * 2007-04-12 2010-11-09 Aptina Imaging Corporation Method, apparatus and system providing green-green imbalance compensation
US7876363B2 (en) * 2007-04-19 2011-01-25 Aptina Imaging Corporation Methods, systems and apparatuses for high-quality green imbalance compensation in images
WO2008135995A2 (en) * 2007-05-07 2008-11-13 D-Blur Technologies Ltd. Image restoration with enhanced filtering
US7983509B1 (en) * 2007-05-31 2011-07-19 Hewlett-Packard Development Company, L.P. Estimating a point spread function of an image capture device
US8547444B2 (en) * 2007-06-05 2013-10-01 DigitalOptics Corporation International Non-linear transformations for enhancement of images
KR101341101B1 (en) * 2007-09-11 2013-12-13 삼성전기주식회사 Apparatus and Method for restoring image
KR101399012B1 (en) * 2007-09-12 2014-05-26 삼성전기주식회사 apparatus and method for restoring image
US8131072B2 (en) * 2007-11-26 2012-03-06 Aptina Imaging Corporation Method and apparatus for reducing image artifacts based on aperture-driven color kill with color saturation assessment
JP5071188B2 (en) * 2008-03-24 2012-11-14 富士通株式会社 Image encryption / decryption device and program
US8135233B2 (en) * 2008-05-22 2012-03-13 Aptina Imaging Corporation Method and apparatus for the restoration of degraded multi-channel images
US8131097B2 (en) * 2008-05-28 2012-03-06 Aptina Imaging Corporation Method and apparatus for extended depth-of-field image restoration
US20100011044A1 (en) * 2008-07-11 2010-01-14 James Vannucci Device and method for determining and applying signal weights
US20100011041A1 (en) 2008-07-11 2010-01-14 James Vannucci Device and method for determining signals
CN101666637B (en) * 2008-09-03 2012-06-13 鸿富锦精密工业(深圳)有限公司 Roundness calculation and display system and method
EP2175416A1 (en) * 2008-10-13 2010-04-14 Sony Corporation Method and system for image deblurring
EP2351428B1 (en) * 2008-10-31 2017-02-22 Nokia Solutions and Networks Oy Carrier selection for accessing a cellular system
JP5213688B2 (en) * 2008-12-19 2013-06-19 三洋電機株式会社 Imaging device
KR100990791B1 (en) * 2008-12-31 2010-10-29 포항공과대학교 산학협력단 Method For Removing Blur of Image And Recorded Medium For Perfoming Method of Removing Blur
US8199248B2 (en) * 2009-01-30 2012-06-12 Sony Corporation Two-dimensional polynomial model for depth estimation based on two-picture matching
JP5743384B2 (en) * 2009-04-14 2015-07-01 キヤノン株式会社 Image processing apparatus, image processing method, and computer program
US8587703B2 (en) * 2009-12-01 2013-11-19 Aptina Imaging Corporation Systems and methods for image restoration
JP2011134204A (en) * 2009-12-25 2011-07-07 Sony Corp Image processing device, image processing method and program
JPWO2011132279A1 (en) * 2010-04-21 2013-07-18 キヤノン株式会社 Image processing apparatus, method, and recording medium
CN102156968B (en) * 2011-04-11 2012-06-13 合肥工业大学 Color cubic priori based single image visibility restoration method
EA016450B1 (en) * 2011-09-30 2012-05-30 Закрытое Акционерное Общество "Импульс" Method for brightness correction of defective pixels of digital monochrome image
KR101382921B1 (en) * 2012-06-28 2014-04-08 엘지이노텍 주식회사 Camera, image sensor thereof, and driving method thereof
TWI605418B (en) * 2013-04-15 2017-11-11 晨星半導體股份有限公司 Image editing method and apparatus
JP6221330B2 (en) * 2013-05-01 2017-11-01 富士通株式会社 Image processing program, image processing apparatus, and image processing method
JP5701942B2 (en) * 2013-07-10 2015-04-15 オリンパス株式会社 Imaging apparatus, camera system, and image processing method
CN103679652B (en) * 2013-11-29 2017-04-19 北京空间机电研究所 Image restoration system capable of improving imaging quality greatly
TWI512682B (en) * 2014-09-30 2015-12-11 Quanta Comp Inc Image processing system and saturation compensation method
JP6350205B2 (en) * 2014-10-21 2018-07-04 富士通株式会社 Processing apparatus, processing method, and processing program
JP2017028583A (en) * 2015-07-24 2017-02-02 キヤノン株式会社 Image processor, imaging apparatus, image processing method, image processing program, and storage medium
US10593291B2 (en) * 2015-09-17 2020-03-17 Apple Inc. Methods for color sensing ambient light sensor calibration
KR102172634B1 (en) * 2015-10-08 2020-11-03 삼성전기주식회사 Camera module, electronic device, and method for operating the same
US10250782B2 (en) * 2015-10-08 2019-04-02 Samsung Electro-Mechanics Co., Ltd. Camera module, electronic device, and method of operating the same using pre-estimated lens-customized point spread function (PSF)
US9858653B2 (en) * 2016-02-02 2018-01-02 Motorola Mobility Llc Deblurring an image
CN111123538B (en) * 2019-09-17 2022-04-05 印象认知(北京)科技有限公司 Image processing method and method for adjusting diffraction screen structure based on point spread function
CN114339447B (en) * 2020-09-29 2023-03-21 北京字跳网络技术有限公司 Method, device and equipment for converting picture into video and storage medium
CN116456098A (en) * 2022-01-05 2023-07-18 南宁富联富桂精密工业有限公司 Video compression method, terminal and computer readable storage medium


Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69223850T2 (en) * 1991-05-30 1998-05-14 Canon Information Syst Res Compression increase in graphic systems
JPH0628469A (en) * 1992-07-06 1994-02-04 Olympus Optical Co Ltd Deteriorated image restoring system
JP3166462B2 (en) * 1992-12-28 2001-05-14 ミノルタ株式会社 Image recording / reproducing system and image reproducing apparatus having blur correction function
US5852675A (en) * 1995-04-14 1998-12-22 Kiyoshi Matsuo Color chart for image correction and method of color correction
SE9601229D0 (en) * 1996-03-07 1996-03-29 B Ulf Skoglund Apparatus and method for providing reconstruction
JP3964042B2 (en) * 1998-04-08 2007-08-22 株式会社リコー Color image processing apparatus and color image processing method
US6414760B1 (en) * 1998-10-29 2002-07-02 Hewlett-Packard Company Image scanner with optical waveguide and enhanced optical sampling rate
US6288798B1 (en) * 1998-11-30 2001-09-11 Xerox Corporation Show-through compensation apparatus and method
JP2001197356A (en) 2000-01-13 2001-07-19 Minolta Co Ltd Device and method for restoring picture
JP2001197354A (en) * 2000-01-13 2001-07-19 Minolta Co Ltd Digital image pickup device and image restoring method
JP2001197355A (en) * 2000-01-13 2001-07-19 Minolta Co Ltd Digital image pickup device and image restoring method
JP2002290830A (en) * 2001-03-27 2002-10-04 Minolta Co Ltd Imaging apparatus with image restoring function
JP2002300459A (en) * 2001-03-30 2002-10-11 Minolta Co Ltd Image restoring device through iteration method, image restoring method and its program, and recording medium
JP2002300461A (en) 2001-03-30 2002-10-11 Minolta Co Ltd Image restoring device, image restoring method and program thereof and recording medium
JP2002300384A (en) * 2001-03-30 2002-10-11 Minolta Co Ltd Image recovery device, image recovery method, program and recording medium
JP2003060916A (en) 2001-08-16 2003-02-28 Minolta Co Ltd Image processor, image processing method, program and recording medium
KR100444329B1 (en) 2002-02-16 2004-08-16 주식회사 성진씨앤씨 Digital video processing device eliminating the noise generated under insufficient illulmination

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5790709A (en) * 1995-02-14 1998-08-04 Ben-Gurion, University Of The Negev Method and apparatus for the restoration of images degraded by mechanical vibrations
US6822758B1 (en) * 1998-07-01 2004-11-23 Canon Kabushiki Kaisha Image processing method, system and computer program to improve an image sensed by an image sensing apparatus and processed according to a conversion process
US20010008418A1 (en) * 2000-01-13 2001-07-19 Minolta Co., Ltd. Image processing apparatus and method
US20020008715A1 (en) * 2000-02-03 2002-01-24 Noam Sorek Image resolution improvement using a color mosaic sensor

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8275200B2 (en) * 2006-06-09 2012-09-25 Nokia Siemens Netowrks Oy Method, a device, a module and a computer program product for determining the quality of an image
US20090245633A1 (en) * 2006-06-09 2009-10-01 Radu Bilcu Method, a Device, a Module and a Computer Program Product for Determining the Quality of an Image
US20080012964A1 (en) * 2006-07-14 2008-01-17 Takanori Miki Image processing apparatus, image restoration method and program
US20080239088A1 (en) * 2007-03-28 2008-10-02 Konica Minolta Opto, Inc. Extended depth of field forming device
US8023758B2 (en) * 2007-08-07 2011-09-20 Qualcomm Incorporated Surface mesh matching for lens roll-off correction
US20090043524A1 (en) * 2007-08-07 2009-02-12 Szepo Robert Hung Surface mesh matching for lens roll-off correction
US20100040141A1 (en) * 2008-08-15 2010-02-18 Shaw-Min Lei Adaptive restoration for video coding
US8325801B2 (en) * 2008-08-15 2012-12-04 Mediatek Inc. Adaptive restoration for video coding
US8798141B2 (en) 2008-08-15 2014-08-05 Mediatek Inc. Adaptive restoration for video coding
US20100079630A1 (en) * 2008-09-29 2010-04-01 Kabushiki Kaisha Toshiba Image processing apparatus, imaging device, image processing method, and computer program product
US20120288215A1 (en) * 2011-05-13 2012-11-15 Altek Corporation Image processing device and processing method thereof
US8467630B2 (en) * 2011-05-13 2013-06-18 Altek Corporation Image processing device and processing method for generating a super-resolution image
US8781223B2 (en) * 2011-05-26 2014-07-15 Via Technologies, Inc. Image processing system and image processing method
US20120301016A1 (en) * 2011-05-26 2012-11-29 Via Technologies, Inc. Image processing system and image processing method
US8798364B2 (en) * 2011-05-26 2014-08-05 Via Technologies, Inc. Image processing system and image processing method
WO2013148139A1 (en) * 2012-03-29 2013-10-03 Nikon Corporation Algorithm for minimizing latent sharp image and point spread function cost functions with spatial mask fidelity
US9245328B2 (en) 2012-03-29 2016-01-26 Nikon Corporation Algorithm for minimizing latent sharp image cost function and point spread function with a spatial mask in a fidelity term
US9262815B2 (en) 2012-03-29 2016-02-16 Nikon Corporation Algorithm for minimizing latent sharp image cost function and point spread function cost function with a spatial mask in a regularization term
US8644645B2 (en) * 2012-04-24 2014-02-04 Altek Corporation Image processing device and processing method thereof
US10165263B2 (en) 2013-09-30 2018-12-25 Nikon Corporation Point spread function estimation of optics blur
CN103606130A (en) * 2013-10-22 2014-02-26 中国电子科技集团公司第二十八研究所 Infrared degraded image adaptive restoration method
EP3012802A1 (en) * 2014-10-21 2016-04-27 Canon Kabushiki Kaisha Image processing apparatus, image capturing apparatus, image processing method and image processing program
US9654707B2 (en) 2014-10-21 2017-05-16 Canon Kabushiki Kaisha Image processing apparatus, image capturing apparatus, image processing method and storage medium storing image processing program
US20160253790A1 (en) * 2015-02-26 2016-09-01 Nokia Technologies Oy Method, apparatus and computer program product for reducing chromatic aberrations in deconvolved images
US9836827B2 (en) * 2015-02-26 2017-12-05 Nokia Technologies Oy Method, apparatus and computer program product for reducing chromatic aberrations in deconvolved images
US10846829B1 (en) * 2018-01-30 2020-11-24 Ambarella International Lp Image sharpening with edge direction based undershoot and overshoot reduction
US20210185285A1 (en) * 2018-09-18 2021-06-17 Zhejiang Uniview Technologies Co., Ltd. Image processing method and apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
WO2006005798A1 (en) 2006-01-19
US20060013479A1 (en) 2006-01-19
KR100911890B1 (en) 2009-08-11
JP2008506174A (en) 2008-02-28
EP1766569A1 (en) 2007-03-28
CN1985274A (en) 2007-06-20
KR20070036773A (en) 2007-04-03
US7728844B2 (en) 2010-06-01
JP4571670B2 (en) 2010-10-27


Legal Events

Date Code Title Description
AS Assignment

Owner name: NOKIA CORPORATION, FINLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BILCU, RADU CIPRIAN;ALENIUS, SAKARI;TRIMECHE, MEJDI;AND OTHERS;REEL/FRAME:020994/0060;SIGNING DATES FROM 20070215 TO 20070220

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION