WO2014048451A1 - Method and device for blind correction of optical aberrations in a digital image - Google Patents


Info

Publication number
WO2014048451A1
Authority
WIPO (PCT)
Prior art keywords
image
point spread function
digital image
estimating
Application number
PCT/EP2012/068868
Other languages
French (fr)
Inventor
Christian Schuler
Michael Hirsch
Stefan Harmeling
Bernhard SCHÖLKOPF
Original Assignee
MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V.
Application filed by MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V.
Priority to PCT/EP2012/068868
Publication of WO2014048451A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/73
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20004 Adaptive image processing
    • G06T2207/20012 Locally adaptive
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20024 Filtering details
    • G06T2207/20028 Bilateral filtering

Definitions

  • (a) Prediction step: the current estimate x is first denoised with a bilateral filter, then edges are emphasized with chromatic shock filtering and by zeroing gradients in flat regions of the image (see [ ] for further details).
  • the gradient selection is modified such that for every radius ring the strongest gradients are selected.
  • (b) Kernel estimation step: the blur parameters μ are estimated, where Ξ^(r) is the matrix containing the basis elements for the r-th patch. Note that μ is the same for all patches. This optimization can be performed iteratively.
  • the regularization parameters α and β are set to 0.1 and 0.01, respectively.
  • the iterations are costly, and one can speed things up by using the orthonormal basis Ξ.
  • the blur is initially estimated unconstrained and then projected onto the orthonormal basis.
  • the fit of the general EFF forward model (without the basis) is first minimized with an additional regularization term on the local blur kernels.
  • the algorithm follows a coarse-to-fine approach. Having estimated the blur parameters μ, a non-uniform version of Krishnan and Fergus' approach (Krishnan, D., Fergus, R.: Fast image deconvolution using hyper-Laplacian priors. In: Advances in Neural Inform. Process. Syst. 2009; Hirsch, M., Schuler, C., Harmeling, S., Schölkopf, B.: Fast removal of non-uniform camera shake. In: Proc. IEEE Intern. Conf. on Comput. Vision. 2011) is used for the non-blind deconvolution to recover a high-quality estimate of the true image. For the x-subproblem, the direct deconvolution formula (11) is used.
  • the algorithm may be implemented on a Graphics Processing Unit (GPU) in Python using PyCUDA. All experiments were run on a 3.0 GHz Intel Xeon with an NVIDIA Tesla C2070 GPU with 6 GB of memory. The basis elements generated as previously detailed are orthogonalized using the SVDLIBC library.
  • Figure 6 compares an application of a method according to an embodiment of the invention with a semi-blind and a non-blind method according to the approach of Schuler et al. (Schuler, C., Hirsch, M., Harmeling, S., Schölkopf, B.: Non-stationary correction of optical aberrations. In: Proc. IEEE Intern. Conf. on Comput. Vision. 2011). Schuler et al. show deblurring results on images taken with a lens that consists only of a single element, thus exhibiting strong optical aberrations, in particular coma. Since their approach is non-blind, they measure the non-uniform PSF with a point source and apply non-blind deconvolution.
  • the inventive approach is blind and is directly applied to the blurry image.
  • the local blurs scale linearly with radial position, which can be easily incorporated into the basis generation scheme of the invention.
  • Photoshop's "Smart Sharpen" function is applied for removing lens blur. It depends on the blur size and the amount of blur, which are manually controlled by the user.
  • this method may be called semi-blind since it assumes a parametric form. Even though its parameters are chosen carefully, the inventors were not able to obtain comparable results.
  • Figure 9 shows results on an image taken from Kee et al.
  • the closeups reveal that Kee's non-blind approach is slightly superior in terms of sharpness and noise- robustness.
  • the blind approach according to an embodiment of the invention better removes chromatic aberration.
  • a general problem of methods relying on a prior calibration is that optical aberrations depend on the wavelength of the transmitting light continuously: an approximation with only a few (generally three) color channels therefore depends on the lighting of the scene and could change if there is a discrepancy between the calibration setup and a photo's lighting conditions. This is avoided with a blind approach.
  • DXO Optics Pro 7.2 uses a database for combinations of cameras/lenses. While it uses calibration data, it is not clear whether it additionally infers elements of the optical aberration from the image. For comparison, the photo was processed with the options "chromatic aberrations" and "DxO lens softness" set to their default values. The result is good and exhibits less noise than the other two approaches (see figure 9); however, it is not clear if an additional denoising step is employed by the software.
  • the blind approach to removing optical aberrations according to the invention can also be applied to historical photos, where information about the lens is not available.
  • the left column of figure 10 shows a photo (and some detail) from the Library of Congress archive that was taken around 1940. Assuming that the analog film used has a sufficiently linear light response, the inventors applied the blind lens correction method and obtained a sharper image. However, the blur appeared to be small, so algorithms like Adobe's "Smart Sharpen" also give reasonable results. Neither DXO nor Kee et al.'s approach can be applied here since lens data is not available.
  • the method according to the invention blindly removes spatially varying blur caused by imperfections in lens designs, including chromatic aberrations. Without relying on elaborate calibration procedures, results comparable to non-blind methods can be achieved.
  • the PSF is constrained to a class that exhibits the generic symmetry properties of lens blur, while fast PSF estimation is possible.
  • the invention is not limited to the above-described embodiment. While it is useful to be able to infer the image blur from a single image, the blur does not change for photos taken with the same lens settings. On the one hand, this implies that the PSFs estimated for these settings can be transferred, for instance, to images where image prior assumptions are violated. On the other hand, it suggests the possibility to improve the quality of the PSF estimates by utilizing a substantial database of images. Finally, while optical aberrations are a major source of image degradation, a picture may also suffer from motion blur. By choosing a suitable basis, these two effects may be combined.
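The speed-up described above (estimating the blur unconstrained, then projecting it onto the orthonormal basis Ξ) is a plain linear-algebra step. A minimal sketch, with random stand-ins for the basis and the unconstrained estimate (all names illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)
# Xi: stand-in for the orthonormal aberration basis (orthonormal columns
# obtained here from a QR decomposition of a random matrix)
Xi, _ = np.linalg.qr(rng.standard_normal((500, 40)))
a = rng.standard_normal(500)     # unconstrained blur estimate, vectorized
mu = Xi.T @ a                    # coefficients in the aberration basis
a_proj = Xi @ mu                 # blur constrained to the symmetric class
```

Because the columns of Ξ are orthonormal, the projection needs only two matrix products and no linear solve, which is what makes this step cheap inside the iteration.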

Abstract

A fast and efficient computer-implemented method for recovering a digital image from an image acquired through an optical system, wherein the characteristics of the optical system are unknown, comprises the steps of estimating a point spread function based on the acquired image; estimating the recovered digital image, based on the estimated point spread function and the acquired image; and outputting the recovered digital image. According to the invention, the step of estimating a point spread function is blind in that the estimation is not based on actual characteristics of the optical system.

Description

METHOD AND DEVICE FOR BLIND CORRECTION OF OPTICAL ABERRATIONS IN A DIGITAL IMAGE
The present invention relates to the correction of optical aberrations in digital images.
TECHNICAL BACKGROUND
Optical lenses image scenes by refracting light onto photosensitive surfaces. The lens of the vertebrate eye creates images on the retina; the lens of a photographic camera creates images on digital sensors. This transformation should ideally satisfy a number of constraints formalizing our notion of a veridical imaging process. The design of any lens forms a trade-off between these constraints, leaving residual errors that are called optical aberrations. Some errors are due to the fact that light coming through different parts of the lens cannot be focused onto a single point, namely spherical aberrations, astigmatism and coma. Some errors appear because refraction depends on the wavelength of the light, e.g. chromatic aberrations. A third type of error, not treated in the present application, leads to image distortion by a deviation from a rectilinear projection. Camera lenses are carefully designed to minimize optical aberrations by combining elements of multiple shapes and glass types.
However, it is impossible to make a perfect lens, and it is very expensive to make a close-to-perfect lens. A much cheaper solution is to correct the optical aberration in software. However, deconvolution is a hard inverse problem. In practice, even non-blind uniform deconvolution requires assumptions to work robustly. Blind deconvolution is harder still, since one additionally has to estimate the blur kernel, and non-uniform or space-varying deconvolution means that one has to estimate the blur kernels as a function of image position.
PRIOR ART
Existing methods for reducing blur due to optical aberrations are non-blind deconvolution methods, i.e., they require a time-consuming calibration step to measure the point spread function (PSF) of the given camera-lens combination, and in principle they require this for all parameter settings. Early work is due to Joshi et al. (Joshi, N., Szeliski, R., Kriegman, D.: PSF estimation using sharp edge prediction. In: Proc. IEEE Conf. Comput. Vision and Pattern Recognition, 2008), who used a calibration sheet to estimate the PSF. By finding sharp edges in the image, they also were able to remove chromatic aberrations blindly. Kee et al. (Kee, E., Paris, S., Chen, S., Wang, J.: Modeling and removing spatially-varying optical blur. In: Proc. IEEE Int. Conf. Computational Photography, 2011) built upon this calibration method and examined how lens blur can be modeled such that, for continuous parameter settings like zoom, only a few discrete measurements are sufficient. Schuler et al. (Schuler, C., Hirsch, M., Harmeling, S., Schölkopf, B.: Non-stationary correction of optical aberrations. In: Proc. IEEE Intern. Conf. on Comput. Vision, 2011) use point light sources rather than a calibration sheet, and measure the PSF as a function of image location. The commercial software "DxO Optics Pro" (DXO) also removes "lens softness" relying on a previous calibration of a long list of lens/camera combinations referred to as "modules." Furthermore, Adobe's Photoshop comes with a "Smart Sharpener," correcting for lens blur after setting parameters for blur size and strength. It does not require knowledge about the lens used. However, it is unclear if a genuine PSF is inferred from the image, or the blur is just determined by the parameters. Beginning with Fergus et al.'s (Fergus, R., Singh, B., Hertzmann, A., Roweis, S.,
Freeman, W.: Removing camera shake from a single photograph. ACM Trans. Graph. 25, 2006) method for camera shake removal, extending the work of Miskin and MacKay (Miskin, J., MacKay, D.: Ensemble learning for blind image separation and deconvolution. Advances in Independent Component Analysis, 2000) with sparse image statistics, blind deconvolution became applicable to real photographs. With Cho and Lee's work (Cho, S., Lee, S.: Fast Motion Deblurring. ACM Trans. Graph. 28, 2009), the running time of blind deconvolution has become acceptable. These early methods were initially restricted to uniform (space-invariant) blur, and later extended to real-world spatially varying camera blur (Whyte, O., Sivic, J., Zisserman, A., Ponce, J.: Non-uniform deblurring for shaken images. In: Proc. IEEE Conf. Comput. Vision and Pattern Recognition, 2010; Hirsch, M., Schuler, C., Harmeling, S., Schölkopf, B.: Fast removal of non-uniform camera shake. In: Proc. IEEE Intern. Conf. on Comput. Vision, 2011). Progress has also been made regarding the quality of the blur estimation (Krishnan, D., Tay, T., Fergus, R.: Blind deconvolution using a normalized sparsity measure. In: Proc. IEEE Conf. Comput. Vision and Pattern Recognition, 2011; Levin, A., Weiss, Y., Durand, F., Freeman, W.: Efficient marginal likelihood optimization in blind deconvolution. In: Proc. IEEE Conf. Comput. Vision and Pattern Recognition, IEEE 2011); however, these methods are not yet competitive with the runtime of Cho and Lee's approach.
It is therefore an object of the present invention to provide a method and a device for recovering a digital image from an observed digital image that are fast and efficient, even when knowledge of the "true" image or characteristics of the optical system through which the image was acquired are not available.
SHORT SUMMARY OF THE INVENTION
This object is achieved by a method and a device according to the independent claims. Advantageous embodiments are defined in the dependent claims.
A computer-implemented method for recovering a digital image from an image acquired through an optical system, wherein the characteristics of the optical system are unknown, comprises the steps of estimating a point spread function based on the acquired image; estimating the recovered digital image, based on the estimated point spread function and the acquired image; and outputting the recovered digital image (x). The steps of estimating the point spread function and of estimating the recovered digital image may be repeated one or several times before outputting the recovered digital image. Outputting may comprise storing the recovered digital image in a computer-readable medium or sending the recovered digital image over an electronic communications network, such as the telephone network, the internet, a wireless access network and the like.
According to the invention, the step of estimating a point spread function is blind in that the estimation is not based on actual characteristics of the optical system, which are unknown.
The inventive approach is based on a forward model for the image formation process that incorporates two assumptions:
(a) The image contains certain elements typical of natural images; in particular there are sharp edges.
(b) Even though the blur due to optical aberrations is non-uniform or spatially varying across the image, there are circular symmetries reflecting potential characteristics of the unknown optical system that can be exploited.
Inverting a forward model has the benefit that if the assumptions are correct, it will lead to a plausible explanation of the image, making it more credible than an image obtained by sharpening the blurry image using, say, an algorithm that filters the image to increase high frequencies.
Furthermore, the inventive approach requires as an input only the blurry image, and not a point spread function that may have been obtained using other means such as a calibration step. This is a substantial advantage, since the actual blur depends not only on the particular photographic lens but also on settings such as focus, aperture and zoom. Moreover, there are cases where the camera settings are lost and the camera may even no longer be available, e.g., for historic photographs.
BRIEF DESCRIPTION OF THE FIGURES
These and other aspects and advantages of the present invention will become more apparent when studying the following detailed description, in connection with the figures, in which
Fig 1 shows a forward model of an optical aberration in an acquired image according to the invention;
Fig 2 shows three example groups of patches used for defining basis elements of a point spread function according to the invention;
Fig 3 shows different shifts used according to the invention for generating basis elements for the middle group of figure 2;
Fig 4 shows an SVD spectrum of a typical basis matrix B with cut-off;
Fig 5 illustrates how a chromatic shock filter removes color fringing;
Fig 6 compares an application of a method according to an embodiment of the invention with a semi-blind and a non-blind method according to the prior art;
Fig 7 compares point spread functions estimated by a method according to an embodiment of the invention and estimated according to a non-blind approach according to the prior art.
Fig 8 compares the inventive approach with a non-blind method according to the prior art, applied to an image taken with a Canon 24mm f/1.4 lens;
Fig 9 shows a comparison between the blind approach of the invention and two non-blind approaches of Kee et al. (Kee, E., Paris, S., Chen, S., Wang, J.: Modeling and removing spatially-varying optical blur. In: Proc. IEEE Int. Conf. Computational Photography, 2011).
Fig. 10 shows a photo and some detail from the Library of Congress archive that was taken around 1940 for comparing the inventive approach with a semi-blind method.
DETAILED DESCRIPTION
Figure 1 shows how an optical aberration may be represented by a forward model.
Optical aberrations cause image blur that is spatially varying across the image. As such they can be modeled as a non-uniform point spread function (PSF),
y = Σ_{r=1}^{R} a^(r) * (w^(r) ⊙ x)    (1)
where x denotes the ideal image and y is the image degraded by optical aberration. Variables x and y are assumed to be discretely sampled images, i.e., x and y are finite-sized matrices whose entries correspond to pixel intensities. w^(r) is a weighting matrix that masks out all of the image x except for a local patch by Hadamard multiplication (symbol ⊙, pixel-wise product). The r-th patch is convolved (symbol *) with a local blur kernel a^(r), also represented as a matrix. All blurred patches are summed up to form the degraded image. The more patches are considered (R is the total number of patches), the better the approximation to the true non-uniform PSF. The patches defined by the weighting matrices w^(r) usually overlap to yield smoothly varying blurs. The weights are chosen such that they sum up to one for each pixel. This forward model can be computed efficiently by making use of the short-time Fourier transform.
Since optical aberrations lead to image degradations that can be locally modeled as convolutions, equation (1) is a valid model. However, not all blurs expressible in this form correspond to blurs caused by optical aberrations. The invention thus defines various properties, i.e., constraints for the PSF basis, that reflect properties of the optical system used for acquiring an image. To define the basis, a few notions are introduced. The image y is split into overlapping patches, each characterized by the weights w^(r). For each patch, the symbol l_r denotes the line from the patch center to the image center, and d_r the length of line l_r, i.e., the distance between patch center and image center. It is assumed that local blur kernels a^(r) originating from optical aberrations have the following properties:
(a) Local reflection symmetry: a local blur kernel a^(r) is reflection symmetric with respect to the line l_r, implying a reflection symmetry of the unknown optical system.
(b) Global rotation symmetry: two local blur kernels a^(r) and a^(s) at the same distance to the image center (i.e., d_r = d_s) are related to each other by a rotation around the image center, implying a global rotation symmetry of the unknown optical system.
(c) Radial behavior: along a line through the image center, the local blur kernels change smoothly. Furthermore, the maximum size of a blur kernel is assumed to scale linearly with its distance to the image center.
These properties reflect plausible assumptions about the unknown optical system through which the input image was acquired. According to the invention, they lead to good approximations of real-world lens aberrations in practice.
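Before the kernels are constrained, the forward model of Eq. (1) itself can be made concrete. The following minimal sketch (all function and variable names are illustrative, not from the patent) masks out each patch, blurs it with its local kernel via an FFT-based convolution, and sums the results:

```python
import numpy as np

def conv_same(img, ker):
    """Zero-padded 'same' convolution of a 2-D image with an odd-sized kernel,
    computed via the FFT (a simple stand-in for the short-time Fourier
    implementation mentioned in the text)."""
    H, W = img.shape
    kh, kw = ker.shape
    ph, pw = H + kh - 1, W + kw - 1           # size of the full convolution
    out = np.fft.irfft2(np.fft.rfft2(img, (ph, pw)) *
                        np.fft.rfft2(ker, (ph, pw)), (ph, pw))
    return out[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W]

def eff_forward(x, weights, kernels):
    """Patch-wise forward model of Eq. (1): y = sum_r a^(r) * (w^(r) (.) x)."""
    y = np.zeros_like(x, dtype=float)
    for w, a in zip(weights, kernels):
        # mask out one patch (Hadamard product), blur it locally, accumulate
        y += conv_same(w * x, a)
    return y

# toy check: with centred delta kernels the model reduces to the identity
x = np.random.rand(32, 32)
w_left = np.zeros_like(x); w_left[:, :16] = 1.0
w_right = 1.0 - w_left                        # weights sum to one per pixel
delta = np.zeros((5, 5)); delta[2, 2] = 1.0   # identity blur kernel
y = eff_forward(x, [w_left, w_right], [delta, delta])
```

With genuinely different kernels per patch, the overlap of the weighting matrices is what yields the smoothly varying blur described above.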
For two-dimensional blur kernels, the basis is represented by K basis elements b_k, each consisting of R local blur kernels
b_k = (b_k^(1), ..., b_k^(R)).
Then the actual blur kernel a^(r) can be represented as a linear combination of basis elements,

a^(r) = Σ_{k=1}^{K} μ_k b_k^(r)    (2)
To define the basis elements, the patches are grouped into overlapping groups, such that each group contains all patches inside a certain ring around the image center, i.e., the center distance d_r determines whether a patch belongs to a particular group. Figure 2 shows basis elements for three example groups. All patches inside a group will be assigned similar kernels. The width and the overlap of the rings determine the amount of smoothness between groups, see property (c) above.
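The grouping of patches into overlapping rings by centre distance d_r might be sketched as follows; the names and the overlap parameterization are assumptions for illustration:

```python
import numpy as np

def ring_groups(patch_centers, image_center, ring_edges, overlap=0.5):
    """Assign each patch index to every ring whose (widened) bounds contain
    the patch's distance d_r to the image centre. Widening each nominal ring
    by `overlap` times its width makes neighbouring groups share patches,
    which controls the smoothness between groups."""
    d = np.linalg.norm(np.asarray(patch_centers, dtype=float)
                       - np.asarray(image_center, dtype=float), axis=1)
    groups = []
    for lo, hi in zip(ring_edges[:-1], ring_edges[1:]):
        pad = overlap * (hi - lo)
        groups.append(np.where((d >= lo - pad) & (d < hi + pad))[0])
    return groups

# patches at distances 0, 40, 55 and 90 from the image centre, two rings
groups = ring_groups([(0, 0), (40, 0), (55, 0), (90, 0)],
                     image_center=(0, 0), ring_edges=[0, 50, 100])
```

The patch at distance 40 falls into both widened rings, illustrating the intended overlap between neighbouring groups.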
For a single group, a series of basis elements may be defined as follows. For each patch in the group, matching blur kernels are generated by placing a single delta peak inside the blur kernel and then mirroring the kernel with respect to the line l_r (see figure 3). For patches not in the current group, i.e., outside the current ring, the corresponding local blur kernels are zero. This generation process creates basis elements that fulfill the symmetry properties listed above. To increase the smoothness of the basis and avoid effects due to pixelization, small Gaussian blurs (standard deviation 0.5 pixels) are placed instead of delta peaks.
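The construction of one such kernel (a small Gaussian peak plus its mirror image about the line l_r) can be sketched like this; parameterizing l_r by an angle through the kernel centre is an assumption made for illustration:

```python
import numpy as np

def mirrored_gaussian_kernel(size, offset, axis_angle, sigma=0.5):
    """Place a Gaussian blob of std `sigma` at `offset` (x, y) from the kernel
    centre, add its reflection about the line through the centre at angle
    `axis_angle` (standing in for l_r), and normalise to sum to one."""
    c = (size - 1) / 2.0
    u = np.array([np.cos(axis_angle), np.sin(axis_angle)])  # direction of l_r
    p = np.asarray(offset, dtype=float)
    p_mirror = 2.0 * np.dot(p, u) * u - p    # reflect the offset about l_r
    yy, xx = np.mgrid[0:size, 0:size]
    k = np.zeros((size, size))
    for q in (p, p_mirror):
        k += np.exp(-((xx - c - q[0]) ** 2 + (yy - c - q[1]) ** 2)
                    / (2.0 * sigma ** 2))
    return k / k.sum()

# a kernel for a patch whose line l_r is horizontal (axis_angle = 0)
K = mirrored_gaussian_kernel(15, offset=(3.0, 2.0), axis_angle=0.0)
```

By construction K is reflection symmetric about the horizontal axis, matching the local reflection symmetry property (a) above.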
The basis elements constrain possible blur kernels to fulfill the above symmetry and smoothness properties. However, the basis is overcomplete and direct projection on the basis is not possible. Therefore it is approximated with an orthonormal one.
To explain this step with matrices, each basis element may be reshaped into a column vector by vectorizing (operator vec) each local blur kernel b_k^{(r)} and stacking the results over all patches r:

b_k = \big[ [\mathrm{vec}\, b_k^{(1)}]^T \; \cdots \; [\mathrm{vec}\, b_k^{(R)}]^T \big]^T    (3)
Let B be the matrix containing the basis vectors b_1, ..., b_K as columns. Then the singular value decomposition (SVD) of B is

B = U S V^T    (4)

with S being a diagonal matrix containing the singular values of B. Figure 4 shows the SVD spectrum and the chosen cut-off for a typical basis matrix B, with approximately half of the singular values lying below numerical precision.
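This SVD-based reduction can be sketched in numpy with random stand-in data (toy dimensions; the real B would contain the vectorized basis elements of Eq. (3)):

```python
import numpy as np

rng = np.random.default_rng(0)
K, R, s = 40, 9, 7               # basis elements, patches, kernel size (toy values)
# Each column of B is one vectorized basis element: R stacked local kernels.
B = rng.random((R * s * s, K))
B[:, 1] = B[:, 0]                # make the basis deliberately redundant

U, S, Vt = np.linalg.svd(B, full_matrices=False)
keep = S > 1e-10 * S[0]          # cut off singular values at numerical precision
Xi = U[:, keep]                  # orthonormal basis (the matrix Xi)

assert np.allclose(Xi.T @ Xi, np.eye(int(keep.sum())))  # columns are orthonormal
assert np.allclose(Xi @ (Xi.T @ B), B)  # Xi spans the original basis elements
assert Xi.shape[1] < K           # redundancy removed
```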
The invention defines an orthonormal basis Ξ as a matrix that consists of the column vectors of U that correspond to large singular values, i.e., that contains the relevant left singular vectors of B. By chopping the column vectors of the orthonormal basis Ξ into shorter vectors, one per patch, and reshaping those back into blur kernels, an orthonormal basis ξ_k^{(r)} is obtained that is tailored to optical aberrations. This representation can be plugged into the forward model in Eq. (1),
y = a \circ x \quad \text{with} \quad a^{(r)} = \sum_{k=1}^{K} \mu_k \, \xi_k^{(r)}, \qquad \text{abbreviated as } y = \mu \circ x.    (5)
The resulting forward model is linear in the parameters μ. Having defined a PSF basis, blind deconvolution is performed by extending the method of Cho and Lee (Cho, S., Lee, S.: Fast Motion Deblurring. ACM Trans. Graph. 28, 2009) to the non-uniform blur model (5) of the invention. However, instead of considering only a gray-scale image during PSF estimation, the full color image is processed. This makes it possible to better address chromatic aberrations through an improved shock-filtering procedure tailored to color images: the color channels x_R, x_G and x_B are shock filtered separately but share the same sign expression, which depends only on the gray-scale image z:
x_R^{t+1} = x_R^t - \Delta t \cdot \mathrm{sign}(z^t_{\eta\eta}) \, |\nabla x_R^t|
x_G^{t+1} = x_G^t - \Delta t \cdot \mathrm{sign}(z^t_{\eta\eta}) \, |\nabla x_G^t| \qquad \text{with } z^t = (x_R^t + x_G^t + x_B^t)/3
x_B^{t+1} = x_B^t - \Delta t \cdot \mathrm{sign}(z^t_{\eta\eta}) \, |\nabla x_B^t|    (6)

where z_{\eta\eta} denotes the second derivative of z in the direction of the gradient. This extension takes all three color channels into account simultaneously. Figure 5 shows the reduction of color fringing on the example of Osher and Rudin (Osher, S., Rudin, L.: Feature-oriented image enhancement using shock filters. SIAM J. Numerical Analysis 27, 1990) adapted to three color channels. Combining the forward model y = μ ∘ x defined above and the chromatic shock filtering, the PSF parameters μ and the image x (initialized by y) are estimated by iterating over three steps:
(a) Prediction step: the current estimate x is first denoised with a bilateral filter; then edges are emphasized with chromatic shock filtering, and gradients in flat regions are set to zero (see [ ] for further details). The gradient selection is modified such that for every radius ring the strongest gradients are selected.
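The chromatic shock filtering of Eq. (6) used in this prediction step may be sketched as follows (a simplified numpy version; the exact derivative stencils and step size are assumptions, not the patent's discretization):

```python
import numpy as np

def chromatic_shock_step(x_rgb, dt=0.1):
    """One iteration of Eq. (6): each channel moves by its own gradient
    magnitude, but the sign term is shared and computed from the gray image z."""
    z = x_rgb.mean(axis=2)                       # z = (x_R + x_G + x_B) / 3
    zy, zx = np.gradient(z)
    zyy, zyx = np.gradient(zy)
    zxy, zxx = np.gradient(zx)
    # second derivative of z along the gradient direction (z_eta_eta)
    z_nn = (zx**2 * zxx + 2 * zx * zy * zxy + zy**2 * zyy) / (zx**2 + zy**2 + 1e-12)
    out = np.empty_like(x_rgb)
    for c in range(3):
        gy, gx = np.gradient(x_rgb[..., c])
        out[..., c] = x_rgb[..., c] - dt * np.sign(z_nn) * np.hypot(gx, gy)
    return out

# A blurred vertical step edge steepens after one iteration:
ramp = 1.0 / (1.0 + np.exp(-(np.arange(32) - 15.5)))   # sigmoid step profile
img = np.repeat(ramp[None, :], 16, axis=0)[..., None] * np.ones(3)
sharp = chromatic_shock_step(img)
assert np.diff(sharp[8, :, 0])[15] > np.diff(img[8, :, 0])[15]
```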
(b) PSF estimation: if the overcomplete basis B is used, one would like to find coefficients τ that minimize the regularized fit of the gradient images ∂y and ∂x,

\min_{\tau} \; \big\| \partial y - (B\tau) \circ \partial x \big\|^2 + \alpha \, \|B\tau\|_1 + \beta \, \|B\tau\|_2^2    (7)

where B^{(r)} is the matrix containing the basis elements for the r-th patch, so that B^{(r)}τ is the local blur kernel of that patch. Note that τ is the same for all patches. This optimization can be performed iteratively. The regularization parameters α and β are set to 0.1 and 0.01, respectively.
However, the iterations are costly, and one can speed things up by using the orthonormal basis Ξ. The blur is initially estimated unconstrained and then projected onto the orthonormal basis. In particular, the fit of the general EFF forward model (without the basis) is first minimized with an additional regularization term on the local blur kernels, i.e., one minimizes
\min_{a^{(1)}, \ldots, a^{(R)}} \; \big\| \partial y - a \circ \partial x \big\|^2 + \beta \sum_{r=1}^{R} \big\| a^{(r)} \big\|^2    (8)

This optimization problem is approximately minimized using a single step of direct deconvolution in Fourier space, i.e.,
a^{(r)} = Z_b^T F^{-1} \left[ \frac{ \overline{F Z_x E_r \, \partial x} \odot F Z_y C_r \, \partial y }{ \big| F Z_x E_r \, \partial x \big|^2 + \beta } \right]    (9)

where F denotes the discrete Fourier transform, and Z_b, Z_x, Z_y, C_r and E_r appropriate zero-padding and cropping matrices. |u| denotes the entry-wise absolute value of a complex vector u, ū its entry-wise complex conjugate. The fraction is implemented pixel-wise. Finally, the resulting unconstrained blur kernels a^{(r)} are projected onto the orthonormal basis Ξ, leading to the estimate of the blur parameters μ.
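The unconstrained estimation followed by projection can be sketched for a single patch with periodic convolution (a strong simplification of Eq. (9): no gradient images, no windowing or padding matrices; all names and sizes are illustrative):

```python
import numpy as np

def estimate_kernel_fourier(x, y, ksize, beta=0.01):
    """Unconstrained kernel estimate by one regularized division in Fourier space."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    a_full = np.real(np.fft.ifft2(np.conj(X) * Y / (np.abs(X) ** 2 + beta)))
    # crop the ksize x ksize kernel around the origin (wrap-around layout)
    a = np.roll(a_full, (ksize // 2, ksize // 2), axis=(0, 1))[:ksize, :ksize]
    return np.clip(a, 0.0, None)

rng = np.random.default_rng(1)
x = rng.random((64, 64))
true_a = np.zeros((5, 5)); true_a[2, 2], true_a[2, 3] = 0.6, 0.4
pad = np.zeros((64, 64)); pad[:5, :5] = true_a
pad = np.roll(pad, (-2, -2), axis=(0, 1))                     # center kernel at origin
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(pad)))  # periodic blur
a_hat = estimate_kernel_fourier(x, y, 5)
assert np.abs(a_hat - true_a).max() < 0.05   # recovered up to regularization

# The method would then project such unconstrained kernels onto the
# orthonormal basis: mu = Xi.T @ a_hat.ravel(); a_constrained = Xi @ mu.
```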
(c) Image estimation: for image estimation given the blurry image y and the blur parameters μ, Tikhonov regularization with γ = 0.01 is applied to the gradients of the latent image x, i.e., one minimizes

\min_x \; \big\| y - \mu \circ x \big\|^2 + \gamma \, \big\| \partial x \big\|^2    (10)
As shown in Hirsch et al. (Hirsch, M., Schuler, C., Harmeling, S., Scholkopf, B.: Fast removal of non-uniform camera shake. In: Proc. IEEE Intern. Conf. on Comput. Vision, 2011), this expression can be approximately minimized with respect to x using a single step of the following direct deconvolution:

x = \frac{1}{N} \odot \sum_{r=1}^{R} E_r^T F^{-1} \left[ \frac{ \overline{F Z_b \, a^{(r)}} \odot F C_r \, y }{ \big| F Z_b \, a^{(r)} \big|^2 + \gamma \, \big| F Z_l \, l \big|^2 } \right]    (11)

where l = [-1, 2, -1] denotes the discrete Laplace operator, F the discrete Fourier transform, and Z_b, Z_l, C_r and E_r appropriate zero-padding and cropping matrices. |u| denotes the entry-wise absolute value of a complex vector u, ū its entry-wise complex conjugate. The fraction is implemented pixel-wise. The normalization factor N accounts for artifacts at patch boundaries which originate from windowing.
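A single-patch, uniform-kernel sketch of this direct deconvolution (a simplification of Eq. (11) with periodic boundaries and without windowing, overlap-add or the factor N; the transfer function of the Laplacian l = [-1, 2, -1] supplies the Tikhonov weight on the gradients):

```python
import numpy as np

def direct_deconv(y, a, gamma=0.01):
    """One step of Tikhonov-regularized direct deconvolution in Fourier space."""
    h, w = y.shape
    A = np.fft.fft2(a, s=(h, w))
    # Fourier weight of the gradient penalty: 2 - 2*cos(theta) per axis,
    # i.e. the transfer function of the discrete Laplacian l = [-1, 2, -1].
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(h) / h)
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(w) / w)
    G = wy[:, None] + wx[None, :]
    X = np.conj(A) * np.fft.fft2(y) / (np.abs(A) ** 2 + gamma * G)
    return np.real(np.fft.ifft2(X))

# Blur a smooth image with a 3x3 box kernel (same periodic convention) and
# check that the deconvolved estimate is much closer to the original:
n = np.arange(64)
x = np.sin(2 * np.pi * 3 * n / 64)[:, None] * np.cos(2 * np.pi * 2 * n / 64)[None, :]
a = np.full((3, 3), 1.0 / 9.0)
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(a, s=x.shape)))
x_hat = direct_deconv(y, a)
assert np.abs(x_hat - x).mean() < 1e-3
assert np.abs(x_hat - x).mean() < np.abs(y - x).mean()
```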
The algorithm follows a coarse-to-fine approach. Having estimated the blur parameters μ, a non-uniform version of Krishnan and Fergus' approach (Krishnan, D., Fergus, R.: Fast image deconvolution using hyper-Laplacian priors. In: Advances in Neural Inform. Process. Syst., 2009; Hirsch, M., Schuler, C., Harmeling, S., Scholkopf, B.: Fast removal of non-uniform camera shake. In: Proc. IEEE Intern. Conf. on Comput. Vision, 2011) is used for the non-blind deconvolution to recover a high-quality estimate of the true image. For the x-subproblem, the direct deconvolution formula (11) is used.
The algorithm may be implemented on a Graphics Processing Unit (GPU) in Python using PyCUDA. All experiments were run on a 3.0 GHz Intel Xeon with an NVIDIA Tesla C2070 GPU with 6 GB of memory. The basis elements generated as previously detailed are orthogonalized using the SVDLIBC library (http://tedlab.mit.edu/~dr/SVDLIBC/). Calculating the SVD for the resulting large sparse matrices can require a few minutes of running time. However, the basis is independent of the image content, so the orthonormal basis can be computed once and reused. Table 1 reports the running times of the experiments for both PSF estimation and the final non-blind deconvolution, along with the EFF parameters and image dimensions. In particular, it shows that using the orthonormal basis instead of the overcomplete one improves the running times by a factor of about six to eight.
Table 1. (a) Image sizes, (b) size of the local blur kernels, (c) number of patches horizontally and vertically, (d) runtime of PSF estimation using the overcomplete basis B (see Eq. (7)), (e) runtime of PSF estimation using the orthonormal basis Ξ (see Eq. (8)) as used in this approach, (f) runtime of the final non-blind deconvolution.

Figures 6 to 10 compare results on real photos and provide a comprehensive comparison with other approaches for removing optical aberrations. Image sizes and blur parameters are shown in Table 1. Figure 6 compares an application of a method according to an embodiment of the invention with a semi-blind and a non-blind method according to the approach of Schuler et al. (Schuler, C., Hirsch, M., Harmeling, S., Scholkopf, B.: Non-stationary correction of optical aberrations. In: Proc. IEEE Intern. Conf. on Comput. Vision, 2011). Schuler et al. show deblurring results on images taken with a lens that consists of only a single element, thus exhibiting strong optical aberrations, in particular coma. Since their approach is non-blind, they measure the non-uniform PSF with a point source and apply non-blind deconvolution. In contrast, the inventive approach is blind and is applied directly to the blurry image. To better approximate the large blur of that lens, it is additionally assumed that the local blurs scale linearly with radial position, which can easily be incorporated into the basis generation scheme of the invention. For comparison, Photoshop's "Smart Sharpening" function is applied for removing lens blur. It depends on the blur size and the amount of blur, which are manually controlled by the user. Thus, this method may be called semi-blind since it assumes a parametric form. Even though its parameters were chosen carefully, the inventors were not able to obtain comparable results.
Comparing the inventive blind method against the non-blind approach of Schuler et al., it may be observed that the PSF estimated according to the invention matches their measured PSFs rather well (see figure 7). Surprisingly, however, the method of the invention delivers an image that may be considered sharper. The reason could be oversharpening or a less conservative regularization in the final deconvolution; it is also conceivable that the calibration procedure used by Schuler et al. is not sufficiently accurate. Neither DXO nor Kee et al.'s approach can be applied, lacking calibration data for this lens.
The PSF constraints we are considering assume local axial symmetry of the PSF with respect to the radial axis. For a Canon 24mm f/1.4 lens also used in Schuler et al., this is not exactly fulfilled, which can be seen in the inset in figure 8. The wings of the green blur do not have the same length. Nonetheless, the blind estimation of the invention with enforced symmetry still approximates the PSF shape well and yields a comparable quality of image correction. In contrast to Schuler et al., this was obtained blindly.
Figure 9 shows results on an image taken from Kee et al. The closeups reveal that Kee's non-blind approach is slightly superior in terms of sharpness and noise-robustness. However, the blind approach according to an embodiment of the invention better removes chromatic aberration. A general problem of methods relying on a prior calibration is that optical aberrations depend continuously on the wavelength of the transmitted light: an approximation with only a few (generally three) color channels therefore depends on the lighting of the scene and could change if there is a discrepancy between the calibration setup and a photo's lighting conditions. This is avoided with a blind approach.
The inventors also applied "DxO Optics Pro 7.2" to the blurry image. DXO uses a database for combinations of cameras/lenses. While it uses calibration data, it is not clear whether it additionally infers elements of the optical aberration from the image. For comparison, the photo was processed with the options "chromatic aberrations" and "DxO lens softness" set to their default values. The result is good and exhibits less noise than the other two approaches (see figure 9); however, it is not clear whether an additional denoising step is employed by the software.
The blind approach to removing optical aberrations according to the invention can also be applied to historical photos, where information about the lens is not available. The left column of figure 10 shows a photo (and some detail) from the Library of Congress archive that was taken around 1940. Assuming that the analog film used has a sufficiently linear light response, the inventors applied the blind lens correction method and obtained a sharper image. However, the blur appeared to be small, so algorithms like Adobe's "Smart Sharpen" also give reasonable results. Neither DXO nor Kee et al.'s approach can be applied here since lens data is not available.
In conclusion, the method according to the invention blindly removes spatially varying blur caused by imperfections in lens designs, including chromatic aberrations. Without relying on elaborate calibration procedures, results comparable to non-blind methods can be achieved. By creating a suitable orthonormal basis, the PSF is constrained to a class that exhibits the generic symmetry properties of lens blur, while fast PSF estimation is possible.
The skilled person will appreciate that the invention is not limited to the above-described embodiment. While it is useful to be able to infer the image blur from a single image, the blur does not change for photos taken with the same lens settings. On the one hand, this implies that the PSFs estimated for these settings can be transferred, for instance, to images where image prior assumptions are violated. On the other hand, it suggests the possibility of improving the quality of the PSF estimates by utilizing a substantial database of images. Finally, while optical aberrations are a major source of image degradation, a picture may also suffer from motion blur. By choosing a suitable basis, these two effects may be combined.

Claims

1. Computer-implemented method for recovering a digital image (x) from an image (y) acquired through an optical system, wherein the characteristics of the optical system are unknown, the method comprising the steps of:
estimating a point spread function (a) based on the acquired image (y);
estimating the recovered digital image (x), based on the estimated point spread function (a) and the acquired image (y); and
outputting the recovered digital image (x),
characterized in that the step of estimating the point spread function is blind.
2. The method according to claim 1, wherein the point spread function (a) is spatially varying.
3. The method according to claim 1 or 2, wherein the step of estimating the point spread function takes a possible reflection symmetry of the optical system into account.
4. The method according to claim 1 or 2, wherein the step of estimating the point spread function takes a possible rotation symmetry of the optical system into account.
5. The method according to claim 1 or 2, wherein saturated pixels in the observed digital image are at least partially disregarded when estimating the point spread function.
6. The method according to claim 1 , wherein the point spread function (a) further depends on a color channel of the acquired image (y).
7. The method according to claim 6, wherein all color channels of the recovered image (x) are estimated simultaneously.
8. The method according to claim 2, wherein
- the observed image is divided into patches;
- a multitude of local blur kernels (a^(r)) are estimated jointly, one for each patch; and
- the recovered digital image (x) is estimated based on the jointly estimated local blur kernels (a^(r)) and the observed image.
9. The method according to claim 8, wherein each local blur kernel (a^(r)) is locally reflection symmetric.
10. The method according to claim 9, wherein all local blur kernels (a^(r)) are globally rotation symmetric.
11. The method according to claim 10, wherein all local blur kernels (a^(r)) change smoothly along a line through the image center.
12. The method according to claim 11, wherein the maximum size of a blur kernel (a^(r)) is assumed to scale proportionally with its distance to the image center.
13. The method according to claim 8, further comprising the steps of
- removing noise by bilateral filtering; and
- emphasizing edges by shock filtering
of the acquired digital image (y).
14. The method according to claim 8, wherein all local blur kernels are defined in terms of the same orthogonal basis elements.
15. The method according to claim 1, wherein outputting the recovered digital image (x) comprises sending the recovered digital image (x) over an electronic communications network.
16. The method according to claim 1, wherein the point spread function (a) is estimated based on a multitude of images acquired with the same optical system.
17. Device for recovering a digital image (x) from an image (y) acquired through an optical system, wherein the characteristics of the optical system are unknown, the device comprising
a module for estimating a point spread function (a) based on the acquired image (y);
a module for estimating the recovered digital image (x), based on the estimated point spread function (a) and the acquired image (y); and
a module for outputting the recovered digital image (x);
wherein the device is adapted to execute a method according to claim 1.

18. Computer-readable medium, storing digital images recovered according to a method according to claim 1.
PCT/EP2012/068868 2012-09-25 2012-09-25 Method and device for blind correction of optical aberrations in a digital image WO2014048451A1 (en)

Non-Patent Citations

CHO, S.; LEE, S.: "Fast Motion Deblurring", ACM Trans. Graph., vol. 28, 2009
FERGUS, R.; SINGH, B.; HERTZMANN, A.; ROWEIS, S.; FREEMAN, W.: "Removing camera shake from a single photograph", ACM Trans. Graph., vol. 25, 2006
HIRSCH, M.; SCHULER, C.; HARMELING, S.; SCHOLKOPF, B.: "Fast removal of non-uniform camera shake", Proc. IEEE Intern. Conf. on Comput. Vision, 2011
JOSHI, N.; SZELISKI, R.; KRIEGMAN, D.: "PSF estimation using sharp edge prediction", Proc. IEEE Conf. Comput. Vision and Pattern Recognition, 2008, pages 1-8
KEE, E.; PARIS, S.; CHEN, S.; WANG, J.: "Modeling and removing spatially-varying optical blur", Proc. IEEE Int. Conf. Computational Photography, 2011
KRISHNAN, D.; FERGUS, R.: "Fast image deconvolution using hyper-Laplacian priors", Advances in Neural Inform. Process. Syst., 2009
KRISHNAN, D.; TAY, T.; FERGUS, R.: "Blind deconvolution using a normalized sparsity measure", Proc. IEEE Conf. Comput. Vision and Pattern Recognition, 2011
LEVIN, A.; WEISS, Y.; DURAND, F.; FREEMAN, W.: "Efficient marginal likelihood optimization in blind deconvolution", Proc. IEEE Conf. Comput. Vision and Pattern Recognition, 2011
MISKIN, J.; MACKAY, D.: "Ensemble learning for blind image separation and deconvolution", Advances in Independent Component Analysis, 2000
OSHER, S.; RUDIN, L.: "Feature-oriented image enhancement using shock filters", SIAM J. Numerical Analysis, vol. 27, 1990
SCHULER, C.; HIRSCH, M.; HARMELING, S.; SCHOLKOPF, B.: "Non-stationary correction of optical aberrations", Proc. IEEE Intern. Conf. on Comput. Vision, 2011
WHYTE, O.; SIVIC, J.; ZISSERMAN, A.; PONCE, J.: "Non-uniform deblurring for shaken images", Proc. IEEE Conf. Comput. Vision and Pattern Recognition, 2010
WHYTE, O. et al.: "Deblurring shaken and partially saturated images", IEEE Intern. Conf. on Comput. Vision Workshops, 2011, pages 745-752


Legal Events

121 (EP): the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 12775181; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122 (EP): PCT application non-entry in European phase (Ref document number: 12775181; Country of ref document: EP; Kind code of ref document: A1)